Both the finite element method (FEM) and the boundary element method (BEM) use interpolation to find a field solution i.e., the methods find the solution at a number of points in the domain of interest and then approximate the solution between these points using interpolation. The points at which the solution is found are known as nodes. Basis functions are used to interpolate the field between nodes within a subregion of the domain known as an element. Interpolation is achieved by mapping the field coordinate onto a local parametric, or $\xi$, coordinate (which varies from 0 to 1) within each element. The global nodes which make up each element are also mapped onto local element nodes, and the basis functions are chosen (in terms of polynomials of the local parametric coordinate) such that the interpolated field is equal to the known nodal values at each node and is thus continuous between elements. A schematic of this scheme is shown in Figure 2.1.
The following (Einstein) summation notation will be used throughout these notes. In order to eliminate summation symbols, repeated ``dummy'' indices are summed over i.e.,
$a_{i}x^{i} = a_{1}x^{1} + a_{2}x^{2} + \ldots + a_{N}x^{N}$ (2.1)
To indicate an index that is not summed, parentheses will be used i.e., $a_{(i)}x^{(i)}$ denotes the single expression for one particular value of $i$ e.g., $a_{(2)}x^{(2)} = a_{2}x^{2}$, etc.
In order to indicate a summation, the sum must occur over indices of different type i.e., the sum must be over an ``upper'' and a ``lower'' index or a ``lower'' and an ``upper'' index. Note that it may be useful to remember that if an index appears in the denominator of a fractional expression then its upper or lower character is ``reversed''.
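As a quick numerical illustration of the convention, a repeated dummy index implies a sum. The sketch below uses numpy's einsum to make the dummy-index sum explicit; the vectors are arbitrary illustrative data, not from the notes:

```python
import numpy as np

# Arbitrary illustrative 3-vectors.
a = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0, 6.0])

# The repeated ("dummy") index i is summed over: a_i x^i.
s = np.einsum("i,i->", a, x)
assert s == 1*4 + 2*5 + 3*6  # = 32
```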
For some quantities with both upper and lower indices a dot will be used to indicate the ``second'' index e.g., in the expression $A^{i}_{.j}$, $i$ can be considered the first index and $j$ the second index.
One important family of basis functions are the Lagrange basis functions. This family has one basis function for each of the local element nodes, and the functions are defined such that, at a particular node, only one basis function is nonzero and has the value of one. In this sense a basis function can be thought of as being associated with a local node and serves to weight the interpolated solution in terms of the field value at that node. Lagrange basis functions hence provide only $C^{0}$ continuity of the field variable across element boundaries.
The simplest basis functions of the Lagrange family are the one-dimensional linear Lagrange basis functions. These basis functions involve two local nodes and are defined as
$\varphi_{1}(\xi) = 1 - \xi, \qquad \varphi_{2}(\xi) = \xi$ (2.2)
The interpolation of a field variable, $u$, using these basis functions is given by
$u(\xi) = \varphi_{1}(\xi)u_{1} + \varphi_{2}(\xi)u_{2}$ (2.3)
where $u_{1}$ and $u_{2}$ are the values of $u$ at local nodes 1 and 2.
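A minimal sketch of linear Lagrange interpolation on the local coordinate, assuming the standard basis $\varphi_{1} = 1 - \xi$, $\varphi_{2} = \xi$ on $[0, 1]$ (the nodal values here are illustrative):

```python
def linear_lagrange(xi):
    """One-dimensional linear Lagrange basis functions on 0 <= xi <= 1."""
    return (1.0 - xi, xi)

def interpolate(u1, u2, xi):
    """u(xi) = phi_1(xi) u_1 + phi_2(xi) u_2."""
    phi1, phi2 = linear_lagrange(xi)
    return phi1 * u1 + phi2 * u2

# The interpolant equals the nodal values at the nodes and is linear between.
assert interpolate(3.0, 7.0, 0.0) == 3.0
assert interpolate(3.0, 7.0, 1.0) == 7.0
assert interpolate(3.0, 7.0, 0.5) == 5.0
```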
Lagrange basis functions can also be used to provide higher order variations; for example, the one-dimensional quadratic Lagrange basis functions involve three local nodes and can provide a quadratic variation of the field parameter with $\xi$. They are defined as
$\varphi_{1}(\xi) = 2(\xi - \tfrac{1}{2})(\xi - 1), \qquad \varphi_{2}(\xi) = 4\xi(1 - \xi), \qquad \varphi_{3}(\xi) = 2\xi(\xi - \tfrac{1}{2})$ (2.4)--(2.6)
In general the interpolation formula for the Lagrange family of basis functions is, using Einstein summation notation, given by
$u(\xi) = \varphi_{n}(\xi)u^{n}$ (2.7)
where $n$ ranges over the local nodes of the element.
Multi-dimensional Lagrange basis functions can be constructed from the tensor, or outer, products of the one-dimensional Lagrange basis functions. For example, the two-dimensional bilinear Lagrange basis functions have four local nodes with the basis functions given by
$\varphi_{1}(\xi_{1},\xi_{2}) = (1-\xi_{1})(1-\xi_{2}), \quad \varphi_{2}(\xi_{1},\xi_{2}) = \xi_{1}(1-\xi_{2}), \quad \varphi_{3}(\xi_{1},\xi_{2}) = (1-\xi_{1})\xi_{2}, \quad \varphi_{4}(\xi_{1},\xi_{2}) = \xi_{1}\xi_{2}$ (2.8)
The multi-dimensional interpolation formula is still a sum of the products of the nodal basis functions and the field values at the nodes. For example, the interpolated geometric position vector within an element is given by
$\vec{x}(\xi_{1},\xi_{2}) = \varphi_{n}(\xi_{1},\xi_{2})\,\vec{x}^{\,n}$ (2.9)
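A sketch of this position-vector interpolation, assuming a bilinear basis with the local node ordering 1:(0,0), 2:(1,0), 3:(0,1), 4:(1,1) (the rectangle geometry is illustrative):

```python
def bilinear_lagrange(xi1, xi2):
    """Four bilinear basis functions; local node ordering assumed to be
    1:(0,0), 2:(1,0), 3:(0,1), 4:(1,1)."""
    return ((1 - xi1) * (1 - xi2), xi1 * (1 - xi2), (1 - xi1) * xi2, xi1 * xi2)

def interp_position(nodes, xi1, xi2):
    """Interpolated 2-D position: sum over nodes of phi_n * x_n."""
    phis = bilinear_lagrange(xi1, xi2)
    return tuple(sum(phi * p[k] for phi, p in zip(phis, nodes)) for k in range(2))

nodes = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0), (2.0, 1.0)]
# The element centre maps to the centroid of this rectangle.
assert interp_position(nodes, 0.5, 0.5) == (1.0, 0.5)
```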
Hermitian basis functions preserve continuity of the derivative of the interpolating variable, i.e., $C^{1}$ continuity with respect to $\xi$, across element boundaries by defining additional nodal derivative parameters. Like Lagrange bases, Hermitian basis functions are also chosen so that, at a particular node, only one basis function is nonzero and equal to one. They are also chosen so that, at a particular node, the derivative of only one of the basis functions is nonzero and equal to one. Hermitian basis functions hence serve to weight the interpolated solution in terms of both the field value and the derivative of the field value at nodes.
Cubic Hermite basis functions are the simplest of the Hermitian family and involve two local nodes per element. The interpolation within each element is in terms of the nodal values $u_{n}$ and nodal derivatives $\left(\frac{du}{d\xi}\right)_{n}$ and is given by
$u(\xi) = \Psi_{1}^{0}(\xi)u_{1} + \Psi_{1}^{1}(\xi)\left(\dfrac{du}{d\xi}\right)_{1} + \Psi_{2}^{0}(\xi)u_{2} + \Psi_{2}^{1}(\xi)\left(\dfrac{du}{d\xi}\right)_{2}$ (2.10)
where the cubic Hermite basis functions are
$\Psi_{1}^{0}(\xi) = 1 - 3\xi^{2} + 2\xi^{3}, \quad \Psi_{1}^{1}(\xi) = \xi(\xi - 1)^{2}, \quad \Psi_{2}^{0}(\xi) = \xi^{2}(3 - 2\xi), \quad \Psi_{2}^{1}(\xi) = \xi^{2}(\xi - 1)$ (2.11)
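A sketch of cubic Hermite interpolation assuming the standard basis forms on $[0, 1]$ (nodal values and derivatives here are illustrative):

```python
def cubic_hermite(xi):
    """Psi_1^0, Psi_1^1, Psi_2^0, Psi_2^1 on 0 <= xi <= 1 (standard forms)."""
    return (1 - 3*xi**2 + 2*xi**3,
            xi * (xi - 1)**2,
            xi**2 * (3 - 2*xi),
            xi**2 * (xi - 1))

def hermite_interp(u1, du1, u2, du2, xi):
    """u(xi) weighted by nodal values and nodal xi-derivatives."""
    p10, p11, p20, p21 = cubic_hermite(xi)
    return p10*u1 + p11*du1 + p20*u2 + p21*du2

# The interpolant reproduces the nodal values exactly at xi = 0 and 1.
assert hermite_interp(1.0, 2.0, 4.0, -1.0, 0.0) == 1.0
assert hermite_interp(1.0, 2.0, 4.0, -1.0, 1.0) == 4.0
```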
One further step is required to make cubic Hermite basis functions useful in practice. Consider the two cubic Hermite elements shown in Figure 2.3.
The derivative $\frac{du}{d\xi}$ defined at a local node is dependent upon the local element $\xi$ coordinate and is therefore, in general, different in the two adjacent elements. Interpretation of the derivative is hence difficult, as two derivatives with the same magnitude in different parts of the mesh might represent two completely different physical derivatives. In order to have a consistent interpretation of the derivative throughout the mesh, it is better to base the interpolation on a physical derivative. Derivatives are therefore based on arc-length, $s$, at the nodes, and by the chain rule
$\dfrac{du}{d\xi} = \dfrac{du}{ds}\dfrac{ds}{d\xi}$
By interpolating with respect to $\frac{du}{ds}$ rather than with respect to $\frac{du}{d\xi}$ there is some liberty as to the choice of the element scale factor (the factor that stands in for $\frac{ds}{d\xi}$). The choice of the scale factor will, however, affect how $s$ changes with $\xi$. It is computationally desirable to have a relatively uniform change of $s$ with $\xi$ (for example, not biasing the Gaussian quadrature scheme, see later, to one end of the element). For this reason the element scale factor is chosen as some function of the arc-length of the element. The simplest such function is the arc-length itself. This type of scaling is called arc-length scaling.
To calculate the arc-length for a particular element an iterative process is needed. The arc-length for a one-dimensional element in two dimensions is defined as
$s = \displaystyle\int\limits_{0}^{1} \sqrt{\left(\dfrac{dx}{d\xi}\right)^{2} + \left(\dfrac{dy}{d\xi}\right)^{2}}\, d\xi$
However, since the interpolation of $\vec{x}$, as defined in Equation (2.13), uses the arc-length in the calculation of the scaling factor, an iterative root-finding technique is needed to obtain the arc-length.
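The iteration can be sketched as a fixed point: guess an arc-length, use it in the scale factors, integrate the resulting tangent magnitude by Gaussian quadrature, and repeat. This is an illustrative implementation (the 3-point rule and the straight test element are assumptions, not the notes' exact scheme):

```python
import math

# 3-point Gauss-Legendre quadrature mapped to [0, 1].
GAUSS = [(0.5 - math.sqrt(15) / 10, 5 / 18),
         (0.5, 8 / 18),
         (0.5 + math.sqrt(15) / 10, 5 / 18)]

def dpsi(xi):
    """Derivatives of the cubic Hermite basis functions."""
    return (6*xi**2 - 6*xi, 3*xi**2 - 4*xi + 1, 6*xi - 6*xi**2, 3*xi**2 - 2*xi)

def dxdxi(x1, x2, ds1, ds2, s, xi):
    """d(x)/d(xi) for one coordinate; arc-length derivatives scaled by s."""
    d = dpsi(xi)
    return d[0]*x1 + d[1]*ds1*s + d[2]*x2 + d[3]*ds2*s

def arclength(nodes, dnodes, tol=1e-12):
    """Fixed-point iteration s <- integral_0^1 |dx/dxi| dxi, where the
    integrand itself depends on s through the scale factors."""
    s = math.dist(nodes[0], nodes[1])  # initial guess: chord length
    for _ in range(100):
        total = 0.0
        for xi, w in GAUSS:
            dx = [dxdxi(nodes[0][k], nodes[1][k], dnodes[0][k], dnodes[1][k], s, xi)
                  for k in range(2)]
            total += w * math.hypot(dx[0], dx[1])
        if abs(total - s) < tol:
            return total
        s = total
    return s

# A straight element with unit arc-length tangents: the iteration should
# converge to the chord length.
s = arclength([(0.0, 0.0), (2.0, 0.0)], [(1.0, 0.0), (1.0, 0.0)])
assert abs(s - 2.0) < 1e-10
```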
Thus, for an element, the one-dimensional cubic Hermite interpolation formula in Equation (2.13) becomes
$u(\xi) = \Psi_{1}^{0}(\xi)u_{1} + \Psi_{1}^{1}(\xi)\left(\dfrac{du}{ds}\right)_{1}S_{1} + \Psi_{2}^{0}(\xi)u_{2} + \Psi_{2}^{1}(\xi)\left(\dfrac{du}{ds}\right)_{2}S_{2}$ (2.16)
where $S_{1}$ and $S_{2}$ are the scale factors at local nodes 1 and 2.
There is one final condition that must be placed on the $\xi$ to arc-length transformation to ensure true arc-length derivatives. This condition, based on the geometric properties of arc-lengths, is that the arc-length derivative vector at a node must have unit magnitude, that is, for a global node,
$\left\|\dfrac{d\vec{x}}{ds}\right\| = 1$
Bicubic Hermite basis functions are the two-dimensional extension of the one-dimensional cubic Hermite basis functions. They are formed from the tensor (or outer) product of two of the one-dimensional cubic Hermite basis functions defined in Equation (2.11). The interpolation formula for a point within an element is obtained from the bicubic Hermite interpolation formula,
As with one-dimensional cubic Hermite elements, the derivatives with respect to $\xi$ in the two-dimensional interpolation formula above are expressed as the product of a nodal arc-length derivative and a scale factor. This is, however, complicated by the fact that there are now multiple $\xi$ directions at each node. From the product rule, the transformation from a $\xi$-based derivative to an arc-length based derivative is given by
Now, by definition, each arc-length direction is a function of only one $\xi$ direction, hence the derivative at a local node is
Unlike the one-dimensional cubic Hermite case, a condition must be placed on this transformation in order to maintain continuity across element boundaries.
Consider the line between global nodes and in the two bicubic Hermite elements shown in Figure 2.5.
For derivative continuity between these elements the derivative with respect to $\xi$ along the shared boundary must be continuous^{2.1}. The formula for this derivative in each element along the boundary between the two elements is
Now substituting Equations (2.20) and (2.21) into the above equations yields, for each element,
(2.24) 
(2.25) 
Now the local nodes lying on the shared boundary of the two elements correspond to the same global nodes. Hence, for a given $\xi$ along the boundary, the condition for continuity across the element boundary is
Now, by choosing the scale factors to be equal on either side of each node (that is, nodal-based scale factors), Equation (2.27) is satisfied for any choice of the scale factors. Hence nodal scale factors are a sufficient condition to ensure continuity. If it is desired that the scale factors be different on either side of a node then Equation (2.27) must be satisfied to ensure continuity. The choice of the scale factors again determines the $\xi$ to $s$ spacing. Following on from the cubic Hermite elements, the scale factors are chosen to be nodally based and equal to the average arc-length on either side of the node for each $\xi$ direction.
One problem that arises when using quadrilateral elements (such as bicubic Hermite elements) to describe a surface is that it is impossible to ``close the surface'' in three dimensions whilst maintaining consistent $\xi_{1}$ and $\xi_{2}$ directions throughout the mesh. This is important as continuity requires either consistent $\xi$ directions or a transformation at each node to take into account the inconsistent directions.
One solution to this problem is to collapse a bicubic Hermite element. This entails placing one of the four local nodes of the element at the same geometric location as another local node of the element, and results in a triangular element from which it is possible to close the surface. There are two main problems with this solution. The first is that one of the two $\xi$ directions at the collapsed node is undefined. The second is that the distance between the two nodes at the same location is zero, and numerical problems can result from this zero distance. An alternative strategy has been developed in which special elements, called ``Hermite-sector'' elements, are used to close a bicubic Hermite surface in three dimensions. There are two types of elements depending on whether the $\xi$ directions come together at local node one or local node three. These two elements are shown in Figure 2.6.
From Figure 2.6 it can be seen that the $\xi$ direction is not unique at the apex nodes. This gives two choices for the interpolation within the element: ignore the derivative when interpolating, or set the derivative identically to zero.
Ignore apex derivative: For this case it can be seen from Figure 2.6 that the interpolation in the $\xi_{1}$ direction is just the standard cubic Hermite interpolation. The interpolation in the $\xi_{2}$ direction is now a little different in that the nodal arc-length derivative has been dropped, as it is no longer defined at the apex node. For the apex node one element shown in Figure 2.6(a) the interpolation along the line from local node one is now quadratic and is given by
For the apex node three element shown in Figure 2.6(b) the interpolation for the line connecting to local node three is given by
The full interpolation formula for the sector element can then be found by taking the tensor product of the interpolation in the $\xi_{1}$ direction, given in Equation (2.10), with the interpolation in the $\xi_{2}$ direction (given by either Equation (2.29) or (2.31)). The interpolation formula can be converted from nodal $\xi$ derivatives to nodal arc-length derivatives using the procedure outlined for the bicubic Hermite case. For example, the interpolation formula for an apex node one element is
Care must be taken when using Hermite-sector elements for rapidly changing surfaces. Consider an apex node one element with undefined apex derivatives. The rate of change of $\vec{x}$ along the line from local node one to local node three is
(2.32) 
Taking the dot product of with gives
The normality constraint for arc-length derivatives means that the right hand side of Equation (2.35), once normalised, is the quadratic
This quadratic always has a root at one end of the interval. Consider the case of this quadratic having its second root inside the interval. This would mean that at some point in the interval the dot product would go from zero to negative and then positive i.e., the angle between the two tangent vectors would, at some stage, be greater than ninety degrees. As the direction of the normal to the surface along the line between local nodes one and three is given by the cross product of these tangents then, if the quadratic became sufficiently negative, the normal to the surface could reverse direction from an outward to an inward normal. This is clearly undesirable. In fact, even if the quadratic is only slightly negative the resulting surface will be grossly deformed.
To avoid these effects the second root of the quadratic must lie outside the interval. From the quadratic formula the conditions for this are
(2.34) 
(2.35) 
(2.36) 
The simplest way to interpret this constraint is that if the element is large then the derivative term must be small. The simplest way for this to happen is to ensure that the magnitudes of its components are small (or of opposite sign to the comparable components).
The equivalent interpolation formula to Equation (2.33) for an apex node three Hermite-sector element is
(2.38) 
Zero apex derivative: For this case the sector basis functions are just the cubic Hermite basis functions. The corresponding interpolation formula for an apex node one element is hence
(2.39) 
(2.40) 
(2.41) 
(2.42) 
Although the Hermite-sector basis functions in which the apex node derivatives are identically zero have an increased limit on the cross-derivative constraints, they have the problem that, as all derivatives vanish at the apex, any interpolated function has a zero Hessian at the apex. As this can cause numerical problems, the Hermite-sector basis functions which have an undefined derivative are used in this thesis.
Simplex basis functions and their derivatives are evaluated with respect to external coordinates.
For simplex line elements there are two area coordinates which are functions of $\xi$ i.e.,
$L_{1} = 1 - \xi$ (2.43)
$L_{2} = \xi$ (2.44)
The derivatives with respect to the external coordinates are then given by
$\dfrac{\partial L_{1}}{\partial \xi} = -1$ (2.45)
$\dfrac{\partial L_{2}}{\partial \xi} = 1$ (2.46)
For simplex triangle elements there are three area coordinates which are functions of $\xi_{1}$ and $\xi_{2}$ i.e.,
$L_{1} = 1 - \xi_{1} - \xi_{2}$ (2.47)
$L_{2} = \xi_{1}$ (2.48)
$L_{3} = \xi_{2}$ (2.49)
The derivatives with respect to the external coordinates are then given by
$\dfrac{\partial L_{1}}{\partial \xi_{1}} = \dfrac{\partial L_{1}}{\partial \xi_{2}} = -1, \qquad \dfrac{\partial L_{2}}{\partial \xi_{1}} = 1, \qquad \dfrac{\partial L_{2}}{\partial \xi_{2}} = 0, \qquad \dfrac{\partial L_{3}}{\partial \xi_{1}} = 0, \qquad \dfrac{\partial L_{3}}{\partial \xi_{2}} = 1$ (2.50)--(2.54)
For simplex tetrahedral elements there are four area coordinates which are functions of $\xi_{1}$, $\xi_{2}$ and $\xi_{3}$ i.e.,
$L_{1} = 1 - \xi_{1} - \xi_{2} - \xi_{3}$ (2.55)
$L_{2} = \xi_{1}$ (2.56)
$L_{3} = \xi_{2}$ (2.57)
$L_{4} = \xi_{3}$ (2.58)
The derivatives with respect to the external coordinates are then given by
$\dfrac{\partial L_{1}}{\partial \xi_{1}} = \dfrac{\partial L_{1}}{\partial \xi_{2}} = \dfrac{\partial L_{1}}{\partial \xi_{3}} = -1, \qquad \dfrac{\partial L_{i+1}}{\partial \xi_{j}} = \delta_{ij} \quad (i, j = 1, 2, 3)$ (2.59)--(2.69)
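The triangle case above can be sketched numerically; this uses one common convention ($L_{1} = 1 - \xi_{1} - \xi_{2}$, $L_{2} = \xi_{1}$, $L_{3} = \xi_{2}$), which is an assumption for illustration:

```python
def area_coords(xi1, xi2):
    """Triangle area coordinates, using one common convention:
    L1 = 1 - xi1 - xi2, L2 = xi1, L3 = xi2."""
    return (1.0 - xi1 - xi2, xi1, xi2)

# The three area coordinates always sum to one...
L = area_coords(0.2, 0.3)
assert abs(sum(L) - 1.0) < 1e-15
# ...and their derivatives with respect to the external coordinates are
# constant, e.g. dL1/dxi1 = -1 (checked by a forward difference).
h = 1e-7
dL1 = (area_coords(0.2 + h, 0.3)[0] - area_coords(0.2, 0.3)[0]) / h
assert abs(dL1 + 1.0) < 1e-6
```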
Now, if we have a vector $\vec{v}$, we can write
$\vec{v} = v^{i}\vec{g}_{i}$ (2.70)
Similarly, the vector can also be written as
$\vec{v} = v_{i}\vec{g}^{i}$ (2.71)
We now note that
$v^{i} = \vec{v}\cdot\vec{g}^{i}$ (2.72)
$v_{i} = \vec{v}\cdot\vec{g}_{i}$ (2.73)
Metric tensors are the inner products of base vectors. If $\vec{g}_{i}$ are the covariant base vectors then the covariant metric tensor is given by
$g_{ij} = \vec{g}_{i}\cdot\vec{g}_{j}$ (2.74)
Similarly, if $\vec{g}^{i}$ are the contravariant base vectors then the contravariant metric tensor is given by
$g^{ij} = \vec{g}^{i}\cdot\vec{g}^{j}$ (2.75)
We can also form a mixed metric tensor from the dot product of a contravariant and a covariant base vector i.e.,
$g^{i}_{.j} = \vec{g}^{i}\cdot\vec{g}_{j} = \delta^{i}_{j}$ (2.76)
$g_{i}^{.j} = \vec{g}_{i}\cdot\vec{g}^{j} = \delta_{i}^{j}$ (2.77)
Note that for mixed tensors the ``.'' indicates the order of the indices i.e., $A^{i}_{.j}$ indicates that the first index is contravariant and the second index is covariant, whereas $A_{i}^{.j}$ indicates that the first index is covariant and the second index is contravariant.
If the base vectors are all mutually orthogonal, of unit length and constant then $g_{ij} = \delta_{ij}$ and $g^{ij} = \delta^{ij}$.
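The base-vector and metric-tensor definitions can be checked numerically. The sketch below assumes cylindrical polar coordinates as the example system and computes the base vectors by central differences (for illustration only; analytic derivatives would normally be used):

```python
import numpy as np

def covariant_basis(q, h=1e-6):
    """Covariant base vectors g_i = dx/dq^i for cylindrical polar
    coordinates q = (r, theta, z), by central differences."""
    def x(q):
        r, th, z = q
        return np.array([r * np.cos(th), r * np.sin(th), z])
    g = np.empty((3, 3))
    for i in range(3):
        dq = np.zeros(3)
        dq[i] = h
        g[i] = (x(q + dq) - x(q - dq)) / (2 * h)
    return g  # row i is g_i

q = np.array([2.0, 0.3, 1.0])
g = covariant_basis(q)
g_cov = g @ g.T                # covariant metric g_ij = g_i . g_j
g_con = np.linalg.inv(g_cov)   # contravariant metric g^ij

# For cylindrical polars g_ij = diag(1, r^2, 1), and the mixed metric
# tensor g^i_j is the Kronecker delta.
assert np.allclose(g_cov, np.diag([1.0, 4.0, 1.0]), atol=1e-6)
assert np.allclose(g_con @ g_cov, np.eye(3), atol=1e-6)
```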
The metric tensors generalise (Euclidean) distance i.e.,
$ds^{2} = g_{ij}\,dx^{i}\,dx^{j}$ (2.78)
Note that multiplying by the covariant metric tensor lowers indices and multiplying by the contravariant metric tensor raises indices i.e.,
$a_{i} = g_{ij}a^{j}$ (2.79)
$a^{i} = g^{ij}a_{j}$ (2.80)
$g^{ik}g_{kj} = \delta^{i}_{j}$ (2.81)
The transformation rules for tensors in going from an $x$ coordinate system to a $\bar{x}$ coordinate system are as follows:
For a covariant vector (a rank (0,1) tensor)
$\bar{a}_{i} = \dfrac{\partial x^{j}}{\partial \bar{x}^{i}}\,a_{j}$ (2.82)
For a contravariant vector (a rank (1,0) tensor)
$\bar{a}^{i} = \dfrac{\partial \bar{x}^{i}}{\partial x^{j}}\,a^{j}$ (2.83)
For a covariant tensor (a rank (0,2) tensor)
$\bar{A}_{ij} = \dfrac{\partial x^{k}}{\partial \bar{x}^{i}}\dfrac{\partial x^{l}}{\partial \bar{x}^{j}}\,A_{kl}$ (2.84)
For a contravariant tensor (a rank (2,0) tensor)
$\bar{A}^{ij} = \dfrac{\partial \bar{x}^{i}}{\partial x^{k}}\dfrac{\partial \bar{x}^{j}}{\partial x^{l}}\,A^{kl}$ (2.85)
and for mixed tensors (rank (1,1) tensors)
$\bar{A}^{i}_{.j} = \dfrac{\partial \bar{x}^{i}}{\partial x^{k}}\dfrac{\partial x^{l}}{\partial \bar{x}^{j}}\,A^{k}_{.l}$ (2.86)
$\bar{A}_{i}^{.j} = \dfrac{\partial x^{k}}{\partial \bar{x}^{i}}\dfrac{\partial \bar{x}^{j}}{\partial x^{l}}\,A_{k}^{.l}$ (2.87)
We note that a scalar quantity $\phi$ has derivatives
$\dfrac{\partial \phi}{\partial x^{i}}$ (2.88)
Or, more formally, the covariant derivative (denoted by a subscripted semicolon) of a rank 0 tensor is
$\phi_{;i} = \dfrac{\partial \phi}{\partial x^{i}}$ (2.89)
The derivatives of a vector are given by
$\dfrac{\partial \vec{v}}{\partial x^{j}} = \dfrac{\partial (v^{i}\vec{g}_{i})}{\partial x^{j}} = \dfrac{\partial v^{i}}{\partial x^{j}}\vec{g}_{i} + v^{i}\dfrac{\partial \vec{g}_{i}}{\partial x^{j}}$ (2.90)
Now introducing the notation of the Christoffel symbols of the second kind, $\Gamma^{k}_{ij}$,
$\dfrac{\partial \vec{g}_{i}}{\partial x^{j}} = \Gamma^{k}_{ij}\vec{g}_{k}$ (2.91)
Note that the Christoffel symbols of the first kind are given by
$\Gamma_{ijk} = g_{kl}\Gamma^{l}_{ij}$ (2.92)
Note that
$\Gamma^{k}_{ij} = g^{kl}\Gamma_{ijl}$ (2.93)
The Christoffel symbols of the first kind are also given by
$\Gamma_{ijk} = \vec{g}_{k}\cdot\dfrac{\partial \vec{g}_{i}}{\partial x^{j}}$ (2.94)
$\Gamma_{ijk} = \dfrac{1}{2}\left(\dfrac{\partial g_{ik}}{\partial x^{j}} + \dfrac{\partial g_{jk}}{\partial x^{i}} - \dfrac{\partial g_{ij}}{\partial x^{k}}\right)$ (2.95)
Note that Christoffel symbols are not tensors, and they have the following transformation laws from $x$ to $\bar{x}$ coordinates
$\bar{\Gamma}^{a}_{bc} = \dfrac{\partial \bar{x}^{a}}{\partial x^{i}}\dfrac{\partial x^{j}}{\partial \bar{x}^{b}}\dfrac{\partial x^{k}}{\partial \bar{x}^{c}}\,\Gamma^{i}_{jk} + \dfrac{\partial \bar{x}^{a}}{\partial x^{i}}\dfrac{\partial^{2} x^{i}}{\partial \bar{x}^{b}\partial \bar{x}^{c}}$ (2.96)
$\bar{\Gamma}_{abc} = \dfrac{\partial x^{i}}{\partial \bar{x}^{a}}\dfrac{\partial x^{j}}{\partial \bar{x}^{b}}\dfrac{\partial x^{k}}{\partial \bar{x}^{c}}\,\Gamma_{ijk} + g_{ij}\dfrac{\partial^{2} x^{i}}{\partial \bar{x}^{a}\partial \bar{x}^{b}}\dfrac{\partial x^{j}}{\partial \bar{x}^{c}}$ (2.97)
We can now write (BELOW SEEMS WRONG - CHECK)
$\dfrac{\partial \vec{v}}{\partial x^{j}} = \left(\dfrac{\partial v^{i}}{\partial x^{j}} + \Gamma^{i}_{kj}v^{k}\right)\vec{g}_{i} = v^{i}_{;j}\vec{g}_{i}$ (2.98)
The covariant derivative of a contravariant (rank (1,0)) tensor is
$v^{i}_{;j} = \dfrac{\partial v^{i}}{\partial x^{j}} + \Gamma^{i}_{jk}v^{k}$ (2.99)
and the covariant derivative of a covariant (rank (0,1)) tensor is
$v_{i;j} = \dfrac{\partial v_{i}}{\partial x^{j}} - \Gamma^{k}_{ij}v_{k}$ (2.100)
The covariant derivatives of rank 2 tensors are
$A^{ij}_{;k} = \dfrac{\partial A^{ij}}{\partial x^{k}} + \Gamma^{i}_{kl}A^{lj} + \Gamma^{j}_{kl}A^{il}$ (2.101)
$A_{ij;k} = \dfrac{\partial A_{ij}}{\partial x^{k}} - \Gamma^{l}_{ik}A_{lj} - \Gamma^{l}_{jk}A_{il}$ (2.102)
$A^{i}_{.j;k} = \dfrac{\partial A^{i}_{.j}}{\partial x^{k}} + \Gamma^{i}_{kl}A^{l}_{.j} - \Gamma^{l}_{jk}A^{i}_{.l}$ (2.103)
For tensor equations to hold in any coordinate system the equations must involve tensor quantities i.e., covariant derivatives rather than partial derivatives.
As the covariant derivative of a scalar is just the partial derivative, the gradient of a scalar function using covariant derivatives is
$\operatorname{grad}\phi = \nabla\phi = \phi_{;i}\vec{g}^{i}$ (2.104)
with components
$(\nabla\phi)_{i} = \dfrac{\partial \phi}{\partial x^{i}}$ (2.105)
The divergence of a vector using covariant derivatives is
$\operatorname{div}\vec{v} = \nabla\cdot\vec{v} = v^{i}_{;i} = \dfrac{1}{\sqrt{g}}\dfrac{\partial\left(\sqrt{g}\,v^{i}\right)}{\partial x^{i}}$ (2.106)
where $g = \det(g_{ij})$. The curl of a vector using covariant derivatives is
$\left(\operatorname{curl}\vec{v}\right)^{i} = \left(\nabla\times\vec{v}\right)^{i} = \epsilon^{ijk}v_{k;j}$ (2.107)
The Laplacian of a scalar using covariant derivatives is
$\nabla^{2}\phi = \operatorname{div}\operatorname{grad}\phi = \dfrac{1}{\sqrt{g}}\dfrac{\partial}{\partial x^{i}}\left(\sqrt{g}\,g^{ij}\dfrac{\partial \phi}{\partial x^{j}}\right)$ (2.108)
The Laplacian of a vector using covariant derivatives is
$\nabla^{2}\vec{v} = \operatorname{grad}\operatorname{div}\vec{v} - \operatorname{curl}\operatorname{curl}\vec{v}$ (2.109)
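The scalar Laplacian formula can be verified numerically: a sketch for plane polar coordinates (an assumed example system), applying $\frac{1}{\sqrt{g}}\partial_{i}(\sqrt{g}\,g^{ij}\partial_{j}u)$ by central differences to $u = r^{2} = x^{2} + y^{2}$, whose Laplacian is 4 everywhere:

```python
import numpy as np

def laplacian_polar(u, q, h=1e-4):
    """(1/sqrt(g)) d_i ( sqrt(g) g^ij d_j u ) for plane polar coordinates,
    where sqrt(g) = r and g^ij = diag(1, 1/r^2); derivatives taken by
    central differences (illustration only)."""
    def flux(q, i):
        # sqrt(g) g^ii d_i u at point q (the metric is diagonal here).
        r = q[0]
        ginv_ii = (1.0, 1.0 / r**2)[i]
        dq = np.zeros(2)
        dq[i] = h
        du = (u(q + dq) - u(q - dq)) / (2 * h)
        return r * ginv_ii * du
    total = 0.0
    for i in range(2):
        dq = np.zeros(2)
        dq[i] = h
        total += (flux(q + dq, i) - flux(q - dq, i)) / (2 * h)
    return total / q[0]  # divide by sqrt(g) = r

# u = r^2 (i.e. x^2 + y^2) has Laplacian 4 everywhere.
val = laplacian_polar(lambda q: q[0]**2, np.array([1.5, 0.4]))
assert abs(val - 4.0) < 1e-6
```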
The Laplacian of a contravariant (rank (1,0)) tensor is
$\nabla^{2}v^{i} = g^{jk}v^{i}_{;jk}$ (2.110)
$= g^{jk}\left(\dfrac{\partial v^{i}_{;j}}{\partial x^{k}} + \Gamma^{i}_{kl}v^{l}_{;j} - \Gamma^{l}_{jk}v^{i}_{;l}\right)$ (2.111)
For rectangular Cartesian coordinates $(x, y, z)$, the base vectors with respect to the global coordinate system are
$\vec{g}_{1} = (1, 0, 0), \quad \vec{g}_{2} = (0, 1, 0), \quad \vec{g}_{3} = (0, 0, 1)$ (2.112)
The covariant metric tensor is
$g_{ij} = \operatorname{diag}(1, 1, 1)$ (2.113)
and the contravariant metric tensor is
$g^{ij} = \operatorname{diag}(1, 1, 1)$ (2.114)
The Christoffel symbols of the second kind are all zero.
The global coordinates with respect to the cylindrical polar coordinates $(r, \theta, z)$ are defined by
$x = r\cos\theta, \quad y = r\sin\theta, \quad z = z$ (2.115)
The base vectors with respect to the global coordinate system are
$\vec{g}_{1} = (\cos\theta, \sin\theta, 0), \quad \vec{g}_{2} = (-r\sin\theta, r\cos\theta, 0), \quad \vec{g}_{3} = (0, 0, 1)$ (2.116)
The covariant metric tensor is
$g_{ij} = \operatorname{diag}(1, r^{2}, 1)$ (2.117)
and the contravariant metric tensor is
$g^{ij} = \operatorname{diag}\left(1, \dfrac{1}{r^{2}}, 1\right)$ (2.118)
The nonzero Christoffel symbols of the second kind are
$\Gamma^{1}_{22} = -r$ (2.119)
$\Gamma^{2}_{12} = \Gamma^{2}_{21} = \dfrac{1}{r}$ (2.120)
The global coordinates with respect to the spherical polar coordinates $(r, \theta, \phi)$ are defined by
$x = r\sin\theta\cos\phi, \quad y = r\sin\theta\sin\phi, \quad z = r\cos\theta$ (2.121)
The base vectors with respect to the global coordinate system are
$\vec{g}_{1} = (\sin\theta\cos\phi, \sin\theta\sin\phi, \cos\theta), \quad \vec{g}_{2} = (r\cos\theta\cos\phi, r\cos\theta\sin\phi, -r\sin\theta), \quad \vec{g}_{3} = (-r\sin\theta\sin\phi, r\sin\theta\cos\phi, 0)$ (2.122)
The covariant metric tensor is
$g_{ij} = \operatorname{diag}(1, r^{2}, r^{2}\sin^{2}\theta)$ (2.123)
and the contravariant metric tensor is
$g^{ij} = \operatorname{diag}\left(1, \dfrac{1}{r^{2}}, \dfrac{1}{r^{2}\sin^{2}\theta}\right)$ (2.124)
The nonzero Christoffel symbols of the second kind are
$\Gamma^{1}_{22} = -r$ (2.125)
$\Gamma^{1}_{33} = -r\sin^{2}\theta$ (2.126)
$\Gamma^{2}_{12} = \Gamma^{2}_{21} = \dfrac{1}{r}$ (2.127)
$\Gamma^{2}_{33} = -\sin\theta\cos\theta$ (2.128)
$\Gamma^{3}_{13} = \Gamma^{3}_{31} = \dfrac{1}{r}$ (2.129)
$\Gamma^{3}_{23} = \Gamma^{3}_{32} = \cot\theta$ (2.130)
The global coordinates with respect to the prolate spheroidal coordinates $(\lambda, \mu, \theta)$ with focus parameter $d$ are defined by
$x = d\cosh\lambda\cos\mu, \quad y = d\sinh\lambda\sin\mu\cos\theta, \quad z = d\sinh\lambda\sin\mu\sin\theta$ (2.131)
The base vectors with respect to the global coordinate system are
$\vec{g}_{1} = d(\sinh\lambda\cos\mu,\ \cosh\lambda\sin\mu\cos\theta,\ \cosh\lambda\sin\mu\sin\theta), \quad \vec{g}_{2} = d(-\cosh\lambda\sin\mu,\ \sinh\lambda\cos\mu\cos\theta,\ \sinh\lambda\cos\mu\sin\theta), \quad \vec{g}_{3} = d(0,\ -\sinh\lambda\sin\mu\sin\theta,\ \sinh\lambda\sin\mu\cos\theta)$ (2.132)
The covariant metric tensor is
$g_{ij} = \operatorname{diag}\left(d^{2}(\sinh^{2}\lambda + \sin^{2}\mu),\ d^{2}(\sinh^{2}\lambda + \sin^{2}\mu),\ d^{2}\sinh^{2}\lambda\sin^{2}\mu\right)$ (2.133)
and the contravariant metric tensor is its inverse,
$g^{ij} = \operatorname{diag}\left(\dfrac{1}{d^{2}(\sinh^{2}\lambda + \sin^{2}\mu)},\ \dfrac{1}{d^{2}(\sinh^{2}\lambda + \sin^{2}\mu)},\ \dfrac{1}{d^{2}\sinh^{2}\lambda\sin^{2}\mu}\right)$ (2.134)
With $\Phi = \sinh^{2}\lambda + \sin^{2}\mu$, the nonzero Christoffel symbols of the second kind are
$\Gamma^{1}_{11} = \dfrac{\sinh\lambda\cosh\lambda}{\Phi}$ (2.135)
$\Gamma^{1}_{12} = \dfrac{\sin\mu\cos\mu}{\Phi}$ (2.136)
$\Gamma^{1}_{22} = -\dfrac{\sinh\lambda\cosh\lambda}{\Phi}$ (2.137)
$\Gamma^{1}_{33} = -\dfrac{\sinh\lambda\cosh\lambda\sin^{2}\mu}{\Phi}$ (2.138)
$\Gamma^{2}_{11} = -\dfrac{\sin\mu\cos\mu}{\Phi}$ (2.139)
$\Gamma^{2}_{12} = \dfrac{\sinh\lambda\cosh\lambda}{\Phi}$ (2.140)
$\Gamma^{2}_{22} = \dfrac{\sin\mu\cos\mu}{\Phi}$ (2.141)
$\Gamma^{2}_{33} = -\dfrac{\sinh^{2}\lambda\sin\mu\cos\mu}{\Phi}$ (2.142)
$\Gamma^{3}_{13} = \coth\lambda$ (2.143)
$\Gamma^{3}_{23} = \cot\mu$ (2.144)
The global coordinates with respect to the oblate spheroidal coordinates $(\lambda, \mu, \theta)$ with focus parameter $d$ are defined by
$x = d\cosh\lambda\cos\mu\cos\theta, \quad y = d\cosh\lambda\cos\mu\sin\theta, \quad z = d\sinh\lambda\sin\mu$ (2.145)
The base vectors with respect to the global coordinate system are
$\vec{g}_{1} = d(\sinh\lambda\cos\mu\cos\theta,\ \sinh\lambda\cos\mu\sin\theta,\ \cosh\lambda\sin\mu), \quad \vec{g}_{2} = d(-\cosh\lambda\sin\mu\cos\theta,\ -\cosh\lambda\sin\mu\sin\theta,\ \sinh\lambda\cos\mu), \quad \vec{g}_{3} = d(-\cosh\lambda\cos\mu\sin\theta,\ \cosh\lambda\cos\mu\cos\theta,\ 0)$ (2.146)
The covariant metric tensor is
$g_{ij} = \operatorname{diag}\left(d^{2}(\sinh^{2}\lambda + \sin^{2}\mu),\ d^{2}(\sinh^{2}\lambda + \sin^{2}\mu),\ d^{2}\cosh^{2}\lambda\cos^{2}\mu\right)$ (2.147)
and the contravariant metric tensor is its inverse,
$g^{ij} = \operatorname{diag}\left(\dfrac{1}{d^{2}(\sinh^{2}\lambda + \sin^{2}\mu)},\ \dfrac{1}{d^{2}(\sinh^{2}\lambda + \sin^{2}\mu)},\ \dfrac{1}{d^{2}\cosh^{2}\lambda\cos^{2}\mu}\right)$ (2.148)
With $\Phi = \sinh^{2}\lambda + \sin^{2}\mu$, the nonzero Christoffel symbols of the second kind are
$\Gamma^{1}_{11} = \dfrac{\sinh\lambda\cosh\lambda}{\Phi}$ (2.149)
$\Gamma^{1}_{12} = \dfrac{\sin\mu\cos\mu}{\Phi}$ (2.150)
$\Gamma^{1}_{22} = -\dfrac{\sinh\lambda\cosh\lambda}{\Phi}$ (2.151)
$\Gamma^{1}_{33} = -\dfrac{\sinh\lambda\cosh\lambda\cos^{2}\mu}{\Phi}$ (2.152)
$\Gamma^{2}_{11} = -\dfrac{\sin\mu\cos\mu}{\Phi}$ (2.153)
$\Gamma^{2}_{12} = \dfrac{\sinh\lambda\cosh\lambda}{\Phi}$ (2.154)
$\Gamma^{2}_{22} = \dfrac{\sin\mu\cos\mu}{\Phi}$ (2.155)
$\Gamma^{2}_{33} = \dfrac{\cosh^{2}\lambda\sin\mu\cos\mu}{\Phi}$ (2.156)
$\Gamma^{3}_{13} = \tanh\lambda$ (2.157)
$\Gamma^{3}_{23} = -\tan\mu$ (2.158)
The global coordinates with respect to the cylindrical (parabolic cylindrical) coordinates $(\mu, \nu, z)$ are defined by
$x = \dfrac{1}{2}(\mu^{2} - \nu^{2}), \quad y = \mu\nu, \quad z = z$ (2.159)
The base vectors with respect to the global coordinate system are
$\vec{g}_{1} = (\mu, \nu, 0), \quad \vec{g}_{2} = (-\nu, \mu, 0), \quad \vec{g}_{3} = (0, 0, 1)$ (2.160)
The covariant metric tensor is
$g_{ij} = \operatorname{diag}(\mu^{2} + \nu^{2},\ \mu^{2} + \nu^{2},\ 1)$ (2.161)
and the contravariant metric tensor is
$g^{ij} = \operatorname{diag}\left(\dfrac{1}{\mu^{2} + \nu^{2}},\ \dfrac{1}{\mu^{2} + \nu^{2}},\ 1\right)$ (2.162)
The nonzero Christoffel symbols of the second kind are
$\Gamma^{1}_{11} = \dfrac{\mu}{\mu^{2} + \nu^{2}}$ (2.163)
$\Gamma^{1}_{12} = \dfrac{\nu}{\mu^{2} + \nu^{2}}$ (2.164)
$\Gamma^{1}_{22} = -\dfrac{\mu}{\mu^{2} + \nu^{2}}$ (2.165)
$\Gamma^{2}_{11} = -\dfrac{\nu}{\mu^{2} + \nu^{2}}$ (2.166)
$\Gamma^{2}_{12} = \dfrac{\mu}{\mu^{2} + \nu^{2}}$ (2.167)
$\Gamma^{2}_{22} = \dfrac{\nu}{\mu^{2} + \nu^{2}}$ (2.168)
The global coordinates with respect to the (rotational) parabolic coordinates $(\mu, \nu, \theta)$ are defined by
$x = \mu\nu\cos\theta, \quad y = \mu\nu\sin\theta, \quad z = \dfrac{1}{2}(\mu^{2} - \nu^{2})$ (2.169)
The base vectors with respect to the global coordinate system are
$\vec{g}_{1} = (\nu\cos\theta, \nu\sin\theta, \mu), \quad \vec{g}_{2} = (\mu\cos\theta, \mu\sin\theta, -\nu), \quad \vec{g}_{3} = (-\mu\nu\sin\theta, \mu\nu\cos\theta, 0)$ (2.170)
The covariant metric tensor is
$g_{ij} = \operatorname{diag}(\mu^{2} + \nu^{2},\ \mu^{2} + \nu^{2},\ \mu^{2}\nu^{2})$ (2.171)
and the contravariant metric tensor is
$g^{ij} = \operatorname{diag}\left(\dfrac{1}{\mu^{2} + \nu^{2}},\ \dfrac{1}{\mu^{2} + \nu^{2}},\ \dfrac{1}{\mu^{2}\nu^{2}}\right)$ (2.172)
The nonzero Christoffel symbols of the second kind are
$\Gamma^{1}_{11} = \dfrac{\mu}{\mu^{2} + \nu^{2}}$ (2.173)
$\Gamma^{1}_{12} = \dfrac{\nu}{\mu^{2} + \nu^{2}}$ (2.174)
$\Gamma^{1}_{22} = -\dfrac{\mu}{\mu^{2} + \nu^{2}}$ (2.175)
$\Gamma^{1}_{33} = -\dfrac{\mu\nu^{2}}{\mu^{2} + \nu^{2}}$ (2.176)
$\Gamma^{2}_{11} = -\dfrac{\nu}{\mu^{2} + \nu^{2}}$ (2.177)
$\Gamma^{2}_{12} = \dfrac{\mu}{\mu^{2} + \nu^{2}}$ (2.178)
$\Gamma^{2}_{22} = \dfrac{\nu}{\mu^{2} + \nu^{2}}$ (2.179)
$\Gamma^{2}_{33} = -\dfrac{\mu^{2}\nu}{\mu^{2} + \nu^{2}}$ (2.180)
$\Gamma^{3}_{13} = \dfrac{1}{\mu}$ (2.181)
$\Gamma^{3}_{23} = \dfrac{1}{\nu}$ (2.182)
The general form for static equations is
$\mathbf{K}\vec{u} = \vec{f}$
The general form for dynamic equations is
$\mathbf{M}\ddot{\vec{u}} + \mathbf{C}\dot{\vec{u}} + \mathbf{K}\vec{u} = \vec{f}$
We now expand the unknown vector in terms of a polynomial of degree $p$ in time. With the known values of the unknown and its time derivatives up to order $p - 1$ at the beginning of the time step, we can write the polynomial expansion as
(2.185) 
A recurrence relationship can be established by substituting Equation (2.191) into Equation (3.176) and taking a weighted residual approach i.e.,
Now if
for  (2.186) 
(2.188) 
We note that, as the load vector is nonlinear, we need to evaluate an integral of the form
(2.189) 
To do this we form Taylor series expansions about the start of the time step i.e.,
Now if we add times Equation (2.200) and times Equation (2.201) we obtain
(2.192) 
Multiplying through by gives
(2.193) 
Therefore
(2.194) 
Now if we recall that
(2.195) 
Error  (2.196) 
Error  (2.197) 
Equation (2.197) now becomes
(2.198) 
Rearranging gives
(2.199) 
(2.200) 
(2.201) 
If the problem is linear then Equation (2.210) is linear in the unknown, which can be found by solving the linear equation
(2.202) 
(2.203) 
If the problem is not linear then Equation (2.210) is nonlinear in the unknown. To solve this equation we use Newton's method i.e.,
(2.204) 
(2.205) 
(2.206) 
Once the solution has been obtained, the values at the next time step can be obtained from
(2.207) 
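The Newton iteration described above can be sketched on a small stand-in system. The matrix, nonlinear term and right hand side below are illustrative assumptions, not the notes' actual time-stepping matrices; the sketch only demonstrates the update $u \leftarrow u - J^{-1}G(u)$:

```python
import numpy as np

# Hypothetical nonlinear system G(u) = A u + f(u) - b = 0 standing in for
# the nonlinear single-step equations; A, f and b are illustrative only.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

def f(u):
    return 0.1 * u**3            # nonlinear term

def jac_f(u):
    return np.diag(0.3 * u**2)   # its Jacobian

def newton(u, tol=1e-12, max_iter=50):
    """Newton's method: u <- u - J(u)^{-1} G(u)."""
    for _ in range(max_iter):
        G = A @ u + f(u) - b
        if np.linalg.norm(G) < tol:
            break
        J = A + jac_f(u)
        u = u - np.linalg.solve(J, G)
    return u

u = newton(np.zeros(2))
assert np.linalg.norm(A @ u + f(u) - b) < 1e-10
```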
For algorithms in which the degree of the polynomial, $p$, is higher than the order of the system we require the algorithm to be initialised so that the initial velocity or acceleration can be computed. The initial velocity or acceleration values can be obtained by substituting the initial displacement, or initial displacement and velocity, values into Equation (3.176), rearranging and solving. For example, consider the case of a second degree polynomial and a first order system. Substituting the initial displacement into Equation (3.176) gives
(2.208) 
(2.209) 
Similarly, for a third degree polynomial and a second order system, the initial acceleration can be found from
(2.210) 
To evaluate the mean weighted load vector we need to evaluate the integral in Equation (2.196). In some cases, however, we can make the assumption that the load vector varies linearly during the time step. In these cases the mean weighted load vector can be computed from
(2.211) 
For this special case, the mean predicted values are given by
(2.212) 
The predicted displacement values are given by
(2.213) 
The amplification matrix is given by
(2.214) 
The right hand side vector is given by
(2.215) 
The nonlinear function is given by
(2.216) 
The Jacobian matrix is given by
(2.217) 
And the time step update is given by
(2.218) 
For this special case, the mean predicted values are given by
(2.219) 
(2.220) 
The predicted displacement values are given by
(2.221) 
The amplification matrix is given by
(2.222) 
The right hand side vector is given by
(2.223) 
The nonlinear function is given by
(2.224) 
The Jacobian matrix is given by
(2.225) 
And the time step update is given by
(2.226) 
For this special case, the mean predicted values are given by
(2.227) 
The predicted displacement values are given by
(2.228) 
The amplification matrix is given by
(2.229) 
The right hand side vector is given by
(2.230) 
The nonlinear function is given by
(2.231) 
The Jacobian matrix is given by
(2.232) 
And the time step update is given by
(2.233) 
The branch of mathematics concerned with the problem of finding a function for which a certain integral of that function is at its largest or smallest value is called the calculus of variations. When scientific laws are formulated in terms of the principles of the calculus of variations, they are termed variational principles.
The generalised Laplace's equation on a domain $\Omega$ with boundary $\Gamma$ in OpenCMISS can be stated as
$\nabla\cdot\left(\boldsymbol{\sigma}\nabla u\right) = 0$
To complete the description of the boundary value problem, the governing Equation (3.1) is supplemented by suitable boundary conditions on the domain boundary $\Gamma$. These boundary conditions can either be of Dirichlet type and specify a solution value, i.e.,
The corresponding weak form of Equation (3.1) is
Applying Green's theorem to Equation (3.4) gives
(3.5) 
If we now consider the integrand of the left hand side of Equation (3.6) in tensorial form, with covariant derivatives indicated by a semicolon in the index (see Section 2.2.4 for details), then
Now, Equations (3.7) and (3.8) represent covariant vectors, and therefore we must convert Equation (3.7) to a contravariant vector by multiplying by the contravariant metric tensor (from $\vec{x}$ to $\vec{\xi}$ coordinates; see Sections 2.2.2 and 2.2.6) so that we can take the dot product. The final tensorial form for the left hand integral is
(3.9) 
We can now discretise the domain into finite elements i.e., dividing $\Omega$ into element subdomains, Equation (3.11) becomes
(3.12) 
The next step is to approximate the dependent variable, $u$, using basis functions. To simplify this for different types of basis functions an interpolation notation is adopted. This interpolation is such that $\Psi_{n}^{\nu}$ are the appropriate basis functions for the type of element (e.g., bicubic Hermite, Hermite-sector, etc.) and dimension of the problem (one, two or three-dimensional). For example, if the element is a cubic Hermite element (Section 2.1.3) then $\Psi_{n}^{\nu}$ are the cubic Hermite basis functions, where the local node number $n$ ranges from 1 to 2 and the derivative index $\nu$ ranges from 0 to 1. If, however, the element is a bicubic Hermite element then $n$ ranges from 1 to 4, $\nu$ ranges from 0 to 3 and $\Psi_{n}^{\nu}$ is the appropriate basis function for the nodal variable $u_{n}^{\nu}$. The nodal variables are defined as follows: $u_{n}^{0} = u_{n}$, $u_{n}^{1} = \left(\frac{\partial u}{\partial s_{1}}\right)_{n}$, $u_{n}^{2} = \left(\frac{\partial u}{\partial s_{2}}\right)_{n}$, $u_{n}^{3} = \left(\frac{\partial^{2} u}{\partial s_{1}\partial s_{2}}\right)_{n}$, etc. The scale factors for the Hermite interpolation are handled by the introduction of an interpolation scale factor $S(n,\nu)$ defined as follows: $S(n,0) = 1$, $S(n,1) = S_{1,n}$, $S(n,2) = S_{2,n}$, $S(n,3) = S_{1,n}S_{2,n}$, etc. For Lagrangian basis functions the interpolation scale factors are all one. The general form of the interpolation notation for $u$ is then
$u\left(\vec{\xi}\right) = \Psi_{n}^{\nu}\left(\vec{\xi}\right)u_{n}^{\nu}\,S(n,\nu)$
We can also interpolate the other variables in a similar manner i.e.,
(3.14) 
Using a Galerkin finite element approach (where the weighting functions are chosen to be the interpolating basis functions) we have
(3.15) 
When dealing with integrals a similar notation is adopted in that $d\vec{\xi}$ is the appropriate infinitesimal for the dimension of the problem. The limits of the integral are also written in bold font and indicate the appropriate number of integrals for the dimension of the problem. For example, for a two-dimensional problem $\int\limits_{\vec{0}}^{\vec{1}} \ldots\, d\vec{\xi} = \int\limits_{0}^{1}\int\limits_{0}^{1} \ldots\, d\xi_{1}\, d\xi_{2}$.
In order to integrate in $\vec{\xi}$ coordinates we must convert the spatial derivatives to derivatives with respect to $\vec{\xi}$. Using the tensor transformation equations for a covariant vector we have
(3.16) 
(3.17) 
As we only know the conductivity in the (fibre) coordinate system, rather than the conductivity in the (geometric) coordinate system, we must transform the mixed conductivity tensor between the two. However, as the fibre coordinate system is defined in terms of $\vec{\xi}$ coordinates, we first transform the conductivity tensor from fibre to $\vec{\xi}$ coordinates i.e.,
(3.18) 
(3.19) 
(3.20) 
Using the interpolated variables yields
Taking the fixed nodal degrees-of-freedom and scale factors outside the integral we have
This is an equation of the form
(3.21) 
(3.22) 
The elemental stiffness matrix is given by
Note that for Laplace's equation with no conductivity or fibre fields we have
The right hand side flux matrix is given by
(3.27) 
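The element-matrix assembly and solve described above can be sketched in one dimension. This is a stand-in analogue only, assuming linear Lagrange elements, unit conductivity and no fibre field, so the element stiffness integral reduces to the familiar $(1/L)\begin{pmatrix}1 & -1\\ -1 & 1\end{pmatrix}$; the mesh and boundary conditions are illustrative:

```python
import numpy as np

# Galerkin FEM sketch for -u'' = 0 on [0,1] with linear Lagrange elements
# and Dirichlet conditions u(0) = 0, u(1) = 1 (all choices illustrative).
n_el = 4
x = np.linspace(0.0, 1.0, n_el + 1)
n = len(x)
K = np.zeros((n, n))
for e in range(n_el):
    L = x[e + 1] - x[e]
    # Element stiffness: integral of dphi_m/dx dphi_n/dx over the element.
    ke = (1.0 / L) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    idx = (e, e + 1)
    for a in range(2):
        for c in range(2):
            K[idx[a], idx[c]] += ke[a, c]

F = np.zeros(n)
# Dirichlet conditions imposed by row replacement.
K[0, :] = 0.0; K[0, 0] = 1.0; F[0] = 0.0
K[-1, :] = 0.0; K[-1, -1] = 1.0; F[-1] = 1.0
u = np.linalg.solve(K, F)

# The exact solution of this 1-D Laplace problem is u(x) = x.
assert np.allclose(u, x)
```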
Governing equations:
The general form of Poisson's equation with a source term on a domain $\Omega$ with boundary $\Gamma$ in OpenCMISS can be stated as
Appropriate boundary conditions for Poisson's equation are the specification of Dirichlet boundary conditions on the solution, i.e.,
The corresponding weak form of Equation (3.30) can be found by integrating over the spatial domain with a test function i.e.,
Applying the divergence theorem in Equation (3.190) to Equation (3.33) gives
Equation (3.34) can be written in tensor notation as
We can now discretise the spatial domain into finite elements, and Equation (3.36) becomes
We can now interpolate the variables with basis functions i.e.,
(3.36) 
For a Galerkin finite element formulation we also choose the spatial weighting function to be equal to the basis functions i.e.,
(3.37) 
Adopting the standard integration notation we can write the spatial integration term in Equation (3.37) as
Taking values that are constant over the integration interval outside the integral gives
This is an equation of the form
(3.38) 
(3.39) 
The elemental stiffness matrix is given by
The elemental flux matrix is given by
(3.41) 
(3.42) 
Governing equations:
The general form of the diffusion equation with a source term on a domain $\Omega$ with boundary $\Gamma$ in OpenCMISS can be stated as
Appropriate boundary conditions for the diffusion equation are the specification of Dirichlet boundary conditions on the solution, i.e.,
Appropriate initial conditions for the diffusion equation are the specification of an initial value of the solution, i.e.,
Weak formulation:
The corresponding weak form of Equation (3.47) can be found by integrating over the spatial domain with a test function i.e.,
Applying the divergence theorem in Equation (3.190) to Equation (3.51) gives
Tensor notation:
Equation (3.52) can be written in tensor notation as
Finite element formulation:
We can now discretise the spatial domain into finite elements, and Equation (3.54) becomes
If we now assume that the dependent variable can be interpolated separately in space and in time we can write
(3.52) 
(3.53) 
We can also interpolate the other variables in a similar manner i.e.,
(3.54) 
For a Galerkin finite element formulation we also choose the spatial weighting function to be equal to the basis functions i.e.,
(3.55) 
Spatial integration:
Adopting the standard integration notation we can write the spatial integration term in Equation (3.55) as
Taking values that are constant over the integration interval outside the integral gives
This is an equation of the form
(3.56) 
(3.57) 
The elemental damping matrix is given by
(3.58) 
The elemental stiffness matrix is given by
(3.59) 
The elemental flux matrix is given by
(3.60) 
(3.61) 
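The damping and stiffness matrices for the diffusion equation can be sketched in one dimension with linear elements. The element matrices below are the standard 1-D linear-element mass and stiffness forms, used here as stand-ins; the mesh, time step and backward Euler scheme are illustrative assumptions:

```python
import numpy as np

# 1-D diffusion u_t = u_xx with linear elements: element damping (mass)
# matrix C_e = (L/6)[[2,1],[1,2]] and stiffness K_e = (1/L)[[1,-1],[-1,1]].
n_el, dt = 8, 0.01
x = np.linspace(0.0, 1.0, n_el + 1)
n = len(x)
C = np.zeros((n, n))
K = np.zeros((n, n))
for e in range(n_el):
    L = x[e + 1] - x[e]
    Ce = (L / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    Ke = (1.0 / L) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    idx = (e, e + 1)
    for a in range(2):
        for c in range(2):
            C[idx[a], idx[c]] += Ce[a, c]
            K[idx[a], idx[c]] += Ke[a, c]

# Backward Euler step: (C/dt + K) u_{n+1} = (C/dt) u_n, zero-flux BCs.
u = np.ones(n)  # uniform initial field
for _ in range(10):
    u = np.linalg.solve(C / dt + K, C @ u / dt)

# A uniform field is an exact steady solution of the zero-flux problem.
assert np.allclose(u, np.ones(n))
```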
The finite element formulation of linear elasticity problems is as follows.
Formulation of finite element equations for finite elasticity (large deformation mechanics) implemented in OpenCMISS is based on the principle of virtual work. The finite element model consists of a set of nonlinear algebraic equations. Nonlinearity of the equations stems from the nonlinear stress-strain relationship and the quadratic terms present in the strain tensor. A typical problem in large deformation mechanics involves determination of the deformed geometry (or mesh nodal parameters, from the finite element point of view) of the continuum from a known undeformed geometry, subject to boundary conditions and satisfying a stress-strain (constitutive) relationship.
The boundary conditions can be either Dirichlet (displacement), Neumann (force) or a combination of them, known as mixed boundary conditions. Displacement boundary conditions are generally nodal based. However, force boundary conditions can take any of the following forms, or a combination of them: nodal-based, distributed load (e.g. pressure), or force acting at a discrete point on the boundary. In the latter two forms, the equivalent nodal forces are determined using the method of work equivalence, and the forces so obtained will then be added to the right hand side or the residual vector of the linear equation system.
There are numerous ways of describing the mechanical characteristics of deformable materials in large deformation mechanics or finite elasticity analyses. A predominantly used form for representing constitutive properties is a strain energy density function. This model gives the energy required to deform a unit volume (hence energy density) of the deformable continuum as a function of Green-Lagrange strain tensor components or its derived variables such as invariants or principal stretches. A material that has a strain energy density function is known as a hyperelastic or Green-elastic material.
The deformed equilibrium state should also give the minimum total elastic potential energy. One can therefore formulate finite element equations using the variational method approach, where an extremum of a functional (in this case the total strain energy) is determined to obtain the mesh nodal parameters of the deformed continuum. It is also possible to derive the finite element equations starting from the governing equilibrium equations, known as Cauchy's equation of motion. The weak form of the governing equations is obtained by multiplying them with suitable weighting functions and integrating over the domain (method of weighted residuals). If interpolation or shape functions are used as weighting functions, then the method is called the Galerkin finite element method. All three approaches (virtual work, variational method and Galerkin formulation) result in the same finite element equations.
In the following sections the derivation of the kinematic relationships of deformation, energy conjugacy, constitutive relationships and the final form of the finite element equations using the virtual work approach will be discussed in detail.
$x$ -- fixed spatial coordinate system axes (orthogonal)
$\nu$ -- deforming material coordinate system axes (orthogonal in the undeformed state)
$\xi$ -- element coordinate system (non-orthogonal in general; deforms with the continuum)
$X_j$ -- spatial coordinates of a particle in the undeformed state w.r.t. the $x$ coordinate system
$x_j$ -- spatial coordinates of the same particle in the deformed state w.r.t. the $x$ coordinate system
$\nu_\alpha$ -- material coordinates of the particle w.r.t. the $\nu$ coordinate system (these do not change)
$\xi_k$ -- element coordinates of the particle w.r.t. the $\xi$ coordinate system (these too do not change)
Since the directional vectors of the material coordinate system at any given point in the undeformed state are mutually orthogonal, the relationship between spatial and material coordinates is simply a rotation. The user must define the undeformed material coordinate system. Typically a nodally based interpolatable field known as fibre information (fibre, imbrication and sheet angles) is input to OpenCMISS. These angles define how much the reference or default material coordinate system must be rotated about the reference material axes. The reference material coordinate system at a given point is defined as follows. The first direction is in the $\xi_1$ direction. The second direction is in the $\xi_1$--$\xi_2$ plane but orthogonal to the first. Finally the third direction is determined to be normal to both the first and second. Once the reference coordinate system is defined, it is rotated about its third axis, in the counterclockwise direction, by an angle equal to the interpolated fibre value at the point. This is followed by a counterclockwise rotation about the new second axis by an angle equal to the imbrication value. The final rotation is performed about the current first axis by an angle defined by the interpolated sheet value. Note that before a rotation is carried out about an arbitrary axis one must first align (transform) the axis of rotation with one of the spatial coordinate system axes. Once the rotation is done, the rotated (material) coordinate system must be inverse-transformed.
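As a concrete illustration, the composition of the three rotations can be sketched as follows. This is a minimal sketch: the axis/angle ordering shown (third axis for fibre, new second axis for imbrication, new first axis for sheet), the function names and the angle values are illustrative assumptions, not the OpenCMISS implementation.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def material_axes(fibre, imbrication, sheet):
    """Columns are the rotated material axes.
    Right-multiplication rotates about the body-fixed (current) axes."""
    M = np.eye(3)               # reference material coordinate system
    M = M @ rot_z(fibre)        # rotate about the 3rd axis by the fibre angle
    M = M @ rot_y(imbrication)  # rotate about the new 2nd axis by the imbrication angle
    M = M @ rot_x(sheet)        # rotate about the current 1st axis by the sheet angle
    return M

M = material_axes(np.pi / 6, 0.1, np.pi / 4)
# a composition of rotations is still a pure rotation
assert np.allclose(M @ M.T, np.eye(3))
```

Because each rotation is applied by right-multiplication, it acts about the axes of the already-rotated (material) frame, which matches the "rotate about the new axis" description above.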
Having defined the undeformed orthogonal material coordinate system, the metric tensor $\frac{\partial X_j}{\partial \nu_\alpha}$ can be determined. As mentioned, this tensor contains the rotation required to align the material coordinate system with the spatial coordinate system, and is therefore orthogonal. A similar metric tensor can be defined to relate the deformed coordinates $x_j$ of the point to its material coordinates $\nu_\alpha$. Note that the latter coordinates do not change as the continuum deforms and, more importantly, this second tensor is not in general orthogonal. The metric tensor $\frac{\partial x_j}{\partial \nu_\alpha}$ is called the deformation gradient tensor and is denoted by $F$.
It can be shown that the deformation gradient tensor contains rotation when an infinitesimal length in the undeformed state undergoes deformation. Since rotation does not contribute to any strain, it must be removed from the deformation gradient tensor. Any invertible tensor can be decomposed into the product of an orthogonal tensor and a symmetric tensor (known as polar decomposition). In other words, the same deformation can be achieved by first rotating and then stretching (shearing and scaling), or vice versa. Thus, the deformation gradient tensor can be given by,
$$F = RU = vR$$
where $R$ is an orthogonal rotation tensor and $U$ and $v$ are the symmetric right and left stretch tensors respectively.
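A quick numerical check of the polar decomposition, recovering $R$ and $U$ from a given $F$ via the singular value decomposition. The particular $F$ (a simple shear plus stretch) is an arbitrary illustrative choice.

```python
import numpy as np

# Simple shear plus stretch; det(F) > 0 so the decomposition gives a proper rotation
F = np.array([[1.2, 0.4, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.9]])

W, s, Vt = np.linalg.svd(F)
R = W @ Vt                   # orthogonal rotation tensor
U = Vt.T @ np.diag(s) @ Vt   # right stretch tensor (symmetric positive definite)

assert np.allclose(F, R @ U)            # F = RU
assert np.allclose(R.T @ R, np.eye(3))  # R is orthogonal
assert np.allclose(U, U.T)              # U is symmetric
```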
The rotation present in the deformation gradient tensor can be removed by either right or left multiplication of $F$. The resulting tensors lead to different strain measures. The right Cauchy deformation tensor $C$ is obtained from,
$$C = F^{T}F = (RU)^{T}(RU) = U^{T}R^{T}RU = U^{2}$$
Similarly the left Cauchy deformation tensor (or Finger tensor) $b$ is obtained from the left multiplication of $F$,
$$b = FF^{T} = (vR)(vR)^{T} = vRR^{T}v^{T} = v^{2}$$
Note that the rotation tensor $R$ is orthogonal and therefore satisfies the following condition,
$$R^{T}R = RR^{T} = I$$
Since there is no rotation present in either $C$ or $b$, they can be used to define suitable strain measures as follows,
$$E = \frac{1}{2}\left(C - I\right)$$
and
$$e = \frac{1}{2}\left(I - b^{-1}\right)$$
where $E$ and $e$ are called the Green and Almansi strain tensors respectively. Also note that $R$ is an orthogonal tensor.
It is now necessary to establish a relationship between strain and displacement. Referring to figure 1,
$$x = X + u$$
where $u$ is the displacement vector. Differentiating Equation (3.75) using the chain rule,
$$\frac{\partial x_i}{\partial X_j} = \delta_{ij} + \frac{\partial u_i}{\partial X_j}$$
Substituting Equation (3.76) into Equation (3.73) and simplifying,
$$E_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial X_j} + \frac{\partial u_j}{\partial X_i} + \frac{\partial u_k}{\partial X_i}\frac{\partial u_k}{\partial X_j}\right)$$
As can be seen from Equation (3.78), the displacement gradient tensor is defined with respect to the undeformed coordinates $X$. This means that the strain tensor has a Lagrangian description and hence it is also called the Green--Lagrange strain tensor.
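The equivalence of the two expressions for the Green--Lagrange strain, $E = \frac{1}{2}(F^{T}F - I)$ and the displacement-gradient form, can be checked numerically. The deformation gradient used here is an arbitrary illustrative choice.

```python
import numpy as np

F = np.array([[1.10, 0.30, 0.00],
              [0.00, 0.95, 0.10],
              [0.05, 0.00, 1.20]])
H = F - np.eye(3)   # displacement gradient du/dX, since x = X + u

E_from_C = 0.5 * (F.T @ F - np.eye(3))   # E = (C - I)/2
E_from_H = 0.5 * (H + H.T + H.T @ H)     # E = (H + H^T + H^T H)/2

# both routes give the same Green--Lagrange strain tensor
assert np.allclose(E_from_C, E_from_H)
```

Note that the quadratic term $H^{T}H$ is what distinguishes the finite strain measure from the small strain tensor obtained later by dropping it.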
A similar derivation can be employed to establish a relationship between the Almansi strain tensor and the displacement gradient tensor; the final form is given by,
$$e_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{\partial u_k}{\partial x_i}\frac{\partial u_k}{\partial x_j}\right)$$
The displacement gradient tensor terms in Equation (3.79) are defined with respect to the deformed coordinates $x$, and therefore the strain tensor has an Eulerian description. Thus it is also known as the Almansi--Euler strain tensor.
where $e$ and $\sigma$ are the Almansi strain tensor and the Cauchy stress tensor respectively.
If the deformed body is further deformed by introducing virtual displacements, then the new internal elastic energy can be given by,
Subtracting Equation (3.80) from Equation (3.81),
Using Equation (3.79) for virtual strain,
Since the virtual displacements are infinitesimally small, the quadratic terms in Equation (3.83) can be neglected. The resulting strain tensor, known as the small strain tensor $\varepsilon$, can be given as,
Since both $\varepsilon$ and $\sigma$ are symmetric, new vectors are defined by inserting their tensor components as follows,
Substituting Equation (3.85) into Equation (3.82),
The strain vector can be related to displacement vector using the following equation,
where $D$ and $u$ are the linear differential operator and the displacement vector respectively, given by,
The virtual displacement is a finite element field and hence the value at any point can be obtained by interpolating nodal virtual displacements.
Governing equations:
For a velocity $u$ and a kinematic viscosity $\nu$, Burgers' equation in one dimension is
$$\dfrac{\partial u}{\partial t} + u\dfrac{\partial u}{\partial x} = \nu\dfrac{\partial^{2} u}{\partial x^{2}}$$
The general form of this equation in OpenCMISS is
Appropriate boundary conditions for Burgers' equation are the specification of Dirichlet boundary conditions on the solution, i.e.,
Appropriate initial conditions for Burgers' equation are the specification of an initial value of the solution, i.e.,
Weak formulation:
The corresponding weak form of Equation (3.92) can be found by integrating over the domain with test functions i.e.,
Applying the divergence theorem in Equation (3.190) to Equation (3.96) gives
Tensor notation:
Equation (3.97) can be written in tensor notation as
Finite element formulation:
We can now discretise the domain into finite elements, i.e., $\Omega = \bigcup_{e=1}^{E}\Omega_{e}$. Equation (3.99) now becomes
If we assume that we are in rectangular Cartesian coordinates then the Christoffel symbols are all zero and the metric tensor is the identity. Equation (3.101) thus becomes
If we now assume that the dependent variable can be interpolated separately in space and in time we can write
(3.95) 
(3.96) 
(3.97) 
We can also interpolate the other variables in a similar manner i.e.,
(3.98) 
For a Galerkin finite element formulation we also choose the spatial weighting function to be equal to the basis functions, i.e.,
(3.99) 
Spatial integration:
Adopting the standard integration notation we can write the spatial integration term in Equation () as
The Poiseuille equation describes the pressure drop that occurs due to viscous friction in laminar fluid flow through a straight pipe. Poiseuille flow can be derived from the Navier--Stokes equations in cylindrical coordinates $(r,\theta,z)$ by assuming that the flow is:
The Poiseuille equation is:
Rearranging Equation (3.108) gives
(3.101) 
Volumetric flow is the average axisymmetric flow multiplied by the area of the pipe, i.e., $Q = \bar{v}\pi R^{2}$. Equation (3.108) now becomes
(3.102) 
(3.103) 
(3.104) 
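A minimal numerical sketch of the Poiseuille relationship $Q = \pi R^{4}\,\Delta p/(8\mu L)$ and of the mean velocity recovered from $Q = \bar{v}\pi R^{2}$. The numerical values (water-like viscosity, 1 cm radius, 1 m length, 100 Pa drop) are illustrative assumptions.

```python
import math

def poiseuille_flow(delta_p, radius, viscosity, length):
    """Volumetric flow Q = pi * R^4 * dp / (8 * mu * L) for laminar pipe flow."""
    return math.pi * radius**4 * delta_p / (8.0 * viscosity * length)

Q = poiseuille_flow(delta_p=100.0, radius=0.01, viscosity=1.0e-3, length=1.0)
mean_velocity = Q / (math.pi * 0.01**2)   # from Q = v_mean * pi * R^2
```

Note the strong $R^{4}$ dependence: halving the radius reduces the flow by a factor of sixteen for the same pressure drop.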
The weak form of the differential form of the Poiseuille equation system consisting of Equation (3.108) and can be written as:
We can now interpolate the dependent variables using basis functions
(3.108) 
Using a Galerkin finite element approach (where the weighting functions are chosen to be the interpolating basis functions) we have
(3.109) 
Adopting the standard integration notation we can write the spatial integration term for each element in Equation (3.119) as
This is an elemental equation of the form
(3.110) 
(3.111) 
The elemental stiffness matrix, , is given by
Note that additional conservation of mass equations are required to solve the final matrix system.
Stokes flow is a type of fluid flow where advective inertial forces are small compared with viscous forces, i.e. the Reynolds number is low, $Re \ll 1$. For this type of flow, the inertial forces are assumed to be negligible and the Navier--Stokes equations simplify to give the Stokes equations. In the common case of an incompressible Newtonian fluid, the linear Stokes equations (three-dimensional, quasi-static) can be written as:
The corresponding weak form of the equation system consisting of Equation (3.124) and Equation (3.125) can be written as:
Darcy's equation describes the flow of fluid through a porous material. Ignoring inertial and body forces, Darcy's equation is:
(3.122) $w = -\dfrac{k}{\mu}\nabla p$
$w$ is the relative fluid flow vector, given by $w = \phi(v^{f} - v^{s})$ where $\phi$ is the porosity, and $v^{f}$ and $v^{s}$ are the Eulerian fluid and solid component velocities. $k$ is the permeability tensor, $\mu$ is the viscosity and $p$ is the fluid pressure.
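A minimal sketch of Darcy's law with a diagonal permeability tensor; all numerical values are illustrative assumptions.

```python
mu = 1.0e-3                       # fluid viscosity [Pa s]
k = [1.0e-12, 1.0e-12, 5.0e-13]   # principal permeabilities (diagonal tensor) [m^2]
grad_p = [1.0e4, 0.0, -2.0e4]     # fluid pressure gradient [Pa/m]

# Darcy's law, component-wise for a diagonal k: w_i = -(k_i / mu) * dp/dx_i
w = [-(ki / mu) * gi for ki, gi in zip(k, grad_p)]
# flow runs down the pressure gradient, scaled by permeability over viscosity
```

For a full (anisotropic, non-diagonal) permeability tensor the product becomes a matrix-vector multiplication, and the flow is generally not parallel to the pressure gradient.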
Conservation of mass also gives:
(3.123) 
The weighted residual forms of these equations over a region $\Omega$ with surface $\Gamma$ are:
(3.124) 
(3.125) 
where $w^{*}$ is the weighting function for the flow and $p^{*}$ is the weighting function for the pressure. Applying Green's theorem to the pressure term to give the weak form of the equations, and multiplying through, gives:
(3.126) 
(3.127) 
where $n$ is the normal vector to the surface $\Gamma$.
The OpenCMISS implementation of Darcy's equation uses the stabilised form of the finite element equations developed by (, ). This adds stabilising terms, so that the final form of the implemented equations is:
(3.128) 
(3.129) 
The Navier--Stokes equations arise from applying Newton's second law to fluid motion, i.e. the temporal and spatial fluid inertia is in equilibrium with internal (volume/body) and external (surface) forces. The reaction of surface forces can be described in terms of the fluid stress as the sum of a diffusing viscous term plus a pressure term. The equations are named for the 19th century contributions of Claude-Louis Navier and George Gabriel Stokes. A solution of the Navier--Stokes equations is called a flow field, i.e. a velocity and pressure field, which is a description of the fluid at a given point in space and time. In the common case of an incompressible Newtonian fluid, the nonlinear Navier--Stokes equations (three-dimensional, transient) can be written using 'primitive variables' (i.e. $u$ velocity, $p$ pressure) as:
Using a classic Galerkin formulation, mixed methods are perhaps conceptually the most straightforward method of satisfying LBB, in which velocity is defined over a space one order higher than pressure (e.g. quadratic elements for velocity, linear for pressure), allowing incompressibility to be satisfied. It should be noted that our use of a mixed formulation to satisfy LBB will also be reflected in the shape functions that our weak formulation depends on. For example, using a 2D element with biquadratic velocity and linear pressure, we will have 9 DOFs and weight functions for each velocity component and 4 for the pressure.
Integrating by parts, we will get the weak form of the system of PDEs with the associated natural boundary conditions at the boundary :
For more extensive discussion of this procedure, along with other weak forms of the PDEs, we refer to (, ). From this weak form, we see natural (Neumann) boundary conditions arising as a direct result of the integration. Neumann boundary conditions will consist of a pressure term and viscous stress acting normal to a given boundary.
(3.133)  
(3.134) 
where $g^{ij}$ is the contravariant metric tensor and $\Gamma^{k}_{ij}$ is the Christoffel symbol of the second kind for the spatial coordinates.
We can now discretise the domain into finite elements, i.e., $\Omega = \bigcup_{e=1}^{E}\Omega_{e}$. Equation (3.149) now becomes:
We will assume that the dependent variables and can be interpolated separately in space and time. Here we must also be careful to note again the discrepancy between the functional spaces for velocity and pressure using a mixed formulation to satisfy the LBB consistency requirement. We will therefore define two separate weighting functions: for the velocity space on , the test function will be and for the pressure space, , giving:
(3.136)  
(3.137) 
In standard interpolation notation within an element, we transform from to :
(3.138)  
(3.139) 
where are the time varying nodal degrees-of-freedom for velocity component , node , global derivative , are the corresponding velocity basis functions and are the scale factors. The scalar pressure DOFs are interpolated similarly.
For the natural boundary term, we can separate it into its component velocity and pressure terms as noted in 3.144. For our current treatment, we will also assume the values of viscosity and density are constant. These can be interpolated:
(3.140) 
Using the spatial weighting functions for a Galerkin finite element formulation:
(3.141) 
Using standard integration notation, we can rewrite our Galerkin FEM formulation from :
where $J$ represents the Jacobian matrix. If we assume a rectangular Cartesian coordinate system, this simplifies significantly, as the contravariant metric tensor becomes the identity ($g^{ij} = \delta^{ij}$) and the Christoffel symbols are all zero. This gives:
or, rearranged to pull constants out of the integrals:
We now seek to assemble this into the corresponding general form for dynamic equations, as outlined in section 2.3.2:
We will assume Cartesian coordinates and denote the corresponding velocity components $u$, $v$ and $w$, with $n_{u}$ representing the number of velocity DOFs and $n_{p}$ the number of pressure DOFs. The solution vector then becomes:
(3.143) 
All the components of 3.161 will similarly depend on the number of dimensions, and the size of the matrices will scale accordingly.
Returning to the general case, the mass matrix will take the form:
(3.144) 
(3.145) 
The element stiffness matrix will contain the linear components of the system. For the classic Galerkin formulation this includes the viscous term with the Laplacian operator and the pressure gradient term. As the velocity and pressure DOFs are both included in the solution vector, the stiffness matrix will be assembled so that the corresponding operators are applied to the appropriate DOFs.
(3.146) 
To obtain a square matrix of the required size, we must also apply the transpose of the gradient terms acting on the pressure DOFs. For example, in a three-dimensional system, the stiffness matrix will take the form:
(3.147) 
The nonlinear vector provides the convective operators for the standard Galerkin formulation:
(3.148) 
For convection dominated flows, it is often useful to modify the Galerkin formulation to account for numerical problems associated with large asymmetric velocity gradients. We describe augmenting the Galerkin Navier--Stokes formulation to form a Petrov--Galerkin form, which uses a different test function for the convective term to stabilise the algorithm with a balancing diffusive term. Please refer to section for more details.
For the Navier--Stokes equations (and most nonlinear problems), the use of a Petrov--Galerkin formulation becomes more difficult than for linear cases like advection-diffusion, as consistent weighting can cause instabilities when used with additional terms like body forces (, ). These issues can be overcome with more complex Petrov--Galerkin formulations, such as those derived by Hughes and colleagues in the 1980s. A useful alternative is to apply SUPG weights to the convective operator in elements as a function of each element's Reynolds number (referred to subsequently as the cell Reynolds number). This provides the desired stabilising effect in the direction of large velocity gradients while remaining easy to implement and practically useful.
Assuming Cartesian coordinates and applying Petrov--Galerkin weights to the convective term, the finite element formulation described above can be written:
where one factor will be the classic Galerkin weight and the other the Petrov--Galerkin weight for the Navier--Stokes equations. We must also define the Peclet number, $Pe$, for the Navier--Stokes equations to describe the ratio of the convective to viscous operators. Here, the effective Peclet number will be based on the cell Reynolds number:
(3.149) 
(3.150) 
We will retain the same value for as found for the linearized advectiondiffusion equation.
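The cell-Peclet-based weighting can be sketched as follows, assuming the classic 1D optimal upwinding parameter $\alpha = \coth(Pe) - 1/Pe$ carried over from the linearised advection-diffusion case; the function name and the parameter values are illustrative assumptions.

```python
import math

def coth(x):
    return math.cosh(x) / math.sinh(x)

def supg_parameter(u, h, nu):
    """Cell Peclet number Pe = |u| h / (2 nu) and the classic 1D optimal
    upwinding factor alpha = coth(Pe) - 1/Pe."""
    Pe = abs(u) * h / (2.0 * nu)
    alpha = coth(Pe) - 1.0 / Pe
    return Pe, alpha

# strongly convective element: alpha approaches 1 (full upwinding)
Pe_high, alpha_high = supg_parameter(u=1.0, h=0.1, nu=1.0e-3)
# diffusion dominated element: alpha ~ Pe/3, vanishing (pure Galerkin)
Pe_low, alpha_low = supg_parameter(u=1.0, h=0.1, nu=10.0)
```

The two limits show the intended behaviour: the added diffusion switches itself off where the mesh already resolves the viscous scales, and saturates where convection dominates.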
After integration and simplification, Equation () then becomes:
Notice that the integration by parts is applied only to the standard Galerkin weights; this is because the artificial diffusion added by the Petrov--Galerkin terms should only contribute within the elements.
The use of the Petrov--Galerkin formulation in this way allows for the stabilisation of moderately convective problems. However, it also results in non-symmetric mass matrices with the additional term, making classical mass-lumping more difficult. As a result, the use of explicit methods also becomes more difficult for time-dependent problems.
The corresponding weak form of the equation system consisting of Equation () and Equation () can be written in the general dynamic form (see section 2.3.2)
The corresponding weak form of the equation system consisting of Equation () and Equation () can be written as:
(3.158) 
(3.159) 
The Navier--Stokes equation can be written as
Rearranging gives
(3.162) 
(3.164) 
Note that from Equation (3.184) we can write
Taking the divergence of both sides of Equation (3.184) gives
The corresponding weak form of Equation (3.187) is
Now the divergence theorem states that
Applying the divergence theorem gives
Note that when is used in Equation (3.190) you get Green's identity i.e.,
Now if we apply Equation (3.191) to the left hand side of Equation (3.188) we get
(3.171) 
Now if we apply Equation (3.190) to the right hand side of Equation (3.188) we get
Substituting Equation (3.193) and Equation (3.194) into Equation (3.188) gives
(3.174) 
Now, using a standard Galerkin Finite Element approach Equation (3.197) can be written in the following form
Note that you can write this in terms of the velocity stiffness matrix, the mass matrix, the vector of known velocity DOFs and the nonlinear Navier--Stokes vector.
Note that the matrix is most likely singular and therefore you will need to set a reference pressure Dirichlet condition somewhere and measure the pressures relative to this boundary condition value.
Note that this formulation removes all the nasty Neumann conditions in favour of a Dirichlet condition. It also removes the surface integrals in favour of volume integrals!
The bidomain model can be thought of as two coexistent intracellular and extracellular spaces. The potential in the intracellular space is denoted $\phi_{i}$ and the potential in the extracellular space is denoted $\phi_{e}$. The intracellular and extracellular potentials are related through the transmembrane voltage, $V_{m}$, i.e.,
The bidomain equations are
where $A_{m}$ is the surface to volume ratio for the cell, $C_{m}$ is the specific membrane capacitance, $\sigma_{i}$ is the intracellular conductivity tensor, $\sigma_{e}$ is the extracellular conductivity tensor, $I_{ion}$ is the ionic current and $I_{m}$ is the transmembrane current.
Consider the initial value problem
(3.180)  
(3.181) 
For Godunov splitting we first solve
(3.182)  
(3.183) 
(3.184)  
(3.185) 
(3.186) 
For Strang splitting we first solve
(3.187)  
(3.188) 
(3.189)  
(3.190) 
(3.191)  
(3.192) 
(3.193) 
Under certain conditions the bidomain equations given in Equation (3.199) and Equation (3.200) can be simplified to
where $A_{m}$ is the surface to volume ratio for the cell, $C_{m}$ is the specific membrane capacitance, $I_{ion}$ is the ionic current, $I_{s}$ is the intracellular stimulus current and $\sigma$ is the conductivity tensor given by
Rearranging Equation (3.215) gives
(3.196) 
CMISSProblemFiniteElasticityDarcyType couples the finite elasticity equations and Darcy's equation by solving each equation in turn, iterating between the two until convergence is reached. The full Darcy equations with velocity and pressure are solved.
Darcy's equation is solved on the deformed geometry and the finite elasticity constitutive relation has a dependence on the Darcy fluid pressure.
The fully dynamic governing equations for poromechanics are described in (, ). CMISSProblemfiniteElasticityFluidPressureType implements a static form of these equations and strongly couples the equations for finite elasticity to an equation describing the Darcy fluid pressure. Darcy's equation describing the fluid flow is substituted into the mass conservation equation to give:
(3.197) 
A static form of the constitutive law developed by (, ) is implemented in the equations set subtype CMISSEquationsSetElasticityFluidPressureStaticInriaSubtype.
From http://eqworld.ipmnet.ru/en/solutions/lpde/lpde101.pdf we have for the one-dimensional diffusion equation
(4.1) 
(4.2) 
Now, OpenCMISS solves diffusion equations of the form
To prove the analytic solution we differentiate Equation (4.4) to give
(4.5)  
(4.6)  
(4.7)  
(4.8) 
Substituting Equation (4.23) into Equation (4.3) gives
(4.9)  
(4.10) 
The analytic field component definitions in OpenCMISS are shown in Table 4.1.
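A solution of this separable form can be verified numerically; the function $u = e^{-\nu k^{2}t}\sin(kx)$ used below is a standard separable solution of the one-dimensional diffusion equation $u_t = \nu u_{xx}$, not necessarily the exact form used by the OpenCMISS analytic test, and the parameters are illustrative.

```python
import math

nu, k = 0.1, 2.0

def u(x, t):
    # separable solution of u_t = nu * u_xx
    return math.exp(-nu * k * k * t) * math.sin(k * x)

# check the PDE at an interior point with central finite differences
x0, t0, h = 0.7, 0.3, 1e-4
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2.0 * h)
u_xx = (u(x0 + h, t0) - 2.0 * u(x0, t0) + u(x0 - h, t0)) / h**2
residual = u_t - nu * u_xx   # should vanish up to discretisation error
```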

From http://eqworld.ipmnet.ru/en/solutions/npde/npde1104.pdf we have for the one-dimensional polynomial source diffusion equation
(4.11) 
(4.12) 
(4.13) 
(4.14) 
(4.15) 
Now, OpenCMISS solves quadratic source diffusion equations of the form
(4.18) 
(4.19) 
(4.20) 
To prove the analytic solution we differentiate Equation (4.17) to give
(4.21)  
(4.22)  
(4.23) 
The analytic field component definitions in OpenCMISS are shown in Table .

From http://eqworld.ipmnet.ru/en/solutions/npde/npde1105.pdf we have for the one-dimensional exponential source diffusion equation
(4.24) 
(4.25) 
(4.26) 
(4.27) 
Now, OpenCMISS solves diffusion equations of the form
(4.30) 
(4.31) 
To prove the analytic solution we differentiate Equation (4.29) to give .....
The analytic field component definitions in OpenCMISS are shown in Table .

The one-dimensional Burgers' equation in OpenCMISS form can be written as
Adapted from http://eqworld.ipmnet.ru/en/solutions/npde/npde1301.pdf we have for the one-dimensional Burgers' equation the analytic solution
To prove the analytic solution we differentiate Equation (4.33) to give
(4.34)  
(4.35)  
(4.36)  
(4.37)  
(4.38) 
Substituting Equation (4.38) into Equation (4.32) gives
(4.39)  
(4.40) 
The analytic field component definitions in OpenCMISS are shown in Table 4.4.
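The travelling-wave character of such solutions can be checked numerically. The form $u = c - a\tanh\left(a(x - ct)/(2\nu)\right)$ below is a standard travelling-wave solution of $u_t + u u_x = \nu u_{xx}$, not necessarily the exact expression from the eqworld reference, and the parameter values are illustrative.

```python
import math

nu, a, c = 0.5, 1.0, 1.5

def u(x, t):
    # travelling-wave solution of u_t + u*u_x = nu*u_xx
    return c - a * math.tanh(a * (x - c * t) / (2.0 * nu))

# check the PDE at an interior point with central finite differences
x0, t0, h = 0.4, 0.2, 1e-4
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2.0 * h)
u_x = (u(x0 + h, t0) - u(x0 - h, t0)) / (2.0 * h)
u_xx = (u(x0 + h, t0) - 2.0 * u(x0, t0) + u(x0 - h, t0)) / h**2
residual = u_t + u(x0, t0) * u_x - nu * u_xx   # vanishes up to discretisation error
```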

Adapted from http://eqworld.ipmnet.ru/en/solutions/npde/npde1301.pdf we have for the one-dimensional Burgers' equation the analytic solution
To prove the analytic solution we differentiate Equation (4.33) to give
(4.42)  
(4.43)  
(4.44)  
(4.45)  
(4.46) 
Substituting Equation (4.46) into Equation (4.32) gives
(4.47)  
(4.48)  
(4.49)  
(4.50) 
The analytic field component definitions in OpenCMISS are shown in Table 4.5.

There are two general forms of coupling that concern us, which we will term volume-coupling and interface-coupling. A volume-coupled problem is one where several equations are defined over the whole of a region and communicate over the entirety of that region (although the coupling terms may be zero in certain parts of the domain). An interface-coupled problem concerns those cases where two or more separate regions interact at a common interface. In 3D this interface is a surface, and in 2D it is a line. A typical example of this type of problem is fluid-structure interaction. An example of a volume-coupled problem is double porosity networks, often used in porous flow modelling for geophysics (citation). Clearly, both types of coupling could exist in a single problem, for example, if one wished to model cardiac electrophysiology, finite deformation solid mechanics and fluid mechanics within the ventricle.
Two solution strategies are available to us within OpenCMISS, a partitioned approach and a monolithic approach. A partitioned solution strategy involves solving each separate equation in turn, passing the relevant information from one equation to the next. In certain cases, subiteration may be required to ensure that a converged solution is reached. For coupling together only two equations this can be an attractive option, however, it can become unwieldy if many equations are to be coupled together. In contrast, the aim of a monolithic strategy is to solve all the equations together, in one solution process, by computing the matrix form of each equation, and then assembling all of these forms into one large matrix system, which is then passed to the solver e.g. PETSc. Again, it is possible to use both strategies within the same problem, if required. For example, if the time scale of one process is much slower than the others, it may be better to solve this separately, and for a larger time step, than the other equations.
For a new coupled problem it is necessary to define appropriate new classes, types & subtypes for the problem and the equations. For a single physics case the problem hierarchy takes the following form (as appropriate):
However, for the equations type hierarchy, it is usually not necessary to use the multiphysics equations class. For example, the types could be declared in the following way for the first equation:
The obvious implication of the above is that a new PHYS_ONE_PHYS_TWO_EQUATION_ROUTINES.f90 must be created, which in principle will include all the same subroutines as any other equations_routines.f90 file e.g. finite element calculate, equations set setup, problem setup. However, usually only the problem related information needs to be setup in the new file, with the equations specific setup included within the separate equations routines of the component physics (using CASE statements on the subtype to select the appropriate sections of code). More details on this will be given in Sections 6.1.3 & 6.1.4.
For a single physics problem a dependent field will usually possess two field variable types, the U variable type (for the left hand side) and the delUdelN type (for the right hand side). For a multiphysics problem we must define additional field variable types, typically two for every additional equations set. The list of possible field variable types is given in field_routines.f90 . Whilst, it is sensible to assign additional field variables in ascending order, this is not necessary, excepting that a dependent field must have at least a U variable type. Attempting to setup a dependent field with only V and delVdelN variable types will result in an error.
As there is only a single dependent field, its setup need only occur in the first of the equations set's setup routines, with the remaining equations set merely checking that the correct field setup has occurred. The key subroutine calls that must be made (either within the library for an autocreated field, or within the example file for a userspecified field) are:
Then the field dimension, field data type and field number of components must be set/checked for each variable type that has been defined on the field.
The equations set field is a mandatory data structure that must be defined for all problems. If only a single equations set is being used for that subtype, it does not need to be initialised; however, a variable still needs to be defined in the example file and passed in to the equations set create start:
Need to describe the setup of the equations set field. Number of components, data type etc.
Although there is only a single dependent field, we still define separate fields of other types for each equations set, that is, separate materials, source and independent fields, as appropriate for the particular problem under investigation.
For each equations set there must be a separate boundary condition object defined as standard. However, it is important to ensure that the correct field variable type is passed into the call to CMISSBoundaryConditionsSetNode, in agreement with the field variable associated with the particular equations set.
The pre and post solve routines will call various specific pre/postsolve routines, such as output data, update boundary conditions etc. These pre/postsolve routines can be those specific to PHYS_ONE and PHYS_TWO, and contained within PHYS_ONE_EQUATION_ROUTINES.f90 and PHYS_TWO_EQUATION_ROUTINES.f90 to eliminate excessive code duplication.
PHYS_ONE_PHYS_TWO_PROBLEM_SETUP defines the various solvers required for each equation. Some key subroutine calls required are:
Not yet attempted.
For certain physical problems it is required that several equations of the same type must be coupled in the same domain. This means that the equations_set_subtype is no longer sufficient for distinguishing between the equations and the specific behaviour one desires for each. It is necessary in these cases to make use of the equations_set_field structure, which is defined for each equations_set and contains two integers. The first integer is the id number of that particular equations_set, with the second integer defining the total number of equations_sets of that type. Obtaining the first integer from the equations_set_field within the finite_element_calculate subroutine allows the correct field_variable_type to be identified for interpolation purposes, as well as enabling an automated setup of the dependent field, equations_mappings etc.
The diffusion equation, once converted into a finite element formulation, will have a matrix form similar to:
Considering a general case of N diffusion equations, there will be N equations sets, with N respective materials fields to store the diffusion coefficient tensor for each equation. If we now require the equations to be coupled, there will also be coefficients that determine the strength of this coupling; each equations set carries a further set of these coupling coefficients. Rather than creating a further N materials fields, an extra variable type is defined for the existing fields; for simplicity, the V variable type. The V variable type is set to be a vector dimension type, with the required number of components. With coupling terms the matrix system is:
For a standard diffusion equation there are two dynamic matrices defined, stiffness and damping, which are mapped to the U variable type. To store the coupling contributions from the additional equations, extra matrices must be defined. An array of linear matrices are defined, which map to the other field variable types (V, U1, U2, etc). The terms on the diagonal of the global coupling matrix must be included within the stiffness matrix, as it is not possible to have both a dynamic matrix (stiffness) and an additional linear matrix mapping to the same field variable type (the library will be modified in future to allow this, see tracker item 2741 to check progress on this feature).
For a simple implementation of this refer to the example, /examples/MultiPhysics/CoupledDiffusionDiffusion/MonolithicSchemeTest .
One solver, but add multiple solver equations.
Show some example lines for the mappings that need to be defined in the set setup.
In the set_linear_setup, for the setup case(EQUATIONS_SET_SETUP_FINISH_ACTION), the appropriate field variable must be mapped to the particular equations set that is being setup. This requires a certain degree of hard coding and a priori knowledge about the field variable types available, and the order in which they will be assigned to equations sets.
First, retrieve the equations set field information and store in a local array.
Here, arrays that store the mapping between the equations set id and the field variable type are allocated and initialised, for use when setting the mappings.
Finally, set the mapping for the dynamic variable (as standard for the diffusion equation), the linear variable types (for the coupling matrices), the right hand side variable (i.e. delUdelN types) and the source variable. Note that the source variable that is mapped is always the FIELD_U_VARIABLE because we use separate source fields (containing only a U variable) for each equations set.
For a monolithic solution step there is only a single solver equation to setup, but several equations set equations to map into this. In the example file this is done in the following way:
Currently field_IO_routines does not support the export of fields containing field variable types other than U and delUdelN. Therefore, it is necessary to use the FieldML output routines, and output separate files, one for each variable type, using calls of the following form:
A Dirichlet boundary condition is a type of essential boundary condition where the value of a variable in an equation is fixed at a particular computational node, e.g., a prescribed pressure or temperature. In contrast, a natural boundary condition prescribes a derived quantity, such as flux in a diffusion problem or stress in linear elasticity. A Dirichlet boundary condition is enforced as a fixed specification of values at a set of boundary nodes. This type of boundary condition is implemented by modifying the set of equations, as opposed to the right hand side of the equation: in a finite element framework, enforcing the Dirichlet boundary condition replaces the rows of the equations matrix formed from the test functions, giving a new set of equations.
Imposing a Dirichlet boundary condition on a particular nodal value implies that the row of the equations matrix corresponding to that node becomes independent of the rest of the nodes, and hence can be eliminated and the matrix simplified in the process. For example, suppose the weak form of a problem yields the linear system (a generic $3 \times 3$ system is used here for illustration),
\[
\begin{bmatrix}
k_{11} & k_{12} & k_{13} \\
k_{21} & k_{22} & k_{23} \\
k_{31} & k_{32} & k_{33}
\end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}
=
\begin{bmatrix} f_1 \\ f_2 \\ f_3 \end{bmatrix}.
\]
Now, if you would like to impose a Dirichlet boundary condition on the first node only, i.e., $u_1 = a$ (as in a Laplace problem where the Dirichlet boundary condition is applied at the first and last nodes only), this modifies the equations matrix as,
\[
\begin{bmatrix}
1 & 0 & 0 \\
k_{21} & k_{22} & k_{23} \\
k_{31} & k_{32} & k_{33}
\end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}
=
\begin{bmatrix} a \\ f_2 \\ f_3 \end{bmatrix}.
\]
We can perform another elimination, moving the known value to the right hand side, and rewrite the equation as,
\[
\begin{bmatrix}
1 & 0 & 0 \\
0 & k_{22} & k_{23} \\
0 & k_{32} & k_{33}
\end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}
=
\begin{bmatrix} a \\ f_2 - k_{21} a \\ f_3 - k_{31} a \end{bmatrix}.
\]
Now we can store a reduced matrix,
\[
\begin{bmatrix}
k_{22} & k_{23} \\
k_{32} & k_{33}
\end{bmatrix}
\begin{bmatrix} u_2 \\ u_3 \end{bmatrix}
=
\begin{bmatrix} f_2 - k_{21} a \\ f_3 - k_{31} a \end{bmatrix}.
\]
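The elimination can be sketched in a few lines of Python. This is a conceptual illustration using dense numpy matrices, not the OpenCMISS implementation; the helper name `apply_dirichlet` is invented.

```python
import numpy as np

def apply_dirichlet(K, f, node, value):
    """Eliminate a Dirichlet condition u[node] = value from K u = f,
    returning the reduced system over the remaining unknowns."""
    keep = [i for i in range(K.shape[0]) if i != node]
    # move the known column's contribution to the right hand side
    f_red = f[keep] - K[np.ix_(keep, [node])].flatten() * value
    # keep only the rows and columns of the free unknowns
    K_red = K[np.ix_(keep, keep)]
    return K_red, f_red

K = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
f = np.array([1.0, 2.0, 3.0])
K_red, f_red = apply_dirichlet(K, f, 0, 5.0)   # impose u at node 0 equal to 5
```

Solving the reduced system and re-inserting the prescribed value recovers the full solution vector.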
In this section we give a short description of some of the routines and OpenCMISS coding details required to modify a part of the code related to boundary conditions.
In an example file, for example Laplace, the boundary condition is implemented in the following steps,
EQUATIONS_SET_BOUNDARY_CONDITIONS_CREATE_START
 As an overview, this call stack passes through EQUATIONS_SET_SETUP (e.g., the classical field, Laplace, standard Laplace setup) to use the information in the pointer to the particular equations set and return a pointer to the created boundary condition.
DECOMPOSITION_NODE_DOMAIN_GET implements a tree search to get the domain of a node
BOUNDARY_CONDITIONS_SET_NODE specifies the type of boundary condition on the particular node.
EQUATIONS_SET_BOUNDARY_CONDITIONS_CREATE_FINISH
EQUATIONS_SET_SETUP sets up the specific details of the equation set by using the
EQUATIONS_SET_SETUP_INFO
EQUATIONS_SET_BOUNDARY_CONDITIONS_GET Obtains boundary condition information for a particular type of equation set (Classical field Laplace)
EQUATIONS_SET_BOUNDARY_CONDITIONS_CREATE_FINISH
In the boundary conditions routine the Dirichlet boundary condition is imposed by first calculating the Dirichlet degrees of freedom. Then the row-column information of the equations matrix is obtained from the distributed matrix, more specifically from its storage structure:
In OpenCMISS the sparse distributed matrix is stored in a format called Compressed Row Storage (CRS) format (more details given in the next section). There are several other possibilities such as a column based approach: a Compressed Column Storage (CCS) format or a Block Storage type which have not been implemented yet.
In our current structure the equations matrix is stored in a row based format, i.e., CRS. To implement a Dirichlet boundary condition, however, information about the degrees of freedom stored in the columns of the matrix is required. It is therefore useful to also store the matrix in a column based format, to avoid redundant looping over all rows. Currently a linked list based approach has been proposed, which stores the matrix in CRS format as well as in a linked list format when a Dirichlet boundary condition is imposed on the problem. In order to reduce the memory usage of storing the equations matrix in two different formats, we store only the degrees of freedom of the matrix that correspond to the nodes on the boundary. The boundary nodes are calculated using a parameter mapping function of the local columns, obtained using the global to local mapping of the columns of the equations matrix. In this routine the column-row information of the matrix is obtained from a linked list, and the degrees of freedom are calculated and stored to be used in the calculation of the right hand side of the equation.
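The idea of keeping a column-oriented view only for the boundary degrees of freedom, rather than duplicating the whole matrix, can be sketched as follows. This is hypothetical Python using 0-based CRS arrays: `column_entries` is an invented name, and the real code uses a linked list rather than Python lists.

```python
def column_entries(row_ptr, col_ind, boundary_cols):
    """For each boundary column, collect (row, position-in-val) pairs in
    one sweep over the CRS structure, so Dirichlet columns can later be
    visited without re-scanning every row of the matrix."""
    lookup = {c: [] for c in boundary_cols}
    for row in range(len(row_ptr) - 1):
        for k in range(row_ptr[row], row_ptr[row + 1]):
            if col_ind[k] in lookup:
                lookup[col_ind[k]].append((row, k))
    return lookup

# CRS arrays (0-based) of the 3x3 matrix [[10,0,-2],[3,9,0],[0,7,8]]
row_ptr = [0, 2, 4, 6]
col_ind = [0, 2, 0, 1, 1, 2]
lookup = column_entries(row_ptr, col_ind, {0})   # Dirichlet dof in column 0
```

Only the columns carrying Dirichlet degrees of freedom get an auxiliary structure, which keeps the extra memory cost proportional to the number of boundary nodes.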
Compressed row storage (CRS) is a format that stores the nonzero elements of a matrix sequentially. The storage scheme is outlined by the following example. For a matrix of the form
\[
A =
\begin{bmatrix}
10 & 0 & 0 & -2 \\
3 & 9 & 0 & 0 \\
0 & 7 & 8 & 7 \\
3 & 0 & 8 & 7
\end{bmatrix},
\]
the compressed row storage is given by an array of the nonzero values of $A$ stored row by row, a one dimensional array of the corresponding column indices, and an array of pointers to the first stored element of each row:
\[
\mathtt{val} = (10, -2, 3, 9, 7, 8, 7, 3, 8, 7), \quad
\mathtt{col\_ind} = (1, 4, 1, 2, 2, 3, 4, 1, 3, 4), \quad
\mathtt{row\_ptr} = (1, 3, 5, 8, 11).
\]
For a row $i$, the number of nonzero elements is easily obtained from $\mathtt{row\_ptr}(i+1) - \mathtt{row\_ptr}(i)$. For more examples of the implementation, refer to the matrix vector routines in the OpenCMISS repository.
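The construction can be sketched in Python (0-based indices here, whereas the Fortran code uses 1-based; `to_crs` is an invented helper name):

```python
def to_crs(A):
    """Convert a dense matrix (list of row lists) to CRS arrays.
    row_ptr[i] indexes the first stored entry of row i, so row i holds
    row_ptr[i+1] - row_ptr[i] nonzeros."""
    val, col_ind, row_ptr = [], [], [0]
    for row in A:
        for j, a in enumerate(row):
            if a != 0:
                val.append(a)
                col_ind.append(j)
        row_ptr.append(len(val))
    return val, col_ind, row_ptr

A = [[10, 0, 0, -2],
     [ 3, 9, 0,  0],
     [ 0, 7, 8,  7],
     [ 3, 0, 8,  7]]
val, col_ind, row_ptr = to_crs(A)
```

Note that only `len(val)` values plus the two index arrays are stored, instead of the full `n * n` entries.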
This chapter is intended for new and existing developers of OpenCMISS. It contains tips from developers who previously encountered the learning curve and are now trying to reduce it for those who are new to OpenCMISS. New developers are encouraged to use this chapter, written in an informal narrative style, as an independent study guide to get up to speed with the codebase. The sections are ordered in an increasing level of difficulty, introducing the basic concepts first and then progressing through to the more advanced features of the code. Care was taken not to introduce too much information at once; as such, it may at times appear to lack rigor, but after reading this chapter developers will be empowered to answer their own questions. In addition, existing developers of OpenCMISS will find that this chapter may serve as a reference to assist day-to-day development work, and to keep up-to-date with extensions that are made in the core library functionalities.
Let's assume for the sake of discussion that this is your first encounter with OpenCMISS code. If not, simply skip to the next section. As you may already know, the OpenCMISS source code is split up into two major parts: one which provides the core library functionality, like evaluating a basis function or solving a PDE with the finite element method, and one which solves a particular problem by using these library routines. The source files (Fortran .f90 and some .c files) for the library routines are entirely contained within /cm/src, whereas the example files can be found under /cm/examples.
As a new developer, a good place to attack the 300,000+ lines of source code is to start
at an example file because it gives a good bird's eye view. For historical reasons (it was
the first to be set up and the most ``proper'') the Laplace example is often used as a
showcase. So let's go ahead and fire up the following file in your favourite editor (in Linux,
Kate, emacs or gedit work well; in Windows, maybe emacs, Kate or Notepad++ are okay):
/cm/examples/ClassicalField/Laplace/Laplace/src/LaplaceExample.f90.
Later in this chapter we will address the finer details of this example file, however, for
now we'll look at the general outline and flow. Scroll down and have a brief look; after
all the variable declarations, there should be a call to
All OpenCMISS routine calls are made after this line, since this routine tells the library
that we are ready to start using it. Similarly, near the end of the file there is a call to
which initiates the finalising of objects which had been created by OpenCMISS throughout
the execution.
Of course, what comes in between these calls does all the interesting stuff. It's about 200
lines of solid blocks of code, but there is an easier way to read this  most tasks in an
example file are arranged in blocks, which look like
where **** denotes the type of the object such as coordinate system, region, basis etc.
The initialise call usually creates a space in memory for the object and perhaps assigns
default values for some of its members. If you forget to issue this call, the executable may
or may not throw an error that says "**** is already associated.", depending on whether the
developer has written code to check for it. Using an object without initialising it may lead to
some subtle memory problems. You have been warned.
Between CreateStart and CreateFinish, we basically call routines that assign properties to
shape and mould this specific instance of the object. It's only when the CreateFinish call
is issued, that OpenCMISS oils up its gears and actually gets to work. Thus it is possible that
you may have entered conflicting arguments, but the error may not occur until the CreateFinish
is called. Because a lot happens in CreateFinish, it's usually the place that you might want
to put a stop flag in your debugger (which will be discussed later).
So, armed with the above knowledge, most of the example file can be broken down into these
blocks:
From these, you probably won't touch CoordinateSystem (except for changing dimensions), Region
and Decomposition because, well, there's nothing much to change unless you're doing something
quite advanced. This leaves for an average Joe developer/user the following bits to tinker with:
Basis, Mesh, Field, EquationsSet, BoundaryConditions, Problem and Solver.
Basis objects are required for all finite element problems, which is currently the only solution method type implemented. A mesh object holds the geometry and mapping information. Any kind of numerical data that you might want to hold in a vector or matrix, such as the dependent (unknown) variables, material parameters or the nodal coordinates themselves, is neatly packaged into the Field type object, which has several variants. These objects are described in further detail in section 7.11.4, but for now we will crack on with this introduction. The EquationsSet, Solver and BoundaryConditions objects are so big and important that they have their own designated sections elsewhere in this document (7.11.5, 7.4 and 7.7). This might leave you wondering what the role of the Problem object is: this one manages the control loops, which are a general way to handle linear/nonlinear/steady/time-dependent problems. The Problem object also holds meta information regarding what equations are being solved, including the coupled physics problems that have been introduced recently.
If you're a keen developer and you have peeked ahead at any of the library source files, you
might have noticed that they look quite a bit different from the example source file. Every
OpenCMISS routine called from the example file begins with CMISS... The reason
for this is that the example file may only use the library through the OpenCMISS bindings,
or application programming interface, rather than directly calling the routines from the
core library itself. This layer of separation is a pretty standard thing and it protects the user
from working directly with the object pointers which may be dangerous. All binding routines are
implemented in the file /src/opencmiss.f90 which is the only module we include via the call
at the top of the example file. When you start developing usercallable library routines, you
will have to also write (and maintain!) the bindings if they're to be used in the example file.
Once you start to modify the code yourself, there will invariably be errors. That's okay. What
you should know though is how OpenCMISS handles errors. When there is an error in the library
routine, in most cases OpenCMISS won't exit straight away with an all-too-familiar message like
"segmentation fault" but takes a more graceful approach. This is great for users of the library,
but as a developer it can take a little while to pinpoint exactly at which line the error has
occurred. The default error handling behaviour is to output the error and continue execution,
which, for a scientific code like this usually leads to more errors. This behaviour can be
changed via
which forces the program to stop after the error message has been printed.
While we're on the subject of bug-hunting, let's address the issue of viewing variables. There are a couple of different approaches one can use to check on the value of variables mid-execution. The first is to use a debugger like TotalView, which isn't free but is worth every penny. The other way is to go old-school and put WRITE(*,*) statements all over the source file (don't forget to remove these before committing). This approach involves recompiling the entire library, which is quite time-consuming. Also, because most data you will be interested in are encapsulated under extensive object structures, it may require some time to figure out exactly what to print out.
Having TotalView installed also helps with looking at routines. At this point we will break
through the surface of the example file and go under. Let's take the error handling mode setting
routine from above. To follow it down, open up /src/opencmiss.f90 and Ctrl-F for the routine.
Between the ENTERS and EXITS routines (which will be described later) you will
see that the binding routine simply makes a call to the actual routine which does the work:
This subroutine is not defined in opencmiss.f90, as it only contains the bindings. The
convention in OpenCMISS code is to prefix every routine name with the module name, which, in this
case is CMISS. You can now open up /src/cmiss.f90 and search again for the routine.
If you hate having to connect the dots every step of the way in this fashion, you can fire up the
example in TotalView and simply double-click on the routine names to dive into them.
You should now have a good background to start modifying or setting up your own example files.
A large part of doing this involves copy & pasting an existing example and modifying them to fit
your own problem (be sure to use the svn cp command to avoid nagging emails). In this case,
you might end up spending a lot of time figuring out what arguments a certain function should be called
with. For example, you might want to change the type of matrix storage from Full to Sparse. The
binding routine that sets this is called
The second argument, which is what you want to change, is meant to be selected from a set of named
constants. How do I know this? It's obvious after a while, but you can Ctrl-F for this routine in
opencmiss.f90. There, in the comment next to SparsityType, you will see it says
see OPENCMISS_EquationsSparsityTypes. These constants are also defined within
opencmiss.f90, so Ctrl-F again and it will take you near the top of the file where they
are defined. The whole thing can also be done by looking at the developer's page:
http://cmiss.bioeng.auckland.ac.nz/OpenCMISS/doc/doxygen/html/
if you have a lot of patience that is.
Lastly, if you don't know already, learn how to search a full directory of files for a keyword in your editor. Because Fortran doesn't have a great IDE (integrated development environment), this is unfortunately the fastest way to find the information you're after.
As outlined in the previous section, the example file manages two major tasks. The first is all about the data, that is, to do with defining and assigning various bits of information like mesh coordinates, material parameters and so on. The second is all about doing, like setting up and calling the solver, for example.
By data model, I am simply referring here to how these different bits of numbers are stored, passed around, and accessed. If I asked you right now to find, say, 'the y-coordinate of node 7', you'll probably notice that it's not very straightforward, since all numbers have been locked away inside the OpenCMISS data structures. Indeed this is a source of frustration and makes the learning curve steep. Why is this so complicated? The answer is simple: it's because you haven't read this section yet. Oh, and also having the formal structures to support data handling makes the parallel coding much easier, even if it does cause some overhead in the code writing.
The key to understanding the OpenCMISS data model is to get to grips with the hierarchy of data objects,
which we'll go through now. We'll do this pretty quickly now and come back later to look at the details.
Put very simply,
TOP: FIELD
MIDDLE: VARIABLE, MESH
BOTTOM: BASIS
That's it. Easy eh? Now the details:
The thing to remember is, OpenCMISS has its own shelves it packs away data onto, and at times it will
seem inflexible. In some cases you might be right, but do you want to learn this or not?! Moreover,
these shelves (objects) have names which you may already associate in your mind with something
else; don't let it throw you off. Be strong.
In the below list, pay attention to the indentation for the hierarchical order.
Did you notice how parameter_sets contain the data, while param_to_dof_map contains its mapping into arrays? The structure above separates data from bookkeeping.
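That separation can be caricatured in a few lines of Python. The dictionary names below mimic the OpenCMISS member names, but the structures themselves (and the `node_value` helper) are purely illustrative, not the real types.

```python
# Conceptual sketch: a field variable keeps its raw numbers in
# parameter sets (the data), while a separate map records which dof
# each (node, component) pair lives at (the bookkeeping).
parameter_sets = {
    "FIELD_VALUES_SET_TYPE": [0.0, 1.5, 3.0, 4.5],
}
param_to_dof_map = {
    # (node, component) -> dof index
    (1, 1): 0, (2, 1): 1, (3, 1): 2, (4, 1): 3,
}

def node_value(set_type, node, component):
    """Look up a nodal value by going through the dof mapping first,
    then indexing into the parameter set's data array."""
    dof = param_to_dof_map[(node, component)]
    return parameter_sets[set_type][dof]

value = node_value("FIELD_VALUES_SET_TYPE", 3, 1)
```

Keeping the mapping separate from the data is what lets the same dof numbering serve several parameter sets (values, previous values, residuals, and so on) without duplication.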