OpenCMISS NOTES


http://www.opencmiss.org/












December 7, 2016





Copyright 2009-
Auckland Bioengineering Institute, University of Auckland,
University of Oxford and
King's College London, University of London.


Contents


Introduction


Theory


Basis Functions and Interpolation

Both the finite element method (FEM) and the boundary element method (BEM) use interpolation in finding a field solution, i.e., the methods find the solution at a number of points in the domain of interest and then approximate the solution between these points using interpolation. The points at which the solution is found are known as nodes. Basis functions are used to interpolate the field between nodes within a subregion of the domain known as an element. Interpolation is achieved by mapping the field coordinate onto a local parametric, or $ \xi $, coordinate (which varies from 0 to $ 1$) within each element. The global nodes which make up each element are also mapped onto local element nodes, and the basis functions are chosen (in terms of polynomials of the local parametric coordinate) such that the interpolated field is equal to the known nodal values at each node and is thus continuous between elements. A schematic of this scheme is shown in Figure 2.1.

Figure 2.1: A schematic of the relationship between local and global nodes, elements and the parametric elemental $ \xi $ coordinate.


Summation Notation

The following (Einstein) summation notation will be used throughout these notes. In order to eliminate summation symbols, repeated ``dummy'' indices are used, i.e.,

$\displaystyle \sum_{i=1}^{n} a^{i}b_{i}=a^{i}b_{i}$ (2.1)

To indicate an index that is not summed, parentheses will be used, i.e., $ a^{(i)}b_{(i)}$ denotes the single expression for a particular $ i$, e.g., $ a^{1}b_{1}$ or $ a^{2}b_{2}$.

A summation is implied only when the repeated index appears once as an ``upper'' (superscript) index and once as a ``lower'' (subscript) index. Note that it may be useful to remember that if an index appears in the denominator of a fractional expression then its upper- or lower-ness is ``reversed''.
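As a concrete illustration, the summation convention can be mimicked in a short script. The vectors and their values below are purely hypothetical, chosen only to show the difference between a summed and a non-summed index:

```python
# Pure-Python illustration of the Einstein summation convention.
# Hypothetical 3-component vectors (values chosen only for illustration).
a = [1.0, 2.0, 3.0]   # contravariant components a^i
b = [4.0, 5.0, 6.0]   # covariant components b_i

# a^i b_i: the repeated index i (one upper, one lower) implies a sum over i.
summed = sum(a[i] * b[i] for i in range(3))

# a^(i) b_(i): parentheses suppress the summation, so this denotes a single
# term, e.g. a^1 b_1 (Python index 0).
single_term = a[0] * b[0]
```

Here `summed` evaluates the implied sum $a^{1}b_{1}+a^{2}b_{2}+a^{3}b_{3}$, while `single_term` is the single non-summed expression $a^{1}b_{1}$.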

For some quantities with both upper and lower indices a dot will be used to indicate the ``second'' index, e.g., in the expression $ A^{i}_{.j}$, $ i$ can be considered the first index and $ j$ the second index.


Lagrangian Basis Functions

One important family of basis functions is the family of Lagrange basis functions. This family has one basis function for each of the local element nodes, and the basis functions are defined such that, at a particular node, only one basis function is non-zero and has the value of one. In this sense a basis function can be thought of as being associated with a local node and serves to weight the interpolated solution in terms of the field value at that node. Lagrange basis functions hence provide only $ C^{0}$ continuity of the field variable across element boundaries.

Linear Lagrange basis functions

The simplest basis functions of the Lagrange family are the one-dimensional linear Lagrange basis functions. These basis functions involve two local nodes and are defined as

\begin{displaymath}\begin{split}\varphi_{1}\left( \xi \right)&=1-\xi\\ \varphi_{2}\left( \xi \right)&=\xi \end{split}\end{displaymath} (2.2)

The interpolation of a field variable, $ u$, using these basis functions is given by

\begin{displaymath}\begin{split}u\left( \xi \right)&=\varphi_{1}\left( \xi \right)u^{1}+\varphi_{2}\left( \xi \right)u^{2}\\ &=\left( 1-\xi \right)u^{1}+\xi u^{2} \end{split}\end{displaymath} (2.3)

where $ u^{1}$ and $ u^{2}$ are the values of the field variable at the first and second local nodes respectively. These basis functions hence provide a linear variation between the local nodal values with the local element coordinate, $ \xi $.
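The linear interpolation above can be sketched in a few lines of Python (not part of the original notes; provided only to make the formulas concrete):

```python
def linear_lagrange(xi):
    """One-dimensional linear Lagrange basis functions of Equation (2.2)."""
    return 1.0 - xi, xi

def interpolate_linear(u1, u2, xi):
    """Interpolate a field between two local nodal values, Equation (2.3)."""
    phi1, phi2 = linear_lagrange(xi)
    return phi1 * u1 + phi2 * u2
```

At $\xi=0$ and $\xi=1$ the nodal values are recovered exactly, and the basis functions sum to one for any $\xi$ (partition of unity).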

Quadratic Lagrange basis functions

Lagrange basis functions can also be used to provide higher order variations; for example, the one-dimensional quadratic Lagrange basis functions involve three local nodes and provide a quadratic variation of the field parameter with $ \xi $. They are defined as

\begin{displaymath}\begin{split}\varphi_{1}\left( \xi \right)&=2\left( \xi-1 \right)\left( \xi-\frac12 \right)\\ \varphi_{2}\left( \xi \right)&=4\xi\left( 1-\xi \right)\\ \varphi_{3}\left( \xi \right)&=2\xi\left( \xi-\frac12 \right) \end{split}\end{displaymath} (2.4)

and their interpolation formula is

\begin{displaymath}\begin{split}u\left( \xi \right)&=\varphi_{1}\left( \xi \right)u^{1}+\varphi_{2}\left( \xi \right)u^{2}+\varphi_{3}\left( \xi \right)u^{3}\\ &=2\left( \xi-1 \right)\left( \xi-\frac12 \right)u^{1}+4\xi\left( 1-\xi \right)u^{2}+2\xi\left( \xi-\frac12 \right)u^{3} \end{split}\end{displaymath} (2.5)

In general the interpolation formula for the Lagrange family of basis functions is, using Einstein summation notation, given by

$\displaystyle u\left( \xi \right)=\varphi_{\alpha}\left( \xi \right)u^{\alpha}\quad \alpha=1,\ldots,n_{e}$ (2.6)

where $ n_{e}$ is the number of local nodes in the element. Einstein summation notation uses a repeated index in a product expression to imply summation. For example Equation (2.6) is equivalent to

$\displaystyle u\left( \xi \right)=\sum_{\alpha=1}^{n_{e}}\varphi_{\alpha}\left( \xi \right)u^{\alpha}$ (2.7)

Bilinear Lagrange basis functions

Multi-dimensional Lagrange basis functions can be constructed from the tensor, or outer, products of the one-dimensional Lagrange basis functions. For example the two-dimensional bilinear Lagrange basis functions have four local nodes with the basis functions given by

\begin{displaymath}\begin{split}\varphi_{1}\left( \xi_{1},\xi_{2} \right)&=\left( 1-\xi_{1} \right)\left( 1-\xi_{2} \right)\\ \varphi_{2}\left( \xi_{1},\xi_{2} \right)&=\xi_{1}\left( 1-\xi_{2} \right)\\ \varphi_{3}\left( \xi_{1},\xi_{2} \right)&=\left( 1-\xi_{1} \right)\xi_{2}\\ \varphi_{4}\left( \xi_{1},\xi_{2} \right)&=\xi_{1}\xi_{2} \end{split}\end{displaymath} (2.8)

The multi-dimensional interpolation formula is still a sum of the products of the nodal basis function and the field value at the node. For example the interpolated geometric position vector within an element is given by

\begin{displaymath}\begin{split}\boldsymbol{x}\left( \xi_{1},\xi_{2} \right)&=\varphi_{1}\left( \xi_{1},\xi_{2} \right)\boldsymbol{x}^{1}+\varphi_{2}\left( \xi_{1},\xi_{2} \right)\boldsymbol{x}^{2}+\varphi_{3}\left( \xi_{1},\xi_{2} \right)\boldsymbol{x}^{3}+\varphi_{4}\left( \xi_{1},\xi_{2} \right)\boldsymbol{x}^{4} \end{split}\end{displaymath} (2.9)

where, for the vector field, each component is interpolated separately using the given basis functions.
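The tensor-product construction can be sketched as follows (illustrative only, not part of the original notes):

```python
def bilinear_lagrange(xi1, xi2):
    """Two-dimensional bilinear Lagrange basis functions, Equation (2.8),
    built as tensor products of the one-dimensional linear basis."""
    return ((1.0 - xi1) * (1.0 - xi2),
            xi1 * (1.0 - xi2),
            (1.0 - xi1) * xi2,
            xi1 * xi2)

def interpolate_bilinear(nodal_values, xi1, xi2):
    """Interpolate a scalar field over a four-node element, Equation (2.9).

    For a vector field, each component is interpolated separately with the
    same basis functions."""
    return sum(phi * u
               for phi, u in zip(bilinear_lagrange(xi1, xi2), nodal_values))
```

Evaluating at the four corners recovers the four nodal values, and the element centre returns their average.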


Hermitian Basis Functions

Hermitian basis functions preserve continuity of the derivative of the interpolating variable, i.e., $ C^{1}$ continuity, with respect to $ \xi $ across element boundaries by defining additional nodal derivative parameters. Like Lagrange bases, Hermitian basis functions are chosen so that, at a particular node, only one basis function is non-zero and equal to one. They are also chosen so that, at a particular node, the derivative of only one of the basis functions is non-zero and equal to one. Hermitian basis functions hence serve to weight the interpolated solution in terms of both the field value and the derivative of the field value at nodes.

Cubic Hermite basis functions

Cubic Hermite basis functions are the simplest of the Hermitian family and involve two local nodes per element. The interpolation within each element is in terms of $ \boldsymbol{x}^{\alpha}$ and $ \left. \dfrac{d\boldsymbol{x}}{d\xi} \right\vert_{\alpha}$ and is given by

$\displaystyle \boldsymbol{x}\left( \xi \right)=\Psi_{1}^{0}\left( \xi \right)\boldsymbol{x}^{1}+\Psi_{1}^{1}\left( \xi \right)\left. \dfrac{d\boldsymbol{x}}{d\xi} \right\vert_{1}+\Psi_{2}^{0}\left( \xi \right)\boldsymbol{x}^{2}+\Psi_{2}^{1}\left( \xi \right)\left. \dfrac{d\boldsymbol{x}}{d\xi} \right\vert_{2}$ (2.10)

where the four one-dimensional cubic Hermite basis functions are given in Equation (2.11) and shown in Figure 2.2.

\begin{displaymath}\begin{split}\Psi_{1}^{0}\left( \xi \right)&=1-3\xi^{2}+2\xi^{3}\\ \Psi_{1}^{1}\left( \xi \right)&=\xi\left( \xi-1 \right)^{2}\\ \Psi_{2}^{0}\left( \xi \right)&=\xi^{2}\left( 3-2\xi \right)\\ \Psi_{2}^{1}\left( \xi \right)&=\xi^{2}\left( \xi-1 \right) \end{split}\end{displaymath} (2.11)

Figure 2.2: Cubic Hermite basis functions.
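The nodal conditions that define the four basis functions of Equation (2.11) can be checked directly; the following sketch (illustrative, not from the original notes) evaluates them:

```python
def cubic_hermite(xi):
    """One-dimensional cubic Hermite basis functions, Equation (2.11)."""
    psi_1_0 = 1.0 - 3.0 * xi**2 + 2.0 * xi**3   # weights x at local node 1
    psi_1_1 = xi * (xi - 1.0)**2                # weights dx/dxi at node 1
    psi_2_0 = xi**2 * (3.0 - 2.0 * xi)          # weights x at local node 2
    psi_2_1 = xi**2 * (xi - 1.0)                # weights dx/dxi at node 2
    return psi_1_0, psi_1_1, psi_2_0, psi_2_1
```

At $\xi=0$ only $\Psi_{1}^{0}$ is one; at $\xi=1$ only $\Psi_{2}^{0}$ is one; the derivative-weighting functions $\Psi_{1}^{1}$ and $\Psi_{2}^{1}$ vanish at both nodes while their slopes are one at their own node. The two value-weighting functions also sum to one everywhere.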

One further step is required to make cubic Hermite basis functions useful in practice. Consider the two cubic Hermite elements shown in Figure 2.3.

Figure 2.3: Two cubic Hermite elements (denoted by $ \mathit{1}$ and $ \mathit{2}$) formed from three nodes (shown as a $ \bullet$ and denoted by $ \mathbf{1}, \mathbf{2}$ and $ \mathbf{3}$) and having arc-lengths $ s_{1}$ and $ s_{2}$ respectively.


The derivative $ \left. \dfrac{d\boldsymbol{x}}{d\xi} \right\vert_{\alpha}$ defined at local node $ \alpha$ depends on the local element $ \xi $-coordinate and is therefore, in general, different in the two adjacent elements. Interpretation of the derivative is hence difficult, as two derivatives with the same magnitude in different parts of the mesh might represent two completely different physical derivatives. In order to have a consistent interpretation of the derivative throughout the mesh it is better to base the interpolation on a physical derivative. Derivatives are therefore based on the arc-length at nodes, $ \dfrac{d\boldsymbol{x}^{\alpha}}{ds}$, and

\begin{displaymath}\begin{split}\left. \dfrac{d\boldsymbol{x}}{d\xi} \right\vert_{\alpha}&=\dfrac{d\boldsymbol{x}^{\Delta\left( \alpha,e \right)}}{ds}\left( \dfrac{ds}{d\xi} \right)_{e}\\ &=\dfrac{d\boldsymbol{x}^{\Delta\left( \alpha,e \right)}}{ds}S\left( e \right) \end{split}\end{displaymath} (2.12)

is used to determine $ \left. \dfrac{d\boldsymbol{x}}{d\xi} \right\vert_{\alpha}$. Here $ \dfrac{d\boldsymbol{x}}{ds}$ is a physical arc-length derivative, $ \Delta\left( \alpha,e \right)$ is the global node number of local node $ \alpha$ in element $ e$, and $ \left( \dfrac{ds}{d\xi} \right)_{e}$ is an element scale factor, denoted by $ S\left( e \right)$, which scales the arc-length derivative to the $ \xi $-coordinate derivative. Thus $ \dfrac{d\boldsymbol{x}}{ds}$, rather than $ \dfrac{d\boldsymbol{x}}{d\xi}$, is constrained to be continuous across element boundaries. The cubic Hermite interpolation formula now becomes

$\displaystyle \boldsymbol{x}\left( \xi \right)=\Psi_{\alpha}^{0}\left( \xi \right)\boldsymbol{x}^{\alpha}+\Psi_{\alpha}^{1}\left( \xi \right)\dfrac{d\boldsymbol{x}^{\alpha}}{ds}S\left( e \right)$ (2.13)

By interpolating with respect to $ s$ rather than with respect to $ \xi $ there is some liberty in the choice of the element scale factor, $ S\left( e \right)$. The choice of scale factor will, however, affect how $ \xi $ changes with $ s$. It is computationally desirable to have a relatively uniform change of $ \xi $ with $ s$ (for example, not biasing the Gaussian quadrature scheme - see later - to one end of the element). For this reason the element scale factor is chosen as some function of the arc-length of the element, $ s_{e}$. The simplest such function is the arc-length itself. This type of scaling is called arc-length scaling.

To calculate the arc-length for a particular element an iterative process is needed. The arc-length of a one-dimensional element in two dimensions is defined as

arc-length, $\displaystyle s_{e}=\int\limits_{0}^{1}\sqrt{\left( \dfrac{d x\left( \xi \right)}{d\xi} \right)^{2}+\left( \dfrac{d y\left( \xi \right)}{d\xi} \right)^{2}}\,d\xi$ (2.14)

However, since the interpolation of $ \boldsymbol{x}\left( \xi \right)$, as defined in Equation (2.13), uses the arc-length in the calculation of the scaling factor, an iterative root finding technique is needed to obtain the arc-length.
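The iteration can be sketched as a fixed-point scheme (not part of the original notes; it assumes a four-point Gauss-Legendre quadrature rule and arc-length scaling, $S(e)=s_{e}$, with unit-magnitude nodal arc-length derivatives):

```python
import math

def element_arc_length(x1, dx1, x2, dx2, tol=1e-10, max_iter=100):
    """Fixed-point iteration for the arc-length s_e of a 2-D cubic Hermite
    element, Equation (2.14).  x1, x2 are nodal positions and dx1, dx2 unit
    arc-length derivative vectors (2-tuples).  With arc-length scaling the
    scale factor is s_e itself, so the integrand depends on the current
    arc-length estimate and must be iterated to self-consistency."""
    # 4-point Gauss-Legendre rule mapped to [0, 1]: (point, weight) pairs
    gauss = [(0.069431844202974, 0.173927422568727),
             (0.330009478207572, 0.326072577431273),
             (0.669990521792428, 0.326072577431273),
             (0.930568155797026, 0.173927422568727)]

    def dpsi(xi):
        # derivatives of the cubic Hermite basis functions of Equation (2.11)
        return (6.0 * xi**2 - 6.0 * xi,         # dPsi_1^0/dxi
                3.0 * xi**2 - 4.0 * xi + 1.0,   # dPsi_1^1/dxi
                6.0 * xi - 6.0 * xi**2,         # dPsi_2^0/dxi
                3.0 * xi**2 - 2.0 * xi)         # dPsi_2^1/dxi

    s_e = math.hypot(x2[0] - x1[0], x2[1] - x1[1])  # chord as first guess
    for _ in range(max_iter):
        total = 0.0
        for xi, w in gauss:
            d10, d11, d20, d21 = dpsi(xi)
            dx = [d10 * x1[k] + d11 * s_e * dx1[k]
                  + d20 * x2[k] + d21 * s_e * dx2[k] for k in (0, 1)]
            total += w * math.hypot(dx[0], dx[1])
        if abs(total - s_e) < tol:
            return total
        s_e = total
    return s_e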

Thus, for an element $ e$, the one-dimensional cubic Hermite interpolation formula in Equation (2.13) becomes

$\displaystyle \boldsymbol{x}\left( \xi \right)=\Psi_{\alpha}^{u}\left( \xi \right)\boldsymbol{x}^{\alpha}_{,u}S\left( \left( e \right),u \right)$ (2.15)

where $ \alpha$ varies from $ 1$ to $ 2$, $ u$ varies from 0 to $ 1$, $ \boldsymbol{x}^{\alpha}_{,0}=\boldsymbol{x}^{\alpha}$, $ \boldsymbol{x}^{\alpha}_{,1}=\dfrac{d\boldsymbol{x}^{\alpha}}{ds}$, $ S\left( e,0 \right)=1$ and $ S\left( e,1 \right)=S\left( e \right)=s_{e}$. Note that in summation notation an index that has a bracket, $ ()$, around it indicates that that index is not summed over, e.g., Equation (2.15) is equivalent to

$\displaystyle \boldsymbol{x}\left( \xi \right)=\Psi_{1}^{0}\left( \xi \right)\boldsymbol{x}^{1}_{,0}S\left( e,0 \right)+\Psi_{1}^{1}\left( \xi \right)\boldsymbol{x}^{1}_{,1}S\left( e,1 \right)+\Psi_{2}^{0}\left( \xi \right)\boldsymbol{x}^{2}_{,0}S\left( e,0 \right)+\Psi_{2}^{1}\left( \xi \right)\boldsymbol{x}^{2}_{,1}S\left( e,1 \right)$ (2.16)

i.e., there is an implied sum over $ \alpha$ and $ u$ for $ \Psi_{\alpha}^{u}\left( \xi \right)$ and $ \boldsymbol{x}^{\alpha}_{,u}$ but not for $ S\left( e,u \right)$.

There is one final condition that must be placed on the $ \xi $ to arc-length transformation to ensure arc-length derivatives. This condition, based on the geometric properties of arc-lengths, is that the arc-length derivative vector at a node must have unit magnitude, that is for global node $ A$

$\displaystyle \left\Vert \dfrac{d\boldsymbol{x}^{A}}{ds} \right\Vert_{2}=1$ (2.17)

This ensures that there is continuity with respect to a physical parameter, $ s$, rather than with respect to a mathematical parameter, $ \xi $. The set of mesh parameters, $ \boldsymbol{u}$, for cubic Hermite interpolation hence contains the set of nodal values (or positions), the set of nodal arc-length derivatives and the set of scale factors.
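In practice a nodal derivative vector obtained by differentiating the geometry must be rescaled to satisfy Equation (2.17); a minimal sketch (illustrative, not from the original notes):

```python
import math

def normalise_arc_length_derivative(d):
    """Scale a nodal derivative vector so that it satisfies the
    unit-magnitude condition of Equation (2.17), ||dx/ds||_2 = 1."""
    norm = math.sqrt(sum(c * c for c in d))
    return tuple(c / norm for c in d)
```

The direction of the derivative is preserved; only its magnitude is constrained, since the magnitude information is carried by the scale factors instead.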

Extension to higher orders

Bicubic Hermite basis functions are the two-dimensional extension of the one-dimensional cubic Hermite basis functions. They are formed from the tensor (or outer) product of two of the one-dimensional cubic Hermite basis functions defined in Equation (2.11). The interpolation formula for the point $ \boldsymbol{x}\left( \xi_{1},\xi_{2} \right)$ within an element is obtained from the bicubic Hermite interpolation formula,

\begin{displaymath}\begin{split}\ensuremath{ \ensuremath{\boldsymbol{x}}\ensurem...
...}\ensuremath{\xi_{2}}\xspace } } \right\vert}_{4} } \end{split}\end{displaymath} (2.18)

As with one-dimensional cubic Hermite elements, the derivatives with respect to $ \xi $ in the two-dimensional interpolation formula above are expressed as the product of a nodal arc-length derivative and a scale factor. This is, however, complicated by the fact that there are now multiple $ \xi $ directions at each node. From the product rule, the transformation from a $ \xi $-based derivative to an arc-length based derivative is given by

$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbol{...
...\ensuremath{ \dfrac{\ensuremath{\partial}s_{2}}{\ensuremath{\partial}\xi_{l}} }$ (2.19)

Now, by definition, the $ l^{\text{th}}$ arc-length direction is only a function of the $ l^{\text{th}}$ $ \xi $ direction, hence the derivative at local node $ \alpha$ is

$\displaystyle \ensuremath{ \ensuremath{\left. \ensuremath{ \dfrac{\ensuremath{\...
... }\ensuremath{ \ensuremath{ \ensuremath{ S }\ensuremath{\left( e,l \right)} } }$ (2.20)

and the cross-derivative is

$\displaystyle \ensuremath{ \ensuremath{\left. \ensuremath{ \dfrac{\ensuremath{\...
...} \ensuremath{ \ensuremath{ \ensuremath{ S }\ensuremath{\left( e,2 \right)} } }$ (2.21)

Unlike the one-dimensional cubic Hermite case a condition must be placed on this transformation in order to maintain $ C^{1}$ continuity across element boundaries.

Consider the line between global nodes $ \mathbf{1}$ and $ \mathbf{2}$ in the two bicubic Hermite elements shown in Figure 2.5.

Figure 2.5: Two bicubic Hermite elements (denoted by $ \mathit{1}$ and $ \mathit{2}$). The global node numbers are given in boldface, the local node numbers in normal text and the element scale factors used along each line are denoted by $ S\left( l \right)$.

For $ C^{1}$ continuity, as opposed to $ G^{1}$ continuity, between these elements the derivative with respect to $ \xi_{1}$, that is $ \dfrac{\partial \boldsymbol{x}\left( \xi_{1},\xi_{2} \right)}{\partial \xi_{1}}$, must be continuous. The formula for this derivative in element $ \mathit{1}$ along the boundary between elements $ \mathit{1}$ and $ \mathit{2}$ is

$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{ \ensuremath...
...\xspace \ensuremath{\partial}\ensuremath{\xi_{2}}\xspace } } \right\vert}_{4} }$ (2.22)

and for element $ \mathit{2}$ is

$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{ \ensuremath...
...\xspace \ensuremath{\partial}\ensuremath{\xi_{2}}\xspace } } \right\vert}_{3} }$ (2.23)

Now substituting Equations (2.20) and (2.21) into the above equations yields for element $ \mathit{1}$

$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{ \ensuremath...
... }\ensuremath{ \ensuremath{ \ensuremath{ S }\ensuremath{\left( {4} \right)} } }$ (2.24)

and for element $ \mathit{2}$

$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{ \ensuremath...
... }\ensuremath{ \ensuremath{ \ensuremath{ S }\ensuremath{\left( {4} \right)} } }$ (2.25)

Now local node $ 2$ in element $ \mathit{1}$ and local node $ 1$ in element $ \mathit{2}$ are both global node $ \mathbf{1}$, and local node $ 4$ in element $ \mathit{1}$ and local node $ 3$ in element $ \mathit{2}$ are both global node $ \mathbf{2}$. Hence for a given $ \xi_{2}$ the condition for $ C^{1}$ continuity across the element boundary is

\begin{multline}
\ensuremath{ \ensuremath{ \ensuremath{ \Psi_{0}^{1} }\ensurema...
... \ensuremath{ \ensuremath{ S }\ensuremath{\left( {4} \right)} } }
\end{multline}

or

\begin{multline}
\ensuremath{\left( \ensuremath{ \ensuremath{ \ensuremath{ S }\...
...h{ \ensuremath{ S }\ensuremath{\left( {4} \right)} } } \right)}
\end{multline}

Now by choosing the scale factors on either side of nodes $ \mathbf{1}$ and $ \mathbf{2}$ to be equal to nodal scale factors $ \mathcal{S}\left( \mathbf{1} \right)$ and $ \mathcal{S}\left( \mathbf{2} \right)$ respectively (that is, nodal based scale factors), Equation (2.27) is satisfied for any choice of the scale factors. Hence nodal scale factors are a sufficient condition to ensure $ C^{1}$ continuity. If it is desired that the scale factors be different on either side of the node then Equation (2.27) must be satisfied to ensure continuity. The choice of the scale factors again determines the $ \xi $ to $ s$ spacing. Following on from the cubic Hermite elements, the scale factors are chosen to be nodally based and equal to the average arc-length on either side of the node for each $ \xi $ direction, i.e., for the $ l^{\text{th}}$ direction

$\displaystyle \mathcal{S}\left( A,l \right)=\dfrac{s_{l}\left( A_{\ominus}\left( l \right) \right)+s_{l}\left( A_{\oplus}\left( l \right) \right)}{2}$ (2.26)

where $ \mathcal{S}\left( A,l \right)$ is the nodal scale factor in the $ l^{\text{th}}$ $ \xi $ direction at global node $ A$, $ A_{\ominus}\left( l \right)$ is the element immediately preceding (in the $ l^{\text{th}}$ direction) node $ A$, $ A_{\oplus}\left( l \right)$ is the element immediately after (in the $ l^{\text{th}}$ direction) node $ A$, and $ s_{l}\left( e \right)$ is the arc-length in the $ l^{\text{th}}$ $ \xi $ direction from node $ A$ in element $ e$. This type of scaling is known as average arc-length scaling.
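For a one-dimensional chain of elements the averaging of Equation (2.26) can be sketched as below. The treatment of the two end nodes (which have only one neighbouring element and are here simply given that element's arc-length) is an assumption made for illustration, not something stated in these notes:

```python
def nodal_scale_factors(arc_lengths):
    """Average arc-length scale factors, Equation (2.26), for a chain of
    one-dimensional elements.  arc_lengths[e] is the arc-length of element
    e; each interior node receives the average of the arc-lengths of its
    two adjacent elements.  End nodes (one neighbour only) are given the
    single adjacent arc-length -- an illustrative assumption."""
    factors = [arc_lengths[0]]                       # first node
    for e in range(1, len(arc_lengths)):
        factors.append(0.5 * (arc_lengths[e - 1] + arc_lengths[e]))
    factors.append(arc_lengths[-1])                  # last node
    return factors
```

For two elements of arc-lengths 2 and 4 the interior node receives the average value 3.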


Hermite-sector elements

One problem that arises when using quadrilateral elements (such as bicubic Hermite elements) to describe a surface is that it is impossible to 'close the surface' in three dimensions whilst maintaining consistent $ \xi_{1}$ and $ \xi_{2}$ directions throughout the mesh. This is important as $ C^{1}$ continuity requires either consistent $ \xi $ directions or a transformation at each node to take account of the inconsistent directions.

One solution to this problem is to collapse a bicubic Hermite element. This entails placing one of the four local nodes of the element at the same geometric location as another local node of the element and results in a triangular element from which it is possible to close the surface. There are two main problems with this solution. The first is that one of the two $ \xi $ directions at the collapsed node is undefined. The second is that the distance between the two nodes at the same location is zero. Numerical problems can result from this zero distance. An alternative strategy has been developed in which special elements, called ``Hermite-sector'' elements, are used to close a bicubic Hermite surface in three-dimensions. There are two types of elements depending on whether the $ \xi $ (or $ s$) directions come together at local node one or local node three. These two elements are shown in Figure 2.6.

Figure 2.6: Hermite-sector elements. (a) Apex node one element. (b) Apex node three element.

From Figure 2.6 it can be seen that the $ s_{2}$ direction is not unique at the apex nodes. This gives us two choices for the interpolation within the element: ignore the $ s_{2}$ derivative when interpolating or set the $ s_{2}$ derivative identically to zero.

Ignore $ s_{2}$ apex derivative: For this case it can be seen from Figure 2.6 that the interpolation in the $ \xi_{1}$ direction is just the standard cubic Hermite interpolation. The interpolation in the $ \xi_{2}$ direction is a little different in that the nodal arc-length derivative has been dropped, as it is no longer defined at the apex node. For the apex node one element shown in Figure 2.6(a) the interpolation along the line between local node one and local node $ n$ is now quadratic and is given by

$\displaystyle \boldsymbol{x}\left( \xi_{2} \right)=\zeta_{1}\left( \xi_{2} \right)\boldsymbol{x}^{1}+\zeta_{2}\left( \xi_{2} \right)\boldsymbol{x}^{n}+\zeta_{3}\left( \xi_{2} \right)\left. \dfrac{\partial \boldsymbol{x}}{\partial \xi_{2}} \right\vert_{n}$ (2.27)

with the basis functions given by

\begin{displaymath}\begin{split}\zeta_{1}\left( \xi \right)&=\left( 1-\xi \right)^{2}\\ \zeta_{2}\left( \xi \right)&=\xi\left( 2-\xi \right)\\ \zeta_{3}\left( \xi \right)&=\xi^{2}-\xi \end{split}\end{displaymath} (2.28)

For the apex node three element shown in Figure 2.6(b) the interpolation for the line connecting local node $ n$ with local node three is given by

$\displaystyle \boldsymbol{x}\left( \xi_{2} \right)=\eta_{1}\left( \xi_{2} \right)\boldsymbol{x}^{n}+\eta_{2}\left( \xi_{2} \right)\boldsymbol{x}^{3}+\eta_{3}\left( \xi_{2} \right)\left. \dfrac{\partial \boldsymbol{x}}{\partial \xi_{2}} \right\vert_{n}$ (2.29)

with the basis functions given by

\begin{displaymath}\begin{split}\eta_{1}\left( \xi \right)&=1-\xi^{2}\\ \eta_{2}\left( \xi \right)&=\xi^{2}\\ \eta_{3}\left( \xi \right)&=\xi-\xi^{2} \end{split}\end{displaymath} (2.30)

The full interpolation formula for the sector element can then be found by taking the tensor product of the interpolation in the $ \xi_{1}$ direction, given in Equation (2.10), with the interpolation in the $ \xi_{2}$ direction (given by either Equation (2.27) or (2.29)). The interpolation formula can be converted from nodal $ \xi $ derivatives to nodal arc-length derivatives using the procedure outlined for the bicubic Hermite case. For example, the interpolation formula for an apex node one element is

\begin{displaymath}\begin{split}\ensuremath{ \ensuremath{\boldsymbol{x}}\ensurem...
...h{ \mathcal{S} }\ensuremath{\left( 3,2 \right)} } } \end{split}\end{displaymath} (2.31)

Care must be taken when using Hermite-sector elements for rapidly changing surfaces. Consider an apex node one element with undefined $ s_{2}$ apex derivatives. The rate of change of $ \boldsymbol{x}$ with respect to $ \xi_{1}$ along the line from node one to node three (i.e., $ \xi_{1}=1$) is

\begin{displaymath}\begin{split}\ensuremath{ \dfrac{\ensuremath{\partial}\ensure...
...al{S} }\ensuremath{\left( 3,2 \right)} } } \right)} \end{split}\end{displaymath} (2.32)

Taking the dot product of $ \dfrac{\partial \boldsymbol{x}\left( 1,\xi_{2} \right)}{\partial \xi_{1}}$ with $ \dfrac{\partial \boldsymbol{x}^{3}}{\partial s_{1}}$ gives

$\displaystyle \ensuremath{\ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{...
...h{ {\ensuremath{\boldsymbol{x}}}^{3} }}{\ensuremath{\partial}s_{1}} }} \right)}$ (2.33)

The normality constraint for arc-length derivatives means that $ \dfrac{\partial \boldsymbol{x}^{3}}{\partial s_{1}}\cdot\dfrac{\partial \boldsymbol{x}^{3}}{\partial s_{1}}=1$ and thus the right hand side of Equation (2.33) divided by $ \mathcal{S}\left( 3,1 \right)$ (i.e., normalised by $ \mathcal{S}\left( 3,1 \right)$) is the quadratic

$\displaystyle \left( 2\xi_{2}-\xi_{2}^{2} \right)+\left( \xi_{2}^{2}-\xi_{2} \right)\mathcal{S}\left( 3,2 \right)\left( \dfrac{\partial^{2} \boldsymbol{x}^{3}}{\partial s_{1}\partial s_{2}}\cdot\dfrac{\partial \boldsymbol{x}^{3}}{\partial s_{1}} \right)$    

or

$\displaystyle \left( \mathcal{S}\left( 3,2 \right)\left( \dfrac{\partial^{2} \boldsymbol{x}^{3}}{\partial s_{1}\partial s_{2}}\cdot\dfrac{\partial \boldsymbol{x}^{3}}{\partial s_{1}} \right)-1 \right)\xi_{2}^{2}+\left( 2-\mathcal{S}\left( 3,2 \right)\left( \dfrac{\partial^{2} \boldsymbol{x}^{3}}{\partial s_{1}\partial s_{2}}\cdot\dfrac{\partial \boldsymbol{x}^{3}}{\partial s_{1}} \right) \right)\xi_{2}$    

This quadratic is $ 1$ at $ \xi_{2}=1$ and always has a root at $ \xi_{2}=0$. Consider the case of this quadratic having its second root in the interval $ (0,1)$. This would mean that at some point in the interval $ (0,1)$ the dot product of $ \dfrac{\partial \boldsymbol{x}\left( 1,\xi_{2} \right)}{\partial \xi_{1}}$ and $ \dfrac{\partial \boldsymbol{x}^{3}}{\partial s_{1}}$ would go from zero to negative and then positive as $ \xi_{2}$ changed from 0 to $ 1$, i.e., the angle between $ \dfrac{\partial \boldsymbol{x}\left( 1,\xi_{2} \right)}{\partial \xi_{1}}$ and $ \dfrac{\partial \boldsymbol{x}^{3}}{\partial s_{1}}$ would, at some stage, be greater than ninety degrees. As the direction of the normal to the surface along the line between local nodes one and three is given by the cross product of $ \dfrac{\partial \boldsymbol{x}\left( 1,\xi_{2} \right)}{\partial \xi_{1}}$ and $ \dfrac{\partial \boldsymbol{x}\left( 1,\xi_{2} \right)}{\partial \xi_{2}}$, then, if the quadratic became sufficiently negative, the normal to the surface could reverse direction from an outward to an inward normal as $ \xi_{2}$ changed from 0 to $ 1$. This is clearly undesirable. In fact, even if the quadratic is only slightly negative, the resulting surface will be grossly deformed.

To avoid these effects the second root of the quadratic must be outside the interval $ (0,1)$. From the quadratic formula the conditions for this are

$\displaystyle \dfrac{\ensuremath{ \ensuremath{ \ensuremath{ \mathcal{S} }\ensur...
...emath{ {\ensuremath{\boldsymbol{x}}}^{3} }}{\ensuremath{\partial}s_{1}} }}-1}<0$ (2.34)

and

$\displaystyle \dfrac{\ensuremath{ \ensuremath{ \ensuremath{ \mathcal{S} }\ensur...
...emath{ {\ensuremath{\boldsymbol{x}}}^{3} }}{\ensuremath{\partial}s_{1}} }}-1}>1$ (2.35)

that is (for the line from local node one to local node $ n$)

$\displaystyle \ensuremath{\ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensure...
...th{ \ensuremath{ \ensuremath{ \mathcal{S} }\ensuremath{\left( n,2 \right)} } }}$ (2.36)

The simplest way to interpret this constraint is that if the element is large (i.e., $ \mathcal{S}\left( n,2 \right)$ is large) then $ \dfrac{\partial^{2} \boldsymbol{x}^{n}}{\partial s_{1}\partial s_{2}}\cdot\dfrac{\partial \boldsymbol{x}^{n}}{\partial s_{1}}$ must be small. The simplest way for this to happen is to ensure that the magnitudes of the components of $ \dfrac{\partial^{2} \boldsymbol{x}^{n}}{\partial s_{1}\partial s_{2}}$ are small (or of opposite sign to the comparable components of $ \dfrac{\partial \boldsymbol{x}^{n}}{\partial s_{1}}$).

The equivalent interpolation formula to Equation (2.31) for an apex node three Hermite-sector element is

\begin{displaymath}\begin{split}\ensuremath{ \ensuremath{\boldsymbol{x}}\ensurem...
...h{ \mathcal{S} }\ensuremath{\left( 2,2 \right)} } } \end{split}\end{displaymath} (2.37)

and the equivalent constraint for apex node three Hermite-sector elements (for the line from local node $ n$ to local node three) is

$\displaystyle \ensuremath{\ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensure...
...th{ \ensuremath{ \ensuremath{ \mathcal{S} }\ensuremath{\left( n,2 \right)} } }}$ (2.38)

Zero $ s_{2}$ apex derivative: For this case the sector basis functions are just the cubic Hermite basis functions. The corresponding interpolation formula for an apex node one element is hence

\begin{displaymath}\begin{split}\ensuremath{ \ensuremath{\boldsymbol{x}}\ensurem...
...h{ \mathcal{S} }\ensuremath{\left( 3,2 \right)} } } \end{split}\end{displaymath} (2.39)

and the condition to avoid reversal of the normal is

$\displaystyle \ensuremath{\ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensure...
...th{ \ensuremath{ \ensuremath{ \mathcal{S} }\ensuremath{\left( n,2 \right)} } }}$ (2.40)

and for the apex node three element the interpolation formula is

\begin{displaymath}\begin{split}\ensuremath{ \ensuremath{\boldsymbol{x}}\ensurem...
...h{ \mathcal{S} }\ensuremath{\left( 2,2 \right)} } } \end{split}\end{displaymath} (2.41)

with a condition of

$\displaystyle \ensuremath{\ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensure...
...th{ \ensuremath{ \ensuremath{ \mathcal{S} }\ensuremath{\left( n,2 \right)} } }}$ (2.42)

Although the Hermite-sector basis functions in which the $ s_{2}$ apex node derivatives are identically zero have an increased limit on the cross-derivative constraints (a right hand side numerator of $ \pm 3$ instead of $ \pm 2$), they have the problem that, as all derivatives vanish at the apex, any interpolated function has a zero Hessian at the apex. As this can cause numerical problems, the Hermite-sector basis functions which have an undefined $ s_{2}$ derivative are used in these notes.

Simplex Basis Functions

Simplex basis functions and their derivatives are evaluated with respect to the external $ \ensuremath{\boldsymbol{\xi}}$ coordinates.

For Simplex line elements there are two area coordinates which are a function of $ \xi_{1}$ i.e.,

$\displaystyle L_{1}$ $\displaystyle = 1 - \xi_{1}$ (2.43)
$\displaystyle L_{2}$ $\displaystyle = \xi_{1}$ (2.44)

The derivatives with respect to the external coordinates are then given by

$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}\xi_{1}} }$ $\displaystyle = \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbo...
...al}\ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}L_{1}} }$ (2.45)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}{\xi_{1}}^{2}} }$ $\displaystyle = \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\bold...
...suremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}{L_{2}}^{2}} }$ (2.46)
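The line-element chain rule can be spot-checked numerically. The sketch below (plain Python, with an arbitrarily chosen smooth test function standing in for a basis function) uses the convention $ L_{1}=1-\xi_{1}$, $ L_{2}=\xi_{1}$, so that the area coordinates sum to one; with these definitions the first derivative maps as $ \partial n/\partial\xi_{1}=\partial n/\partial L_{2}-\partial n/\partial L_{1}$.

```python
# Numerical check of dn/dxi1 = dn/dL2 - dn/dL1 for simplex line elements.

def n(L1, L2):
    # illustrative smooth function of the area coordinates
    # (not an actual basis function from these notes)
    return L1**3 + 2.0 * L1 * L2

def area_coords(xi1):
    # L1 = 1 - xi1, L2 = xi1 (the area coordinates sum to one)
    return (1.0 - xi1, xi1)

H = 1e-6  # central-difference step

def partial(f, args, i):
    # central finite-difference approximation of df/d args[i]
    a = list(args)
    a[i] += H
    fp = f(*a)
    a[i] -= 2.0 * H
    fm = f(*a)
    return (fp - fm) / (2.0 * H)

xi1 = 0.3
# left-hand side: differentiate n(L(xi1)) directly with respect to xi1
lhs = partial(lambda x: n(*area_coords(x)), (xi1,), 0)
# right-hand side: chain rule through the area coordinates
L = area_coords(xi1)
rhs = partial(n, L, 1) - partial(n, L, 0)
assert abs(lhs - rhs) < 1e-6
```

Because the map from $ \xi_{1}$ to the area coordinates is linear, the same substitution pattern carries over unchanged to the second derivative.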

For Simplex triangle elements there are three area coordinates which are a function of $ \xi_{1}$ and $ \xi_{2}$ i.e.,

$\displaystyle L_{1}$ $\displaystyle = 1 - \xi_{1}$ (2.47)
$\displaystyle L_{2}$ $\displaystyle = 1 - \xi_{2}$ (2.48)
$\displaystyle L_{3}$ $\displaystyle = \xi_{1} + \xi_{2} - 1$ (2.49)

The derivatives with respect to the external coordinates are then given by

$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}\xi_{1}} }$ $\displaystyle = \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbo...
...al}\ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}L_{1}} }$ (2.50)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}\xi_{2}} }$ $\displaystyle = \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbo...
...al}\ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}L_{2}} }$ (2.51)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}{\xi_{1}}^{2}} }$ $\displaystyle = \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\bold...
...suremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}{L_{3}}^{2}} }$ (2.52)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}{\xi_{2}}^{2}} }$ $\displaystyle = \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\bold...
...suremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}{L_{3}}^{2}} }$ (2.53)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\boldsy...
...uremath{ n_{} }}}}{\ensuremath{\partial}\xi_{1} \ensuremath{\partial}\xi_{2}} }$ $\displaystyle = \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\bold...
...\ensuremath{ n_{} }}}}{\ensuremath{\partial}L_{1} \ensuremath{\partial}L_{2}} }$ (2.54)
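The analogous chain rule for triangles can also be checked numerically. With $ L_{1}=1-\xi_{1}$, $ L_{2}=1-\xi_{2}$ and $ L_{3}=\xi_{1}+\xi_{2}-1$, the first derivatives map as $ \partial n/\partial\xi_{1}=\partial n/\partial L_{3}-\partial n/\partial L_{1}$ and $ \partial n/\partial\xi_{2}=\partial n/\partial L_{3}-\partial n/\partial L_{2}$; the sketch below verifies both with finite differences on an arbitrary smooth test function.

```python
# Numerical check of the triangle chain rule through the area coordinates.

def n(L1, L2, L3):
    # illustrative smooth function of the area coordinates
    return L1**2 * L2 + 3.0 * L2 * L3 + L3**2

def area_coords(xi1, xi2):
    # document's convention: the three area coordinates sum to one
    return (1.0 - xi1, 1.0 - xi2, xi1 + xi2 - 1.0)

H = 1e-6

def partial(f, args, i):
    # central finite-difference approximation of df/d args[i]
    a = list(args)
    a[i] += H
    fp = f(*a)
    a[i] -= 2.0 * H
    fm = f(*a)
    return (fp - fm) / (2.0 * H)

xi = (0.4, 0.5)
L = area_coords(*xi)
n_xi = lambda x1, x2: n(*area_coords(x1, x2))
# first derivatives: direct differentiation versus the chain rule
d1_direct = partial(n_xi, xi, 0)
d1_chain = partial(n, L, 2) - partial(n, L, 0)
d2_direct = partial(n_xi, xi, 1)
d2_chain = partial(n, L, 2) - partial(n, L, 1)
assert abs(d1_direct - d1_chain) < 1e-5
assert abs(d2_direct - d2_chain) < 1e-5
```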

For Simplex tetrahedral elements there are four area coordinates which are a function of $ \xi_{1}$, $ \xi_{2}$ and $ \xi_{3}$ i.e.,

$\displaystyle L_{1}$ $\displaystyle = 1 - \xi_{1}$ (2.55)
$\displaystyle L_{2}$ $\displaystyle = 1 - \xi_{2}$ (2.56)
$\displaystyle L_{3}$ $\displaystyle = 1 - \xi_{3}$ (2.57)
$\displaystyle L_{4}$ $\displaystyle = \xi_{1} + \xi_{2} + \xi_{3} - 2$ (2.58)

The derivatives with respect to the external coordinates are then given by

$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}\xi_{1}} }$ $\displaystyle = \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbo...
...al}\ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}L_{1}} }$ (2.59)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}\xi_{2}} }$ $\displaystyle = \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbo...
...al}\ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}L_{2}} }$ (2.60)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}\xi_{3}} }$ $\displaystyle = \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbo...
...al}\ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}L_{3}} }$ (2.61)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}{\xi_{1}}^{2}} }$ $\displaystyle = \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\bold...
...suremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}{L_{4}}^{2}} }$ (2.62)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}{\xi_{2}}^{2}} }$ $\displaystyle = \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\bold...
...suremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}{L_{4}}^{2}} }$ (2.63)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}{\xi_{3}}^{2}} }$ $\displaystyle = \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\bold...
...suremath{\boldsymbol{\ensuremath{ n_{} }}}}{\ensuremath{\partial}{L_{4}}^{2}} }$ (2.64)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\boldsy...
...uremath{ n_{} }}}}{\ensuremath{\partial}\xi_{1} \ensuremath{\partial}\xi_{2}} }$ $\displaystyle = \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\bold...
...\ensuremath{ n_{} }}}}{\ensuremath{\partial}L_{1} \ensuremath{\partial}L_{2}} }$ (2.65)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\boldsy...
...uremath{ n_{} }}}}{\ensuremath{\partial}\xi_{1} \ensuremath{\partial}\xi_{3}} }$ $\displaystyle = \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\bold...
...\ensuremath{ n_{} }}}}{\ensuremath{\partial}L_{1} \ensuremath{\partial}L_{3}} }$ (2.66)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\boldsy...
...uremath{ n_{} }}}}{\ensuremath{\partial}\xi_{2} \ensuremath{\partial}\xi_{3}} }$ $\displaystyle = \ensuremath{ \dfrac{\ensuremath{\partial}^{2} \ensuremath{\bold...
...\ensuremath{ n_{} }}}}{\ensuremath{\partial}L_{2} \ensuremath{\partial}L_{3}} }$ (2.67)
$\displaystyle \ensuremath{ \dfrac{ \ensuremath{\partial}^{3} \ensuremath{\bolds...
...h{\partial}\xi_{1} \ensuremath{\partial}\xi_{2} \ensuremath{\partial}\xi_{3}} }$ $\displaystyle = \ensuremath{ \dfrac{ \ensuremath{\partial}^{3} \ensuremath{\bol...
...remath{ n_{} }}}}{\ensuremath{\partial}L_{3} \ensuremath{\partial}L_{4}^{2}} }+$ (2.68)
  $\displaystyle \ensuremath{ \dfrac{ \ensuremath{\partial}^{3} \ensuremath{\bolds...
...uremath{\partial}L_{1} \ensuremath{\partial}L_{2} \ensuremath{\partial}L_{3}} }$ (2.69)

Tensor Analysis

Base vectors

Now, if we have a vector $ \ensuremath{\boldsymbol{v}}$, we can write

$\displaystyle \ensuremath{\boldsymbol{v}}=v^{i}\ensuremath{\boldsymbol{g}}_{i}$ (2.70)

where $ v^{i}$ are the contravariant components of the vector and $ \ensuremath{\boldsymbol{g}}_{i}$ are the covariant base vectors.

Similarly, the vector $ \ensuremath{\boldsymbol{v}}$ can also be written as

$\displaystyle \ensuremath{\boldsymbol{v}}=v_{i}\ensuremath{\boldsymbol{g}}^{i}$ (2.71)

where $ v_{i}$ are the covariant components of the vector and $ \ensuremath{\boldsymbol{g}}^{i}$ are the contravariant base vectors.

We now note that

$\displaystyle \ensuremath{\boldsymbol{v}}=v^{i}\ensuremath{\boldsymbol{g}}_{i}=v^{i}\sqrt{g_{ii}}\hat{\ensuremath{\boldsymbol{g}}_{i}}$ (2.72)

where $ v^{i}\sqrt{g_{ii}}$ are the physical components of the vector (here the repeated index in $ g_{ii}$ is not summed) and $ \hat{\ensuremath{\boldsymbol{g}}_{i}}$ are the unit vectors given by

$\displaystyle \hat{\ensuremath{\boldsymbol{g}}_{i}}=\dfrac{\ensuremath{\boldsymbol{g}}_{i}}{\sqrt{g_{ii}}}$ (2.73)


Metric Tensors

Metric tensors are the inner product of base vectors. If $ \ensuremath{\boldsymbol{g}}_{i}$ are the covariant base vectors then the covariant metric tensor is given by

$\displaystyle g_{ij}=\ensuremath{\ensuremath{\boldsymbol{g}}_{i}\cdot\ensuremath{\boldsymbol{g}}_{j}}$ (2.74)

Similarly, if $ \ensuremath{\boldsymbol{g}}^{i}$ are the contravariant base vectors then the contravariant metric tensor is given by

$\displaystyle g^{ij}=\ensuremath{\ensuremath{\boldsymbol{g}}^{i}\cdot\ensuremath{\boldsymbol{g}}^{j}}$ (2.75)

We can also form a mixed metric tensor from the dot product of a contravariant and a covariant base vector i.e.,

$\displaystyle g^{i}_{.j}=\ensuremath{\ensuremath{\boldsymbol{g}}^{i}\cdot\ensuremath{\boldsymbol{g}}_{j}}$ (2.76)

and

$\displaystyle g_{i}^{.j}=\ensuremath{\ensuremath{\boldsymbol{g}}_{i}\cdot\ensuremath{\boldsymbol{g}}^{j}}$ (2.77)

Note that for mixed tensors the ``.'' indicates the order of the index i.e., $ g^{i}_{.j}$ indicates that the first index is contravariant and the second index is covariant whereas $ g_{i}^{.j}$ indicates that the first index is covariant and the second index is contravariant.

If the base vectors are constant, mutually orthogonal unit vectors (i.e., the coordinate system is rectangular Cartesian) then $ \ensuremath{\boldsymbol{g}}_{i}=\ensuremath{\boldsymbol{g}}^{i}$ and $ g_{ij}=g^{ij}$.

The metric tensor generalises the (Euclidean) notion of distance i.e.,

$\displaystyle ds^{2}=g_{ij}dx^{i}dx^{j}$ (2.78)
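Equation (2.78) can be illustrated concretely. The sketch below (an illustrative check, using the cylindrical polar metric $ g_{ij}=\operatorname{diag}(1,r^{2},1)$ introduced later in these notes) compares the metric expression for a small squared displacement with the Euclidean distance between the corresponding Cartesian points.

```python
import math

# Check ds^2 = g_ij dx^i dx^j in cylindrical polar coordinates,
# where g_ij = diag(1, r^2, 1), against the Euclidean distance
# between the corresponding Cartesian points.

def cartesian(r, theta, z):
    return (r * math.cos(theta), r * math.sin(theta), z)

r, theta, z = 2.0, 0.7, 1.0
dr, dtheta, dz = 1e-5, 2e-5, -1e-5  # small coordinate increments

# metric form of the squared distance
ds2_metric = dr**2 + r**2 * dtheta**2 + dz**2

# Euclidean squared distance between the two nearby points
p = cartesian(r, theta, z)
q = cartesian(r + dr, theta + dtheta, z + dz)
ds2_euclid = sum((a - b)**2 for a, b in zip(p, q))

# agreement up to higher-order terms in the increments
assert abs(ds2_metric - ds2_euclid) < 1e-4 * ds2_metric
```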

Note that contracting with the covariant metric tensor lowers indices i.e.,

\begin{displaymath}\begin{split}\ensuremath{\boldsymbol{a}}_{i} &= g_{ij}\ensure...
...}g_{jl}A^{kl} = g_{jk}A_{i}^{.k} = g_{ik}A^{k}_{.j} \end{split}\end{displaymath} (2.79)

and that contracting with the contravariant metric tensor raises indices i.e.,

\begin{displaymath}\begin{split}\ensuremath{\boldsymbol{a}}^{i} &= g^{ij}\ensure...
...}g^{jl}A_{kl} = g^{ik}A_{k}^{.j} = g^{jk}A^{i}_{.k} \end{split}\end{displaymath} (2.80)

and for the mixed tensors

\begin{displaymath}\begin{split}A_{i}^{.j} &= g^{jk}A_{ik} = g_{ik}A^{kj} \\ A^{i}_{.j} &= g^{ik}A_{kj} = g_{jk}A^{ik} \\ \end{split}\end{displaymath} (2.81)
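Index raising and lowering can be sketched numerically. The example below (illustrative only) lowers the index of a contravariant vector with the cylindrical polar metric evaluated at a fixed point, then raises it again with the contravariant metric, recovering the original components.

```python
# Lowering and raising an index: a_i = g_ij a^j and a^i = g^ij a_j,
# using the cylindrical polar metric at r = 2 as an illustrative
# diagonal metric.

r = 2.0
g_lo = [[1.0, 0.0, 0.0], [0.0, r**2, 0.0], [0.0, 0.0, 1.0]]        # g_ij
g_up = [[1.0, 0.0, 0.0], [0.0, 1.0 / r**2, 0.0], [0.0, 0.0, 1.0]]  # g^ij

a_up = [3.0, 0.5, -1.0]  # contravariant components a^j

# lower the index: a_i = g_ij a^j
a_lo = [sum(g_lo[i][j] * a_up[j] for j in range(3)) for i in range(3)]
# raise it again: this recovers the original contravariant components
a_back = [sum(g_up[i][j] * a_lo[j] for j in range(3)) for i in range(3)]

assert a_lo == [3.0, 2.0, -1.0]
assert all(abs(x - y) < 1e-12 for x, y in zip(a_back, a_up))
```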

Transformations

The transformation rules for tensors in going from a $ \ensuremath{\boldsymbol{\nu}}$ coordinate system to a $ \ensuremath{\boldsymbol{\xi}}$ coordinate system are as follows:

For a covariant vector (a rank (0,1) tensor)

$\displaystyle {\tilde{a}}_{i}=\ensuremath{ \dfrac{\ensuremath{\partial}\nu^{a}}{\ensuremath{\partial}\xi^{i}} }a_{a}$ (2.82)

For a contravariant vector (a rank (1,0) tensor)

$\displaystyle {\tilde{a}}^{i}=\ensuremath{ \dfrac{\ensuremath{\partial}\xi^{i}}{\ensuremath{\partial}\nu^{a}} }a^{a}$ (2.83)

For a covariant tensor (a rank (0,2) tensor)

$\displaystyle {\tilde{A}}_{ij}=\ensuremath{ \dfrac{\ensuremath{\partial}\nu^{a}...
...ath{ \dfrac{\ensuremath{\partial}\nu^{b}}{\ensuremath{\partial}\xi^{j}} }A_{ab}$ (2.84)

For a contravariant tensor (a rank (2,0) tensor)

$\displaystyle {\tilde{A}}^{ij}=\ensuremath{ \dfrac{\ensuremath{\partial}\xi^{i}...
...ath{ \dfrac{\ensuremath{\partial}\xi^{j}}{\ensuremath{\partial}\nu^{b}} }A^{ab}$ (2.85)

and for Mixed tensors (rank (1,1) tensors)

$\displaystyle {\tilde{A}}^{i}_{.j}=\ensuremath{ \dfrac{\ensuremath{\partial}\xi...
... \dfrac{\ensuremath{\partial}\nu^{b}}{\ensuremath{\partial}\xi^{j}} }A^{a}_{.b}$ (2.86)

and

$\displaystyle {\tilde{A}}_{i}^{.j}=\ensuremath{ \dfrac{\ensuremath{\partial}\nu...
... \dfrac{\ensuremath{\partial}\xi^{j}}{\ensuremath{\partial}\nu^{b}} }A_{a}^{.b}$ (2.87)
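The covariant transformation rule can be exercised on the metric tensor itself: transforming the identity (Cartesian) metric to polar coordinates must reproduce $ \operatorname{diag}(1,r^{2})$. A small sketch, assuming $ \ensuremath{\boldsymbol{\nu}}=(x,y)$ and $ \ensuremath{\boldsymbol{\xi}}=(r,\theta)$:

```python
import math

# Apply the rank (0,2) transformation rule g~_ij = (dnu^a/dxi^i)
# (dnu^b/dxi^j) g_ab, starting from the Cartesian metric g_ab = delta_ab
# and transforming to polar coordinates; the result should be diag(1, r^2).

r, theta = 1.5, 0.4

# Jacobian d nu^a / d xi^i for x = r cos(theta), y = r sin(theta)
J = [[math.cos(theta), -r * math.sin(theta)],
     [math.sin(theta),  r * math.cos(theta)]]

delta = [[1.0, 0.0], [0.0, 1.0]]  # Cartesian metric g_ab

g = [[sum(J[a][i] * J[b][j] * delta[a][b]
          for a in range(2) for b in range(2))
      for j in range(2)] for i in range(2)]

assert abs(g[0][0] - 1.0) < 1e-12
assert abs(g[1][1] - r**2) < 1e-12
assert abs(g[0][1]) < 1e-12 and abs(g[1][0]) < 1e-12
```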


Derivatives

Scalars

We note that a scalar quantity $ \ensuremath{ u\ensuremath{\left( \ensuremath{\boldsymbol{\xi}} \right)} }$ has derivatives

$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}u}{\ensuremath{\partial}\xi^{i}} }=\ensuremath{ {u}_{,i} }$ (2.88)

Or more formally, the covariant derivative ( $ \ensuremath{ {\cdot}_{;\cdot}
}$) of a rank 0 tensor $ u$ is

$\displaystyle \ensuremath{ {u}_{;i} }=\ensuremath{ \dfrac{\ensuremath{\partial}u}{\ensuremath{\partial}\xi^{i}} }=\ensuremath{ {u}_{,i} }$ (2.89)

Vectors

The derivatives of a vector $ \ensuremath{\boldsymbol{v}}$ are given by

\begin{displaymath}\begin{split}\ensuremath{ \dfrac{\ensuremath{\partial}\ensure...
...nsuremath{ {\ensuremath{\boldsymbol{g}}_{k}}_{,i} } \end{split}\end{displaymath} (2.90)

Now introducing the notation

$\displaystyle \ensuremath{ \Gamma^{i}_{jk} } = \ensuremath{\ensuremath{\boldsym...
...remath{\partial}\ensuremath{\boldsymbol{g}}_{j}}{\ensuremath{\partial}x^{k}} }}$ (2.91)

where $ \ensuremath{ \Gamma^{i}_{jk}
}$ are the Christoffel symbols of the second kind.

Note that the Christoffel symbols of the first kind are given by

$\displaystyle \ensuremath{ \Gamma_{ijk} } = \ensuremath{\ensuremath{\boldsymbol...
...remath{\partial}\ensuremath{\boldsymbol{g}}_{j}}{\ensuremath{\partial}x^{k}} }}$ (2.92)

Note that

\begin{displaymath}\begin{split}\ensuremath{ \Gamma^{i}_{jk} } &= \ensuremath{\e...
...{l}} \\ &= \ensuremath{ \Gamma^{i}_{jl} }g^{i}_{.l} \end{split}\end{displaymath} (2.93)

The Christoffel symbols of the first kind are also given by

$\displaystyle \ensuremath{ \Gamma_{ijk} }=\frac{1}{2}\ensuremath{\left( \ensure...
...h{ \dfrac{\ensuremath{\partial}g_{jk}}{\ensuremath{\partial}\xi^{i}} } \right)}$ (2.94)

and that Christoffel symbols of the second kind are given by

\begin{displaymath}\begin{split}\ensuremath{ \Gamma^{i}_{jk} } &= g^{il}\ensurem...
...al}g_{jk}}{\ensuremath{\partial}\xi^{l}} } \right)} \end{split}\end{displaymath} (2.95)
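The formula for the Christoffel symbols of the second kind in terms of the metric can be verified numerically for a concrete coordinate system. The sketch below uses the cylindrical polar metric $ g_{ij}=\operatorname{diag}(1,r^{2},1)$ (introduced later in these notes) with finite-difference metric derivatives; it recovers $ \Gamma^{r}_{\theta\theta}=-r$ and $ \Gamma^{\theta}_{r\theta}=1/r$.

```python
# Christoffel symbols of the second kind from
# Gamma^i_jk = (1/2) g^il (dg_lk/dxi^j + dg_lj/dxi^k - dg_jk/dxi^l),
# evaluated for the cylindrical polar metric with finite differences.

H = 1e-6

def g_lo(xi):
    r = xi[0]
    return [[1.0, 0.0, 0.0], [0.0, r * r, 0.0], [0.0, 0.0, 1.0]]

def g_up(xi):
    r = xi[0]
    return [[1.0, 0.0, 0.0], [0.0, 1.0 / (r * r), 0.0], [0.0, 0.0, 1.0]]

def dg(xi, l, k, m):
    # central difference of g_lk with respect to xi^m
    a = list(xi); a[m] += H
    b = list(xi); b[m] -= H
    return (g_lo(a)[l][k] - g_lo(b)[l][k]) / (2.0 * H)

def gamma(xi, i, j, k):
    gu = g_up(xi)
    return 0.5 * sum(gu[i][l] * (dg(xi, l, k, j) + dg(xi, l, j, k)
                                 - dg(xi, j, k, l)) for l in range(3))

xi = [2.0, 0.3, 0.0]  # (r, theta, z)
assert abs(gamma(xi, 0, 1, 1) - (-2.0)) < 1e-6  # Gamma^r_{theta theta} = -r
assert abs(gamma(xi, 1, 0, 1) - 0.5) < 1e-6     # Gamma^theta_{r theta} = 1/r
assert abs(gamma(xi, 0, 0, 0)) < 1e-6           # all others vanish
```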

Note that the Christoffel symbols are not tensors and they have the following transformation laws from $ \ensuremath{\boldsymbol{\nu}}$ to $ \ensuremath{\boldsymbol{\xi}}$ coordinates

$\displaystyle \ensuremath{ \Gamma_{ijk} }$ $\displaystyle = \ensuremath{ \Gamma_{abc} }\ensuremath{ \dfrac{\ensuremath{\par...
...tial}^{2} \nu^{c}}{\ensuremath{\partial}\xi^{j} \ensuremath{\partial}\xi^{k}} }$ (2.96)
$\displaystyle \ensuremath{ \Gamma^{i}_{jk} }$ $\displaystyle = \ensuremath{ \Gamma^{a}_{bc} }\ensuremath{ \dfrac{\ensuremath{\...
...tial}^{2} \nu^{a}}{\ensuremath{\partial}\xi^{j} \ensuremath{\partial}\xi^{k}} }$ (2.97)

We can now write

\begin{displaymath}\begin{split}\ensuremath{ {\ensuremath{\boldsymbol{v}}}_{,i} ...
...math{ {v^{k}}_{;i} }\ensuremath{\boldsymbol{g}}_{k} \end{split}\end{displaymath} (2.98)

where $ \ensuremath{ {v^{k}}_{;i}
}$ is the covariant derivative of $ v^{k}$ .

The covariant derivative of a contravariant (rank (1,0)) tensor $ v^{k}$ is

$\displaystyle \ensuremath{ {v^{k}}_{;i} } =\ensuremath{ {v^{k}}_{,i} } +\ensuremath{ \Gamma^{k}_{ij} }v^{j}$ (2.99)

and the covariant derivative of a covariant (rank (0,1)) tensor $ v_{k}$ is

$\displaystyle \ensuremath{ {v_{k}}_{;i} } =\ensuremath{ {v_{k}}_{,i} } -\ensuremath{ \Gamma^{j}_{ki} }v_{j}$ (2.100)

Tensors

The covariant derivative of a contravariant (rank (2,0)) tensor $ W^{mn}$ is

$\displaystyle \ensuremath{ {W^{mn}}_{;i} }=\ensuremath{ {W^{mn}}_{,i} } +\ensuremath{ \Gamma^{m}_{ji} }W^{jn}+\ensuremath{ \Gamma^{n}_{ji} }W^{mj}$ (2.101)

and the covariant derivative of a covariant (rank (0,2)) tensor $ W_{mn}$ is

$\displaystyle \ensuremath{ {W_{mn}}_{;i} }=\ensuremath{ {W_{mn}}_{,i} } -\ensuremath{ \Gamma^{j}_{mi} }W_{jn}-\ensuremath{ \Gamma^{j}_{ni} }W_{mj}$ (2.102)

and the covariant derivative of a mixed (rank (1,1)) tensor $ W^{m}_{.n}$ is

$\displaystyle \ensuremath{ {W^{m}_{.n}}_{;i} }=\ensuremath{ {W^{m}_{.n}}_{,i} }...
...nsuremath{ \Gamma^{m}_{ji} }W^{j}_{.n}-\ensuremath{ \Gamma^{j}_{ni} }W^{m}_{.j}$ (2.103)

Common Operators

For tensor equations to hold in any coordinate system the equations must involve tensor quantities i.e., covariant derivatives rather than partial derivatives.

Gradient

As the covariant derivative of a scalar is just the partial derivative the gradient of a scalar function $ \phi$ using covariant derivatives is

grad $\displaystyle \phi = \ensuremath{\ensuremath{\nabla}\phi} =\ensuremath{ {\phi}_...
...{\boldsymbol{g}}^{i}=\ensuremath{ {\phi}_{,i} } \ensuremath{\boldsymbol{g}}^{i}$ (2.104)

and

$\displaystyle \ensuremath{\ensuremath{\nabla}\phi} =\ensuremath{ {\phi}_{,i} } ...
...symbol{g}}^{i}=\ensuremath{ {\phi}_{,i} } g^{ij}\ensuremath{\boldsymbol{g}}_{j}$ (2.105)

Divergence

The divergence of a vector using covariant derivatives is

div $\displaystyle \ensuremath{\boldsymbol{\phi}} = \ensuremath{ \ensuremath{\nabla\...
...th{\left( \sqrt{\ensuremath{\left\vert g \right\vert}}\phi^{i} \right)}}_{,i} }$ (2.106)

where $ g$ is the determinant of the covariant metric tensor $ g_{ij}$.
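The divergence formula can be checked against a field whose Cartesian divergence is known. A sketch (illustrative only): the field with Cartesian form $ (x,y,0)$ has contravariant cylindrical polar components $ \phi^{r}=r$, $ \phi^{\theta}=\phi^{z}=0$, and divergence 2 everywhere.

```python
# Check div phi = (1/sqrt(g)) d(sqrt(g) phi^i)/dxi^i in cylindrical
# polar coordinates, where sqrt(g) = r, for the field (x, y, 0).

H = 1e-6

def sqrt_g(r, theta, z):
    return r

def phi(r, theta, z):
    # contravariant components (phi^r, phi^theta, phi^z)
    return (r, 0.0, 0.0)

def div(r, theta, z):
    total = 0.0
    for i in range(3):
        a = [r, theta, z]; b = [r, theta, z]
        a[i] += H; b[i] -= H
        fa = sqrt_g(*a) * phi(*a)[i]
        fb = sqrt_g(*b) * phi(*b)[i]
        total += (fa - fb) / (2.0 * H)
    return total / sqrt_g(r, theta, z)

d = div(1.7, 0.9, 0.2)
assert abs(d - 2.0) < 1e-6  # matches the Cartesian divergence of (x, y, 0)
```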

Curl

The curl of a vector using covariant derivatives is

curl $\displaystyle \ensuremath{\boldsymbol{\phi}} = \ensuremath{ \ensuremath{\nabla\...
...}_{;i} }-\ensuremath{ {\phi_{i}}_{;j} } \right)}\ensuremath{\boldsymbol{g}}_{k}$ (2.107)

where $ g$ is the determinant of the covariant metric tensor $ g_{ij}$.

Laplacian

The Laplacian of a scalar using covariant derivatives is

$\displaystyle \ensuremath{\ensuremath{\nabla^{2}}{\phi}}=$div$\displaystyle \ensuremath{\left( \text{grad }\phi \right)}=\ensuremath{ \ensure...
...{ {\ensuremath{\left( \sqrt{g}g^{ij}\ensuremath{ {\phi}_{,j} } \right)}}_{,i} }$ (2.108)

where $ g$ is the determinant of the covariant metric tensor $ g_{ij}$.
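The scalar Laplacian formula admits a similar check. For $ \phi=r^{2}$ (Cartesian $ x^{2}+y^{2}$) the Laplacian is 4 everywhere; the sketch below evaluates $ \dfrac{1}{\sqrt{g}}\left(\sqrt{g}\,g^{ij}\phi_{,j}\right)_{,i}$ in cylindrical polar coordinates with nested finite differences.

```python
# Check the scalar Laplacian (1/sqrt(g)) d_i (sqrt(g) g^ij d_j phi)
# in cylindrical polar coordinates for phi = r^2.

H = 1e-4

def phi(r, theta, z):
    return r * r

def flux(r, theta, z, i):
    # sqrt(g) g^ij dphi/dxi^j with g = diag(1, r^2, 1), sqrt(g) = r
    g_up = [1.0, 1.0 / (r * r), 1.0]
    a = [r, theta, z]; b = [r, theta, z]
    a[i] += H; b[i] -= H
    dphi = (phi(*a) - phi(*b)) / (2.0 * H)
    return r * g_up[i] * dphi

def laplacian(r, theta, z):
    total = 0.0
    for i in range(3):
        a = [r, theta, z]; b = [r, theta, z]
        a[i] += H; b[i] -= H
        total += (flux(*a, i) - flux(*b, i)) / (2.0 * H)
    return total / r

assert abs(laplacian(1.3, 0.5, 0.0) - 4.0) < 1e-4
```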

The Laplacian of a vector using covariant derivatives is

$\displaystyle \ensuremath{\ensuremath{\nabla^{2}}{\ensuremath{\boldsymbol{\phi}}}}=$grad $\displaystyle \ensuremath{\left( \text{div }\ensuremath{\boldsymbol{\phi}} \rig...
...emath{ \ensuremath{\left. \ensuremath{\boldsymbol{\phi}} \right\vert}^{i}_{i} }$ (2.109)

The Laplacian of a contravariant (rank (1,0)) tensor $ \phi^{k}$ is

$\displaystyle \ensuremath{\ensuremath{\nabla^{2}}{\ensuremath{\boldsymbol{\phi}...
...}_{ij} }}{\ensuremath{\partial}x^{h}} } \right)}\ensuremath{\boldsymbol{e}}^{k}$ (2.110)

and the Laplacian of a covariant (rank (0,1)) tensor $ \phi_{k}$ is

$\displaystyle \ensuremath{\ensuremath{\nabla^{2}}{\ensuremath{\boldsymbol{\phi}...
...}_{ij} }}{\ensuremath{\partial}x^{i}} } \right)}\ensuremath{\boldsymbol{e}}_{k}$ (2.111)


Coordinate Systems

Rectangular Cartesian

The base vectors with respect to the global coordinate system are

$\displaystyle \ensuremath{\boldsymbol{g}}_{i}=\begin{bmatrix}\ensuremath{\bolds...
...ensuremath{\boldsymbol{i}}_{2} \\ \ensuremath{\boldsymbol{i}}_{3} \end{bmatrix}$ (2.112)

The covariant metric tensor is

$\displaystyle g_{ij}=\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ (2.113)

and the contravariant metric tensor is

$\displaystyle g^{ij}=\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ (2.114)

The Christoffel symbols of the second kind are all zero.

Cylindrical Polar

The global coordinates $ \ensuremath{\left( x,y,z \right)}$ in terms of the cylindrical polar coordinates $ \ensuremath{\left( r,\theta,z \right)}$ are defined by

\begin{equation*}\begin{aligned}x = r\cos\theta & \qquad r \ge0 \\ y = r\sin\the...
...theta\le2\pi \\ z = z & \qquad -\infty < z < \infty \end{aligned}\end{equation*}

The base vectors with respect to the global coordinate system are

$\displaystyle \ensuremath{\boldsymbol{g}}_{i}=\begin{bmatrix}\cos\theta\ensurem...
...ensuremath{\boldsymbol{i}}_{2} \\ \ensuremath{\boldsymbol{i}}_{3} \end{bmatrix}$ (2.116)

The covariant metric tensor is

$\displaystyle g_{ij}=\begin{bmatrix}1 & 0 & 0 \\ 0 & r^{2} & 0 \\ 0 & 0 & 1 \end{bmatrix}$ (2.117)

and the contravariant metric tensor is

$\displaystyle g^{ij}=\begin{bmatrix}1 & 0 & 0 \\ 0 & \frac{1}{r^{2}} & 0 \\ 0 & 0 & 1 \end{bmatrix}$ (2.118)

The Christoffel symbols of the second kind are

$\displaystyle \ensuremath{ \Gamma^{r}_{\theta\theta} }$ $\displaystyle =-r$ (2.119)
$\displaystyle \ensuremath{ \Gamma^{\theta}_{r\theta} }=\ensuremath{ \Gamma^{\theta}_{\theta r} }$ $\displaystyle =\frac{1}{r}$ (2.120)

with all other Christoffel symbols zero.

Spherical Polar

The global coordinates $ \ensuremath{\left( x,y,z \right)}$ in terms of the spherical polar coordinates $ \ensuremath{\left( r,\theta,\phi \right)}$ are defined by

\begin{equation*}\begin{aligned}x = r\cos\theta\sin\phi & \qquad r \ge 0 \\ y = ...
...e 2\pi \\ z = r\cos\phi & \qquad 0 \le \phi \le \pi \end{aligned}\end{equation*}

The base vectors with respect to the global coordinate system are

$\displaystyle \ensuremath{\boldsymbol{g}}_{i}=\begin{bmatrix}\cos\theta\sin\phi...
...math{\boldsymbol{i}}_{2}-r\sin\phi\ensuremath{\boldsymbol{i}}_{3} \end{bmatrix}$ (2.122)

The covariant metric tensor is

$\displaystyle g_{ij}=\begin{bmatrix}1 & 0 & 0 \\ 0 & r^{2}\sin^{2}\phi & 0 \\ 0 & 0 & r^{2} \end{bmatrix}$ (2.123)

and the contravariant metric tensor is

$\displaystyle g^{ij}=\begin{bmatrix}1 & 0 & 0 \\ 0 & \frac{1}{r^{2}\sin^{2}\phi} & 0 \\ 0 & 0 & \frac{1}{r^{2}} \end{bmatrix}$ (2.124)

The Christoffel symbols of the second kind are

$\displaystyle \ensuremath{ \Gamma^{r}_{\theta\theta} }$ $\displaystyle =-r\sin^{2}\phi$ (2.125)
$\displaystyle \ensuremath{ \Gamma^{r}_{\phi\phi} }$ $\displaystyle =-r$ (2.126)
$\displaystyle \ensuremath{ \Gamma^{\phi}_{\theta\theta} }$ $\displaystyle =-\sin\phi\cos\phi$ (2.127)
$\displaystyle \ensuremath{ \Gamma^{\theta}_{r\theta} }=\ensuremath{ \Gamma^{\theta}_{\theta r} }$ $\displaystyle =\frac{1}{r}$ (2.128)
$\displaystyle \ensuremath{ \Gamma^{\phi}_{r\phi} }=\ensuremath{ \Gamma^{\phi}_{\phi r} }$ $\displaystyle =\frac{1}{r}$ (2.129)
$\displaystyle \ensuremath{ \Gamma^{\theta}_{\theta\phi} }=\ensuremath{ \Gamma^{\theta}_{\phi\theta} }$ $\displaystyle =\cot\phi$ (2.130)

with all other Christoffel symbols zero.
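The spherical polar metric can be recovered directly from the coordinate map: computing the base vectors $ \ensuremath{\boldsymbol{g}}_{i}=\partial\ensuremath{\boldsymbol{x}}/\partial\xi^{i}$ by finite differences and forming $ g_{ij}=\ensuremath{\boldsymbol{g}}_{i}\cdot\ensuremath{\boldsymbol{g}}_{j}$ reproduces $ \operatorname{diag}(1,r^{2}\sin^{2}\phi,r^{2})$. A sketch:

```python
import math

# Base vectors g_i = dx/dxi^i for spherical polar coordinates, by
# central differences, and the metric g_ij = g_i . g_j.

H = 1e-6

def x_of(r, theta, phi):
    return (r * math.cos(theta) * math.sin(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(phi))

def base_vectors(xi):
    g = []
    for i in range(3):
        a = list(xi); a[i] += H
        b = list(xi); b[i] -= H
        xa, xb = x_of(*a), x_of(*b)
        g.append(tuple((p - q) / (2.0 * H) for p, q in zip(xa, xb)))
    return g

r, theta, phi = 2.0, 0.6, 1.1
g = base_vectors((r, theta, phi))
g_ij = [[sum(u * v for u, v in zip(g[i], g[j])) for j in range(3)]
        for i in range(3)]

assert abs(g_ij[0][0] - 1.0) < 1e-6
assert abs(g_ij[1][1] - r**2 * math.sin(phi)**2) < 1e-6
assert abs(g_ij[2][2] - r**2) < 1e-6
assert abs(g_ij[0][1]) < 1e-6 and abs(g_ij[1][2]) < 1e-6  # orthogonal
```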

Prolate Spheroidal

The global coordinates $ \ensuremath{\left( x,y,z \right)}$ in terms of the prolate spheroidal coordinates $ \ensuremath{\left( \lambda,\mu,\theta \right)}$ are defined by

\begin{equation*}\begin{aligned}x = a\sinh\lambda\sin\mu\cos\theta & \qquad \lam...
...a\cosh\lambda\cos\mu & \qquad 0 \le \theta \le 2\pi \end{aligned}\end{equation*}

where $ a\ge0$ is the focus.

The base vectors with respect to the global coordinate system are

$\displaystyle \ensuremath{\boldsymbol{g}}_{i}=\begin{bmatrix}a\cosh\lambda\sin\...
...{1}+a\sinh\lambda\sin\mu\cos\theta\ensuremath{\boldsymbol{i}}_{2} \end{bmatrix}$ (2.132)

The covariant metric tensor is

$\displaystyle g_{ij}=\begin{bmatrix}a^{2}\ensuremath{\left( \sinh^{2}\lambda+\s...
...n^{2}\mu \right)} & 0 \\ 0 & 0 & a^{2}\sinh^{2}\lambda\sin^{2}\mu \end{bmatrix}$ (2.133)

and the contravariant metric tensor is

$\displaystyle g^{ij}=\begin{bmatrix}\frac{1}{a^{2}\ensuremath{\left( \sinh^{2}\...
...ight)}} & 0 \\ 0 & 0 & \frac{1}{a^{2}\sinh^{2}\lambda\sin^{2}\mu} \end{bmatrix}$ (2.134)

The Christoffel symbols of the second kind are

$\displaystyle \ensuremath{ \Gamma^{\lambda}_{\lambda\lambda} }$ $\displaystyle =\frac{\sinh\lambda\cosh\lambda}{\sinh^{2}\lambda+\sin^{2}\mu}$ (2.135)
$\displaystyle \ensuremath{ \Gamma^{\lambda}_{\mu\mu} }$ $\displaystyle =\frac{-\sinh\lambda\cosh\lambda}{\sinh^{2}\lambda+\sin^{2}\mu}$ (2.136)
$\displaystyle \ensuremath{ \Gamma^{\lambda}_{\theta\theta} }$ $\displaystyle =\frac{-\sinh\lambda\cosh\lambda\sin^{2}\mu}{\sinh^{2}\lambda+\sin^{2}\mu}$ (2.137)
$\displaystyle \ensuremath{ \Gamma^{\lambda}_{\lambda\mu} }$ $\displaystyle =\frac{\sin\mu\cos\mu}{\sinh^{2}\lambda+\sin^{2}\mu}$ (2.138)
$\displaystyle \ensuremath{ \Gamma^{\mu}_{\mu\mu} }$ $\displaystyle =\frac{\sin\mu\cos\mu}{\sinh^{2}\lambda+\sin^{2}\mu}$ (2.139)
$\displaystyle \ensuremath{ \Gamma^{\mu}_{\lambda\lambda} }$ $\displaystyle =\frac{-\sin\mu\cos\mu}{\sinh^{2}\lambda+\sin^{2}\mu}$ (2.140)
$\displaystyle \ensuremath{ \Gamma^{\mu}_{\theta\theta} }$ $\displaystyle =\frac{-\sinh^{2}\lambda\sin\mu\cos\mu}{\sinh^{2}\lambda+\sin^{2}\mu}$ (2.141)
$\displaystyle \ensuremath{ \Gamma^{\mu}_{\mu\lambda} }$ $\displaystyle =\frac{\sinh\lambda\cosh\lambda}{\sinh^{2}\lambda+\sin^{2}\mu}$ (2.142)
$\displaystyle \ensuremath{ \Gamma^{\theta}_{\theta\lambda} }$ $\displaystyle =\frac{\cosh\lambda}{\sinh\lambda}$ (2.143)
$\displaystyle \ensuremath{ \Gamma^{\theta}_{\theta\mu} }$ $\displaystyle =\frac{\cos\mu}{\sin\mu}$ (2.144)

with all other Christoffel symbols zero.
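The same finite-difference construction works for prolate spheroidal coordinates: the metric from $ \ensuremath{\boldsymbol{g}}_{i}\cdot\ensuremath{\boldsymbol{g}}_{j}$ should give $ g_{\lambda\lambda}=g_{\mu\mu}=a^{2}(\sinh^{2}\lambda+\sin^{2}\mu)$ and $ g_{\theta\theta}=a^{2}\sinh^{2}\lambda\sin^{2}\mu$. A sketch (l, m, t standing for $ \lambda,\mu,\theta$):

```python
import math

# Finite-difference metric check for prolate spheroidal coordinates
# x = a sinh(l) sin(m) cos(t), y = a sinh(l) sin(m) sin(t),
# z = a cosh(l) cos(m).

H = 1e-6
A = 1.3  # focus

def x_of(l, m, t):
    return (A * math.sinh(l) * math.sin(m) * math.cos(t),
            A * math.sinh(l) * math.sin(m) * math.sin(t),
            A * math.cosh(l) * math.cos(m))

def metric(xi):
    g = []
    for i in range(3):
        a = list(xi); a[i] += H
        b = list(xi); b[i] -= H
        xa, xb = x_of(*a), x_of(*b)
        g.append(tuple((p - q) / (2.0 * H) for p, q in zip(xa, xb)))
    return [[sum(u * v for u, v in zip(g[i], g[j])) for j in range(3)]
            for i in range(3)]

l, m, t = 0.8, 0.5, 1.2
gm = metric((l, m, t))
s = A**2 * (math.sinh(l)**2 + math.sin(m)**2)
assert abs(gm[0][0] - s) < 1e-5
assert abs(gm[1][1] - s) < 1e-5
assert abs(gm[2][2] - A**2 * math.sinh(l)**2 * math.sin(m)**2) < 1e-5
```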

Oblate Spheroidal

The global coordinates $ \ensuremath{\left( x,y,z \right)}$ in terms of the oblate spheroidal coordinates $ \ensuremath{\left( \lambda,\mu,\theta \right)}$ are defined by

\begin{equation*}\begin{aligned}x = a\cosh\lambda\cos\mu\cos\theta & \qquad \lam...
...a\sinh\lambda\sin\mu & \qquad 0 \le \theta \le 2\pi \end{aligned}\end{equation*}

where $ a\ge0$ is the focus.

The base vectors with respect to the global coordinate system are

$\displaystyle \ensuremath{\boldsymbol{g}}_{i}=\begin{bmatrix}a\sinh\lambda\cos\...
...{1}+a\cosh\lambda\cos\mu\cos\theta\ensuremath{\boldsymbol{i}}_{2} \end{bmatrix}$ (2.146)

The covariant metric tensor is

$\displaystyle g_{ij}=\begin{bmatrix}a^{2}\ensuremath{\left( \sinh^{2}\lambda+\s...
...n^{2}\mu \right)} & 0 \\ 0 & 0 & a^{2}\cosh^{2}\lambda\cos^{2}\mu \end{bmatrix}$ (2.147)

and the contravariant metric tensor is

$\displaystyle g^{ij}=\begin{bmatrix}\frac{1}{a^{2}\ensuremath{\left( \sinh^{2}\...
...ight)}} & 0 \\ 0 & 0 & \frac{1}{a^{2}\cosh^{2}\lambda\cos^{2}\mu} \end{bmatrix}$ (2.148)

The Christoffel symbols of the second kind are

$\displaystyle \ensuremath{ \Gamma^{\lambda}_{\lambda\lambda} }$ $\displaystyle =\frac{\sinh\lambda\cosh\lambda}{\sinh^{2}\lambda+\sin^{2}\mu}$ (2.149)
$\displaystyle \ensuremath{ \Gamma^{\lambda}_{\mu\mu} }$ $\displaystyle =\frac{-\sinh\lambda\cosh\lambda}{\sinh^{2}\lambda+\sin^{2}\mu}$ (2.150)
$\displaystyle \ensuremath{ \Gamma^{\lambda}_{\theta\theta} }$ $\displaystyle =\frac{-\sinh\lambda\cosh\lambda\cos^{2}\mu}{\sinh^{2}\lambda+\sin^{2}\mu}$ (2.151)
$\displaystyle \ensuremath{ \Gamma^{\lambda}_{\lambda\mu} }$ $\displaystyle =\frac{\sin\mu\cos\mu}{\sinh^{2}\lambda+\sin^{2}\mu}$ (2.152)
$\displaystyle \ensuremath{ \Gamma^{\mu}_{\mu\mu} }$ $\displaystyle =\frac{\sin\mu\cos\mu}{\sinh^{2}\lambda+\sin^{2}\mu}$ (2.153)
$\displaystyle \ensuremath{ \Gamma^{\mu}_{\lambda\lambda} }$ $\displaystyle =\frac{-\sin\mu\cos\mu}{\sinh^{2}\lambda+\sin^{2}\mu}$ (2.154)
$\displaystyle \ensuremath{ \Gamma^{\mu}_{\theta\theta} }$ $\displaystyle =\frac{\cosh^{2}\lambda\sin\mu\cos\mu}{\sinh^{2}\lambda+\sin^{2}\mu}$ (2.155)
$\displaystyle \ensuremath{ \Gamma^{\mu}_{\mu\lambda} }$ $\displaystyle =\frac{\sinh\lambda\cosh\lambda}{\sinh^{2}\lambda+\sin^{2}\mu}$ (2.156)
$\displaystyle \ensuremath{ \Gamma^{\theta}_{\theta\lambda} }$ $\displaystyle =\frac{\sinh\lambda}{\cosh\lambda}$ (2.157)
$\displaystyle \ensuremath{ \Gamma^{\theta}_{\theta\mu} }$ $\displaystyle =\frac{-\sin\mu}{\cos\mu}$ (2.158)

with all other Christoffel symbols zero.

Cylindrical parabolic

The global coordinates $ \ensuremath{\left( x,y,z \right)}$ in terms of the cylindrical parabolic coordinates $ \ensuremath{\left( \xi,\eta,z \right)}$ are defined by

\begin{equation*}\begin{aligned}x = \xi\eta & \qquad -\infty < \xi < \infty \\ y...
...d \eta \ge 0 \\ z = z & \qquad -\infty < z < \infty \end{aligned}\end{equation*}

The base vectors with respect to the global coordinate system are

$\displaystyle \ensuremath{\boldsymbol{g}}_{i}=\begin{bmatrix}\eta\ensuremath{\b...
...\ensuremath{\boldsymbol{i}}_{2}\\ \ensuremath{\boldsymbol{i}}_{3} \end{bmatrix}$ (2.160)

The covariant metric tensor is

$\displaystyle g_{ij}=\begin{bmatrix}\xi^{2}+\eta^{2} & 0 & 0 \\ 0 & \xi^{2}+\eta^{2} & 0 \\ 0 & 0 & 1 \end{bmatrix}$ (2.161)

and the contravariant metric tensor is

$\displaystyle g^{ij}=\begin{bmatrix}\frac{1}{\xi^{2}+\eta^{2}}& 0 & 0 \\ 0 & \frac{1}{\xi^{2}+\eta^{2}} & 0 \\ 0 & 0 & 1 \end{bmatrix}$ (2.162)

The Christoffel symbols of the second kind are

$\displaystyle \ensuremath{ \Gamma^{\xi}_{\xi\xi} }$ $\displaystyle =\frac{\xi}{\xi^{2}+\eta^{2}}$ (2.163)
$\displaystyle \ensuremath{ \Gamma^{\eta}_{\eta\eta} }$ $\displaystyle =\frac{\eta}{\xi^{2}+\eta^{2}}$ (2.164)
$\displaystyle \ensuremath{ \Gamma^{\eta}_{\xi\xi} }$ $\displaystyle =\frac{-\eta}{\xi^{2}+\eta^{2}}$ (2.165)
$\displaystyle \ensuremath{ \Gamma^{\xi}_{\eta\eta} }$ $\displaystyle =\frac{-\xi}{\xi^{2}+\eta^{2}}$ (2.166)
$\displaystyle \ensuremath{ \Gamma^{\xi}_{\xi\eta} }=\ensuremath{ \Gamma^{\xi}_{\eta\xi} }$ $\displaystyle =\frac{\eta}{\xi^{2}+\eta^{2}}$ (2.167)
$\displaystyle \ensuremath{ \Gamma^{\eta}_{\xi\eta} }=\ensuremath{ \Gamma^{\eta}_{\eta\xi} }$ $\displaystyle =\frac{\xi}{\xi^{2}+\eta^{2}}$ (2.168)

with all other Christoffel symbols zero.

Parabolic polar

The global coordinates $ \ensuremath{\left( x,y,z \right)}$ in terms of the parabolic polar coordinates $ \ensuremath{\left( \xi,\eta,\theta \right)}$ are defined by

\begin{equation*}\begin{aligned}x = \xi\eta\cos\theta & \qquad \xi \ge 0 \\ y = ...
...^{2}-\eta^{2} \right)} & \qquad 0 \le \theta < 2\pi \end{aligned}\end{equation*}

The base vectors with respect to the global coordinate system are

$\displaystyle \ensuremath{\boldsymbol{g}}_{i}=\begin{bmatrix}\eta\cos\theta\ens...
...ldsymbol{i}}_{1}+\xi\eta\cos\theta\ensuremath{\boldsymbol{i}}_{2} \end{bmatrix}$ (2.170)

The covariant metric tensor is

$\displaystyle g_{ij}=\begin{bmatrix}\xi^{2}+\eta^{2} & 0 & 0 \\ 0 & \xi^{2}+\eta^{2} & 0 \\ 0 & 0 & \xi^{2}\eta^{2} \end{bmatrix}$ (2.171)

and the contravariant metric tensor is

$\displaystyle g^{ij}=\begin{bmatrix}\frac{1}{\xi^{2}+\eta^{2}}& 0 & 0 \\ 0 & \frac{1}{\xi^{2}+\eta^{2}} & 0 \\ 0 & 0 & \frac{1}{\xi^{2}\eta^{2}} \end{bmatrix}$ (2.172)

The Christoffel symbols of the second kind are

$\displaystyle \ensuremath{ \Gamma^{\xi}_{\xi\xi} }$ $\displaystyle =\frac{\xi}{\xi^{2}+\eta^{2}}$ (2.173)
$\displaystyle \ensuremath{ \Gamma^{\eta}_{\eta\eta} }$ $\displaystyle =\frac{\eta}{\xi^{2}+\eta^{2}}$ (2.174)
$\displaystyle \ensuremath{ \Gamma^{\xi}_{\eta\eta} }$ $\displaystyle =\frac{-\xi}{\xi^{2}+\eta^{2}}$ (2.175)
$\displaystyle \ensuremath{ \Gamma^{\eta}_{\xi\xi} }$ $\displaystyle =\frac{-\eta}{\xi^{2}+\eta^{2}}$ (2.176)
$\displaystyle \ensuremath{ \Gamma^{\eta}_{\theta\theta} }$ $\displaystyle =\frac{-\xi^{2}\eta}{\xi^{2}+\eta^{2}}$ (2.177)
$\displaystyle \ensuremath{ \Gamma^{\xi}_{\theta\theta} }$ $\displaystyle =\frac{-\xi\eta^{2}}{\xi^{2}+\eta^{2}}$ (2.178)
$\displaystyle \ensuremath{ \Gamma^{\xi}_{\xi\eta} }=\ensuremath{ \Gamma^{\xi}_{\eta\xi} }$ $\displaystyle =\frac{\eta}{\xi^{2}+\eta^{2}}$ (2.179)
$\displaystyle \ensuremath{ \Gamma^{\eta}_{\xi\eta} }=\ensuremath{ \Gamma^{\eta}_{\eta\xi} }$ $\displaystyle =\frac{\xi}{\xi^{2}+\eta^{2}}$ (2.180)
$\displaystyle \ensuremath{ \Gamma^{\theta}_{\xi\theta} }=\ensuremath{ \Gamma^{\theta}_{\theta\xi} }$ $\displaystyle =\frac{1}{\xi}$ (2.181)
$\displaystyle \ensuremath{ \Gamma^{\theta}_{\eta\theta} }=\ensuremath{ \Gamma^{\theta}_{\theta\eta} }$ $\displaystyle =\frac{1}{\eta}$ (2.182)

with all other Christoffel symbols zero.
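As a numerical sanity check (a sketch in pure standard-library Python, not part of OpenCMISS itself), the covariant metric of the parabolic polar system can be recovered by finite-differencing the coordinate map: the dot products of the base vectors should give $g_{11}=g_{22}=\xi^{2}+\eta^{2}$, $g_{33}=\xi^{2}\eta^{2}$ and vanishing off-diagonal terms.

```python
# Check the parabolic polar metric tensor numerically.  Base vectors are
# obtained by central finite differences of the coordinate map
# x = xi*eta*cos(theta), y = xi*eta*sin(theta), z = (xi^2 - eta^2)/2.
from math import cos, sin

def position(xi, eta, theta):
    return (xi * eta * cos(theta), xi * eta * sin(theta), 0.5 * (xi**2 - eta**2))

def base_vectors(xi, eta, theta, h=1e-6):
    """Covariant base vectors g_i = d(x,y,z)/d(xi,eta,theta) by central differences."""
    coords = (xi, eta, theta)
    vectors = []
    for i in range(3):
        plus, minus = list(coords), list(coords)
        plus[i] += h
        minus[i] -= h
        p, m = position(*plus), position(*minus)
        vectors.append(tuple((a - b) / (2 * h) for a, b in zip(p, m)))
    return vectors

def metric(xi, eta, theta):
    """Covariant metric g_ij as dot products of the base vectors."""
    g = base_vectors(xi, eta, theta)
    return [[sum(a * b for a, b in zip(g[i], g[j])) for j in range(3)] for i in range(3)]

xi, eta, theta = 1.3, 0.7, 0.4
g = metric(xi, eta, theta)
# expected: diag(xi^2 + eta^2, xi^2 + eta^2, xi^2 * eta^2)
```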

Equation set types

Static Equations

The general form for static equations, obtained from the dynamic form below by dropping the mass and damping terms, is

$\displaystyle \boldsymbol{K}\boldsymbol{u}+\boldsymbol{g}\left( \boldsymbol{u} \right)+\boldsymbol{f}=\boldsymbol{0}$

where $ \boldsymbol{u}$ is the unknown vector, $ \boldsymbol{K}$ is the stiffness matrix, $ \boldsymbol{g}$ is a non-linear vector function and $ \boldsymbol{f}$ is the forcing vector.

Dynamic Equations

The general form for dynamic equations is

$\displaystyle \boldsymbol{M}\ddot{\boldsymbol{u}}\left( t \right)+\boldsymbol{C}\dot{\boldsymbol{u}}\left( t \right)+\boldsymbol{K}\boldsymbol{u}\left( t \right)+\boldsymbol{g}\left( \boldsymbol{u}\left( t \right) \right)+\boldsymbol{f}\left( t \right)=\boldsymbol{0}$ (2.183)

where $ \ensuremath{ \ensuremath{\boldsymbol{u}}\ensuremath{\left( t \right)} }$ is the unknown ``displacement vector'', $ \ensuremath{\boldsymbol{M}}$ is the mass matrix, $ \ensuremath{\boldsymbol{C}}$ is the damping matrix, $ \ensuremath{\boldsymbol{K}}$ is the stiffness matrix, $ \ensuremath{ \ensuremath{\boldsymbol{g}}\ensuremath{\left( \ensuremath{ \ensuremath{\boldsymbol{u}}\ensuremath{\left( t \right)} } \right)} }$ a non-linear vector function and $ \ensuremath{ \ensuremath{\boldsymbol{f}}\ensuremath{\left( t \right)} }$ the forcing vector.

From Equation (2.183) we now expand the unknown vector $ \boldsymbol{u}\left( t \right)$ in terms of a polynomial of degree $ p$. With the known values of $ \boldsymbol{u}_{n}$, $ \dot{\boldsymbol{u}}_{n}$, $ \ddot{\boldsymbol{u}}_{n}$ up to $ \stackrel{\scriptscriptstyle p-1}{\boldsymbol{u}}_{n}$ at the beginning of the time step $ \Delta t$ we can write the polynomial expansion as

$\displaystyle \boldsymbol{u}\left( t_{n}+\tau \right)=\boldsymbol{u}_{n}+\tau\dot{\boldsymbol{u}}_{n}+\dfrac{1}{2}\tau^{2}\ddot{\boldsymbol{u}}_{n}+\ldots+ \dfrac{1}{(p-1)!}\tau^{p-1}\stackrel{\scriptscriptstyle p-1}{\boldsymbol{u}}_{n}+ \dfrac{1}{p!}\tau^{p}\boldsymbol{\alpha}^{p}_{n}$ (2.184)

where the only unknown is the vector $ \boldsymbol{\alpha}^{p}_{n}$,

$\displaystyle \boldsymbol{\alpha}^{p}_{n}\approx\stackrel{\scriptscriptstyle p}{\boldsymbol{u}}\equiv\dfrac{d^{p}\boldsymbol{u}}{dt^{p}}$ (2.185)

A recurrence relationship can be established by substituting Equation (2.184) into Equation (2.183) and taking a weighted residual approach i.e.,

\begin{multline}
\int\limits_{0}^{\Delta t} W\left( \tau \right)\left[\boldsymbol{M}\ddot{\boldsymbol{u}}\left( t_{n}+\tau \right)+\boldsymbol{C}\dot{\boldsymbol{u}}\left( t_{n}+\tau \right)+\boldsymbol{K}\boldsymbol{u}\left( t_{n}+\tau \right)\right.\\
\left.+\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n}+\tau \right) \right)+\boldsymbol{f}\left( t_{n}+\tau \right)\right] d\tau = \boldsymbol{0}
\end{multline}

where $ \ensuremath{ W\ensuremath{\left( \tau \right)} }$ is some weight function, $ \tau=t-t_{n}$ and $ \Delta
t=t_{n+1}-t_{n}$. Dividing by $ \ensuremath{ \ensuremath{ \displaystyle\int\limits_{0}^{\Delta t} }\,\ensuremath{ W\ensuremath{\left( \tau \right)} }\,d\tau }$ we obtain

\begin{multline}
\dfrac{\int\limits_{0}^{\Delta t} W\left( \tau \right)\left[\boldsymbol{M}\ddot{\boldsymbol{u}}\left( t_{n}+\tau \right)+\boldsymbol{C}\dot{\boldsymbol{u}}\left( t_{n}+\tau \right)+\boldsymbol{K}\boldsymbol{u}\left( t_{n}+\tau \right)+\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n}+\tau \right) \right)+\boldsymbol{f}\left( t_{n}+\tau \right)\right] d\tau}{\int\limits_{0}^{\Delta t} W\left( \tau \right)\,d\tau}=\boldsymbol{0}
\end{multline}

Now if

$\displaystyle \theta_{k}=\dfrac{\int\limits_{0}^{\Delta t} W\left( \tau \right)\,\tau^{k}\,d\tau}{{\Delta t}^{k}\int\limits_{0}^{\Delta t} W\left( \tau \right)\,d\tau}$    for  $\displaystyle k=0,1,\ldots,p$ (2.186)

and

$\displaystyle \bar{\boldsymbol{f}}=\dfrac{\int\limits_{0}^{\Delta t} W\left( \tau \right)\,\boldsymbol{f}\left( t_{n}+\tau \right)\,d\tau}{\int\limits_{0}^{\Delta t} W\left( \tau \right)\,d\tau}$ (2.187)

we can write

\begin{multline}
\boldsymbol{M}\left( \ddot{\bar{\boldsymbol{u}}}_{n+1}+\dfrac{\theta_{p-2}{\Delta t}^{p-2}}{(p-2)!}\boldsymbol{\alpha}^{p}_{n} \right)+\boldsymbol{C}\left( \dot{\bar{\boldsymbol{u}}}_{n+1}+\dfrac{\theta_{p-1}{\Delta t}^{p-1}}{(p-1)!}\boldsymbol{\alpha}^{p}_{n} \right)\\
+\boldsymbol{K}\left( \bar{\boldsymbol{u}}_{n+1}+\dfrac{\theta_{p}{\Delta t}^{p}}{p!}\boldsymbol{\alpha}^{p}_{n} \right)+\dfrac{\int\limits_{0}^{\Delta t} W\left( \tau \right)\,\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n}+\tau \right) \right)\,d\tau}{\int\limits_{0}^{\Delta t} W\left( \tau \right)\,d\tau}+\bar{\boldsymbol{f}}=\boldsymbol{0}
\end{multline}

where

\begin{displaymath}\begin{split}\bar{\boldsymbol{u}}_{n+1} &= \sum_{q=0}^{p-1}\dfrac{\theta_{q}{\Delta t}^{q}}{q!}\stackrel{\scriptscriptstyle q}{\boldsymbol{u}}_{n} \\ \dot{\bar{\boldsymbol{u}}}_{n+1} &= \sum_{q=1}^{p-1}\dfrac{\theta_{q-1}{\Delta t}^{q-1}}{(q-1)!}\stackrel{\scriptscriptstyle q}{\boldsymbol{u}}_{n} \\ \ddot{\bar{\boldsymbol{u}}}_{n+1} &= \sum_{q=2}^{p-1}\dfrac{\theta_{q-2}{\Delta t}^{q-2}}{(q-2)!}\stackrel{\scriptscriptstyle q}{\boldsymbol{u}}_{n} \end{split}\end{displaymath} (2.188)
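The weighting parameters $\theta_{k}$ can be evaluated numerically for any choice of weight function. A small Python sketch (the constant weight $W(\tau)=1$ is an illustrative assumption, not an OpenCMISS default) shows that a constant weight gives $\theta_{k}=1/(k+1)$ independently of the step size:

```python
# Compute theta_k = int_0^dt W(tau)*tau^k dtau / (dt^k * int_0^dt W(tau) dtau)
# by the midpoint rule.  For W(tau) = 1 this should give theta_k = 1/(k+1).

def theta(k, dt, weight, n=20000):
    """Approximate theta_k with an n-point midpoint rule."""
    h = dt / n
    taus = [(i + 0.5) * h for i in range(n)]
    num = sum(weight(t) * t ** k for t in taus) * h
    den = sum(weight(t) for t in taus) * h
    return num / (dt ** k * den)

dt = 0.37
thetas = [theta(k, dt, lambda tau: 1.0) for k in range(4)]
# expected: approximately [1, 1/2, 1/3, 1/4]
```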

We note that as $ \ensuremath{ \ensuremath{\boldsymbol{g}}\ensuremath{\left( \ensuremath{ \ensuremath{\boldsymbol{u}}\ensuremath{\left( t \right)} } \right)} }$ is nonlinear we need to evaluate an integral of the form

$\displaystyle \int\limits_{0}^{\Delta t} W\left( \tau \right)\,\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n}+\tau \right) \right)\,d\tau$ (2.189)

To do this we form Taylor's series expansions for $ \ensuremath{ \ensuremath{\boldsymbol{g}}\ensuremath{\left( \ensuremath{ \ensuremath{\boldsymbol{u}}\ensuremath{\left( t \right)} } \right)} }$ about the point $ \ensuremath{ \ensuremath{\boldsymbol{u}}\ensuremath{\left( t_{n}+\tau \right)} }$ i.e.,

$\displaystyle \boldsymbol{g}\left( \boldsymbol{u}\left( t_{n} \right) \right)=\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n}+\tau \right) \right)-\tau\dfrac{d}{dt}\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n}+\tau \right) \right)+\mathrm{O}\left( \tau^{2} \right)$ (2.190)

and

$\displaystyle \boldsymbol{g}\left( \boldsymbol{u}\left( t_{n+1} \right) \right)=\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n}+\tau \right) \right)+\left( \Delta t-\tau \right)\dfrac{d}{dt}\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n}+\tau \right) \right)+\mathrm{O}\left( \tau^{2} \right)$ (2.191)

Now if we add $ \dfrac{1}{\tau}$ times Equation (2.190) and $ \dfrac{1}{t_{n+1}-t_{n}-\tau}=\dfrac{1}{\Delta t-\tau}$ times Equation (2.191) we obtain

$\displaystyle \dfrac{\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n} \right) \right)}{\tau}+\dfrac{\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n+1} \right) \right)}{\Delta t-\tau}=\dfrac{\Delta t}{\tau\left( \Delta t-\tau \right)}\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n}+\tau \right) \right)+\left( \dfrac{1}{\tau}+\dfrac{1}{\Delta t-\tau} \right)\mathrm{O}\left( \tau^{2} \right)$ (2.192)

Multiplying through by $ \dfrac{\tau\ensuremath{\left( \Delta t-\tau \right)}}{\Delta t}$ gives

$\displaystyle \dfrac{\Delta t-\tau}{\Delta t}\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n} \right) \right)+\dfrac{\tau}{\Delta t}\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n+1} \right) \right)=\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n}+\tau \right) \right)+\mathrm{O}\left( \tau^{2} \right)$ (2.193)

Therefore

$\displaystyle \dfrac{\int\limits_{0}^{\Delta t} W\left( \tau \right)\,\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n}+\tau \right) \right)\,d\tau}{\int\limits_{0}^{\Delta t} W\left( \tau \right)\,d\tau}=\dfrac{\int\limits_{0}^{\Delta t} W\left( \tau \right)\left[ \dfrac{\Delta t-\tau}{\Delta t}\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n} \right) \right)+\dfrac{\tau}{\Delta t}\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n+1} \right) \right) \right] d\tau}{\int\limits_{0}^{\Delta t} W\left( \tau \right)\,d\tau}+\dfrac{\int\limits_{0}^{\Delta t} W\left( \tau \right)\,\mathrm{O}\left( \tau^{2} \right)\,d\tau}{\int\limits_{0}^{\Delta t} W\left( \tau \right)\,d\tau}$ (2.194)

Now if we recall that

$\displaystyle \theta_{1}=\dfrac{\int\limits_{0}^{\Delta t} W\left( \tau \right)\,\tau\,d\tau}{\Delta t\int\limits_{0}^{\Delta t} W\left( \tau \right)\,d\tau}$ (2.195)

we can write

$\displaystyle \dfrac{\int\limits_{0}^{\Delta t} W\left( \tau \right)\,\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n}+\tau \right) \right)\,d\tau}{\int\limits_{0}^{\Delta t} W\left( \tau \right)\,d\tau}=\left( 1-\theta_{1} \right)\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n} \right) \right)+\theta_{1}\,\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n+1} \right) \right)+$Error (2.196)

where

Error$\displaystyle =\dfrac{\int\limits_{0}^{\Delta t} W\left( \tau \right)\,\mathrm{O}\left( \tau^{2} \right)\,d\tau}{\int\limits_{0}^{\Delta t} W\left( \tau \right)\,d\tau}$ (2.197)

Using Equation (2.196), the weighted residual equation now becomes

\begin{multline}
\boldsymbol{M}\left( \ddot{\bar{\boldsymbol{u}}}_{n+1}+\dfrac{\theta_{p-2}{\Delta t}^{p-2}}{(p-2)!}\boldsymbol{\alpha}^{p}_{n} \right)+\boldsymbol{C}\left( \dot{\bar{\boldsymbol{u}}}_{n+1}+\dfrac{\theta_{p-1}{\Delta t}^{p-1}}{(p-1)!}\boldsymbol{\alpha}^{p}_{n} \right)\\
+\boldsymbol{K}\left( \bar{\boldsymbol{u}}_{n+1}+\dfrac{\theta_{p}{\Delta t}^{p}}{p!}\boldsymbol{\alpha}^{p}_{n} \right)+\left( 1-\theta_{1} \right)\boldsymbol{g}\left( \boldsymbol{u}_{n} \right)+\theta_{1}\,\boldsymbol{g}\left( \boldsymbol{u}\left( t_{n+1} \right) \right)+\bar{\boldsymbol{f}}=\boldsymbol{0}
\end{multline}
as $ \boldsymbol{u}\left( t_{n} \right)=\boldsymbol{u}_{n}$ and $ \boldsymbol{u}\left( t_{n+1} \right)=\hat{\boldsymbol{u}}_{n+1}+\dfrac{{\Delta t}^{p}}{p!}\boldsymbol{\alpha}^{p}_{n}$ where $ \hat{\boldsymbol{u}}_{n+1}$ is the predicted displacement at the new time step and is given by

$\displaystyle \hat{\boldsymbol{u}}_{n+1}=\sum_{q=0}^{p-1}\dfrac{{\Delta t}^{q}}{q!}\stackrel{\scriptscriptstyle q}{\boldsymbol{u}}_{n}$ (2.198)

Rearranging gives

$\displaystyle \boldsymbol{\psi}\left( \boldsymbol{\alpha}^{p}_{n} \right)=\boldsymbol{A}\boldsymbol{\alpha}^{p}_{n}+\theta_{1}\,\boldsymbol{g}\left( \hat{\boldsymbol{u}}_{n+1}+\dfrac{{\Delta t}^{p}}{p!}\boldsymbol{\alpha}^{p}_{n} \right)+\left( 1-\theta_{1} \right)\boldsymbol{g}\left( \boldsymbol{u}_{n} \right)+\boldsymbol{b}= \boldsymbol{0}$ (2.199)

where $ \ensuremath{\boldsymbol{A}}$ is the Amplification matrix given by

$\displaystyle \boldsymbol{A}=\dfrac{\theta_{p-2}{\Delta t}^{p-2}}{(p-2)!}\boldsymbol{M}+ \dfrac{\theta_{p-1}{\Delta t}^{p-1}}{(p-1)!}\boldsymbol{C}+ \dfrac{\theta_{p}{\Delta t}^{p}}{p!}\boldsymbol{K}$ (2.200)

and $ \ensuremath{\boldsymbol{b}}$ is the right hand side vector given by

$\displaystyle \boldsymbol{b}=\boldsymbol{M}\ddot{\bar{\boldsymbol{u}}}_{n+1}+\boldsymbol{C}\dot{\bar{\boldsymbol{u}}}_{n+1}+\boldsymbol{K}\bar{\boldsymbol{u}}_{n+1}+\bar{\boldsymbol{f}}$ (2.201)

If $ \boldsymbol{g}\left( \boldsymbol{u} \right)\equiv\boldsymbol{0}$ then Equation (2.199) is linear in $ \boldsymbol{\alpha}^{p}_{n}$ and $ \boldsymbol{\alpha}^{p}_{n}$ can be found by solving the linear equation

$\displaystyle \boldsymbol{\alpha}^{p}_{n} =-\boldsymbol{A}^{-1}\left( \boldsymbol{M}\ddot{\bar{\boldsymbol{u}}}_{n+1}+\boldsymbol{C}\dot{\bar{\boldsymbol{u}}}_{n+1}+\boldsymbol{K}\bar{\boldsymbol{u}}_{n+1}+\bar{\boldsymbol{f}} \right)$ (2.202)

or

$\displaystyle \ensuremath{\boldsymbol{\alpha}}^{p}_{n} =-\ensuremath{{\ensuremath{\boldsymbol{A}}}^{-1}}\ensuremath{\boldsymbol{b}}$ (2.203)

If $ \boldsymbol{g}\left( \boldsymbol{u} \right)$ is not $ \equiv\boldsymbol{0}$ then Equation (2.199) is nonlinear in $ \boldsymbol{\alpha}^{p}_{n}$. To solve this equation we use Newton's method i.e.,

\begin{displaymath}\begin{split}\text{1. } & \text{solve } \boldsymbol{J}\left( \boldsymbol{\alpha}^{p}_{n(i)} \right)\delta\boldsymbol{\alpha}^{p}_{n(i)}=-\boldsymbol{\psi}\left( \boldsymbol{\alpha}^{p}_{n(i)} \right) \\ \text{2. } & \text{update } \boldsymbol{\alpha}^{p}_{n(i+1)}=\boldsymbol{\alpha}^{p}_{n(i)}+\delta \boldsymbol{\alpha}^{p}_{n(i)} \end{split}\end{displaymath} (2.204)

where $ \ensuremath{ \ensuremath{\boldsymbol{J}}\ensuremath{\left( \ensuremath{\boldsymbol{\alpha}}^{p}_{n} \right)} }$ is the Jacobian and is given by

$\displaystyle \boldsymbol{J}\left( \boldsymbol{\alpha}^{p}_{n} \right)=\dfrac{\partial\boldsymbol{\psi}\left( \boldsymbol{\alpha}^{p}_{n} \right)}{\partial\boldsymbol{\alpha}^{p}_{n}}$ (2.205)

or

$\displaystyle \boldsymbol{J}\left( \boldsymbol{\alpha}^{p}_{n} \right)=\boldsymbol{A}+\theta_{1}\dfrac{\partial\boldsymbol{g}\left( \hat{\boldsymbol{u}}_{n+1}+\dfrac{{\Delta t}^{p}}{p!}\boldsymbol{\alpha}^{p}_{n} \right)}{\partial\boldsymbol{\alpha}^{p}_{n}}$ (2.206)

Once $ \ensuremath{\boldsymbol{\alpha}}^{p}_{n}$ has been obtained the values at the next time step can be obtained from

\begin{displaymath}\begin{split}\boldsymbol{u}_{n+1} &= \hat{\boldsymbol{u}}_{n+1}+\dfrac{{\Delta t}^{p}}{p!}\boldsymbol{\alpha}^{p}_{n} \\ \stackrel{\scriptscriptstyle q}{\boldsymbol{u}}_{n+1} &= \sum_{r=q}^{p-1}\dfrac{{\Delta t}^{r-q}}{(r-q)!}\stackrel{\scriptscriptstyle r}{\boldsymbol{u}}_{n}+\dfrac{{\Delta t}^{p-q}}{(p-q)!}\boldsymbol{\alpha}^{p}_{n} \qquad q=1,\ldots,p-1 \end{split}\end{displaymath} (2.207)

For algorithms in which the degree of the polynomial, $ p$, is higher than the order of the system we require the algorithm to be initialised so that the initial velocity or acceleration can be computed. The initial velocity or acceleration values can be obtained by substituting the initial displacement, or initial displacement and velocity, values into Equation (2.183), rearranging and solving. For example, consider the case of a second degree polynomial and a first order system. Substituting the initial displacement $ \boldsymbol{u}_{0}$ into Equation (2.183) gives

$\displaystyle \boldsymbol{C}\dot{\boldsymbol{u}}_{0}+\boldsymbol{K}\boldsymbol{u}_{0}+\boldsymbol{g}\left( \boldsymbol{u}_{0} \right)+\bar{\boldsymbol{f}}_{0}=\boldsymbol{0}$ (2.208)

and therefore an approximation to the initial velocity can be found from

$\displaystyle \dot{\boldsymbol{u}}_{0}=-\boldsymbol{C}^{-1}\left( \boldsymbol{K}\boldsymbol{u}_{0}+\boldsymbol{g}\left( \boldsymbol{u}_{0} \right)+\bar{\boldsymbol{f}}_{0} \right)$ (2.209)

Similarly for a third degree polynomial and a second order system the initial acceleration can be found from

$\displaystyle \ddot{\boldsymbol{u}}_{0}=-\boldsymbol{M}^{-1}\left( \boldsymbol{C}\dot{\boldsymbol{u}}_{0}+\boldsymbol{K}\boldsymbol{u}_{0}+\boldsymbol{g}\left( \boldsymbol{u}_{0} \right)+\bar{\boldsymbol{f}}_{0} \right)$ (2.210)

To evaluate the mean weighted load vector, $ \bar{\boldsymbol{f}}$, we need to evaluate the integral in Equation (2.187). In some cases, however, we can make the assumption that the load vector varies linearly during the time step. In these cases the mean weighted load vector can be computed from

$\displaystyle \bar{\boldsymbol{f}}=\theta_{1}\boldsymbol{f}_{n+1}+\left( 1-\theta_{1} \right)\boldsymbol{f}_{n}$ (2.211)

Special SN11 case, p=1

For this special case, the mean predicted values are given by

$\displaystyle \bar{\ensuremath{\boldsymbol{u}}}_{n+1} = \ensuremath{\boldsymbol{u}}_{n}$ (2.212)

The predicted displacement values are given by

$\displaystyle \hat{\ensuremath{\boldsymbol{u}}}_{n+1} = \ensuremath{\boldsymbol{u}}_{n}$ (2.213)

The amplification matrix is given by

$\displaystyle \ensuremath{\boldsymbol{A}}=\ensuremath{\boldsymbol{C}}+\theta_{1}\Delta t \ensuremath{\boldsymbol{K}}$ (2.214)

The right hand side vector is given by

$\displaystyle \ensuremath{\boldsymbol{b}}=\ensuremath{\boldsymbol{K}}\bar{\ensuremath{\boldsymbol{u}}}_{n+1}+\bar{\ensuremath{\boldsymbol{f}}}$ (2.215)

The nonlinear function is given by

$\displaystyle \boldsymbol{\psi}\left( \boldsymbol{\alpha}^{1}_{n} \right)=\boldsymbol{A}\boldsymbol{\alpha}^{1}_{n}+\theta_{1}\,\boldsymbol{g}\left( \boldsymbol{u}_{n}+\Delta t\,\boldsymbol{\alpha}^{1}_{n} \right)+\left( 1-\theta_{1} \right)\boldsymbol{g}\left( \boldsymbol{u}_{n} \right)+\boldsymbol{b}= \boldsymbol{0}$ (2.216)

The Jacobian matrix is given by

$\displaystyle \boldsymbol{J}\left( \boldsymbol{\alpha}^{1}_{n} \right)=\boldsymbol{A}+\theta_{1}\dfrac{\partial\boldsymbol{g}\left( \boldsymbol{u}_{n}+\Delta t\,\boldsymbol{\alpha}^{1}_{n} \right)}{\partial\boldsymbol{\alpha}^{1}_{n}}$ (2.217)

And the time step update is given by

$\displaystyle \ensuremath{\boldsymbol{u}}_{n+1} = \ensuremath{\boldsymbol{u}}_{n}+\Delta t\ensuremath{\boldsymbol{\alpha}}^{1}_{n}$ (2.218)
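The SN11 step above can be sketched for the scalar linear case $C\dot{u}+Ku+f=0$ (with $g\equiv 0$). Choosing $\theta_{1}=\tfrac{1}{2}$ recovers the trapezoidal rule; integrating $\dot{u}=-u$ from $u(0)=1$ to $t=1$ should give a value close to $e^{-1}\approx 0.3679$:

```python
# One SN11 (p = 1) time step for the scalar linear problem C*du/dt + K*u + f = 0.

def sn11_step(u_n, dt, C, K, f_bar, theta1=0.5):
    A = C + theta1 * dt * K   # amplification "matrix" (scalar here)
    b = K * u_n + f_bar       # right hand side, using u_bar_{n+1} = u_n
    alpha = -b / A            # alpha approximates du/dt over the step
    return u_n + dt * alpha   # time step update

u, dt = 1.0, 0.01
for _ in range(100):          # 100 steps of size 0.01, i.e. integrate to t = 1
    u = sn11_step(u, dt, C=1.0, K=1.0, f_bar=0.0)
# u should be close to exp(-1) ~ 0.36788
```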

Special SN21 case, p=2

For this special case, the mean predicted values are given by

\begin{displaymath}\begin{split}\bar{\boldsymbol{u}}_{n+1} &= \boldsymbol{u}_{n}+\theta_{1}\Delta t\,\dot{\boldsymbol{u}}_{n} \\ \dot{\bar{\boldsymbol{u}}}_{n+1} &= \dot{\boldsymbol{u}}_{n} \end{split}\end{displaymath} (2.219)

where

$\displaystyle \dot{\boldsymbol{u}}_{0}=-\boldsymbol{C}^{-1}\left( \boldsymbol{K}\boldsymbol{u}_{0}+\boldsymbol{g}\left( \boldsymbol{u}_{0} \right)+\bar{\boldsymbol{f}}_{0} \right)$ (2.220)

The predicted displacement values are given by

$\displaystyle \hat{\ensuremath{\boldsymbol{u}}}_{n+1} = \ensuremath{\boldsymbol{u}}_{n}+\Delta t\dot{\ensuremath{\boldsymbol{u}}}_{n}$ (2.221)

The amplification matrix is given by

$\displaystyle \ensuremath{\boldsymbol{A}}=\theta_{1}\Delta t\ensuremath{\boldsymbol{C}}+\dfrac{\theta_{2}{\Delta t}^{2}}{2}\ensuremath{\boldsymbol{K}}$ (2.222)

The right hand side vector is given by

$\displaystyle \boldsymbol{b}=\boldsymbol{C}\dot{\bar{\boldsymbol{u}}}_{n+1}+\boldsymbol{K}\bar{\boldsymbol{u}}_{n+1}+\bar{\boldsymbol{f}}$ (2.223)

The nonlinear function is given by

$\displaystyle \boldsymbol{\psi}\left( \boldsymbol{\alpha}^{2}_{n} \right)=\boldsymbol{A}\boldsymbol{\alpha}^{2}_{n}+\theta_{1}\,\boldsymbol{g}\left( \hat{\boldsymbol{u}}_{n+1}+\dfrac{{\Delta t}^{2}}{2}\boldsymbol{\alpha}^{2}_{n} \right)+\left( 1-\theta_{1} \right)\boldsymbol{g}\left( \boldsymbol{u}_{n} \right)+\boldsymbol{b}= \boldsymbol{0}$ (2.224)

The Jacobian matrix is given by

$\displaystyle \boldsymbol{J}\left( \boldsymbol{\alpha}^{2}_{n} \right)=\boldsymbol{A}+\theta_{1}\dfrac{\partial\boldsymbol{g}\left( \hat{\boldsymbol{u}}_{n+1}+\dfrac{{\Delta t}^{2}}{2}\boldsymbol{\alpha}^{2}_{n} \right)}{\partial\boldsymbol{\alpha}^{2}_{n}}$ (2.225)

And the time step update is given by

\begin{displaymath}\begin{split}\boldsymbol{u}_{n+1} &= \hat{\boldsymbol{u}}_{n+1}+\dfrac{{\Delta t}^{2}}{2}\boldsymbol{\alpha}^{2}_{n} \\ \dot{\boldsymbol{u}}_{n+1} &= \dot{\boldsymbol{u}}_{n}+\Delta t\,\boldsymbol{\alpha}^{2}_{n} \end{split}\end{displaymath} (2.226)

Special SN22 case, p=2

For this special case, the mean predicted values are given by

\begin{displaymath}\begin{split}\bar{\boldsymbol{u}}_{n+1} &= \boldsymbol{u}_{n}+\theta_{1}\Delta t\,\dot{\boldsymbol{u}}_{n} \\ \dot{\bar{\boldsymbol{u}}}_{n+1} &= \dot{\boldsymbol{u}}_{n} \end{split}\end{displaymath} (2.227)

The predicted displacement values are given by

$\displaystyle \hat{\ensuremath{\boldsymbol{u}}}_{n+1} = \ensuremath{\boldsymbol{u}}_{n}+\Delta t\dot{\ensuremath{\boldsymbol{u}}}_{n}$ (2.228)

The amplification matrix is given by

$\displaystyle \boldsymbol{A}=\boldsymbol{M}+\theta_{1}\Delta t\,\boldsymbol{C}+\dfrac{\theta_{2}{\Delta t}^{2}}{2}\boldsymbol{K}$ (2.229)

The right hand side vector is given by

$\displaystyle \boldsymbol{b}=\boldsymbol{C}\dot{\bar{\boldsymbol{u}}}_{n+1}+\boldsymbol{K}\bar{\boldsymbol{u}}_{n+1}+\bar{\boldsymbol{f}}$ (2.230)

The nonlinear function is given by

$\displaystyle \boldsymbol{\psi}\left( \boldsymbol{\alpha}^{2}_{n} \right)=\boldsymbol{A}\boldsymbol{\alpha}^{2}_{n}+\theta_{1}\,\boldsymbol{g}\left( \hat{\boldsymbol{u}}_{n+1}+\dfrac{{\Delta t}^{2}}{2}\boldsymbol{\alpha}^{2}_{n} \right)+\left( 1-\theta_{1} \right)\boldsymbol{g}\left( \boldsymbol{u}_{n} \right)+\boldsymbol{b}= \boldsymbol{0}$ (2.231)

The Jacobian matrix is given by

$\displaystyle \boldsymbol{J}\left( \boldsymbol{\alpha}^{2}_{n} \right)=\boldsymbol{A}+\theta_{1}\dfrac{\partial\boldsymbol{g}\left( \hat{\boldsymbol{u}}_{n+1}+\dfrac{{\Delta t}^{2}}{2}\boldsymbol{\alpha}^{2}_{n} \right)}{\partial\boldsymbol{\alpha}^{2}_{n}}$ (2.232)

And the time step update is given by

\begin{displaymath}\begin{split}\boldsymbol{u}_{n+1} &= \hat{\boldsymbol{u}}_{n+1}+\dfrac{{\Delta t}^{2}}{2}\boldsymbol{\alpha}^{2}_{n} \\ \dot{\boldsymbol{u}}_{n+1} &= \dot{\boldsymbol{u}}_{n}+\Delta t\,\boldsymbol{\alpha}^{2}_{n} \end{split}\end{displaymath} (2.233)
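The SN22 step can be sketched for the scalar undamped oscillator $M\ddot{u}+Ku=0$ ($C=0$, $g\equiv 0$, $f\equiv 0$). The choices $\theta_{1}=\theta_{2}=\tfrac{1}{2}$ are illustrative assumptions; with them the scheme is unconditionally stable and, integrating $\ddot{u}=-u$ from $u(0)=1$, $\dot{u}(0)=0$, the result at $t=1$ should be close to $\cos 1\approx 0.5403$:

```python
# One SN22 (p = 2) step for the scalar oscillator M*u'' + K*u = 0.
# The mean predicted acceleration is zero for p = 2, so b has no M term here.

def sn22_step(u_n, v_n, dt, M, K, theta1=0.5, theta2=0.5):
    u_bar = u_n + theta1 * dt * v_n     # mean predicted displacement
    A = M + theta2 * dt**2 / 2 * K      # amplification matrix (C = 0)
    b = K * u_bar                       # right hand side (f_bar = 0)
    alpha = -b / A                      # alpha approximates u''
    u_hat = u_n + dt * v_n              # predicted displacement
    return u_hat + dt**2 / 2 * alpha, v_n + dt * alpha

u, v, dt = 1.0, 0.0, 0.01
for _ in range(100):                    # integrate u'' = -u to t = 1
    u, v = sn22_step(u, v, dt, M=1.0, K=1.0)
# u should be close to cos(1), v close to -sin(1)
```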

Interface Conditions

Variational principles

The branch of mathematics concerned with the problem of finding a function for which a certain integral of that function is either at its largest or smallest value is called the calculus of variations. When scientific laws are formulated in terms of the principles of the calculus of variations they are termed variational principles.

Lagrange Multipliers


Equation Sets

Classical Field Class

Generalised Laplace Equation

Governing equations:

The generalised Laplace's equation on a domain $ \Omega$ with boundary $ \Gamma$ in OpenCMISS can be stated as

$\displaystyle \boxed{ \nabla\cdot\left( \boldsymbol{\sigma}\left( \boldsymbol{x} \right)\,\nabla u\left( \boldsymbol{x} \right) \right)=0 }$ (3.1)

where $ \ensuremath{\boldsymbol{x}}\in\Omega$, $ \ensuremath{ u\ensuremath{\left( \ensuremath{\boldsymbol{x}} \right)} }$ is the potential and $ \ensuremath{ \ensuremath{\boldsymbol{\sigma}}\ensuremath{\left( \ensuremath{\boldsymbol{x}} \right)} }$ is the (positive definite) conductivity tensor throughout the domain. Note that $ \ensuremath{\boldsymbol{\sigma}}=\ensuremath{\boldsymbol{I}}$ gives the standard form of Laplace's equation i.e., $ \ensuremath{\ensuremath{\nabla^{2}}{u}}=0$.

To complete the description of the boundary value problem, the governing Equation (3.1) is supplemented by suitable boundary conditions on the domain boundary $ \Gamma$. These boundary conditions can either be of Dirichlet type and specify a solution value, $ d$ i.e.,

$\displaystyle u\left( \boldsymbol{x} \right)=d\left( \boldsymbol{x} \right) \quad \boldsymbol{x}\in\Gamma_{D},$ (3.2)

and/or of Neumann type and specify the solution gradient in normal direction, $ e$ i.e.,

$\displaystyle q\left( \boldsymbol{x} \right)=\boldsymbol{\sigma}\left( \boldsymbol{x} \right)\,\nabla u\left( \boldsymbol{x} \right)\cdot\boldsymbol{n}=e\left( \boldsymbol{x} \right) \quad \boldsymbol{x}\in\Gamma_{N},$ (3.3)

where $ q\left( \boldsymbol{x} \right)$ is the flux in the normal direction, $ \boldsymbol{n}$ is the normal vector to the boundary and $ \Gamma = \Gamma_D \cup \Gamma_N$.

Weak formulation:

The corresponding weak form of Equation (3.1) is

$\displaystyle \int\limits_{\Omega}\nabla\cdot\left( \boldsymbol{\sigma}\,\nabla u \right)w\,d\Omega =0$ (3.4)

where $ w$ is a suitable test function (For a definition of what constitutes a ``suitable'' test and trial function, see Section  - still has to be written).

Applying Green's theorem to Equation (3.4) gives

$\displaystyle -\int\limits_{\Omega}\boldsymbol{\sigma}\,\nabla u\cdot\nabla w\,d\Omega+\int\limits_{\Gamma}\left( \boldsymbol{\sigma}\,\nabla u \right)\cdot\boldsymbol{n}\,w\,d\Gamma = 0$ (3.5)

or

$\displaystyle \int\limits_{\Omega}\boldsymbol{\sigma}\,\nabla u\cdot\nabla w\,d\Omega=\int\limits_{\Gamma}\left( \boldsymbol{\sigma}\,\nabla u \right)\cdot\boldsymbol{n}\,w\,d\Gamma \,,$ (3.6)

Tensor notation:

If we now consider the integrand of the left hand side of Equation (3.6) in tensorial form with covariant derivatives indicated by a semi-colon in the index (see Section 2.2.4 for details) then

$\displaystyle \ensuremath{\boldsymbol{\sigma}} \ensuremath{\ensuremath{\nabla}u} = \sigma^{i}_{.j}\ensuremath{ {u}_{;i} }$ (3.7)

and

$\displaystyle \ensuremath{\ensuremath{\nabla}w} = \ensuremath{ {w}_{;k} }$ (3.8)

Now, Equations (3.7) and (3.8) represent vectors that are covariant and therefore we must convert Equation (3.7) to a contravariant vector by multiplying by the contravariant metric tensor (from $ \ensuremath{\boldsymbol{i}}$ to $ \ensuremath{\boldsymbol{x}}$ coordinates; see Sections 2.2.2 and 2.2.6) so that we can take the dot product. The final tensorial form for the left hand integral is

$\displaystyle \left( \boldsymbol{\sigma}\,\nabla u \right)\cdot\nabla w = G^{jk}\sigma^{i}_{.j}\,u_{;i}\,w_{;k}$ (3.9)

and thus Equation (3.6) becomes

$\displaystyle \int\limits_{\Omega}G^{jk}\sigma^{i}_{.j}\,u_{;i}\,w_{;k}\,d\Omega=\int\limits_{\Gamma}G^{jk}\sigma^{i}_{.j}\,u_{;i}\,n_{k}\,w\,d\Gamma$ (3.10)

or

$\displaystyle \int\limits_{\Omega}G^{jk}\sigma^{i}_{.j}\,u_{;i}\,w_{;k}\,d\Omega=\int\limits_{\Gamma}q\,w\,d\Gamma$ (3.11)

Finite element formulation:

We can now discretise the domain into finite elements i.e., $ \Omega=
\displaystyle{\bigcup_{e=1}^{E}}\Omega_{e}$ with $ \Gamma=\displaystyle{\bigcup_{f=1}^{F}}\Gamma_{f}$, and Equation (3.11) becomes

$\displaystyle \sum_{e=1}^{E}\int\limits_{\Omega_{e}}G^{jk}\sigma^{i}_{.j}\,u_{;i}\,w_{;k}\,d\Omega=\sum_{f=1}^{F}\int\limits_{\Gamma_{f}}q\,w\,d\Gamma$ (3.12)

The next step is to approximate the dependent variable, $ u$, using basis functions. To simplify this for different types of basis functions an interpolation notation is adopted. This interpolation is such that $ \psi_{n}^{\beta}\left( \boldsymbol{\xi} \right)$ are the appropriate basis functions for the type of element (e.g., bicubic Hermite, Hermite-sector, etc.) and dimension of the problem (one, two or three-dimensional). For example if $ \boldsymbol{\xi}=\left\{ \xi \right\}$ and the element is a cubic Hermite element (Section 2.1.3) then $ \psi_{n}^{\beta}\left( \boldsymbol{\xi} \right)$ are the cubic Hermite basis functions where the local node number $ n$ ranges from $ 1$ to $ 2$ and the derivative $ \beta$ ranges from $ 0$ to $ 1$. If, however, $ \boldsymbol{\xi}=\left\{ \xi_{1},\xi_{2} \right\}$ and the element is a bicubic Hermite element then $ n$ ranges from $ 1$ to $ 4$, $ \beta$ ranges from $ 0$ to $ 3$ and $ \psi_{n}^{\beta}\left( \boldsymbol{\xi} \right)$ is the appropriate basis function for the nodal variable $ u^{n}_{,\beta}$. The nodal variables are defined as follows: $ u^{n}_{,0}=u^{n}$, $ u^{n}_{,1}=\dfrac{\partial u^{n}}{\partial s_{1}}$, $ u^{n}_{,2}=\dfrac{\partial u^{n}}{\partial s_{2}}$, $ u^{n}_{,3}=\dfrac{\partial^{2} u^{n}}{\partial s_{1}\partial s_{2}}$, etc. Hence for the bicubic Hermite element the interpolation function $ \psi_{2}^{3}\left( \boldsymbol{\xi} \right)$ multiplies the nodal variable $ u^{2}_{,3}=\dfrac{\partial^{2} u^{2}}{\partial s_{1}\partial s_{2}}$ and thus the interpolation function is $ \Psi_{2}^{1}\left( \xi_{1} \right)\Psi_{1}^{1}\left( \xi_{2} \right)$. The scale factors for the Hermite interpolation are handled by the introduction of an interpolation scale factor $ \mathrm{S}\left( n,\beta \right)$ defined as follows: $ \mathrm{S}\left( n,0 \right)=1$, $ \mathrm{S}\left( n,1 \right)=\mathcal{S}\left( n,1 \right)$, $ \mathrm{S}\left( n,2 \right)=\mathcal{S}\left( n,2 \right)$, $ \mathrm{S}\left( n,3 \right)=\mathcal{S}\left( n,1 \right)\mathcal{S}\left( n,2 \right)$, etc. For Lagrangian basis functions the interpolation scale factors are all one. The general form of the interpolation notation for $ u$ is then

$\displaystyle u\left( \boldsymbol{\xi} \right)=\psi_{n}^{\beta}\left( \boldsymbol{\xi} \right)\,u^{n}_{,\beta}\,\mathrm{S}\left( n,\beta \right)$ (3.13)

We can also interpolate the other variables in a similar manner i.e.,

\begin{displaymath}\begin{split}q\left( \boldsymbol{\xi} \right) &= \psi_{o}^{\gamma}\left( \boldsymbol{\xi} \right)\,q^{o}_{,\gamma}\,\mathrm{S}\left( o,\gamma \right) \\ \boldsymbol{\sigma}\left( \boldsymbol{\xi} \right) &= \psi_{p}^{\delta}\left( \boldsymbol{\xi} \right)\,\boldsymbol{\sigma}^{p}_{,\delta}\,\mathrm{S}\left( p,\delta \right) \end{split}\end{displaymath} (3.14)

where $ \ensuremath{ \ensuremath{ {q}^{o} }_{,\gamma} }$ and $ \ensuremath{ \ensuremath{ {\ensuremath{\boldsymbol{\sigma}}}^{p} }_{,\delta} }$ are the nodal degrees-of-freedom for the flux variable and conductivity tensor.

Using a Galerkin finite element approach (where the weighting functions are chosen to be the interpolating basis functions) we have

$\displaystyle w\left( \boldsymbol{\xi} \right)=\psi_{m}^{\alpha}\left( \boldsymbol{\xi} \right)\,\mathrm{S}\left( m,\alpha \right)$ (3.15)

Spatial integration:

When dealing with integrals a similar interpolation notation is adopted in that $ d\boldsymbol{\xi}$ is the appropriate infinitesimal for the dimension of the problem. The limits of the integral are also written in bold font and indicate the appropriate number of integrals for the dimension of the problem. For example if $ \boldsymbol{\xi}=\left\{ \xi_{1},\xi_{2} \right\}$ then $ \int\limits_{\boldsymbol{0}}^{\boldsymbol{1}}x\,d\boldsymbol{\xi}=\int\limits_{0}^{1}\int\limits_{0}^{1}x\,d\xi_{1}\,d\xi_{2}$, but if $ \boldsymbol{\xi}=\left\{ \xi_{1},\xi_{2},\xi_{3} \right\}$ then $ \int\limits_{\boldsymbol{0}}^{\boldsymbol{1}}x\,d\boldsymbol{\xi}=\int\limits_{0}^{1}\int\limits_{0}^{1}\int\limits_{0}^{1}x\,d\xi_{1}\,d\xi_{2}\,d\xi_{3}$.

In order to integrate in $ \ensuremath{\boldsymbol{\xi}}$ coordinates we must convert the spatial derivatives to derivatives with respect to $ \ensuremath{\boldsymbol{\xi}}$. Using the tensor transformation equations for a covariant vector we have

$\displaystyle u_{;i}=\dfrac{\partial u}{\partial x^{i}}=\dfrac{\partial\xi^{s}}{\partial x^{i}}\dfrac{\partial u}{\partial\xi^{s}}$ (3.16)

and

$\displaystyle w_{;k}=\dfrac{\partial w}{\partial x^{k}}=\dfrac{\partial\xi^{r}}{\partial x^{k}}\dfrac{\partial w}{\partial\xi^{r}}$ (3.17)

As we only know $ \tilde{\ensuremath{\boldsymbol{\sigma}}}$, the conductivity in the $ \ensuremath{\boldsymbol{\nu}}$ (fibre) coordinate system rather than $ \ensuremath{\boldsymbol{\sigma}}$, the conductivity in the $ \ensuremath{\boldsymbol{x}}$ (geometric) coordinate system, we must transform the mixed tensor $ {\tilde{\sigma}}^{a}_{.b}$ from $ \ensuremath{\boldsymbol{\nu}}$ to $ \ensuremath{\boldsymbol{x}}$ coordinates. However, as the fibre coordinate system is defined in terms of $ \ensuremath{\boldsymbol{\xi}}$ coordinates we first transform the conductivity tensor from $ \ensuremath{\boldsymbol{\nu}}$ to $ \ensuremath{\boldsymbol{\xi}}$ coordinates i.e.,

$\displaystyle \sigma^{d}_{.e}\left( \boldsymbol{\xi} \right)=\dfrac{\partial\xi^{d}}{\partial\nu^{a}}\dfrac{\partial\nu^{b}}{\partial\xi^{e}}\,\tilde{\sigma}^{a}_{.b}\left( \boldsymbol{\xi} \right)$ (3.18)

with $ .b$ indicating that $ b$ is the ``second'' index (Section 2.1.1), and then transform the conductivity from $ \ensuremath{\boldsymbol{\xi}}$ coordinates to $ \ensuremath{\boldsymbol{x}}$ coordinates i.e.,

\begin{displaymath}\begin{split}\sigma^{i}_{.j}\left( \boldsymbol{x} \right) &= \dfrac{\partial x^{i}}{\partial\xi^{d}}\dfrac{\partial\xi^{e}}{\partial x^{j}}\,\sigma^{d}_{.e}\left( \boldsymbol{\xi} \right) \\ &= \dfrac{\partial x^{i}}{\partial\xi^{d}}\dfrac{\partial\xi^{e}}{\partial x^{j}}\dfrac{\partial\xi^{d}}{\partial\nu^{a}}\dfrac{\partial\nu^{b}}{\partial\xi^{e}}\,\tilde{\sigma}^{a}_{.b}\left( \boldsymbol{\xi} \right) \end{split}\end{displaymath} (3.19)

where $ \tilde{\ensuremath{\boldsymbol{\sigma}}}$ is interpolated in the normal way i.e.,

$\displaystyle \tilde{\boldsymbol{\sigma}}\left( \boldsymbol{\xi} \right)=\psi_{p}^{\delta}\left( \boldsymbol{\xi} \right)\,\tilde{\boldsymbol{\sigma}}^{p}_{,\delta}\,\mathrm{S}\left( p,\delta \right)$ (3.20)

Using the interpolated variables yields

\begin{multline}
\sum_{e=1}^{E}\int\limits_{\boldsymbol{0}}^{\boldsymbol{1}}G^{jk}\sigma^{i}_{.j}\dfrac{\partial\xi^{r}}{\partial x^{i}}\dfrac{\partial\xi^{s}}{\partial x^{k}}\dfrac{\partial\left( \psi_{n}^{\beta}\,u^{n}_{,\beta}\,\mathrm{S}\left( n,\beta \right) \right)}{\partial\xi^{r}}\dfrac{\partial\left( \psi_{m}^{\alpha}\,\mathrm{S}\left( m,\alpha \right) \right)}{\partial\xi^{s}}\left\vert \boldsymbol{J}\left( \boldsymbol{\xi} \right) \right\vert d\boldsymbol{\xi}\\
=\sum_{f=1}^{F}\int\limits_{\Gamma_{f}}\psi_{o}^{\gamma}\,q^{o}_{,\gamma}\,\mathrm{S}\left( o,\gamma \right)\,\psi_{m}^{\alpha}\,\mathrm{S}\left( m,\alpha \right)\,d\Gamma
\end{multline}

where $ \boldsymbol{J}\left( \boldsymbol{\xi} \right)$ is the Jacobian of the transformation from $ \boldsymbol{x}$ to $ \boldsymbol{\xi}$ coordinates.

Taking the fixed nodal degrees-of-freedom and scale factors outside the integral we have

\begin{multline}
\sum_{e=1}^{E}u^{n}_{,\beta}\,\mathrm{S}\left( n,\beta \right)\,\mathrm{S}\left( m,\alpha \right)\int\limits_{\boldsymbol{0}}^{\boldsymbol{1}}\gamma^{rs}\dfrac{\partial\psi_{n}^{\beta}}{\partial\xi^{r}}\dfrac{\partial\psi_{m}^{\alpha}}{\partial\xi^{s}}\left\vert \boldsymbol{J}\left( \boldsymbol{\xi} \right) \right\vert d\boldsymbol{\xi}\\
=\sum_{f=1}^{F}q^{o}_{,\gamma}\,\mathrm{S}\left( o,\gamma \right)\,\mathrm{S}\left( m,\alpha \right)\int\limits_{\Gamma_{f}}\psi_{o}^{\gamma}\left( \boldsymbol{\xi} \right)\,\psi_{m}^{\alpha}\left( \boldsymbol{\xi} \right)\,d\Gamma
\end{multline}

This is an equation of the form

$\displaystyle \ensuremath{\boldsymbol{K}}\ensuremath{\boldsymbol{u}}=\ensuremath{\boldsymbol{f}}$ (3.21)

where

$\displaystyle \ensuremath{\boldsymbol{f}}=\ensuremath{\boldsymbol{N}}\ensuremath{\boldsymbol{q}}$ (3.22)

The elemental stiffness matrix, $ K^{\alpha\beta}_{mn}$, is given by

$\displaystyle K^{\alpha\beta}_{mn}=\mathrm{S}\left( m,\alpha \right)\,\mathrm{S}\left( n,\beta \right)\int\limits_{\boldsymbol{0}}^{\boldsymbol{1}}\gamma^{rs}\dfrac{\partial\psi_{n}^{\beta}}{\partial\xi^{r}}\dfrac{\partial\psi_{m}^{\alpha}}{\partial\xi^{s}}\left\vert \boldsymbol{J}\left( \boldsymbol{\xi} \right) \right\vert\,d\boldsymbol{\xi}$ (3.23)

where

$\displaystyle \gamma^{rs}\left( \boldsymbol{\xi} \right)=G^{jk}\,\sigma^{i}_{.j}\,\dfrac{\partial\xi^{r}}{\partial x^{i}}\dfrac{\partial\xi^{s}}{\partial x^{k}}$ (3.24)

Note that for Laplace's equation with no conductivity or fibre fields we have

$\displaystyle \gamma^{rs}\left( \boldsymbol{\xi} \right)=G^{ik}\,\dfrac{\partial\xi^{r}}{\partial x^{i}}\dfrac{\partial\xi^{s}}{\partial x^{k}}$ (3.25)

and that for rectangular-Cartesian coordinate systems $ G^{ik}=\delta^{{i}{k}}$ and thus $ i=k$. This now gives

$\displaystyle \gamma^{rs}\left( \boldsymbol{\xi} \right)=\dfrac{\partial\xi^{r}}{\partial x^{i}}\dfrac{\partial\xi^{s}}{\partial x^{i}}=g^{rs}$ (3.26)

where $ g^{rs}$ is the contravariant metric tensor from $ \ensuremath{\boldsymbol{x}}$ to $ \ensuremath{\boldsymbol{\xi}}$ coordinates. It may thus be helpful to think about $ \gamma^{rs}$ as a scaled/transformed contravariant metric tensor.

The right hand side flux matrix, $ N^{\alpha\gamma}_{mo}$, is given by

$\displaystyle N^{\alpha\gamma}_{mo} = \mathrm{S}\left( m,\alpha \right)\,\mathrm{S}\left( o,\gamma \right)\int\limits_{\Gamma}\psi_{m}^{\alpha}\left( \boldsymbol{\xi} \right)\,\psi_{o}^{\gamma}\left( \boldsymbol{\xi} \right)\,d\Gamma$ (3.27)
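The assembly of the stiffness matrix can be sketched for the simplest possible case: the 1D Laplace problem $u''=0$ on $(0,1)$ with $u(0)=0$, $u(1)=1$, linear Lagrange elements and unit conductivity, for which the elemental stiffness matrix reduces to $\tfrac{1}{h}\begin{bmatrix}1&-1\\-1&1\end{bmatrix}$. This is an illustrative sketch, not the OpenCMISS implementation; linear elements reproduce the exact solution $u(x)=x$ at the nodes.

```python
# Assemble and solve the 1D Galerkin FEM system for u'' = 0, u(0)=0, u(1)=1.

def solve_laplace_1d(num_elems):
    n = num_elems + 1
    h = 1.0 / num_elems
    K = [[0.0] * n for _ in range(n)]
    for e in range(num_elems):                 # assemble elemental stiffness matrices
        for i, j, v in ((e, e, 1), (e, e + 1, -1), (e + 1, e, -1), (e + 1, e + 1, 1)):
            K[i][j] += v / h
    u = [0.0] * n
    u[0], u[-1] = 0.0, 1.0                     # Dirichlet boundary values
    rows = list(range(1, n - 1))               # interior (unknown) nodes
    A = [[K[i][j] for j in rows] for i in rows]
    b = [-K[i][0] * u[0] - K[i][n - 1] * u[-1] for i in rows]
    for c in range(len(rows)):                 # forward Gaussian elimination
        for r in range(c + 1, len(rows)):
            m = A[r][c] / A[c][c]
            A[r] = [a - m * ac for a, ac in zip(A[r], A[c])]
            b[r] -= m * b[c]
    for c in reversed(range(len(rows))):       # back substitution
        b[c] = (b[c] - sum(A[c][k] * b[k] for k in range(c + 1, len(rows)))) / A[c][c]
    for idx, i in enumerate(rows):
        u[i] = b[idx]
    return u

u = solve_laplace_1d(4)
# nodal values should match the exact solution u(x) = x
```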

Poisson Equations

Generalised Poisson Equation

Governing equations:

The general form of Poisson's equation with source on a domain $ \Omega$ with boundary $ \Gamma$ in OpenCMISS can be stated as

$\displaystyle \boxed{ \nabla\cdot\left( \boldsymbol{\sigma}\left( \boldsymbol{x} \right)\,\nabla u\left( \boldsymbol{x} \right) \right)+ a\left( \boldsymbol{x} \right)=0 }$ (3.28)

where $ \ensuremath{\boldsymbol{x}}\in\Omega$, $ \ensuremath{ u\ensuremath{\left( \ensuremath{\boldsymbol{x}} \right)} }$ is the dependent variable, $ \ensuremath{ \ensuremath{\boldsymbol{\sigma}}\ensuremath{\left( \ensuremath{\boldsymbol{x}} \right)} }$ is the conductivity tensor throughout the domain and $ \ensuremath{ a\ensuremath{\left( \ensuremath{\boldsymbol{x}} \right)} }$ is a source term.

Appropriate boundary conditions for Poisson's equation are specification of Dirichlet boundary conditions on the solution, $ d$ i.e.,

$\displaystyle u\left( \boldsymbol{x} \right)=d\left( \boldsymbol{x} \right) \quad \boldsymbol{x}\in\Gamma_{D},$ (3.29)

and/or Neumann conditions in terms of the solution flux in the normal direction, $ e$ i.e.,

$\displaystyle \ensuremath{ q\ensuremath{\left( \ensuremath{\boldsymbol{x}} \rig...
...math{\boldsymbol{x}} \right)} } \quad \ensuremath{\boldsymbol{x}}\in\Gamma_{N},$ (3.30)

where $ \ensuremath{ q\ensuremath{\left( \ensuremath{\boldsymbol{x}},t \right)} }$ is the flux in the normal direction, $ \ensuremath{\boldsymbol{n}}$ is the normal vector to the boundary and $ \Gamma=\ensuremath{\Gamma_{D}\cup\Gamma_{N}}$.

Weak formulation:

The corresponding weak form of Equation (3.28) can be found by integrating over the spatial domain with a test function i.e.,

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...\sigma}} \ensuremath{\ensuremath{\nabla}u} \right)}} }+a \right)}w\,d\Omega }=0$ (3.31)

where $ w$ is a suitable spatial test function.

Applying the divergence theorem in Equation (3.190) to Equation (3.31) gives

$\displaystyle -\ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }...
...nsuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\,aw\,d\Omega }=0$ (3.32)

where $ \ensuremath{\boldsymbol{n}}$ is the unit outward normal vector to the boundary $ \Gamma$.

Tensor notation:

Equation (3.32) can be written in tensor notation as

$\displaystyle -\ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }...
...nsuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\,aw\,d\Omega }=0$ (3.33)

or

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...nsuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\,aw\,d\Omega }=0$ (3.34)

where $ G^{jk}$ is the contravariant metric tensor for the spatial coordinate system.

Finite element formulation:

We can now discretise the spatial domain into finite elements i.e., $ \Omega=
\displaystyle{\bigcup_{e=1}^{E}}\Omega_{e}$ with $ \Gamma=\displaystyle{\bigcup_{f=1}^{F}}\Gamma_{f}$. Equation (3.34) becomes

$\displaystyle \ensuremath{\displaystyle\sum}_{e=1}^{E}\ensuremath{ \ensuremath{...
...uremath{ \ensuremath{ \displaystyle\int\limits_{\Omega_{e}}^{} }\,aw\,d\Omega }$ (3.35)

We can now interpolate the variables with basis functions i.e.,

\begin{displaymath}\begin{split}\ensuremath{ u\ensuremath{\left( \ensuremath{\bo...
...mathrm{S} }\ensuremath{\left( p,\delta \right)} } } \end{split}\end{displaymath} (3.36)

where $ \ensuremath{ \ensuremath{ {u}^{n} }_{,\beta} }$, $ \ensuremath{ \ensuremath{ {q}^{o} }_{,\gamma} }$, $ \ensuremath{ \ensuremath{ {\tilde{\ensuremath{\boldsymbol{\sigma}}}}^{p} }_{,\delta} }$ and $ \ensuremath{ \ensuremath{ {a}^{p} }_{,\delta} }$ are the nodal degrees-of-freedom for the variables.

For a Galerkin finite element formulation we also choose the spatial weighting function $ w$ to be equal to the basis functions i.e.,

$\displaystyle \ensuremath{ w\ensuremath{\left( \ensuremath{\boldsymbol{\xi}} \r...
... \ensuremath{ \ensuremath{ \mathrm{S} }\ensuremath{\left( m,\alpha \right)} } }$ (3.37)

Spatial integration:

Adopting the standard integration notation we can write the spatial integration term in Equation (3.35) as

\begin{multline}
\ensuremath{\displaystyle\sum}_{e=1}^{E}\ensuremath{ \displays...
...l{\xi}} \right)} } \right\vert}\,d\ensuremath{\boldsymbol{\xi}} }
\end{multline}

where $ \ensuremath{ \ensuremath{\boldsymbol{J}}\ensuremath{\left( \ensuremath{\boldsymbol{\xi}} \right)} }$ is the Jacobian of the transformation from the integration $ \ensuremath{\boldsymbol{x}}$ to $ \ensuremath{\boldsymbol{\xi}}$ coordinates.

Taking values that are constant over the integration interval outside the integration gives

\begin{multline}
\ensuremath{\displaystyle\sum}_{e=1}^{E}\ensuremath{ \ensurema...
...l{\xi}} \right)} } \right\vert}\,d\ensuremath{\boldsymbol{\xi}} }
\end{multline}

where $ \ensuremath{ \gamma^{rs}\ensuremath{\left( \ensuremath{\boldsymbol{\xi}} \right)} }$ is defined in Equations (3.25) and (3.26).

This is an equation of the form

$\displaystyle \ensuremath{\boldsymbol{K}}\ensuremath{\boldsymbol{u}}=\ensuremath{\boldsymbol{f}}$ (3.38)

where

$\displaystyle \ensuremath{\boldsymbol{f}}=\ensuremath{\boldsymbol{N}}\ensuremath{\boldsymbol{q}}+\ensuremath{\boldsymbol{R}}\ensuremath{\boldsymbol{a}}$ (3.39)

The elemental stiffness matrix, $ K^{\alpha\beta}_{mn}$, is given by

$\displaystyle K^{\alpha\beta}_{mn}=\ensuremath{ \ensuremath{ \ensuremath{ \math...
...ath{\boldsymbol{\xi}} \right)} } \right\vert}\,d\ensuremath{\boldsymbol{\xi}} }$ (3.40)

The elemental flux matrix, $ N^{\alpha\gamma}_{mo}$, is given by

$\displaystyle N^{\alpha\gamma}_{mo} =\ensuremath{ \ensuremath{ \ensuremath{ \ma...
...ath{\boldsymbol{\xi}} \right)} } \right\vert}\,d\ensuremath{\boldsymbol{\xi}} }$ (3.41)

and the elemental source matrix, $ R^{\alpha\delta}_{mp}$, is given by

$\displaystyle R^{\alpha\delta}_{mp}=\ensuremath{ \ensuremath{ \ensuremath{ \mat...
...ath{\boldsymbol{\xi}} \right)} } \right\vert}\,d\ensuremath{\boldsymbol{\xi}} }$ (3.42)
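As a concrete illustration of the assembled system $ \ensuremath{\boldsymbol{K}}\ensuremath{\boldsymbol{u}}=\ensuremath{\boldsymbol{f}}$ in Equation (3.38), the following pure-Python sketch solves the 1-D problem $ \frac{d}{dx}\left(\sigma\frac{du}{dx}\right)+a=0$ on $ (0,1)$ with $ u(0)=u(1)=0$, $ \sigma=1$ and a constant source. This is illustrative only, not OpenCMISS code; the function names, mesh size and solver are assumptions.

```python
# Minimal pure-Python sketch of the assembled system K u = f for the 1-D
# Poisson problem d/dx(sigma du/dx) + a = 0 on (0,1), u(0) = u(1) = 0,
# with sigma = 1 and constant source a0.  Linear (2-node) elements on a
# uniform mesh; illustrative only, not OpenCMISS code.

def gauss_solve(A, b):
    # plain Gaussian elimination with partial pivoting
    n = len(b)
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, n):
            m = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= m * A[k][c]
            b[r] -= m * b[k]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(A[k][c] * x[c] for c in range(k + 1, n))
        x[k] = (b[k] - s) / A[k][k]
    return x

def solve_poisson_1d(n_elem=8, a0=1.0):
    n_node = n_elem + 1
    h = 1.0 / n_elem
    K = [[0.0] * n_node for _ in range(n_node)]
    f = [0.0] * n_node
    for e in range(n_elem):               # element assembly loop
        for i in range(2):
            f[e + i] += a0 * h / 2.0      # consistent load from source a0
            for j in range(2):
                K[e + i][e + j] += (1.0 if i == j else -1.0) / h
    idx = list(range(1, n_node - 1))      # strike Dirichlet rows/columns
    A = [[K[i][j] for j in idx] for i in idx]
    b = [f[i] for i in idx]
    return [0.0] + gauss_solve(A, b) + [0.0], h

u, h = solve_poisson_1d()
# exact solution of u'' + a0 = 0, u(0) = u(1) = 0, is u = a0*x*(1-x)/2;
# linear elements with a consistent load are nodally exact here
exact = [0.5 * i * h * (1.0 - i * h) for i in range(len(u))]
err = max(abs(a - b) for a, b in zip(u, exact))
```

For this constant-coefficient 1-D problem the Galerkin solution coincides with the exact solution at the nodes, which makes the sketch easy to verify.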

Diffusion Equations

General Diffusion Equation

Governing equations:

The general form of the diffusion equation with source on a domain $ \Omega$ with boundary $ \Gamma$ in OpenCMISS can be stated as

$\displaystyle \boxed{ \ensuremath{ a\ensuremath{\left( \ensuremath{\boldsymbol{...
...+ \ensuremath{ b\ensuremath{\left( \ensuremath{\boldsymbol{x}},t \right)} }=0 }$ (3.43)

where $ \ensuremath{\boldsymbol{x}}\in\Omega$, $ \ensuremath{ u\ensuremath{\left( \ensuremath{\boldsymbol{x}},t \right)} }$ is the quantity that diffuses, $ \ensuremath{ a\ensuremath{\left( \ensuremath{\boldsymbol{x}} \right)} }$ is a temporal weighting, $ \ensuremath{ \ensuremath{\boldsymbol{\sigma}}\ensuremath{\left( \ensuremath{\boldsymbol{x}} \right)} }$ is the conductivity tensor throughout the domain and $ \ensuremath{ b\ensuremath{\left( \ensuremath{\boldsymbol{x}},t \right)} }$ is a source term.

Appropriate boundary conditions for the diffusion equation are the specification of Dirichlet boundary conditions on the solution, $ d$ i.e.,

$\displaystyle \ensuremath{ u\ensuremath{\left( \ensuremath{\boldsymbol{x}},t \r...
...th{\boldsymbol{x}},t \right)} } \quad \ensuremath{\boldsymbol{x}}\in\Gamma_{D},$ (3.44)

and/or Neumann conditions in terms of the solution flux in the normal direction, $ e$ i.e.,

$\displaystyle \ensuremath{ q\ensuremath{\left( \ensuremath{\boldsymbol{x}} \rig...
...math{\boldsymbol{x}} \right)} } \quad \ensuremath{\boldsymbol{x}}\in\Gamma_{N},$ (3.45)

where $ \ensuremath{ q\ensuremath{\left( \ensuremath{\boldsymbol{x}},t \right)} }$ is the flux in the normal direction, $ \ensuremath{\boldsymbol{n}}$ is the normal vector to the boundary and $ \Gamma=\ensuremath{\Gamma_{D}\cup\Gamma_{N}}$.

Appropriate initial conditions for the diffusion equation are the specification of an initial value of the solution, $ f$ i.e.,

$\displaystyle \ensuremath{ u\ensuremath{\left( \ensuremath{\boldsymbol{x}},0 \r...
...suremath{\boldsymbol{x}} \right)} } \quad \ensuremath{\boldsymbol{x}}\in\Omega.$ (3.46)

Weak formulation:

The corresponding weak form of Equation (3.43) can be found by integrating over the spatial domain with a test function i.e.,

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...\sigma}} \ensuremath{\ensuremath{\nabla}u} \right)}} }+a \right)}w\,d\Omega }=0$ (3.47)

where $ w$ is a suitable spatial test function.

Applying the divergence theorem in Equation (3.190) to Equation (3.47) gives

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...nsuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\,bw\,d\Omega }=0$ (3.48)

where $ \ensuremath{\boldsymbol{n}}$ is the unit outward normal vector to the boundary $ \Gamma$.

Tensor notation:

Equation (3.48) can be written in tensor notation as

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...nsuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\,bw\,d\Omega }=0$ (3.49)

or

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...nsuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\,bw\,d\Omega }=0$ (3.50)

where $ G^{jk}$ is the contravariant metric tensor for the spatial coordinate system.

Finite element formulation:

We can now discretise the spatial domain into finite elements i.e., $ \Omega=
\displaystyle{\bigcup_{e=1}^{E}}\Omega_{e}$ with $ \Gamma=\displaystyle{\bigcup_{f=1}^{F}}\Gamma_{f}$. Equation (3.50) becomes

$\displaystyle \ensuremath{\displaystyle\sum}_{e=1}^{E}\ensuremath{ \ensuremath{...
...emath{ \ensuremath{ \displaystyle\int\limits_{\Omega_{e}}^{} }\,bw\,d\Omega }=0$ (3.51)

If we now assume that the dependent variable $ u$ can be interpolated separately in space and in time we can write

$\displaystyle \ensuremath{ u\ensuremath{\left( \ensuremath{\boldsymbol{x}},t \r...
... \right)} } }\ensuremath{ \ensuremath{ {u}^{n} }\ensuremath{\left( t \right)} }$ (3.52)

or, in standard interpolation notation within an element,

$\displaystyle \ensuremath{ u\ensuremath{\left( \ensuremath{\boldsymbol{\xi}},t ...
...{ \ensuremath{ \ensuremath{ \mathrm{S} }\ensuremath{\left( n,\beta \right)} } }$ (3.53)

where $ \ensuremath{ \ensuremath{ \ensuremath{ {u}^{n} }_{,\beta} }\ensuremath{\left( t \right)} }$ are the time varying nodal degrees-of-freedom for node $ n$, global derivative $ \beta$, $ \ensuremath{ \ensuremath{ \ensuremath{ \psi_{n}^{\beta} }\ensuremath{\left( \ensuremath{\boldsymbol{\xi}} \right)} } }$ are the corresponding basis functions and $ \ensuremath{ \ensuremath{ \ensuremath{ \mathrm{S} }\ensuremath{\left( n,\beta \right)} } }$ are the scale factors.

We can also interpolate the other variables in a similar manner i.e.,

\begin{displaymath}\begin{split}\ensuremath{ q\ensuremath{\left( \ensuremath{\bo...
...mathrm{S} }\ensuremath{\left( p,\delta \right)} } } \end{split}\end{displaymath} (3.54)

where $ \ensuremath{ \ensuremath{ \ensuremath{ {q}^{o} }_{,\gamma} }\ensuremath{\left( t \right)} }$, $ \ensuremath{ \ensuremath{ {a}^{p} }_{,\delta} }$, $ \ensuremath{ \ensuremath{ {\tilde{\ensuremath{\boldsymbol{\sigma}}}}^{p} }_{,\delta} }$ and $ \ensuremath{ \ensuremath{ \ensuremath{ {b}^{p} }_{,\delta} }\ensuremath{\left( t \right)} }$ are the nodal degrees-of-freedom for the variables.

For a Galerkin finite element formulation we also choose the spatial weighting function $ w$ to be equal to the basis functions i.e.,

$\displaystyle \ensuremath{ w\ensuremath{\left( \ensuremath{\boldsymbol{\xi}} \r...
... \ensuremath{ \ensuremath{ \mathrm{S} }\ensuremath{\left( m,\alpha \right)} } }$ (3.55)

Spatial integration:

Adopting the standard integration notation we can write the spatial integration term in Equation (3.51) as


where $ \ensuremath{ \ensuremath{\boldsymbol{J}}\ensuremath{\left( \ensuremath{\boldsymbol{\xi}} \right)} }$ is the Jacobian of the transformation from the integration $ \ensuremath{\boldsymbol{x}}$ to $ \ensuremath{\boldsymbol{\xi}}$ coordinates.

Taking values that are constant over the integration interval outside the integration gives


where $ \ensuremath{ \gamma^{rs}\ensuremath{\left( \ensuremath{\boldsymbol{\xi}} \right)} }$ is defined in Equations (3.25) and (3.26).

This is an equation of the form

$\displaystyle \ensuremath{\boldsymbol{C}}\ensuremath{ \dot{\ensuremath{\boldsym...
...math{\boldsymbol{f}}\ensuremath{\left( t \right)} }=\ensuremath{\boldsymbol{0}}$ (3.56)

where

$\displaystyle \ensuremath{ \ensuremath{\boldsymbol{f}}\ensuremath{\left( t \rig...
...mbol{R}}\ensuremath{ \ensuremath{\boldsymbol{b}}\ensuremath{\left( t \right)} }$ (3.57)

The elemental damping matrix, $ C^{\alpha\beta}_{mn}$, is given by

$\displaystyle C^{\alpha\beta}_{mn} = \ensuremath{ \ensuremath{ \ensuremath{ \ma...
...ath{\boldsymbol{\xi}} \right)} } \right\vert}\,d\ensuremath{\boldsymbol{\xi}} }$ (3.58)

The elemental stiffness matrix, $ K^{\alpha\beta}_{mn}$, is given by

$\displaystyle K^{\alpha\beta}_{mn} = -\ensuremath{ \ensuremath{ \ensuremath{ \m...
...ath{\boldsymbol{\xi}} \right)} } \right\vert}\,d\ensuremath{\boldsymbol{\xi}} }$ (3.59)

The elemental flux matrix, $ N^{\alpha\gamma}_{mo}$, is given by

$\displaystyle N^{\alpha\gamma}_{mo} =\ensuremath{ \ensuremath{ \ensuremath{ \ma...
...ath{\boldsymbol{\xi}} \right)} } \right\vert}\,d\ensuremath{\boldsymbol{\xi}} }$ (3.60)

and the elemental source matrix, $ R^{\alpha\delta}_{mp}$, is given by

$\displaystyle R^{\alpha\delta}_{mp}=\ensuremath{ \ensuremath{ \ensuremath{ \mat...
...ath{\boldsymbol{\xi}} \right)} } \right\vert}\,d\ensuremath{\boldsymbol{\xi}} }$ (3.61)

Helmholtz Equation

Wave Equation

Advection-Diffusion Equation

Reaction-Diffusion Equation

Biharmonic Equation

Elasticity Class

Linear Elasticity

This section covers the finite element formulation of linear elasticity problems.

Finite Elasticity

The formulation of the finite element equations for finite elasticity (large deformation mechanics) implemented in OpenCMISS is based on the principle of virtual work. The finite element model consists of a set of non-linear algebraic equations. The non-linearity stems from the non-linear stress-strain relationship and from the quadratic terms present in the strain tensor. A typical problem in large deformation mechanics involves determining the deformed geometry of the continuum (from the finite element point of view, the mesh nodal parameters) from a known undeformed geometry, subject to boundary conditions and satisfying the stress-strain (constitutive) relationship.

The boundary conditions can be either Dirichlet (displacement), Neumann (force) or a combination of them, known as mixed boundary conditions. Displacement boundary conditions are generally nodal-based. However, force boundary conditions can take any of the following forms or a combination of them - nodal-based, distributed load (e.g. pressure) or force acting at a discrete point on the boundary. In the latter two forms, the equivalent nodal forces are determined using the method of work equivalence and the forces so obtained are then added to the right-hand side or residual vector of the linear equation system.

There are numerous ways of describing the mechanical characteristics of deformable materials in large deformation mechanics or finite elasticity analyses. A predominantly used form for representing constitutive properties is the strain energy density function. This function gives the energy required to deform a unit volume (hence energy density) of the deformable continuum as a function of the Green-Lagrange strain tensor components or variables derived from it, such as invariants or principal stretches. A material that has a strain energy density function is known as a hyperelastic or Green-elastic material.

The deformed equilibrium state should also give the minimum total elastic potential energy. One can therefore formulate the finite element equations using the variational method, where an extremum of a functional (in this case the total strain energy) is determined to obtain the mesh nodal parameters of the deformed continuum. It is also possible to derive the finite element equations starting from the governing equilibrium equations, known as Cauchy's equation of motion. The weak form of the governing equations is obtained by multiplying them with suitable weighting functions and integrating over the domain (method of weighted residuals). If interpolation or shape functions are used as weighting functions, then the method is called the Galerkin finite element method. All three approaches (virtual work, variational method and Galerkin formulation) result in the same finite element equations.

In the following sections the derivation of the kinematic relationships of deformation, energy conjugacy, constitutive relationships and the final form of the finite element equations using the virtual work approach will be discussed in detail.

Kinematics of Deformation

In order to track the deformation of an infinitesimal length at a particle of the continuum, two coordinate systems are defined: an arbitrary orthogonal spatial coordinate system, which is fixed in space, and a material coordinate system, which is attached to the continuum and deforms with it. The material coordinate system is, in general, a curvilinear coordinate system but must have mutually orthogonal axes in the undeformed state. However, in the deformed state, these axes are no longer orthogonal as they deform with the continuum (fig 1). In addition to these coordinate systems, there are also finite element coordinate systems (one for each element). These coordinates are normalised and vary from 0.0 to 1.0. The following notations are used to represent the various coordinate systems and the coordinates of a particle of the continuum.

$ Y_{1}$-$ Y_{2}$-$ Y_{3}$ - fixed spatial coordinate system axes - orthogonal
$ N_{1}$-$ N_{2}$-$ N_{3}$ - deforming material coordinate system axes - orthogonal in the undeformed state
$ \Xi_{1}$-$ \Xi_{2}$-$ \Xi_{3}$ - element coordinate system - non-orthogonal in general and deforms with continuum

$ x_{1}$-$ x_{2}$-$ x_{3}$ [ $ \ensuremath{\boldsymbol{x}}$] - spatial coordinates of a particle in the undeformed state wrt $ Y_{1}$-$ Y_{2}$-$ Y_{3}$ CS
$ z_{1}$-$ z_{2}$-$ z_{3}$ [ $ \ensuremath{\boldsymbol{z}}$] - spatial coordinates of the same particle in the deformed state wrt $ Y_{1}$-$ Y_{2}$-$ Y_{3}$ CS
$ \nu_{1}$-$ \nu_{2}$-$ \nu_{3}$ [ $ \ensuremath{\boldsymbol{\nu}}$] - material coordinates of the particle wrt $ N_{1}$-$ N_{2}$-$ N_{3}$ CS (these do not change)
$ \xi_{1}$-$ \xi_{2}$-$ \xi_{3}$ [ $ \ensuremath{\boldsymbol{\xi}}$] - element coordinates of the particle wrt $ \Xi_{1}$-$ \Xi_{2}$-$ \Xi_{3}$ CS (these too do not change)

Since the directional vectors of the material coordinate system at any given point in the undeformed state are mutually orthogonal, the relationship between spatial $ \ensuremath{\boldsymbol{x}}$ and material $ \ensuremath{\boldsymbol{\nu}}$ coordinates is simply a rotation. The user must define the undeformed material coordinate system. Typically a nodal-based interpolatable field known as fibre information (fibre, imbrication and sheet angles) is input to OpenCMISS. These angles define how much the reference or default material coordinate system must be rotated about the reference material axes. The reference material coordinate system at a given point is defined as follows. The first direction $ \nu_{1}$ is in the $ \xi_{1}$ direction. The second direction, $ \nu_{2}$, is in the $ \xi_{1}-\xi_{2}$ plane but orthogonal to $ \nu_{1}$. Finally the third direction $ \nu_{3}$ is determined to be normal to both $ \nu_{1}$ and $ \nu_{2}$. Once the reference coordinate system is defined, it is then rotated about $ \nu_{3}$ in the counter-clockwise direction by an angle equal to the interpolated fibre value at the point. This is followed by a rotation about the new $ \nu_{2}$ axis, again in the counter-clockwise direction, by an angle equal to the imbrication value. The final rotation is performed about the current $ \nu_{1}$ by an angle defined by the interpolated sheet value. Note that before a rotation is carried out about an arbitrary axis one must first align (transform) the axis of rotation with one of the spatial coordinate system axes. Once the rotation is done, the rotated (material) coordinate system must be inverse-transformed.
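The rotation sequence for the fibre-information angles (fibre, imbrication and sheet) can be sketched as composed rotation matrices; the angle values below are illustrative, and the middle rotation is taken to be the imbrication angle.

```python
import math

# Sketch of the material-axis rotation sequence: rotate about nu3 by the
# fibre angle, about the new nu2 by the imbrication angle, then about the
# current nu1 by the sheet angle. Composing intrinsic rotations
# corresponds to right-multiplying the rotation matrices.

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

fibre, imbrication, sheet = 0.5, 0.2, 0.3   # radians, illustrative values
R = matmul(rot_z(fibre), matmul(rot_y(imbrication), rot_x(sheet)))
# R is orthogonal, so the rotated material axes remain orthonormal
RtR = matmul(transpose(R), R)
```

Because each factor is orthogonal, so is the composite $ R$, which is why the rotated material coordinate system stays orthonormal in the undeformed state.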

Having defined the undeformed orthogonal material coordinate system, the metric tensor $ \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbol{x}}}{\ensuremath{\partial}\ensuremath{\boldsymbol{\nu}}} }$ can be determined. As mentioned, the tensor $ \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbol{x}}}{\ensuremath{\partial}\ensuremath{\boldsymbol{\nu}}} }$ contains the rotation required to align the material coordinate system with the spatial coordinate system. This tensor is therefore orthogonal. A similar metric tensor can be defined to relate the deformed coordinates $ \ensuremath{\boldsymbol{z}}$ of the point to its material coordinates $ \ensuremath{\boldsymbol{\nu}}$. Note that the latter coordinates do not change as the continuum deforms and, more importantly, this tensor is not orthogonal. The metric tensor $ \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbol{z}}}{\ensuremath{\partial}\ensuremath{\boldsymbol{\nu}}} }$ is called the deformation gradient tensor and is denoted $ \ensuremath{\boldsymbol{F}}$.

$\displaystyle \ensuremath{\boldsymbol{F}}=\ensuremath{ \dfrac{\ensuremath{\part...
...suremath{\boldsymbol{z}}}{\ensuremath{\partial}\ensuremath{\boldsymbol{\nu}}} }$ (3.62)

It can be shown that the deformation gradient tensor contains rotation when an infinitesimal length $ \ensuremath{\boldsymbol{dr_{0}}}$ in the undeformed state undergoes deformation. Since rotation does not contribute to strain, it must be removed from the deformation gradient tensor. Any non-singular tensor can be decomposed into the product of an orthogonal tensor and a symmetric tensor (known as the polar decomposition). In other words, the same deformation can be achieved by first rotating $ \ensuremath{\boldsymbol{dr_{0}}}$ and then stretching (shearing and scaling), or vice versa. Thus, the deformation gradient tensor can be given by,

$\displaystyle \ensuremath{\boldsymbol{F}}=\ensuremath{ \dfrac{\ensuremath{\part...
...math{\boldsymbol{U}}=\ensuremath{\boldsymbol{V}}\ensuremath{\boldsymbol{R_{1}}}$ (3.63)

The rotation present in the deformation gradient tensor can be removed by either right or left multiplication of $ \ensuremath{\boldsymbol{F}}$. The resulting tensors lead to different strain measures. The right Cauchy-Green deformation tensor $ \ensuremath{\boldsymbol{C}}$ is obtained from,

$\displaystyle \ensuremath{\boldsymbol{C}}=\ensuremath{{[\ensuremath{\boldsymbol...
...l{U}}=\ensuremath{{\ensuremath{\boldsymbol{U}}}^{T}}\ensuremath{\boldsymbol{U}}$ (3.64)

Similarly, the left Cauchy-Green deformation tensor (also known as the Finger tensor) $ \boldsymbol{B}$ is obtained from the left multiplication of $ \boldsymbol{F}$,

$\displaystyle \ensuremath{\boldsymbol{B}}=[\ensuremath{\boldsymbol{V}}\ensurema...
...^{T}}=\ensuremath{\boldsymbol{V}}\ensuremath{{\ensuremath{\boldsymbol{V}}}^{T}}$ (3.65)

Note that both $ \ensuremath{\boldsymbol{R}}$ and $ \ensuremath{\boldsymbol{R_{1}}}$ are orthogonal tensors and therefore satisfy the following condition,

$\displaystyle \ensuremath{{\ensuremath{\boldsymbol{R}}}^{T}}\ensuremath{\boldsy...
...}\ensuremath{{\ensuremath{\boldsymbol{R_{1}}}}^{T}}=\ensuremath{\boldsymbol{I}}$ (3.66)

Since no rotation is present in either $ \ensuremath{\boldsymbol{C}}$ or $ \ensuremath{\boldsymbol{B}}$, they can be used to define suitable strain measures as follows,

$\displaystyle \ensuremath{\boldsymbol{E}}=\frac{1}{2}\ensuremath{\left( \ensure...
... \right)}= \frac{1}{2}(\ensuremath{\boldsymbol{C}}-\ensuremath{\boldsymbol{I}})$ (3.67)

and

$\displaystyle \ensuremath{\boldsymbol{e}}=\frac{1}{2}\ensuremath{\left\{ \ensur...
...th{\left( \ensuremath{\boldsymbol{I}}-\ensuremath{\boldsymbol{B}}^{-1} \right)}$ (3.68)

where $ \ensuremath{\boldsymbol{E}}$ and $ \ensuremath{\boldsymbol{e}}$ are called Green and Almansi strain tensors respectively. Also note that $ \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbol{x}}}{\ensuremath{\partial}\ensuremath{\boldsymbol{\nu}}} }$ is an orthogonal tensor.
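As a worked example of these definitions, the sketch below computes $ \ensuremath{\boldsymbol{C}}=\ensuremath{{\ensuremath{\boldsymbol{F}}}^{T}}\ensuremath{\boldsymbol{F}}$ and the Green strain $ \ensuremath{\boldsymbol{E}}=\frac{1}{2}(\ensuremath{\boldsymbol{C}}-\ensuremath{\boldsymbol{I}})$ for a simple shear; the shear magnitude is an illustrative assumption.

```python
# Sketch: right Cauchy-Green tensor C = F^T F and Green strain
# E = (C - I)/2 for the simple shear z1 = x1 + g*x2, z2 = x2, z3 = x3.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

g = 0.3                      # shear magnitude, illustrative
F = [[1.0, g, 0.0],          # deformation gradient dz/dx
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
C = matmul(transpose(F), F)  # rotation-free measure of the deformation
E = [[0.5 * (C[i][j] - (1.0 if i == j else 0.0)) for j in range(3)]
     for i in range(3)]
# the g*g/2 term in E[1][1] is the quadratic contribution that makes
# large-deformation problems geometrically non-linear
```

Note that the shear produces not only the expected $ E_{12}=g/2$ component but also a normal component $ E_{22}=g^{2}/2$, the quadratic term responsible for the geometric non-linearity discussed above.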

It is now necessary to establish a relationship between strain and displacement. Referring to figure 1,

$\displaystyle \ensuremath{\boldsymbol{z}}=\ensuremath{\boldsymbol{x}}+\ensuremath{\boldsymbol{u}}$ (3.69)

where $ \boldsymbol{u}$ is the displacement vector.

Differentiating Equation (3.69) using the chain rule,

$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbol{...
...suremath{\boldsymbol{x}}}{\ensuremath{\partial}\ensuremath{\boldsymbol{\nu}}} }$ (3.70)

Substituting Equation (3.70) into Equation (3.67),

$\displaystyle \ensuremath{\boldsymbol{E}}=\frac{1}{2}\ensuremath{\left\{ \ensur...
...\partial}\ensuremath{\boldsymbol{\nu}}} }-\ensuremath{\boldsymbol{I}} \right\}}$ (3.71)

Simplifying,

$\displaystyle \ensuremath{\boldsymbol{E}}=\frac{1}{2}\ensuremath{{\ensuremath{ ...
...suremath{\boldsymbol{x}}}{\ensuremath{\partial}\ensuremath{\boldsymbol{\nu}}} }$ (3.72)

As can be seen from Equation (3.72), the displacement gradient tensor $ \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbol{u}}}{\ensuremath{\partial}\ensuremath{\boldsymbol{x}}} }$ is defined with respect to the undeformed coordinates $ \ensuremath{\boldsymbol{x}}$. This means that the strain tensor $ \ensuremath{\boldsymbol{E}}$ has a Lagrangian description and hence it is also called the Green-Lagrange strain tensor.

A similar derivation can be employed to establish a relationship between the Almansi and displacement gradient tensors and the final form is given by,

$\displaystyle \ensuremath{\boldsymbol{e}}=\frac{1}{2}\ensuremath{ \dfrac{\ensur...
...ensuremath{\boldsymbol{u}}}{\ensuremath{\partial}\ensuremath{\boldsymbol{z}}} }$ (3.73)

The displacement gradient tensor terms in Equation (3.73) are defined with respect to the deformed coordinates $ \ensuremath{\boldsymbol{z}}$ and therefore the strain tensor has an Eulerian description. Thus it is also known as the Almansi-Euler strain tensor.

Energy Conjugacy

Constitutive models

Principle of Virtual Work

Elastic potential energy, or simply elastic energy, associated with the deformation can be given by strain and its energetically conjugate stress. Note that the Cauchy stress tensor is energetically conjugate to the Almansi-Euler strain tensor, and the second Piola-Kirchhoff (2PK) stress tensor is energetically conjugate to the Green-Lagrange strain tensor. Thus, the total internal energy due to strain in the body at the deformed state (fig. 3.1) can be given by,

$\displaystyle W_{int}=\ensuremath{ \ensuremath{ \displaystyle\int\limits_{0}^{v} }\,(\ensuremath{\boldsymbol{e}}:\ensuremath{\boldsymbol{\sigma}})\,dv }$ (3.74)

where $ \boldsymbol{e}$ and $ \boldsymbol{\sigma}$ are the Almansi strain tensor and the Cauchy stress tensor respectively.

If the deformed body is further deformed by introducing virtual displacements, then the new internal elastic energy can be given by,

$\displaystyle {W_{int}+\delta W_{int}}=\ensuremath{ \ensuremath{ \displaystyle\...
...\ensuremath{\boldsymbol{(e+\delta{e})}}:\ensuremath{\boldsymbol{\sigma}}]\,dv }$ (3.75)

Subtracting Equation (3.74) from Equation (3.75),

$\displaystyle \delta W_{int}=\ensuremath{ \ensuremath{ \displaystyle\int\limits...
...\boldsymbol{\delta \epsilon}} : \ensuremath{\boldsymbol{\sigma}} \right)}\,dv }$ (3.76)

Using Equation (3.73) for the virtual strain,

$\displaystyle \ensuremath{\boldsymbol{\delta e}}=\ensuremath{ \dfrac{\ensuremat...
...ath{\boldsymbol{\delta u}}}{\ensuremath{\partial}\ensuremath{\boldsymbol{z}}} }$ (3.77)

Since virtual displacements are infinitesimally small, the quadratic terms in Equation (3.77) can be neglected. The resulting strain tensor, known as the small strain tensor $ \boldsymbol{\epsilon}$, can be given as,

$\displaystyle \ensuremath{\boldsymbol{\delta \epsilon}}=\ensuremath{ \dfrac{\en...
...oldsymbol{\delta u}}}{\ensuremath{\partial}\ensuremath{\boldsymbol{z}}} }}^{T}}$ (3.78)

Since both $ \ensuremath{\boldsymbol{\sigma}}$ and $ \ensuremath{\boldsymbol{\delta \epsilon}}$ are symmetric, new vectors are defined by inserting tensor components as follows,

$\displaystyle \ensuremath{\boldsymbol{\delta \epsilon}}=\ensuremath{{\ensuremat...
...pace{4 pt} 2\delta \sigma_{23} \hspace{4 pt} 2\delta \sigma_{13} \right]}}^{T}}$ (3.79)

Substituting Equation (3.79) into Equation (3.76),

$\displaystyle \delta W_{int}=\ensuremath{ \ensuremath{ \displaystyle\int\limits...
...dsymbol{\delta \epsilon}}}^{T}} \ensuremath{\boldsymbol{\sigma}} \right)}\,dv }$ (3.80)

The strain vector $ \ensuremath{\boldsymbol{\delta \epsilon}}$ can be related to displacement vector using the following equation,

$\displaystyle \ensuremath{\boldsymbol{\delta \epsilon}}=\ensuremath{\boldsymbol{D}} \ensuremath{\boldsymbol{\delta u}}$ (3.81)

where $ \ensuremath{\boldsymbol{D}}$ and $ \ensuremath{\boldsymbol{u}}$ are the linear differential operator and the displacement vector respectively, given by,

\begin{displaymath}\begin{array}{c} \ensuremath{\boldsymbol{D}} \end{array} = \e...
...partial}}{\ensuremath{\partial}z_{1}} } \\ \end{array} \right)}\end{displaymath} (3.82)

$\displaystyle \ensuremath{\boldsymbol{\delta u}}=\ensuremath{{\ensuremath{\left...
...elta u_{1} \hspace{4 pt} \delta u_{2} \hspace{4 pt} \delta u_{3} \right)}}^{T}}$ (3.83)

The virtual displacement is a finite element field and hence the value at any point can be obtained by interpolating nodal virtual displacements.

$\displaystyle \ensuremath{\boldsymbol{\delta u}}=\ensuremath{\boldsymbol{\Phi}}\ensuremath{\boldsymbol{\Delta}}$ (3.84)

Fluid Mechanics Class

Burgers's Equations

Generalised Burgers's Equation

Governing equations:

For a given velocity $ \ensuremath{ u\ensuremath{\left( x,t \right)} }$ and kinematic viscosity $ \nu$, Burgers's equation in one dimension is

$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}u}{\ensuremath{\partial}...
...ensuremath{ \dfrac{\ensuremath{\partial}^{2} u}{\ensuremath{\partial}{x}^{2}} }$ (3.85)

The general form of this equation in OpenCMISS is

$\displaystyle \ensuremath{ a\ensuremath{\left( \ensuremath{\boldsymbol{x}} \rig...
...{\left( \ensuremath{\boldsymbol{x}},t \right)} }} }=\ensuremath{\boldsymbol{0}}$ (3.86)

where $ \ensuremath{ \ensuremath{\boldsymbol{u}}\ensuremath{\left( \ensuremath{\boldsymbol{x}},t \right)} }$ is the velocity vector and $ \ensuremath{ a\ensuremath{\left( \ensuremath{\boldsymbol{x}} \right)} }$, $ \ensuremath{ b\ensuremath{\left( \ensuremath{\boldsymbol{x}} \right)} }$ and $ \ensuremath{ c\ensuremath{\left( \ensuremath{\boldsymbol{x}} \right)} }$ are material parameters. The standard form of Burgers's equation can be found with $ a=1$, $ b=-\nu$ and $ c=1$.
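The standard $ a=1$, $ b=-\nu$, $ c=1$ case can be illustrated with a simple finite-difference sketch; the grid, time step and initial profile below are illustrative assumptions, not the OpenCMISS discretisation.

```python
import math

# Finite-difference sketch of the 1-D viscous Burgers's equation
#   du/dt + u du/dx = nu d2u/dx2
# on a periodic domain. Explicit Euler in time, central differences in
# space; illustrative only.

def burgers_step(u, dx, dt, nu):
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        um, up = u[i - 1], u[(i + 1) % n]       # periodic neighbours
        adv = u[i] * (up - um) / (2.0 * dx)     # non-linear advection term
        dif = nu * (up - 2.0 * u[i] + um) / dx ** 2
        new[i] = u[i] + dt * (dif - adv)
    return new

n, nu = 64, 0.1
dx, dt = 1.0 / n, 0.001        # dt respects the diffusive limit dx^2/(2 nu)
u = [math.sin(2.0 * math.pi * i * dx) for i in range(n)]
for _ in range(200):
    u = burgers_step(u, dx, dt, nu)
# the viscosity damps the initial sine profile, so its amplitude decays
# while the non-linear term steepens the front
```

The competition visible here, with advection steepening the profile while viscosity smooths and damps it, is the behaviour the weak and finite element formulations below must reproduce.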

Appropriate boundary conditions for Burgers's equation are the specification of Dirichlet boundary conditions on the solution, $ \ensuremath{\boldsymbol{d}}$ i.e.,

$\displaystyle \ensuremath{ \ensuremath{\boldsymbol{u}}\ensuremath{\left( \ensur...
...th{\boldsymbol{x}},t \right)} } \quad \ensuremath{\boldsymbol{x}}\in\Gamma_{D},$ (3.87)

and/or Neumann conditions in terms of the solution flux in the normal direction, $ \ensuremath{\boldsymbol{e}}$ i.e.,

$\displaystyle \ensuremath{ \ensuremath{\boldsymbol{q}}\ensuremath{\left( \ensur...
...th{\boldsymbol{x}},t \right)} } \quad \ensuremath{\boldsymbol{x}}\in\Gamma_{N},$ (3.88)

where $ \ensuremath{ \ensuremath{\boldsymbol{q}}\ensuremath{\left( \ensuremath{\boldsymbol{x}},t \right)} }$ is the flux in the normal direction, $ \ensuremath{\boldsymbol{n}}$ is the normal vector to the boundary and $ \Gamma=\ensuremath{\Gamma_{D}\cup\Gamma_{N}}$.

Appropriate initial conditions for Burgers's equation are the specification of an initial value of the solution, $ \ensuremath{\boldsymbol{f}}$ i.e.,

$\displaystyle \ensuremath{ \ensuremath{\boldsymbol{u}}\ensuremath{\left( \ensur...
...suremath{\boldsymbol{x}} \right)} } \quad \ensuremath{\boldsymbol{x}}\in\Omega.$ (3.89)

Weak formulation:

The corresponding weak form of Equation (3.86) can be found by integrating over the domain with test functions i.e.,

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...}} } \right)}\ensuremath{\boldsymbol{w}}\,d\Omega }=\ensuremath{\boldsymbol{0}}$ (3.90)

where $ \ensuremath{\boldsymbol{w}}$ are suitable spatial test functions.

Applying the divergence theorem in Equation (3.190) to Equation (3.90) gives

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...}} } \right)}\ensuremath{\boldsymbol{w}}\,d\Omega }=\ensuremath{\boldsymbol{0}}$ (3.91)

Tensor notation:

Equation (3.91) can be written in tensor notation as

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...jk}u_{j}\ensuremath{ {u_{i}}_{;k} }w_{i}\,d\Omega }=\ensuremath{\boldsymbol{0}}$ (3.92)

or

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...jk}u_{j}\ensuremath{ {u_{i}}_{;k} }w_{i}\,d\Omega }=\ensuremath{\boldsymbol{0}}$ (3.93)

or


where $ G^{jk}$ is the contravariant metric tensor and $ \ensuremath{ \Gamma^{i}_{jk}
}$ is the Christoffel symbol for the spatial coordinates.

Finite element formulation:

We can now discretise the domain into finite elements i.e., $ \Omega=
\displaystyle{\bigcup_{e=1}^{E}}\Omega_{e}$ with $ \Gamma=\displaystyle{\bigcup_{f=1}^{F}}\Gamma_{f}$. Equation (3.99) now becomes


If we assume that we are in rectangular cartesian coordinates then the Christoffel symbols are all zero and $ G^{jk}=\ensuremath{\delta^{{j}{k}}}$. Equation (3.101) thus becomes

$\displaystyle \ensuremath{\displaystyle\sum}_{e=1}^{E}\ensuremath{ \ensuremath{...
...,cu^{k}\ensuremath{ {u_{i}}_{,k} } w_{i}\,d\Omega }=\ensuremath{\boldsymbol{0}}$ (3.94)

If we now assume that the dependent variable $ \ensuremath{\boldsymbol{u}}$ can be interpolated separately in space and in time we can write

$\displaystyle \boldsymbol{u}\left( \boldsymbol{x},t \right)=\psi_{n}\left( \boldsymbol{x} \right)\,\boldsymbol{u}^{n}\left( t \right)$ (3.95)

or, in standard interpolation notation within an element,

$\displaystyle u_{i}\left( \boldsymbol{\xi},t \right)=\psi_{in}^{\beta}\left( \boldsymbol{\xi} \right)\,u^{n}_{i,\beta}\left( t \right)\,\mathrm{S}\left( i,n,\beta \right)$ (3.96)

and

$\displaystyle \ensuremath{ u^{i}\ensuremath{\left( \ensuremath{\boldsymbol{\xi}...
...\ensuremath{ \ensuremath{ \mathrm{S} }\ensuremath{\left( j,n,\beta \right)} } }$ (3.97)

where $ \ensuremath{ \ensuremath{ \ensuremath{ {u}^{n} }_{i,\beta} }\ensuremath{\left( t \right)} }$ are the time varying nodal degrees-of-freedom for velocity component $ i$, node $ n$, global derivative $ \beta$, $ \ensuremath{ \ensuremath{ \ensuremath{ \psi_{in}^{\beta} }\ensuremath{\left( \ensuremath{\boldsymbol{\xi}} \right)} } }$ are the corresponding basis functions and $ \ensuremath{ \ensuremath{ \ensuremath{ \mathrm{S} }\ensuremath{\left( i,n,\beta \right)} } }$ are the scale factors.
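As a concrete (hypothetical) instance of this nodal interpolation, the sketch below evaluates $ u(\xi)=\psi_{n}(\xi)\,u^{n}$ with quadratic Lagrange basis functions on $ 0\le\xi\le 1$; for Lagrange (non-Hermite) bases the scale factors are unity, so they are omitted:

```python
# Sketch of elemental interpolation u(xi) = psi_n(xi) * u^n using
# quadratic Lagrange basis functions with local nodes at xi = 0, 1/2, 1.
# The nodal values used below are arbitrary illustrative numbers.

def psi(xi):
    """Quadratic Lagrange basis functions evaluated at xi."""
    return (2*(xi - 0.5)*(xi - 1.0),   # equals 1 at xi = 0
            -4*xi*(xi - 1.0),          # equals 1 at xi = 1/2
            2*xi*(xi - 0.5))           # equals 1 at xi = 1

def interpolate(nodal_u, xi):
    """u(xi) = sum_n psi_n(xi) * u^n"""
    return sum(p*u for p, u in zip(psi(xi), nodal_u))
```

The basis reproduces the nodal values exactly and sums to one at every $ \xi$ (partition of unity), which is what makes the interpolated field continuous between elements.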

We can also interpolate the other variables in a similar manner i.e.,

\begin{displaymath}\begin{split}\ensuremath{ q_{i}\ensuremath{\left( \ensuremath...
...mathrm{S} }\ensuremath{\left( p,\delta \right)} } } \end{split}\end{displaymath} (3.98)

where $ \ensuremath{ \ensuremath{ \ensuremath{ {q_{i}}^{o} }_{,\gamma} }\ensuremath{\left( t \right)} }$, $ \ensuremath{ \ensuremath{ {a}^{p} }_{,\delta} }$, $ \ensuremath{ \ensuremath{ {b}^{p} }_{,\delta} }$ and $ \ensuremath{ \ensuremath{ {c}^{p} }_{,\delta} }$ are the nodal degrees-of-freedom for the variables.

For a Galerkin finite element formulation we also choose the spatial weighting function $ \ensuremath{\boldsymbol{w}}$ to be equal to the basis functions i.e.,

$\displaystyle w_{i}\left( \boldsymbol{\xi} \right)=\psi_{im}^{\alpha}\left( \boldsymbol{\xi} \right)\,\mathrm{S}\left( i,m,\alpha \right)$ (3.99)

Spatial integration:

Adopting the standard integration notation we can write the spatial integration term in Equation ([*]) as

Poiseuille Flow Equations

Governing equations:

The Poiseuille equation describes the pressure drop that occurs due to viscous friction in laminar fluid flow through a straight pipe. Poiseuille flow can be derived from the Navier-Stokes equations in cylindrical coordinates ( $ \ensuremath{\boldsymbol{u}}=\ensuremath{\left( u_{r},u_{\theta},u_{x} \right)}$) by assuming that the flow is:

  1. Steady i.e., $ \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbol{u}}}{\ensuremath{\partial}t} }=0$
  2. The radial and swirl components of fluid velocity are zero i.e., $ u_{r}=u_{\theta}=0$
  3. The flow is axisymmetric i.e., $ \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbol{u}}}{\ensuremath{\partial}\theta} }=0$ and fully developed i.e., $ \ensuremath{ \dfrac{\ensuremath{\partial}\ensuremath{\boldsymbol{u}}}{\ensuremath{\partial}x} }=0$.

The Poiseuille equation is:

$\displaystyle \boxed{ \Delta{p}=\dfrac{8\mu LQ}{\pi R^{4}} }$ (3.100)

where $ \Delta{p}$ is the pressure drop along the pipe, $ \mu$ is the viscosity, $ L$ is the length of the pipe, $ Q$ is the volumetric flow rate and $ R$ is the radius of the pipe.

Rearranging Equation (3.108) gives

$\displaystyle Q=\dfrac{\pi R^{2}R^{2}\Delta{p}}{8\mu L}$ (3.101)

The volumetric flow rate is the average axial velocity, $ \bar{u}$, multiplied by the cross-sectional area of the pipe, $ A$, i.e., $ Q=\bar{u}A$. Equation (3.108) now becomes

$\displaystyle \bar{u}A=\dfrac{AR^{2}}{8\mu}\dfrac{\Delta{p}}{L}$ (3.102)

or

$\displaystyle \dfrac{R^{2}}{8\mu} \ensuremath{\ensuremath{\nabla}p} -\bar{u}=0$ (3.103)

as

$\displaystyle \dfrac{\Delta p}{L}=\ensuremath{ \dfrac{\ensuremath{\partial}p}{\ensuremath{\partial}x} }= \ensuremath{\ensuremath{\nabla}p}$ (3.104)
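As a quick numerical check of these relations (with illustrative, not physiological, parameter values), the sketch below computes $ \Delta{p}$ from the boxed Poiseuille equation and verifies that $ \bar{u}A$ recovers $ Q$:

```python
import math

# Numerical sketch of Delta_p = 8*mu*L*Q/(pi*R^4) and its rearrangement
# u_bar = (R^2/(8*mu)) * Delta_p/L. All parameter values are arbitrary
# illustrative numbers, not data from OpenCMISS.

def pressure_drop(mu, L, Q, R):
    """Poiseuille pressure drop along a straight pipe."""
    return 8.0*mu*L*Q/(math.pi*R**4)

mu, L, R = 3.5e-3, 0.1, 2.0e-3     # Pa.s, m, m (illustrative)
Q = 1.0e-7                         # m^3/s
dp = pressure_drop(mu, L, Q, R)

# Consistency check: u_bar*A should recover Q, with A = pi*R^2
A = math.pi*R**2
u_bar = R**2/(8.0*mu) * dp/L
```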

Weak formulation:

The weak form of the differential form of the Poiseuille equation, Equation (3.108), can be written as:

$\displaystyle \int\limits_{\Omega}\left( \dfrac{R^{2}}{8\mu}\nabla p-\bar{u} \right)w\,d\Omega=0$ (3.105)

Finite element formulation:

We can now discretise the domain into finite elements. Equation (3.113) becomes

$\displaystyle \sum_{e=1}^{E}\int\limits_{\Omega_{e}}\left( \dfrac{R^{2}}{8\mu}\nabla p-\bar{u} \right)w\,d\Omega=0$ (3.106)

or

$\displaystyle \sum_{e=1}^{E}\left[ \int\limits_{\Omega_{e}}\dfrac{R^{2}}{8\mu}\nabla p\,w\,d\Omega-\int\limits_{\Omega_{e}}\bar{u}\,w\,d\Omega \right]=0$ (3.107)

We can now interpolate the dependent variables using basis functions

\begin{displaymath}\begin{split}p\left( \xi \right) &= \psi_{n}^{\beta}\left( \xi \right)\,p^{n}_{,\beta}\,\mathrm{S}\left( n,\beta \right) \\ \bar{u}\left( \xi \right) &= u_{e} \end{split}\end{displaymath} (3.108)

Using a Galerkin finite element approach (where the weighting functions are chosen to be the interpolating basis functions) we have

$\displaystyle w\left( \xi \right)=\psi_{m}^{\alpha}\left( \xi \right)\,\mathrm{S}\left( m,\alpha \right)$ (3.109)

Spatial integration:

Adopting the standard integration notation we can write the spatial integration term for each element in Equation (3.119) as

$\displaystyle \int\limits_{0}^{1}\left( \dfrac{R^{2}}{8\mu}\dfrac{\partial p}{\partial x}-\bar{u} \right)w\,\left\vert J\left( \xi \right) \right\vert\,d\xi=0$
or, taking the fixed degrees-of-freedom and scale factors outside the integrals, we have

$\displaystyle \dfrac{R^{2}}{8\mu}\,\mathrm{S}\left( m,\alpha \right)\mathrm{S}\left( n,\beta \right)\,p^{n}_{,\beta}\int\limits_{0}^{1}\psi_{m}^{\alpha}\,\dfrac{\partial \psi_{n}^{\beta}}{\partial x}\,\left\vert J\left( \xi \right) \right\vert\,d\xi-\mathrm{S}\left( m,\alpha \right)\,u_{e}\int\limits_{0}^{1}\psi_{m}^{\alpha}\,\left\vert J\left( \xi \right) \right\vert\,d\xi=0$
This is an elemental equation of the form

$\displaystyle K^{\alpha\beta}_{mn}\ensuremath{ \ensuremath{ {p}^{n} }_{,\beta} }-f^{\alpha}_{m}\ensuremath{ u_{e} }=\ensuremath{\boldsymbol{0}}$ (3.110)

or

$\displaystyle \left[ \begin{array}{cc} K^{\alpha\beta}_{mn} & -f^{\alpha}_{m} \...
... = \left[ \begin{array}{c} \ensuremath{\boldsymbol{0}} \\ 0 \end{array} \right]$ (3.111)

The elemental stiffness matrix, $ K^{\alpha\beta}_{mn}$, is given by

$\displaystyle K^{\alpha\beta}_{mn}=\dfrac{R^{2}}{8\mu}\,\mathrm{S}\left( m,\alpha \right)\mathrm{S}\left( n,\beta \right)\int\limits_{0}^{1}\psi_{m}^{\alpha}\,\dfrac{\partial \psi_{n}^{\beta}}{\partial x}\,\left\vert J\left( \xi \right) \right\vert\,d\xi$ (3.112)

and the elemental stiffness vector, $ f^{\alpha}_{m}$, is given by

$\displaystyle f^{\alpha}_{m}=\mathrm{S}\left( m,\alpha \right)\int\limits_{0}^{1}\psi_{m}^{\alpha}\,\left\vert J\left( \xi \right) \right\vert\,d\xi$ (3.113)

Note that additional conservation of mass equations are required to solve the final matrix system.
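In practice elemental integrals of this kind are evaluated with Gaussian quadrature. The sketch below is a minimal illustration, assuming linear Lagrange basis functions and a constant Jacobian $ h$ (the element length); it is not the OpenCMISS quadrature code:

```python
# Sketch: 2-point Gauss-Legendre quadrature on xi in [0,1], used to
# evaluate f_m = integral of psi_m(xi) |J| dxi for linear Lagrange
# bases psi_0 = 1-xi, psi_1 = xi and constant Jacobian |J| = h.
# The element length h below is an arbitrary illustrative value.

GAUSS_XI = (0.5 - 0.5/3**0.5, 0.5 + 0.5/3**0.5)   # 2-point rule on [0,1]
GAUSS_W = (0.5, 0.5)

def elemental_f(h):
    psi = (lambda xi: 1.0 - xi, lambda xi: xi)
    return [sum(w*p(xi)*h for xi, w in zip(GAUSS_XI, GAUSS_W))
            for p in psi]

f_elem = elemental_f(2.0)   # exact value is h/2 = 1.0 for each basis
```

A 2-point rule integrates polynomials up to cubic order exactly, so it reproduces these linear-basis integrals to rounding error.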

Stokes Equations

Governing equations:

Stokes flow is a type of fluid flow where advective inertial forces are small compared with viscous forces. The Reynolds number is low, i.e. $ \textit{Re}\ll 1$. For this type of flow, the inertial forces are assumed to be negligible and the Navier-Stokes equations simplify to give the Stokes equations. In the common case of an incompressible Newtonian fluid, the linear Stokes equations (three-dimensional, quasi-static) can be written as:

$\displaystyle \ensuremath{\boldsymbol{f}}-\ensuremath{\nabla}{p}+\mu\ensuremath{\ensuremath{\nabla^{2}}{\ensuremath{\boldsymbol{u}}}}=\ensuremath{\boldsymbol{0}}$ (3.114)

accompanied by the conservation of mass

$\displaystyle \ensuremath{ \ensuremath{\nabla\cdot\ensuremath{\boldsymbol{u}}} }=0$ (3.115)

where $ \ensuremath{\boldsymbol{u}}(\ensuremath{\boldsymbol{x}},t)=(u_1,u_2,u_3)^T$ is the velocity vector depending on the spatial coordinates $ \ensuremath{\boldsymbol{x}}=(x_1,x_2,x_3)^T$ and the time $ t$, $ p$ is the scalar pressure, $ \ensuremath{\boldsymbol{f}}$ is an applied body force, and the material parameter $ \mu$ is the fluid viscosity. Whereas Equation (3.124) has been formulated in Eulerian form, moving domain approaches often require the ALE modification, which takes an additional term into account depending on the fluid density $ \rho$ and a correction velocity $ \ensuremath{\boldsymbol{u}}^*$, yielding:

$\displaystyle \ensuremath{\boldsymbol{f}}-\ensuremath{\nabla}{p}+\mu\ensuremath...
...nsuremath{\boldsymbol{u}}^*\cdot\ensuremath{\nabla})\ensuremath{\boldsymbol{u}}$ (3.116)

Although by definition a Stokes flow has no dependence on time (other than through changing boundary conditions), Equation (3.126) can be modified easily towards a dynamic Stokes flow:

$\displaystyle \rho\dfrac{\partial \boldsymbol{u}}{\partial t}=\boldsymbol{f}-\nabla{p}+\mu\nabla^{2}{\boldsymbol{u}}$ (3.117)

The following section, however, describes the reordered quasi-static formulation of Equation (3.126):

$\displaystyle -\ensuremath{\nabla}{p}+\mu\ensuremath{\ensuremath{\nabla^{2}}{\e...
...cdot\ensuremath{\nabla})\ensuremath{\boldsymbol{u}}=\ensuremath{\boldsymbol{f}}$ (3.118)
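A simple way to sanity-check the Stokes momentum balance $ \boldsymbol{f}-\nabla{p}+\mu\nabla^{2}\boldsymbol{u}=\boldsymbol{0}$ (with $ \boldsymbol{f}=\boldsymbol{0}$) is to substitute an analytic solution. The sketch below verifies by central finite differences that plane channel flow $ \boldsymbol{u}=(1-y^{2},0)$ with $ p=-2\mu x$ gives a vanishing residual; the field, viscosity and grid spacing are illustrative choices, not an OpenCMISS computation:

```python
# Finite-difference check that u = (1 - y^2, 0), p = -2*mu*x satisfies
# the quasi-static Stokes momentum balance -dp/dx + mu*laplacian(u) = 0.
# mu and h are arbitrary illustrative values.

mu, h = 0.7, 1e-4

def u(x, y): return (1.0 - y*y, 0.0)
def p(x, y): return -2.0*mu*x

def residual(x, y):
    """x-component of -grad(p) + mu*laplacian(u), central differences."""
    dpdx = (p(x + h, y) - p(x - h, y))/(2*h)
    lap_u = ((u(x + h, y)[0] - 2*u(x, y)[0] + u(x - h, y)[0])/h**2
             + (u(x, y + h)[0] - 2*u(x, y)[0] + u(x, y - h)[0])/h**2)
    return -dpdx + mu*lap_u
```

Since $ u_x$ depends only on $ y$ and $ v=0$, the incompressibility constraint $ \nabla\cdot\boldsymbol{u}=0$ is satisfied identically by this field as well.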

Weak formulation:

The corresponding weak form of the equation system consisting of Equation (3.124) and Equation (3.125) can be written as:

$\displaystyle -\ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }...
...\Omega}^{} }\,\ensuremath{\boldsymbol{f}}\ensuremath{\boldsymbol{v}}\,d\Omega }$ (3.119)

Applying Green's theorem to Equation (3.129) gives

\begin{displaymath}\begin{split}\ensuremath{ \ensuremath{ \displaystyle\int\limi...
...bol{u}}\cdot\ensuremath{\boldsymbol{n}}}\,d\Gamma } \end{split}\end{displaymath} (3.120)

The left-hand side of Equation (3.130) represents the unknown velocity and pressure field on $ \Omega$ whereas the right-hand side represents Neumann boundary conditions on $ \Gamma$ and source terms (here in the form of volume or body forces) on $ \Omega$. For now we want to neglect the right-hand side and continue with the homogeneous form of Equation (3.130)

\begin{displaymath}\begin{split}\ensuremath{ \ensuremath{ \displaystyle\int\limi...
...bla}\cdot \ensuremath{\boldsymbol{u}}q\,d\Omega }=0 \end{split}\end{displaymath} (3.121)

Tensor notation:

If we now consider the integrand of the left hand side of Equation (3.131) in tensorial form with covariant derivatives then

Darcy Equation

Darcy's equation describes the flow of fluid through a porous material. Ignoring inertial and body forces, Darcy's equation is:

$\displaystyle \ensuremath{\boldsymbol{v}} = -\frac{\ensuremath{\boldsymbol{K}}}{\mu} \ensuremath{\ensuremath{\nabla}p}$ (3.122)

$ \ensuremath{\boldsymbol{v}}$ is the relative fluid flow vector, given by $ n (\ensuremath{\boldsymbol{v}}^f - \ensuremath{\boldsymbol{v}}^s)$ where $ n$ is the porosity, and $ \ensuremath{\boldsymbol{v}}^f$ and $ \ensuremath{\boldsymbol{v}}^s$ are the Eulerian fluid and solid component velocities. $ \ensuremath{\boldsymbol{K}}$ is the permeability tensor, $ \mu$ is viscosity and $ p$ is the fluid pressure.

Conservation of mass also gives:

$\displaystyle \ensuremath{\ensuremath{\nabla}\cdot} \ensuremath{\boldsymbol{v}} = 0$ (3.123)

The weighted residual forms of these equations over a region $ \Omega$ with surface $ \Gamma$ are:

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...uremath{\ensuremath{\nabla}p} \ensuremath{\boldsymbol{w}}\right)\,d\Omega } = 0$ (3.124)

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...ensuremath{\ensuremath{\nabla}\cdot} \ensuremath{\boldsymbol{v}}\,d\Omega } = 0$ (3.125)

Where $ \ensuremath{\boldsymbol{w}}$ is the weighting function for the flow and $ q$ is the weighting function for pressure. Applying Green's theorem to the pressure term to give the weak form of the equations, and multiplying through by $ \mu \ensuremath{\boldsymbol{K}}^{-1}$:

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...}^{} }\,p \ensuremath{\boldsymbol{n}} \ensuremath{\boldsymbol{w}}\,d\Gamma }= 0$ (3.126)

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...ensuremath{\ensuremath{\nabla}\cdot} \ensuremath{\boldsymbol{v}}\,d\Omega } = 0$ (3.127)

Where $ \ensuremath{\boldsymbol{n}}$ is the normal vector to the surface $ \Gamma$.
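Evaluating the Darcy flux itself is a small calculation; the sketch below applies $ \boldsymbol{v}=-(\boldsymbol{K}/\mu)\ensuremath{\nabla}p$ with an illustrative anisotropic permeability tensor (the numbers are made up, not material data):

```python
# Sketch of the pointwise Darcy flux v = -(K/mu) * grad(p) for a
# diagonal (transversely anisotropic) permeability tensor K.
# All values are arbitrary illustrative numbers.

def darcy_flux(K, mu, grad_p):
    """v_i = -(1/mu) * sum_j K_ij * dp/dx_j"""
    return [-sum(K[i][j]*grad_p[j] for j in range(len(grad_p)))/mu
            for i in range(len(K))]

K = [[2.0e-12, 0.0],
     [0.0, 0.5e-12]]          # m^2, stiffer resistance in y
mu = 1.0e-3                   # Pa.s
v = darcy_flux(K, mu, [1.0e4, -2.0e4])   # pressure gradient in Pa/m
```

Because the flux opposes the pressure gradient componentwise through $ \boldsymbol{K}$, the anisotropy shows up directly: the same gradient magnitude drives four times less flow in the low-permeability direction.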

The OpenCMISS implementation of Darcy's equation uses the stabilised form of the finite element equations developed by (, ). This adds stabilising terms, so that the final form of the implemented equations is:

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...}^{} }\,p \ensuremath{\boldsymbol{n}} \ensuremath{\boldsymbol{w}}\,d\Gamma }= 0$ (3.128)

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...uremath{\nabla}p} \cdot \ensuremath{\ensuremath{\nabla}q} \right)\,d\Omega }= 0$ (3.129)

Navier-Stokes Equations

Governing equations:

The Navier-Stokes equations arise from applying Newton's second law to fluid motion, i.e. the temporal and spatial fluid inertia is in equilibrium with internal (volume/body) and external (surface) forces. The reaction of surface forces can be described in terms of the fluid stress as the sum of a diffusing viscous term, plus a pressure term. The equations are named for the 19th century contributions of Claude-Louis Navier and George Gabriel Stokes. A solution of the Navier-Stokes equations is called a flow field, i.e. velocity and pressure field, which is a description of the fluid at a given point in space and time. In the common case of an incompressible Newtonian fluid, the nonlinear Navier-Stokes equations (three-dimensional, transient) can be written using 'primitive variables' (i.e. u-velocity, p-pressure) as:

$\displaystyle \rho\left( \dfrac{\partial \boldsymbol{u}}{\partial t}+(\boldsymbol{u}\cdot\nabla)\boldsymbol{u} \right)=\boldsymbol{f}-\nabla{p}+\mu\nabla^{2}{\boldsymbol{u}}$ (3.130)

accompanied by the conservation of mass (incompressibility)

$\displaystyle \ensuremath{ \ensuremath{\nabla\cdot\ensuremath{\boldsymbol{u}}} }=0$ (3.131)

where $ \ensuremath{\boldsymbol{u}}(\ensuremath{\boldsymbol{x}},t)=(u_1,u_2,u_3)^T$ is the velocity vector depending on spatial coordinates $ \ensuremath{\boldsymbol{x}}=(x_1,x_2,x_3)^T$ and the time $ t$, $ p$ is the scalar pressure, $ \ensuremath{\boldsymbol{f}}$ an applied body force, and the material parameters $ \mu$ and $ \rho$ are the fluid viscosity and density, respectively. The first term represents unsteady accelerative inertial contributions, the second represents the nonlinear convective acceleration terms, the $ \ensuremath{\nabla}{p}$ term the pressure contributions, and the last term represents viscous stresses in the system. As with Stokes flow, the incompressibility condition $ \ensuremath{ \ensuremath{\nabla\cdot\ensuremath{\boldsymbol{u}}} }=0$ also creates restrictions on the formulation of the velocity and pressure spaces using finite elements, known as the Ladyzhenskaya, Babuska, and Brezzi (LBB) or inf-sup consistency condition. Several methods have been devised to define a pressure function that is consistent with the velocity space using primitive variables. These include mixed element methods, penalty methods, generalized Petrov-Galerkin methods using a pressure Poisson correction, operator splitting, and semi-implicit pressure correction (, ).

Using a classic Galerkin formulation, mixed methods are perhaps conceptually the most straightforward method of satisfying LBB, in which velocity is defined over a space one order higher than pressure (e.g. quadratic elements for velocity, linear for pressure), allowing incompressibility to be satisfied. It should be noted that our use of a mixed formulation to satisfy LBB will also be reflected in the shape functions that our weak formulation depends on. For example, using a 2D element with biquadratic velocity and linear pressure, we will have 9 DOFs and weight functions for each velocity component and 4 for the pressure.
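The DOF counting implied by this mixed formulation is easy to sketch. Assuming a hypothetical structured $ n\times n$ quadrilateral mesh with biquadratic velocity and bilinear pressure (the Taylor-Hood style pairing described above):

```python
# Sketch: global DOF count for a Q2-Q1 mixed mesh of n x n quad
# elements, used to satisfy LBB: biquadratic velocity (9 local nodes)
# over bilinear pressure (4 local nodes). The mesh size is arbitrary.

def taylor_hood_dofs(n, n_dim=2):
    velocity_nodes = (2*n + 1)**n_dim    # biquadratic node grid
    pressure_nodes = (n + 1)**n_dim      # bilinear node grid
    return n_dim*velocity_nodes + pressure_nodes

single = taylor_hood_dofs(1)   # one element: 2*9 + 4 = 22 DOFs
```

For a single element this reproduces the counts quoted in the text: 9 weight functions per velocity component and 4 for the pressure.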

Weak Formulation

The weak form of equations 3.140 and 3.141 can be found by applying standard Galerkin test functions $ \ensuremath{\boldsymbol{w}}$:

$\displaystyle \ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^{} }\...
...}} +\ensuremath{\nabla}{p} \right)}{\ensuremath{\boldsymbol{w}}}\,d\Omega } = 0$ (3.132)

Integrating by parts, we will get the weak form of the system of PDEs with the associated natural boundary conditions at the boundary $ \Gamma_N$:

\begin{multline}
\ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^...
...suremath{\boldsymbol{n}}}\ensuremath{\boldsymbol{w}}\,d\Gamma_N }
\end{multline}

For more extensive discussion of this procedure, along with other weak forms of the PDEs, we refer to (, ). From this weak form, we see natural (Neumann) boundary conditions arising as a direct result of the integration. Neumann boundary conditions will consist of a pressure term and viscous stress acting normal to a given boundary.

$\displaystyle \boldsymbol{q}\left( \boldsymbol{x},t \right)=\mu\,\nabla{\boldsymbol{u}\left( \boldsymbol{x},t \right)}\cdot\boldsymbol{n}-p\,{\boldsymbol{n}}$ (3.133)
$\displaystyle \quad \ensuremath{\boldsymbol{x}}\in\Gamma_{N}$ (3.134)

Specification of Neumann boundaries will simply require the specification of the terms across element DOFs. Dirichlet boundary conditions on a boundary $ \Gamma_D$ for velocity will take the form:

$\displaystyle \boldsymbol{u}\left( \boldsymbol{x},t \right)=\boldsymbol{d}\left( \boldsymbol{x},t \right) \quad \boldsymbol{x}\in\Gamma_{D}$ (3.135)

Tensor notation

Assuming no forcing terms and substituting the natural boundary as defined above, equation [*] in tensor notation becomes:

\begin{multline}
\ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^...
...ma_N}^{} }\,{q_{i}}{w_i}\,d\Gamma_N }=\ensuremath{\boldsymbol{0}}
\end{multline}

or

\begin{multline}
\ensuremath{ \ensuremath{ \displaystyle\int\limits_{\Omega}^...
...ma_N}^{} }\,{q_{i}}{w_i}\,d\Gamma_N }=\ensuremath{\boldsymbol{0}}
\end{multline}

where $ G^{jk}$ is the contravariant metric tensor and $ \ensuremath{ \Gamma^{i}_{jk}
}$ is the Christoffel symbol of the second kind for the spatial coordinates.

Finite Element Formulation

We can now discretise the domain into finite elements i.e., $ \Omega=
\displaystyle{\bigcup_{e=1}^{E}}\Omega_{e}$ with $ \Gamma=\displaystyle{\bigcup_{f=1}^{F}}\Gamma_{f}$. Equation (3.149) now becomes:

\begin{multline}
\ensuremath{\displaystyle\sum}_{e=1}^{E}\ensuremath{ \ensure...
...ma_N}^{} }\,{q_{i}}{w_i}\,d\Gamma_N }=\ensuremath{\boldsymbol{0}}
\end{multline}

or

\begin{multline}
\ensuremath{\displaystyle\sum}_{e=1}^{E}\ensuremath{ \ensure...
...tyle\int\limits_{\Gamma_N}^{} }\,G^{jk}{p}{n_i}{w_i}\,d\Gamma_N }
\end{multline}

We will assume that the dependent variables $ \ensuremath{\boldsymbol{u}}$ and $ p$ can be interpolated separately in space and time. Here we must also be careful to note again the discrepancy between the functional spaces for velocity and pressure using a mixed formulation to satisfy the LBB consistency requirement. We will therefore define two separate weighting functions: for the velocity space on $ \Omega$, the test function will be $ \psi_i$ and for the pressure space, $ \phi_i$, giving:

$\displaystyle \boldsymbol{u}\left( \boldsymbol{x},t \right)=\psi_{n}\left( \boldsymbol{x} \right)\,\boldsymbol{u}^{n}\left( t \right)$ (3.136)
$\displaystyle p\left( \boldsymbol{x},t \right)=\phi_{n}\left( \boldsymbol{x} \right)\,p^{n}\left( t \right)$ (3.137)

In standard interpolation notation within an element, we transform from $ \ensuremath{\boldsymbol{x}}$ to $ \ensuremath{\boldsymbol{\xi}}$:

$\displaystyle u_{i}\left( \boldsymbol{\xi},t \right)=\psi_{in}^{\beta}\left( \boldsymbol{\xi} \right)\,u^{n}_{i,\beta}\left( t \right)\,\mathrm{S}\left( i,n,\beta \right)$ (3.138)
$\displaystyle p\left( \boldsymbol{\xi},t \right)=\phi_{n}^{\beta}\left( \boldsymbol{\xi} \right)\,p^{n}_{,\beta}\left( t \right)\,\mathrm{S}\left( n,\beta \right)$ (3.139)

where $ \ensuremath{ \ensuremath{ \ensuremath{ {u}^{n} }_{i,\beta} }\ensuremath{\left( t \right)} }$ are the time varying nodal degrees-of-freedom for velocity component $ i$, node $ n$, global derivative $ \beta$, $ \ensuremath{ \ensuremath{ \ensuremath{ \psi_{in}^{\beta} }\ensuremath{\left( \ensuremath{\boldsymbol{\xi}} \right)} } }$ are the corresponding velocity basis functions and $ \ensuremath{ \ensuremath{ \ensuremath{ \mathrm{S} }\ensuremath{\left( i,n,\beta \right)} } }$ are the scale factors. The scalar pressure DOFs, $ \ensuremath{ \ensuremath{ \ensuremath{ {p}^{n} }_{,\beta} }\ensuremath{\left( t \right)} }$ are interpolated similarly.

For the natural boundary, we can separate $ q_i$ into its component velocity and pressure terms as noted in 3.144 and shown in [*]. For our current treatment, we will also assume the values of viscosity $ \mu$ and density $ \rho$ are constant. These can be interpolated:

\begin{displaymath}\begin{split}\ensuremath{ q_{i}\ensuremath{\left( \ensuremath...
...hrm{S} }\ensuremath{\left( r,\delta \right)} } } \\ \end{split}\end{displaymath} (3.140)

Using the spatial weighting functions for a Galerkin finite element formulation:

$\displaystyle w_{i}\left( \boldsymbol{\xi} \right)=\psi_{im}^{\alpha}\left( \boldsymbol{\xi} \right)\,\mathrm{S}\left( i,m,\alpha \right)$ (3.141)

Spatial Integration

Using standard integration notation, we can rewrite our Galerkin FEM formulation from [*]:

\begin{multline}
\ensuremath{\displaystyle\sum}_{e=1}^{E}\ensuremath{ \ensurem...
...l{\xi}} \right)} } \right\vert}\,d\ensuremath{\boldsymbol{\xi}} }
\end{multline}

Where $ \ensuremath{ \ensuremath{\boldsymbol{J}}\ensuremath{\left( \ensuremath{\boldsymbol{\xi}} \right)} }$ represents the Jacobian matrix. If we assume a rectangular cartesian coordinate system, this simplifies significantly, as the contravariant $ G^{jk}$ will become $ \delta^{jk}$ and the Christoffel symbols $ \ensuremath{ \Gamma^{i}_{jk} }$ will all be zero. This gives:

\begin{multline}
\ensuremath{\displaystyle\sum}_{e=1}^{E}\ensuremath{ \ensurem...
...l{\xi}} \right)} } \right\vert}\,d\ensuremath{\boldsymbol{\xi}} }
\end{multline}

or, rearranged to pull constants out of the integrals:

\begin{multline}
\ensuremath{\displaystyle\sum}_{e=1}^{E}{\ensuremath{ \ensure...
...l{\xi}} \right)} } \right\vert}\,d\ensuremath{\boldsymbol{\xi}} }
\end{multline}

General form

We now seek to assemble this into the corresponding general form for dynamic equations, as outlined in section 2.3.2:

$\displaystyle \boldsymbol{M}\ddot{\boldsymbol{d}}\left( t \right)+\boldsymbol{C}\dot{\boldsymbol{d}}\left( t \right)+\boldsymbol{K}\boldsymbol{d}\left( t \right)+\boldsymbol{g}\left( \boldsymbol{u}\left( t \right) \right)-\boldsymbol{f}\left( t \right)=\boldsymbol{0}$ (3.142)

where $ \dot{\ensuremath{\boldsymbol{d}}}(t)$ and $ \ddot{\ensuremath{\boldsymbol{d}}}(t)$ represent the first and second derivatives (respectively) of the degrees of freedom vector $ \ensuremath{\boldsymbol{d}}(t)$, which consists of the dependent variables $ \ensuremath{\boldsymbol{u}}(\ensuremath{\boldsymbol{x}},t)$ and $ p(x,t)$. $ \ensuremath{\boldsymbol{M}}$ is the mass matrix, which provides the shape function based weights and $ \ensuremath{\boldsymbol{C}}$ is the transient damping matrix. $ \ensuremath{\boldsymbol{K}}$ represents the stiffness matrix, which will contain the linear parts of the operator. $ \ensuremath{ \ensuremath{\boldsymbol{g}}\ensuremath{\left( \ensuremath{ \ensuremath{\boldsymbol{u}}\ensuremath{\left( t \right)} } \right)} }$ is the nonlinear vector function that will be used to represent the convective term and $ \ensuremath{ \ensuremath{\boldsymbol{f}}\ensuremath{\left( t \right)} }$ the forcing vector.
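To see how a form like this is advanced in time, consider just the first-order part $ \boldsymbol{C}\dot{\boldsymbol{d}}+\boldsymbol{K}\boldsymbol{d}=\boldsymbol{f}$ on a single scalar DOF. The backward Euler sketch below uses made-up coefficients and is not the OpenCMISS time integrator:

```python
# Sketch: backward Euler for C*du/dt + K*u = f on one scalar DOF.
# Each step solves (C/dt + K) u_new = f + (C/dt) u_old.
# C, K, f, dt are arbitrary illustrative values.

C, K, f = 1.0, 4.0, 8.0
dt, u = 0.05, 0.0
for _ in range(400):
    u = (f + (C/dt)*u) / (C/dt + K)
# u relaxes toward the steady state f/K = 2, unconditionally stably
```

Backward Euler damps the transient unconditionally, which is why implicit schemes of this family are the usual choice for stiff FEM systems even though each step requires a solve.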

We will assume cartesian coordinates $ \ensuremath{\boldsymbol{x}}=\ensuremath{\left\{ x,y,z \right\}}$ and denote the corresponding velocity components $ \ensuremath{\boldsymbol{u}}=\ensuremath{\left\{ u,v,w \right\}}$, with $ N$ representing the number of velocity DOFs and $ M$ the number of pressure DOFs. The vector $ \boldsymbol{d}$ then becomes:

$\displaystyle \boldsymbol{d}=\left[ u_{1}\,u_{2}\ldots u_{N}\;v_{1}\,v_{2}\ldots v_{N}\;w_{1}\,w_{2}\ldots w_{N}\;p_{1}\,p_{2}\ldots p_{M} \right]^{T}= \begin{bmatrix}\boldsymbol{u}\\ \boldsymbol{v}\\ \boldsymbol{w}\\ \boldsymbol{p}\end{bmatrix}$ (3.143)

All the components of 3.161 will similarly depend on the number of dimensions, $ n_{dim}$, and the matrices will be square with $ N{\cdot}{n_{dim}}+M$ rows and columns.

Returning to the general case, the mass matrix $ \ensuremath{\boldsymbol{M}}$ will take the form:

$\displaystyle [M^{\alpha\beta}_{mn}]=\ensuremath{ \ensuremath{ \displaystyle\in...
...ath{\boldsymbol{\xi}} \right)} } \right\vert}\,d\ensuremath{\boldsymbol{\xi}} }$ (3.144)

Without using mass-lumping, the damping matrix $ \ensuremath{\boldsymbol{C}}$:

$\displaystyle [C^{\alpha\beta}_{mn}]={\ensuremath{ \ensuremath{ \ensuremath{ \m...
...th{\boldsymbol{\xi}} \right)} } \right\vert}}\,d\ensuremath{\boldsymbol{\xi}} }$ (3.145)

The element stiffness matrix $ \ensuremath{\boldsymbol{K}}$ will contain the linear components of the system. For the classic Galerkin formulation this includes the viscous term with the laplacian operator and the pressure gradient term. As the velocity and pressure DOFs are both included in $ d$, $ \ensuremath{\boldsymbol{K}}$ will be assembled so that the corresponding operators are applied to the DOFs discontinuously.

$\displaystyle [K^{\alpha\beta}_{mn}]= \begin{cases}[A^{\alpha\beta}_{mn}]= {\en...
...d\qquad\qquad\qquad\qquad\qquad \text{for $n>{N{\cdot}{N_{dim}}}$}. \end{cases}$ (3.146)

To attain a square matrix of the required size, we must also apply the transpose of the gradient terms acting on the pressure DOFs. For example, in a 3 dimensional example system, $ \ensuremath{\boldsymbol{K}}$ will take the form:

$\displaystyle \boldsymbol{K}= \begin{bmatrix}\boldsymbol{A} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{B}_x\\ \boldsymbol{0} & \boldsymbol{A} & \boldsymbol{0} & \boldsymbol{B}_y\\ \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{A} & \boldsymbol{B}_z\\ -\boldsymbol{B}^T_x & -\boldsymbol{B}^T_y & -\boldsymbol{B}^T_z & \boldsymbol{0} \end{bmatrix}$ (3.147)
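This saddle-point block structure can be illustrated directly. The sketch below assembles the 2D analogue $ [[\boldsymbol{A},\boldsymbol{0},\boldsymbol{B}_x],[\boldsymbol{0},\boldsymbol{A},\boldsymbol{B}_y],[-\boldsymbol{B}^T_x,-\boldsymbol{B}^T_y,\boldsymbol{0}]]$ from small placeholder blocks; the numerical entries are arbitrary, only the block layout matters:

```python
# Sketch: assembling the saddle-point stiffness matrix for a 2D mixed
# velocity-pressure system. A stands in for the viscous operator and
# Bx, By for the pressure gradient blocks; all entries are placeholders.

def block_K(A, Bx, By):
    n, m = len(A), len(Bx[0])
    top = [A[i] + [0.0]*n + [Bx[i][j] for j in range(m)] for i in range(n)]
    mid = [[0.0]*n + A[i] + [By[i][j] for j in range(m)] for i in range(n)]
    bot = [[-Bx[j][i] for j in range(n)] + [-By[j][i] for j in range(n)]
           + [0.0]*m for i in range(m)]
    return top + mid + bot

A = [[2.0, -1.0], [-1.0, 2.0]]
Bx = [[1.0], [0.5]]
By = [[0.25], [0.75]]
K = block_K(A, Bx, By)   # (2n+m) x (2n+m) with a zero pressure block
```

The zero block on the pressure diagonal is what makes the system a saddle-point problem: pressure is only constrained through the (transposed) gradient blocks, which is exactly why the LBB condition on the velocity/pressure spaces is needed.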

The nonlinear vector $ \ensuremath{\boldsymbol{g}}(t)$ provides the convective operators; for the standard Galerkin formulation:

$\displaystyle [g^{\alpha\beta}_{mn}]= {\ensuremath{ \ensuremath{ \ensuremath{ \...
...math{\boldsymbol{\xi}} \right)} } \right\vert}}d{\ensuremath{\boldsymbol{\xi}}}$ (3.148)

Streamline Upwind/Petrov-Galerkin (SUPG)

For convection dominated flows, it is often useful to modify the Galerkin formulation to account for numerical problems associated with large asymmetric velocity gradients. We describe augmenting the Galerkin Navier-Stokes formulation into a Petrov-Galerkin form, which uses a different test function for the convective term to stabilize the algorithm with a balancing diffusive term. Please refer to section [*] for more details.

For the Navier-Stokes equations (and most nonlinear problems), the use of a Petrov-Galerkin formulation becomes more difficult than for linear cases like advection-diffusion, as consistent weighting can cause instabilities when used with additional terms like body forces (, ). These issues can be overcome with more complex Petrov-Galerkin formulations, such as those derived by Hughes and colleagues in the 1980s. A useful alternative is to apply SUPG weights to the convective operator in elements as a function of each element's Reynolds number (referred to subsequently as the cell Reynolds number). This provides the desired stabilizing effect in the direction of large velocity gradients while remaining straightforward to implement and practically useful.

Assuming cartesian coordinates and applying Petrov-Galerkin weights to the convective term, the finite element formulation described in equation [*] can be written:

\begin{multline}
\ensuremath{\displaystyle\sum}_{e=1}^{E}\ensuremath{ \ensure...
...t\limits_{\Gamma_N}^{} }\,{\delta}^{jk}{p}{n_i}{w_i}\,d\Gamma_N }
\end{multline}

where $ w_i$ will be the classic Galerkin weight and $ \Psi_i$ will be the Petrov-Galerkin weight for the Navier-Stokes equations. We must also define the Peclet number, $ \gamma$, for the Navier-Stokes equations to describe the ratio of the convective to viscous operators. Here, the effective Peclet number will be based on the cell Reynolds number:

$\displaystyle Re = \frac{\rho\bar{U}L}{\mu} =\frac{\bar{U}L}{\nu}$ (3.149)

where $ \bar{U}$ is the characteristic velocity and $ \nu=\frac{\mu}{\rho}$ is the kinematic viscosity. $ L$ will be the characteristic length scale for a given element. The effective Peclet number $ \gamma$ will be:

$\displaystyle \gamma= \frac{Re}{2} = \frac{\bar{U}L}{2\nu}$ (3.150)

We will retain the same value for $ \alpha={\coth{\frac{\gamma}{2}} - \frac{2}{\gamma}}$ as found for the linearized advection-diffusion equation.

$\displaystyle \Psi_i = \ensuremath{\left( \frac{\alpha{L}}{2}{\frac{u_j}{\ensuremath{\left\vert u_j \right\vert}}} \right)}{\ensuremath{ {w_{i}}_{,k} } }$ (3.151)

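The stabilisation parameters above are cheap to evaluate. The sketch below computes the cell Reynolds number, the effective Peclet number $ \gamma=Re/2$ and $ \alpha=\coth{\frac{\gamma}{2}}-\frac{2}{\gamma}$ for illustrative flow values:

```python
import math

# Sketch of the SUPG parameters defined above: cell Reynolds number,
# effective Peclet number gamma = Re/2, and the upwinding factor
# alpha = coth(gamma/2) - 2/gamma. Flow values are illustrative.

def cell_reynolds(rho, U, L, mu):
    return rho*U*L/mu

def supg_alpha(gamma):
    return 1.0/math.tanh(gamma/2.0) - 2.0/gamma

gamma = cell_reynolds(1.0, 2.0, 0.01, 1.0e-3)/2.0   # Re = 20, gamma = 10
alpha = supg_alpha(gamma)
# alpha -> 0 for diffusion-dominated cells and -> 1 as gamma grows
```

The limits are the important behaviour: for small $ \gamma$ the added diffusion vanishes (so well-resolved cells are left alone), while strongly convective cells approach full upwinding.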

After integration and simplification, Equation [*] then becomes:

\begin{multline}
\ensuremath{\displaystyle\sum}_{e=1}^{E}{\ensuremath{ \ensure...
...l{\xi}} \right)} } \right\vert}\,d\ensuremath{\boldsymbol{\xi}} }
\end{multline}

Notice that the integration by parts is applied only to the standard Galerkin weights; this is because the artificial diffusion added by the Petrov-Galerkin terms should only contribute within the elements.

The use of the Petrov-Galerkin formulation in this way allows for the stabilization of moderately convective problems. However, it also results in nonsymmetric mass matrices with the additional term, making classical mass-lumping more difficult. As a result, the use of explicit methods also becomes more difficult for time-dependent problems.

Arbitrary Lagrangian-Eulerian (ALE) Formulation

Whereas Equation ([*]) has been formulated in Eulerian form, moving domain approaches often require the ALE modification, which takes an additional term into account depending on the fluid density $ \rho$ and a correction velocity $ \ensuremath{\boldsymbol{u}}^*$, yielding:

$\displaystyle \rho((\boldsymbol{u}-\boldsymbol{u}^*)\cdot\nabla)\boldsymbol{u}=\boldsymbol{f}-\nabla{p}+\mu\nabla^{2}{\boldsymbol{u}}$ (3.152)

So far, the nonlinear term in Equation ([*]) represents the fluid spatial acceleration only. Equation (3.174) now also takes the dynamic inertia terms into account

$\displaystyle \rho\dfrac{\partial \boldsymbol{u}}{\partial t}+\rho((\boldsymbol{u}-\boldsymbol{u}^*)\cdot\nabla)\boldsymbol{u}=\boldsymbol{f}-\nabla{p}+\mu\nabla^{2}{\boldsymbol{u}}$ (3.153)

which gives us the complete Navier-Stokes equations in ALE formulation. The following section, however, describes the reordered quasi-static formulation of Equation (3.173):

$\displaystyle \rho((\boldsymbol{u}-\boldsymbol{u}^*)\cdot\nabla)\boldsymbol{u}-\mu\nabla^{2}{\boldsymbol{u}}+\nabla{p}=\boldsymbol{f}$ (3.154)

Weak formulation:

The corresponding weak form of the equation system consisting of Equation ([*]) and Equation ([*]) can be written in the general dynamic form (see section 2.3.2)

$\displaystyle \boldsymbol{M}\ddot{\boldsymbol{d}}\left( t \right)+\boldsymbol{C}\dot{\boldsymbol{d}}\left( t \right)+\boldsymbol{K}\boldsymbol{d}\left( t \right)+\boldsymbol{g}\left( \boldsymbol{u}\left( t \right) \right)-\boldsymbol{f}\left( t \right)=\boldsymbol{0}$ (3.155)

where the degrees-of-freedom vector consists of the dependent variables $ \ensuremath{\boldsymbol{u}}(\ensuremath{\boldsymbol{x}},t)$ and $ p$. $ \ensuremath{\boldsymbol{M}}$ is the mass matrix, which provides the shape function based weights, and $ \ensuremath{\boldsymbol{C}}$ is the transient damping matrix (which we will discuss further below). $ \ensuremath{\boldsymbol{K}}$ represents the stiffness matrix, which will contain the linear parts of the operator, including the viscous terms, the conservation of mass terms, and pressure terms. $ \ensuremath{ \ensuremath{\boldsymbol{g}}\ensuremath{\left( \ensuremath{ \ensuremath{\boldsymbol{u}}\ensuremath{\left( t \right)} } \right)} }$ is the nonlinear vector function for the convective terms and $ \ensuremath{ \ensuremath{\boldsymbol{f}}\ensuremath{\left( t \right)} }$ the forcing vector.

The corresponding weak form of this equation system can be written as:

$\displaystyle \int\limits_{\Omega}\rho\dfrac{\partial\boldsymbol{u}}{\partial t}\boldsymbol{v}\,d\Omega+\int\limits_{\Omega}\rho((\boldsymbol{u}-\boldsymbol{u}^{*})\cdot\nabla)\boldsymbol{u}\,\boldsymbol{v}\,d\Omega-\int\limits_{\Omega}\mu\nabla^{2}\boldsymbol{u}\,\boldsymbol{v}\,d\Omega+\int\limits_{\Omega}\nabla{p}\,\boldsymbol{v}\,d\Omega=\int\limits_{\Omega}\boldsymbol{f}\boldsymbol{v}\,d\Omega$ (3.156)

The general form for this kind of equation system is

$\displaystyle \boldsymbol{K}\boldsymbol{\hat{u}}+\boldsymbol{\hat{g}}\left(\boldsymbol{u}\right)=\boldsymbol{\hat{f}}$ (3.157)

where $\boldsymbol{\hat{u}}$ is the vector of unknown ``DOFs'', $\boldsymbol{K}$ is the stiffness matrix, $\boldsymbol{\hat{g}}\left(\boldsymbol{u}\right)$ a non-linear vector function and $\boldsymbol{\hat{f}}$ the forcing vector. In Equation (3.157) the only truly non-linear term is $\boldsymbol{\hat{g}}\left(\boldsymbol{u}\right)=\int_{\Omega}\rho((\boldsymbol{u}-\boldsymbol{u}^{*})\cdot\nabla)\boldsymbol{u}\,\boldsymbol{v}\,d\Omega$. If $\boldsymbol{\hat{g}}\left(\boldsymbol{u}\right)$ is not $\equiv\boldsymbol{0}$ then we use Newton's method, i.e.,

\begin{displaymath}\begin{split}\text{1. } & \boldsymbol{J}\left(\boldsymbol{u}_{i}\right)\delta\boldsymbol{u}_{i}=-\boldsymbol{\psi}\left(\boldsymbol{u}_{i}\right)\\ \text{2. } & \boldsymbol{u}_{i+1}=\boldsymbol{u}_{i}+\delta \boldsymbol{u}_{i} \end{split}\end{displaymath} (3.158)

where $ \ensuremath{ \ensuremath{\boldsymbol{J}}\ensuremath{\left( \ensuremath{\boldsymbol{u}} \right)} }$ is the Jacobian and is given by

$\displaystyle \boldsymbol{J}\left(\boldsymbol{u}\right)=\dfrac{\partial\boldsymbol{\psi}\left(\boldsymbol{u}\right)}{\partial\boldsymbol{u}}$ (3.159)
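The Newton iteration above can be sketched on a scalar model problem (illustrative Python, not OpenCMISS code; the nonlinear term $g(u)=u^{3}$ and all constants are arbitrary assumptions). We take $\psi(u)=Ku+g(u)-f$, so the Jacobian is $J(u)=K+3u^{2}$:

```python
# Newton's method for psi(u) = K*u + g(u) - f = 0, a scalar analogue of
# Equation (3.158): solve J(u_i) du_i = -psi(u_i), then u_{i+1} = u_i + du_i.
K, f = 2.0, 10.0

def g(u):          # assumed nonlinear term (g(u) = u^3, purely illustrative)
    return u ** 3

def psi(u):        # residual
    return K * u + g(u) - f

def jacobian(u):   # J(u) = d(psi)/du = K + 3*u^2
    return K + 3.0 * u ** 2

u = 1.0            # initial guess
for _ in range(20):
    du = -psi(u) / jacobian(u)   # "solve" the (1x1) linear system
    u += du
    if abs(du) < 1e-12:          # converged
        break
```

For the full Navier-Stokes system the same two steps apply, with the linear solve performed on the sparse Jacobian matrix rather than a scalar.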

with the stiffness matrix $\boldsymbol{K}$ derived from the weak form, Equation (3.156), by applying Green's theorem as follows:

\begin{displaymath}\begin{split}\boldsymbol{K}\boldsymbol{\hat{u}}=&\int\limits_{\Omega}\mu\nabla\boldsymbol{u}\cdot\nabla\boldsymbol{v}\,d\Omega-\int\limits_{\Omega}p\,\nabla\cdot\boldsymbol{v}\,d\Omega\\ &+\int\limits_{\Omega}\nabla\cdot\boldsymbol{u}\,q\,d\Omega \end{split}\end{displaymath} (3.160)

and $\boldsymbol{\psi}\left(\boldsymbol{u}\right)$ is the nonlinear residual, $\boldsymbol{\psi}\left(\boldsymbol{u}\right)=\boldsymbol{K}\boldsymbol{\hat{u}}+\boldsymbol{\hat{g}}\left(\boldsymbol{u}\right)-\boldsymbol{\hat{f}}$.

Pressure Poisson Equation

The Navier-Stokes equation can be written as

$\displaystyle \rho\left(\dfrac{\partial\boldsymbol{u}}{\partial t}+(\boldsymbol{u}\cdot\nabla)\boldsymbol{u}\right)=-\nabla{p}+\mu\nabla\cdot\left(\nabla\boldsymbol{u}\right)$ (3.161)

Rearranging for $ \ensuremath{\nabla}{p}$ gives

$\displaystyle \nabla{p}=\mu\nabla\cdot\left(\nabla\boldsymbol{u}\right)-\rho\left(\dfrac{\partial\boldsymbol{u}}{\partial t}+(\boldsymbol{u}\cdot\nabla)\boldsymbol{u}\right)$ (3.162)

or

$\displaystyle \ensuremath{\ensuremath{\nabla}p} =\ensuremath{\boldsymbol{b}}$ (3.163)

where

$\displaystyle \boldsymbol{b}=\mu\nabla\cdot\left(\nabla\boldsymbol{u}\right)-\rho\left(\dfrac{\partial\boldsymbol{u}}{\partial t}+(\boldsymbol{u}\cdot\nabla)\boldsymbol{u}\right)$ (3.164)

Note that from Equation (3.163) we can write

$\displaystyle \nabla{p}\cdot\boldsymbol{n}=\boldsymbol{b}\cdot\boldsymbol{n}$ (3.165)

Taking the divergence of both sides of Equation (3.163) gives

$\displaystyle \ensuremath{ \ensuremath{\nabla\cdot \ensuremath{\ensuremath{\nabla}p} } }=\ensuremath{ \ensuremath{\nabla\cdot\ensuremath{\boldsymbol{b}}} }$ (3.166)

The corresponding weak form of Equation (3.166) is

$\displaystyle \int\limits_{\Omega}\nabla\cdot\left(\nabla{p}\right)w\,d\Omega=\int\limits_{\Omega}\nabla\cdot\boldsymbol{b}\,w\,d\Omega$ (3.167)

Now the divergence theorem states that

$\displaystyle \int\limits_{\Omega}\nabla\cdot\boldsymbol{F}\,d\Omega=\int\limits_{\Gamma}\boldsymbol{F}\cdot\boldsymbol{n}\,d\Gamma$ (3.168)

Applying the divergence theorem where $\boldsymbol{F}=g\boldsymbol{f}$ gives

$\displaystyle \int\limits_{\Omega}\left(\nabla{g}\cdot\boldsymbol{f}+g\,\nabla\cdot\boldsymbol{f}\right)d\Omega=\int\limits_{\Gamma}g\,\boldsymbol{f}\cdot\boldsymbol{n}\,d\Gamma$ (3.169)

Note that when $\boldsymbol{f}=\nabla{f}$ is used in Equation (3.169) you get Green's identity i.e.,

$\displaystyle \int\limits_{\Omega}\left(\nabla{g}\cdot\nabla{f}+g\,\nabla\cdot\left(\nabla{f}\right)\right)d\Omega=\int\limits_{\Gamma}g\,\nabla{f}\cdot\boldsymbol{n}\,d\Gamma$ (3.170)

Now if we apply Equation (3.170) to the left hand side of Equation (3.167) we get

$\displaystyle \int\limits_{\Omega}\nabla\cdot\left(\nabla{p}\right)w\,d\Omega=\int\limits_{\Gamma}w\,\nabla{p}\cdot\boldsymbol{n}\,d\Gamma-\int\limits_{\Omega}\nabla{p}\cdot\nabla{w}\,d\Omega$ (3.171)

or, from Equation (3.165),

$\displaystyle \int\limits_{\Omega}\nabla\cdot\left(\nabla{p}\right)w\,d\Omega=\int\limits_{\Gamma}w\,\boldsymbol{b}\cdot\boldsymbol{n}\,d\Gamma-\int\limits_{\Omega}\nabla{p}\cdot\nabla{w}\,d\Omega$ (3.172)

Now if we apply Equation (3.169) to the right hand side of Equation (3.167) we get

$\displaystyle \int\limits_{\Omega}\nabla\cdot\boldsymbol{b}\,w\,d\Omega=\int\limits_{\Gamma}w\,\boldsymbol{b}\cdot\boldsymbol{n}\,d\Gamma-\int\limits_{\Omega}\boldsymbol{b}\cdot\nabla{w}\,d\Omega$ (3.173)

Substituting Equation (3.172) and Equation (3.173) into Equation (3.167) gives

$\displaystyle \int\limits_{\Gamma}w\,\boldsymbol{b}\cdot\boldsymbol{n}\,d\Gamma-\int\limits_{\Omega}\nabla{p}\cdot\nabla{w}\,d\Omega=\int\limits_{\Gamma}w\,\boldsymbol{b}\cdot\boldsymbol{n}\,d\Gamma-\int\limits_{\Omega}\boldsymbol{b}\cdot\nabla{w}\,d\Omega$ (3.174)

or

$\displaystyle \int\limits_{\Omega}\nabla{p}\cdot\nabla{w}\,d\Omega=\int\limits_{\Omega}\boldsymbol{b}\cdot\nabla{w}\,d\Omega$ (3.175)

Now, using a standard Galerkin finite element approach, Equation (3.175) can be written in the following form

$\displaystyle \ensuremath{\boldsymbol{K}}_{p}\ensuremath{\boldsymbol{p}}=\ensuremath{\boldsymbol{s}}$ (3.176)

where $ \ensuremath{\boldsymbol{K}}_{p}$ is the pressure stiffness matrix, $ \ensuremath{\boldsymbol{p}}$ the vector of unknown pressure DOFs and $ \ensuremath{\boldsymbol{s}}$ the source vector.

Note that you can write $ \ensuremath{\boldsymbol{s}}$ in terms of $ \ensuremath{\boldsymbol{K}}_{u}$ the velocity stiffness matrix, $ \ensuremath{\boldsymbol{M}}$ the mass matrix, $ \ensuremath{\boldsymbol{u}}$ the vector of known velocity DOFs and $ \ensuremath{ \ensuremath{\boldsymbol{h}}\ensuremath{\left( \ensuremath{\boldsymbol{u}} \right)} }$ the nonlinear Navier-Stokes vector.

Note that $ \ensuremath{\boldsymbol{K}}_{p}$ is most likely singular and therefore you will need to set a reference pressure Dirichlet condition somewhere and measure the pressures relative to this boundary condition value.

Note that this formulation removes all the nasty Neumann conditions in favour of a Dirichlet condition. It also removes the surface integrals in favour of volume integrals!
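A minimal one-dimensional sketch of the resulting system (illustrative Python, not OpenCMISS code; linear elements on $[0,1]$ and a constant source $\boldsymbol{b}$ are assumptions). The assembled $\boldsymbol{K}_{p}$ is singular until one pressure value is pinned; with constant $b$ the weak form $\int p'w'\,dx=\int b\,w'\,dx$ has the exact solution $p=bx$ relative to the pinned value:

```python
def solve(A, rhs):
    """Dense Gaussian elimination with partial pivoting (illustrative)."""
    n = len(rhs)
    A = [row[:] for row in A]
    rhs = rhs[:]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[piv] = A[piv], A[k]
        rhs[k], rhs[piv] = rhs[piv], rhs[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            rhs[i] -= m * rhs[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (rhs[i] - s) / A[i][i]
    return x

n_el, b = 10, 1.0                 # number of elements, constant source b
n = n_el + 1
h = 1.0 / n_el
K = [[0.0] * n for _ in range(n)]
s = [0.0] * n
for e in range(n_el):             # element stiffness (1/h)[[1,-1],[-1,1]]
    for a_ in range(2):
        s[e + a_] += b * (-1.0 if a_ == 0 else 1.0)  # int b*w' dx per element
        for b_ in range(2):
            K[e + a_][e + b_] += (1.0 / h) * (1.0 if a_ == b_ else -1.0)

# K is singular (constants are in its null space): pin p(0) = 0 as the
# reference pressure Dirichlet condition, then solve.
K[0] = [1.0] + [0.0] * (n - 1)
s[0] = 0.0
p = solve(K, s)
```

The recovered nodal pressures follow $p_i = b x_i$, measured relative to the pinned node, which illustrates why a single reference Dirichlet value suffices.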

Electromechanics Class

Electrostatic Equations

Magnetostatic Equations

Maxwell Equations

Bioelectrics Class

Bidomain Equation

The bidomain model can be thought of as two co-existent intracellular and extracellular spaces. The potential in the intracellular space is denoted $ \phi_{i}$ and the potential in the extracellular space is denoted $ \phi_{e}$. The intracellular and extracellular potentials are related through the transmembrane voltage, $ V_{m}$, i.e.,

$\displaystyle V_{m}=\phi_{i}-\phi_{e}$ (3.177)

The bidomain equations are

$\displaystyle A_{m}C_{m}\dfrac{\partial V_{m}}{\partial t}-\nabla\cdot\left(\boldsymbol{\sigma}_{i}\nabla{V_{m}}\right)=\nabla\cdot\left(\boldsymbol{\sigma}_{i}\nabla{\phi_{e}}\right)-A_{m}\left(I_{ion}-I_{m}\right)+i_{i}$ (3.178)
$\displaystyle \nabla\cdot\left(\left(\boldsymbol{\sigma}_{i}+\boldsymbol{\sigma}_{e}\right)\nabla{\phi_{e}}\right)=-\nabla\cdot\left(\boldsymbol{\sigma}_{i}\nabla{V_{m}}\right)-i_{i}+i_{e}$ (3.179)

where $ A_{m}$ is the surface to volume ratio for the cell, $ C_{m}$ is the specific membrane capacitance, $ \ensuremath{\boldsymbol{\sigma}}_{i}$ is the intracellular conductivity tensor, $ \ensuremath{\boldsymbol{\sigma}}_{e}$ is the extracellular conductivity tensor, $ I_{ion}$ is the ionic current, $ I_{m}$ is the transmembrane current, and $ i_{i}$ and $ i_{e}$ are the intracellular and extracellular stimulus currents.

Operator splitting

Consider the initial value problem

$\displaystyle \ensuremath{ \dfrac{ d u}{d t} }$ $\displaystyle =\ensuremath{\left( L_{1}+L_{2} \right)}u$ (3.180)
$\displaystyle \ensuremath{ u\ensuremath{\left( 0 \right)} }$ $\displaystyle =u_{0}$ (3.181)

where $ L_{1}$ and $ L_{2}$ are some operators.

For Godunov splitting we first solve

$\displaystyle \ensuremath{ \dfrac{ d v}{d t} }$ $\displaystyle =L_{1}v$ (3.182)
$\displaystyle \ensuremath{ v\ensuremath{\left( 0 \right)} }$ $\displaystyle =u_{0}$ (3.183)

for $ t\in\ensuremath{\left[ 0,\Delta t \right]}$ to give $ \ensuremath{ v\ensuremath{\left( \Delta t \right)} }$. Next we solve

$\displaystyle \ensuremath{ \dfrac{ d w}{d t} }$ $\displaystyle =L_{2}w$ (3.184)
$\displaystyle \ensuremath{ w\ensuremath{\left( 0 \right)} }$ $\displaystyle =\ensuremath{ v\ensuremath{\left( \Delta t \right)} }$ (3.185)

for $ t\in\ensuremath{\left[ 0,\Delta t \right]}$ to give $ \ensuremath{ w\ensuremath{\left( \Delta t \right)} }$. We now set

$\displaystyle \ensuremath{ \tilde{u}\ensuremath{\left( \Delta t \right)} }=\ensuremath{ w\ensuremath{\left( \Delta t \right)} }$ (3.186)

where $ \tilde{u}$ is an approximate solution to the original initial value problem.

For Strang splitting we first solve

$\displaystyle \ensuremath{ \dfrac{ d v}{d t} }$ $\displaystyle =L_{1}v$ (3.187)
$\displaystyle \ensuremath{ v\ensuremath{\left( 0 \right)} }$ $\displaystyle =u_{0}$ (3.188)

for $ t\in\ensuremath{\left[ 0,\frac{\Delta t}{2} \right]}$ to give $ \ensuremath{ v\ensuremath{\left( \frac{\Delta t}{2} \right)} }$. Next we solve

$\displaystyle \ensuremath{ \dfrac{ d w}{d t} }$ $\displaystyle =L_{2}w$ (3.189)
$\displaystyle \ensuremath{ w\ensuremath{\left( 0 \right)} }$ $\displaystyle =\ensuremath{ v\ensuremath{\left( \frac{\Delta t}{2} \right)} }$ (3.190)

for $ t\in\ensuremath{\left[ 0,\Delta t \right]}$ to give $ \ensuremath{ w\ensuremath{\left( \Delta t \right)} }$. Next we solve

$\displaystyle \ensuremath{ \dfrac{ d v}{d t} }$ $\displaystyle =L_{1}v$ (3.191)
$\displaystyle \ensuremath{ v\ensuremath{\left( \frac{\Delta t}{2} \right)} }$ $\displaystyle =\ensuremath{ w\ensuremath{\left( \Delta t \right)} }$ (3.192)

for $ t\in\ensuremath{\left[ \frac{\Delta t}{2},\Delta t \right]}$ to give $ \ensuremath{ v\ensuremath{\left( \Delta t \right)} }$. We now set

$\displaystyle \ensuremath{ \tilde{u}\ensuremath{\left( \Delta t \right)} }=\ensuremath{ v\ensuremath{\left( \Delta t \right)} }$ (3.193)

where $ \tilde{u}$ is an approximate solution to the original initial value problem.
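The two schemes can be compared on a scalar model problem where both sub-steps are integrable exactly. Here (an illustrative choice, not from the notes) $L_{1}u=-u$ and $L_{2}u=u^{2}$, so $u'=-u+u^{2}$ has the exact solution $u(t)=u_{0}/(u_{0}+(1-u_{0})e^{t})$; Godunov splitting is first-order accurate in $\Delta t$ while Strang splitting is second-order:

```python
import math

def step_L1(v, dt):               # exact solve of v' = -v over dt
    return v * math.exp(-dt)

def step_L2(w, dt):               # exact solve of w' = w^2 over dt
    return w / (1.0 - w * dt)     # valid while w*dt < 1

def godunov(u, dt):               # L1 for dt, then L2 for dt
    return step_L2(step_L1(u, dt), dt)

def strang(u, dt):                # L1 for dt/2, L2 for dt, L1 for dt/2
    return step_L1(step_L2(step_L1(u, dt / 2), dt), dt / 2)

def integrate(stepper, u0, T, n):
    u = u0
    for _ in range(n):
        u = stepper(u, T / n)
    return u

u0, T = 0.1, 1.0
exact = u0 / (u0 + (1.0 - u0) * math.exp(T))
err_g = abs(integrate(godunov, u0, T, 100) - exact)
err_s = abs(integrate(strang, u0, T, 100) - exact)
```

Because the two operators do not commute, Godunov splitting incurs a splitting error proportional to $\Delta t$; the symmetric Strang arrangement cancels the leading error term.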

Monodomain Equation

Under certain conditions the bidomain equations given in Equation (3.178) and Equation (3.179) can be simplified to

$\displaystyle A_{m}C_{m}\dfrac{\partial V_{m}}{\partial t}-\nabla\cdot\left(\boldsymbol{\sigma}_{m}\nabla{V_{m}}\right)=-A_{m}I_{ion}+i_{i}$ (3.194)

where $ A_{m}$ is the surface to volume ratio for the cell, $ C_{m}$ is the specific membrane capacitance, $ I_{ion}$ is the ionic current, $ i_{i}$ is the intracellular stimulus current and $ \ensuremath{\boldsymbol{\sigma}}_{m}$ is the conductivity tensor given by

$\displaystyle \boldsymbol{\sigma}_{m}=\dfrac{\boldsymbol{\sigma}_{i}\boldsymbol{\sigma}_{e}}{\boldsymbol{\sigma}_{i}+\boldsymbol{\sigma}_{e}}$ (3.195)

Rearranging Equation (3.194) gives

$\displaystyle \dfrac{\partial V_{m}}{\partial t}=\dfrac{1}{A_{m}C_{m}}\left(\nabla\cdot\left(\boldsymbol{\sigma}_{m}\nabla{V_{m}}\right)-A_{m}I_{ion}+i_{i}\right)$ (3.196)
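A one-dimensional finite-difference sketch of time-stepping the rearranged monodomain equation with forward Euler (illustrative Python; the linear ionic current $I_{ion}=g(V_{m}-V_{rest})$, the zero stimulus and all parameter values are assumptions, not OpenCMISS defaults). The explicit step is stable only for $\Delta t$ below roughly $A_{m}C_{m}h^{2}/(2\sigma_{m})$:

```python
# Forward-Euler update of dV/dt = (1/(Am*Cm)) * (sigma*Vxx - Am*Iion + ii),
# with ii = 0 and an assumed linear ionic model Iion = g_ion*(V - V_rest).
Am, Cm, sigma, g_ion, V_rest = 1.0, 1.0, 1.0, 1.0, 0.0
h, dt, n = 0.1, 1e-3, 51          # grid spacing, time step, number of nodes

# initial condition: a depolarised bump in the middle, V = V_rest elsewhere
V = [V_rest] * n
for i in range(20, 31):
    V[i] = 1.0

def step(V):
    Vn = V[:]
    for i in range(1, n - 1):     # boundaries held at V_rest
        lap = (V[i + 1] - 2.0 * V[i] + V[i - 1]) / h ** 2
        I_ion = g_ion * (V[i] - V_rest)
        Vn[i] = V[i] + dt * (sigma * lap - Am * I_ion) / (Am * Cm)
    return Vn

for _ in range(200):
    V = step(V)
```

With these values the update satisfies a discrete maximum principle, so the bump diffuses and decays toward $V_{rest}$ without over- or undershooting. In practice $I_{ion}$ comes from a cell model solved via the operator splitting described above.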

Multiphysics Class

Poroelasticity

Coupled Darcy Equation and Elasticity with Sub-iterations

CMISSProblemFiniteElasticityDarcyType couples the finite elasticity equations and Darcy's equation by solving each equation in turn, iterating between the two equations until convergence is reached. The full Darcy equations with velocity and pressure are solved.

Darcy's equation is solved on the deformed geometry and the finite elasticity constitutive relation has a dependence on the Darcy fluid pressure.

Coupled Darcy Fluid Pressure and Elasticity

The fully dynamic governing equations for poromechanics are described in (, ). CMISSProblemfiniteElasticityFluidPressureType implements a static form of these equations and strongly couples the equations for finite elasticity to an equation describing the Darcy fluid pressure. Darcy's equation describing the fluid flow is substituted into the mass conservation equation to give:

$\displaystyle \ensuremath{\ensuremath{\nabla}\cdot} \left(\frac{\ensuremath{\boldsymbol{K}}}{\mu} \ensuremath{\ensuremath{\nabla}p} \right) = 0$ (3.197)

A static form of the constitutive law developed by (, ) is implemented in the equations set subtype CMISSEquationsSetElasticityFluidPressureStaticInriaSubtype.

Modal Class

Fitting Class

Optimisation Class


Analytic Solutions

Classical Field Class

Diffusion Equation

One Dimensional Analytic Function 1

From http://eqworld.ipmnet.ru/en/solutions/lpde/lpde101.pdf we have for the one-dimensional diffusion equation

$\displaystyle \dfrac{\partial u}{\partial t}=a\dfrac{\partial^{2} u}{\partial x^{2}}$ (4.1)

the analytic solution

$\displaystyle \ensuremath{ u\ensuremath{\left( x,t \right)} }=Ae^{-a\mu^{2}t}\cos(\mu x + B) + C$ (4.2)

Now, OpenCMISS solves diffusion equations of the form

$\displaystyle \dfrac{\partial u}{\partial t}+k\dfrac{\partial^{2} u}{\partial x^{2}}=0$ (4.3)

and therefore $ a=-k$. Choosing $ \mu=\dfrac{2\pi}{L}$ gives the OpenCMISS diffusion equation one dimensional analytic function 1, namely

$\displaystyle \ensuremath{ u\ensuremath{\left( x,t \right)} }=Ae^{\frac{4\pi^{2}kt}{L^{2}}}\cos(\frac{2\pi x}{L} + B) + C$ (4.4)

To prove the analytic solution we differentiate Equation (4.4) to give

$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}u}{\ensuremath{\partial}t} }$ $\displaystyle = \dfrac{4\pi^{2}k}{L^{2}}Ae^{\frac{4\pi^{2}kt}{L^{2}}}\cos(\frac{2\pi x}{L} + B)$ (4.5)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}u}{\ensuremath{\partial}x} }$ $\displaystyle = \dfrac{-2\pi}{L}Ae^{\frac{4\pi^{2}kt}{L^{2}}}\sin(\frac{2\pi x}{L} + B)$ (4.6)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}^{2} u}{\ensuremath{\partial}{x}^{2}} }$ $\displaystyle = \dfrac{2\pi}{L}\dfrac{-2\pi}{L}Ae^{\frac{4\pi^{2}kt}{L^{2}}}\cos(\frac{2\pi x}{L} + B)$ (4.7)
  $\displaystyle = \dfrac{-4\pi^{2}}{L^{2}}Ae^{\frac{4\pi^{2}kt}{L^{2}}}\cos(\frac{2\pi x}{L} + B)$ (4.8)

Substituting Equations (4.5) and (4.8) into Equation (4.3) gives

$\displaystyle \dfrac{\partial u}{\partial t}+k\dfrac{\partial^{2} u}{\partial x^{2}}$ $\displaystyle = \dfrac{4\pi^{2}k}{L^{2}}Ae^{\frac{4\pi^{2}kt}{L^{2}}}\cos(\frac{2\pi x}{L} + B)-k\dfrac{4\pi^{2}}{L^{2}}Ae^{\frac{4\pi^{2}kt}{L^{2}}}\cos(\frac{2\pi x}{L} + B)$ (4.9)
  $\displaystyle = 0$ (4.10)

The analytic field component definitions in OpenCMISS are shown in Table 4.1.


Table 4.1: OpenCMISS analytic field components for the one-dimension diffusion equation analytic function 1.
Analytic constant Analytic field component
$ A$ $ 1$
$ B$ $ 2$
$ C$ $ 3$
$ L$ $ 4$
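The proof above can also be double-checked numerically by substituting Equation (4.4) into Equation (4.3) with central differences (illustrative Python; the parameter values are arbitrary assumptions):

```python
import math

A, B, C, L, k = 1.0, 0.3, 2.0, 2.0, 1.0

def u(x, t):   # Equation (4.4)
    return A * math.exp(4 * math.pi**2 * k * t / L**2) \
             * math.cos(2 * math.pi * x / L + B) + C

def residual(x, t, hx=1e-4, ht=1e-5):
    """Central-difference evaluation of u_t + k*u_xx, which should be ~0."""
    u_t = (u(x, t + ht) - u(x, t - ht)) / (2 * ht)
    u_xx = (u(x + hx, t) - 2 * u(x, t) + u(x - hx, t)) / hx**2
    return u_t + k * u_xx

worst = max(abs(residual(x, t)) for x in (0.1, 0.7, 1.3) for t in (0.05, 0.2))
```

The residual is nonzero only through finite-difference truncation and round-off, confirming that (4.4) satisfies the OpenCMISS form of the equation.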


One Dimensional Quadratic Source Analytic Function 1

From http://eqworld.ipmnet.ru/en/solutions/npde/npde1104.pdf we have for the one-dimensional polynomial source diffusion equation

$\displaystyle \dfrac{\partial u}{\partial t}=\dfrac{\partial^{2} u}{\partial x^{2}}+qu+ru^{m}$ (4.11)

the analytic solution

$\displaystyle u\left(x,t\right)=\left[\beta+Ae^{\left(\lambda t\pm\mu x\right)}\right]^{\frac{2}{1-m}}$ (4.12)

where

$\displaystyle \beta=\sqrt{\dfrac{-r}{q}}$ (4.13)

and

$\displaystyle \lambda=\dfrac{q\ensuremath{\left( 1-m \right)}\ensuremath{\left( m+3 \right)}}{2\ensuremath{\left( m+1 \right)}}$ (4.14)

and

$\displaystyle \mu=\sqrt{\dfrac{q\ensuremath{\left( 1-m \right)}^{2}}{2\ensuremath{\left( m+1 \right)}}}$ (4.15)

Now, OpenCMISS solves quadratic source diffusion equations of the form

$\displaystyle \dfrac{\partial u}{\partial t}+k\dfrac{\partial^{2} u}{\partial x^{2}}+a+bu+cu^{2}=0$ (4.16)

and therefore $ k=-1$, $ m=2$, $ a=0$, $ q=-b$ and $ r=-c$. Choosing the positive wave gives the OpenCMISS quadratic source diffusion equation one dimensional analytic function 1, namely

$\displaystyle \ensuremath{ u\ensuremath{\left( x,t \right)} }=\dfrac{1}{\ensuremath{\left[ \beta+Ae^{\ensuremath{\left( \lambda t + \mu x \right)}} \right]}^{2}}$ (4.17)

where

$\displaystyle \beta=\sqrt{\dfrac{-c}{b}}$ (4.18)

and

$\displaystyle \lambda=\dfrac{5b}{6}$ (4.19)

and

$\displaystyle \mu=\sqrt{\dfrac{-b}{6}}$ (4.20)
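With the definitions above, the solution (4.17) can be checked numerically before differentiating by hand. Taking, as an arbitrary illustrative choice, $b=-6$ and $c=6$ gives $\beta=1$, $\lambda=-5$ and $\mu=1$, and the central-difference residual of Equation (4.16) should vanish:

```python
import math

b, c, A = -6.0, 6.0, 1.0
beta = math.sqrt(-c / b)          # = 1
lam = 5.0 * b / 6.0               # = -5
mu = math.sqrt(-b / 6.0)          # = 1
k, a = -1.0, 0.0                  # OpenCMISS form constants

def u(x, t):   # Equation (4.17)
    return 1.0 / (beta + A * math.exp(lam * t + mu * x)) ** 2

def residual(x, t, hx=1e-4, ht=1e-5):
    """u_t + k*u_xx + a + b*u + c*u^2 evaluated with central differences."""
    u_t = (u(x, t + ht) - u(x, t - ht)) / (2 * ht)
    u_xx = (u(x + hx, t) - 2 * u(x, t) + u(x - hx, t)) / hx**2
    return u_t + k * u_xx + a + b * u(x, t) + c * u(x, t) ** 2

worst = max(abs(residual(x, t)) for x in (-0.5, 0.2, 0.8) for t in (0.1, 0.3))
```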

To prove the analytic solution we differentiate Equation (4.17) to give

$\displaystyle \dfrac{\partial u}{\partial t}$ $\displaystyle = \dfrac{-2\lambda Ae^{\left(\lambda t+\mu x\right)}}{\left[\beta+Ae^{\left(\lambda t+\mu x\right)}\right]^{3}}$ (4.21)
$\displaystyle \dfrac{\partial u}{\partial x}$ $\displaystyle = \dfrac{-2\mu Ae^{\left(\lambda t+\mu x\right)}}{\left[\beta+Ae^{\left(\lambda t+\mu x\right)}\right]^{3}}$ (4.22)
$\displaystyle \dfrac{\partial^{2} u}{\partial x^{2}}$ $\displaystyle = \dfrac{-2\mu^{2}Ae^{\left(\lambda t+\mu x\right)}}{\left[\beta+Ae^{\left(\lambda t+\mu x\right)}\right]^{3}}+\dfrac{6\mu^{2}A^{2}e^{2\left(\lambda t+\mu x\right)}}{\left[\beta+Ae^{\left(\lambda t+\mu x\right)}\right]^{4}}$ (4.23)

The analytic field component definitions in OpenCMISS are shown in Table 4.2.


Table 4.2: OpenCMISS analytic field components for the one-dimension quadratic source diffusion equation analytic function 1.
Analytic constant Analytic field component
$ A$ $ 1$


One Dimensional Exponential Source Analytic Function 1

From http://eqworld.ipmnet.ru/en/solutions/npde/npde1105.pdf we have for the one-dimensional exponential source diffusion equation

$\displaystyle \dfrac{\partial u}{\partial t}=\dfrac{\partial^{2} u}{\partial x^{2}}+q+re^{\lambda u}$ (4.24)

the analytic solution

$\displaystyle u\left(x,t\right)=\dfrac{-2}{\lambda}\ln\left[\beta+Ae^{\left(\pm\mu x-\frac{1}{2}q\lambda t\right)}\right]$ (4.25)

where

$\displaystyle \beta=\sqrt{\dfrac{-r}{q}}$ (4.26)

and

$\displaystyle \mu=\sqrt{\dfrac{q\lambda}{2}}$ (4.27)

Now, OpenCMISS solves diffusion equations of the form

$\displaystyle \dfrac{\partial u}{\partial t}+k\dfrac{\partial^{2} u}{\partial x^{2}}+a+be^{cu}=0$ (4.28)

and therefore $ k=-1$, $ q=-a$ and $ r=-b$. Choosing the positive wave gives the OpenCMISS exponential source diffusion equation one dimensional analytic function 1, namely

$\displaystyle \ensuremath{ u\ensuremath{\left( x,t \right)} }=\dfrac{-2}{c}\ln\ensuremath{\left[ \beta+Ae^{\mu\ensuremath{\left( x - \mu t \right)}} \right]}$ (4.29)

where

$\displaystyle \beta=\sqrt{\dfrac{-b}{a}}$ (4.30)

and

$\displaystyle \mu=\sqrt{\dfrac{-ac}{2}}$ (4.31)
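As for the quadratic source case, the solution (4.29) can be checked numerically. Taking, as an arbitrary illustrative choice, $a=-2$, $b=2$ and $c=1$ gives $\beta=1$ and $\mu=1$, and the central-difference residual of Equation (4.28) should vanish:

```python
import math

a, b, c, A = -2.0, 2.0, 1.0, 1.0
beta = math.sqrt(-b / a)          # = 1
mu = math.sqrt(-a * c / 2.0)      # = 1
k = -1.0

def u(x, t):   # Equation (4.29)
    return (-2.0 / c) * math.log(beta + A * math.exp(mu * (x - mu * t)))

def residual(x, t, hx=1e-4, ht=1e-5):
    """u_t + k*u_xx + a + b*exp(c*u) evaluated with central differences."""
    u_t = (u(x, t + ht) - u(x, t - ht)) / (2 * ht)
    u_xx = (u(x + hx, t) - 2 * u(x, t) + u(x - hx, t)) / hx**2
    return u_t + k * u_xx + a + b * math.exp(c * u(x, t))

worst = max(abs(residual(x, t)) for x in (-0.5, 0.4) for t in (0.1, 0.5))
```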

To prove the analytic solution we differentiate Equation (4.29) to give .....

The analytic field component definitions in OpenCMISS are shown in Table 4.3.


Table 4.3: OpenCMISS analytic field components for the one-dimension exponential source diffusion equation analytic function 1.
Analytic constant Analytic field component
$ A$ $ 1$


Elasticity Class

Fluid Mechanics Class

Burgers' Equation

One Dimensional Analytic Function 1

The one-dimensional Burgers' equation in OpenCMISS form can be written as

$\displaystyle a\dfrac{\partial u}{\partial t}+b\dfrac{\partial^{2} u}{\partial x^{2}}+cu\dfrac{\partial u}{\partial x}=0$ (4.32)

Adapted from http://eqworld.ipmnet.ru/en/solutions/npde/npde1301.pdf we have for the one-dimensional Burgers' equation the analytic solution

$\displaystyle \ensuremath{ u\ensuremath{\left( x,t \right)} }=\dfrac{A+ax}{B+ct}$ (4.33)

To prove the analytic solution we differentiate Equation (4.33) to give

$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}u}{\ensuremath{\partial}t} }$ $\displaystyle = \dfrac{-c\ensuremath{\left( A+ax \right)}}{\ensuremath{\left( B+ct \right)}^{2}}$ (4.34)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}u}{\ensuremath{\partial}x} }$ $\displaystyle = \dfrac{a}{\ensuremath{\left( B+ct \right)}}$ (4.35)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}^{2} u}{\ensuremath{\partial}{x}^{2}} }$ $\displaystyle = 0$ (4.36)
$\displaystyle u\ensuremath{ \dfrac{\ensuremath{\partial}u}{\ensuremath{\partial}x} }$ $\displaystyle = \dfrac{a\ensuremath{\left( A+ax \right)}}{\ensuremath{\left( B+ct \right)}^{2}}$ (4.37)
Substituting Equations (4.34)-(4.37) into Equation (4.32) gives

$\displaystyle a\dfrac{\partial u}{\partial t}+b\dfrac{\partial^{2} u}{\partial x^{2}}+cu\dfrac{\partial u}{\partial x}$ $\displaystyle = \dfrac{-ac\left(A+ax\right)}{\left(B+ct\right)^{2}}+0+\dfrac{ac\left(A+ax\right)}{\left(B+ct\right)^{2}}$ (4.39)
  $\displaystyle = 0$ (4.40)

The analytic field component definitions in OpenCMISS are shown in Table 4.4.


Table 4.4: OpenCMISS analytic field components for the one-dimensional Burgers' equation analytic function 1.
Analytic constant Analytic field component
$ A$ $ 1$
$ B$ $ 2$
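The proof above can also be confirmed numerically by substituting Equation (4.33) into Equation (4.32) with central differences (illustrative Python; the constants are arbitrary assumptions):

```python
a, b, c = 1.0, 2.0, 3.0
A, B = 1.0, 2.0

def u(x, t):   # Equation (4.33)
    return (A + a * x) / (B + c * t)

def residual(x, t, hx=1e-4, ht=1e-5):
    """a*u_t + b*u_xx + c*u*u_x evaluated with central differences."""
    u_t = (u(x, t + ht) - u(x, t - ht)) / (2 * ht)
    u_x = (u(x + hx, t) - u(x - hx, t)) / (2 * hx)
    u_xx = (u(x + hx, t) - 2 * u(x, t) + u(x - hx, t)) / hx**2
    return a * u_t + b * u_xx + c * u(x, t) * u_x

worst = max(abs(residual(x, t)) for x in (0.0, 0.5, 1.5) for t in (0.1, 0.8))
```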


One Dimensional Analytic Function 2

Adapted from http://eqworld.ipmnet.ru/en/solutions/npde/npde1301.pdf we have for the one-dimensional Burgers' equation the analytic solution

$\displaystyle \ensuremath{ u\ensuremath{\left( x,t \right)} }=aA+\dfrac{2b}{c\ensuremath{\left( x-cAt+B \right)}}$ (4.41)

To prove the analytic solution we differentiate Equation (4.41) to give

$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}u}{\ensuremath{\partial}t} }$ $\displaystyle = \dfrac{2bcA}{c\ensuremath{\left( x-cAt+B \right)}^{2}}$ (4.42)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}u}{\ensuremath{\partial}x} }$ $\displaystyle = \dfrac{-2b}{c\ensuremath{\left( x-cAt+B \right)}^{2}}$ (4.43)
$\displaystyle \ensuremath{ \dfrac{\ensuremath{\partial}^{2} u}{\ensuremath{\partial}{x}^{2}} }$ $\displaystyle = \dfrac{4b}{c\ensuremath{\left( x-cAt+B \right)}^{3}}$ (4.44)
$\displaystyle u\ensuremath{ \dfrac{\ensuremath{\partial}u}{\ensuremath{\partial}x} }$ $\displaystyle = \dfrac{-2abA}{c\ensuremath{\left( x-cAt+B \right)}^{2}} - \dfrac{4b^{2}}{c^{2}\ensuremath{\left( x-cAt+B \right)}^{3}}$ (4.45)
Substituting Equations (4.42)-(4.45) into Equation (4.32) gives

$\displaystyle a\dfrac{\partial u}{\partial t}+b\dfrac{\partial^{2} u}{\partial x^{2}}+cu\dfrac{\partial u}{\partial x}$ $\displaystyle = \dfrac{2abcA}{c\left(x-cAt+B\right)^{2}}+\dfrac{4b^{2}}{c\left(x-cAt+B\right)^{3}}$ (4.47)
  $\displaystyle \qquad -\dfrac{2abcA}{c\left(x-cAt+B\right)^{2}}-\dfrac{4b^{2}c}{c^{2}\left(x-cAt+B\right)^{3}}$ (4.48)
  $\displaystyle =\dfrac{2abcA-2abcA}{c\ensuremath{\left( x-cAt+B \right)}^{2}}+\dfrac{4b^{2}-4b^{2}}{c\ensuremath{\left( x-cAt+B \right)}^{3}}$ (4.49)
  $\displaystyle = 0$ (4.50)

The analytic field component definitions in OpenCMISS are shown in Table 4.5.


Table 4.5: OpenCMISS analytic field components for the one-dimension Burgers' equation analytic function 2.
Analytic constant Analytic field component
$ A$ $ 1$
$ B$ $ 2$
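The same numerical substitution applies to the second analytic function (illustrative Python; constants are arbitrary assumptions, and the sample points are chosen so that the denominator $x-cAt+B$ stays away from zero):

```python
a, b, c = 1.0, 2.0, 3.0
A, B = 1.0, 2.0

def u(x, t):   # Equation (4.41)
    return a * A + 2.0 * b / (c * (x - c * A * t + B))

def residual(x, t, hx=1e-4, ht=1e-5):
    """a*u_t + b*u_xx + c*u*u_x evaluated with central differences."""
    u_t = (u(x, t + ht) - u(x, t - ht)) / (2 * ht)
    u_x = (u(x + hx, t) - u(x - hx, t)) / (2 * hx)
    u_xx = (u(x + hx, t) - 2 * u(x, t) + u(x - hx, t)) / hx**2
    return a * u_t + b * u_xx + c * u(x, t) * u_x

worst = max(abs(residual(x, t)) for x in (0.5, 1.0) for t in (0.1, 0.3))
```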


Electromechanics Class

Bioelectrics Class

Modal Class

Fitting Class

Optimisation Class


Solvers

Linear Solvers

Linear Direct Solvers

Linear Iterative Solvers

Nonlinear Solvers

Newton Solvers

Newton Line Search Solvers

Newton Trust Region Solvers

Quasi-Newton Solvers

Broyden-Fletcher-Goldfarb-Shanno (BFGS) Solvers

Sequential Quadratic Program (SQP) Solvers

Dynamic Solvers

Differential-Algebraic Equation (DAE) Solvers

Eigenproblem Solvers

Optimisation Solvers


Coupling

There are two general forms of coupling that concern us, which we will term volume-coupling and interface-coupling. A volume-coupled problem is one where several equations are defined over the whole of a region and communicate over the entirety of that region (although the coupling terms may be zero in certain parts of the domain). An interface-coupled problem concerns those cases where two, or more, separate regions interact at a common interface. In 3-d this interface is a surface, and in 2-d it is a line. A typical example of this type of problem is fluid-structure interaction. An example of a volume-coupled problem is the double porosity network model, often used in porous flow modelling for geophysics (citation). Clearly, it is conceivable that both types of coupling could exist in a single problem, for example, if one wished to model cardiac electrophysiology, finite deformation solid mechanics and fluid mechanics within the ventricle.

Volume coupling

Two solution strategies are available to us within OpenCMISS, a partitioned approach and a monolithic approach. A partitioned solution strategy involves solving each separate equation in turn, passing the relevant information from one equation to the next. In certain cases, sub-iteration may be required to ensure that a converged solution is reached. For coupling together only two equations this can be an attractive option, however, it can become unwieldy if many equations are to be coupled together. In contrast, the aim of a monolithic strategy is to solve all the equations together, in one solution process, by computing the matrix form of each equation, and then assembling all of these forms into one large matrix system, which is then passed to the solver e.g. PETSc. Again, it is possible to use both strategies within the same problem, if required. For example, if the time scale of one process is much slower than the others, it may be better to solve this separately, and for a larger time step, than the other equations.

Common aspects

Problem & equation class/type/subtype

For a new coupled problem it is necessary to define appropriate new classes, types & subtypes for the problem and the equations. For a single physics case the problem hierarchy takes the following form (as appropriate):

  1. PROBLEM_CLASSICAL_FIELD_CLASS
  2. PROBLEM_POISSON_EQUATION_TYPE
  3. PROBLEM_LINEAR_SOURCE_POISSON_SUBTYPE
For a coupled system, with equations, PHYSICS_ONE and PHYSICS_TWO the hierarchy should approximately follow:
  1. PROBLEM_MULTI_PHYSICS_CLASS
  2. PROBLEM_PHYS_ONE_PHYS_TWO_EQUATION_TYPE
  3. PROBLEM_PHYS_ONE_PHYS_TWO_EQUATION_STANDARD_SUBTYPE

However, for the equations type hierarchy, it is usually not necessary to use the multi-physics equations class. For example, the types could be declared in the following way for the first equation:

  1. EQUATIONS_SET_CLASSICAL_FIELD_CLASS
  2. EQUATIONS_SET_PHYS_ONE_EQUATION_TYPE
  3. EQUATIONS_SET_PHYS_ONE_COUPLED_PHYS_TWO_EQUATION_SUBTYPE
and for the second equation:
  1. EQUATIONS_SET_CLASSICAL_FIELD_CLASS
  2. EQUATIONS_SET_PHYS_TWO_EQUATION_TYPE
  3. EQUATIONS_SET_PHYS_TWO_COUPLED_PHYS_ONE_EQUATION_SUBTYPE

The obvious implication of the above is that a new PHYS_ONE_PHYS_TWO_EQUATION_ROUTINES.f90 must be created, which in principle will include all the same subroutines as any other equations_routines.f90 file e.g. finite element calculate, equations set setup, problem setup. However, usually only the problem related information needs to be setup in the new file, with the equations specific setup included within the separate equations routines of the component physics (using CASE statements on the subtype to select the appropriate sections of code). More details on this will be given in Sections 6.1.3 & 6.1.4.

Dependent field

For a single physics problem a dependent field will usually possess two field variable types: the U variable type (for the left hand side) and the delUdelN type (for the right hand side). For a multi-physics problem we must define additional field variable types, typically two for every additional equations set. The list of possible field variable types is given in field_routines.f90. Whilst it is sensible to assign additional field variables in ascending order, this is not necessary, except that a dependent field must have at least a U variable type. Attempting to set up a dependent field with only V and delVdelN variable types will result in an error.

As there is only a single dependent field, its setup need only occur in the first of the equations set's setup routines, with the remaining equations set merely checking that the correct field setup has occurred. The key subroutine calls that must be made (either within the library for an auto-created field, or within the example file for a user-specified field) are:


\begin{lstlisting}
CALL FIELD_NUMBER_OF_VARIABLES_CHECK(EQUATIONS_SET_SETUP& ERR...
...IABLE_TYPE,FIELD_DELVDELN_VARIABLE_TYPE/), &
& ERR,ERROR,*999)
\end{lstlisting}
Then the field dimension, field data type and field number of components must be set/checked for each variable type that has been defined on the field.

Equations set field

The equations set field is a mandatory data structure that must be defined for all problems. If only a single equations set is being used for that subtype it does not need to be initialised; however, a variable still needs to be defined in the example file and passed in to the equations set create start:


\begin{lstlisting}
CALL CMISSEquationsSetCreateStart(EquationsSetUserNumberDiff...
...ion(icompartment),EquationsSetDiffusion(icompartment),&
& Err)
\end{lstlisting}

Need to describe the setup of the equations set field. Number of components, data type etc.

Other fields

Although there is only a single dependent field, we still define separate fields of other types for each equations set, that is, separate materials, source and independent fields, as appropriate for the particular problem under investigation.

Boundary conditions

For each equations set there must be a separate boundary condition object defined as standard. However, it is important to ensure that the correct field variable type is passed into the call to CMISSBoundaryConditionsSetNode, in agreement with the field variable associated with the particular equations set.

Equations mapping


Partitioned scheme specifics

The setup of the partitioned solution scheme is placed in the new PHYS_ONE_PHYS_TWO_EQUATION_ROUTINES.f90 file. Specifically, the following subroutines will need to be completed:
  1. PHYS_ONE_PHYS_TWO_PRE_SOLVE
  2. PHYS_ONE_PHYS_TWO_POST_SOLVE
  3. PHYS_ONE_PHYS_TWO_PROBLEM_SETUP
  4. PHYS_ONE_PHYS_TWO_PROBLEM_SUBTYPE_SET

The pre and post solve routines will call various specific pre/post-solve routines, such as output data, update boundary conditions etc. These pre/post-solve routines can be those specific to PHYS_ONE and PHYS_TWO, and contained within PHYS_ONE_EQUATION_ROUTINES.f90 and PHYS_TWO_EQUATION_ROUTINES.f90 to eliminate excessive code duplication.

PHYS_ONE_PHYS_TWO_PROBLEM_SETUP defines the various solvers required for each equation. Some key subroutine calls required are:


\begin{lstlisting}
CALL SOLVERS_CREATE_START(CONTROL_LOOP,SOLVERS,ERR,ERROR,*999...
...HYSICS_TWO,ERR,ERROR,*999)
!
!Set properties of solver two...
!
\end{lstlisting}


Monolithic scheme specifics

Coupling together equations of differing type/subtype

Not yet attempted.

Coupling together several equations of the same type/subtype

For certain physical problems several equations of the same type must be coupled in the same domain. This means that the equations_set_subtype is no longer sufficient to distinguish between the equations and the specific behaviour desired for each. In these cases it is necessary to make use of the equations_set_field structure, which is defined for each equations_set and contains two integers. The first integer is the id number of that particular equations_set; the second defines the total number of equations_sets of that type. Obtaining the first integer from the equations_set_field within the finite_element_calculate subroutine allows the correct field_variable_type to be identified for interpolation purposes, as well as enabling an automated setup of the dependent field, equations mappings etc.

Once converted into a finite element formulation, the diffusion equation has a matrix form similar to:

$\displaystyle \mathbf{M}_1\frac{d \mathbf{c}_1}{dt} + \mathbf{K}_1 \mathbf{c}_1 = \mathbf{f}_1$     (6.1)

Solving three such diffusion equations monolithically results in the following matrix system (before discretisation in time):

$\displaystyle \left( \begin{array}{ccc}
\mathbf{M}_1 & {\boldsymbol{0}} & {\boldsymbol{0}} \\
{\boldsymbol{0}} & \mathbf{M}_2 & {\boldsymbol{0}} \\
{\boldsymbol{0}} & {\boldsymbol{0}} & \mathbf{M}_3
\end{array} \right) \frac{d}{dt} \left[ \begin{array}{c}
\mathbf{c}_1\\
\mathbf{c}_2\\
\mathbf{c}_3
\end{array} \right] + \left( \begin{array}{ccc}
\mathbf{K}_1 & {\boldsymbol{0}} & {\boldsymbol{0}} \\
{\boldsymbol{0}} & \mathbf{K}_2 & {\boldsymbol{0}} \\
{\boldsymbol{0}} & {\boldsymbol{0}} & \mathbf{K}_3
\end{array} \right) \left[ \begin{array}{c}
\mathbf{c}_1\\
\mathbf{c}_2\\
\mathbf{c}_3
\end{array} \right] = \left[ \begin{array}{c}
\mathbf{f}_1\\
\mathbf{f}_2\\
\mathbf{f}_3
\end{array} \right]$ (6.2)

Considering a general case of $ N$ diffusion equations, there will be $ N$ equations sets, with $ N$ respective materials fields to store the diffusion coefficient tensor for each equation. If we now require the equations to be coupled, there will also be coefficients that determine the strength of this coupling: a further $ N$ coefficients for each equations set. Rather than creating a further $ N$ materials fields, an extra variable type is defined for the existing fields, for simplicity the V variable type. The V variable type is set to be a vector dimension type with $ N$ components. With coupling terms the matrix system is:

$\displaystyle \left( \begin{array}{ccc}
\mathbf{M}_1 & {\boldsymbol{0}} & {\boldsymbol{0}} \\
{\boldsymbol{0}} & \mathbf{M}_2 & {\boldsymbol{0}} \\
{\boldsymbol{0}} & {\boldsymbol{0}} & \mathbf{M}_3
\end{array} \right) \frac{d}{dt} \left[ \begin{array}{c}
\mathbf{c}_1\\
\mathbf{c}_2\\
\mathbf{c}_3
\end{array} \right] + \left( \begin{array}{ccc}
\mathbf{K}_1 & {\boldsymbol{0}} & {\boldsymbol{0}} \\
{\boldsymbol{0}} & \mathbf{K}_2 & {\boldsymbol{0}} \\
{\boldsymbol{0}} & {\boldsymbol{0}} & \mathbf{K}_3
\end{array} \right) \left[ \begin{array}{c}
\mathbf{c}_1\\
\mathbf{c}_2\\
\mathbf{c}_3
\end{array} \right]$ (6.3)

$\displaystyle + \left( \begin{array}{ccc}
-\boldsymbol{\lambda}_{12}-\boldsymbol{\lambda}_{13} & \boldsymbol{\lambda}_{12} & \boldsymbol{\lambda}_{13} \\
\boldsymbol{\lambda}_{21} & -\boldsymbol{\lambda}_{21}-\boldsymbol{\lambda}_{23} & \boldsymbol{\lambda}_{23} \\
\boldsymbol{\lambda}_{31} & \boldsymbol{\lambda}_{32} & -\boldsymbol{\lambda}_{31}-\boldsymbol{\lambda}_{32}
\end{array} \right) \left[ \begin{array}{c}
\mathbf{c}_1\\
\mathbf{c}_2\\
\mathbf{c}_3
\end{array} \right] = \left[ \begin{array}{c}
\mathbf{f}_1\\
\mathbf{f}_2\\
\mathbf{f}_3
\end{array} \right]$ (6.4)

For a standard diffusion equation there are two dynamic matrices defined, stiffness and damping, which are mapped to the U variable type. To store the coupling contributions from the additional equations, extra matrices must be defined. An array of $ N-1$ linear matrices are defined, which map to the $ N-1$ other field variable types (V, U1, U2, etc). The terms on the diagonal of the global coupling matrix must be included within the stiffness matrix, as it is not possible to have both a dynamic matrix (stiffness) and an additional linear matrix mapping to the same field variable type (the library will be modified in future to allow this, see tracker item 2741 to check progress on this feature).
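To make the block structure above concrete, the following Python sketch (illustrative only; it does not use the OpenCMISS data structures or API) assembles global damping and stiffness matrices for $ N$ coupled diffusion equations, folding the diagonal coupling terms into the stiffness blocks as described above:

```python
import numpy as np

def assemble_coupled_system(M_blocks, K_blocks, lam):
    """Assemble the monolithic system for N coupled diffusion equations.

    M_blocks, K_blocks: lists of N (n x n) damping/stiffness matrices.
    lam: (N x N) array of coupling coefficients; lam[i, j] couples
         compartment i to compartment j (diagonal entries are ignored).
    Returns the global damping and stiffness matrices, with the coupling
    contributions included in the global stiffness matrix.
    """
    N = len(M_blocks)
    n = M_blocks[0].shape[0]
    M = np.zeros((N * n, N * n))
    K = np.zeros((N * n, N * n))
    I = np.eye(n)
    for i in range(N):
        sl_i = slice(i * n, (i + 1) * n)
        M[sl_i, sl_i] = M_blocks[i]
        K[sl_i, sl_i] = K_blocks[i]
        for j in range(N):
            if j == i:
                continue
            sl_j = slice(j * n, (j + 1) * n)
            # The -lambda_ij term sits on the block diagonal, so it is
            # added into the stiffness block (as in OpenCMISS, where a
            # dynamic matrix and a linear matrix cannot both map to the
            # same variable type); +lambda_ij fills the off-diagonal block.
            K[sl_i, sl_i] -= lam[i, j] * I
            K[sl_i, sl_j] += lam[i, j] * I
    return M, K
```

Note that each block row of the coupling contribution sums to zero, reflecting conservation of the exchanged quantity between compartments.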

For a simple implementation of this, refer to the example /examples/MultiPhysics/CoupledDiffusionDiffusion/MonolithicSchemeTest.

Setting up the solver

A single solver is defined, but multiple solver equations are added to it.

Show some example lines for the mappings that need to be defined in the set setup.

In the set_linear_setup, for the setup case(EQUATIONS_SET_SETUP_FINISH_ACTION), the appropriate field variable must be mapped to the particular equations set that is being set up. This requires a certain degree of hard coding and a priori knowledge of the field variable types available, and the order in which they will be assigned to equations sets.

First, retrieve the equations set field information and store it in a local array.
\begin{lstlisting}
EQUATIONS_SET_FIELD_FIELD=>EQUATIONS_SET& EQUATIONS_SET_FIELD...
...S_SET_FIELD_DATA(1)
Ncompartments = EQUATIONS_SET_FIELD_DATA(2)
\end{lstlisting}

Here, arrays that store the mapping between the equations set id and the field variable type are allocated and initialised, for use when setting the mappings.


\begin{lstlisting}
CALL EQUATIONS_MAPPING_LINEAR_MATRICES_NUMBER_SET &
& (EQUAT...
...U_TYPES(num_var_count)=VARIABLE_TYPES(2*num_var-1)
ENDIF
ENDDO
\end{lstlisting}
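Schematically, the bookkeeping can be pictured as follows. This Python fragment is purely illustrative: the names in VARIABLE_TYPES are placeholders, not the real OpenCMISS field variable type constants, and the interleaved ordering mirrors the VARIABLE_TYPES(2*num_var-1) indexing in the Fortran snippet above:

```python
# Placeholder ordering mimicking the interleaved list used above, where
# odd slots hold the "value" variable types and even slots the matching
# normal-derivative (delUdelN-style) types. These names are hypothetical.
VARIABLE_TYPES = ["U", "delUdelN", "V", "delVdelN", "U1", "delU1delN"]

def value_variable_type(num_var):
    """Value variable type owned by compartment num_var (1-based).

    Equivalent to Fortran's VARIABLE_TYPES(2*num_var-1), written here
    with Python's 0-based indexing.
    """
    if not 1 <= num_var <= len(VARIABLE_TYPES) // 2:
        raise ValueError("not enough variable types defined")
    return VARIABLE_TYPES[2 * num_var - 2]
```

Each compartment thus owns one value variable type, assigned in a fixed order, which is what allows the automated setup of the mappings.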

Finally, set the mapping for the dynamic variable (as standard for the diffusion equation), the linear variable types (for the coupling matrices), the right hand side variable (i.e. delUdelN types) and the source variable. Note that the source variable that is mapped is always the FIELD_U_VARIABLE because we use separate source fields (containing only a U variable) for each equations set.


\begin{lstlisting}
CALL EQUATIONS_MAPPING_DYNAMIC_VARIABLE_TYPE_SET &
& (EQUATI...
...T &
& (EQUATIONS_MAPPING,FIELD_U_VARIABLE_TYPE,ERR,ERROR,*999)
\end{lstlisting}

For a monolithic solution step there is only a single solver equation to setup, but several equations set equations to map into this. In the example file this is done in the following way:


\begin{lstlisting}
DO icompartment=1,Ncompartments
CALL CMISSSolverEquationsEqu...
...nsSetDiffusion(icompartment),&
& EquationsSetIndex,Err)
ENDDO
\end{lstlisting}

IO

Currently field_IO_routines does not support the export of fields containing field variable types other than U and delUdelN. Therefore, it is necessary to use the FieldML output routines, and output separate files, one for each variable type, using calls of the following form:


\begin{lstlisting}
CALL FieldmlOutput_AddField(fieldmlInfo,baseName//''.dependen...
... & region, mesh, DependentField,CMISSFieldVVariableType, err )
\end{lstlisting}

Interface coupling

Theory

Dirichlet Boundary Condition

Introduction to Dirichlet boundary condition

A Dirichlet boundary condition is a type of essential boundary condition in which the value of a variable is prescribed at particular computational nodes, e.g. a pressure or a temperature. In contrast, a natural boundary condition prescribes, for example, a flux in a diffusion problem or a stress in linear elasticity. A Dirichlet boundary condition is enforced as a fixed specification of values at a set of boundary nodes, and is implemented by modifying the set of equations as opposed to the right hand side of the equation. In a finite element framework, enforcing the Dirichlet boundary condition replaces rows of the equation matrix that were formed with the test function $ w$, to give a new set of equations.

A Dirichlet boundary condition imposed on a particular nodal value means that the row of the equation matrix corresponding to that node becomes independent of the other nodes, and hence can be eliminated and the matrix simplified in the process. For example, suppose the weak form of a problem gives the system,

$\displaystyle \left( \begin{array}{cccc}
M_{11} & M_{12} & \hdots & M_{1N} \\
M_{21} & M_{22} & \hdots & M_{2N} \\
\vdots & \vdots & \ddots & \vdots \\
M_{N1} & M_{N2} & \hdots & M_{NN} \end{array} \right)
\left( \begin{array}{c}
\phi_{1} \\
\phi_{2} \\
\vdots \\
\phi_{N} \end{array} \right)
= \left( \begin{array}{c}
b_{1} \\
b_{2} \\
\vdots \\
b_{N} \end{array} \right) $

Now, suppose a Dirichlet boundary condition $ \phi_{1}=C$ is to be imposed on the first node only (as, for example, in the Laplace problem, where Dirichlet boundary conditions are imposed on the first and last nodes only). The first row of the matrix is replaced by $ \left( \begin{array}{cccc} 1 & 0 & \hdots & 0 \end{array} \right)$ and the first entry of the right hand side by $ C$, which modifies the equation matrix as,

$\displaystyle \left( \begin{array}{cccc}
1 & 0 & \hdots & 0 \\
M_{21} & M_{22} & \hdots & M_{2N} \\
\vdots & \vdots & \ddots & \vdots \\
M_{N1} & M_{N2} & \hdots & M_{NN} \end{array} \right)
\left( \begin{array}{c}
\phi_{1} \\
\phi_{2} \\
\vdots \\
\phi_{N} \end{array} \right)
= \left( \begin{array}{c}
C \\
b_{2} \\
\vdots \\
b_{N} \end{array} \right) $

We can perform another elimination and rewrite the equation as,

$\displaystyle \left( \begin{array}{cccc}
1 & 0 & \hdots & 0 \\
0 & M_{22} & \hdots & M_{2N} \\
\vdots & \vdots & \ddots & \vdots \\
0 & M_{N2} & \hdots & M_{NN} \end{array} \right)
\left( \begin{array}{c}
\phi_{1} \\
\phi_{2} \\
\vdots \\
\phi_{N} \end{array} \right)
= \left( \begin{array}{c}
C \\
b_{2} - C M_{21} \\
b_{3} - C M_{31} \\
\vdots \\
b_{N} - C M_{N1} \end{array} \right) $

Now we can store a reduced matrix as,

$\displaystyle \left( \begin{array}{cccc}
M_{22} & M_{23} & \hdots & M_{2N} \\
M_{32} & M_{33} & \hdots & M_{3N} \\
\vdots & \vdots & \ddots & \vdots \\
M_{N2} & M_{N3} & \hdots & M_{NN} \end{array} \right)
$

and the right hand side is now,

$\displaystyle \left( \begin{array}{c}
C \\
b_{2} - C M_{21} \\
b_{3} - C M_{31} \\
\vdots \\
b_{N} - C M_{N1} \end{array} \right)
$

Therefore, to calculate the right hand side of the equation numerically, it is necessary to store the degrees of freedom associated with the nodes on which a Dirichlet condition is enforced. This step is implemented in OpenCMISS in the boundary conditions routines and is explained in detail in the following section.
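The elimination derived above can be sketched in a few lines of Python (illustrative only; the matrix values are arbitrary and this is not the OpenCMISS implementation):

```python
import numpy as np

def apply_dirichlet(M, b, node, value):
    """Eliminate the Dirichlet condition phi[node] = value from M phi = b.

    Returns the reduced matrix (the constrained row and column removed)
    and the modified right hand side b_i - value * M[i, node].
    """
    keep = [i for i in range(M.shape[0]) if i != node]
    b_red = b[keep] - value * M[keep, node]
    M_red = M[np.ix_(keep, keep)]
    return M_red, b_red

# Check against a direct solve of the full system with the constrained
# row replaced by (1 0 ... 0) and the first RHS entry by the value.
M = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
C = 5.0
M_red, b_red = apply_dirichlet(M, b, 0, C)
phi_red = np.linalg.solve(M_red, b_red)

M_full = M.copy()
M_full[0, :] = [1.0, 0.0, 0.0]
b_full = b.copy()
b_full[0] = C
phi_full = np.linalg.solve(M_full, b_full)
assert np.allclose(phi_full[1:], phi_red)
```

The reduced solve recovers exactly the remaining unknowns of the full (row-replaced) system, which is the point of the elimination.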

Dirichlet boundary condition in OpenCMISS

In this section we give a short description of some of the routines and OpenCMISS coding details required to modify a part of the code related to boundary conditions.

In an example file, for example Laplace, the boundary condition is implemented in the following steps,

In OpenCMISS the sparse distributed matrix is stored in a format called Compressed Row Storage (CRS) format (more details given in the next section). There are several other possibilities such as a column based approach: a Compressed Column Storage (CCS) format or a Block Storage type which have not been implemented yet.

In our current structure the equation matrix is stored in the row based CRS format. To implement a Dirichlet boundary condition, however, information about the degrees of freedom stored in the columns of the matrix is required. Therefore it is useful to also store the matrix in a column based format, in order to avoid redundant looping over all rows. Currently a linked list based approach has been proposed, which stores the matrix in CRS format as well as in a linked list format when a Dirichlet boundary condition is imposed on the problem. In order to reduce the memory cost of storing the equation matrix in two different formats, we store only the degrees of freedom of the matrix that correspond to the nodes on the boundary. The boundary nodes are calculated using a parameter mapping function of the local columns, obtained using the global to local mapping of columns of the equation matrix. In this routine the column-row information of the matrix is obtained from a linked list, and the degrees of freedom are calculated and stored to be used in the calculation of the right hand side of the equation.

More on matrix storage

Compressed row storage (CRS) is a format which stores the nonzero elements of a matrix sequentially, row by row. The storage scheme is outlined by the following example. For a matrix of the form,

$\displaystyle \left( \begin{array}{cccccc}
7 & 0 &-3 & 0 &-1 & 0 \\
2 & 8 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
-3 & 0 & 0 & 5 & 0 & 0 \\
0 &-1 & 0 & 0 & 4 & 0 \\
0 & 0 & 0 &-2 & 0 & 6 \end{array} \right)$

The compressed row storage of this matrix consists of a row indices array pointing to the first stored element of each row $ i$ of $ A$, a one dimensional array of the column indices, and the array of nonzero values,

$\displaystyle \left( \begin{array}{cccccccccccccc}
row \hspace{0.8mm} indices & \rightarrow & 1 & 4 & 6 & 7 & 9 & 11 & 13 & & & & & \\
column \hspace{0.8mm} indices & \rightarrow & 1 & 3 & 5 & 1 & 2 & 3 & 1 & 4 & 2 & 5 & 4 & 6 \\
values & \rightarrow & 7 &-3 &-1 & 2 & 8 & 1 & -3 & 5 & -1 & 4 & -2 & 6 \end{array} \right)$

For a row $ i$, the number of nonzero elements is easily obtained as $ row \hspace{0.8mm} indices(i+1)$ - $ row \hspace{0.8mm} indices(i)$. For more examples of the implementation, refer to the matrix vector routines in the OpenCMISS repository.
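The following Python sketch (illustrative only, and using 0-based indexing rather than the 1-based Fortran convention above) builds the three CRS arrays from a dense matrix and performs a matrix-vector product using only the stored nonzeros:

```python
import numpy as np

def dense_to_crs(A):
    """Build CRS arrays (row_ptr, col_idx, values) from a dense matrix.

    row_ptr[i] points at the first stored element of row i, so the
    number of nonzeros in row i is row_ptr[i+1] - row_ptr[i].
    """
    values, col_idx, row_ptr = [], [], [0]
    for row in A:
        for j, a in enumerate(row):
            if a != 0:
                values.append(float(a))
                col_idx.append(j)
        row_ptr.append(len(values))
    return row_ptr, col_idx, values

def crs_matvec(row_ptr, col_idx, values, x):
    """Sparse matrix-vector product y = A x using only stored nonzeros."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y
```

Note that the matvec never touches a zero entry, which is the whole point of the format: storage and work scale with the number of nonzeros rather than with $ N^2$.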

Neumann Boundary Condition

Robin Boundary Condition


Developers' Document


Introduction

This chapter is intended for new and existing developers of OpenCMISS. It contains tips from the developers who previously encountered the learning curve, and are now trying to reduce it for those who are new to OpenCMISS. New developers are encouraged to use this chapter written in an informal narrative style as an independent study guide to get up to speed with the codebase. The sections are ordered in an increasing level of difficulty, which introduce the basic concepts first and then progress through to more advanced features of the code. Care was taken not to introduce too much information at once - as such, it may at times appear to lack rigor, but after reading this chapter developers will be empowered to answer their own questions. In addition, existing developers of OpenCMISS will find that this chapter may serve as a reference to assist day-to-day development work, and to keep up-to-date with extensions that are made in the core library functionalities.

The Anatomy of an Example File

Let's assume for the sake of discussion that this is your first encounter with OpenCMISS code. If not, simply skip to the next section. As you may already know, OpenCMISS source code is split up into two major parts, one which provides the core library functionality, like evaluating a basis function or solving a pde with finite element method, and one which solves a particular problem by using these library routines. The source files (fortran .f90 and some .c files) for the library routines are entirely contained within /cm/src, whereas the example files can be found under /cm/examples.

As a new developer, a good place to attack the 300,000+ lines of source code is an example file, because it gives a good bird's eye view. For historical reasons (it was the first to be set up and the most ``proper'') the Laplace example is often used as a showcase. So let's go ahead and fire up the following file in your favourite editor (on Linux, Kate, Emacs or gedit work well; on Windows, maybe Emacs, Kate or Notepad++ are okay):
/cm/examples/ClassicalField/Laplace/Laplace/src/LaplaceExample.f90.

Later in this chapter we will address the finer details of this example file, however, for now we'll look at the general outline and flow. Scroll down and have a brief look - after all the variable declarations, there should be a call to
\begin{lstlisting}
CALL CMISSInitialise(...)
\end{lstlisting}
All OpenCMISS routines calls are made after this line, since this routine tells the library that we are ready to start using it. Similarly, near the end of the file there is a call to
\begin{lstlisting}
CALL CMISSFinalise(...)
\end{lstlisting}
which initiates the finalising of objects which had been created by OpenCMISS throughout the execution.

Of course, what comes in between these calls does all the interesting stuff. It's about 200 lines of solid blocks of code, but there is an easier way to read it - most tasks in an example file are arranged in blocks, which look like
\begin{lstlisting}
CALL CMISS****TypeInitialise(...)
CALL CMISS****CreateStart(...)
...
CALL CMISS****CreateFinish(...)
\end{lstlisting}
where **** denotes the type of the object such as coordinate system, region, basis etc. The initialise call usually creates a space in memory for the object and perhaps assign the default values for some of its members. If you forget to issue this call, the executable may or may not throw up an error that says "**** is already associated." depending on whether the developer has written code to check it. Using an object without initialising it may lead to some subtle memory problems. You have been warned. Between CreateStart and CreateFinish, we basically call routines that assign properties to shape and mould this specific instance of the object. It's only when the CreateFinish call is issued, that OpenCMISS oils up its gears and actually gets to work. Thus it is possible that you may have entered conflicting arguments, but the error may not occur until the CreateFinish is called. Because a lot happens in CreateFinish, it's usually the place that you might want to put a stop flag in your debugger (which will be discussed later).

So, armed with the above knowledge, most of the example file can be broken down into these blocks:
\begin{lstlisting}
CoordinateSystem
Region
Basis
Mesh/GeneratedMesh
Decomposition
Field
EquationsSet
BoundaryConditions
Problem
Solver
\end{lstlisting}
From these, you probably won't touch CoordinateSystem (except for changing dimensions), Region and Decomposition because, well, there's nothing much to change unless you're doing something quite advanced. This leaves for an average Joe developer/user the following bits to tinker with: Basis, Mesh, Field, EquationsSet, BoundaryConditions, Problem and Solver.

Basis objects are required for all finite element problems, which currently is the only solution method type implemented. A mesh object holds the geometry and mapping information. Any kind of numerical data that you might want to hold in a vector or matrix, such as the dependent (unknown) variables, material parameters or the nodal coordinates themselves are neatly packaged into the Field type object, which has several variants. These objects are described in further detail in section 7.11.4, but for now we will crack on with this introduction. EquationsSets, Solver and BoundaryConditions objects are so big and important that they have their own designated sections elsewhere in this document (7.11.5, 7.4 and 7.7). This might leave you wondering what the role of the Problem object is - this one manages the control loops, which is a general way to handle linear/nonlinear/steady/time-dependent problems. The Problem objects also holds meta information regarding what equations are being solved, including coupled physics problems that have been introduced recently.

If you're a keen developer and you have peeked ahead at any of the library source files, you might have noticed that they look quite a bit different from the example source file. Every OpenCMISS routine called from the example file begins with CMISS... This is because the example file may only use the library through the OpenCMISS bindings, or application programming interface, rather than directly calling the routines from the core library itself. This layer of separation is a pretty standard thing, and it protects the user from working directly with the object pointers, which may be dangerous. All binding routines are implemented in the file /src/opencmiss.f90, which is the only module we include via the call
\begin{lstlisting}
USE OPENCMISS
\end{lstlisting}
at the top of the example file. When you start developing user-callable library routines, you will have to also write (and maintain!) the bindings if they're to be used in the example file.

Once you start to modify the code yourself, there will invariably be errors. That's okay. What you should know though is how OpenCMISS handles errors. When there is an error in a library routine, in most cases OpenCMISS won't exit straight away with an all-too-familiar message like "segmentation fault" but takes a more graceful approach. This is great for users of the library, but as a developer it can take a little while to pinpoint exactly at which line the error has occurred. The default error handling behaviour is to output the error and continue execution, which, for a scientific code like this, usually leads to more errors. This behaviour can be changed via
\begin{lstlisting}
CALL CMISSErrorHandlingModeSet(CMISSTrapError,Err)
\end{lstlisting}
which forces the program to stop after the error message has been printed.

While we're on the subject of bug-hunting, let's address the issue of viewing variables. There are a couple of different approaches one can use to check on the value of variables mid-execution. The first is to use a debugger like TotalView, which isn't free but is worth every penny. The other way is to go old-school and put WRITE(*,*) all over the source file (Don't forget to remove this before committing). This approach involves you having to re-compile the entire library which is quite time-consuming. Also, because most data you will be interested in are encapsulated under extensive object structures, it may require some time to figure out exactly what to print out.

Having TotalView installed also helps with looking at routines. At this point we will break through the surface of the example file and go under. Let's take the error handling mode setting routine from above. To follow it down, open up /src/opencmiss.f90 and Ctrl-F for the routine. Between the ENTERS and EXITS routines (which will be described later) you will see that the binding routine simply makes a call to the actual routine which does the work:
\begin{lstlisting}
CALL CMISS_ERROR_HANDLING_MODE_SET(...)
\end{lstlisting}
This subroutine is not defined in opencmiss.f90, as it only contains the bindings. The convention in OpenCMISS code is to prefix every routine name with the module name, which, in this case is CMISS. You can now open up /src/cmiss.f90 and search again for the routine. If you hate having to connect the dots every step of the way in this fashion, you can fire up the example in TotalView and simply double click on the routine names to dive into them.

You should now have a good background to start modifying or setting up your own example files. A large part of doing this involves copy & pasting an existing example and modifying them to fit your own problem (be sure to use the svn cp command to avoid nagging emails). In this case, you might end up spending a lot of time figuring out what arguments a certain function should be called with. For example, you might want to change the type of matrix storage from Full to Sparse. The binding routine that sets this is called
\begin{lstlisting}
CMISSEquationsSparsityTypeSet(Equations,SparsityType,Err)
\end{lstlisting}
The second argument, which is what you want to change, is meant to be selected from a set of named constants. How do I know this? It's obvious after a while, but you can Ctrl-F for this routine in opencmiss.f90. There, you will see in the comment next to SparsityType it will say see OPENCMISS_EquationsSparsityTypes. These constants are also defined within opencmiss.f90, so Ctrl-F it again and it will take you near the top of the file where
\begin{lstlisting}
CMISSEquationsSparseMatrices
CMISSEquationsFullMatrices
\end{lstlisting}
are defined. The whole thing can also be done by looking at the developer's page:
http://cmiss.bioeng.auckland.ac.nz/OpenCMISS/doc/doxygen/html/ if you have a lot of patience that is.

Lastly, if you don't know already, learn how to search within full directory of files for a keyword in your editor. Because Fortran doesn't have a great IDE (integrated development environment), this is unfortunately the fastest way to find information you're after.


Data Model in OpenCMISS

As outlined in the previous section, the example file manages two major tasks. The first is all about the data, that is, to do with defining and assigning various bits of information like mesh coordinates, material parameters and so on. The second is all about doing, like setting up and calling the solver, for example.

By data model, I am simply referring to how these different bits of numbers are stored, passed around, and accessed. If I asked you right now to find, say, 'the y-coordinate of node 7', you'll probably notice that it's not very straightforward, since all numbers have been locked away inside the OpenCMISS data structures. Indeed this is a source of frustration and makes the learning curve steep. Why is this so complicated? The answer is simple - it's because you haven't read this section yet. Oh, and also having formal structures to support data handling makes the parallel coding much easier, even if it does cause some overhead in the code writing.

The key to understanding the OpenCMISS data model is to get to grips with the hierarchy of data objects, which we'll go through now. We'll do this pretty quickly now and come back later to look at the details. Put very simply,
TOP: FIELD
MIDDLE: VARIABLE, MESH
BOTTOM: BASIS

That's it. Easy eh? Now the details: The thing to remember is, OpenCMISS has its own shelves it packs away data onto, and at times it will seem inflexible. In some cases you might be right, but do you want to learn this or not?! Moreover, these shelves (objects) have names which you may already associate in your mind with something else - don't let it throw you off. Be strong. In the below list, pay attention to the indentation for the hierarchical order.

  1. In terms of data, FIELD is the top structure. Think of this as a continuous spatial distribution of numbers, to be discretised by the structures below in the hierarchy. Note there are different types of fields, like geometric or materials field. This list applies mainly to the dependent field, which is another name for the unknown variables of the problem.
  2. The field is continuously varying, but we characterise it by a discrete vector of numbers. This should hint to you that there is a MESH involved in this whole business - you're right. And you figured it out all on your own.

Did you notice how parameter_sets contain the data, while param_to_dof_map contains its mapping into arrays? The structure above separates data from bookkeeping.


Solver Object


PETSc and OpenCMISS


Overview of Finite Element Routines


Boundary conditions


Time Integrations


Parallel Execution


HECToR


Description of OpenCMISS Objects


Basis Object


Mesh Object


Domain Object


Field Object


EquationsSet Object


Decomposition Object

CMISS Conventions, Bits and Bobs

No References!



Footnotes

... continuous2.1
For $ C^{1}$ continuity the normals either side of an element boundary must be in the same direction and have the same magnitude. For $ G^{1}$ continuity the normals must only have the same direction.