Mixed method & conservation¶

$\newcommand{\dive}{\mathop{\mathrm{div}}} \newcommand{\grad}{\mathop{\mathrm{grad}}} \newcommand{\om}{\varOmega} \newcommand{\oh}{\varOmega_h}$

A single boundary value problem can admit different weak formulations, possibly set in different spaces. For the Poisson equation, the primal weak form can be approximated by the Lagrange finite element method, while the mixed weak form can be approximated by a pair of $H(\dive)$ and DG finite element spaces. This notebook is on the latter.

From a practical standpoint, one of the primary reasons for using the mixed method for the diffusion equation is conservation, a discrete structure-preservation property which we shall define below. We will use the Raviart-Thomas elements discussed earlier to implement and study a conservative mixed method in this notebook.

Two typical examples¶

Many boundary value problems arise by combining physical conservation laws with empirical constitutive laws. Here are two examples:

Example A: Porous media flow¶

For slow viscous flow through a permeable porous medium, Darcy's law is an empirical statement connecting the fluid flux $q$ to the gradient of the pressure $p$ by

$$ q = -K \grad p $$

where $K$ is the permeability tensor of the rock or the porous formation. Conservation of mass then implies

$$ \dive q = f $$

where $f$ represents injection sources, wells, or sinks.


Historically, Example A was studied extensively by the fossil fuel industry, but as you can imagine, here in Portland, it's more likely to be applied in studying the percolation of hot water through coffee grounds!

Example B: Diffusion of heat¶

Returning to the stationary heat dissipation example from a previous notebook, recall that Fourier's law of heat conduction is a constitutive law that states (in the absence of advective and radiative effects) that the flux of heat energy $q$ (sometimes also called heat flux density) through a material is related to the gradient of the temperature $T$ by $$ q = -\kappa \,\grad( T) $$

where $\kappa$ is the thermal conductivity of the material. Conservation of energy implies that $$ \dive q = f $$

where $f$ represents the heat source.

Note that in both examples, elimination of the flux results in an equation of the form $-\dive (\alpha \grad u) = f$ (with $u$ equal to either $T$ or $p$, and $\alpha$ equal to either $\kappa$ or $K$), which we have treated previously using Lagrange elements. In this notebook, rather than eliminating the flux $q$, our focus is on approximating $q$. Methods that preserve the conservation law of the flux $q$ discretely (made precise shortly) are referred to as conservative methods.

A simple model problem¶

Let us solve a specific example built from Example B. We want to simulate a steady heat flux $\newcommand{\R}{\mathbb{R}}$ $q: \om \to \R^2$ on a rectangular bar-shaped domain $\om$ of length 6 units and width 2 units, divided into equal left and right halves. More specifics of the problem appear after we draw the geometry.

In [1]:
import ngsolve as ng
from ngsolve.webgui import Draw
from netgen.occ import X, Y, MoveTo, Rectangle, Glue, OCCGeometry

rgtbar = Rectangle(3, 2).Face()
lftbar = MoveTo(-3, 0).Rectangle(3, 2).Face()
lftbar.faces.name = 'lftbar'; rgtbar.faces.name = 'rgtbar'
lftbar.edges.Min(X).name = 'lft'; lftbar.edges.Min(Y).name = 'bot'; lftbar.edges.Max(Y).name = 'top'; lftbar.edges.Max(X).name = 'mid'
rgtbar.edges.Max(X).name = 'rgt'; rgtbar.edges.Min(Y).name = 'bot'; rgtbar.edges.Max(Y).name = 'top'
bar = Glue([lftbar, rgtbar]); Draw(bar);
geo = OCCGeometry(bar, dim=2)
mesh = ng.Mesh(geo.GenerateMesh(maxh=1/4))

Here are the further specifications of the thermal problem:

  • Materials whose isotropic thermal conductivity ($\kappa$) values equal 1 and 10 occupy the left and right halves, respectively.
  • The left boundary edge of $\Omega$ is kept at temperature $T=1$ while the right edge is kept at $T=1/10$.
  • The top and bottom boundary edges are perfectly insulated, i.e., the outward flux $q \cdot n$ vanishes there.
  • The bar is heated by a source which is modeled by
$$ f(x, y) = 5 \exp(-10 [(x/5)^2 + (y-1)^2]). $$

Here is a plot of $f$:

In [2]:
from ngsolve import x, y, exp
f = 5 * exp( -10*( (x/5)**2 + (y-1)**2))
Draw(f, mesh);

Clearly, this heat source is centered in the domain. However, the material is not homogeneous, as seen from the plot of the thermal conductivity next:

In [3]:
kappa = mesh.MaterialCF({'lftbar': 1, 'rgtbar': 10})
Draw(kappa, mesh);

Finally, the remaining piece of given information, the boundary conditions, tells us that heat may enter the bar through the left end. Here is a visualization of the boundary data extension:

In [4]:
Tbdr = mesh.BoundaryCF({'lft': 1, 'rgt': 0.1})
Tboundary = ng.GridFunction(ng.H1(mesh)); Tboundary.Set(Tbdr, definedon=mesh.Boundaries('lft|rgt'));
Draw(Tboundary);

Given all the above bits of information, the problem is to compute the temperature $T$ and the heat flux $q$. We will solve it momentarily. But do you have any guesses right away on which side will get heated up more in steady state?

Conservation in the finite element context¶

Suppose we use a mesh $\oh$ of a $d$-dimensional domain $\om$ to compute a discrete approximation $q_h$ to the exact flux $q$. Let $P_p(\oh) = \prod_{K \in \oh} P_p(K)$ denote the DG space, so piecewise polynomial vector fields of degree at most $p$ are in $P_p(\oh)^d.$

Definition: We say that a vector field $q_h$ in $P_p(\oh)^d$ is a conservative flux approximation of an exact flux $q$ if

  • (1) on all interior mesh facets, ⟦ $q_h\cdot n$ ⟧ $ = 0$, and
  • (2) on every mesh element $K \in \oh$,
$$ \int_{\partial K} q_h \cdot n = \int_{\partial K} q\cdot n. $$

In condition (1), we have used the notation ⟦ $q_h\cdot n$ ⟧ for the jump of the normal component of the discrete flux $q_h$ across a facet. Note that by the divergence theorem the right hand side of condition (2) equals $\int_{\partial K} q\cdot n = \int_K \dive q = \int_K f$ with $f = \dive q$. Also note that both conditions of this definition refer only to the values of $q_h$ on the facets. (Indeed, there are methods that produce flux approximations only along the mesh facets, and the same definition can be used to decide whether such fluxes are conservative or not. The mixed method, however, will produce fluxes on the whole domain.)

To understand why conditions (1) and (2) are interesting properties for an approximation to have, consider one of the above-mentioned applications, say the diffusion of heat. Recall that in physics, conservation of energy is often written in integral form over a control volume $S$ as $$ \int_{\partial S} q \cdot n = \int_S f, $$ i.e., the flux of heat leaving the subdomain $S$ through its boundary (the left hand side above) must equal the amount of heat produced by sources $f$ within $S$ (the right hand side above). Because this conservation statement should hold for any control volume $S$, the integral form and the divergence theorem yield the differential equation $\dive q = f$, which is one of the conservation equations stated previously.

Now, consider how we may obtain a discrete version of the integral form of the conservation law. Instead of arbitrary control volumes, we now restrict ourselves to discrete control volumes, which are the unions of the elements used in the computation. Namely, consider any subset of mesh elements $O_h \subseteq \oh$ and put $S_h = \cup\{ K: K \in O_h\}$. If the discrete flux out of the subdomain $S_h$ satisfies $$ \int_{\partial S_h} q_h \cdot n = \int_{S_h} f, $$ then it makes sense to declare $q_h$ to be a conservative flux, since this is the discrete analog of the exact integral form of the conservation law, $ \int_{\partial S} q \cdot n = \int_S f. $ We can immediately verify that the above equation is a consequence of conditions (1) and (2) in our definition. Note how we need both (1) and (2) to accomplish the interelement cancellations of influx and outflux within $S_h$.
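To spell out that verification: summing over the elements of $O_h$, the contributions of facets interior to $S_h$ cancel in pairs by condition (1), and condition (2) then allows us to replace the discrete flux by the exact one, so that $$ \int_{\partial S_h} q_h \cdot n \;=\; \sum_{K \in O_h} \int_{\partial K} q_h \cdot n \;=\; \sum_{K \in O_h} \int_{\partial K} q \cdot n \;=\; \sum_{K \in O_h} \int_K \dive q \;=\; \int_{S_h} f. $$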

One of the modern themes in numerical solution of partial differential equations, called structure preservation, studies questions like this: how can we construct methods that (not only converge optimally, but also) yield solutions that preserve certain critical features or structures of the exact solution? In the context of our model problem, methods that produce a conservative flux $q_h$, as defined above, are structure-preserving, the preserved structure being a conservation law.

Another way of thinking about conservation is in terms of superconvergence of certain functionals. (The phenomenon where the error, or some functional of the error, goes to zero at an unexpectedly high rate is called superconvergence. We have seen a superconvergence example in another notebook.) Condition (2) in the definition of conservation, $ \int_{\partial K} q_h \cdot n = \int_{\partial K} q\cdot n, $ can equivalently be written using the functional $G_K (r) = \int_{\partial K} r \cdot n$ as $$ G_K(q) = G_K (q_h). $$ For good methods, we expect the error $q_h - q$ to go to zero as the mesh size $h \to 0$, so we expect $G_K(q) - G_K(q_h) \to 0$. But it is exceptional to get exactly zero for a functional of the error. For a conservative method, the exception does occur and $G_K(q) - G_K(q_h) = 0$.

The Raviart-Thomas mixed method¶

Consider the boundary value problem of finding $\newcommand{\R}{\mathbb{R}}$ $u: \om\to \R$ and $q: \om \to \R^d$ ($d\ge 2$), given $f: \om \to \R$ and (pointwise) invertible $\kappa : \om \to \R^{d \times d}$, satisfying

$$ \kappa^{-1} q + \grad u =0, \qquad \dive q = f, $$

with Dirichlet boundary conditions $u =0 $ on $\partial \om$. From the lectures, we know that the mixed weak formulation of this problem reads as follows.

Find $u \in L^2(\om)$ and $q \in H(\dive, \om)$ satisfying $$ \begin{aligned} \int_\om \kappa^{-1} q \cdot r - \int_\om u\, \dive r & = 0, && \text{ for all } r \in H(\dive, \om), \text{ and } \\ \int_\om v\, \dive q & = \int_\om f \,v, && \text{ for all } v \in L^2(\om). \end{aligned} $$

To obtain a numerical method, starting from the above weak form, we use the DG space $P_p(\oh)$ in place of $L^2(\om)$ and the Raviart-Thomas finite element space (introduced earlier) $$ R_{h, p} = \left\{ q\in H(\dive, \om) : \text{ for all } K \in \oh, \quad q|_K \in P_p(K)^2 + \begin{pmatrix} x \\ y \end{pmatrix} P_p(K) \; \right\} $$ in place of $H(\dive, \om).$

The Raviart-Thomas (RT) mixed method of order $p$ finds $u_h \in P_p(\oh)$ and $q_h \in R_{h, p}$ satisfying $$ \begin{aligned} \int_\om \kappa^{-1} q_h \cdot r_h - \int_\om u_h\, \dive r_h & = 0, && \text{ for all } r_h \in R_{h, p}, \text{ and } \\ \int_\om v_h\, \dive q_h & = \int_\om f \,v_h, && \text{ for all } v_h \in P_p(\oh). \end{aligned} $$

In the latter equation, if we select a test function $v_h$ that is the indicator function of one element $K \in \oh$, which is of course contained in $P_p(\oh)$ for any $p \ge 0$, then we obtain

$$ \int_K \dive q_h = \int_K f, $$

which is equivalent to condition (2) in the definition of a conservative flux. Of course, condition (1) is automatically satisfied by $q_h$, since every $q_h \in R_{h, p}$ has ⟦ $q_h\cdot n$ ⟧ $ = 0$. This proves that the Raviart-Thomas mixed method yields conservative fluxes.

In NGSolve, there are two ways to assemble such a mixed weak form, as we shall see in the next section.

Two ways to assemble¶

The first way: assemble a composite form¶

We can think of the system using a "big" bilinear form $C(\cdot, \cdot)$ in the product space $R_{h, p} \times P_p(\oh)$, i.e., for any $(q_h, u_h), (r_h, v_h) \in R_{h, p} \times P_p(\oh)$, set

$$ C((q_h, u_h), (r_h, v_h)) = \int_\Omega \kappa^{-1} q_h \cdot r_h - \int_\Omega u_h \,\text{div } r_h - \int_\Omega v_h \,\text{div } q_h. $$

Then the previous set of two equations for the Raviart-Thomas mixed method can be described as the single equation using the composite bilinear form:

$$ C((q_h, u_h), (r_h, v_h)) = -\int_\om f v_h, \qquad \text{ for all } v_h \in P_p(\oh), \, r_h \in R_{h,p}. $$

To compute with this form, we need the product space $R_{h, p} \times P_p(\oh)$. This space is represented by the object RW introduced below.

In [5]:
p = 4
R = ng.HDiv(mesh, order=p, RT=True)
W = ng.L2(mesh, order=p)
RW = R * W   # Alternatively:  RW = ng.FESpace([R, W])

Note that trial and test functions in RW have two components. Otherwise the procedure for declaring the bilinear form is exactly the same as what we saw previously.

In [6]:
from ngsolve import div, dx
q, u = RW.TrialFunction() 
r, v = RW.TestFunction()
C = ng.BilinearForm(RW)
C += ((1/kappa) * q*r - u*div(r) - div(q)*v) * dx
C.Assemble(); type(C.mat)
Out[6]:
ngsolve.la.SparseMatrixd

We now have a sparse assembled matrix representing the matrix of the big bilinear form $C(\cdot, \cdot)$ on the product finite element space.

The second way: assemble a block system¶

In the second approach, we make individual components of the bilinear form and put them together, i.e., we make the following "small" bilinear forms:

$$ a(q, r) = \int_\Omega \kappa^{-1} q \cdot r, \qquad b(q, v) = -\int_\Omega v \,\text{div } q $$

Then the equations of the RT mixed method can be rewritten using these forms as

$$ \begin{aligned} a(q_h, r_h) + b(r_h, u_h) & = 0 && \text{ for all } r_h \in R_{h, p}, \text{ and } \\ \,b( q_h, v_h) & = -\int_\om f \,v_h, && \text{ for all } v_h \in P_p(\oh). \end{aligned} $$

Note that the bilinear form $b$ uses different trial and test spaces. Implementing this requires the keyword arguments trialspace and testspace, which we have not used previously, but whose meaning is self-explanatory. There is no need to define a product space in this approach.

In [7]:
q, r = R.TnT()
u, v = W.TnT()

b = ng.BilinearForm(trialspace=R, testspace=W)
b += -div(q)*v * dx

a = ng.BilinearForm(R)
a += (1/kappa) * q*r * dx

b.Assemble(); a.Assemble();

When working with such component forms, you also need a facility to place them into appropriate places of a block partitioned matrix. Let $\mathtt A$ and $\mathtt B$ denote the stiffness matrices of the forms $a(\cdot, \cdot)$ and $b(\cdot, \cdot)$ made in the code block above, respectively. Then, what we want is the matrix $$ \mathbb B = \begin{bmatrix} \mathtt A & \mathtt B^T \\ \mathtt B & 0 \end{bmatrix} $$ where ${}^T$ denotes the transpose.

In [8]:
BB = ng.BlockMatrix([[a.mat, b.mat.T], 
                     [b.mat, None   ]])
type(BB)
Out[8]:
ngsolve.la.BlockMatrix

This BB object represents the above-mentioned block matrix $\mathbb B$. Note that the None element in the code above is how we represent the zero block sparsely.

Obviously $\mathbb B$ and $C$ represent the same linear operator. Block structures are just a convenience feature to work with components (without replicating the already allocated memory of the components). Block matrices expect block vectors when asked to perform matrix multiplication.
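For instance, here is a minimal sketch (not an original cell of this notebook; the names gq, gu, gq2, gu2 are just illustrative throwaway containers) of how a block matrix-vector product with BB can be formed:

gq, gu = ng.GridFunction(R), ng.GridFunction(W)    # throwaway containers for component vectors
gq.vec.SetRandom(); gu.vec.SetRandom()             # fill with arbitrary data
x = ng.BlockVector([gq.vec, gu.vec])
gq2, gu2 = ng.GridFunction(R), ng.GridFunction(W)  # containers to hold the result
y = ng.BlockVector([gq2.vec, gu2.vec])
y.data = BB * x                                    # block matrix times block vector
print(gq2.vec.Norm(), gu2.vec.Norm())              # norms of the two result components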

Be warned that it's not a good idea to directly perform a generic vector operation between a block vector object and a regular NGSolve vector. In some cases NGSolve will raise an exception and will not allow you to do so. Also, when you work with block vectors, it is a good idea to ensure that you have variable names bound to their individual component vectors in your Python workspace. Here is an exercise to get some practice working with the BlockMatrix structures and with matrix-vector products in NGSolve.

Exercise I: Write a python code to verify that the application of the block matrix $\mathbb{B}$ and the matrix $C$ of the composite bilinear form to the same vector gives the same result.

(Suggestion: Look up the components attribute of an NGSolve grid function on a product space.)
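For instance, here is a tiny sketch (with illustrative variable names; it is only a pointer, not a solution of the exercise) of what that attribute provides for a grid function on the product space RW defined above:

quh = ng.GridFunction(RW)        # grid function on the product space R x W
qpart, upart = quh.components    # component grid functions (views, no data copy)
print(RW.ndof, R.ndof + W.ndof)  # the product space collects the dofs of R and W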

Solving for the thermal flux¶

We now return to solve the model problem stated earlier. Restated as a boundary value problem, we need to find the temperature $T$ and the flux $q$ satisfying $$\newcommand{\lft}{\partial\om_{\text{left}}} \newcommand{\rgt}{\partial\om_{\text{right}}} \begin{aligned} \kappa^{-1} q + \grad T & = 0, && \text{ in } \om \\ \dive q & = 5\, e^{-10 [(x/5)^2 + (y-1)^2]}, && \text{ in } \om \\ q\cdot n & = 0 && \text{ on top and bottom boundaries} \\ T & = 1 && \text{ on the left boundary } \lft \\ T & = 1/10 && \text{ on the right boundary } \rgt. \end{aligned} $$

To solve this using the mixed method, we have to carefully incorporate the boundary conditions into the mixed formulation. Dirichlet and Neumann conditions are respectively imposed as natural and essential conditions in the mixed formulation, in exactly the opposite manner of the primal formulation. We use the subspace of the Raviart-Thomas space where the flux boundary conditions are incorporated essentially:

$$ \newcommand{\Rtb}{\mathring{R}_{h, p}^{\text{top, bot}}} \Rtb= \{ r \in R_{h, p}: r\cdot n = 0 \text{ on the top and bottom boundaries}\}. $$

Also incorporating the Dirichlet boundary conditions on $T$ naturally, we obtain the following equations of the method: find $T_h \in P_p(\oh)$ and $q_h \in \Rtb$ satisfying

$$ \begin{aligned} \int_\om \kappa^{-1} q_h \cdot r_h - \int_\om T_h\, \dive r_h & = -\int_{\lft} 1\, r_h \cdot n -\int_{\rgt} \frac{1}{10}\, r_h \cdot n, && \text{ for all } r_h \in \Rtb \text{ and } \\ \int_\om v_h\, \dive q_h & = \int_\om f \,v_h, && \text{ for all } v_h \in P_p(\oh). \end{aligned} $$

Block linear system¶

Adopting the second approach to assembly mentioned above, we produce a block linear system, whose right and left hand sides are returned by the next function. Also note that to make the right hand side of the first equation, we need to integrate the trace of $r_h \cdot n$ along boundary edges, which is accomplished below by r.Trace() * n and ds provided by NGSolve.

In [9]:
from ngsolve import dx, ds

def MakeMixed(mesh, f, Tbdr, kappa, p=4):    
    """Return the block linear system (rhs and lhs) of the
    RT mixed method for solving Example T"""
    
    R = ng.HDiv(mesh, order=p, RT=True, dirichlet='top|bot')
    W = ng.L2(mesh, order=p)
    q, r = R.TnT()
    u, v = W.TnT()
    dl = ds(definedon=mesh.Boundaries('lft|rgt'))
    n = ng.specialcf.normal(mesh.dim)

    b = ng.BilinearForm(trialspace=R, testspace=W)
    b += -div(q) * v * dx
    a = ng.BilinearForm(R)
    a += (1/kappa) * q * r * dx    
    fR = ng.LinearForm(R)
    fR += -Tbdr * (r.Trace() * n) * dl
    fW = ng.LinearForm(W)
    fW += -f * v * dx
    with ng.TaskManager():
        b.Assemble() 
        a.Assemble()
        fR.Assemble()
        fW.Assemble() 
    BB = ng.BlockMatrix([[a.mat, b.mat.T], [b.mat, None]])
    FF = ng.BlockVector([fR.vec, fW.vec])   
    return BB, FF, R, W
In [10]:
BB, FF, R, W = MakeMixed(mesh, f, Tbdr, kappa)

Next, we need to solve for a vector x satisfying the block system BB * x = FF. In previous studies, we assembled sparse matrix objects which we could hand over to a sparse solver (using the Inverse(...) method). However, the block matrix BB does not have a sparse Inverse method. Hence let us take a moment to consider an iterative solver.

Iterative solver¶

Iterative solvers built using Krylov spaces do not generally require an assembled matrix, only an object that can perform the associated matrix-vector multiplication. The block matrix BB can perform the multiplication BB * x. Although Krylov space iterative solvers can be easily implemented in a few lines of Python code, let us make use of a ready-made Krylov solver implementation that NGSolve provides. MINRES is an iterative method suitable for invertible symmetric (not necessarily positive definite) linear systems, like the block system we have produced. Its implementation is available in ng.solvers.MinRes.

The rate of convergence of iterative solvers like MINRES depends on the condition number of the system. Hence they are usually effective only when we also provide a good preconditioner. Recall that a preconditioner for use in solving $\mathbb B x = b$ is a linear operator $\mathbb P$ (acting on vectors of the same size as $\mathbb B$) with the property that $\mathbb P \mathbb B$ is better conditioned than $\mathbb B$. Once a preconditioner is given, the iterative solver, instead of solving $\mathbb B x = b$, solves the equivalent but better conditioned system $\mathbb P\mathbb B x = \mathbb P b$ behind the scenes.

Of course, one could set $\mathbb P$ to the identity operator on the free degrees of freedom, just to witness the performance of the solver without preconditioning. (No preconditioning is the same as setting the preconditioner to the identity.)

In [11]:
identityR = ng.Projector(mask=R.FreeDofs(), range=True)  # projection onto the dofs where the mask is True
identityW = ng.IdentityMatrix(W.ndof)
PI = ng.BlockMatrix([[identityR, None],
                     [None, identityW]])
In [12]:
qT = ng.BlockVector([ng.GridFunction(R).vec, 
                     ng.GridFunction(W).vec])
ng.solvers.MinRes(mat=BB, pre=PI, rhs=FF, sol=qT, maxsteps=30);
LinearSolver iteration 1, residual = 2.9368570094782798     
LinearSolver iteration 2, residual = 2.857585625213512     
LinearSolver iteration 3, residual = 2.3018850062439697     
LinearSolver iteration 4, residual = 2.2128696020378253     
LinearSolver iteration 5, residual = 1.7403833808109723     
LinearSolver iteration 6, residual = 1.6087811986532714     
LinearSolver iteration 7, residual = 1.4303594395287813     
LinearSolver iteration 8, residual = 1.347741240608702     
LinearSolver iteration 9, residual = 1.2055241690436589     
LinearSolver iteration 10, residual = 1.1183951751523777     
LinearSolver iteration 11, residual = 1.0184285543601268     
LinearSolver iteration 12, residual = 0.9819576063507084     
LinearSolver iteration 13, residual = 0.9051149826840907     
LinearSolver iteration 14, residual = 0.882184320830916     
LinearSolver iteration 15, residual = 0.8111504158992343     
LinearSolver iteration 16, residual = 0.7999471747546255     
LinearSolver iteration 17, residual = 0.745821207246296     
LinearSolver iteration 18, residual = 0.7390117518834541     
LinearSolver iteration 19, residual = 0.6967730456724389     
LinearSolver iteration 20, residual = 0.688286273642314     
LinearSolver iteration 21, residual = 0.6616532641561249     
LinearSolver iteration 22, residual = 0.6562779119953984     
LinearSolver iteration 23, residual = 0.6327747548348324     
LinearSolver iteration 24, residual = 0.6273294105373157     
LinearSolver iteration 25, residual = 0.6139627556534044     
LinearSolver iteration 26, residual = 0.6081649083049877     
LinearSolver iteration 27, residual = 0.5958073688483737     
LinearSolver iteration 28, residual = 0.5896193881093046     
LinearSolver iteration 29, residual = 0.5797103910195422     
LinearSolver iteration 30, residual = 0.5730669871621902     
WARNING: LinearSolver did not converge to TOL

As you can see, the convergence is abysmally slow. Do try this: increase maxsteps and run again to see how many iterations you might need before getting anywhere close to convergence.
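For instance, a rerun along the following lines (a sketch, not an original cell of this notebook; the value of maxsteps is arbitrary, and printrates=False merely suppresses the long per-iteration printout) lets you inspect the final residual yourself, assuming the block vector expressions shown evaluate through NGSolve's generic matrix-vector machinery:

gq2, gu2 = ng.GridFunction(R), ng.GridFunction(W)
qT2 = ng.BlockVector([gq2.vec, gu2.vec])
ng.solvers.MinRes(mat=BB, pre=PI, rhs=FF, sol=qT2, maxsteps=2000, printrates=False)
rq, ru = ng.GridFunction(R), ng.GridFunction(W)    # containers for the residual
res = ng.BlockVector([rq.vec, ru.vec])
res.data = FF - BB * qT2                           # residual of the block system
print(rq.vec.Norm(), ru.vec.Norm())                # how far from convergence we still are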

We need a better preconditioner. For our block matrix $$ \mathbb B = \begin{bmatrix} \mathtt A & \mathtt B^T \\ \mathtt B & 0 \end{bmatrix}, $$ one technique is to construct a preconditioner in the following form, using the same block partitioning, $$ \mathbb P = \begin{bmatrix} \mathtt M_R^{-1} & 0 \\ 0 & \mathtt M_W^{-1} \end{bmatrix}, $$ where $\mathtt M_R$ and $\mathtt M_W$ are the stiffness matrices of the bilinear forms $ \int_\om \kappa^{-1} q\cdot r + \int_\om (\dive q ) (\dive r),$ for $q, r \in \Rtb$ and $ \int_\om u \, v,$ for $u, v \in P_p(\oh),$ respectively. Since $\mathtt M_W$ is block diagonal with one block for each element (why?), it is very easy to invert. So the cost of construction of this preconditioner is essentially the cost of inversion of $\mathtt M_R$, a smaller matrix than the original $\mathbb B$.

In [13]:
def MakePrec(R, W):
    q, r = R.TnT()
    mR = ng.BilinearForm(((1/kappa)*q*r + div(q)*div(r))*dx)
    mR.Assemble()
    P = ng.BlockMatrix([[mR.mat.Inverse(R.FreeDofs()), None],
                        [None,          W.Mass(1).Inverse()]])
    return P
In [14]:
PP = MakePrec(R, W)
In [15]:
qh = ng.GridFunction(R)
Th = ng.GridFunction(W)
qT = ng.BlockVector([qh.vec, Th.vec])
ng.solvers.MinRes(mat=BB, pre=PP, rhs=FF, sol=qT, maxsteps=15);
LinearSolver iteration 1, residual = 4.655166594591876     
LinearSolver iteration 2, residual = 4.650238205329225     
LinearSolver iteration 3, residual = 1.6631181605024472     
LinearSolver iteration 4, residual = 0.30607352339348776     
LinearSolver iteration 5, residual = 0.039710972039450426     
LinearSolver iteration 6, residual = 0.002029206001448312     
LinearSolver iteration 7, residual = 0.0001244983185180265     
LinearSolver iteration 8, residual = 3.3971710477680996e-06     
LinearSolver iteration 9, residual = 7.718495056508469e-08     

Why this preconditioner works is an interesting question that we will postpone for another time. For now, let's settle for visualizing the computed solution.

Temperature and flux¶

MinRes writes the solution into the same memory locations as the input qT. It is in the form of a block vector, whose components occupy the same memory as qh and Th. So to plot the temperature, we can directly use the variable name Th, which is bound to the memory block of the $T$-component.

In [16]:
Draw(Th, deformation=True, settings={"Colormap":{"autoscale": True, "ncolors": 20}, "camera":{"transformations" :[{"type": "rotateX", "angle": -50}, {"type": "rotateY", "angle": -20}, {"type": "rotateZ", "angle": -10}]}});

This plot of the calculated temperature $T$ reveals lower temperature variations on the right than on the left. You also see that there is a kink at the middle interface (indicating a discontinuity in $\grad T$) to accommodate a flatter $T$ profile on the right.

Next, we plot the heat flux vector field:

In [17]:
Draw(qh, vectors={'grid_size': 30}, settings={"Colormap":{"autoscale": True, "ncolors": 20}, "Light": {"ambient":0.1, "diffuse":0.7, "shininess": 10,"specularity": 0.9}});    

It is clear from this plot that the heat flow in the right subdomain is greater than in the left (the colors represent the magnitude of the heat flux). In fact, from the direction of the fluxes, we see that the left boundary condition is helping to dissipate some of the heat generated by the source. Also observe the apparent continuity of the normal component of the heat flux despite the jump in the material properties. This is exactly as it should be for a conservative method, as otherwise nonphysical energy creation or energy loss would be predicted across the interface.
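As a quick numerical check of conservation (a sketch, not an original cell of this notebook), we can use the fact that $q_h$ lies in $H(\dive, \om)$: the total outflux through the boundary of each material subdomain equals the integral of $\dive q_h$ over it, which in turn should match the integrated heat source up to quadrature and solver tolerances.

for matl in ('lftbar', 'rgtbar'):
    S = mesh.Materials(matl)
    outflux = ng.Integrate(div(qh), mesh, definedon=S)   # = boundary outflux of q_h from S
    source = ng.Integrate(f, mesh, definedon=S)          # total heat produced within S
    print(matl, outflux - source)                        # expect values close to zero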

Let us also note that even though the blob of heat source was centered in the domain, it created an off-center temperature peak to the left. Is this shift of the peak due to the difference in boundary conditions on the left and right, or due to the difference in the left and right thermal conductivity? The utility of having a simulation tool is that we can now answer questions like this easily, even when the answers are not intuitive. Go ahead and experiment, changing the boundary conditions and the thermal conductivities set in the code cells above as you please. You will then be led to the conclusion that the shift of the peak is primarily due to the fact that the right half of the domain, with its higher thermal conductivity, transmits heat more efficiently, flattening out any temperature peaks there. The plot of the computed thermal fluxes further confirms this by showing high fluxes in the right half.

Exercise II: Solve the same model problem using Lagrange finite elements applied to the primal weak form. Using your computed temperature $\tilde T_h$ (in the Lagrange space $ V_{h, p}$) from this approach, further compute an approximate flux $\tilde{q}_h = -\kappa \grad \tilde{T}_h$. Computationally show that this flux is not conservative. Specifically, letting $\Gamma$ denote the middle interface, compute $\int_\Gamma$ ⟦ $\tilde q_h\cdot n$ ⟧ to show that it is nonzero (thereby quantifying the spurious energy loss across $\Gamma$). For comparison, letting $q_h$ denote the flux from the Raviart-Thomas mixed method, compute $\int_\Gamma$ ⟦ $q_h\cdot n$ ⟧ and show that it is close to machine zero.

(Hint: To integrate multivalued functions on element interfaces, dx(element_boundary=True) is useful. Think of how to restrict this integration to just $\Gamma$.)

Exercise III: In the mixed method above, what happens if you replace the RT space $\Rtb$ with its subspace of Cartesian product of Lagrange spaces (with the right boundary conditions) of degree at most $p$? (Try with $p=1$.)
