Coefficient field inversion in an elliptic partial differential equation

We consider the estimation of a coefficient in an elliptic partial differential equation as a model problem. Depending on the interpretation of the unknowns and the type of measurements, this model problem arises, for instance, in inversion for groundwater flow or heat conductivity. It can also be interpreted as finding a membrane with a certain spatially varying stiffness. Let $\Omega \subset \mathbb{R}^n$, $n \in \{1,2,3\}$, be an open, bounded domain and consider the following problem:

$$\min_{m} J(m) := \frac{1}{2} \int_\Omega (u - u_d)^2 \, dx + \frac{\gamma}{2} \int_\Omega |\nabla m|^2 \, dx,$$

where $u$ is the solution of

$$\begin{aligned}
-\nabla \cdot (\exp(m)\, \nabla u) &= f && \text{in } \Omega,\\
u &= 0 && \text{on } \partial\Omega.
\end{aligned}$$

Here $m$ is the unknown coefficient field, $u_d$ denotes (possibly noisy) data, $f \in H^{-1}(\Omega)$ a given force, and $\gamma \ge 0$ the regularization parameter.

The variational (or weak) form of the state equation:

Find $u \in H_0^1(\Omega)$ such that

$$(\exp(m)\, \nabla u, \nabla v) = (f, v) \quad \forall v \in H_0^1(\Omega),$$

where $H_0^1(\Omega)$ is the space of functions vanishing on $\partial\Omega$ with square integrable derivatives. Here, $(\cdot\,,\cdot)$ denotes the $L^2$-inner product, i.e., for scalar functions $u, v$ defined on $\Omega$ we denote

$$(u, v) := \int_\Omega u(x)\, v(x) \, dx.$$
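
To make the notation concrete, here is a minimal standalone FEniCS sketch (not part of the inversion workflow below; the mesh, space, and integrand are arbitrary choices) that evaluates an $L^2$-inner product with assemble:

import dolfin as dl

# (u, v) = \int_\Omega u(x) v(x) dx on the unit square
mesh = dl.UnitSquareMesh(16, 16)
V = dl.FunctionSpace(mesh, "Lagrange", 1)
u = dl.interpolate(dl.Expression("x[0]", degree=1), V)
v = dl.interpolate(dl.Expression("x[1]", degree=1), V)
print(dl.assemble(u * v * dl.dx))  # \int_0^1\int_0^1 x0*x1 dx = 0.25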

Gradient evaluation:

The Lagrangian functional $\mathscr{L} : H^1(\Omega) \times H_0^1(\Omega) \times H_0^1(\Omega) \to \mathbb{R}$ is given by

$$\mathscr{L}(m, u, p) := \frac{1}{2}(u - u_d, u - u_d) + \frac{\gamma}{2}(\nabla m, \nabla m) + (\exp(m)\, \nabla u, \nabla p) - (f, p).$$

Then the gradient of the cost functional $J(m)$ with respect to the parameter $m$ is

$$(\mathcal{G}(m), \tilde m) := \gamma(\nabla m, \nabla \tilde m) + (\tilde m \, \exp(m) \nabla u, \nabla p) \quad \forall \tilde m \in H^1(\Omega),$$

where $u \in H_0^1(\Omega)$ is the solution of the forward problem,

$$(\exp(m) \nabla u, \nabla \tilde u) = (f, \tilde u) \quad \forall \tilde u \in H_0^1(\Omega),$$

and $p \in H_0^1(\Omega)$ is the solution of the adjoint problem,

$$(\exp(m) \nabla p, \nabla \tilde p) = -(u - u_d, \tilde p) \quad \forall \tilde p \in H_0^1(\Omega).$$
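
As a preview of how this weak-form expression maps to code, the following standalone sketch assembles the gradient as a vector; here u and p are zero placeholders, while in the notebook they come from the forward and adjoint solves:

import dolfin as dl
import ufl

mesh = dl.UnitSquareMesh(16, 16)
Vu = dl.FunctionSpace(mesh, "Lagrange", 2)
Vm = dl.FunctionSpace(mesh, "Lagrange", 1)
gamma = 1e-8

m = dl.interpolate(dl.Constant(0.0), Vm)
u = dl.Function(Vu)   # forward solution (placeholder)
p = dl.Function(Vu)   # adjoint solution (placeholder)
m_test = dl.TestFunction(Vm)

# (G(m), m~) = gamma (grad m, grad m~) + (m~ exp(m) grad u, grad p)
grad_form = (gamma * ufl.inner(ufl.grad(m), ufl.grad(m_test))
             + m_test * ufl.exp(m) * ufl.inner(ufl.grad(u), ufl.grad(p))) * ufl.dx
Gvec = dl.assemble(grad_form)  # one entry (G(m), phi_i) per basis function of Vm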

Hessian action:

To evaluate the action $\mathcal{H}(m)(\hat m)$ of the Hessian in a given direction $\hat m$, we consider variations of the meta-Lagrangian functional

$$\begin{aligned}
\mathscr{L}^H(m, u, p; \hat m, \hat u, \hat p) := &\ \gamma(\nabla m, \nabla \hat m) + (\hat m \, \exp(m) \nabla u, \nabla p) && \text{gradient}\\
&+ (\exp(m) \nabla u, \nabla \hat p) - (f, \hat p) && \text{forward equation}\\
&+ (\exp(m) \nabla \hat u, \nabla p) + (u - u_d, \hat u) && \text{adjoint equation}.
\end{aligned}$$

Then the action of the Hessian in a given direction $\hat m$ is

$$(\tilde m, \mathcal{H}(m)\hat m) := \gamma(\nabla \tilde m, \nabla \hat m) + (\tilde m \, \hat m \, \exp(m) \nabla u, \nabla p) + (\tilde m \, \exp(m) \nabla \hat u, \nabla p) + (\tilde m \, \exp(m) \nabla u, \nabla \hat p) \quad \forall \tilde m \in H^1(\Omega),$$

where $\hat u \in H_0^1(\Omega)$ and $\hat p \in H_0^1(\Omega)$ are the solutions of the incremental forward and incremental adjoint problems,

$$(\exp(m) \nabla \hat u, \nabla \tilde p) = -(\hat m \, \exp(m) \nabla u, \nabla \tilde p) \quad \forall \tilde p \in H_0^1(\Omega),$$

$$(\exp(m) \nabla \tilde u, \nabla \hat p) = -(\hat u, \tilde u) - (\hat m \, \exp(m) \nabla \tilde u, \nabla p) \quad \forall \tilde u \in H_0^1(\Omega).$$

Inexact Newton-CG:

Written in abstract form, the Newton method computes an update direction $\hat m_k$ by solving the linear system

$$(\tilde m, \mathcal{H}(m_k)\hat m_k) = -(\mathcal{G}(m_k), \tilde m) \quad \forall \tilde m \in H^1(\Omega),$$

where the evaluation of the gradient $\mathcal{G}(m_k)$ involves the solutions $u_k$ and $p_k$ of the forward and adjoint problems (respectively) for $m = m_k$. Similarly, the Hessian action $\mathcal{H}(m_k)\hat m_k$ requires, in addition, the solution of the incremental forward and adjoint problems.
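
As a toy illustration of this abstract update, here is a dense numpy sketch of the (exact) Newton iteration on a simple smooth stand-in cost; the notebook replaces the gradient, Hessian, and solve with PDE operators and a CG solver:

import numpy as np

# toy cost J(m) = 1/4 * sum(m_i^4) + 1/2 * ||m - b||^2 (a hypothetical stand-in)
b = np.array([1.0, -2.0, 0.5])
m = np.zeros_like(b)

for k in range(20):
    g = m**3 + (m - b)                   # gradient G(m_k)
    if np.linalg.norm(g) < 1e-12:
        break
    H = np.diag(3.0 * m**2) + np.eye(3)  # Hessian H(m_k), SPD for this cost
    m_hat = np.linalg.solve(H, -g)       # Newton system H(m_k) m_hat = -G(m_k)
    m += m_hat                           # full step; the notebook adds a line search
print(k, m)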

Discrete Newton system:

Let us denote the vectors corresponding to the discretization of the functions $u_k$, $m_k$, $p_k$ by $\mathbf{u}_k$, $\mathbf{m}_k$, $\mathbf{p}_k$, and of the functions $\hat u_k$, $\hat m_k$, $\hat p_k$ by $\hat{\mathbf{u}}_k$, $\hat{\mathbf{m}}_k$, $\hat{\mathbf{p}}_k$.

Then, the discretization of the above system is given by the following symmetric linear system:

$$\mathbf{H}_k \, \hat{\mathbf{m}}_k = -\mathbf{g}_k.$$

The gradient $\mathbf{g}_k$ is computed using the following three steps:

- Given $\mathbf{m}_k$ we solve the forward problem $\mathbf{A}_k \mathbf{u}_k = \mathbf{f}$, where $\mathbf{A}_k \mathbf{u}_k$ stems from the discretization of $(\exp(m_k) \nabla u_k, \nabla \tilde u)$, and $\mathbf{f}$ stands for the discretization of the right-hand side $(f, \tilde u)$.

- Given $\mathbf{m}_k$ and $\mathbf{u}_k$ we solve the adjoint problem $\mathbf{A}_k^T \mathbf{p}_k = -\mathbf{W}_{uu}(\mathbf{u}_k - \mathbf{u}_d)$, where $\mathbf{A}_k^T \mathbf{p}_k$ stems from the discretization of $(\exp(m_k) \nabla p_k, \nabla \tilde u)$, $\mathbf{W}_{uu}$ is the mass matrix corresponding to the $L^2$ inner product in the state space, and $\mathbf{u}_d$ stems from the data.

- Define the gradient $\mathbf{g}_k = \mathbf{R}\, \mathbf{m}_k + \mathbf{C}_k^T \mathbf{p}_k$, where $\mathbf{R}$ is the matrix stemming from the discretization of the regularization operator $\gamma(\nabla m, \nabla \tilde m)$, and $\mathbf{C}_k$ stems from the discretization of the term $(\tilde m \, \exp(m_k) \nabla u_k, \nabla p_k)$.

Similarly, the action $\mathbf{H}_k \, \hat{\mathbf{m}}_k$ of the Hessian in a direction $\hat{\mathbf{m}}_k$ (by using the CG algorithm we only need the action of $\mathbf{H}_k$ to solve the Newton step) is given by:

- Solve the incremental forward problem $\mathbf{A}_k \hat{\mathbf{u}}_k = -\mathbf{C}_k \hat{\mathbf{m}}_k$, where $\mathbf{C}_k$ stems from the discretization of $(\hat m_k \, \exp(m_k) \nabla u_k, \nabla \tilde p)$.

- Solve the incremental adjoint problem $\mathbf{A}_k^T \hat{\mathbf{p}}_k = -(\mathbf{W}_{uu} \hat{\mathbf{u}}_k + \mathbf{W}_{um} \hat{\mathbf{m}}_k)$, where $\mathbf{W}_{um}$ stems from the discretization of $(\hat m_k \, \exp(m_k) \nabla \tilde u, \nabla p_k)$.

- Define $\mathbf{H}_k \, \hat{\mathbf{m}}_k = \mathbf{R}\, \hat{\mathbf{m}}_k + \mathbf{W}_{mm} \hat{\mathbf{m}}_k + \mathbf{C}_k^T \hat{\mathbf{p}}_k + \mathbf{W}_{um}^T \hat{\mathbf{u}}_k$, where $\mathbf{W}_{mm}$ stems from the discretization of $(\tilde m \, \hat m_k \, \exp(m_k) \nabla u_k, \nabla p_k)$.
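
In dense matrix form, the Hessian action above reads as the following numpy sketch (the matrices here are random placeholders with the right shapes; in the notebook they are the assembled finite element operators and the solves are sparse):

import numpy as np

def hessian_action(m_hat, A, C, Wuu, Wum, Wmm, R):
    u_hat = np.linalg.solve(A, -C @ m_hat)                      # incremental forward
    p_hat = np.linalg.solve(A.T, -(Wuu @ u_hat + Wum @ m_hat))  # incremental adjoint
    return R @ m_hat + Wmm @ m_hat + C.T @ p_hat + Wum.T @ u_hat

rng = np.random.default_rng(0)
n_u, n_m = 5, 3
A   = np.eye(n_u) + 0.1 * rng.standard_normal((n_u, n_u))  # forward operator
C   = rng.standard_normal((n_u, n_m))
Wuu = np.eye(n_u)                                          # state mass matrix
Wum = rng.standard_normal((n_u, n_m))
Wmm = 0.1 * np.eye(n_m)
R   = 1e-8 * np.eye(n_m)                                   # regularization
print(hessian_action(rng.standard_normal(n_m), A, C, Wuu, Wum, Wmm, R))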


By the end of this notebook, you should be able to:

- solve the forward and adjoint Poisson equations with FEniCS
- compute the gradient and the Hessian action of the cost functional via the adjoint method
- solve the inverse problem with an inexact Newton-CG method with Armijo line search

Mathematical tools used:

- Finite element method
- Derivation of gradient and Hessian via the adjoint method
- Inexact Newton-CG method
- Armijo line search

List of software used:

- FEniCS, a parallel finite element library for the discretization of partial differential equations
- hIPPYlib, an extensible software framework for large-scale deterministic and Bayesian inverse problems
- Matplotlib, a Python package for plotting

Set up

Import dependencies

import dolfin as dl
import ufl

import sys
import os
sys.path.append( os.environ.get('HIPPYLIB_BASE_DIR', "../") )
from hippylib import *

import logging
import math

# suppress FFC/UFL compiler chatter
logging.getLogger('FFC').setLevel(logging.WARNING)
logging.getLogger('UFL').setLevel(logging.WARNING)
dl.set_log_active(False)

import matplotlib.pyplot as plt
%matplotlib inline


Model set up:

As in the introduction, the first thing we need to do is set up the numerical model. In this cell, we define the mesh, the finite element functions corresponding to the state, adjoint, and coefficient/gradient variables, the corresponding trial and test functions, and the input functions for the problem.

# create mesh and define function spaces
nx = 64
ny = 64
mesh = dl.UnitSquareMesh(nx, ny)
Vm = dl.FunctionSpace(mesh, 'Lagrange', 1)
Vu = dl.FunctionSpace(mesh, 'Lagrange', 2)

# The true and inverted parameter
mtrue_expression = dl.Expression(
    'std::log(2 + 7*(std::pow(std::pow(x[0] - 0.5,2) + std::pow(x[1] - 0.5,2),0.5) > 0.2))',
    degree=5)
mtrue = dl.interpolate(mtrue_expression, Vm)
m = dl.interpolate(dl.Expression("std::log(2.0)", degree=1), Vm)

# define function for state and adjoint
u = dl.Function(Vu)
p = dl.Function(Vu)

# define Trial and Test Functions
u_trial, p_trial, m_trial = dl.TrialFunction(Vu), dl.TrialFunction(Vu), dl.TrialFunction(Vm)
u_test, p_test, m_test = dl.TestFunction(Vu), dl.TestFunction(Vu), dl.TestFunction(Vm)

# initialize input functions
f = dl.Constant(1.0)
u0 = dl.Constant(0.0)

# plot
nb.plot(mesh,subplot_loc=121, mytitle="Mesh", show_axis='on')
nb.plot(mtrue,subplot_loc=122, mytitle="True parameter field")


# set up dirichlet boundary conditions
def boundary(x,on_boundary):
    return on_boundary

bc_state = dl.DirichletBC(Vu, u0, boundary)
bc_adj = dl.DirichletBC(Vu, dl.Constant(0.), boundary)

Set up synthetic observations:

# noise level
noise_level = 0.05

# weak form for setting up the synthetic observations
a_goal = ufl.inner(ufl.exp(mtrue) * ufl.grad(u_trial), ufl.grad(u_test)) * ufl.dx
L_goal = f * u_test * ufl.dx

# solve the forward/state problem to generate synthetic observations
goal_A, goal_b = dl.assemble_system(a_goal, L_goal, bc_state)

utrue = dl.Function(Vu)
dl.solve(goal_A, utrue.vector(), goal_b)

ud = dl.Function(Vu)
ud.assign(utrue)

# perturb state solution and create synthetic measurements ud
# ud = u + ||u||/SNR * random.normal
MAX = ud.vector().norm("linf")
noise = dl.Vector()
goal_A.init_vector(noise, 1)
parRandom.normal(noise_level * MAX, noise)
bc_adj.apply(noise)

ud.vector().axpy(1., noise)

# plot
nb.multi1_plot([utrue, ud], ["State solution with mtrue", "Synthetic observations"])


The cost function evaluation:

$$J(m) := \frac{1}{2}(\mathbf{u} - \mathbf{u}_d)^T \mathbf{W} (\mathbf{u} - \mathbf{u}_d) + \frac{1}{2} \mathbf{m}^T \mathbf{R} \, \mathbf{m}.$$

In the code below, $\mathbf{W}$ and $\mathbf{R}$ are symmetric positive definite matrices that stem from finite element discretization of the misfit and regularization components of the cost functional, respectively.

# regularization parameter
gamma = 1e-8

# weak forms for setting up the misfit and regularization components of the cost
W_equ   = ufl.inner(u_trial, u_test) * ufl.dx
R_equ   = gamma * ufl.inner(ufl.grad(m_trial), ufl.grad(m_test)) * ufl.dx

W = dl.assemble(W_equ)
R = dl.assemble(R_equ)

# define cost function
def cost(u, ud, m, W, R):
    diff = u.vector() - ud.vector()
    reg = 0.5 * m.vector().inner(R*m.vector() ) 
    misfit = 0.5 * diff.inner(W * diff)
    return [reg + misfit, misfit, reg]

Setting up the state equations, right hand side for the adjoint and the necessary matrices:

# weak form for setting up the state equation
a_state = ufl.inner(ufl.exp(m) * ufl.grad(u_trial), ufl.grad(u_test)) * ufl.dx
L_state = f * u_test * ufl.dx

# weak form for setting up the adjoint equation
a_adj = ufl.inner(ufl.exp(m) * ufl.grad(p_trial), ufl.grad(p_test)) * ufl.dx
L_adj = -ufl.inner(u - ud, p_test) * ufl.dx

# weak form for setting up matrices
Wum_equ = ufl.inner(ufl.exp(m) * m_trial * ufl.grad(p_test), ufl.grad(p)) * ufl.dx
C_equ   = ufl.inner(ufl.exp(m) * m_trial * ufl.grad(u), ufl.grad(u_test)) * ufl.dx
Wmm_equ = ufl.inner(ufl.exp(m) * m_trial * m_test *  ufl.grad(u),  ufl.grad(p)) * ufl.dx

M_equ   = ufl.inner(m_trial, m_test) * ufl.dx

# assemble matrix M
M = dl.assemble(M_equ)

Initial guess

We solve the state equation and compute the cost functional for the initial guess of the parameter m_ini.

# solve state equation
state_A, state_b = dl.assemble_system (a_state, L_state, bc_state)
dl.solve (state_A, u.vector(), state_b)

# evaluate cost
[cost_old, misfit_old, reg_old] = cost(u, ud, m, W, R)

# plot
nb.plot(m,subplot_loc=121, mytitle="m_ini", vmin=mtrue.vector().min(), vmax=mtrue.vector().max())
nb.plot(u,subplot_loc=122, mytitle="u(m_ini)")


The reduced Hessian apply to a vector $\hat{\mathbf{m}}$:

Here we describe how to apply the reduced Hessian operator to a vector $\hat{\mathbf{m}}$. For an opportune choice of the regularization, the reduced Hessian operator evaluated in a neighborhood of the solution is positive definite, whereas far from the solution the reduced Hessian may be indefinite. On the contrary, the Gauss-Newton approximation of the Hessian is always positive definite.

For this reason, it is beneficial to perform a few initial Gauss-Newton steps (5 in this particular example; see the gauss_newton_approx flag below) to accelerate the convergence of the inexact Newton-CG algorithm.

The Hessian apply reads:

$$\begin{aligned}
\hat{\mathbf{u}} &:= -\mathbf{A}^{-1} \mathbf{C} \, \hat{\mathbf{m}} & \text{(incremental forward)},\\
\hat{\mathbf{p}} &:= -\mathbf{A}^{-T} (\mathbf{W}_{uu} \hat{\mathbf{u}} + \mathbf{W}_{um} \hat{\mathbf{m}}) & \text{(incremental adjoint)},\\
\mathbf{H} \, \hat{\mathbf{m}} &:= \mathbf{R}\, \hat{\mathbf{m}} + \mathbf{W}_{mm} \hat{\mathbf{m}} + \mathbf{C}^T \hat{\mathbf{p}} + \mathbf{W}_{um}^T \hat{\mathbf{u}}.
\end{aligned}$$

The Gauss-Newton Hessian apply is obtained by dropping the second derivative operators $\mathbf{W}_{um} \hat{\mathbf{m}}$, $\mathbf{W}_{mm} \hat{\mathbf{m}}$, and $\mathbf{W}_{um}^T \hat{\mathbf{u}}$:

$$\begin{aligned}
\hat{\mathbf{u}} &:= -\mathbf{A}^{-1} \mathbf{C} \, \hat{\mathbf{m}},\\
\hat{\mathbf{p}} &:= -\mathbf{A}^{-T} \mathbf{W}_{uu} \hat{\mathbf{u}},\\
\mathbf{H}_{\rm GN} \, \hat{\mathbf{m}} &:= \mathbf{R}\, \hat{\mathbf{m}} + \mathbf{C}^T \hat{\mathbf{p}}.
\end{aligned}$$

# Class HessianOperator to perform Hessian apply to a vector
class HessianOperator():
    cgiter = 0
    def __init__(self, R, Wmm, C, A, adj_A, W, Wum, gauss_newton_approx=False):
        self.R = R
        self.Wmm = Wmm
        self.C = C
        self.A = A
        self.adj_A = adj_A
        self.W = W
        self.Wum = Wum
        self.gauss_newton_approx = gauss_newton_approx

        # incremental state
        self.du = dl.Vector()
        self.A.init_vector(self.du, 0)

        # incremental adjoint
        self.dp = dl.Vector()
        self.adj_A.init_vector(self.dp, 0)

        # auxiliary vectors
        self.CT_dp = dl.Vector()
        self.C.init_vector(self.CT_dp, 1)
        self.Wum_du = dl.Vector()
        self.Wum.init_vector(self.Wum_du, 1)

    def init_vector(self, v, dim):
        self.R.init_vector(v, dim)

    # Hessian performed on v, output as generic vector y
    def mult(self, v, y):
        self.cgiter += 1
        y.zero()
        if self.gauss_newton_approx:
            self.mult_GaussNewton(v, y)
        else:
            self.mult_Newton(v, y)

    # define (Gauss-Newton) Hessian apply H * v
    def mult_GaussNewton(self, v, y):

        # incremental forward
        rhs = -(self.C * v)
        dl.solve (self.A, self.du, rhs)

        # incremental adjoint
        rhs = - (self.W * self.du)
        dl.solve (self.adj_A, self.dp, rhs)

        # Reg/Prior term
        self.R.mult(v, y)

        # Misfit term
        self.C.transpmult(self.dp, self.CT_dp)
        y.axpy(1, self.CT_dp)

    # define (Newton) Hessian apply H * v
    def mult_Newton(self, v, y):

        # incremental forward
        rhs = -(self.C * v)
        dl.solve (self.A, self.du, rhs)

        # incremental adjoint
        rhs = -(self.W * self.du) -  self.Wum * v
        dl.solve (self.adj_A, self.dp, rhs)

        # Reg/Prior term
        self.R.mult(v, y)
        y.axpy(1., self.Wmm*v)

        # Misfit term
        self.C.transpmult(self.dp, self.CT_dp)
        y.axpy(1., self.CT_dp)
        self.Wum.transpmult(self.du, self.Wum_du)
        y.axpy(1., self.Wum_du)

We solve the constrained optimization problem using the inexact Newton-CG method with Armijo line search.

The stopping criterion is based on the norm of the gradient falling below a given tolerance (i.e. $\|\mathbf{g}_k\| \le \tau$).

First, we compute the gradient by solving the state and adjoint equations for the current parameter $m_k$, and then substituting the current state $u_k$, parameter $m_k$, and adjoint $p_k$ variables in the weak form expression of the gradient:

$$(\mathcal{G}(m_k), \tilde m) = \gamma(\nabla m_k, \nabla \tilde m) + (\tilde m \, \exp(m_k) \nabla u_k, \nabla p_k).$$

Then, we compute the Newton direction $\hat{\mathbf{m}}_k$ by iteratively solving $\mathbf{H}_k \, \hat{\mathbf{m}}_k = -\mathbf{g}_k$. The Newton system is solved inexactly by early termination of the conjugate gradient iterations via the Eisenstat–Walker criterion (to prevent oversolving) and the Steihaug criterion (to avoid negative curvature); specifically, the CG tolerance is set to $\min\{0.5, \sqrt{\|\mathbf{g}_k\| / \|\mathbf{g}_0\|}\}$.

Finally, the Armijo line search uses backtracking to find $\alpha$ such that a sufficient reduction in the cost functional is achieved. More specifically, we use backtracking to find $\alpha$ such that:

$$J(m_k + \alpha \, \hat m_k) \le J(m_k) + \alpha \, c_{\rm armijo} \, (\hat{\mathbf{m}}_k \cdot \mathbf{g}_k).$$
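
As a standalone toy illustration of the backtracking rule (a scalar quadratic cost; the notebook applies the same sufficient-decrease test to the PDE-constrained cost inside the Newton loop below):

# Armijo backtracking: find alpha with J(m + alpha*d) < J(m) + alpha*c*(g, d)
def armijo_backtracking(J, m, d, g_dot_d, c=1e-4, max_backtrack=10):
    alpha = 1.0
    J0 = J(m)
    for _ in range(max_backtrack):
        if J(m + alpha * d) < J0 + alpha * c * g_dot_d:
            return alpha          # sufficient decrease achieved
        alpha *= 0.5              # halve the step and try again
    return None                   # line search failed

J = lambda m: m * m               # toy cost, gradient J'(m) = 2m
m0, d = 2.0, -4.0                 # d = -J'(m0) is a descent direction
print(armijo_backtracking(J, m0, d, g_dot_d=4.0 * d))  # -> 0.5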

# define parameters for the optimization
tol = 1e-8
c = 1e-4
maxiter = 12
plot_on = False

# initialize iter counters
iter = 1
total_cg_iter = 0
converged = False

# initializations
g, m_delta = dl.Vector(), dl.Vector()
R.init_vector(m_delta, 0)
R.init_vector(g, 0)

m_prev = dl.Function(Vm)

print ("Nit   CGit   cost          misfit        reg           sqrt(-G*D)    ||grad||       alpha  tolcg")

while iter <  maxiter and not converged:

    # assemble matrix C
    C =  dl.assemble(C_equ)

    # solve the adjoint problem
    adjoint_A, adjoint_RHS = dl.assemble_system(a_adj, L_adj, bc_adj)
    dl.solve(adjoint_A, p.vector(), adjoint_RHS)

    # assemble W_um and W_mm
    Wum = dl.assemble (Wum_equ)
    Wmm = dl.assemble (Wmm_equ)

    # evaluate the  gradient
    CT_p = dl.Vector()
    C.transpmult(p.vector(), CT_p)
    MG = CT_p + R * m.vector()
    dl.solve(M, g, MG)

    # calculate the norm of the gradient
    grad2 = g.inner(MG)
    gradnorm = math.sqrt(grad2)

    # set the CG tolerance (use Eisenstat–Walker termination criterion)
    if iter == 1:
        gradnorm_ini = gradnorm
    tolcg = min(0.5, math.sqrt(gradnorm/gradnorm_ini))

    # define the Hessian apply operator (with preconditioner)
    Hess_Apply = HessianOperator(R, Wmm, C, state_A, adjoint_A, W, Wum, gauss_newton_approx=(iter<6) )
    P = R + gamma * M
    Psolver = dl.PETScKrylovSolver("cg", amg_method())
    Psolver.set_operator(P)

    solver = CGSolverSteihaug()
    solver.set_operator(Hess_Apply)
    solver.set_preconditioner(Psolver)
    solver.parameters["rel_tolerance"] = tolcg
    solver.parameters["zero_initial_guess"] = True
    solver.parameters["print_level"] = -1

    # solve the Newton system H m_delta = - MG
    solver.solve(m_delta, -MG)
    total_cg_iter += Hess_Apply.cgiter

    # linesearch
    alpha = 1
    descent = 0
    no_backtrack = 0
    m_prev.assign(m)
    while descent == 0 and no_backtrack < 10:
        m.vector().axpy(alpha, m_delta )

        # solve the state/forward problem
        state_A, state_b = dl.assemble_system(a_state, L_state, bc_state)
        dl.solve(state_A, u.vector(), state_b)

        # evaluate cost
        [cost_new, misfit_new, reg_new] = cost(u, ud, m, W, R)

        # check if Armijo conditions are satisfied
        if cost_new < cost_old + alpha * c * MG.inner(m_delta):
            cost_old = cost_new
            descent = 1
        else:
            no_backtrack += 1
            alpha *= 0.5
            m.assign(m_prev)  # reset m

    # calculate sqrt(-G * D)
    graddir = math.sqrt(- MG.inner(m_delta) )

    sp = ""
    print( "%2d %2s %2d %3s %8.5e %1s %8.5e %1s %8.5e %1s %8.5e %1s %8.5e %1s %5.2f %1s %5.3e" % \
        (iter, sp, Hess_Apply.cgiter, sp, cost_new, sp, misfit_new, sp, reg_new, sp, \
         graddir, sp, gradnorm, sp, alpha, sp, tolcg) )

    if plot_on:
        nb.multi1_plot([m,u,p], ["m","u","p"], same_colorbar=False)

    # check for convergence
    if gradnorm < tol and iter > 1:
        converged = True
        print( "Newton's method converged in ",iter,"  iterations")
        print( "Total number of CG iterations: ", total_cg_iter)

    iter += 1

if not converged:
    print( "Newton's method did not converge in ", maxiter, " iterations")
Nit   CGit   cost          misfit        reg           sqrt(-G*D)    ||grad||       alpha  tolcg
 1     1     1.12916e-05   1.12916e-05   1.34150e-11   1.56616e-02   3.79614e-04    1.00   5.000e-01
 2     1     7.83206e-07   7.83170e-07   3.68430e-11   4.68686e-03   5.35269e-05    1.00   3.755e-01
 3     1     3.12292e-07   3.12243e-07   4.92462e-11   9.73515e-04   7.14570e-06    1.00   1.372e-01
 4     6     1.91985e-07   1.61584e-07   3.04009e-08   4.54547e-04   1.00594e-06    1.00   5.148e-02
 5     1     1.86421e-07   1.56004e-07   3.04171e-08   1.05501e-04   6.25400e-07    1.00   4.059e-02
 6    13     1.80334e-07   1.36992e-07   4.33419e-08   1.12181e-04   2.14958e-07    1.00   2.380e-02
 7     5     1.80267e-07   1.38198e-07   4.20693e-08   1.15139e-05   3.92325e-08    1.00   1.017e-02
 8    15     1.80266e-07   1.38243e-07   4.20235e-08   1.48892e-06   3.20470e-09    1.00   2.906e-03
Newton's method converged in  8   iterations
Total number of CG iterations:  43
nb.multi1_plot([mtrue, m], ["mtrue", "m"])
nb.multi1_plot([u,p], ["u","p"], same_colorbar=False)



Copyright (c) 2016-2018, The University of Texas at Austin & University of California, Merced.
Copyright (c) 2019-2020, The University of Texas at Austin, University of California--Merced, Washington University in St. Louis.
All Rights reserved.
See file COPYRIGHT for details.

This file is part of the hIPPYlib library. For more information and source code availability see https://hippylib.github.io.

hIPPYlib is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License (as published by the Free Software Foundation) version 2.0 dated June 1991.