Coefficient field inversion in an elliptic partial differential equation
We consider the estimation of a coefficient in an elliptic partial differential equation as a model problem. Depending on the interpretation of the unknowns and the type of measurements, this model problem arises, for instance, in inversion for groundwater flow or heat conductivity. It can also be interpreted as finding a membrane with a certain spatially varying stiffness. Let $\Omega \subset \mathbb{R}^n$, $n \in \{1,2,3\}$, be an open, bounded domain and consider the following problem:

$$ \min_{m} J(m) := \frac{1}{2} \int_\Omega (u - u_d)^2 \, dx + \frac{\gamma}{2} \int_\Omega |\nabla m|^2 \, dx, $$

where $u$ is the solution of

$$ -\nabla \cdot (e^m \nabla u) = f \text{ in } \Omega, \quad u = 0 \text{ on } \partial\Omega. $$

Here $m$ is the unknown coefficient field, $u_d$ denotes (possibly noisy) data, $f$ is a given force, and $\gamma \ge 0$ is the regularization parameter.
The variational (or weak) form of the state equation:

Find $u \in H_0^1(\Omega)$ such that

$$ (e^m \nabla u, \nabla \tilde{p}) = (f, \tilde{p}) \quad \forall \tilde{p} \in H_0^1(\Omega), $$

where $H_0^1(\Omega)$ is the space of functions vanishing on $\partial\Omega$ with square integrable derivatives. Here, $(\cdot\,, \cdot)$ denotes the $L^2(\Omega)$ inner product, i.e., for scalar functions $u, v$ defined on $\Omega$ we denote

$$ (u, v) := \int_\Omega u \, v \, dx. $$
Gradient evaluation:

The Lagrangian functional $\mathcal{L} \colon H^1(\Omega) \times H^1(\Omega) \times H_0^1(\Omega) \to \mathbb{R}$ is given by

$$ \mathcal{L}(u, m, p) := \frac{1}{2}(u - u_d, u - u_d) + \frac{\gamma}{2}(\nabla m, \nabla m) + (e^m \nabla u, \nabla p) - (f, p). $$

Then the gradient of the cost functional $J(m)$ with respect to the parameter $m$ is

$$ \mathcal{G}(m)(\tilde{m}) := \gamma (\nabla m, \nabla \tilde{m}) + (\tilde{m} \, e^m \nabla u, \nabla p) \quad \forall \tilde{m}, $$

where $u \in H_0^1(\Omega)$ is the solution of the forward problem,

$$ (e^m \nabla u, \nabla \tilde{p}) = (f, \tilde{p}) \quad \forall \tilde{p} \in H_0^1(\Omega), $$

and $p \in H_0^1(\Omega)$ is the solution of the adjoint problem,

$$ (e^m \nabla p, \nabla \tilde{u}) = -(u - u_d, \tilde{u}) \quad \forall \tilde{u} \in H_0^1(\Omega). $$
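The adjoint-based gradient above can be sanity-checked against finite differences. Below is a minimal NumPy sketch, independent of FEniCS and of the code later in this notebook; for illustration the parameter enters through the simplified dependence $A(m) = K + \mathrm{diag}(e^m)$ on a tiny linear system rather than through the PDE coefficient, but the forward-solve / adjoint-solve structure of the gradient computation is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# fixed SPD "stiffness" part plus a parameter-dependent diagonal: A(m) = K + diag(exp(m))
K = 2.0*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = rng.standard_normal(n)
ud = rng.standard_normal(n)
gamma = 1e-2

def solve_state(m):
    A = K + np.diag(np.exp(m))
    return A, np.linalg.solve(A, f)

def cost(m):
    _, u = solve_state(m)
    return 0.5*np.sum((u - ud)**2) + 0.5*gamma*np.sum(m**2)

def gradient(m):
    A, u = solve_state(m)
    p = np.linalg.solve(A.T, -(u - ud))   # adjoint solve
    return gamma*m + np.exp(m)*u*p        # regularization + C^T p, with C = diag(exp(m)*u)

m0 = rng.standard_normal(n)
g = gradient(m0)

# central finite-difference check of the adjoint gradient
eps = 1e-6
I = np.eye(n)
g_fd = np.array([(cost(m0 + eps*I[i]) - cost(m0 - eps*I[i])) / (2*eps) for i in range(n)])
rel_err = np.linalg.norm(g - g_fd) / np.linalg.norm(g_fd)
print("relative error:", rel_err)
```

One adjoint solve yields all $n$ gradient components, whereas the finite-difference check needs $2n$ forward solves; this is precisely why the adjoint method scales to high-dimensional parameter fields.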
Hessian action:

To evaluate the action $\mathcal{H}(m)(\hat{m})$ of the Hessian in a given direction $\hat{m}$, we consider variations of the meta-Lagrangian functional

$$ \mathcal{L}^H(u, m, p; \hat{u}, \hat{m}, \hat{p}) := \underbrace{(e^m \nabla u, \nabla \hat{p}) - (f, \hat{p})}_{\text{forward eq}} + \underbrace{(e^m \nabla \hat{u}, \nabla p) + (u - u_d, \hat{u})}_{\text{adjoint eq}} + \underbrace{\gamma (\nabla m, \nabla \hat{m}) + (\hat{m} \, e^m \nabla u, \nabla p)}_{\text{gradient}}. $$

Then the action of the Hessian in a given direction $\hat{m}$ is

$$ (\tilde{m}, \mathcal{H}(m)\,\hat{m}) := \mathcal{L}^H_m(u, m, p; \hat{u}, \hat{m}, \hat{p})(\tilde{m}) = \gamma (\nabla \hat{m}, \nabla \tilde{m}) + (\tilde{m} \hat{m} \, e^m \nabla u, \nabla p) + (\tilde{m} \, e^m \nabla \hat{u}, \nabla p) + (\tilde{m} \, e^m \nabla u, \nabla \hat{p}) \quad \forall \tilde{m}, $$

where

- $u$ and $p$ are the solutions of the forward and adjoint problems, respectively;

- $\hat{u}$ is the solution of the incremental forward problem,

$$ (e^m \nabla \hat{u}, \nabla \tilde{p}) = -(\hat{m} \, e^m \nabla u, \nabla \tilde{p}) \quad \forall \tilde{p}; $$

- $\hat{p}$ is the solution of the incremental adjoint problem,

$$ (e^m \nabla \hat{p}, \nabla \tilde{u}) = -(\hat{u}, \tilde{u}) - (\hat{m} \, e^m \nabla p, \nabla \tilde{u}) \quad \forall \tilde{u}. $$
Inexact Newton-CG:

Written in abstract form, the Newton method computes an update direction $\hat{m}_k$ by solving the linear system

$$ \mathcal{H}(m_k)\,\hat{m}_k = -\mathcal{G}(m_k), $$

where the evaluation of the gradient $\mathcal{G}(m_k)$ involves the solutions $u_k$ and $p_k$ of the forward and adjoint problems (respectively) for $m = m_k$. Similarly, each application of the Hessian $\mathcal{H}(m_k)$ additionally requires solving the incremental forward and adjoint problems.
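The interplay between the outer Newton iteration and the inner, inexactly terminated CG solve can be sketched on a generic smooth convex function. The objective below is a stand-in chosen for illustration (it is not the PDE-constrained cost functional), and the CG tolerance follows the same relative rule used later in this notebook:

```python
import numpy as np

# simple smooth convex objective: f(x) = sum(cosh(x_i)) + 0.5*||x - b||^2
def grad(x, b):
    return np.sinh(x) + (x - b)

def hess(x):
    return np.diag(np.cosh(x)) + np.eye(len(x))

def cg(H, rhs, rel_tol):
    """Plain CG for H d = rhs, terminated early at a relative residual (inexact solve)."""
    d = np.zeros_like(rhs)
    res = rhs.copy()
    p = res.copy()
    r0 = np.linalg.norm(res)
    for _ in range(2 * len(rhs)):
        if np.linalg.norm(res) <= rel_tol * r0:
            break
        Hp = H @ p
        a = (res @ res) / (p @ Hp)
        d = d + a * p
        res_new = res - a * Hp
        beta = (res_new @ res_new) / (res @ res)
        res = res_new
        p = res + beta * p
    return d

rng = np.random.default_rng(1)
b = rng.standard_normal(20)
x = np.zeros(20)
g0 = np.linalg.norm(grad(x, b))
for k in range(30):
    g = grad(x, b)
    gn = np.linalg.norm(g)
    if gn < 1e-8 * g0:                      # relative gradient-norm stopping criterion
        break
    tolcg = min(0.5, np.sqrt(gn / g0))      # Eisenstat-Walker-style forcing term
    x = x + cg(hess(x), -g, tolcg)          # inexact Newton step (unit step length)
print("relative gradient norm:", np.linalg.norm(grad(x, b)) / g0)
```

Loose CG tolerances far from the solution avoid wasting Hessian applies on an inaccurate Newton model; the tolerance tightens as the gradient shrinks, preserving superlinear convergence.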
Discrete Newton system:

Let us denote the vectors corresponding to the discretization of the functions $u_k, m_k, p_k$ by $\mathbf{u}_k, \mathbf{m}_k, \mathbf{p}_k$, and of the functions $\hat{u}_k, \hat{m}_k, \hat{p}_k$ by $\hat{\mathbf{u}}_k, \hat{\mathbf{m}}_k, \hat{\mathbf{p}}_k$.

Then, the discretization of the above system is given by the following symmetric linear system:

$$ \mathbf{H} \, \hat{\mathbf{m}} = -\mathbf{g}. $$

The gradient $\mathbf{g}$ is computed using the following three steps:

- Given $\mathbf{m}$ we solve the forward problem

$$ \mathbf{A} \mathbf{u} = \mathbf{f}, $$

where $\mathbf{A}$ stems from the discretization of the bilinear form $(e^m \nabla u, \nabla \tilde{p})$, and $\mathbf{f}$ stands for the discretization of the right hand side $(f, \tilde{p})$.

- Given $\mathbf{m}$ and $\mathbf{u}$ solve the adjoint problem

$$ \mathbf{A}^T \mathbf{p} = -\mathbf{W} (\mathbf{u} - \mathbf{u}_d), $$

where $\mathbf{A}^T$ stems from the discretization of $(e^m \nabla p, \nabla \tilde{u})$, $\mathbf{W}$ is the mass matrix corresponding to the $L^2$ inner product in the state space, and $\mathbf{u}_d$ stems from the data.

- Define the gradient

$$ \mathbf{g} = \mathbf{R} \mathbf{m} + \mathbf{C}^T \mathbf{p}, $$

where $\mathbf{R}$ is the matrix stemming from discretization of the regularization operator $\gamma (\nabla m, \nabla \tilde{m})$, and $\mathbf{C}$ stems from discretization of the term $(\tilde{m} \, e^m \nabla u, \nabla p)$.

Similarly, the action of the Hessian in a direction $\hat{\mathbf{m}}$ (by using the CG algorithm we only need the action of $\mathbf{H}$ to solve the Newton step) is given by

- Solve the incremental forward problem

$$ \mathbf{A} \hat{\mathbf{u}} = -\mathbf{C} \hat{\mathbf{m}}, $$

where $\mathbf{C}$ stems from discretization of $(\hat{m} \, e^m \nabla u, \nabla \tilde{p})$.

- Solve the incremental adjoint problem

$$ \mathbf{A}^T \hat{\mathbf{p}} = -(\mathbf{W} \hat{\mathbf{u}} + \mathbf{W}_{um} \hat{\mathbf{m}}), $$

where $\mathbf{W}_{um}$ stems from the discretization of $(\hat{m} \, e^m \nabla p, \nabla \tilde{u})$.

- Define the Hessian action

$$ \mathbf{H} \hat{\mathbf{m}} = (\mathbf{R} + \mathbf{W}_{mm}) \hat{\mathbf{m}} + \mathbf{C}^T \hat{\mathbf{p}} + \mathbf{W}_{um}^T \hat{\mathbf{u}}, $$

where $\mathbf{W}_{mm}$ stems from the discretization of $(\tilde{m} \hat{m} \, e^m \nabla u, \nabla p)$.
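The three steps above can be exercised on small dense matrices. In the following NumPy sketch all operators are random stand-ins with the right symmetry properties (not assembled from finite elements); it verifies that the resulting reduced Hessian is symmetric, which is what makes CG applicable to the Newton system.

```python
import numpy as np

rng = np.random.default_rng(2)
n_u, n_m = 10, 6   # hypothetical state and parameter dimensions

def spd(n):
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)

A   = spd(n_u)                          # discretized (incremental) forward operator
W   = spd(n_u)                          # state-space mass matrix
R   = 1e-2 * spd(n_m)                   # regularization
Wmm = spd(n_m)                          # symmetric second-derivative block
C   = rng.standard_normal((n_u, n_m))
Wum = rng.standard_normal((n_u, n_m))

def hess_apply(v):
    u_hat = np.linalg.solve(A, -C @ v)                      # 1. incremental forward
    p_hat = np.linalg.solve(A.T, -(W @ u_hat + Wum @ v))    # 2. incremental adjoint
    return (R + Wmm) @ v + C.T @ p_hat + Wum.T @ u_hat      # 3. Hessian action

# assembling H column-by-column exposes the symmetry of the reduced Hessian
H = np.column_stack([hess_apply(e) for e in np.eye(n_m)])
print(np.allclose(H, H.T))   # → True
```

Note that `hess_apply` never forms $\mathbf{H}$ explicitly: each application costs two linear solves, which is exactly how the matrix-free `HessianOperator` class below works.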
Goals:
By the end of this notebook, you should be able to:
- solve the forward and adjoint Poisson equations
- understand the inverse method framework
- visualise and understand the results
- modify the problem and code
Mathematical tools used:
- Finite element method
- Derivation of gradient and Hessian via the adjoint method
- Inexact Newton-CG
- Armijo line search
List of software used:
- FEniCS, a parallel finite element library for the discretization of partial differential equations
- PETSc, for scalable and efficient linear algebra operations and solvers
- Matplotlib, a Python package used for plotting the results
Set up
Import dependencies
from __future__ import absolute_import, division, print_function
from dolfin import *
import sys
import os
sys.path.append( os.environ.get('HIPPYLIB_BASE_DIR', "../") )
from hippylib import *
import logging
import matplotlib.pyplot as plt
%matplotlib inline
logging.getLogger('FFC').setLevel(logging.WARNING)
logging.getLogger('UFL').setLevel(logging.WARNING)
set_log_active(False)
Model set up:
As in the introduction, the first thing we need to do is to set up the numerical model. In this cell, we define the mesh, the finite element function spaces corresponding to the state, adjoint, and coefficient/gradient variables, the corresponding trial and test functions, and the parameters for the optimization.
# create mesh and define function spaces
nx = 64
ny = 64
mesh = UnitSquareMesh(nx, ny)
Vm = FunctionSpace(mesh, 'Lagrange', 1)
Vu = FunctionSpace(mesh, 'Lagrange', 2)
# The true and inverted parameter
mtrue = interpolate(Expression('log(2 + 7*(pow(pow(x[0] - 0.5,2) + pow(x[1] - 0.5,2),0.5) > 0.2))', degree=5),Vm)
m = interpolate(Expression("log(2.0)", degree=1),Vm)
# define function for state and adjoint
u = Function(Vu)
p = Function(Vu)
# define Trial and Test Functions
u_trial, p_trial, m_trial = TrialFunction(Vu), TrialFunction(Vu), TrialFunction(Vm)
u_test, p_test, m_test = TestFunction(Vu), TestFunction(Vu), TestFunction(Vm)
# initialize input functions
f = Constant("1.0")
u0 = Constant("0.0")
# plot
plt.figure(figsize=(15,5))
nb.plot(mesh,subplot_loc=121, mytitle="Mesh", show_axis='on')
nb.plot(mtrue,subplot_loc=122, mytitle="True parameter field")
plt.show()
# set up dirichlet boundary conditions
def boundary(x,on_boundary):
    return on_boundary
bc_state = DirichletBC(Vu, u0, boundary)
bc_adj = DirichletBC(Vu, Constant(0.), boundary)
Set up synthetic observations:

- Propose a coefficient field $m_{\rm true}$ shown above

- Solve the weak form of the pde: Find $u \in H_0^1(\Omega)$ such that $(e^{m_{\rm true}} \nabla u, \nabla \tilde{p}) = (f, \tilde{p}) \quad \forall \tilde{p} \in H_0^1(\Omega)$.

- Perturb the solution: $u_d = u + \eta$, where $\eta \sim \mathcal{N}(0, \sigma^2)$ with $\sigma = 0.05\,\|u\|_\infty$
# noise level
noise_level = 0.05
# weak form for setting up the synthetic observations
a_goal = inner(exp(mtrue) * nabla_grad(u_trial), nabla_grad(u_test)) * dx
L_goal = f * u_test * dx
# solve the forward/state problem to generate synthetic observations
goal_A, goal_b = assemble_system(a_goal, L_goal, bc_state)
utrue = Function(Vu)
solve(goal_A, utrue.vector(), goal_b)
ud = Function(Vu)
ud.assign(utrue)
# perturb state solution and create synthetic measurements ud
# ud = u + u/SNR * random.normal
MAX = ud.vector().norm("linf")
noise = Vector()
goal_A.init_vector(noise,1)
parRandom.normal(noise_level * MAX, noise)
bc_adj.apply(noise)
ud.vector().axpy(1., noise)
# plot
nb.multi1_plot([utrue, ud], ["State solution with mtrue", "Synthetic observations"])
plt.show()
The cost function evaluation:

$$ J(\mathbf{m}) := \frac{1}{2} (\mathbf{u} - \mathbf{u}_d)^T \mathbf{W} (\mathbf{u} - \mathbf{u}_d) + \frac{1}{2} \mathbf{m}^T \mathbf{R} \mathbf{m}. $$

In the code below, $\mathbf{W}$ and $\mathbf{R}$ are symmetric positive definite matrices that stem from the finite element discretization of the misfit and regularization components of the cost functional, respectively.
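As a concrete illustration of where such matrices come from, here is a small self-contained 1D sketch (hypothetical uniform mesh with piecewise-linear elements, not the FEniCS code used in this notebook) that assembles a mass matrix and a scaled stiffness matrix and evaluates the two components of the cost:

```python
import numpy as np

# hypothetical small 1D mesh on [0,1] with piecewise-linear (P1) elements
n = 16                       # number of elements
h = 1.0 / n
N = n + 1                    # number of nodes
W = np.zeros((N, N))         # mass matrix      -> misfit term
K = np.zeros((N, N))         # stiffness matrix -> regularization term
for e in range(n):           # element-by-element assembly
    idx = [e, e + 1]
    W[np.ix_(idx, idx)] += h/6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    K[np.ix_(idx, idx)] += 1.0/h * np.array([[1.0, -1.0], [-1.0, 1.0]])
gamma = 1e-8
R = gamma * K

# evaluate the two cost components, mirroring the cost() function below
x = np.linspace(0.0, 1.0, N)
u = np.sin(np.pi * x)
ud = u + 0.01                          # "data": state shifted by a constant
m = np.log(2.0) * np.ones(N)           # constant parameter field
diff = u - ud
misfit = 0.5 * diff @ (W @ diff)       # 0.5 * ||u - ud||_{L2}^2 = 0.5 * 1e-4
reg = 0.5 * m @ (R @ m)                # vanishes for a constant m
print(misfit, reg)
```

Because the regularization penalizes $\|\nabla m\|^2$, a constant parameter field incurs zero penalty, which is visible in the output above.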
# regularization parameter
gamma = 1e-8
# weak forms for setting up the misfit and regularization component of the cost
W_equ = inner(u_trial, u_test) * dx
R_equ = gamma * inner(nabla_grad(m_trial), nabla_grad(m_test)) * dx
W = assemble(W_equ)
R = assemble(R_equ)
# define cost function
def cost(u, ud, m, W, R):
    diff = u.vector() - ud.vector()
    reg = 0.5 * m.vector().inner(R*m.vector() )
    misfit = 0.5 * diff.inner(W * diff)
    return [reg + misfit, misfit, reg]
Setting up the state equations, right hand side for the adjoint and the necessary matrices:
# weak form for setting up the state equation
a_state = inner(exp(m) * nabla_grad(u_trial), nabla_grad(u_test)) * dx
L_state = f * u_test * dx
# weak form for setting up the adjoint equation
a_adj = inner(exp(m) * nabla_grad(p_trial), nabla_grad(p_test)) * dx
L_adj = -inner(u - ud, p_test) * dx
# weak form for setting up matrices
Wum_equ = inner(exp(m) * m_trial * nabla_grad(p_test), nabla_grad(p)) * dx
C_equ = inner(exp(m) * m_trial * nabla_grad(u), nabla_grad(u_test)) * dx
Wmm_equ = inner(exp(m) * m_trial * m_test * nabla_grad(u), nabla_grad(p)) * dx
M_equ = inner(m_trial, m_test) * dx
# assemble matrix M
M = assemble(M_equ)
Initial guess
We solve the state equation and compute the cost functional for the initial guess of the parameter m_ini
# solve state equation
state_A, state_b = assemble_system (a_state, L_state, bc_state)
solve (state_A, u.vector(), state_b)
# evaluate cost
[cost_old, misfit_old, reg_old] = cost(u, ud, m, W, R)
# plot
plt.figure(figsize=(15,5))
nb.plot(m,subplot_loc=121, mytitle="m_ini", vmin=mtrue.vector().min(), vmax=mtrue.vector().max())
nb.plot(u,subplot_loc=122, mytitle="u(m_ini)")
plt.show()
The reduced Hessian apply to a vector $\hat{\mathbf{m}}$:

Here we describe how to apply the reduced Hessian operator to a vector $\hat{\mathbf{m}}$. For an opportune choice of the regularization, the reduced Hessian operator evaluated in a neighborhood of the solution is positive definite, whereas far from the solution the reduced Hessian may be indefinite. In contrast, the Gauss-Newton approximation of the Hessian is always positive definite.

For this reason, it is beneficial to perform a few initial Gauss-Newton steps (5 in this particular example) to accelerate the convergence of the inexact Newton-CG algorithm.

The Hessian apply reads:

$$ \mathbf{H} \hat{\mathbf{m}} = (\mathbf{R} + \mathbf{W}_{mm}) \hat{\mathbf{m}} + \mathbf{C}^T \hat{\mathbf{p}} + \mathbf{W}_{um}^T \hat{\mathbf{u}}. $$

The Gauss-Newton Hessian apply is obtained by dropping the second derivative operators $\mathbf{W}_{um}$, $\mathbf{W}_{um}^T$, and $\mathbf{W}_{mm}$:

$$ \mathbf{H}_{\rm GN} \hat{\mathbf{m}} = \mathbf{R} \hat{\mathbf{m}} + \mathbf{C}^T \hat{\mathbf{p}}. $$
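The definiteness claim is easiest to see in a scalar nonlinear least-squares toy problem, chosen purely for illustration (it is not the PDE problem above): the full Hessian contains a term multiplying the residual by its second derivative, which can be negative far from the solution, while the Gauss-Newton part is a square and hence always nonnegative.

```python
# scalar nonlinear least squares J(m) = 0.5 * r(m)^2 with residual r(m) = m^2 - 1
r   = lambda m: m**2 - 1.0
dr  = lambda m: 2.0 * m
d2r = lambda m: 2.0

def newton_hess(m):
    return dr(m)**2 + r(m) * d2r(m)    # full Hessian: J'' = (r')^2 + r * r''

def gauss_newton_hess(m):
    return dr(m)**2                    # Gauss-Newton: drop the r * r'' term

m = 0.1   # far from the minimizers m = +/- 1
print(newton_hess(m), gauss_newton_hess(m))
# the full Hessian is negative here (indefinite far from the solution),
# while the Gauss-Newton approximation stays nonnegative everywhere
```

Near a minimizer the residual $r(m)$ is small, so the two Hessians agree; this is why switching from Gauss-Newton to full Newton steps after a few iterations recovers fast local convergence.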
# Class HessianOperator to perform Hessian apply to a vector
class HessianOperator():
    cgiter = 0
    def __init__(self, R, Wmm, C, A, adj_A, W, Wum, gauss_newton_approx=False):
        self.R = R
        self.Wmm = Wmm
        self.C = C
        self.A = A
        self.adj_A = adj_A
        self.W = W
        self.Wum = Wum
        self.gauss_newton_approx = gauss_newton_approx

        # incremental state
        self.du = Vector()
        self.A.init_vector(self.du,0)

        # incremental adjoint
        self.dp = Vector()
        self.adj_A.init_vector(self.dp,0)

        # auxiliary vectors
        self.CT_dp = Vector()
        self.C.init_vector(self.CT_dp, 1)
        self.Wum_du = Vector()
        self.Wum.init_vector(self.Wum_du, 1)

    def init_vector(self, v, dim):
        self.R.init_vector(v,dim)

    # Hessian performed on v, output as generic vector y
    def mult(self, v, y):
        self.cgiter += 1
        y.zero()
        if self.gauss_newton_approx:
            self.mult_GaussNewton(v,y)
        else:
            self.mult_Newton(v,y)

    # define (Gauss-Newton) Hessian apply H * v
    def mult_GaussNewton(self, v, y):
        # incremental forward
        rhs = -(self.C * v)
        bc_adj.apply(rhs)
        solve (self.A, self.du, rhs)

        # incremental adjoint
        rhs = -(self.W * self.du)
        bc_adj.apply(rhs)
        solve (self.adj_A, self.dp, rhs)

        # Reg/Prior term
        self.R.mult(v,y)

        # Misfit term
        self.C.transpmult(self.dp, self.CT_dp)
        y.axpy(1, self.CT_dp)

    # define (Newton) Hessian apply H * v
    def mult_Newton(self, v, y):
        # incremental forward
        rhs = -(self.C * v)
        bc_adj.apply(rhs)
        solve (self.A, self.du, rhs)

        # incremental adjoint
        rhs = -(self.W * self.du) - self.Wum * v
        bc_adj.apply(rhs)
        solve (self.adj_A, self.dp, rhs)

        # Reg/Prior term
        self.R.mult(v,y)
        y.axpy(1., self.Wmm*v)

        # Misfit term
        self.C.transpmult(self.dp, self.CT_dp)
        y.axpy(1., self.CT_dp)
        self.Wum.transpmult(self.du, self.Wum_du)
        y.axpy(1., self.Wum_du)
The inexact Newton-CG optimization with Armijo line search:

We solve the constrained optimization problem using the inexact Newton-CG method with Armijo line search.

The stopping criterion is based on a relative reduction of the norm of the gradient (i.e. $\frac{\|g_n\|}{\|g_0\|} \leq \tau$).

First, we compute the gradient by solving the state and adjoint equations for the current parameter $m$, and then substituting the current state $u$, parameter $m$, and adjoint $p$ variables in the weak form expression of the gradient:

$$ (g, \tilde{m}) = \gamma (\nabla m, \nabla \tilde{m}) + (\tilde{m} \, e^m \nabla u, \nabla p) \quad \forall \tilde{m}. $$

Then, we compute the Newton direction $\hat{m}$ by iteratively solving $\mathcal{H} \hat{m} = -g$. The Newton system is solved inexactly by early termination of conjugate gradient iterations via the Eisenstat-Walker criterion (to prevent oversolving) and the Steihaug criterion (to avoid negative curvature).

Finally, the Armijo line search uses backtracking to find $\alpha$ such that a sufficient reduction in the cost functional is achieved. More specifically, we use backtracking to find $\alpha$ such that:

$$ J(m + \alpha \hat{m}) \leq J(m) + \alpha c_{\rm armijo} (\hat{m}, g). $$
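The backtracking loop can be written generically. The sketch below is a simplified stand-alone version of the line search used in the cell that follows (the helper name is hypothetical and not part of hIPPYlib), demonstrated on a simple quadratic:

```python
import numpy as np

def armijo_linesearch(cost, m, dm, g, c=1e-4, alpha0=1.0, max_backtrack=10):
    """Backtrack until cost(m + alpha*dm) < cost(m) + alpha*c*(g . dm)."""
    J0 = cost(m)
    gdm = g @ dm    # directional derivative; negative for a descent direction
    alpha = alpha0
    for _ in range(max_backtrack):
        if cost(m + alpha*dm) < J0 + alpha*c*gdm:
            return alpha
        alpha *= 0.5
    return None     # line search failed

# usage on the simple quadratic J(m) = 0.5*|m|^2, whose gradient is m
cost = lambda m: 0.5 * (m @ m)
m = np.array([2.0, -1.0])
g = m.copy()
alpha = armijo_linesearch(cost, m, -g, g)        # full Newton-like step
alpha2 = armijo_linesearch(cost, m, -4.0*g, g)   # overlong step forces backtracking
print(alpha, alpha2)   # → 1.0 0.25
```

With the small constant $c_{\rm armijo} = 10^{-4}$ the full step is accepted whenever it gives almost any decrease, so near the solution Newton's unit step (and hence fast local convergence) is preserved.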
# define parameters for the optimization
tol = 1e-8
c = 1e-4
maxiter = 12
plot_on = False
# initialize iter counters
iter = 1
total_cg_iter = 0
converged = False
# initializations
g, m_delta = Vector(), Vector()
R.init_vector(m_delta,0)
R.init_vector(g,0)
m_prev = Function(Vm)
print ("Nit   CGit   cost          misfit        reg           sqrt(-G*D)    ||grad||       alpha  tolcg")
while iter < maxiter and not converged:

    # assemble matrix C
    C = assemble(C_equ)

    # solve the adjoint problem
    adjoint_A, adjoint_RHS = assemble_system(a_adj, L_adj, bc_adj)
    solve(adjoint_A, p.vector(), adjoint_RHS)

    # assemble W_um and W_mm
    Wum = assemble (Wum_equ)
    Wmm = assemble (Wmm_equ)

    # evaluate the gradient
    CT_p = Vector()
    C.init_vector(CT_p,1)
    C.transpmult(p.vector(), CT_p)
    MG = CT_p + R * m.vector()
    solve(M, g, MG)

    # calculate the norm of the gradient
    grad2 = g.inner(MG)
    gradnorm = sqrt(grad2)

    # set the CG tolerance (use Eisenstat-Walker termination criterion)
    if iter == 1:
        gradnorm_ini = gradnorm
    tolcg = min(0.5, sqrt(gradnorm/gradnorm_ini))

    # define the Hessian apply operator (with preconditioner)
    Hess_Apply = HessianOperator(R, Wmm, C, state_A, adjoint_A, W, Wum, gauss_newton_approx=(iter<6) )
    P = R + gamma * M
    Psolver = PETScKrylovSolver("cg", amg_method())
    Psolver.set_operator(P)

    solver = CGSolverSteihaug()
    solver.set_operator(Hess_Apply)
    solver.set_preconditioner(Psolver)
    solver.parameters["rel_tolerance"] = tolcg
    solver.parameters["zero_initial_guess"] = True
    solver.parameters["print_level"] = -1

    # solve the Newton system H m_delta = - MG
    solver.solve(m_delta, -MG)
    total_cg_iter += Hess_Apply.cgiter

    # linesearch
    alpha = 1
    descent = 0
    no_backtrack = 0
    m_prev.assign(m)
    while descent == 0 and no_backtrack < 10:
        m.vector().axpy(alpha, m_delta )

        # solve the state/forward problem
        state_A, state_b = assemble_system(a_state, L_state, bc_state)
        solve(state_A, u.vector(), state_b)

        # evaluate cost
        [cost_new, misfit_new, reg_new] = cost(u, ud, m, W, R)

        # check if Armijo conditions are satisfied
        if cost_new < cost_old + alpha * c * MG.inner(m_delta):
            cost_old = cost_new
            descent = 1
        else:
            no_backtrack += 1
            alpha *= 0.5
            m.assign(m_prev)  # reset m

    # calculate sqrt(-G * D)
    graddir = sqrt(- MG.inner(m_delta) )

    sp = ""
    print( "%2d %2s %2d %3s %8.5e %1s %8.5e %1s %8.5e %1s %8.5e %1s %8.5e %1s %5.2f %1s %5.3e" % \
        (iter, sp, Hess_Apply.cgiter, sp, cost_new, sp, misfit_new, sp, reg_new, sp, \
         graddir, sp, gradnorm, sp, alpha, sp, tolcg) )

    if plot_on:
        nb.multi1_plot([m,u,p], ["m","u","p"], same_colorbar=False)
        plt.show()

    # check for convergence
    if gradnorm < tol and iter > 1:
        converged = True
        print( "Newton's method converged in ",iter,"  iterations")
        print( "Total number of CG iterations: ", total_cg_iter)

    iter += 1

if not converged:
    print( "Newton's method did not converge in ", maxiter, " iterations")
Nit   CGit   cost          misfit        reg           sqrt(-G*D)    ||grad||       alpha  tolcg
 1     1    1.12916e-05   1.12916e-05   1.34131e-11   1.56616e-02   3.79614e-04   1.00   5.000e-01
 2     1    7.83203e-07   7.83166e-07   3.68374e-11   4.68686e-03   5.35268e-05   1.00   3.755e-01
 3     1    3.12289e-07   3.12240e-07   4.92387e-11   9.73515e-04   7.14567e-06   1.00   1.372e-01
 4     6    1.91792e-07   1.61389e-07   3.04037e-08   4.54694e-04   1.00593e-06   1.00   5.148e-02
 5     1    1.86420e-07   1.56000e-07   3.04197e-08   1.03668e-04   6.15515e-07   1.00   4.027e-02
 6    11    1.80340e-07   1.36887e-07   4.34527e-08   1.12151e-04   2.14951e-07   1.00   2.380e-02
 7     5    1.80268e-07   1.38103e-07   4.21646e-08   1.19478e-05   3.96243e-08   1.00   1.022e-02
 8    15    1.80266e-07   1.38241e-07   4.20247e-08   1.70777e-06   3.37236e-09   1.00   2.981e-03
Newton's method converged in  8  iterations
Total number of CG iterations:  41
nb.multi1_plot([mtrue, m], ["mtrue", "m"])
nb.multi1_plot([u,p], ["u","p"], same_colorbar=False)
plt.show()
Copyright (c) 2016-2018, The University of Texas at Austin & University of California, Merced.
All Rights reserved.
See file COPYRIGHT for details.
This file is part of the hIPPYlib library. For more information and source code availability see https://hippylib.github.io.
hIPPYlib is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License (as published by the Free Software Foundation) version 2.0 dated June 1991.