BISMG:StephenC

Friday Dec 18 2009
Added a new Richardson error estimator, based on

$$ R_{2h} = L_{2h}^{-1}C(r_h,2) - C(L_h ^{-1} r_h,2) $$

where $$L_h$$ is the linearised viscous tensor operator discretised at resolution $$h(x,y)$$, $$r_h$$ is the right-hand side of the most recent linear system solved, and $$C(\cdot,2)$$ is averaging onto a factor-2 coarsened mesh.

i.e.

$$ R_{2h} = L_{2h}^{-1}C(r_h,2) - C(u_h,2) $$

where $$u_h$$ is the velocity function at the end of the Picard iterations.

The previous estimator was producing some odd results in the nonlinear case (error estimates growing with resolution). This one appears to work well.

The major advantage is that error estimates are just that: estimates of the error in the velocity field (rather than truncation order). Of course, they are only reliable close to the solution, but should act as decent error indicators otherwise.

The major disadvantage is computational cost - an additional linear system needs to be solved. But it is a smaller one than the main problem (by a factor of 4 in 2D and 8 in 3D), we have a decent initial guess for its solution ($$C(u_h,2)$$), and it is just a linear problem, to be solved after a nonlinear problem. Profiling suggests that it takes about 5% of the CPU time when running the ISMIP-HOM C test at L = 160 km. It is necessary to insist on blocks of at least 4x4 cells in any patch (for the multigrid solver to work), which might result in some extra cost.
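As a concrete (if much simplified) illustration of the estimator, here is a 1D sketch: $$L_h$$ is replaced by a second-difference Poisson operator, $$C(\cdot,2)$$ by injection onto a factor-2 coarsened grid, and the solves use the Thomas algorithm rather than multigrid. The names solvePoisson and richardsonEstimate are mine, not Chombo's.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Solve -u'' = r on (0,1) with u(0) = u(1) = 0, discretised by central
// differences on n interior points with spacing h = 1/(n+1), using the
// Thomas (tridiagonal) algorithm.
std::vector<double> solvePoisson(const std::vector<double>& r)
{
  int n = static_cast<int>(r.size());
  double h = 1.0 / (n + 1);
  double diag = 2.0 / (h * h), off = -1.0 / (h * h);
  std::vector<double> c(n), d(n), u(n);
  c[0] = off / diag;
  d[0] = r[0] / diag;
  for (int i = 1; i < n; ++i) {
    double m = diag - off * c[i - 1];
    c[i] = off / m;
    d[i] = (r[i] - off * d[i - 1]) / m;
  }
  u[n - 1] = d[n - 1];
  for (int i = n - 2; i >= 0; --i)
    u[i] = d[i] - c[i] * u[i + 1];
  return u;
}

// R_2h = L_2h^{-1} C(r_h,2) - C(u_h,2), with injection as the coarsening
// operator C; rFine must hold 2*m + 1 interior values.
std::vector<double> richardsonEstimate(const std::vector<double>& rFine)
{
  int m = (static_cast<int>(rFine.size()) - 1) / 2;
  std::vector<double> uFine = solvePoisson(rFine);
  std::vector<double> rCoarse(m), uFineC(m);
  for (int j = 0; j < m; ++j) {
    rCoarse[j] = rFine[2 * j + 1];   // C(r_h,2): coarse point j sits at fine point 2j+1
    uFineC[j]  = uFine[2 * j + 1];   // C(u_h,2)
  }
  std::vector<double> uCoarse = solvePoisson(rCoarse);  // L_2h^{-1} C(r_h,2)
  std::vector<double> R(m);
  for (int j = 0; j < m; ++j)
    R[j] = uCoarse[j] - uFineC[j];
  return R;
}
```

For $$r$$ chosen so the exact solution is $$\sin(\pi x)$$, the resulting $$R_{2h}$$ comes out of the same order as the coarse-level discretisation error, which is the point of the construction.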

Some graphs [[Media:slcdec18.pdf]] show the error estimator and adaptive refinement at work. In all these the problem is ISMIP-HOM C, but at L = 1024 km. The first pair (the x-component of u and the order of magnitude of its error estimate) are at 32 km resolution, with the fast-flowing region confined to a couple of cells. Most importantly, the peak speed only grows to about 1 km/a. In the subsequent graphs, the peak speed grows to 2 km/a over much of a cell, and the error estimates shrink, first in extent and then in magnitude. I think it is interesting that a broad variation in basal stress (easily resolved by the 32 km mesh) can lead to a velocity profile that depends so strongly on resolution. If the fast-flowing regions were connected to mass sinks, about twice as much mass per unit time would be lost through them in the adaptively refined mesh compared to the coarse mesh.

Friday 11 Dec
Periodic boundary conditions and nonlinear stresses (handled by Picard iteration) are now included, so I can compare with Experiment C from Pattyn et al 2008, The Cryosphere 2, p. 95. Pattyn et al 2008 is a comparison study of many higher-order stress models, including full Stokes models, Blatter-Pattyn (LMLa) type models, and Shelfy-Stream (L1L1) type models. Experiment C prescribes a periodic domain of side $$L$$, a constant ice thickness of 1 km, a base elevation

$$ z_b = - x \tan \left ( \frac{0.1 \pi}{180} \right ) $$

and basal drag

$$ \beta ^2 = 1000 + 1000 \sin(2 \pi x / L) \sin (2 \pi y / L) \mbox{Pa a m}^{-1} $$

My results (for $$L \in \{5,10,20,40,80,160\} \mbox{ km}$$) are similar to those in the paper, but consistently show a slightly higher flow rate than the mean-plus-standard-deviation results published there; they state that L1L1 approximations are outliers.
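For the record, a sketch of the Experiment C input fields evaluated on a uniform cell-centred grid (fillExperimentC is a hypothetical helper name; units as in the formulas above):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Fill the ISMIP-HOM Experiment C bed elevation z_b (metres) and basal drag
// coefficient beta^2 (Pa a m^-1) on an n x n cell-centred grid of side L
// (metres), stored row-major.
void fillExperimentC(int n, double L,
                     std::vector<double>& zb, std::vector<double>& beta2)
{
  const double pi = std::acos(-1.0);
  double dx = L / n;
  zb.resize(n * n);
  beta2.resize(n * n);
  for (int j = 0; j < n; ++j) {
    for (int i = 0; i < n; ++i) {
      double x = (i + 0.5) * dx, y = (j + 0.5) * dx;
      zb[j * n + i] = -x * std::tan(0.1 * pi / 180.0);   // 0.1 degree slope
      beta2[j * n + i] = 1000.0 + 1000.0 * std::sin(2.0 * pi * x / L)
                                        * std::sin(2.0 * pi * y / L);
    }
  }
}
```

Note that $$\beta^2$$ ranges over [0, 2000] Pa a m$$^{-1}$$, so the drag vanishes at the quarter points where the two sines are opposed.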

Also added a Richardson error estimator, that is, an error indicator $$ e(x,y) = |T_{2h} C(u_h) - C(T_{h} u_h)| $$ where $$T$$ is the viscous tensor operator, $$C(.)$$ is a coarse averaging operator, and the $$h$$ subscript means 'discretised with mesh spacing $$h$$'. This is applied at the end of the Picard iterations.
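A much-reduced 1D sketch of this indicator, with $$T$$ replaced by a centred second difference and $$C$$ by injection (commutatorIndicator is my name for it, not anything in the code):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// e_j = |T_2h C(u_h) - C(T_h u_h)| at coarse point j, where T is a centred
// second difference standing in for the viscous tensor operator and C is
// injection; u holds 2*m + 1 interior values with spacing h.
std::vector<double> commutatorIndicator(const std::vector<double>& u, double h)
{
  int n = static_cast<int>(u.size());
  int m = (n - 1) / 2;
  // T_h u on the fine grid (end values left at zero for brevity)
  std::vector<double> Tu(n, 0.0);
  for (int i = 1; i < n - 1; ++i)
    Tu[i] = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (h * h);
  // apply T_2h to the injected coarse values and compare with injected T_h u
  std::vector<double> e(m, 0.0);
  double H = 2.0 * h;
  for (int j = 1; j < m - 1; ++j) {
    double T2h = (u[2*(j-1)+1] - 2.0*u[2*j+1] + u[2*(j+1)+1]) / (H * H);
    e[j] = std::fabs(T2h - Tu[2 * j + 1]);
  }
  return e;
}
```

For smooth $$u$$ the two truncation errors differ at $$O(h^2)$$, so the indicator picks out regions where $$u$$ is badly resolved rather than where it is merely large.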

Friday 4th Dec
Can now solve the (linear) stress balance equations with arbitrary Dirichlet conditions, and advance the ice thickness given the solution, u. Set up a test problem where the initial ice thickness is

$$H = 1 + 12 x(1-x)y(1-y) \exp \left ( - \left [ (x-1/2)^2 + (y-1/2)^2 \right ] / s \right )$$

and at the boundaries, $$u_i = 0$$ and $$H = 1$$. As time passes, the ice bump spreads across the domain at much the same rate if I refine the majority of the mesh or just a region where $$ |H_{,i}| $$ is largest.

Image shows ice thickness along $$y = 1/2$$ at several times, black for adaptive mesh, purple for mostly-refined mesh



Must find a way of generating decent graphics - VisIt only exports rasters and some odd formats.

Friday 27th Nov
Started to write classes AMRLevelIceSheet, IceSheetPhysics, and IceSheetPhysicsIBC, which take advantage of Chombo's explicit treatment of advection and implicit treatment of diffusion.

Advection

I've got the ice thickness $$H$$ advecting around, using a first-order upwind scheme. With a given velocity field $$u = (1,1)^T$$, $$H = 1$$ on the boundaries and initial $$ H = 1 + x(1-x)y(1-y)\exp \left ( - \left [ (x-1/2)^2 + (y-1/2)^2 \right ] / s \right )$$, on a 32x32 mesh, not only is the central bump in the initial state not well resolved (below left), but numerical diffusion broadens it a good deal over time (below right)
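For reference, a minimal sketch of one first-order upwind step (upwindStep is a hypothetical helper; constant $$u = (1,1)^T$$ as in the test, but with periodic boundaries for brevity where the actual test uses $$H = 1$$ Dirichlet boundaries):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// One forward-Euler step of first-order upwind advection of H with constant
// velocity u = (1,1); Hfield is an n x n row-major array, periodic in both
// directions. Since both velocity components are positive, the upwind
// neighbour is always to the left / below.
void upwindStep(std::vector<double>& Hfield, int n, double dx, double dt)
{
  std::vector<double> Hold = Hfield;
  for (int j = 0; j < n; ++j) {
    for (int i = 0; i < n; ++i) {
      int im = (i + n - 1) % n, jm = (j + n - 1) % n;
      double dHdx = (Hold[j * n + i] - Hold[j * n + im]) / dx;  // upwind in x
      double dHdy = (Hold[j * n + i] - Hold[jm * n + i]) / dx;  // upwind in y
      Hfield[j * n + i] = Hold[j * n + i] - dt * (dHdx + dHdy);
    }
  }
}
```

Under the CFL restriction $$dt \le dx/2$$ the update is a convex combination of neighbouring values, which is what makes the scheme monotone (and also what makes it diffusive).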



Numerical diffusion due to the first-order upwind scheme decays like $$ |u_x| |u_y| h^2 |\nabla H| $$, so adaptive refinement can be used to reduce it, as in the images below.



Stress model

I also started to implement a stress model, currently linear, but otherwise similar to the usual Shelfy-Stream model (equation 2 in the Nov 13 notes). This is an elliptic equation, so it must be solved on all the patches at the same time. This is working for no-slip boundary conditions ($$u_i = 0 $$) only for now; other Dirichlet boundary conditions cause a failure. The graph below shows velocity vectors superimposed on the initial state.



The associated class, ViscousTensorOp, might be adapted to use external stress models in two ways: first, by calling external code to evaluate the flux, and second, by using whatever solvers already exist in a preconditioning step.

Friday 20 Nov
I'm still getting to grips with Chombo. I've written a very simple application, approximately solving a first order equation

$$\frac{\partial u}{\partial t} = \sin(10(x+y) - t)$$

with initial values

$$u(x,y,t=0) = \cos(10(x+y))$$

To implement this in Chombo, I wrote a C++ class, AMRLevelSinWave which is a subclass of Chombo's AMRLevel base class. Any subclass of AMRLevel can be passed to the main driver for time dependent calculations, and needs to implement a few methods: advance, initialGrid(g), regrid(g), initialTagCells(a), tagCells(a).

Of these, it is advance that would interface with Glimmer-CISM. In this app, it updates the solution at each level like so

 DataIterator dit(_uNew.disjointBoxLayout());
 for (dit.begin(); dit.ok(); ++dit)
 {
   const Box& b = _uNew.disjointBoxLayout()[dit];
   BoxIterator bit(b);
   for (bit.begin(); bit.ok(); ++bit)
   {
     const IntVect& iv = bit();
     Real x = (iv[0] + 0.5) * dx;
     Real y = (iv[1] + 0.5) * dx;
     _uNew[dit](iv,0) += dt * 0.5 * (sin(10.0 * (x+y) - time)
                                   + sin(10.0 * (x+y) - time - dt));
   }
 }

So, iterate through each of a level's set of rectangular regions, using the DataIterator dit. Then fish out _uNew[dit], which contains the data for just one rectangular box (conveniently including a block of data arranged as a FORTRAN multidimensional array would be), and update it. It is easy enough to imagine passing this block (plus some boundary data, which could be derived from the solution on a coarser level) to some fixed-mesh code to update (although something extra needs to be done to match both solution and fluxes at the boundaries).
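Incidentally, because the source term here is independent of $$u$$, the test problem has the closed form $$u(x,y,t) = \cos(10(x+y) - t)$$, which makes it easy to check the trapezoidal update at a single point (advancePoint is a hypothetical pointwise version of the loop body above):

```cpp
#include <cassert>
#include <cmath>

// One trapezoidal-rule step for du/dt = sin(10(x+y) - t) at a single point:
// the average of the source at times t and t + dt, matching the update in
// the AMRLevelSinWave advance() loop.
double advancePoint(double u, double x, double y, double time, double dt)
{
  return u + dt * 0.5 * (std::sin(10.0 * (x + y) - time)
                       + std::sin(10.0 * (x + y) - time - dt));
}
```

Marching this from the initial data $$\cos(10(x+y))$$ should track the exact solution to $$O(dt^2)$$, which gives a cheap sanity check on the scheme independent of the AMR machinery.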

By implementing the other methods, a series of refined meshes is generated as the solution is evolved; three timesteps' worth are shown below.







Friday 13th November
I've been investigating Chombo, attempting to decide how we might use it to build an ice sheet model with adaptive mesh refinement. Ideally, we would like to integrate it with the GLIMMER-CISM model, but at the moment I don't know enough about either to say how that might be done. I think that the best way to start would be to look at how a Chombo application needs to be written by trying to do just that: implementing a simple ice sheet model, essentially a 2D analogue of the model Rupert used in his 1D grounding line work.

The (nonlinear) PDEs are (1), a vertically integrated mass balance equation $$\frac{\partial H}{\partial t} + (Hu_j)_{,j} = a$$ and (2) a vertically integrated momentum balance equation $$\left [ \nu \left (u_{i,j} + u_{j,i} + 2 \delta _{ij} u_{k,k} \right ) \right]_{,j} = \beta ^2 u_i + \rho g H s_{,i}$$. $$H$$ is the ice thickness, and $$u$$ the vertically averaged velocity vector. $$ \nu = b \left [ (u_{i,j} + u_{j,i})(u_{j,i} + u_{i,j}) \right ]^\frac{n-1}{2n} $$ is a viscosity given by Glen's flow law for ice, $$\beta^2(x,y)$$ is a basal drag coefficient, and $$s(x,y)$$ is the surface elevation of the ice.

(1) is a first-order hyperbolic conservation equation and (2) is second-order elliptic. On top of this there will be second-order parabolic equations (heat equation), which can be broken into hyperbolic and elliptic parts, so I think that building a model that solves these should demonstrate that Chombo can do the basic job, and help me decide what kind of bridge between the two code bases might be needed. I expect that the most effort will be due to the elliptic operators, which need to be inverted on the whole mesh. Chombo provides a fairly involved multigrid procedure (AMRVCycleMG) which does the job, and only seems to need definitions of the operators on uniform grids, which looks promising. There is a C++ class, ViscousTensorOp, which can be used to solve a Stokes-like problem, $$A u_i + B \left [ C \left ( u_{i,j} + u_{j,i} \right ) + D \delta _{ij} u_{k,k} \right ] _{,j} = E $$. That is, a linear, constant-coefficient elliptic PDE that is, basically, (2) linearised. Chombo also has an example application (SelfGravity) where a system of hyperbolic equations is advanced in time alongside solutions to one elliptic equation (Poisson's equation). So, it seems that I have a template to work from.
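Matching the ViscousTensorOp form to (2) term by term (after moving the drag term $$\beta^2 u_i$$ to the left-hand side) gives

$$ A = -\beta ^2, \quad BC = \nu, \quad BD = 2 \nu, \quad E = \rho g H s_{,i} $$

so one workable choice is $$B = \nu$$, $$C = 1$$, $$D = 2$$, with $$\nu$$ frozen at some reference value to keep the coefficients constant.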

If anyone would like to see the sort of thing Chombo can do, let me know. There is a nice advection-diffusion (heat equation) example included, but it has a minor flaw which renders the results insensible: it uses $$|\phi _{,j} / \phi| > \epsilon$$ to decide where to refine the mesh. Since $$\phi$$ is close to zero across much of the domain, the mesh is refined everywhere. In my copy, I have changed the criterion to $$|\phi _{,j} / (1 + \phi)| > \epsilon$$, and the resulting adaptive refinement works nicely.
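A pointwise sketch of the corrected tagging criterion (tagCells is my name for it, using central differences on a single uniform grid and assuming $$\phi \ge 0$$ so the denominator stays positive):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Tag cells of an n x n row-major grid where the relative gradient
// |grad(phi)| / (1 + phi) exceeds eps -- the criterion substituted into the
// advection-diffusion example. Gradients use central differences, clamped
// at the domain edges.
std::vector<bool> tagCells(const std::vector<double>& phi, int n,
                           double dx, double eps)
{
  std::vector<bool> tags(n * n, false);
  for (int j = 0; j < n; ++j) {
    for (int i = 0; i < n; ++i) {
      int ip = std::min(i + 1, n - 1), im = std::max(i - 1, 0);
      int jp = std::min(j + 1, n - 1), jm = std::max(j - 1, 0);
      double gx = (phi[j * n + ip] - phi[j * n + im]) / (2.0 * dx);
      double gy = (phi[jp * n + i] - phi[jm * n + i]) / (2.0 * dx);
      double g = std::sqrt(gx * gx + gy * gy) / (1.0 + phi[j * n + i]);
      tags[j * n + i] = (g > eps);
    }
  }
  return tags;
}
```

With the original $$1/\phi$$ denominator, the near-zero background values would push the ratio above any sensible $$\epsilon$$ everywhere, which is exactly the refine-everything behaviour described above.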