BISMG:SarahS/weekly reports

Aims
Testing the ability of glimmer-cism to predict changes in ice thickness and velocity for Antarctica over the period 1980 to 2004.

Simulating changes in the Antarctic ice sheet for the next 200 years using data from regional climate models for the A1B and E1 SRES scenarios.

A sensitivity study of key model parameters in glimmer-cism will be carried out and I will also help support an ice model inter-comparison project with Tamsin Edwards.

This work forms part of the ice2sea project.

Week ending 5th Mar 2010
Spent some time with Gethin trying to figure out the reason for the seg fault in the code when replacing SLAP with PARDISO.

PARDISO seems to run ok when calculating tvels but crashes when it is called again to calculate uvels.

The crash occurs because pcgsize(2) is modified between the first call (to calculate tvels) and the second call (to calculate uvels), but the code assumes that the NNZ (i.e. pcgsize(2)) is the same for both calls.

pcgsize(2) is modified by the line

pcgsize(2) = ct - 1

So the NNZ changes between the first and second calls to the solver. To fix this I will have to ensure that the updated matrix arrays, sized to the new NNZ, are input into the second call to the solver.
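
A small Python sketch (illustrative only; assemble and solve are hypothetical stand-ins, not glimmer-cism routines) of why the arrays and the NNZ must be rebuilt together for each solver call:

```python
# Illustrative sketch of the NNZ bug: the solver's inputs must be rebuilt
# together each call, mimicking 'pcgsize(2) = ct - 1' changing between calls.

def assemble(entries):
    """Fill COO arrays from (row, col, value) tuples; return arrays and NNZ."""
    row = [r for r, c, v in entries]
    col = [c for r, c, v in entries]
    val = [v for r, c, v in entries]
    return row, col, val, len(val)   # len(val) plays the role of ct - 1

def solve(row, col, val, nnz):
    # Stand-in for the solver call: it only checks array/NNZ consistency,
    # which is exactly what the real solver trips over when they disagree.
    assert len(row) == len(col) == len(val) == nnz, "NNZ mismatch"
    return True

# First call (tvels): 5 nonzeros
r1, c1, v1, nnz1 = assemble([(0, 0, 2.0), (0, 1, 1.0), (1, 1, 3.0),
                             (2, 0, 1.0), (2, 2, 4.0)])
solve(r1, c1, v1, nnz1)

# Second call (uvels): the matrix is reassembled with a different NNZ
r2, c2, v2, nnz2 = assemble([(0, 0, 2.0), (1, 1, 3.0), (2, 2, 4.0)])
solve(r2, c2, v2, nnz2)      # fine: arrays and NNZ rebuilt together
# solve(r2, c2, v2, nnz1)    # would fail: stale NNZ from the first call
```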

Other jobs for next week are
 * To check why Gethin's subroutine to convert a coordinate list format matrix to Yale format outputs different values for the compressed row indices from the subroutine in glimmer-cism.
 * Print out the sparse matrix and look at the actual values
 * Test if my new function works correctly by using the pardiso example (see documentation).
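
For the first job, a minimal reference COO-to-Yale (CSR) conversion may help cross-check the row pointers from the two Fortran subroutines. This is an illustrative Python sketch, not either subroutine; 1-based indices are assumed, since PARDISO's Fortran interface expects them, and a 0- vs 1-based mix-up would be one plausible cause of the discrepancy:

```python
# Minimal COO -> CSR (Yale) conversion with 1-based, Fortran-style indices.
# Compare ia (row pointers) against both Fortran routines' output.

def coo_to_csr(n, rows, cols, vals):
    """rows/cols/vals in COO form (1-based); returns (ia, ja, a)."""
    # Sort entries by row, then column, so each row's entries are contiguous
    order = sorted(range(len(vals)), key=lambda k: (rows[k], cols[k]))
    ja = [cols[k] for k in order]        # column indices
    a  = [vals[k] for k in order]        # values
    counts = [0] * (n + 1)               # entries per row (index 0 unused)
    for r in rows:
        counts[r] += 1
    ia = [1] * (n + 1)                   # row pointers: ia[0] = 1
    for i in range(1, n + 1):
        ia[i] = ia[i - 1] + counts[i]    # cumulative sum; ia[n] = nnz + 1
    return ia, ja, a

# Example: 3x3 matrix [[2,1,0],[0,3,0],[1,0,4]]
coo_to_csr(3, [1, 1, 2, 3, 3], [1, 2, 2, 1, 3], [2.0, 1.0, 3.0, 1.0, 4.0])
```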

Week ending 26th Feb 2010

 * Worked on replacing the SLAP solver with PARDISO in the stand-alone glam code. The aim of this is to parallelise the solver part of glam to reduce the simulation time.


 * I have run the test provided in the PARDISO documentation (pardiso_sym.f page 32) which works ok.
 * Gethin wrote a subroutine to convert a sparse matrix from coordinate list format to Yale format and vice versa. PARDISO requires Yale format. I have put this subroutine into glam.


 * The call to initialise pardiso works ok, but I get a seg fault when I call pardiso itself. It is likely I have input incorrect values for 'ia', 'ja' and 'a' into the subroutine. I will check the reason for the seg fault in more detail next week.
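
Before the next debugging session, a quick validator for the CSR arrays might narrow things down; each failed check below is a plausible cause of a seg fault inside the solver. This is an illustrative sketch (1-based, Fortran-style indexing assumed), not PARDISO's own error checking:

```python
# Sanity checks on CSR ('ia', 'ja', 'a') arrays before handing them to a
# direct solver; any violation can make the solver read out of bounds.

def check_csr(n, ia, ja, a):
    assert len(ia) == n + 1,              "ia must have n+1 entries"
    assert ia[0] == 1,                    "first row pointer must be 1 (1-based)"
    assert ia[-1] == len(a) + 1,          "last row pointer must be nnz+1"
    assert all(ia[i] <= ia[i + 1] for i in range(n)), "row pointers must not decrease"
    assert len(ja) == len(a),             "ja and a must both have nnz entries"
    assert all(1 <= j <= n for j in ja),  "column index out of range"
    return True

# Example: 3x3 matrix [[2,1,0],[0,3,0],[1,0,4]] in CSR form
check_csr(3, [1, 3, 4, 6], [1, 2, 2, 1, 3], [2.0, 1.0, 3.0, 1.0, 4.0])
```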

Wondering whether I should pursue the pardiso route given the speed-ups that can be achieved with Trilinos. However, if I can get the seg fault sorted out I'll be nearly there, and it might not then be too difficult to replace SLAP with PETSc.

Week ending 19th Feb 2010
CCSM land ice working group meeting in Boulder.

Week ending 5th Feb 2010
Started preliminary profiling of glimmer-cism. Ran a short (30 min) simulation using input from an ISMIP-HOM experiment. The run spends ~50% of its time in routines from the SLAP library.

Next week, I am going to extract the Pine Island region from Anne Le Brocq's 5km Antarctic data set to carry out more profiling of glimmer. I'll also be setting up glam to run using PARDISO & MUMPS and will profile the performance of these solvers.

Week ending 29th Jan 2010
Changes made to glint to read in ocean fields: Follow a similar template to the way air temp & precip are read in. OCHAM data provided by Rupert.


 * Add ocean salinity & ocean temp files to the config file.
 * In glint_example.F90 add variables ocean_temp & ocean_sal, allocate memory to these variables.
 * Get glint_example_climate.F90 to read in the extra files. Create a new variable called o_grid which is part of type (global_grid).

Next I need to interpolate the ocean data from the ocean grid to the glimmer grid using the existing interpolation subroutines. To do this I think I need to modify:

 * call glint (example_glide.F90)
 * call glint_i_tstep (glint_main.F90)
 * call glint_downscaling (glint_timestep.F90)
 * call interp_to_local for the ocean (note: interp_to_local is located in glint_climate.F90)
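
For reference, a self-contained bilinear interpolation sketch of the kind of grid-to-grid mapping interp_to_local presumably performs. This is not the glimmer implementation; the regular-grid layout and argument names are assumptions:

```python
# Bilinear interpolation of a field on a regular source grid (e.g. the ocean
# grid) to an arbitrary target point (e.g. a glimmer grid point).

def bilinear(x0, y0, dx, dy, field, x, y):
    """field[j][i] on a regular grid with origin (x0, y0), spacing (dx, dy)."""
    fx = (x - x0) / dx
    fy = (y - y0) / dy
    i, j = int(fx), int(fy)        # lower-left corner of the enclosing cell
    tx, ty = fx - i, fy - j        # fractional position inside the cell
    return ((1 - tx) * (1 - ty) * field[j][i]
            + tx * (1 - ty) * field[j][i + 1]
            + (1 - tx) * ty * field[j + 1][i]
            + tx * ty * field[j + 1][i + 1])

# Ocean temperature on a small grid; the cell midpoint averages its 4 corners
sst = [[0.0, 1.0], [2.0, 3.0]]
bilinear(0.0, 0.0, 1.0, 1.0, sst, 0.5, 0.5)
```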

Week ending 22nd Jan 2010
 * Read a report from the Los Alamos 2008 CISM meeting.

The report gives a background on different types of ice sheet models (i.e. SIA, SSA & full Stokes) and an outline of the processes that are intended to be added to glimmer-cism.

In the report the code is said to be put on the svn repository at http://ocean11.lanl.gov/trac/CISM. Has it been decided to use BerliOS instead, or are there two parallel repositories? Should I be using either of these or the source svn repository at Bristol?

 * Fixed the error that Anne Le B. found in her calculation of the lower ice surface.

The incorrect equation was: $$l=-\frac{\rho_i}{\rho_w}H+f$$

The correct equation is: $$l=-\frac{\rho_i}{\rho_w}(H-f)$$
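
A quick numerical check of the two forms (rhoi = 918 kg m-3 as used elsewhere in these notes; rhoo = 1028 kg m-3 is an assumed ocean density):

```python
# Compare the incorrect and corrected lower-surface equations.
# Densities are assumptions for illustration: rhoi = 918, rhoo = 1028 kg m-3.

RHOI, RHOO = 918.0, 1028.0

def lsrf_incorrect(H, f):
    """l = -(rhoi/rhoo) * H + f  (the erroneous form)."""
    return -(RHOI / RHOO) * H + f

def lsrf_correct(H, f):
    """l = -(rhoi/rhoo) * (H - f)  (the corrected form)."""
    return -(RHOI / RHOO) * (H - f)

# With no firn correction (f = 0) the two forms agree; with f > 0 they differ
lsrf_correct(1000.0, 20.0)     # corrected lower surface for H=1000 m, f=20 m
lsrf_incorrect(1000.0, 20.0)   # erroneous value for comparison
```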

Week ending 15th Jan 2010
Needed Python libraries to run the ISMIP-HOM tests. Gethin installed these on dartagnan:
 * pycdf
 * scipy
 * scipy.ndimage

Began to run the ISMIP-HOM tests. There was a bug in each .config file supplied with the tests.

The bug was caused by the line

which_ho_sparse = 3

This option uses the PARDISO solver, which I have not installed yet, so a sparse matrix error occurs.

In glide_types.F90 there are only options 0, 1 or 2 for this parameter. Changing it to which_ho_sparse = 0 or 1 makes it work ok.

Wrote an abstract on dust modeling and an abstract on glimmer-cism for EGU.

I've been in touch with Steve Price about installing Pardiso. Apparently, Pardiso does not come with source code so I must install precompiled libraries on dartagnan.

Week ending 8th Jan 2010
Compared the run time for glimmer-cism at 5km and 20km resolution.

2 years at 5km = 15m41.000s

2 years at 20km = 0m25.792s

I am going to run on 20km for testing.
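
A back-of-envelope check on these timings: going from 20km to 5km gives 16x the grid points, but the observed slowdown is roughly 36x, consistent with the solver cost growing faster than linearly with problem size (assuming the same domain extent for both runs):

```python
# Compare the observed 5 km vs 20 km slowdown with the grid-point ratio.

t5  = 15 * 60 + 41.0     # 15m41s at 5 km, in seconds
t20 = 25.792             # 0m25.792s at 20 km, in seconds

grid_factor = (20 / 5) ** 2   # 4x finer in each dimension -> 16x grid points
observed    = t5 / t20        # observed slowdown, ~36.5x
```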

Read Characteristics of the Antarctic SMB, 1958-2002 using a RCM (RACMO2), Van de Berg et al. (2005).

Currently running a 20km simulation and outputting total ice volume to get an idea of how long it takes to spin up the model.

Week ending 18th Dec 2009
The model now runs on a 5km resolution. The stack limit on the machine needed to be changed to unlimited (ulimit -s unlimited).

Modified the code to read in geothermal heat flux instead of calculating it. All of Anne's data is read in glide_io.F90.

Read some papers:

Rignot and Thomas 2002- Mass balance of polar ice sheets

Alley et al. 2005 - Ice sheet and sea level changes

Davis et al. 2005 - Snowfall-driven growth in East Antarctic ice sheet mitigates recent sea level rise

Finished first draft of dust model description paper.

Week ending 13th Dec 2009
Made a list of the subroutines contained in the glide files and what these subroutines do.

Anne sent me the latest version of her dataset with a paper describing the data. Read the paper F. Pattyn (2003), A new three-dimensional higher-order thermomechanical ice sheet model.

I still have not been able to resolve why I get insufficient virtual memory when I run the model at 5km resolution. Running on bluecrystal gives the same error. The debugger tells me the error occurs in subroutine glide_temp_timeevolvetemp. This subroutine is not present in the code so it must be generated at run time; the _mp suggests some kind of parallelisation. I'll see if I can increase the ulimit next (as suggested when I Google the error message) and see if using 4 processors on blue crystal helps.

Continued to write up a paper.

Week ending 4th Dec 2009
Started a new wiki section for glimmer-cism notes.

Made modifications to glimmer-cism Makefile to remove python.

Some of the week has been spent writing a paper on dust modeling and tuning.

Week ending 27th Nov 2009
Installed an updated version of glimmer-cism.

Modified the code to read in fixed temperature and accumulation data from Anne's dataset.

I’ve been comparing the re-factored temperature solver to the original code to get a handle on what changes have been implemented.

Week ending 20th November 2009
I made changes to the flotation equation in glimmer-cism to account for firn snow. These are the changes:

1.	In glide_io.F90 : read in firn snow from netCDF file

2.	In glide_vars.def : add extra variable data%geometry%firn to the list

3.	Change density of ice to the value used in Anne's data (original value 910 kg m-3, new rhoi = 918 kg m-3)

4.	In glide_setup.F90 insert new subroutine called glide_calclsrf_firn(thck,topg,eus,firn,lsrf)

subroutine glide_calclsrf_firn(thck,topg,eus,firn,lsrf)

  !*FD Calculates the elevation of the lower surface of the ice,
  !*FD by considering whether it is floating or not; modified to
  !*FD include firn.

  use glimmer_global, only : dp
  use glimmer_physcon, only : rhoi, rhoo
  implicit none

  real(dp), intent(in),  dimension(:,:) :: thck !*FD Ice thickness
  real(dp), intent(in),  dimension(:,:) :: topg !*FD Bedrock topography elevation
  real,     intent(in)                  :: eus  !*FD Global sea level
  real(dp), intent(in),  dimension(:,:) :: firn !*FD Firn depth
  real(dp), intent(out), dimension(:,:) :: lsrf !*FD Lower ice surface elevation

  real(dp), parameter :: con = - rhoi / rhoo

  where (topg - eus < con * thck + firn)
     lsrf = con * thck + firn
  elsewhere
     lsrf = topg
  end where

end subroutine glide_calclsrf_firn

5.	In glide_mask.F90 modify subroutine glide_set_mask

! Identify points where the ice is floating or where there is open ocean, including firn

where (topg - eus < con * thck + firn)
   mask = ior(mask, GLIDE_MASK_OCEAN)
elsewhere
   mask = ior(mask, GLIDE_MASK_LAND)
end where

Tested pixels to check whether the calculated usrf and lsrf equal the data provided by Anne.

Looks ok!
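
A pixel-style check of the where-condition in Python (densities are assumptions here: rhoi = 918, rhoo = 1028 kg m-3; the logic mirrors the Fortran where block, it is not glimmer code):

```python
# Single-pixel version of the flotation test in glide_calclsrf_firn:
# a point floats when topg - eus < con*thck + firn.

RHOI, RHOO = 918.0, 1028.0       # assumed densities, kg m-3
CON = -RHOI / RHOO

def lower_surface(thck, topg, eus, firn):
    """Return lsrf for one pixel: buoyancy-set if floating, bed if grounded."""
    floating = (topg - eus) < CON * thck + firn
    return (CON * thck + firn) if floating else topg

# Deep bed, moderate ice: floats, so lsrf is set by buoyancy (+ firn term)
lower_surface(thck=500.0, topg=-1000.0, eus=0.0, firn=0.0)
# Shallow bed: grounded, so lsrf follows the bed
lower_surface(thck=500.0, topg=-100.0, eus=0.0, firn=0.0)
```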

Week ending 13th November 2009
Ran glimmer-cism using the glint driver, forcing with one year of NCEP data repeated for 10 years. Initial conditions: 1) ice surface elevation provided by Anne Le Brocq (5km resolution) and 2) bed topography from BEDMAP1_plus (5km resolution).

I had to run at 20km resolution because I encountered a run time error about insufficient virtual memory when I ran at 5km. Perhaps 5km is too fine a resolution to run the model, or I need to look for a bug.

Prepared ERA-40 monthly 2m temperature and total precipitation data for input into glimmer-cism for the years 1958-2002. Changed the driver to read in the ERA-40 data.