Welcome to the Bristol Nemo Page!
Here are a few tips and comments based on Bristol's experiences compiling and running NEMO. Hopefully, you will find some of them useful. Note that we are using NEMOv3.
Please take a look at the official NEMO webpage (http://www.lodyc.jussieu.fr/NEMO/) before continuing with this page.
Model Compilation
In the steps below, I'm assuming that you are interested in the ORCA2_LIM version of the model.
If you follow the steps described on the official NEMO webpage, you will be able to download and compile the model, with one caveat. When using newer versions of the NetCDF libraries, such as v3.6.2 installed on the Quest cluster, you will need to change line 212 of modipsl/util/AA_make.gdef to "NCDF_INC = /usr/local/netcdf/3.6.2/pgi-251/include" and line 213 to "NCDF_LIB = -L/usr/local/netcdf/3.6.2/pgi-251/lib -lnetcdf -lnetcdff".
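For reference, after the edit those two lines of AA_make.gdef should read as below. The paths are for the NetCDF v3.6.2 install on Quest; substitute the include and lib directories of your own NetCDF build.

<pre>
NCDF_INC = /usr/local/netcdf/3.6.2/pgi-251/include
NCDF_LIB = -L/usr/local/netcdf/3.6.2/pgi-251/lib -lnetcdf -lnetcdff
</pre>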
Parallel Version
To compile the model for parallel runs, we will need to:
- Edit modipsl/modeles/NEMO/OPA_SRC/par_oce.F90, and change:
<pre>
#if ! defined key_mpp_dyndist
   INTEGER, PUBLIC, PARAMETER ::   &  !:
      jpni  = 4,                   &  !: number of processors following i
      jpnj  = 4,                   &  !: number of processors following j
      jpnij = 16                      !: nb of local domain = nb of processors
      !                               !  ( <= jpni x jpnj )
</pre>
to reflect the domain decomposition that you would like. Note that the decomposition must match the resource request you make when submitting the job to the queueing manager (see the run-script edits further down this page).
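For example, the 4 x 4 decomposition above gives jpnij = 16, which matches the nodes=4:ppn=4 request in the parallel run-script shown later. If you wanted 32 processes you could instead set something like the following (illustrative values only) and request nodes=8:ppn=4:

<pre>
      jpni  = 8,                   &  !: number of processors following i
      jpnj  = 4,                   &  !: number of processors following j
      jpnij = 32                      !: nb of local domain = nb of processors
</pre>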
- Edit modipsl/util/AA_make.gdef so that we use mpif90 as the linker, e.g.:
<pre>
#-Q- linux F_L = mpif90
</pre>
- Copy the file mpif.h from, e.g., /usr/local/Cluster-Apps/ofed/1.2.5/mpi/intel/mvapich-0.9.9/include (if you are using the Intel compiler on Quest) into modipsl/lib. If running on Quest, also ensure that you have the correct module loaded, e.g. module add default-infiniband-intel-251.
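As a concrete sketch of that step on Quest (the MVAPICH path is the one quoted above; it assumes your modipsl tree lives in your home directory):

<pre>
module add default-infiniband-intel-251
cp /usr/local/Cluster-Apps/ofed/1.2.5/mpi/intel/mvapich-0.9.9/include/mpif.h ~/modipsl/lib/
</pre>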
- Edit modipsl/config/ORCA2_LIM/scripts/BB_make.ldef and add in key_mpp_mpi, e.g.:
<pre>
P_P = key_trabbl_dif key_vectopt_loop key_vectopt_memory key_orca_r2 key_ice_lim key_lim_fdd key_dynspg_flt key_diaeiv key_ldfslp key_traldf_c2d key_traldf_eiv key_dynldf_c3d key_dtatem key_dtasal key_tau_monthly key_flx_bulk_monthly key_tradmp key_trabbc key_zdftke key_zdfddm key_mpp_mpi
</pre>
- Re-run modipsl/util/ins_make -t <section> (e.g. <section> = linux) to regenerate the makefiles and recompile.
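For example, using the "linux" section from above (the recompile itself is then done in the usual way for your configuration):

<pre>
cd modipsl/util
./ins_make -t linux
</pre>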
- Note that on Quest, the AMD chipset means that we must not request more than 2 GB of memory per process. If you are compiling a high-resolution version of the model and you encounter a relocation truncated to fit error, you must widen your domain decomposition so that each process holds a smaller sub-domain.
Running the Model
Nemo provides a test case, ORCA2_LIM, for a global ocean run coupled with the sea-ice model. The run simulates one year. The configuration for ORCA2_LIM is downloaded through CVS when getting Nemo, and the necessary forcing files are provided through a link on the Nemo website or can be copied from ~ggdagw/NEMO_forcing as detailed below.
The instructions for running the model are not so clear. You will need two things to run the model: (i) some forcing files; and (ii) a run script. Assuming you are running on the Quest cluster, you can obtain forcing files for an example job by copying the directory ~ggdagw/NEMO_forcing and its contents to your home directory. If you adopt the same directory name, you will be able to use this runscript (see the Nemo_example_runscript page). Place the runscript into the directory modipsl/config/ORCA2_LIM/EXP00. I named it Job_EXP.new. You will need to create the directory "$DUMP2HOLD/NEMO" in your home directory to collect the output. You can submit the job by typing "qsub Job_EXP.new", check whether it is running using "showq", and obviously look in the output directory.
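The sequence described above looks roughly like this on Quest. This is only a sketch: it assumes your modipsl tree is in your home directory and that $DUMP2HOLD is already set in your environment.

<pre>
cp -r ~ggdagw/NEMO_forcing ~/               # forcing files for the example job
mkdir -p $DUMP2HOLD/NEMO                    # output collection directory
cd ~/modipsl/config/ORCA2_LIM/EXP00
qsub Job_EXP.new                            # submit the run
showq                                       # check whether it is running
</pre>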
Parallel Version
The section nam_mpp must be edited in the file namelist and the MPI send/receive type switched from the default 'S' to 'I':
<pre>
   c_mpi_send =  'I'     ! mpi send/recieve type ='S', 'B', or 'I' for standard send,
                         ! buffer blocking send or immediate non-blocking sends, resp.
   nn_buffer  =   0      ! size in bytes of exported buffer ('B' case), 0 no exportation
</pre>
All of the above regarding running the model holds true for a parallel run, except that you must edit your run-script (Job_EXP.new) in several places, e.g. the PBS header and the number of processes. Note that nodes x ppn in the PBS request, NB_PROC, and jpnij in par_oce.F90 should all agree (here 4 x 4 = 16):
<pre>
#PBS -N EXP.1
#PBS -o output_EXP.1
#PBS -j oe
#PBS -S /usr/bin/ksh
#PBS -q quest-medium
#PBS -l nodes=4:ppn=4
</pre>
<pre>
NB_PROC=16
</pre>
In order for each processor to access the starting coordinate and geothermal heating files, create as many links as the number of processors you are using, e.g. for four processors:
<pre>
# for parallel
ln -s coordinates.nc coordinates_000.nc
ln -s coordinates.nc coordinates_001.nc
ln -s coordinates.nc coordinates_002.nc
ln -s coordinates.nc coordinates_003.nc

ln -s geothermal_heating.nc geothermal_heating_000.nc
ln -s geothermal_heating.nc geothermal_heating_001.nc
ln -s geothermal_heating.nc geothermal_heating_002.nc
ln -s geothermal_heating.nc geothermal_heating_003.nc
</pre>
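For larger process counts a small loop saves typing. A sketch, assuming a POSIX shell and the zero-padded _NNN suffix shown above:

<pre>
NB_PROC=16
n=0
while [ $n -lt $NB_PROC ]; do
  nnn=$(printf "%03d" $n)
  ln -sf coordinates.nc         coordinates_${nnn}.nc
  ln -sf geothermal_heating.nc  geothermal_heating_${nnn}.nc
  n=$((n + 1))
done
</pre>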
<pre>
#- To be use for a mpp run
echo $HOSTNAME
pwd
MPIRUN=~/mpirun-intel
cat $PBS_NODEFILE > machine.file.$PBS_JOBID
# call mpirun
$MPIRUN -ssh -np ${NB_PROC} -hostfile machine.file.$PBS_JOBID ./opa.xx
</pre>
- Note that we have created the symbolic link ~/mpirun-intel to point to /usr/local/Cluster-Apps/ofed/1.2.5/mpi/intel/mvapich-0.9.9/bin/mpirun_rsh so that we can access the right version of mpirun_rsh (in this case the Intel one), despite environment variable problems with the modular environment on Quest.
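The link itself can be created with the path quoted above:

<pre>
ln -s /usr/local/Cluster-Apps/ofed/1.2.5/mpi/intel/mvapich-0.9.9/bin/mpirun_rsh ~/mpirun-intel
</pre>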
Projects using Nemo at Bristol
Mediterranean Sea
Sediment cores recovered from the Mediterranean Sea reveal distinct layers of organic-rich material thought to be associated with deep-water anoxia, caused by a shutdown in deep-water circulation, an increase in productivity, or both. One of the ways this could have happened is through a large increase in freshwater runoff into the basin. We intend to use a Mediterranean configuration of NEMO to study how point sources of freshwater, represented by low delta-18-O, are circulated around the Mediterranean, and to compare the modelled distribution with records of delta-18-O during these "Mediterranean Anoxic Events".
Amundsen Sea
Thinning of the Pine Island Glacier, West Antarctica, was observed during the 1990s. It has been suggested that relatively warm water at the base of the ice shelf triggered the thinning. The source of this warm water, and the mechanism driving it, are unknown.
We are constructing a regional model of the Amundsen Sea using Nemo to investigate ocean circulation close to Pine Island Bay. Input data at the sea surface and at the open boundaries are constructed using output data from the OCCAM global ocean model.
Regional model
How to set up a regional model starting from the ORCA2_LIM test case:
1. Create a new configuration for the regional model.
   - In the directory modipsl/config/ create a new directory, AMUNDSEN for example, and copy the contents of the ORCA2_LIM directory into AMUNDSEN (the shell commands for this step are sketched just after this list).
   - Edit the file fait_config in modipsl/modeles/UTIL/. Add "\n AMUNDSEN" to "LIST =" and add the line "set -A DIR_AMUNDSEN OPA_SRC C1D_SRC NST_SRC".
2. Change the CPP keys by editing BB_make.ldef in modipsl/config/AMUNDSEN/scripts/. Remove key_orca_r2, since you are not running a global model, and add key_obc to invoke open boundary conditions instead (see the Open boundary conditions section below).
3. TIP: modify tradmp.F90 in modipsl/modeles/NEMO/OPA_SRC/TRA/. Replace the line "if (cp_cfg=="orca" .AND. (ndmp > 0 .OR. ndmp==-1)) then" with "if (ndmp > 0 .OR. ndmp==-1) then". If you don't make this change you'll get the error message "tra_dmp: You should not have seen this print error?". With the change you can apply tracer damping as in the ORCA configurations (i.e. the global models at various resolutions) without the grid-dependent special treatment of particular areas such as the Mediterranean or the Red Sea.
4. Set the new domain resolution in par_oce.F90 in modipsl/modeles/NEMO/OPA_SRC/. The parameters for the resolution are jpidta and jpjdta.
5. Set parameters in obc_oce.F90 and obc_par.F90 in modipsl/modeles/NEMO/OPA_SRC/OBC/.
   - obc_oce.F90: nbobc is the number of open boundaries, i.e. set it to 1, 2, 3 or 4.
   - obc_par.F90: set the logical parameter lp_obc_east to .true. if open boundary conditions are to be applied to part or all of the eastern face of the domain. Alter jpjed and/or jpjef if the open boundary covers only part of the eastern face. Similarly for lp_obc_west (jpjwd, jpjwf), lp_obc_north (jpind, jpinf) and lp_obc_south (jpisd, jpisf).
6. In the input file namelist, set n_cla to 0; this assumes you have no closed seas (i.e. the Mediterranean and Red Sea in ORCA2).
7. Generate the input data from the ORCA2_LIM test case files using the AGRIF package (described below).
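A sketch of step 1 as shell commands (assuming your modipsl tree is in your home directory):

<pre>
cd ~/modipsl/config
mkdir AMUNDSEN
cp -r ORCA2_LIM/* AMUNDSEN/
</pre>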
Bathymetry
There are some pre-processing packages available on the NEMO website. I don't recommend using the bathymetry package OPABAT: it uses IDL and Fortran routines contained in IDL_OPABAT3.tar, the instructions are in French, and some of the files are missing. Instead, use the AGRIF nesting tools. The instructions are clear and the package also interpolates the input files from the ORCA2_LIM test case.
AGRIF package
AGRIF is designed to create fine regional grids (child grids) from a coarse NEMO global grid, in a form that NEMO can read. The idea is to run the fine grid with the global grid to provide locally increased resolution where the model needs it. It is possible to generate the child bathymetry by interpolating from the global bathymetry and from high-resolution topography. It blends the coarse and fine resolutions over a few cells at the edges of the child grid. It also creates input files for the child grid from the global input files.
You can download the source code (Nesting_tools_NEMO.tar) and the user's manual (doc_nesting_tools.pdf) from the NEMO website. The manual tells you how to untar and build the executables, how to run them, and gives some theory about the interpolation schemes. The executables are:

- create_coordinates.exe generates the longitude, latitude and metrics for the child grid,
- create_bathy.exe generates the bathymetry,
- create_data.exe generates the necessary input files (nav_lon and nav_lat in the child files aren't correct, but NEMO doesn't use them), and
- create_restart.exe interpolates the global restart file to the child grid.
These executables use one namelist. AGRIF provides an example namelist, pacifique_tropical, in Nesting_tools/bin. To run the example, copy coordinates.nc, bathy_level.nc, bathy_meter.nc, taux_1m.nc, tauy_1m.nc, data_1m_potential_temperature_nomask.nc, flx.nc, runoff_1m_nomask.nc and geothermal_heating.nc from the ORCA2_LIM test case to Nesting_tools/bin. You will also need to acquire bathymetry_meter_ORCA_R05.nc, which I found by typing the filename into Google. Alternatively, try setting new_topo = false. Run the executables in Nesting_tools/bin in order using
./create_*.exe pacifique_tropical
where * is coordinates, bathy and then data (you don't need to do restart). Output has the form 1_globalname.nc, where 1_ signifies the child grid and globalname is the name of the file copied from the test case, such as bathy_level, tauy_1m, etc.
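Spelled out, the sequence for the example is (run from Nesting_tools/bin):

<pre>
./create_coordinates.exe pacifique_tropical
./create_bathy.exe       pacifique_tropical
./create_data.exe        pacifique_tropical
# create_restart.exe is only needed if you want a restart file interpolated to the child grid
</pre>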
For a new regional model, copy pacifique_tropical and rename it as, say, namelist_Amundsen. Set imin, imax, jmin and jmax to the indices on the global grid corresponding to your region, where (imin, jmin) is the south-west corner and (imax, jmax) is the north-east corner of your region. These indices must lie inside the global grid. Set rho, the grid refinement ratio, to a value between 2 and 5 (the AGRIF recommendation). I suggest setting type_bathy_interp = 2 for bilinear interpolation of the bathymetry, as it seems more robust than the other options. Add 'runoff_1m_nomask.nc' to the list of forcing files.
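A sketch of the relevant namelist entries, using the parameter names described above; the values are illustrative only and the surrounding structure should follow your copy of pacifique_tropical:

<pre>
imin = 10               ! i-index on the global grid of the south-west corner
imax = 60               ! i-index of the north-east corner
jmin = 20               ! j-index of the south-west corner
jmax = 70               ! j-index of the north-east corner
rho  = 3                ! grid refinement ratio (AGRIF recommends 2 to 5)
type_bathy_interp = 2   ! bilinear interpolation of the bathymetry
</pre>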
High resolution bathymetry
Open boundary conditions
Wiki example
This text is very important. This is less so.
Listen very carefully, I shall say this only once:
- put kettle on
- find mug
- add water to tea bag--in mug!
- slurp
- read
- slurp
can be done in any order
An external link is a search engine
A link to another wiki page is the genie project