A testing framework for the CABLE land surface model
Open the slide deck at: slides.com/seanbryan/benchcab
1. Getting access on NCI
module use /g/data/hh5/public/modules
module load conda/analysis3-unstable
2. Clone the example work directory
mkdir -p /scratch/nf33/$USER
cd /scratch/nf33/$USER
git clone git@github.com:CABLE-LSM/bench_example.git
cd bench_example
3. Edit the configuration file
vi config.yaml
4. Run the tests
benchcab run
1. Connect to NCI
ssh -Y <userID>@gadi.nci.org.au
2. Getting access to benchcab on NCI
module use /g/data/hh5/public/modules
module load conda/analysis3-unstable
3. Clone the example work directory
mkdir -p /scratch/nf33/$USER
cd /scratch/nf33/$USER
git clone git@github.com:CABLE-LSM/bench_example.git
# git clone https://github.com/CABLE-LSM/bench_example.git
cd bench_example
1. Edit config.yaml to the following:
project: nf33
experiment: AU-Tum
realisations: [
  {
    path: "trunk",
  },
  {
    path: "branches/Users/ccc561/demo-branch",
  }
]
modules: [
  intel-compiler/2021.1.1,
  netcdf/4.7.4,
  openmpi/4.1.0
]
2. Run benchcab with the verbose flag enabled
benchcab run --verbose
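benchcab run submits the tasks as a PBS job (see the sample output later in the deck). One way to check on the job from the login node, using standard PBS commands rather than anything benchcab-specific:
# list your queued and running PBS jobs
qstat -u $USER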
Manual step: edit config.yaml and the namelist files under namelists/.
$ tree bench_example/
bench_example/
├── config.yaml
├── LICENSE
├── namelists
│ ├── cable.nml
│ ├── cable_soilparm.nml
│ └── pft_params.nml
└── README.md
.
├── benchmark_cable_qsub.sh
├── benchmark_cable_qsub.sh.o<jobid>
├── rev_number-1.log
├── runs
│ └── site
│ ├── logs
│ │ ├── <task>_log.txt
│ │ └── ...
│ ├── outputs
│ │ ├── <task>_out.nc
│ │ └── ...
│ ├── analysis
│ │ └── bitwise-comparisons
│ └── tasks
│ ├── <task>
│ │ ├── cable (executable)
│ │ ├── cable.nml
│ │ ├── cable_soilparm.nml
│ │ └── pft_params.nml
│ └── ...
└── src
├── CABLE-AUX
├── <realisation-0>
└── <realisation-1>
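Once the PBS job has finished, a quick way to inspect one of the NetCDF outputs from the tree above (a sketch: ncdump comes with the netcdf module loaded earlier, and <task> stands for any task name under runs/site/outputs):
# print the header (dimensions, variables, attributes) of a task's output
ncdump -h runs/site/outputs/<task>_out.nc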
Full workflow demo:
/scratch/nf33/ccc561/standard_evaluation/runs/site/outputs
benchcab can support new namelist parameters introduced by a code change through the patch option.
patch specifies any branch-specific namelist parameters, which are then applied to the namelist files for tasks that run the corresponding branch:
patch: {
  cable: {
    cable_user: {
      MY_NEW_FEATURE: True
    }
  }
}
Use a different potential evaporation scheme for one branch only:
realisations: [
  {
    path: "trunk",
  },
  {
    path: "branches/Users/sb8430/test-branch",
    patch: {
      cable: {
        cable_user: {
          SSNOW_POTEV: "P-M"
        }
      }
    }
  }
]
science_configurations: [
  { # S0 configuration
    cable: {
      cable_user: {
        GS_SWITCH: "medlyn",
        FWSOIL_SWITCH: "Haverd2013"
      }
    }
  },
  { # S1 configuration
    cable: {
      cable_user: {
        GS_SWITCH: "leuning",
        FWSOIL_SWITCH: "Haverd2013"
      }
    }
  }
]
Users can specify their own science configurations in config.yaml
See the documentation for potential gotchas.
🚧 Test suites for:
◦ Global/regional simulations (offline CABLE)
◦ Global/regional simulations (online CABLE)
◦ CABLE-CASA-CNP
🚧 A standard set of science configurations.
🚧 Fortran code coverage analysis.
🚧 Automated model evaluation step.
🚧 Tests for different compilers and compiler flags.
🚧 Updates to analysis plots for flux site tests.
🚧 Model evaluation with ILAMB.
GitHub issues: github.com/CABLE-LSM/benchcab/issues
ACCESS-Hive forum: forum.access-hive.org.au
Pinning realisations to specific revisions:
realisations: [
  {
    path: "trunk",
    name: "trunk_head"
  },
  {
    path: "trunk",
    name: "trunk_r9468",
    revision: 9468
  }
]
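To confirm which revisions were actually checked out, benchcab writes revision information to rev_number-1.log at the top of the work directory (see the run output below), e.g.:
# show the revision each realisation was checked out at
cat rev_number-1.log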
Evaluation: We want CABLE developers to easily evaluate the impact of their code changes on CABLE.
Standardisation: We want a standardised evaluation of CABLE to allow for comparison.
Automation: CABLE is highly configurable; running tests manually for every possible configuration is time-consuming.
What benchcab does (hopefully):
benchcab provides a fast, standardised way for developers of CABLE to evaluate how code changes affect the model output.
benchcab automates running tests against many possible configurations of CABLE.
benchcab is intended to have limited configurability by design.
Regression mode: run 2 models with the same science options.
New feature mode: run 2 models, one with a science patch added to the science options.
Ensemble mode: run any number of models with custom science options.
Will be required for code submissions
Necessary to support old versions
benchcab is executed via the command line:
$ benchcab -h
usage: benchcab [-h] [-V] command ...
benchcab is a tool for evaluation of the CABLE land surface model.
positional arguments:
command
run Run all test suites for CABLE.
fluxnet Run the fluxnet test suite for CABLE.
checkout Run the checkout step in the benchmarking workflow.
build Run the build step in the benchmarking workflow.
fluxnet-setup-work-dir
Run the work directory setup step of the fluxnet command.
fluxnet-run-tasks Run the fluxnet tasks of the main fluxnet command.
spatial Run the spatial tests only.
optional arguments:
-h, --help Show this help message and exit.
-V, --version Show program's version number and exit.
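The workflow can also be run one step at a time via the subcommands listed above, e.g. to fetch and compile the code without running any tasks:
# run just the checkout and build steps of the workflow
benchcab checkout
benchcab build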
Example output from the benchcab run command:
$ benchcab run
Creating src directory: /scratch/tm70/sb8430/bench_example/src
Checking out repositories...
Successfully checked out trunk at revision 9550
Successfully checked out test-branch at revision 9550
Successfully checked out CABLE-AUX at revision 9550
Writing revision number info to rev_number-1.log
Compiling CABLE serially for realisation trunk...
Successfully compiled CABLE for realisation trunk
Compiling CABLE serially for realisation test-branch...
Successfully compiled CABLE for realisation test-branch
Setting up run directory tree for FLUXNET tests...
Creating runs/site/logs directory: /scratch/tm70/sb8430/bench_example/runs/site/logs
Creating runs/site/outputs directory: /scratch/tm70/sb8430/bench_example/runs/site/outputs
Creating runs/site/tasks directory: /scratch/tm70/sb8430/bench_example/runs/site/tasks
Creating task directories...
Setting up tasks...
Successfully setup FLUXNET tasks
Creating PBS job script to run FLUXNET tasks on compute nodes: benchmark_cable_qsub.sh
PBS job submitted: 82479088.gadi-pbs
The CABLE log file for each task is written to runs/site/logs/<task_name>_log.txt
The CABLE standard output for each task is written to runs/site/tasks/<task_name>/out.txt
The NetCDF output for each task is written to runs/site/outputs/<task_name>_out.nc
The NetCDF output files can be found in runs/site/outputs/ under your work directory.
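benchcab runs its own bitwise comparisons (the analysis/bitwise-comparisons directory in the tree earlier), but a manual spot check of two task outputs is possible with standard tools. A sketch, where <taskA> and <taskB> are placeholders for two task names:
# byte-for-byte comparison of two NetCDF outputs; no output means identical
cmp runs/site/outputs/<taskA>_out.nc runs/site/outputs/<taskB>_out.nc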