This is a development repository for refactoring and improving the numerical stability of the coli part of the PoWR code (Potsdam Wolf-Rayet Stellar Atmospheres). For a description of PoWR and the available models, see here.
This library is currently under development!
The code in this repository is the merged version between the Potsdam branch (branch "wrh-source") and the Heidelberg branch as of August 24.
The source code is a collection of >400 Fortran77 files with >500 subroutines. For a complete PoWR cycle, different programs are called consecutively. The execution is handled by bash scripts that call each other. These bash scripts are placed in `powr/dummychain` and `powr/proc.dir`.
In this repository, the development focuses on the coli part of the execution cycle, in particular the colimo subroutine.
Testing needs to cover single and multiple coli cycles, as well as integration into the PoWR execution cycle. For this, we run `colitest` and `wrstart`. The integration tests are only run before merging into the main branch, as they are quite costly in terms of compute time.
Testing also includes compilation of the source code on macOS and Ubuntu with different Intel and GNU compilers. The results from different compilers and different optimization levels need to be consistent.
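A rough sketch of what such a consistency check could look like: parse a numeric column from the output of two compiler builds and compare within a tolerance. The parser and the tolerance are illustrative assumptions, not the repository's actual comparison logic.

```python
# Hypothetical cross-compiler consistency check (illustrative only).
import math


def read_column(path, col=0):
    """Parse one whitespace-separated numeric column from an output file."""
    values = []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if len(parts) > col:
                try:
                    values.append(float(parts[col]))
                except ValueError:
                    continue  # skip header / text lines
    return values


def outputs_consistent(ref, new, rel_tol=1e-10):
    """True if both runs produced the same values up to rel_tol."""
    if len(ref) != len(new):
        return False
    return all(math.isclose(a, b, rel_tol=rel_tol, abs_tol=1e-300)
               for a, b in zip(ref, new))
```

Exact bitwise equality is usually too strict across compilers and optimization levels, so a relative tolerance is the more realistic criterion.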
- initial set-up of the CI to ensure valid output
- improve the make process: better structure of source directory and make process
- improve the make process: include ifx and gnu compiler and debug compiler options
- profile the test runs for time and memory consumption
- implement Fortran best practices in `colimo.f` and modernize to latest standards
- identify computational and numerical bottlenecks and improve the algorithm
To compile PoWR, make sure you have a Fortran compiler and math libraries installed. We recommend the Intel Fortran compiler and MKL libraries. These can be obtained through the oneAPI toolkits.
After installing the compiler(s) and libraries, make sure that the environment variables in your shell are set correctly, so that you can compile, link, and execute the code. Intel oneAPI provides a `set_up_vars.sh` script that should set everything up for you (in principle; version conflicts can occur if you have several sets of compilers installed).
To compile, simply execute `make` in the base repository directory of PoWR. This will provide you with (optimized) executables in the `powr/exe.dir` directory, compiled with `ifx`, the modern Fortran compiler by Intel.
Other compilation options are:

- `make small`: compilation with `ifx` and optimization, with only `coli` and `steal` as targets
- `make debug`: compilation with debug options and no optimization for `ifx`; only `coli` and `steal` are compiled and the executables are placed in `powr/exe_dev.dir`
- `make debug_all`: compilation with debug options and no optimization for `ifx`; all programs are compiled and the executables are placed in `powr/exe_dev.dir`
- `make intel_classic`: compilation with the classic `ifort` compiler with optimization; all programs are compiled and the executables are placed in `powr/exe.dir`
- `make intel_classic_debug`: compilation with debug options and no optimization with the classic `ifort` compiler; only `coli` and `steal` are compiled and the executables are placed in `powr/exe_dev.dir`
- `make gfortran`: compilation with the GNU Fortran compiler; only `coli` is compiled
- `make clean`: remove all binaries, object, and module files
- `make clean_build`: remove all object files to allow recompilation with/without debug options, without removing the compiled binaries in the `exe.dir`/`exe_dev.dir` folders
Object files are placed in `build`, module files in `modules`, and library files in `lib`, to allow a clean separation of source files, object and module files, and binary executables.
Note: In the future, the make process should make heavier use of encapsulated module files.
The tests are driven by pytest. The test configuration exports all the necessary variables and sets the stage for the bash scripts, which are then called in subprocesses. The output of the different jobs is compared to reference output in assert statements.
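A minimal sketch of this pattern, assuming a hypothetical `run_job` helper; the script, environment variables, and marker string are illustrative, not the repository's actual configuration:

```python
# Hypothetical sketch: run a bash job in a subprocess with an exported
# environment and assert on its output (names are illustrative).
import os
import subprocess


def run_job(script, workdir, extra_env=None):
    """Run a bash job script and capture its output."""
    env = dict(os.environ)
    env.update(extra_env or {})  # e.g. paths exported by the test set-up
    return subprocess.run(
        ["bash", "-c", script],
        cwd=workdir, env=env,
        capture_output=True, text=True, timeout=3600,
    )


def test_job_reports_success(tmp_path):
    """pytest-style check that the job ran and printed its success marker."""
    result = run_job("echo COLITEST finished", tmp_path)
    assert result.returncode == 0
    assert "COLITEST finished" in result.stdout
```

In the real test session, the asserts would compare the captured output against reference files rather than a fixed string.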
There are also testmodel files included in the repository. These include runs that are numerically unstable and need to be debugged. The testmodel files can be downloaded using

```
wget -O testmodels.tgz https://heibox.uni-heidelberg.de/f/a62c7ae5559d43a0a8b2/?dl=1
```
The coli test runs follow this workflow:
- Create a new chain, e.g. chain 1, by `makechain 1`. This copies some scripts and executables into different folders:

| Folder | Purpose |
|---|---|
| wrdata1 | data directory of the current results |
| scratch subdirectories | directories to handle the execution cycle process |
| wrjobs | collection of scripts to run |
| output | directory containing the global output |
| tmp_data | intermediate results? commonly scratch? |
| tmp_2day | ? |
The most important files are in the `wrdata1` directory: CARDS, DATOM, FEDAT, FEDAT_FORMAL, FGRID, FORMAL_CARDS, MODEL, MODEL_STUDY, NEWDATOM_INPUT, NEWFORMAL_CARDS_INPUT, next_job, next_jobz. Most of these are input files; some control the job execution.
- The test is then run by `sub colitest1` (or directly by calling `colitest1`). This creates the run logs in `output` (`colitest1.log` and `colitest1.cpr`); these are checked to verify that the run was successful (`COLITEST finished` in the `log`). The results in the `cpr` file are compared to the reference file.

The output generated in `wrdata1` is also compared: MODEL_STUDY_DONE, MODEL_STUDY_DONE_STEAL
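These two checks could be sketched as follows; the file handling and the line-by-line comparison rule are assumptions for illustration, not the repository's actual test code:

```python
# Hypothetical success and reference checks (illustrative only).


def run_succeeded(log_path, marker="COLITEST finished"):
    """True if the run log contains the success marker."""
    with open(log_path) as fh:
        return any(marker in line for line in fh)


def cpr_matches_reference(cpr_path, ref_path):
    """Compare result files line by line, ignoring trailing whitespace."""
    with open(cpr_path) as a, open(ref_path) as b:
        return [line.rstrip() for line in a] == [line.rstrip() for line in b]
```

A real comparison may need numeric tolerances instead of exact line equality, since compiler and optimization differences can change the last digits.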
- First, a new chain is generated using `makechain 1`.
- The integration test calls `wrstart` through `submit`. `wrstart` then calls `wruniq`, which handles the COMO / COLI / STEAL program cycles. Since all of these processes are detached from the initial submit process, we need to check for completion by regularly parsing the log files.
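The completion check by log parsing can be sketched as a polling loop; the marker, timeout, and poll interval here are illustrative assumptions:

```python
# Hypothetical polling loop for the detached integration run: since the
# wrstart/wruniq processes are not children of the test process, completion
# is detected by re-reading the log until a marker appears or a deadline
# passes (marker and intervals are illustrative).
import time


def wait_for_marker(log_path, marker, timeout_s=7200, poll_s=30):
    """Poll log_path until marker appears; return True on success."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with open(log_path) as fh:
                if any(marker in line for line in fh):
                    return True
        except FileNotFoundError:
            pass  # log not created yet by the detached job
        time.sleep(poll_s)
    return False
```

Polling with a hard deadline also catches hung runs, which would otherwise block the test session indefinitely.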
TBD
`colitest` (small integration test, coli and steal) and the full integration test (including the full cycle of program executions) are run using `pytest`. The Python test session set-up and tear-down are included in `tests`.
`ifx` returns a runtime error when executing coli: this may happen if the environment variables for oneAPI are not set correctly. Force-source the set-up using

```
source /opt/intel/oneapi/2024.2/oneapi-vars.sh --force
```