Enable SHiELD runs in argo workflows #2377
Conversation
This references forcing data in the vcm-fv3config bucket, and makes parameters controlling whether we use data from initial conditions or the climatology consistent with our v0.7 FV3GFS base config.
This bumps SHiELD-wrapper to include a Q-flux bug fix and a couple of other user-experience improvements to the SOM. Note this fix has not been merged to SHiELD yet, so I will update this PR later once we can point to the main branches of SHiELD-wrapper and SHiELD_physics.
A bit tough to review, but I think it looks good!
I would support moving the DGLBACKEND env var definition to the Dockerfile.
Thanks @oliverwm1! I went ahead and set
#2377 reorganized the `base_yamls` directory in fv3kube to make room for SHiELD reference configurations, but neglected to update the `package_data` parameter in `setup.py` accordingly. Without this change, installing `fv3kube` via something like:

```
$ pip install git+https://github.com/ai2cm/fv3net.git@b8e2f83b5206539724a4d096d0433ceeb3bc805a#egg=fv3kube&subdirectory=external/fv3kube
```

does not include the `base_yamls` files, which are an important component of the library. I've tested this locally and it fixes the issue, e.g. use:

```
$ pip install git+https://github.com/ai2cm/fv3net.git@8fe01cd6c49ea635a1b07afd4ee4615db7555ce6#egg=fv3kube&subdirectory=external/fv3kube
```

and check for the existence of the `base_yamls` directory (note the different SHA from the original example).
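For illustration, a minimal sketch of the kind of `package_data` declaration described above. The glob patterns and directory depth are assumptions about the layout, not copied from the repository's actual `setup.py`:

```python
# Hypothetical sketch: declare the base_yamls YAML files as package data
# so that pip installs of fv3kube ship them alongside the Python code.
# The patterns assume the files live under fv3kube/base_yamls/ at most
# one subdirectory deep; adjust to match the real tree.
PACKAGE_DATA = {"fv3kube": ["base_yamls/*.yaml", "base_yamls/*/*.yaml"]}

# In setup.py this mapping is passed to setuptools, e.g.:
#   from setuptools import setup, find_packages
#   setup(name="fv3kube", packages=find_packages(), package_data=PACKAGE_DATA)
```

Note that `package_data` globs are resolved relative to the package directory, which is why the patterns do not repeat the `fv3kube/` prefix.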
This PR builds on #2376 and splits out from #2350 what is necessary to run SHiELD-wrapper-based prognostic simulations through our standard prognostic run argo workflow. No changes to the frontend API are needed; the prognostic run workflow is modified to infer which template (`run-fv3gfs` or `run-shield`) to run based on the input config. For convenience this also adds a starter base config for SHiELD, which is based on the configuration used in the PIRE simulations (but for simplicity with the mixed layer ocean turned off). I have tested the `prognostic-run` and `restart-prognostic-run` workflows using a SHiELD-based config offline. I'm not sure if we want to add an integration test yet or not.

Significant internal changes:

- The `prognostic-run` workflow is updated to infer whether to use FV3GFS or SHiELD based on the config.
- The `restart-prognostic-run` workflow is updated to infer whether to use FV3GFS or SHiELD based on the config at the provided URL.
- `fv3kube` is reorganized to better accommodate SHiELD configs. No user-facing changes to the FV3GFS configs are made.

Note this PR makes use of YAML anchors and aliases to reduce the amount of duplicate configuration code. Some illustration of how these work can be found here. Use of this concept was already introduced in #2103 within the
`training.yaml` template, though this is the first time using it in the prognostic run.

To illustrate the updated workflows I have included some example step outputs from `argo get` below (we ran the `prognostic-run` workflow for two segments and then ran one more segment via the `restart-prognostic-run` workflow).

`prognostic-run`

`restart-prognostic-run`
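For readers unfamiliar with the YAML anchors and aliases mentioned above, a minimal illustration follows. The keys are made up for demonstration and are not taken from this repository's base configs:

```yaml
# &name attaches an anchor to a node; *name reuses it verbatim;
# "<<:" merges an anchored mapping in, letting later keys override.
physics_defaults: &physics_defaults
  dt_atmos: 900
  do_deep_convection: true

model_a:
  physics: *physics_defaults      # exact reuse of the anchored mapping

model_b:
  physics:
    <<: *physics_defaults         # inherit the defaults...
    do_deep_convection: false     # ...then override a single key
```

This keeps shared settings in one place, so a change to the anchored block propagates to every config that aliases or merges it.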