This repository has been archived by the owner on Mar 21, 2024. It is now read-only.
ENH: Upgrading package versions for security patches #757
This PR upgrades various packages to patch identified critical CVEs. The main change is updating pytorch-lightning to 1.6.4, which introduces a number of breaking changes for various components and tests; these have been patched in this PR.
However, this includes two potentially problematic changes:
- `model_training.py` no longer changes the current working directory before calling `trainer.fit()`. The directory change was causing the CIFAR SSL container tests to fail, because CIFAR was being downloaded to `/None/` at the head of the directory tree. I'm personally of the opinion that we shouldn't be changing the working directory at all, as it's a perfect recipe for annoying problems like this one (it took me a long time to figure out what was going on). It should be the responsibility of the user running non-InnerEye models to ensure that their files are saved to the appropriate folder, not IE-DL's job to guess where they will go.
- `expected_metrics` in `test_innereye_ssl_container_cifar10_resnet_simclr()`. These regression tests were failing after the package upgrades, so I updated the expected metrics to the new values. The validation loss values improved and the training ones worsened, neither significantly. This could affect other models, but I'm not sure which models need to be checked!

Also includes an upgrade to hi-ml v0.2.5, closing #801.
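As a minimal sketch of the working-directory pitfall described above (not InnerEye code — `download_dataset` is a hypothetical stand-in for a torchvision-style download helper): any dataset path given relatively resolves against the process-wide current working directory, so a `chdir` performed by the framework before `fit()` silently redirects where the data lands.

```python
import os
import tempfile
from pathlib import Path

def download_dataset(root: str = "data") -> Path:
    """Hypothetical stand-in for a torchvision-style download helper:
    it writes into a path relative to whatever the current working
    directory happens to be at call time."""
    target = Path(root) / "cifar-marker.txt"
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text("downloaded")
    return target.resolve()

original_cwd = os.getcwd()
with tempfile.TemporaryDirectory() as run_dir, tempfile.TemporaryDirectory() as fit_dir:
    os.chdir(run_dir)
    before = download_dataset()   # resolves under run_dir
    os.chdir(fit_dir)             # e.g. a framework chdir-ing before fit()
    after = download_dataset()    # identical call, different destination
    run_root = Path(run_dir).resolve()
    fit_root = Path(fit_dir).resolve()
os.chdir(original_cwd)
```

The two identical calls write to two different locations, which is why leaving the working directory alone (and making the caller pass absolute paths) is the more predictable contract.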