Add testing #59

Open
Alicegaz opened this issue Jul 6, 2022 · 0 comments
Alicegaz commented Jul 6, 2022

Support two types of testing:

  1. Running from a separate script: add a Test DataModule and a Test TaskModule so that models can be tested from a dedicated script (see the test-script sketch at the end of this comment).
  2. Restricting which dataset splits and Lightning loop types run when executing train.py: add a --mode parameter to train.py that selects between train and validate (and possibly test). Since a user may want to run validation only, e.g. for benchmarking, without any training, while reusing the training configs, Data Modules, and Task Modules, this parameter would allow skipping training and running validation (or testing) only. It could look like this:
if args.mode == "train":
    trainer.fit(model, datamodule=data_module)
elif args.mode == "valid":
    trainer.validate(model, datamodule=data_module)

In this case, all train and validation (and possibly test) data loaders have to be initialized. A fuller end-to-end sketch of the --mode wiring follows below.
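For context, a minimal sketch of how the --mode flag could be wired into train.py, assuming an argparse-based CLI; MyDataModule and MyTaskModule are hypothetical placeholders for the project's actual DataModule/TaskModule classes:

import argparse

import pytorch_lightning as pl

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--mode", choices=["train", "valid", "test"], default="train")
    args = parser.parse_args()

    data_module = MyDataModule()  # placeholder for the project's DataModule
    model = MyTaskModule()        # placeholder for the project's TaskModule
    trainer = pl.Trainer()

    # Dispatch to the requested Lightning loop; only the loaders
    # needed by that loop actually get used.
    if args.mode == "train":
        trainer.fit(model, datamodule=data_module)
    elif args.mode == "valid":
        trainer.validate(model, datamodule=data_module)
    else:
        trainer.test(model, datamodule=data_module)

if __name__ == "__main__":
    main()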
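And a sketch of what the separate test script from point 1 could look like. TestDataModule only needs to define test_dataloader(); the dummy tensors, MyTaskModule, and the checkpoint path are all illustrative placeholders:

import pytorch_lightning as pl
import torch
from torch.utils.data import DataLoader, TensorDataset

class TestDataModule(pl.LightningDataModule):
    def setup(self, stage=None):
        # Prepare only the test split; dummy tensors stand in for real data.
        self.test_dataset = TensorDataset(torch.randn(8, 3), torch.zeros(8, dtype=torch.long))

    def test_dataloader(self):
        return DataLoader(self.test_dataset, batch_size=4)

if __name__ == "__main__":
    # MyTaskModule and the checkpoint path are placeholders.
    model = MyTaskModule.load_from_checkpoint("path/to/checkpoint.ckpt")
    trainer = pl.Trainer()
    trainer.test(model, datamodule=TestDataModule())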
