
add check_opt_vs_noopt_jit option to check that results between baseline and optimized jitted versions match #249

Merged
Merged 2 commits into master on Mar 2, 2021

Conversation

@Krovatkin (Contributor) commented on Feb 9, 2021

add check_opt_vs_noopt_jit option to check that results between baseline and optimized jitted versions match

@@ -29,4 +29,34 @@ def _set_mode(self, train):
(model, _) = self.get_module()
model.train(train)


def check_results(self):
@wconstab (Contributor) commented:

I'm wondering how this is intended to be used. If it's mainly used locally by JIT developers, it won't affect all users of the benchmark. Should we make it slightly less intrusive by not requiring a `check_results` override for models that don't jit? Is it sufficient to just try/except `NotImplementedError` inside this function to skip the ones that aren't supported?

]

if model_name in model_blacklist:
warnings.warn(UserWarning(f"{model_name}.get_module() doesn't support `check_results` yet!"))
@wconstab (Contributor) commented:

Is this just WIP (that you have both a blacklist and try/except)? Or do you need both for some reason?
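(For readers following the thread: the two mechanisms under discussion combine roughly as below. This is a sketch of the pattern, not the PR's exact code; the per-model `model.check_results()` hook and the empty skip-list entries are stand-ins.)

```python
import warnings

# Hypothetical skip-list: models whose results are known to mismatch, or
# whose jitted module can't be reached by the harness (entries are placeholders).
model_blacklist = []

def check_results(model, model_name):
    """Sketch: run the per-model check, tolerating models that don't support it."""
    if model_name in model_blacklist:
        warnings.warn(UserWarning(
            f"{model_name}.get_module() doesn't support `check_results` yet!"))
        return
    try:
        model.check_results()  # per-model hook; may raise NotImplementedError
    except NotImplementedError:
        warnings.warn(UserWarning(
            f"{model_name} doesn't implement `check_results`."))
```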

@Krovatkin (Contributor, Author) commented on Feb 17, 2021:

@wconstab Unfortunately, for some benchmarks the results don't match, and others are set up in a way that I can't get at their jitted module to reset the optimization setting and recompile it without optimizations.
I added some comments.
Eventually, I'm hoping to enable results checking for as many models as I can.
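(A minimal sketch of what the opt-vs-noopt comparison amounts to, assuming a scripted module with a single tensor output. `torch.jit.optimized_execution` is used here as one way to toggle the optimizing executor; the PR's actual implementation may differ.)

```python
import torch

def check_opt_vs_noopt_jit(scripted_model, example_input,
                           rtol=1e-4, atol=1e-5, warmup=3):
    """Sketch: compare a scripted module's output with JIT optimizations
    disabled (baseline) vs. enabled. Assumes a single tensor output."""
    # Run the no-opt baseline first, before an optimized graph is compiled
    # and cached for this module.
    with torch.jit.optimized_execution(False):
        baseline = scripted_model(example_input)
    # Profiling executors may need a few warm-up runs before optimizing.
    with torch.jit.optimized_execution(True):
        for _ in range(warmup):
            optimized = scripted_model(example_input)
    assert torch.allclose(baseline, optimized, rtol=rtol, atol=atol), \
        "optimized JIT output diverges from the no-opt baseline"
```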

@wconstab (Contributor) left a review comment:

LGTM, other than one nit: you could add a comment and/or rename the `check_results` function to explain the idea of verifying that the optimized JIT behaves the same as the baseline JIT.

(This disambiguates it from the also-important, but unaddressed, case of making sure the model is producing correct results.)

@Krovatkin changed the title from "add check_results option for inference runs" to "add check_opt_vs_noopt_jit option to check that results between baseline and optimized jitted versions match" on Mar 2, 2021
@Krovatkin merged commit ff0b662 into master on Mar 2, 2021