
model_comparator.py broken #1899

Open
johnwee1 opened this issue May 29, 2024 · 0 comments
@johnwee1 (Contributor)

Referencing this PR https://github.com/meta-llama/llama-recipes/pull/488/files, where it seems the evaluator class is no longer used.

I could not get the model_comparator.py script to run: it throws `AttributeError: module 'lm_eval.tasks' has no attribute 'initialize_tasks'`, so my guess is that the script is broken.

I also don't quite understand the point of the script: isn't it more straightforward to run eval in predict-only mode at temperature 0 and then compare the results directly? At any non-zero temperature the results would not tally across different backends.
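A minimal sketch of the direct comparison I mean, assuming both backends are run with greedy decoding (temperature 0) so their outputs are deterministic; how the output lists are produced is out of scope here:

```python
def exact_match_rate(outputs_a, outputs_b):
    """Fraction of prompts for which two backends produce identical text.

    outputs_a / outputs_b: lists of generated strings, one per prompt,
    in the same prompt order for both backends.
    """
    if len(outputs_a) != len(outputs_b):
        raise ValueError("output lists must be the same length")
    # strip() tolerates trailing-whitespace differences between backends
    matches = sum(a.strip() == b.strip() for a, b in zip(outputs_a, outputs_b))
    return matches / len(outputs_a)
```

With greedy decoding, anything below 1.0 here points at a genuine backend divergence rather than sampling noise.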

Thanks for taking the time to read.
