
Keep getting error: 'VLLM' object has no attribute 'AUTO_MODEL_CLASS' #1953

Closed
andrew0411 opened this issue Jun 12, 2024 · 8 comments

@andrew0411

Hello, I tried to run an eval locally with vLLM, but I keep hitting the same error.

Versions

  • lm-evaluation-harness==0.4.2
  • vllm==0.3.2
export HF_DATASETS_OFFLINE=1;
lm-eval \
--tasks mmlu \
--model vllm \
--model_args pretrained=/model/path/Meta-Llama-3-8B,tokenizer=/model/path/Meta-Llama-3-8B,tensor_parallel_size=1,dtype=bfloat16,data_parallel_size=4,gpu_memory_utilization=0.8 \
--batch_size "auto" \
--num_fewshot 5 \
--write_out \
--output_path ../results/
2024-06-10:18:05:24,757 INFO     [evaluator.py:362] Running loglikelihood requests
Traceback (most recent call last):
  File "/usr/local/bin/lm-eval", line 8, in <module>
    sys.exit(cli_evaluate())
  File "/home/jovyan/workspace/llm-data/lm-evaluation-harness-0.4.2/lm_eval/__main__.py", line 342, in cli_evaluate
    results = evaluator.simple_evaluate(
  File "/home/jovyan/workspace/llm-data/lm-evaluation-harness-0.4.2/lm_eval/utils.py", line 288, in _wrapper
    return fn(*args, **kwargs)
  File "/home/jovyan/workspace/llm-data/lm-evaluation-harness-0.4.2/lm_eval/evaluator.py", line 234, in simple_evaluate
    results = evaluate(
  File "/home/jovyan/workspace/llm-data/lm-evaluation-harness-0.4.2/lm_eval/utils.py", line 288, in _wrapper
    return fn(*args, **kwargs)
  File "/home/jovyan/workspace/llm-data/lm-evaluation-harness-0.4.2/lm_eval/evaluator.py", line 373, in evaluate
    resps = getattr(lm, reqtype)(cloned_reqs)
  File "/home/jovyan/workspace/llm-data/lm-evaluation-harness-0.4.2/lm_eval/api/model.py", line 325, in loglikelihood
    context_enc, continuation_enc = self._encode_pair(context, continuation)
  File "/home/jovyan/workspace/llm-data/lm-evaluation-harness-0.4.2/lm_eval/api/model.py", line 300, in _encode_pair
    if self.AUTO_MODEL_CLASS == transformers.AutoModelForCausalLM:
AttributeError: 'VLLM' object has no attribute 'AUTO_MODEL_CLASS'

As the traceback shows, the error occurs in _encode_pair while running the loglikelihood requests: the VLLM model class does not define the AUTO_MODEL_CLASS attribute that the base class checks.

Do I need to set it manually as an argument?
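A minimal, self-contained sketch (with hypothetical class names, not the harness's real code) of why this AttributeError fires: the base template's _encode_pair reads self.AUTO_MODEL_CLASS, but the vLLM wrapper in v0.4.2 never defines that class attribute.

```python
# Hypothetical stand-ins for the harness classes, illustrating the failure mode.
class TemplateLM:
    def _encode_pair(self):
        # Reading an attribute the subclass never defined raises AttributeError.
        return self.AUTO_MODEL_CLASS


class VLLM(TemplateLM):
    pass  # no AUTO_MODEL_CLASS defined, mirroring the bug in 0.4.2


try:
    VLLM()._encode_pair()
except AttributeError as err:
    message = str(err)

print(message)  # 'VLLM' object has no attribute 'AUTO_MODEL_CLASS'
```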

@LSinev
Contributor

LSinev commented Jun 12, 2024

You can check whether it is fixed in the latest state of the main branch; there have been many fixes, changes, and improvements since the 0.4.2 release.

@haileyschoelkopf
Contributor

Hi, this is fixed on the main branch! We will soon put out a new release, v0.4.3.

@malhajar17

malhajar17 commented Jun 23, 2024

Hi, could you provide the steps to solve this? I have tried updating the repo a couple of times (I am on main) but I keep getting this error. @haileyschoelkopf

@malhajar17

As a temporary workaround, you can skip the condition causing the error by setting your model type manually in lm_eval/api/model.py, inside def _encode_pair(self, context, continuation): (around line 299). In my case, for Llama 3 70B, I changed if self.AUTO_MODEL_CLASS == transformers.AutoModelForCausalLM: to if True: and it worked.
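The branch being short-circuited only decides how the context/continuation boundary is tokenized. A rough sketch of the underlying idea, with a toy whitespace tokenizer standing in for the real HuggingFace one (ToyTokenizer and encode_pair are illustrative names, not the harness API):

```python
class ToyTokenizer:
    """Toy whitespace tokenizer standing in for a real HF tokenizer."""
    def __init__(self):
        self.vocab = {}

    def encode(self, text):
        # Assign each new word the next integer id, reusing ids for repeats.
        return [self.vocab.setdefault(tok, len(self.vocab)) for tok in text.split()]


def encode_pair(tokenizer, context, continuation):
    # Encode the whole string once, then split at the context's token length,
    # sidestepping any dependence on an AUTO_MODEL_CLASS check.
    whole_enc = tokenizer.encode(context + " " + continuation)
    n_ctx = len(tokenizer.encode(context))
    return whole_enc[:n_ctx], whole_enc[n_ctx:]


tok = ToyTokenizer()
ctx_enc, cont_enc = encode_pair(tok, "The capital of France is", "Paris")
print(len(ctx_enc), len(cont_enc))  # 5 1
```

Note that patching the installed library source like this is fragile and easy to lose on reinstall; pulling and reinstalling from main, as suggested above, is the durable fix.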

@haileyschoelkopf
Contributor

@malhajar17 have you uninstalled and reinstalled after pulling the latest commits from main?

@yaakovtayeb

Having the same issue here. When is v0.4.3 scheduled for release?

@najsword

> @malhajar17 have you uninstalled and reinstalled after pulling the latest commits from main?

Yes, of course; I have fallen into this hole too. Where is 0.4.3? Eagerly waiting.

@haileyschoelkopf
Contributor

v0.4.3 is released on PyPI!
