Fix is_multimodal_model judge #132

Merged 1 commit into main on Feb 3, 2024
Conversation

hnyls2002
Collaborator

@hnyls2002 hnyls2002 commented Feb 3, 2024

if isinstance(model, str):
    return "llava" or "yi-vl" in model

The above logic is wrong: because `in` binds tighter than `or`, the expression parses as `"llava" or ("yi-vl" in model)`, and the non-empty string `"llava"` is always truthy. So when model is llama-7b, the function returns "llava" rather than True or False.

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/ubuntu/sglang/python/sglang/launch_server.py", line 11, in <module>
    launch_server(server_args, None)
  File "/home/ubuntu/sglang/python/sglang/srt/server.py", line 356, in launch_server
    tokenizer_manager = TokenizerManager(server_args, port_args)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/sglang/python/sglang/srt/managers/tokenizer_manager.py", line 106, in __init__
    self.tokenizer = self.processor.tokenizer
                     ^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'LlamaTokenizerFast' object has no attribute 'tokenizer'. Did you mean: '_tokenizer'?
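A minimal sketch of the precedence bug and the corrected check (the function names here are illustrative stand-ins, not the exact names in sglang):

```python
def is_multimodal_buggy(model: str) -> bool:
    # Parses as: "llava" or ("yi-vl" in model).
    # The non-empty literal "llava" is truthy, so this branch
    # always returns the string "llava", never a bool.
    return "llava" or "yi-vl" in model


def is_multimodal_fixed(model: str) -> bool:
    # Each substring check is its own `in` expression,
    # so the result is a proper bool.
    return "llava" in model or "yi-vl" in model


print(is_multimodal_buggy("llama-7b"))  # "llava" (truthy, wrongly so)
print(is_multimodal_fixed("llama-7b"))  # False
print(is_multimodal_fixed("llava-v1.5-7b"))  # True
```

The truthy return value explains the traceback above: the server treated a text-only model as multimodal and built a processor wrapper, then failed when `LlamaTokenizerFast` had no `.tokenizer` attribute.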

@hnyls2002 hnyls2002 merged commit cd8c3cc into main Feb 3, 2024
@hnyls2002 hnyls2002 deleted the fix branch February 3, 2024 03:48