
Samples #40

Open · wants to merge 8 commits into base: inference
Don't clobber kwargs in load_pretrained_model
Signed-off-by: Alastair D'Silva <[email protected]>
deece committed May 26, 2024
commit 0dc4977882d1ac1e8715ef6dea85d3f7bd5d5731
2 changes: 1 addition & 1 deletion llava/model/builder.py
@@ -25,7 +25,7 @@


def load_pretrained_model(model_path, model_base, model_name, load_8bit=False, load_4bit=False, device_map="auto", attn_implementation="flash_attention_2", customized_config=None, **kwargs):
-    kwargs = {"device_map": device_map}
+    kwargs.update({"device_map": device_map})

if load_8bit:
kwargs["load_in_8bit"] = True
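The effect of the one-line change can be illustrated with a minimal sketch. The two toy functions below are illustrative stand-ins for the old and new bodies of `load_pretrained_model`, not the real loader; they only show what happens to caller-supplied keyword arguments.

```python
def load_old(device_map="auto", **kwargs):
    # Old behaviour: reassignment replaces the whole dict,
    # silently discarding any kwargs the caller passed in.
    kwargs = {"device_map": device_map}
    return kwargs

def load_new(device_map="auto", **kwargs):
    # New behaviour: update() merges device_map into the existing
    # dict, preserving caller-supplied kwargs.
    kwargs.update({"device_map": device_map})
    return kwargs

print(load_old(torch_dtype="float16"))  # torch_dtype is lost
print(load_new(torch_dtype="float16"))  # torch_dtype is kept
```

With the old code, an option such as `torch_dtype="float16"` passed by the caller never reached the model constructor; with `update()` it survives alongside `device_map`.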