
[Question]: HuggingFace model from local file path #14367

Closed
lyie28 opened this issue Jun 25, 2024 · 4 comments
Labels
question Further information is requested

Comments


lyie28 commented Jun 25, 2024

Question Validation

  • I have searched both the documentation and Discord for an answer.

Question

Can I load a locally fine-tuned model with the HF framework from a local file path, rather than uploading it to the hub and using the HF hub slug?

lyie28 added the question label Jun 25, 2024

dosubot bot commented Jun 25, 2024

Hey @lyie28! I'm a bot designed to help you out with bugs, questions, and more. I'm here to assist you while waiting for a human maintainer. Let me know how I can help!

Yes, you can load a local fine-tuned model with the HuggingFace framework from a local file path. Here is an example of how you can do it:

from sentence_transformers import CrossEncoder

# Load the model from a local path
model = CrossEncoder('path/to/your/local/model')

This code snippet demonstrates how to load a model from a local file path using the CrossEncoder class from the sentence_transformers library [1].
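
For a quick sanity check of the loaded model, a minimal sketch (the example sentence pair is a placeholder, not from the original answer):

# predict() takes a list of (sentence_a, sentence_b) pairs and returns relevance scores
scores = model.predict([("What is the capital of France?", "Paris is the capital of France.")])
print(scores)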



lyie28 commented Jun 25, 2024

Hi @dosu

Basically, I am using a framework that expects a LlamaIndex LLM, but I have a model fine-tuned locally with the HF framework. I wanted to know whether I can create a LlamaIndex LLM object from the local model without needing to upload it to the hub.

I have only seen examples of HF integrations like the following, which use the HF hub slug.

from llama_index.llms.huggingface import HuggingFaceLLM
llm = HuggingFaceLLM(model_name="HuggingFaceH4/zephyr-7b-alpha")


dosubot bot commented Jun 25, 2024

Yes, you can create a LlamaIndex LLM object from a local fine-tuned HuggingFace model without needing to upload the model to the HuggingFace hub. Here is an example of how you can do this:

from llama_index.llms.huggingface import HuggingFaceLLM
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load your local fine-tuned model and tokenizer
local_model_path = "path/to/your/local/model"
model = AutoModelForCausalLM.from_pretrained(local_model_path)
tokenizer = AutoTokenizer.from_pretrained(local_model_path)

# Create the HuggingFaceLLM object
llm = HuggingFaceLLM(
    model=model,
    tokenizer=tokenizer,
    context_window=3900,
    max_new_tokens=256,
    device_map="auto"
)

# Now you can use `llm` for your LLM applications

This code snippet demonstrates how to initialize the HuggingFaceLLM class with a local model and tokenizer, bypassing the need to upload the model to the HuggingFace hub [1].
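
It may also work to point HuggingFaceLLM at the local directory without loading the model yourself, since model_name and tokenizer_name are forwarded to from_pretrained, which accepts local paths as well as hub slugs. A minimal sketch (not verified against any particular LlamaIndex version):

from llama_index.llms.huggingface import HuggingFaceLLM

# Alternative sketch: pass the local directory directly; HuggingFaceLLM
# forwards these names to AutoModelForCausalLM.from_pretrained and
# AutoTokenizer.from_pretrained, both of which accept local paths.
local_model_path = "path/to/your/local/model"
llm = HuggingFaceLLM(
    model_name=local_model_path,
    tokenizer_name=local_model_path,
    context_window=3900,
    max_new_tokens=256,
    device_map="auto",
)

# Quick end-to-end check
print(llm.complete("Hello, world!"))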


logan-markewich (Collaborator) commented

Dosu is correct above.
