[Question]: HuggingFace model from local file path #14367
Comments
Hey @lyie28! I'm a bot designed to help you out with bugs, questions, and more. I'm here to assist you while waiting for a human maintainer. Let me know how I can help!

Yes, you can load a local fine-tuned model with the HuggingFace framework from a local file path. Here is an example of how you can do it:

```python
from sentence_transformers import CrossEncoder

# Load the model from a local path
model = CrossEncoder('path/to/your/local/model')
```

This code snippet demonstrates how to load a model from a local file path using the `CrossEncoder` class.
Hi @dosu Basically I am using a framework that expects a LlamaIndex LLM, but I have a local fine-tuned model that uses the HF framework. I wanted to know if I can create a LlamaIndex LLM object from the local model without needing to upload the HF local model to the hub. I have only seen examples of HF integrations like the following, which use the HF hub slug:

```python
from llama_index.llms.huggingface import HuggingFaceLLM

llm = HuggingFaceLLM(model_name="HuggingFaceH4/zephyr-7b-alpha")
```
Yes, you can create a LlamaIndex LLM object from a local fine-tuned HuggingFace model without needing to upload the model to the HuggingFace hub. Here is an example of how you can do this:

```python
from llama_index.llms.huggingface import HuggingFaceLLM
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load your local fine-tuned model and tokenizer from disk
local_model_path = "path/to/your/local/model"
model = AutoModelForCausalLM.from_pretrained(local_model_path)
tokenizer = AutoTokenizer.from_pretrained(local_model_path)

# Create the HuggingFaceLLM object from the in-memory model and tokenizer
llm = HuggingFaceLLM(
    model=model,
    tokenizer=tokenizer,
    context_window=3900,
    max_new_tokens=256,
    device_map="auto",
)

# Now you can use `llm` for your LLM applications
```

This code snippet demonstrates how to initialize the `HuggingFaceLLM` object directly from local model and tokenizer instances.
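A common failure mode with local paths is that `from_pretrained` falls back to treating a non-existent directory as a Hub repo id and raises a confusing "not a valid model identifier" error. The helper below is an illustrative sketch for catching that early; its name and the `config.json` heuristic are assumptions of this sketch, not part of `transformers` or LlamaIndex:

```python
import os


def looks_like_saved_model(path: str) -> bool:
    """Heuristic check (illustrative, not a library function): a directory
    written by save_pretrained() normally contains a config.json, so its
    absence usually means the path will not resolve as a local model."""
    return os.path.isdir(path) and os.path.isfile(os.path.join(path, "config.json"))
```

If this returns `False`, double-check the path before passing it to `from_pretrained`, since the string would otherwise be interpreted as a HuggingFace Hub identifier.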
Dosu is correct above
Question Validation
Question
Can I load a local fine-tuned model with the HF framework from a local file path, rather than uploading it to the hub and using the HF hub slug?