
parameter-efficient-fine-tuning

Here are 23 public repositories matching this topic...

Exploring the potential of fine-tuning Large Language Models (LLMs) such as Llama 2 and StableLM for medical entity extraction. This project adapts these models using PEFT, Adapter V2, and LoRA techniques to efficiently and accurately extract drug names and adverse side effects from pharmaceutical texts.

  • Updated Dec 13, 2023
  • Python
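The LoRA technique named above can be sketched in a few lines. This is an illustrative toy, not code from the repository: the pretrained weight matrix `W` is frozen, and only two small matrices `A` (d_in × r) and `B` (r × d_out) are trained, so the forward pass becomes y = xW + (alpha/r)·xAB. All dimensions and values below are made up for the example.

```python
def matmul(x, w):
    """Multiply a row vector x (list) by a matrix w (list of rows)."""
    return [sum(x[i] * w[i][j] for i in range(len(x))) for j in range(len(w[0]))]

def lora_forward(x, W, A, B, alpha, r):
    """LoRA forward pass: frozen base path plus scaled low-rank delta."""
    base = matmul(x, W)              # frozen pretrained weights
    delta = matmul(matmul(x, A), B)  # trainable low-rank update
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# Toy shapes: d_in = 3, d_out = 2, rank r = 1
W = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # frozen (3x2)
A = [[1.0], [0.0], [0.0]]                 # trainable (3x1)
B = [[0.2, -0.1]]                         # trainable (1x2)
x = [1.0, 2.0, 3.0]

y = lora_forward(x, W, A, B, alpha=2.0, r=1)  # → [2.9, 3.3]
```

Only 5 adapter values (3·r + r·2 with r = 1) are trained here versus 6 in the full matrix; at realistic model sizes with small r, this gap is what makes the method parameter-efficient.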
UniFAQ

Fine-Tuned LLM-Based FAQ Generation for University Admissions: a project involving the fine-tuning of state-of-the-art language models, including LLaMA-3 8B, LLaMA-2 7B, Mistral 7B, T5, and BART, leveraging QLoRA via PEFT.

  • Updated Jul 5, 2024
  • Jupyter Notebook
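QLoRA, used by the project above, combines LoRA adapters with a quantized frozen base model: the pretrained weights are stored in 4-bit precision while the small adapter matrices stay in full precision. The sketch below is a simplified stand-in (plain symmetric 4-bit quantization rather than the NF4 format QLoRA actually uses) to show the core idea; all values are illustrative.

```python
def quantize_4bit(values):
    """Symmetric per-tensor 4-bit quantization: ints in [-8, 7] plus one scale."""
    scale = max(abs(v) for v in values) / 7 or 1.0
    q = [max(-8, min(7, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate full-precision weights for the forward pass."""
    return [v * scale for v in q]

# Toy frozen base weights: stored quantized, dequantized on the fly.
w = [0.42, -0.13, 0.70, -0.35]
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)
# The LoRA delta (see the low-rank update above in the topic) is then
# added to w_hat in full precision; only the adapter is ever trained.
```

Storing each weight as a 4-bit integer plus a shared scale cuts memory roughly 4x versus 16-bit weights, which is what lets these 7B–8B models be fine-tuned on a single consumer GPU.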
