[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
[SIGIR'24] The official implementation of MOELoRA.
[ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"
[ICCV 2023 oral] This is the official repository for our paper: "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning".
[CVPR2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory"
A generalized framework for subspace tuning methods in parameter-efficient fine-tuning.
Exploring the potential of fine-tuning Large Language Models (LLMs) such as Llama2 and StableLM for medical entity extraction. This project adapts these models with PEFT, Adapter V2, and LoRA techniques to efficiently and accurately extract drug names and adverse side effects from pharmaceutical texts.
This is the official repository of the papers "Parameter-Efficient Transfer Learning of Audio Spectrogram Transformers" and "Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters".
CorDA: Context-Oriented Decomposition Adaptation of Large Language Models
[WACV 2024] MACP: Efficient Model Adaptation for Cooperative Perception.
A framework to optimize Parameter-Efficient Fine-Tuning for Fairness in Medical Image Analysis
Code for the EACL 2024 paper: "Small Language Models Improve Giants by Rewriting Their Outputs"
Parameter Efficient Fine-Tuning for CLIP
A Production-Ready, Scalable RAG-powered LLM-based Context-Aware QA App
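Many of the repositories above build on the LoRA family of methods (LoRA, DoRA, MOELoRA, CorDA), which all share one core idea: freeze the pretrained weight W and train only a low-rank update B @ A. A minimal, dependency-free sketch of that idea follows; it illustrates the math only and is not taken from any particular repository's implementation (the function names and the pure-Python matrix representation are illustrative choices).

```python
# Minimal sketch of the low-rank adaptation idea shared by the LoRA-family
# repos listed above: the frozen weight W is augmented with B @ A, so only
# A (r x d_in) and B (d_out x r) are trained. Matrices are plain lists of
# rows to keep the example dependency-free.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def matadd(X, Y):
    """Element-wise sum of two same-shaped matrices."""
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def lora_forward(W, A, B, x, alpha=1.0):
    """Compute y = (W + (alpha / r) * B @ A) @ x, where r = len(A) is the rank."""
    r = len(A)
    scale = alpha / r
    delta = [[scale * v for v in row] for row in matmul(B, A)]
    return matmul(matadd(W, delta), x)

# Rank-1 update on a 2x2 identity weight:
W = [[1, 0], [0, 1]]
A = [[1, 1]]        # 1 x 2 (rank r = 1)
B = [[1], [0]]      # 2 x 1
x = [[1], [2]]      # column vector
print(lora_forward(W, A, B, x))  # [[4.0], [2.0]]
```

With rank r much smaller than the weight dimensions, the trainable parameter count drops from d_out * d_in to r * (d_in + d_out), which is why these methods are grouped under the parameter-efficient-fine-tuning topic.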