[SIGIR'24] The official implementation code of MOELoRA.
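MOELoRA routes between several low-rank LoRA experts instead of training a single adapter. Below is a minimal sketch of that idea, assuming a softmax gate over per-expert (A, B) pairs attached to a frozen linear layer; the class name, rank, and expert count are illustrative, not the repo's actual API:

```python
import torch
import torch.nn as nn

class MoELoRALinear(nn.Module):
    """Frozen linear layer plus a gated mixture of LoRA experts (illustrative sketch)."""
    def __init__(self, base: nn.Linear, num_experts: int = 4, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                    # pretrained weight stays frozen
        d_in, d_out = base.in_features, base.out_features
        # One low-rank (A, B) pair per expert.
        self.A = nn.Parameter(torch.randn(num_experts, d_in, r) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, r, d_out))
        self.gate = nn.Linear(d_in, num_experts)       # input-conditioned gate
        self.scaling = alpha / r

    def forward(self, x):                              # x: (batch, d_in)
        weights = torch.softmax(self.gate(x), dim=-1)  # (batch, num_experts)
        delta = torch.einsum("bd,edr,ero->beo", x, self.A, self.B)  # per-expert outputs
        mixed = torch.einsum("be,beo->bo", weights, delta)
        return self.base(x) + self.scaling * mixed

layer = MoELoRALinear(nn.Linear(64, 64))
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```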
Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis"
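NOLA's core idea is to express the LoRA factors as linear combinations of frozen random basis matrices, so only the mixing coefficients are trained and stored. A minimal sketch under that reading; all names and shapes here are illustrative:

```python
import torch
import torch.nn as nn

class NOLALinear(nn.Module):
    """LoRA delta built from frozen random bases; only mixing coefficients train (sketch)."""
    def __init__(self, base: nn.Linear, r: int = 8, k: int = 32):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        d_in, d_out = base.in_features, base.out_features
        # Bases are reproducible from a seed, so only 2*k scalars per layer
        # need to be stored or transmitted.
        g = torch.Generator().manual_seed(0)
        self.register_buffer("basis_A", torch.randn(k, d_in, r, generator=g) / d_in**0.5)
        self.register_buffer("basis_B", torch.randn(k, r, d_out, generator=g) / r**0.5)
        self.alpha = nn.Parameter(torch.randn(k) / k)  # trainable coefficients for A
        self.beta = nn.Parameter(torch.zeros(k))       # beta starts at zero, so the
        # delta is zero at init, mirroring standard LoRA initialization.

    def forward(self, x):
        A = torch.einsum("k,kdr->dr", self.alpha, self.basis_A)  # combined A: (d_in, r)
        B = torch.einsum("k,kro->ro", self.beta, self.basis_B)   # combined B: (r, d_out)
        return self.base(x) + x @ A @ B

layer = NOLALinear(nn.Linear(64, 64))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 64: two coefficient vectors instead of full low-rank matrices
```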
An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
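APT's adaptive criterion is specific to the paper, but the mechanics of structured pruning can be illustrated with plain magnitude pruning: score each output unit, keep the top fraction, and rebuild a smaller layer. This sketch shows only that generic step, not APT itself:

```python
import torch
import torch.nn as nn

def prune_linear_rows(layer: nn.Linear, keep_ratio: float = 0.5) -> nn.Linear:
    """Generic magnitude pruning: keep the output rows with the largest L2 norm.
    Illustrates structured pruning in general, not APT's adaptive criterion."""
    norms = layer.weight.norm(dim=1)            # one score per output unit
    k = max(1, int(keep_ratio * layer.out_features))
    keep = norms.topk(k).indices.sort().values  # indices of surviving rows
    pruned = nn.Linear(layer.in_features, k, bias=layer.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(layer.weight[keep])
        if layer.bias is not None:
            pruned.bias.copy_(layer.bias[keep])
    return pruned

layer = nn.Linear(64, 64)
small = prune_linear_rows(layer, keep_ratio=0.25)
print(small.weight.shape)  # torch.Size([16, 64])
```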
Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?"
Memory-efficient fine-tuning; supports fine-tuning a 7B model within 24 GB of GPU memory.
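One common recipe for fitting a 7B model into 24 GB is to quantize the frozen backbone to 4 bits and train only LoRA adapters (the QLoRA approach), e.g. with Hugging Face transformers, peft, and bitsandbytes. A sketch, with the checkpoint name as a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the frozen backbone in 4-bit NF4 so the 7B weights fit in a few GB.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder checkpoint; any 7B causal LM works
    quantization_config=bnb,
    device_map="auto",
)

# Train only small LoRA adapters on the attention projections.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically <1% of the 7B parameters
```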
CRE-LLM: A Domain-Specific Chinese Relation Extraction Framework with Fine-tuned Large Language Model
High-quality image generation model, powered by an NVIDIA A100.
Mistral and Mixtral (MoE) from scratch
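The distinctive piece of Mixtral is its sparse mixture-of-experts feed-forward block: a gate picks the top-2 experts per token and mixes their outputs. A from-scratch sketch of that routing (dimensions and expert MLPs are illustrative):

```python
import torch
import torch.nn as nn

class Top2MoE(nn.Module):
    """Sparse MoE block in the spirit of Mixtral: each token is routed to its
    top-2 experts and their outputs are combined by the gate weights (sketch)."""
    def __init__(self, d_model: int = 64, d_ff: int = 256, num_experts: int = 8):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                           # x: (tokens, d_model)
        logits = self.gate(x)
        top_vals, top_idx = logits.topk(2, dim=-1)  # two experts per token
        weights = torch.softmax(top_vals, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(2):
            for e in range(len(self.experts)):
                mask = top_idx[:, slot] == e        # tokens assigned to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](x[mask])
        return out

block = Top2MoE()
print(block(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```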
This project converts natural-language questions into SQL queries.
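A typical text-to-SQL setup grounds the prompt in the table schema and asks the model for a single query. A sketch; `complete` is a hypothetical stand-in for whatever LLM call the project uses, and the schema is made up:

```python
# Hypothetical helper: `complete` stands in for any LLM call (API or local model).
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM of choice here")

SCHEMA = "CREATE TABLE orders (id INT, customer TEXT, total REAL, placed_at DATE);"

def nl_to_sql(question: str) -> str:
    prompt = (
        "Translate the question into a single SQLite query.\n"
        f"Schema:\n{SCHEMA}\n"
        f"Question: {question}\n"
        "SQL:"
    )
    return complete(prompt).strip()

# nl_to_sql("What is the total revenue per customer?")
# -> e.g. "SELECT customer, SUM(total) FROM orders GROUP BY customer;"
```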
Fine-tuning Llama 3 8B to generate JSON for arithmetic questions, then parsing the output to perform the calculations.
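The post-processing half of such a pipeline can be plain Python: parse the model's JSON and do the arithmetic deterministically rather than trusting the LLM's math. A sketch, assuming a hypothetical {"operation", "operands"} output format (the repo's actual schema may differ):

```python
import json
import operator

OPS = {"add": operator.add, "subtract": operator.sub,
       "multiply": operator.mul, "divide": operator.truediv}

def evaluate(model_output: str) -> float:
    """Parse the model's JSON and fold the operator over the operands in Python."""
    call = json.loads(model_output)
    op = OPS[call["operation"]]
    result = call["operands"][0]
    for operand in call["operands"][1:]:
        result = op(result, operand)
    return result

# e.g. the fine-tuned model answers "What is 17 * 4 * 2?" with:
print(evaluate('{"operation": "multiply", "operands": [17, 4, 2]}'))  # 136
```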
Fine-tuning an LLM to generate musical micro-genres
This project is an implementation of the paper "Parameter-Efficient Transfer Learning for NLP" (Houlsby et al., Google, ICML 2019).
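The Houlsby adapter is a small bottleneck inserted after each transformer sublayer: down-project, nonlinearity, up-project, residual, with near-identity initialization so training starts from the pretrained model. A minimal PyTorch sketch (dimensions are illustrative):

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter in the style of Houlsby et al. (2019). Inserted after
    each transformer sublayer; only adapter (and layer-norm) weights are trained."""
    def __init__(self, d_model: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.GELU()
        # Zeroed up-projection makes the module an exact identity at init.
        nn.init.normal_(self.down.weight, std=1e-3)
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, h):
        return h + self.up(self.act(self.down(h)))

adapter = Adapter()
h = torch.randn(2, 10, 768)
print(torch.equal(adapter(h), h))  # True: identity at initialization
```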
A wide-ranging collection of material on transformers and NLP.
Comparing popular Parameter Efficient Fine-Tuning (PEFT) techniques for Large Language Models
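With the Hugging Face peft library, such a comparison can start from trainable-parameter counts: wrap the same base model in each method's config and count what requires gradients. A sketch using gpt2 as a conveniently small stand-in:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PrefixTuningConfig, PromptTuningConfig, get_peft_model

configs = {
    "LoRA": LoraConfig(r=8, task_type="CAUSAL_LM"),
    "Prefix tuning": PrefixTuningConfig(num_virtual_tokens=20, task_type="CAUSAL_LM"),
    "Prompt tuning": PromptTuningConfig(num_virtual_tokens=20, task_type="CAUSAL_LM"),
}

for name, cfg in configs.items():
    base = AutoModelForCausalLM.from_pretrained("gpt2")  # small model for a quick look
    peft_model = get_peft_model(base, cfg)
    trainable = sum(p.numel() for p in peft_model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in peft_model.parameters())
    print(f"{name}: {trainable:,} trainable of {total:,} ({100 * trainable / total:.2f}%)")
```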