LLaVA++: Extending Visual Capabilities with LLaMA-3 and Phi-3

Mohamed bin Zayed University of AI (MBZUAI)
* Equal contributions

Oryx Models | Google Demo


📢 Latest Updates

  • Apr-28-24: Online demos of Phi-3-V and LLaMA-3-V are released; check them out at Online Demo 🔥🔥🔥
  • Apr-28-24: LoRA, fully fine-tuned, and S2 fine-tuned models and results are added! 🔥🔥🔥
  • Apr-27-24: A Google Colab notebook is released to chat with the Phi-3-V-3.8B model; check it out at Google Colab 🔥🔥🔥
  • Apr-26-24: Phi-3-V and LLaMA-3-V released: excited to release the new integration of LLaVA with the Phi-3 Mini Instruct and LLaMA-3 Instruct models! Hugging Face 🔥🔥🔥

💬 Introduction

This repository enhances the capabilities of the LLaVA 1.5 model by incorporating the latest LLMs released this week 🔥: Phi-3 Mini Instruct (3.8B) and LLaMA-3 Instruct (8B).

πŸ† Results: Phi-3-V and LLaVA-3-V

Comparison on benchmarks for instruction-following LMMs and academic-task-oriented datasets:

  • The average is computed excluding MME, and second-best results are underlined.

🤖 Model-Zoo

The following tables provide an overview of the available models in our zoo. For each model, you can find a link to its Hugging Face page; a quick inference sketch follows the tables.

| Model Name | Hugging Face Link | Summary |
|---|---|---|
| LLaVA-Phi-3-mini-4k-instruct-pretrain | Hugging Face | Pretrained on LCS-558K. |
| LLaVA-Phi-3-mini-4k-instruct-lora | Hugging Face | LoRA weights fine-tuned on LLaVA-Instruct-665K. |
| LLaVA-Phi-3-mini-4k-instruct | Hugging Face | Merged LoRA weights in HuggingFace format. |
| LLaVA-Phi-3-mini-4k-instruct-FT | Hugging Face | Fully fine-tuned model weights in HuggingFace format. |

| Model Name | Hugging Face Link | Summary |
|---|---|---|
| LLaVA-Meta-Llama-3-8B-Instruct-pretrain | Hugging Face | Pretrained on LCS-558K. |
| LLaVA-Meta-Llama-3-8B-Instruct-lora | Hugging Face | LoRA weights fine-tuned on LLaVA-Instruct-665K. |
| LLaVA-Meta-Llama-3-8B-Instruct | Hugging Face | Merged weights in HuggingFace format. |
| LLaVA-Meta-Llama-3-8B-Instruct-FT | Hugging Face | Fully fine-tuned model weights in HuggingFace format. |
| LLaVA-Meta-Llama-3-8B-Instruct-FT-S2 | Hugging Face | Fully fine-tuned S2 model weights in HuggingFace format. |
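
Any of the merged or fully fine-tuned checkpoints above can be chatted with through upstream LLaVA's command-line interface once the codebase has been patched as described in the Phi-3-V / LLaMA-3-V sections below. The following is a hedged sketch rather than a command from this repository; the model id and image URL are assumptions for illustration.

# Hedged inference sketch using upstream LLaVA's CLI; run from inside LLaVA/
# after applying the Phi-3-V or LLaMA-3-V file copies described below.
# The model id and image URL are assumptions for illustration.
python -m llava.serve.cli \
    --model-path MBZUAI/LLaVA-Phi-3-mini-4k-instruct \
    --image-file "https://llava-vl.github.io/static/images/view.jpg"

For a LoRA-only checkpoint, the corresponding base LLM would additionally be passed via --model-base instead of pointing --model-path at a merged model.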

Installation

git clone https://github.com/mbzuai-oryx/LLaVA-pp.git
cd LLaVA-pp
git submodule update --init --recursive

Package you need to update on top of the default LLaVA installation:

pip install git+https://github.com/huggingface/transformers@a98c41798cf6ed99e1ff17e3792d6e06a2ff2ff3
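
The commands above only fetch the code and pin the patched transformers build; the Python environment itself comes from the LLaVA submodule. A minimal setup sketch following upstream LLaVA's editable install is shown below; the environment name and the optional flash-attn step are assumptions, not instructions from this repository.

# Hedged environment sketch following upstream LLaVA's install (names assumed).
conda create -n llava-pp python=3.10 -y
conda activate llava-pp

cd LLaVA
pip install -e .                               # core LLaVA dependencies
pip install -e ".[train]"                      # extra training dependencies
pip install flash-attn --no-build-isolation    # optional, faster attention

# Re-apply the pinned transformers commit last, so the editable install
# does not override it with a different version.
pip install git+https://github.com/huggingface/transformers@a98c41798cf6ed99e1ff17e3792d6e06a2ff2ff3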

🚀 Phi-3-V

To integrate Phi-3-V with LLaVA, follow these steps to update the codebase:

# Copy necessary files
cp Phi-3-V/train.py LLaVA/llava/train/train.py
cp Phi-3-V/llava_phi3.py LLaVA/llava/model/language_model/llava_phi3.py
cp Phi-3-V/builder.py LLaVA/llava/model/builder.py
cp Phi-3-V/model__init__.py LLaVA/llava/model/__init__.py
cp Phi-3-V/main__init__.py LLaVA/llava/__init__.py
cp Phi-3-V/conversation.py LLaVA/llava/conversation.py

# Training commands
cp scripts/Phi3-V_pretrain.sh LLaVA/Phi3-V_pretrain.sh
cp scripts/Phi3-V_finetune_lora.sh LLaVA/Phi3-V_finetune_lora.sh

Train Phi-3-V

  1. Pre-train
cd LLaVA
bash Phi3-V_pretrain.sh
  2. Finetune
cd LLaVA
bash Phi3-V_finetune_lora.sh
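
The LoRA run above produces adapter weights only. To obtain a single standalone checkpoint like the merged models in the Model-Zoo, the adapters can be folded into the base LLM with upstream LLaVA's merge script; a hedged sketch follows, where all paths and the base model id are placeholders.

# Hedged sketch: merge the LoRA adapters into the base Phi-3 model using
# upstream LLaVA's merge script (all paths below are placeholders).
cd LLaVA
python scripts/merge_lora_weights.py \
    --model-path ./checkpoints/llava-phi3-mini-lora \
    --model-base microsoft/Phi-3-mini-4k-instruct \
    --save-model-path ./checkpoints/llava-phi3-mini-merged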

🚀 LLaMA-3-V

To integrate LLaMA-3-V with LLaVA, follow these steps to update the codebase:

# Copy necessary files
cp LLaMA-3-V/train.py LLaVA/llava/train/train.py
cp LLaMA-3-V/conversation.py LLaVA/llava/conversation.py
cp LLaMA-3-V/builder.py LLaVA/llava/model/builder.py
cp LLaMA-3-V/llava_llama.py LLaVA/llava/model/language_model/llava_llama.py

# Training commands
cp scripts/LLaMA3-V_pretrain.sh LLaVA/LLaMA3-V_pretrain.sh
cp scripts/LLaMA3-V_finetune_lora.sh LLaVA/LLaMA3-V_finetune_lora.sh

Train LLaMA-3-V

  1. Pre-train
cd LLaVA
bash LLaMA3-V_pretrain.sh
  2. Finetune
cd LLaVA
bash LLaMA3-V_finetune_lora.sh
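
The lmms-eval toolkit acknowledged below is the usual way to reproduce benchmark numbers like those reported above. A hedged sketch of how such an evaluation is typically launched against one of the released checkpoints is shown here; the task list, process count, and model id are assumptions.

# Hedged evaluation sketch using the lmms-eval toolkit; task list, process
# count, and model id are assumptions for illustration.
pip install lmms-eval
accelerate launch --num_processes=1 -m lmms_eval \
    --model llava \
    --model_args pretrained="MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-FT" \
    --tasks scienceqa_img,mme \
    --batch_size 1 \
    --output_path ./logs/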

πŸ™ Acknowledgement

We are thankful to LLaVA, lmms-eval and S2-Wrapper for releasing their models and code as open-source contributions.

If you face any issues or have any questions, please feel free to create an issue or reach out at [email protected] & [email protected].

📜 Citation

  @misc{hanoona2024LLaVA++,
          title={LLaVA++: Extending Visual Capabilities with LLaMA-3 and Phi-3},
          author={Rasheed, Hanoona and Maaz, Muhammad and Khan, Salman and Khan, Fahad S.},
          url={https://github.com/mbzuai-oryx/LLaVA-pp},
          year={2024}
  }
