
Eagle

A framework streamlining Training, Finetuning, Evaluation and Deployment of Multimodal Language Models

Features

  • Diverse Model Support: Llama3, Phi, Mistral, Gemma, and more.
  • Versatile Image Encoding: CLIP, SigLIP, RADIO, and others.
  • Customization Made Simple: YAML config files and a CLI for easy adaptation.
  • Efficient Resource Utilization: Runs seamlessly on a single GPU.
  • Seamless Deployment: Docker locally, or on the cloud with SkyPilot.
  • Comprehensive Documentation: Guides and example datasets to get you started.
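The YAML-plus-CLI workflow above can be sketched as follows. This is a minimal stand-in, not Eagle's actual schema: the config keys, flag names, and the tiny flat-YAML parser are all hypothetical, chosen only to show how CLI flags might override values loaded from a config file.

```python
import argparse

# Hypothetical training config in a flat "key: value" style.
# None of these keys are confirmed to be Eagle's real schema.
CONFIG_TEXT = """\
model: llama3
vision_encoder: clip
learning_rate: 2e-5
batch_size: 16
"""

def parse_flat_yaml(text):
    """Parse flat 'key: value' lines into a dict (a minimal stand-in for a YAML loader)."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        config[key.strip()] = value.strip()
    return config

def apply_cli_overrides(config, argv):
    """Expose every config key as a CLI flag so the command line can override the YAML."""
    parser = argparse.ArgumentParser()
    for key, default in config.items():
        parser.add_argument(f"--{key}", default=default)
    return vars(parser.parse_args(argv))

config = parse_flat_yaml(CONFIG_TEXT)
merged = apply_cli_overrides(config, ["--batch_size", "8"])
print(merged["model"], merged["batch_size"])  # llama3 8
```

The YAML file sets sensible defaults; anything passed on the command line wins, which keeps experiment variations out of the config file.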

Table of Contents

  1. Introduction
  2. Supported Models
  3. Changelog
  4. Installation
  5. Pretrain
  6. Finetune
  7. Evaluate
  8. Inference
  9. Features to be Added
  10. Citation
  11. Acknowledgement

SUPPORTED MODELS

LLMs

  • Llama3
  • Phi
  • Mistral
  • Gemma

Vision Encoder/Transformer

Audio Encoder/Transformer

Video Encoder/Transformer

Multimodal

CHANGELOG (What's New)

  • Version 1.0.1:
    • Added support for distributed training.
    • Included accelerate library.
  • Version 1.0.0:
    • Initial release.

Installation

  1. Clone the repository from GitHub.
  2. Install dependencies using pip: pip install -r requirements.txt.
  3. Run setup.sh to set up the environment.
  4. Start using Eagle!

PRETRAIN

  • Pretrain multimodal models using any of the supported LLMs and vision encoders.
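Conceptually, LLaVA-style pretraining (the lineage this project acknowledges) trains a projection that maps frozen vision-encoder features into the LLM's embedding space. The toy sketch below shows only that shape transformation with made-up dimensions; the real framework, encoders, and dimensions are not taken from Eagle's code.

```python
import random

random.seed(0)

# Toy dimensions; real encoders map e.g. 1024-dim patches into a 4096-dim LLM space.
D_VIS, D_LLM, N_PATCHES = 4, 6, 3

def matmul(a, b):
    """Naive matrix multiply: (n x k) @ (k x m) -> (n x m)."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
            for i in range(len(a))]

# Frozen vision-encoder output: one feature vector per image patch.
patch_features = [[random.uniform(-1, 1) for _ in range(D_VIS)] for _ in range(N_PATCHES)]

# The trainable projector: maps vision features into the LLM embedding space.
projector = [[random.uniform(-1, 1) for _ in range(D_LLM)] for _ in range(D_VIS)]

visual_tokens = matmul(patch_features, projector)
print(len(visual_tokens), len(visual_tokens[0]))  # 3 6
```

The projected patch vectors are then interleaved with text token embeddings, so the LLM consumes the image as a short sequence of "visual tokens".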

FINETUNE

  • Fine-tune pretrained models on custom datasets or downstream tasks.
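Custom fine-tuning data for vision-language models is commonly stored in the LLaVA-style conversation format shown below. This schema is an assumption based on the acknowledged LLaVA lineage, not a confirmed Eagle format; check the project's dataset documentation for the exact fields.

```python
import json

# One instruction-tuning record in the LLaVA-style conversation format.
# The "<image>" placeholder marks where visual tokens are spliced into the prompt.
record = {
    "id": "sample-0001",
    "image": "images/sample-0001.jpg",
    "conversations": [
        {"from": "human", "value": "<image>\nWhat is shown in this picture?"},
        {"from": "gpt", "value": "A labrador retriever playing in the snow."},
    ],
}

# Datasets are typically a JSON list of such records; verify a round-trip works.
roundtrip = json.loads(json.dumps([record]))
print(roundtrip[0]["conversations"][0]["from"])  # human
```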

EVALUATE

  • Evaluate model performance using specified metrics and datasets.
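As a simple illustration of the kind of metric used here, exact-match accuracy scores a visual question answering model by the fraction of predictions that match the reference answer after normalization. The metric choice is illustrative; the source does not specify which metrics Eagle ships with.

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that exactly match the reference (case/whitespace-insensitive)."""
    norm = lambda s: " ".join(s.lower().split())
    matches = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return matches / len(references)

preds = ["A red car", "two dogs", "the Eiffel Tower"]
refs = ["a red car", "three dogs", "The Eiffel  Tower"]
print(round(exact_match_accuracy(preds, refs), 2))  # 0.67
```

Normalizing case and whitespace before comparing avoids penalizing purely cosmetic differences between model output and the reference.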

INFERENCE/DEPLOY

  • Deploy models for inference on new data or integrate them into existing systems.
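A deployed model is usually exposed as an HTTP endpoint that existing systems can call. The sketch below wires a placeholder generate function behind a minimal stdlib server; the route, payload shape, and function name are all hypothetical, and a real deployment would run the model inside a Docker container or a SkyPilot task as described above.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate(prompt):
    """Placeholder for the real model call; a deployed server would run the VLM here."""
    return f"echo: {prompt}"

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and answer with the model's output.
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"output": generate(payload["prompt"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind to an ephemeral port and serve from a background thread.
server = HTTPServer(("127.0.0.1", 0), InferenceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}",
    data=json.dumps({"prompt": "hi"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
server.shutdown()
print(result["output"])  # echo: hi
```

Swapping the placeholder for a real model call keeps the serving surface unchanged while the backend evolves.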

Features to be Added

  • Add support for accelerate.
  • Add support for additional Hugging Face models such as Falcon and MPT.

CITATION

@article{AdithyaSKolavi2024,
  title={Eagle: Unified Platform to train multimodal models},
  author={Adithya S Kolavi},
  year={2024},
  url={https://github.com/adithya-s-k/eagle}
}

ACKNOWLEDGEMENT

We would like to express our gratitude to the creators of LLaVA (Large Language and Vision Assistant) for providing the groundwork for our project. Visit their repository here.
