
OmniFusion

Hugging Face

OmniFusion is an advanced multimodal AI model designed to extend the capabilities of traditional language processing systems by integrating additional data modalities such as images and, potentially, audio, 3D, and video content.

ChangeLog

[01/04/2024] OmniFusion-1.1 weights are uploaded to Hugging Face. Now the model can speak Russian :)

[01/04/2024] Model training source code for OmniFusion-1.1 released

[22/11/2023] OmniFusion weights are available on Hugging Face

Architecture

The core of the open-source OmniFusion version is Mistral-7B. Initially focusing on images, we selected CLIP-ViT-L as the visual encoder for its efficient information transfer capabilities. The most important component of OmniFusion is its adapter, a mechanism that allows the language model to interpret and incorporate information from other modalities. The adapter is a single-layer, four-headed transformer, which has shown superior performance compared to simpler linear layers or MLP structures.

This adapter takes embeddings from the visual encoder (excluding the CLS token) and maps them into textual embeddings compatible with the language model.
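
Conceptually, the adapter can be pictured as a single transformer layer applied on top of a projection of the visual features. The PyTorch sketch below is only an illustration under those assumptions; the class name, the input projection, and the 1024/4096 dimensions (typical for CLIP-ViT-L and Mistral-7B) are ours, not the repository's actual code:

```python
# A minimal sketch of such an adapter, assuming PyTorch; the extra input
# projection and the dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class VisualAdapter(nn.Module):
    """One transformer layer with 4 heads mapping visual patch embeddings
    into the language model's embedding space."""

    def __init__(self, vis_dim: int = 1024, txt_dim: int = 4096, n_heads: int = 4):
        super().__init__()
        self.proj_in = nn.Linear(vis_dim, txt_dim)  # lift visual features to LLM width
        self.block = nn.TransformerEncoderLayer(
            d_model=txt_dim, nhead=n_heads, batch_first=True
        )  # single layer, four attention heads

    def forward(self, vis_embeds: torch.Tensor) -> torch.Tensor:
        # vis_embeds: (batch, num_patches, vis_dim), CLS token already dropped
        return self.block(self.proj_in(vis_embeds))  # (batch, num_patches, txt_dim)
```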

To further enhance the model's multimodal capabilities, we employ trainable special tokens to mark the beginning and end of visual data within the text sequence.
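
Roughly, this means the language model receives a mixed embedding sequence: text embeddings, a trainable start marker, the adapted visual embeddings, a trainable end marker, and the remaining text. The sketch below illustrates that splicing; the names in it (img_start, img_end, build_inputs) are hypothetical, not the repository's identifiers:

```python
# A minimal sketch of splicing adapted visual embeddings into the text
# sequence between trainable boundary embeddings (illustrative names only).
import torch
import torch.nn as nn

txt_dim = 4096
img_start = nn.Parameter(torch.randn(1, 1, txt_dim) * 0.02)  # trainable "begin image" embedding
img_end = nn.Parameter(torch.randn(1, 1, txt_dim) * 0.02)    # trainable "end image" embedding

def build_inputs(prefix_embeds, visual_embeds, suffix_embeds):
    """Concatenate text embeddings around the adapter's visual embeddings."""
    batch = visual_embeds.size(0)
    return torch.cat(
        [
            prefix_embeds,                    # embedded text before the image
            img_start.expand(batch, -1, -1),
            visual_embeds,                    # output of the adapter
            img_end.expand(batch, -1, -1),
            suffix_embeds,                    # embedded text after the image
        ],
        dim=1,
    )  # fed to the LLM as inputs_embeds rather than token ids
```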

The training process consists of two stages:

  1. Pre-training the adapter on image captioning tasks (LAION, CC-4M).
  2. Once the adapter has learned to map ViT's visual embeddings into the language model's textual space, we unfreeze Mistral for improved understanding of dialog formats and complex queries (see the sketch after this list).
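
A minimal sketch of this freeze/unfreeze schedule, assuming the adapter and encoder modules from the sketches above; the helper name and learning rates are placeholders, not the repository's actual configuration:

```python
# A minimal sketch of the two-stage schedule (illustrative, not the repo's code).
import torch

def configure_stage(adapter, language_model, vision_encoder, stage: int):
    """Freeze/unfreeze components for training stage 1 or 2."""
    def set_trainable(module, flag):
        for p in module.parameters():
            p.requires_grad = flag

    set_trainable(vision_encoder, False)       # CLIP-ViT-L stays frozen
    set_trainable(adapter, True)               # adapter trains in both stages
    set_trainable(language_model, stage == 2)  # Mistral-7B unfrozen only in stage 2

    trainable = [p for m in (adapter, language_model)
                 for p in m.parameters() if p.requires_grad]
    return torch.optim.AdamW(trainable, lr=1e-4 if stage == 1 else 2e-5)
```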

Results

OmniFusion was benchmarked against the latest multimodal SOTA models. It excelled in generative metrics and classification benchmarks like VisualDialog.

Update: OmniFusion-1.1 (with proprietary GigaChat LLM) results on various benchmarks:

Model Performance on Visual Dialog Benchmark

| Model      | NDCG  | MRR   | Recall@1 | Recall@5 | Recall@10 |
|------------|-------|-------|----------|----------|-----------|
| OmniFusion | 25.91 | 10.78 | 4.74     | 13.80    | 20.53     |
| LLaVA-13B  | 24.74 | 8.91  | 2.98     | 10.80    | 18.02     |

OmniFusion-1.1 (rus)

| Model                                  | TextVQA | ScienceQA | POPE   | GQA    | OK-VQA |
|----------------------------------------|---------|-----------|--------|--------|--------|
| OmniFusion-1.1 (one encoder, Mistral)  | 0.4893  | 0.6802    | 0.7818 | 0.4600 | 0.5187 |
| OmniFusion-1.1 (two encoders, Mistral) | 0.4755  | 0.6732    | 0.8153 | 0.4761 | 0.5317 |

Examples

Future Plans

We will soon release a public version of OmniFusion based on an open language model. Work is underway on a version that understands Russian, uses ImageBind encoders, and accepts more modalities (sound, 3D, video). Stay tuned for updates on GitHub!

Authors

The FusionBrain scientific group from the AIRI Institute, in collaboration with scientists from Sber AI, led the model's development.

Main contributors:

  • Anton Razzhigaev: Blog
  • Elizaveta Goncharova
  • Matvey Mikhalchuk
  • Maxim Kurkin
  • Irina Abdullaeva
  • Denis Dimitrov Blog
  • Andrey Kuznetsov Blog
