Stars
The official repo of Qwen-Audio (通义千问-Audio) chat & pretrained large audio language model proposed by Alibaba Cloud.
Evaluate the accuracy of LLM generated outputs
The official repo of Qwen2-Audio chat & pretrained large audio language model proposed by Alibaba Cloud.
The official repository for the paper: Evaluation of Retrieval-Augmented Generation: A Survey.
Code examples and jupyter notebooks for the Cohere Platform
OCR, layout analysis, reading order, table recognition in 90+ languages
Multilingual Medicine: Model, Dataset, Benchmark, Code
The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery 🧑‍🔬
A non-official CLI for Llama Index Parser
Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
[ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model
This repository collects all relevant resources about interpretability in LLMs
This is the official repository of our paper "MedTrinity-25M: A Large-scale Multimodal Dataset with Multigranular Annotations for Medicine"
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
PyTorch code and models for V-JEPA self-supervised learning from video.
Accelerating the development of large multimodal models (LMMs) with lmms-eval
Convert PDF to markdown quickly with high accuracy
Open source libraries and APIs to build custom preprocessing pipelines for labeling, training, and production machine learning workflows.
General technology for enabling AI capabilities w/ LLMs and MLLMs
This repository introduces PIXIU, an open-source resource featuring the first financial large language models (LLMs), instruction tuning data, and evaluation benchmarks to holistically assess financial LLMs.
The official repo of Pai-Megatron-Patch for LLM & VLM large scale training developed by Alibaba Cloud.
A collection of AWESOME things about mixture-of-experts
Reaching LLaMA2 Performance with 0.1M Dollars