Arcee.ai · Toronto · in/malikeh97
Lists (1)

Stars
Repo associated with the paper Multi-modal preference alignment remedies regression of visual instruction tuning on language model
This repository collects all relevant resources about interpretability in LLMs
[Knowledge Editing] [ACL 2024] An easy-to-use knowledge editing framework for LLMs.
PyTorch code and models for V-JEPA self-supervised learning from video.
Accelerating the development of large multimodal models (LMMs) with lmms-eval
Convert PDF to markdown quickly with high accuracy
Open source libraries and APIs to build custom preprocessing pipelines for labeling, training, or production machine learning pipelines.
General technology for enabling AI capabilities w/ LLMs and MLLMs
This repository introduces PIXIU, an open-source resource featuring the first financial large language models (LLMs), instruction tuning data, and evaluation benchmarks to holistically assess financial LLMs.
The official repo of Pai-Megatron-Patch for LLM & VLM large scale training developed by Alibaba Cloud.
A collection of AWESOME things about mixture-of-experts
Reaching LLaMA2 Performance with 0.1M Dollars
⚗️ distilabel is a framework for synthetic data and AI feedback, built for AI engineers who require high-quality outputs, full data ownership, and overall efficiency.
A one-stop repository for generative AI research updates, interview resources, notebooks, and much more!
An open science effort to benchmark legal reasoning in foundation models
Aligning Large Language Models with Human: A Survey
Robust recipes to align language models with human and AI preferences
Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
mmcquade11 / llm-autoeval
Forked from arcee-ai/llm-autoeval. Automatically evaluate your LLMs in Google Colab.
Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answer based on user queries.
A framework for few-shot evaluation of language models.
A framework for merging models solving different tasks with different initializations into one multi-task model without any additional training
Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google Deepmind
Tools for merging pretrained large language models.
[ACL 2024] Progressive LLaMA with Block Expansion.