RewardBench: the first evaluation tool for reward models.
Dataset Crafting w/ RAG/Wikipedia ground truth and Efficient Fine-Tuning Using MLX and Unsloth. Includes configurable dataset annotation editor Gradio UI.
Convert Compute And Books Into Instruct-Tuning Datasets (or classifiers)!
Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
Large Language Model Text Generation Inference
Scalable Meta-Evaluation of LLMs as Evaluators
An open-sourced LLM judge for evaluating LLM-generated answers.
Arena-Hard-Auto: An automatic LLM benchmark.
AI for all: Build the large graph of the language models
Minimalistic large language model 3D-parallelism training
Easily embed, cluster and semantically label text datasets
A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap
Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate, Groq (100+ LLMs)
Superfast AI decision making and intelligent processing of multi-modal data.
All available datasets for Instruction Tuning of Large Language Models
LightEval is a lightweight LLM evaluation suite that Hugging Face has been using internally, alongside its recently released LLM data processing library datatrove and LLM training library nanotron.
A library for easily merging multiple LLM experts and efficiently training the merged LLM.
Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks.
LLM training code for Databricks foundation models
A WebUI for Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
alirezamshi / small100
Forked from alirezamshi-zz/small100
Implementation of "SMaLL-100: Introducing Shallow Multilingual Machine Translation Model for Low-Resource Languages" paper, accepted to EMNLP 2022.
An open science effort to benchmark legal reasoning in foundation models
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"