Starred repositories
The Open Cookbook for Top-Tier Code Large Language Model
An automated AI system (Python framework) designed to analyze any type of website content and generate structured reports using the Claude 3.5 Sonnet API and Firecrawl. While currently configured for e…
The first AI agent that builds third-party integrations by reverse-engineering platforms' internal APIs.
An AI web browsing framework focused on simplicity and extensibility.
A simple screen parsing tool towards a pure vision-based GUI agent
🪄 Create rich visualizations with AI
⚙️ Convert HTML to Markdown. Even works with entire websites and can be extended through rules.
A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 and reasoning techniques.
The only fully local production-grade Super SDK that provides a simple, unified, and powerful interface for calling more than 200 LLMs.
A compilation of the best multi-agent papers
The Enterprise-Grade Production-Ready Multi-Agent Orchestration Framework. Join our community: https://discord.com/servers/agora-999382051935506503
Educational framework exploring ergonomic, lightweight multi-agent orchestration. Managed by the OpenAI Solution team.
Business intelligence as code: build fast, interactive data visualizations in pure SQL and markdown
Finetune Llama 3.2, Mistral, Phi, Qwen & Gemma LLMs 2-5x faster with 80% less memory
A helper script that collects the contents of a repo and places them in a single text file.
Unofficial PyTorch implementation of the Dom-LM paper.
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?"
Turn an entire GitHub repo into a single organized .txt file to use with LLMs (GPT-4, Claude Opus, Gemini, etc.)
Code for NeurIPS 2024 paper "AutoManual: Constructing Instruction Manuals by LLM Agents via Interactive Environmental Learning"
Entropy Based Sampling and Parallel CoT Decoding (see the sampling sketch after this list)
Official Repo for ICML 2024 paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji.
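For the entropy-based sampling entry above, here is a minimal, hedged sketch of the general idea, not that repository's implementation: measure the Shannon entropy of the next-token distribution and decode greedily when the model is confident, but sample with a higher temperature when it is uncertain. The threshold, temperature, and toy logits below are assumptions chosen only for illustration.

# Illustrative sketch of entropy-based sampling (assumed parameters, not the repo's code).
import numpy as np

def softmax(logits, temperature=1.0):
    # Numerically stable softmax with an optional temperature.
    z = (logits - logits.max()) / temperature
    p = np.exp(z)
    return p / p.sum()

def entropy(probs):
    # Shannon entropy in nats; the epsilon guards against log(0).
    return float(-(probs * np.log(probs + 1e-12)).sum())

def sample_next_token(logits, entropy_threshold=2.0, temperature=1.2, rng=None):
    # Low entropy: the model is confident, so take the argmax.
    # High entropy: the model is uncertain, so sample with a hotter distribution.
    rng = rng or np.random.default_rng()
    probs = softmax(logits)
    if entropy(probs) < entropy_threshold:
        return int(np.argmax(probs))
    hot = softmax(logits, temperature)
    return int(rng.choice(len(hot), p=hot))

# Toy usage with fake logits for a 5-token vocabulary.
logits = np.array([3.0, 0.5, 0.2, 0.1, 0.1])
print(sample_next_token(logits))

In a real decoder the same entropy signal could instead trigger branching into parallel chains of thought; the single-token switch above is just the smallest self-contained version of the idea.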