Stars
The official repository of the paper "(Perhaps) Beyond Human Translation: Harnessing Multi-Agent Collaboration for Translating Ultra-Long Literary Texts"
Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States
CodeGeeX4-ALL-9B, a versatile model for all AI software development scenarios, including code completion, code interpreter, web search, function calling, repository-level Q&A and much more.
[arXiv preprint] The official code of paper "Open-Vocabulary SAM".
Convert PDF to markdown quickly with high accuracy
A nearly-live implementation of OpenAI's Whisper.
1 minute of voice data is enough to train a good TTS model! (few-shot voice cloning)
GPT-4o for Windows, macOS, and Linux
A generative speech model for daily dialogue.
A modular graph-based Retrieval-Augmented Generation (RAG) system
Open Source framework for voice and multimodal conversational AI
An AI search engine inspired by Perplexity
AI Q&A Search Engine ➡️ An open-source AI search engine built on LangChain and SearXNG
Build a Perplexity-Inspired Answer Engine Using Next.js, Groq, Llama-3, LangChain, OpenAI, Upstash, Brave & Serper
GroqNotes: Generate organized notes from audio using Groq, Whisper, and Llama3
Agent framework and applications built upon Qwen2, featuring Function Calling, Code Interpreter, RAG, and Chrome extension.
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR.
Build AI Assistants with memory, knowledge and tools.
🦜🔗 Build context-aware reasoning applications
Development repository for the Triton language and compiler
Retrieval and Retrieval-augmented LLMs
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Accessible large language models via k-bit quantization for PyTorch.