Starred repositories
Full-stack, modern web application template using FastAPI, React, SQLModel, PostgreSQL, Docker, GitHub Actions, automatic HTTPS, and more.
Integration of the FastAPI framework (backed by Pydantic) with the SQLAlchemy ORM and PostgreSQL on the asyncpg driver
ProtonDB Badges is a plugin for Decky Loader to display tappable ProtonDB badges on your game pages
Installs the latest GE-Proton, installs non-Steam launchers under one Proton prefix folder, and adds them to your Steam library. Installs... Battle.net, Epic Games, Ubisoft, GOG, EA App, Amazon Gam…
Documentation on setting up an LLM server on Debian from scratch, using Ollama, Open WebUI, and OpenedAI Speech.
An extremely fast Python package and project manager, written in Rust.
A guide for technical professionals looking to start consulting
Web UI for AutoGen (a framework for multi-agent LLM applications)
Connect and chat with your multiple documents (PDF and TXT) through GPT-3.5, GPT-4 Turbo, Claude, and local open-source LLMs
🪢 Langfuse Python SDK - Instrument your LLM app with decorators or low-level SDK and get detailed tracing/observability. Works with any LLM or framework
DSPy: The framework for programming—not prompting—foundation models
Auto-Instrumentation for AI Observability
Get up and running with Llama 3.1, Mistral, Gemma 2, and other large language models.
Chat language model that can use tools and interpret the results
A language for constraint-guided and efficient LLM programming.
A guidance compatibility layer for llama-cpp-python
A programming framework for agentic AI 🤖
Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
Integrate cutting-edge LLM technology quickly and easily into your apps
🔥 Turn entire websites into LLM-ready markdown or structured data. Scrape, crawl and extract with a single API.
Simple Chainlit UI for running LLMs locally using Ollama and LangChain
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
A high-throughput and memory-efficient inference and serving engine for LLMs
Comparison of Language Model Inference Engines
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficie…
Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.