IOE Pulchowk Campus, Nepal
sumityadav.com.np · @Rocker_Ritesh · in/rockerritesh
Starred repositories
MTCNN face detection implementation for TensorFlow, as a PIP package.
Efficient CUDA kernels for training convolutional neural networks with PyTorch.
Universal LLM Deployment Engine with ML Compilation
Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default & custom datasets for applications such as summarization and Q&A. Supporting…
🔥🕷️ Crawl4AI: Open-source LLM-Friendly Web Crawler & Scraper
MiniCPM3-4B: An edge-side LLM that surpasses GPT-3.5-Turbo.
⚡️HivisionIDPhotos: a lightweight and efficient AI ID-photo tool. (A lightweight AI algorithm for generating ID photos.)
Machine learning from scratch
Efficient Triton Kernels for LLM Training
Inference Vision Transformer (ViT) in plain C/C++ with ggml
Release for Improved Denoising Diffusion Probabilistic Models
Retrieval Augmented Generation (RAG) chatbot powered by Weaviate
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use th…
#1 Locally hosted web application that allows you to perform various operations on PDF files
Rich is a Python library for rich text and beautiful formatting in the terminal.
TextGrad: Automatic "Differentiation" via Text — using large language models to backpropagate textual gradients.
A modular graph-based Retrieval-Augmented Generation (RAG) system
A framework for serving and evaluating LLM routers - save LLM costs without compromising quality!
Streamlit wrapper for lightweight-charts
Streamlit — A faster way to build and share data apps.
Edit, preview and share mermaid charts/diagrams. New implementation of the live editor.
A natural language interface for computers
The #1 open-source voice interface for desktop, mobile, and ESP32 chips.
This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models.
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.