vTuanpham (@TuanPham672604)
- FPTU HCM
- Ho Chi Minh City
- 20:27 (UTC +07:00)
- pe.nho.73307
Stars
Stateful load balancer custom-tailored for llama.cpp
Python library providing function decorators for configurable backoff and retry
PyMuPDF is a high performance Python library for data extraction, analysis, conversion & manipulation of PDF (and other) documents.
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
Ranger plugin that adds file glyphs / icon support to Ranger
Official inference repo for FLUX.1 models
Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use th…
Official repository of Evolutionary Optimization of Model Merging Recipes
Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024)
Crack WPA/WPA2 Wi-Fi Routers with Airodump-ng and Aircrack-ng/Hashcat
Robust recipes to align language models with human and AI preferences
A Next.js boilerplate with the famous open-source Bootstrap admin template, CoreUI.
TextGrad: Automatic "Differentiation" via Text -- using large language models to backpropagate textual gradients.
A blazing fast inference solution for text embeddings models
Understand Human Behavior to Align True Needs
Stable Diffusion implemented from scratch in PyTorch
Convert Compute And Books Into Instruct-Tuning Datasets (or classifiers)!
A pipeline parallel training script for LLMs.
A framework for serving and evaluating LLM routers - save LLM costs without compromising quality!
Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793