- Austin, TX
- (UTC -05:00)
- https://catid.io
Stars
Stop messing around with finicky sampling parameters and just use DRµGS!
Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?"
A library for unit scaling in PyTorch
[ICML 2024] "LoCoCo: Dropping In Convolutions for Long Context Compression", Ruisi Cai, Yuandong Tian, Zhangyang Wang, Beidi Chen
recursal / GoldFinch-paper
Forked from SmerkyG/GoldFinch-paper. GoldFinch and other hybrid transformer components
OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training
MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases. In ICML 2024.
Seamless operability between C++11 and Python
🥧 Savoury implementation of the QUIC transport protocol and HTTP/3
A GPU accelerated error-bounded lossy compression for scientific data.
The FastLanes Compression Layout: Decoding >100 Billion Integers per Second with Scalar Code
The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
An extensive node suite that enables ComfyUI to process 3D inputs (mesh and UV texture, etc.) using cutting-edge algorithms (3DGS, NeRF, etc.)
GRASS: Compute Efficient Low-Memory LLM Training with Structured Sparse Gradients
ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training
Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793
STAR: Scale-wise Text-to-image generation via Auto-Regressive representations
[CVPR 2024 Highlight] FoundationPose: Unified 6D Pose Estimation and Tracking of Novel Objects
BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment.