haileyschoelkopf (haileyschoelkopf.github.io · @haileysch__)
Stars
Sort by: Recently starred
FlagGems is an operator library for large language models implemented in Triton Language.
Multimodal language model benchmark, featuring challenging examples
A native PyTorch Library for large model training
PyTorch half-precision GEMM library with fused optional bias and optional ReLU/GELU
Fast modular code to create and train cutting edge LLMs
A simple and efficient Mamba implementation in pure PyTorch and MLX.
SWE-agent takes a GitHub issue and tries to automatically fix it using GPT-4 or the LM of your choice. It solves 12.47% of bugs in the SWE-bench evaluation set and takes just 1 minute to run.
A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton.
Simple and efficient pytorch-native transformer training and inference (batched)
Influence Functions with (Eigenvalue-corrected) Kronecker-Factored Approximate Curvature
Accelerated First Order Parallel Associative Scan
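The entry above refers to an accelerated parallel associative scan. As a rough illustration only (a pure-Python Hillis-Steele scan of my own, not the starred repo's accelerated implementation, and `parallel_scan` is a hypothetical name), the core idea is that any associative operator can be scanned in O(log n) combining steps:

```python
def parallel_scan(xs, op):
    """Hillis-Steele inclusive scan: each round combines elements
    `step` apart, so n elements need only O(log n) rounds. Every
    element in a round could be computed in parallel; here the list
    comprehension just simulates that on one thread."""
    xs = list(xs)
    n = len(xs)
    step = 1
    while step < n:
        # Build the next round entirely from the previous round's values.
        xs = [op(xs[i - step], xs[i]) if i >= step else xs[i]
              for i in range(n)]
        step *= 2
    return xs

# Inclusive prefix sums: [1, 3, 6, 10, 15]
print(parallel_scan([1, 2, 3, 4, 5], lambda a, b: a + b))
```

The same structure works for any associative `op` (max, matrix multiply, the linear-recurrence operators used in state-space models), which is what makes scan-based sequence models parallelizable.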
A scalable and robust tree-based speculative decoding algorithm
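For context on what speculative decoding does, here is a minimal sketch of the basic single-draft acceptance rule (not the tree-based variant the starred repo implements; `speculative_step` and the toy distributions are my own illustration): a cheap draft distribution proposes a token, and an accept/resample correction makes the result exactly distributed as the target model.

```python
import numpy as np

rng = np.random.default_rng(0)

def speculative_step(p_target, q_draft):
    """One token of speculative sampling: draw x ~ q_draft, accept it
    with probability min(1, p(x)/q(x)); on rejection, resample from
    the residual distribution max(p - q, 0) (renormalized). The
    returned token is distributed exactly according to p_target."""
    vocab = len(p_target)
    x = rng.choice(vocab, p=q_draft)
    if rng.random() < min(1.0, p_target[x] / q_draft[x]):
        return x
    residual = np.maximum(p_target - q_draft, 0.0)
    residual /= residual.sum()
    return rng.choice(vocab, p=residual)

# Toy check: empirical frequencies should match the target, not the draft.
p = np.array([0.7, 0.2, 0.1])   # expensive "target" model's distribution
q = np.array([0.5, 0.3, 0.2])   # cheap "draft" model's distribution
samples = [speculative_step(p, q) for _ in range(20000)]
print(samples.count(0) / len(samples))  # close to 0.7
```

The speedup comes from letting the draft model propose several tokens ahead and verifying them with a single target-model forward pass; tree-based variants verify many candidate continuations at once.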
A Native-PyTorch Library for LLM Fine-tuning
Language models scale reliably with over-training and on downstream tasks
Experiment of using Tangent to autodiff triton
Triton-based implementation of Sparse Mixture of Experts.
Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton
Ring attention implementation with flash attention