junweima
- University of Toronto
- Toronto
- junweima.github.io
- in/jeremy-ma
Stars
Annotated version of the Mamba paper
MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248
A Benchmark of Tabular Machine Learning in-the-Wild with real-world industry-grade tabular datasets
The RedPajama-Data repository contains code for preparing large datasets for training large language models.
Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture"
A comprehensive collection of KAN (Kolmogorov-Arnold Network)-related resources, including libraries, projects, tutorials, papers, and more, for researchers and developers in the Kolmogorov-Arnold N…
Our maintained PFN repository. Come here to train SOTA PFNs.
Repository for CARTE: Context-Aware Representation of Table Entries
A Native-PyTorch Library for LLM Fine-tuning
[NeurIPS'21 Outstanding Paper] Library for reliable evaluation on RL and ML benchmarks, even with only a handful of seeds.
The implementation of "TabR: Unlocking the Power of Retrieval-Augmented Tabular Deep Learning"
Schedule-Free Optimization in PyTorch
The Python package for differential nearest neighbors regression (DNNR): raising KNN regression to the level of gradient boosting methods. Built on top of NumPy, scikit-learn, and Annoy.
Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning"
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
[ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models https://arxiv.org/abs/2012.15723
Transformers with Arbitrarily Large Context
[NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333
Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch