- George Mason University, CS Department
- Fairfax, VA
- https://jianchaotan.github.io
Stars
- An infinite number of monkeys randomly throwing paint at a canvas
- [NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer
- Latent Consistency Model for the AUTOMATIC1111 Stable Diffusion WebUI
- Offers a toolset for comprehensive, multi-faceted large-scale data analysis and optimization
- Official code for the CVPR 2022 paper "Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space"
- SpikingJelly is an open-source deep learning framework for Spiking Neural Networks (SNNs) based on PyTorch
- A simple but complete full-attention transformer with a set of promising experimental features from various papers
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content
- Implementation of Nougat: Neural Optical Understanding for Academic Documents
- PyTorch implementation of "Rethinking Graph Neural Architecture Search from Message-passing" (CVPR 2021)
- Reduce end-to-end training time from days to hours (or hours to minutes), and cut energy requirements and costs by an order of magnitude, using coresets and data selection
- Text-to-image generation. The repo for the NeurIPS 2021 paper "CogView: Mastering Text-to-Image Generation via Transformers"
- Command-line tool to inspect the difference between (the text in) two PDF files
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs
- A C++ platform to perform parallel computation of optimisation tasks (global and local) via the asynchronous generalized island model
- [CVPR 2023 Highlight] Official implementation of "Stitchable Neural Networks"
- [TCAD'23] AccelTran: A Sparsity-Aware Accelerator for Transformers
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers
- A white paper discussing the security and privacy problems of large models
- A standalone GEMM kernel for fp16 activations and quantized weights, extracted from FasterTransformer