Stars
A high-quality, one-stop open-source data-extraction tool for converting PDF to Markdown and JSON.
A Comprehensive Toolkit for High-Quality PDF Content Extraction
High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5).
Slides, notes, and materials for the workshop
Get the sunset and sunrise times for a geolocation without having to access any API.
Real-time daylight prediction using Rhinoceros3D, Grasshopper, Ladybug Tools and pyTorch
Energym is an open source building simulation library designed to test climate control and energy management strategies on buildings in a systematic and reproducible way.
Framework for energy monitoring and measurement on NVIDIA Jetson boards
2024 up-to-date list of DATASETS, CODEBASES and PAPERS on Multi-Task Learning (MTL), from a machine-learning perspective.
PyTorch emulation library for Microscaling (MX)-compatible data formats
A method to increase the speed and lower the memory footprint of existing vision transformers.
An MLIR dialect to enable the efficient acceleration of ML models on CGRAs.
A comprehensive paper list on Vision Transformers/Attention, including papers, code, and related websites.
Must-read Papers of Parameter-Efficient Tuning (Delta Tuning) Methods on Pre-trained Models.
[NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity"
MICRO22 artifact evaluation for Sparseloop
NNtrainer is a software framework for training neural-network models on devices.
Code for reproducing "AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks" (NeurIPS 2021)
Sparsity-aware deep learning inference runtime for CPUs
[ICLR 2021] "Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, Shiyu Chang, Zhangyang Wang
[ICML 2022] "Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets" by Tianlong Chen, Xuxi Chen, Xiaolong Ma, Yanzhi Wang, Zhangyang Wang.
A portable framework to map DFG (dataflow graph, representing an application) on spatial accelerators.
RapidStream TAPA compiles task-parallel HLS programs into high-frequency FPGA accelerators.
Blaok / tapa
Forked from rapidstream-org/rapidstream-tapa. TAPA is a dataflow HLS framework that features fast compilation and an expressive programming model, and generates high-frequency FPGA accelerators. [See https://github.com/UCLA-VAST/tapa for issues & pull requests]