@mlsys-seo MLSYS Lab, Hanyang University
- Seoul, South Korea
Starred repositories
PyTorch native quantization and sparsity for training and inference
High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.
achimnol / aiomonitor-ng (forked from aio-libs/aiomonitor): a module that adds monitoring and Python REPL capabilities to asyncio applications
ReFT: Representation Finetuning for Language Models
A Korean-language tutorial based on the official LangChain documentation, Cookbook, and other practical examples, showing how to use LangChain more easily and effectively.
Regression Transformer (2023; Nature Machine Intelligence)
Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
Tools for merging pretrained large language models.
Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
AI-Video-Cropper is a Python-based tool that leverages the power of GPT-4 (OpenAI's language model) to automatically analyze videos, extract the most interesting sections, and crop them for improve…
Specify what you want it to build, the AI asks for clarification, and then builds it. Completely separate team and codebase from the AI Web App builder https://gptengineer.app
A repository intended to help machine learning beginners and those preparing a study group.
The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training”
🔥Highlighting the top ML papers every week.
Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".
4-bit quantization of LLaMA using GPTQ
Running large language models on a single GPU for throughput-oriented scenarios.
Resource-adaptive cluster scheduler for deep learning training.
Backend.AI is a streamlined, container-based computing cluster platform that hosts popular computing/ML frameworks and diverse programming languages, with pluggable heterogeneous accelerator suppor…
Unlock vGPU functionality for consumer grade GPUs.
🎓 Sharing machine learning course / lecture notes.
Betty: an automatic differentiation library for generalized meta-learning and multilevel optimization