Naver Corp.
- https://taekyoon.github.io/my_resume/
Stars
GPU environment and cluster management with LLM support
An extended project of the LLM Compiler paper, focusing on developing LLM-based Autonomous Agents.
Go concurrency with channel transformations: a toolkit for streaming, batching, pipelines, and error handling
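The entry above describes channel transformations for streaming, batching, and pipelines. As a minimal sketch of the batching pattern (assumed illustration only, not the toolkit's actual API — `batch` and its signature are hypothetical):

```go
package main

import "fmt"

// batch groups values from in into slices of size n,
// flushing the final partial batch when in closes.
func batch(in <-chan int, n int) <-chan []int {
	out := make(chan []int)
	go func() {
		defer close(out)
		buf := make([]int, 0, n)
		for v := range in {
			buf = append(buf, v)
			if len(buf) == n {
				out <- buf
				buf = make([]int, 0, n)
			}
		}
		if len(buf) > 0 {
			out <- buf // emit the trailing partial batch
		}
	}()
	return out
}

func main() {
	in := make(chan int)
	go func() {
		for i := 1; i <= 5; i++ {
			in <- i
		}
		close(in)
	}()
	for b := range batch(in, 2) {
		fmt.Println(b)
	}
}
```

Stages like this compose by chaining channels, which is the core idea behind such pipeline toolkits.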
A blazing fast inference solution for text embeddings models
See how to augment LLMs with real-time data for dynamic, context-aware apps (RAG + agents).
Google TPU optimizations for transformer models
Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verified research papers.
Unofficial PyTorch/🤗Transformers(Gemma/Llama3) implementation of Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention
RAG AutoML Tool - Find optimal RAG pipeline for your own data.
日本語LLMまとめ - Overview of Japanese LLMs
This tool automatically generates grammatically valid synthetic code-mixed data by utilizing linguistic theories such as the Equivalence Constraint Theory and the Matrix Language Theory.
Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
Train GEMMA on TPU/GPU! (Codebase for training Gemma-Ko Series)
Accelerated First Order Parallel Associative Scan
Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks.
Minimal implementation of the Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models paper (arXiv 2401.01335)
Minimalistic large language model 3D-parallelism training
Sakura-SOLAR-DPO: Merge, SFT, and DPO
A curated list of Large Language Model (LLM) Interpretability resources.
Accelerate and optimize performance with streamlined training and serving options in JAX.