nzw0301.github.io
Preferred Networks, Inc. / Preferred Elements, Inc. — Japan

Stars
An extremely fast Python package and project manager, written in Rust.
An Experiment on Dynamic NTK Scaling RoPE
Helpful tools and examples for working with flex-attention
Code for training a speech recognition model in which Whisper's decoder is replaced with llm-jp-1.3b-v1.0
YaRN: Efficient Context Window Extension of Large Language Models
LOFT: A 1 Million+ Token Long-Context Benchmark
LongRoPE is a novel method that extends the context window of pre-trained LLMs to an impressive 2048k tokens.
Doing simple retrieval from LLM models at various context lengths to measure accuracy
A throughput-oriented high-performance serving framework for LLMs
Efficient Triton Kernels for LLM Training
LaTeX style file for Transactions on Machine Learning Research
ClearML - Auto-Magical CI/CD to streamline your AI workload. Experiment Management, Data Management, Pipeline, Orchestration, Scheduling & Serving in one MLOps/LLMOps solution
Models and code for RepCodec: A Speech Representation Codec for Speech Tokenization
Official PyTorch implementation of "Unsqueeze [CLS] Bottleneck to Learn Rich Representations" (ECCV 2024)
Run PyTorch LLMs locally on servers, desktop and mobile
Pretty-print tabular data in Python, a library and a command-line utility. Repository migrated from bitbucket.org/astanin/python-tabulate.
[NeurIPS'24 Spotlight] EVE: Encoder-Free Vision-Language Models
PyTorch native quantization and sparsity for training and inference
A high-throughput and memory-efficient inference and serving engine for LLMs
Fast inference from large language models via speculative decoding