Ocean University of China
Stars
A PyTorch implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation"
A PyTorch implementation of the Transformer model in "Attention is All You Need".
Customize attachment location with variables ($filename, $data, etc.) like Typora.
thenickdude / KVM-Opencore
Forked from Leoyzen/KVM-Opencore. OpenCore disk image for running macOS VMs on Proxmox/QEMU
thenickdude / OSX-KVM
Forked from kholia/OSX-KVM. Personal fork for testing
Share some hackintosh Clover configuration files
"Deep Learning Tutorial by Hung-yi Lee" (recommended by Prof. Hung-yi Lee 👍). PDF download: https://github.com/datawhalechina/leedl-tutorial/releases
A summary of the knowledge a natural language processing (NLP) engineer needs to accumulate, including interview questions, fundamentals, and engineering skills, to build core competitiveness
[MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently
Graphic notes on Gilbert Strang's "Linear Algebra for Everyone"; Chinese edition of The Art of Linear Algebra. PRs welcome.
Graphic notes on Gilbert Strang's "Linear Algebra for Everyone"
A generative speech model for daily dialogue.
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
A PyTorch extension: tools for easy mixed precision and distributed training in PyTorch
A minimal GPU design in Verilog to learn how GPUs work from the ground up
LSPatch: A non-root Xposed framework extending from LSPosed
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
Instruct-tune LLaMA on consumer hardware
Unsupervised text tokenizer for Neural Network-based text generation.
🔥 A tool for visualizing and tracking your machine learning experiments. This repo contains the CLI and Python API.
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
Ongoing research training transformer models at scale
Code and documentation to train Stanford's Alpaca models, and generate the data.
ChatGLM-6B: An Open Bilingual Dialogue Language Model
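Several of the starred repositories above (the "Attention is All You Need" implementations, TensorRT-LLM, Megatron-LM) build on Transformer attention. A minimal NumPy sketch of scaled dot-product attention, the core operation those projects implement, with illustrative shapes chosen here for demonstration:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, per the paper."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq_q, seq_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key dimension
    return weights @ V                              # weighted sum of value vectors

# Illustrative toy shapes: 4 positions, head dimension 8
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

The production implementations above add multi-head projections, masking, and fused GPU kernels on top of this same core computation.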