A WebUI for Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
An unofficial https://bgm.tv UI-first app client for Android and iOS, built with React Native. An ad-free, hobby-driven, non-profit, Douban-style anime-tracking third-party client for bgm.tv, dedicated to ACG. Redesigned for mobile, it ships many enhanced features that are hard to implement on the web version and offers substantial customization options. Currently supports iOS / Android / WSA, mobile / basic tablet layouts, light / dark themes, and the mobile web.
Mixture-of-Experts for Large Vision-Language Models
PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538
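For context, the core idea of that paper is a trainable router that sends each token to only its top-k experts and mixes their outputs by the renormalized gate weights. Below is a minimal, illustrative PyTorch sketch of such top-k gating, not the repository's actual API: the class name and dense per-expert loop are assumptions for readability, and the paper's noise term and load-balancing loss are omitted.

```python
# Minimal sketch of top-k sparse gating in the spirit of Shazeer et al. (2017).
# Illustrative only: simplified routing, no noisy gating or load-balancing loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model). Route each token to its top-k experts.
        logits = self.gate(x)                       # (batch, n_experts)
        topv, topi = logits.topk(self.k, dim=-1)    # keep the k largest logits
        weights = F.softmax(topv, dim=-1)           # renormalize over those k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, slot] == e           # tokens whose slot-th pick is e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(1) * expert(x[mask])
        return out

# Toy usage: y = TopKMoE(d_model=16, d_hidden=32)(torch.randn(4, 16))
```

A real implementation dispatches tokens to experts in batched form rather than looping, which is exactly the kind of efficiency work projects like Tutel (above) focus on.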
MindSpore online courses: Step into LLM
Tutel MoE: An Optimized Mixture-of-Experts Implementation
A libGDX cross-platform API for in-app purchasing.
Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models
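For context, Adan maintains exponential moving averages of the gradient, the gradient difference between steps, and a squared combined term, then takes a Nesterov-style step. The sketch below follows my reading of the paper's update rule (arXiv:2208.06677) without bias correction; the function name, state-dict layout, and defaults are assumptions for illustration, not the repository's optimizer API.

```python
# Rough sketch of one Adan update step (no bias correction); illustrative only.
import torch

@torch.no_grad()
def adan_step(p, g, state, lr=1e-3, b1=0.98, b2=0.92, b3=0.99, eps=1e-8, wd=0.0):
    """Apply one Adan-style update to parameter p given gradient g."""
    if not state:  # lazy init on the first step
        state["m"] = torch.zeros_like(p)   # EMA of gradients
        state["v"] = torch.zeros_like(p)   # EMA of gradient differences
        state["n"] = torch.zeros_like(p)   # EMA of the squared combined term
        state["g_prev"] = g.clone()
    diff = g - state["g_prev"]
    state["m"].mul_(b1).add_(g, alpha=1 - b1)
    state["v"].mul_(b2).add_(diff, alpha=1 - b2)
    combined = g + b2 * diff               # Nesterov-style lookahead term
    state["n"].mul_(b3).addcmul_(combined, combined, value=1 - b3)
    denom = state["n"].sqrt().add_(eps)
    p.sub_((state["m"] + b2 * state["v"]) / denom, alpha=lr)
    p.div_(1 + lr * wd)                    # decoupled weight decay
    state["g_prev"] = g.clone()

# Toy usage: minimize (p - 3)^2 with hand-computed gradients.
p, state = torch.tensor([0.0]), {}
for _ in range(200):
    adan_step(p, 2 * (p - 3.0), state, lr=0.1)
```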
⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training
Chinese Mixtral Mixture-of-Experts large language models (Chinese Mixtral MoE LLMs)
MOE is an event-driven OS for 8/16/32-bit MCUs. MOE stands for "Minds Of Embedded system"; it's also the name of my lovely baby daughter 😎
🍙 A curated list of my favorite visual novels for Android
Official LISTEN.moe Android app
Fork of Moe Counter powered by Cloudflare Workers.
Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).
Large-scale 4D-parallelism pre-training for 🤗 transformers Mixture-of-Experts models *(still a work in progress)*
Batch-download high-quality videos from https://twist.moe