Stars
Model components of the Llama Stack APIs
Project Malmo is a platform for Artificial Intelligence experimentation and research built on top of Minecraft. We aim to inspire a new generation of research into challenging new problems presente…
Open source Claude Artifacts – built with Llama 3.1 405B
This repo contains the source code for RULER: What’s the Real Context Size of Your Long-Context Language Models?
Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work!
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
An API standard for multi-agent reinforcement learning environments, with popular reference environments and related utilities
An API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym)
Speech To Speech: an effort toward an open-source, modular GPT-4o
🤘 TT-NN operator library and TT-Metalium low-level kernel programming model.
Official inference repo for FLUX.1 models
ReVanced / GmsCore (forked from microg/GmsCore): Free implementation of Play Services
Free open source office suite with business productivity tools: document and project management, CRM, mail aggregator.
EfficientQAT: Efficient Quantization-Aware Training for Large Language Models
Official inference library for Mistral models
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
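JAX's transformations compose freely, so differentiation, vectorization, and compilation can be stacked on one function; a small sketch, assuming `jax` is installed (the loss function here is illustrative):

```python
# Composing jax.grad, jax.vmap, and jax.jit on a toy loss; assumes `jax` is installed.
import jax
import jax.numpy as jnp

def loss(w, x):
    # simple scalar quadratic loss in w
    return jnp.sum((w * x - 1.0) ** 2)

grad_loss = jax.grad(loss)                        # differentiate w.r.t. first argument
batched = jax.vmap(grad_loss, in_axes=(None, 0))  # vectorize over a batch of x
fast = jax.jit(batched)                           # JIT-compile the composed function

xs = jnp.arange(4.0)
g = fast(2.0, xs)  # per-example gradients: 2 * (w*x - 1) * x
```

Because each transformation returns an ordinary function, the order of composition is up to the caller.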
Simulated annealing for neural networks with JAX.
Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
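The +1/-1 constraint described above is typically realized with a sign function in the forward pass and a straight-through estimator in the backward pass; a minimal NumPy sketch (function names are illustrative, not the repo's API):

```python
# Sketch of BinaryNet-style weight binarization with a straight-through
# estimator; assumes NumPy. Names are illustrative.
import numpy as np

def binarize(w):
    # forward pass: sign(w), with sign(0) mapped to +1
    return np.where(w >= 0, 1.0, -1.0)

def straight_through_grad(w, upstream_grad):
    # backward pass: pass the gradient through only where |w| <= 1,
    # since sign() itself has zero gradient almost everywhere
    return upstream_grad * (np.abs(w) <= 1.0)

w = np.array([-1.5, -0.2, 0.0, 0.7])
wb = binarize(w)                                # binarized weights
g = straight_through_grad(w, np.ones_like(w))   # clipped identity gradient
```

The real-valued weights are kept and updated during training; only the binarized copies are used in the forward and backward computations.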
An efficient utility for computing image similarity using deep neural networks.
SMDK, the Scalable Memory Development Kit, is developed for the Samsung CXL (Compute Express Link) Memory Expander to enable a full-stack Software-Defined Memory system