- Bologna, Italy
- @loretoparisi
2x faster than JSON.stringify()
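This description matches Fastify's fast-json-stringify; a minimal sketch, assuming that package, of how compiling a JSON Schema ahead of time beats generic serialization:

```ts
// Sketch assuming the fast-json-stringify package: the schema is compiled
// once into a serializer specialized for that object shape, which is what
// makes it faster than generic JSON.stringify().
import fastJson from 'fast-json-stringify'

const stringify = fastJson({
  title: 'User',
  type: 'object',
  properties: {
    name: { type: 'string' },
    age: { type: 'integer' }
  }
})

console.log(stringify({ name: 'Ada', age: 36 })) // {"name":"Ada","age":36}
```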
Fast Differentiable Tensor Library in JavaScript and TypeScript with Bun + Flashlight
Utilities to use the Hugging Face Hub API
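Presumably the @huggingface/hub package; a short sketch, assuming its listModels helper:

```ts
// Sketch assuming the @huggingface/hub package; listModels yields model
// metadata as an async iterator. The exact fields used here (name,
// downloads) are assumptions based on the Hub API's model listing.
import { listModels } from '@huggingface/hub'

for await (const model of listModels({ search: { query: 'sentiment' }, limit: 5 })) {
  console.log(model.name, model.downloads)
}
```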
Bringing Stable Diffusion models to web browsers. Everything runs inside the browser, with no server required.
Run modern deep learning models in the browser.
The simplest way to run LLaMA on your local machine
State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
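A minimal sketch of transformers.js' documented pipeline API (the @xenova/transformers package name is assumed):

```ts
// Sketch using the transformers.js pipeline API. The model downloads
// once, is cached, and then runs fully client-side, no server involved.
import { pipeline } from '@xenova/transformers'

const classifier = await pipeline('sentiment-analysis')
const output = await classifier('I love running models in the browser!')
console.log(output) // e.g. [{ label: 'POSITIVE', score: 0.99... }]
```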
Lord of Large Language Models Web User Interface
SemanticFinder - frontend-only live semantic search with transformers.js
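A sketch of the idea behind frontend-only semantic search, assuming transformers.js' feature-extraction pipeline and the Xenova/all-MiniLM-L6-v2 model:

```ts
// Embed texts with a feature-extraction pipeline, mean-pool and
// L2-normalize, then rank by dot product (equal to cosine similarity
// for normalized vectors). All of it runs in the browser.
import { pipeline } from '@xenova/transformers'

const embedder = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2')

async function embed(text: string): Promise<number[]> {
  const out = await embedder(text, { pooling: 'mean', normalize: true })
  return Array.from(out.data as Float32Array)
}

const dot = (a: number[], b: number[]) => a.reduce((s, x, i) => s + x * b[i], 0)

const query = await embed('how do I cook pasta?')
for (const doc of ['Boil water and add the spaghetti.', 'GPUs accelerate matrix math.']) {
  console.log(dot(query, await embed(doc)).toFixed(3), doc)
}
```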
TensorFlow Node.js examples
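A minimal sketch, assuming the @tensorflow/tfjs-node package:

```ts
// TensorFlow.js with the Node backend: ops execute on the native
// TensorFlow C binding rather than in pure JavaScript.
import * as tf from '@tensorflow/tfjs-node'

const a = tf.tensor2d([[1, 2], [3, 4]])
const b = tf.tensor2d([[5, 6], [7, 8]])
a.matMul(b).print() // [[19, 22], [43, 50]]
```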
Run a GPT model in the browser with WebGPU. An implementation of GPT inference in roughly 1,500 lines of vanilla JavaScript.
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
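A browser-side sketch, assuming the onnxruntime-web package; the model path and input name are hypothetical:

```ts
// Sketch of ONNX Runtime Web inference. The feed name ('input') and
// tensor shape must match the actual exported ONNX graph.
import * as ort from 'onnxruntime-web'

const session = await ort.InferenceSession.create('./model.onnx')
const input = new ort.Tensor('float32', Float32Array.from([1, 2, 3, 4]), [1, 4])
const results = await session.run({ input })
console.log(results)
```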
High-performance In-browser LLM Inference Engine
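This matches WebLLM; a sketch assuming the @mlc-ai/web-llm package, with an illustrative model id:

```ts
// Sketch assuming @mlc-ai/web-llm; the model id is illustrative.
// Weights download once into the browser cache, then inference runs
// locally on WebGPU through an OpenAI-style chat API.
import { CreateMLCEngine } from '@mlc-ai/web-llm'

const engine = await CreateMLCEngine('Llama-3-8B-Instruct-q4f16_1-MLC')
const reply = await engine.chat.completions.create({
  messages: [{ role: 'user', content: 'Say hello from the browser.' }],
})
console.log(reply.choices[0].message.content)
```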
Add local LLMs to your Web or Electron apps! Powered by Rust + WebGPU
hnswlib-node provides Node.js bindings for Hnswlib
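A minimal sketch of the HierarchicalNSW API as exposed by hnswlib-node:

```ts
// Sketch of hnswlib-node usage; metric names and argument order are
// assumed from the upstream hnswlib bindings.
import { HierarchicalNSW } from 'hnswlib-node'

const dim = 4
const index = new HierarchicalNSW('cosine', dim) // space: 'l2' | 'ip' | 'cosine'
index.initIndex(1000)                            // max number of elements

index.addPoint([0.1, 0.2, 0.3, 0.4], 0)          // vector, integer label
index.addPoint([0.4, 0.3, 0.2, 0.1], 1)

const { neighbors, distances } = index.searchKnn([0.1, 0.2, 0.3, 0.4], 1)
console.log(neighbors, distances)                // nearest labels and distances
```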
hnswlib-wasm attempts to create a browser-friendly version of hnswlib
Believe in AI democratization: LLaMA for Node.js, backed by llama-rs, llama.cpp, and rwkv.cpp. Works locally on your laptop CPU and supports LLaMA/Alpaca/GPT4All/Vicuna/RWKV models.
Tensor computation with WebGPU acceleration
Comlink makes Web Workers enjoyable.
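A sketch of the core Comlink pattern: expose an API inside a worker, wrap the worker on the main thread, and call it as if it were local:

```ts
// worker.ts: expose an object to whoever holds the other end.
import * as Comlink from 'comlink'
Comlink.expose({ add: (a: number, b: number) => a + b })

// main.ts: wrap the worker; every call is proxied over postMessage
// and returns a Promise.
import * as Comlink from 'comlink'
const worker = new Worker(new URL('./worker.ts', import.meta.url), { type: 'module' })
const api = Comlink.wrap<{ add(a: number, b: number): number }>(worker)
console.log(await api.add(2, 3)) // 5
```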
Browser-compatible JS library for running language models