kristaller486
Starred repositories

3 stars written in C++

Local AI API Platform
C++ · 2,058 stars · 116 forks · Updated Nov 7, 2024

Tensor parallelism is all you need. Run LLMs on an AI cluster at home using any device. Distribute the workload, divide RAM usage, and increase inference speed.
C++ · 1,474 stars · 103 forks · Updated Oct 14, 2024

WebAssembly binding for llama.cpp - Enabling in-browser LLM inference
C++ · 427 stars · 21 forks · Updated Oct 31, 2024