Stars
Modeling, training, eval, and inference code for OLMo
Instruction Tuning with GPT-4
Implementation of the training framework proposed in 'Self-Rewarding Language Models', from Meta AI
A toolkit for practical Human-AI cooperation research
This is our own implementation of 'Layer Selective Rank Reduction'
📝 Minimalistic Vue-powered static site generator
The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction
Repo for reproduction of sequential social dilemmas
[ACL2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the diverse strengths of multiple open-source LLMs. LLM-Blender cut …
Simple and efficient PyTorch-native transformer text generation in <1000 lines of Python.
A collection of GPT system prompts and various prompt injection/leaking knowledge.
Modern C++ Programming Course (C++03/11/14/17/20/23/26)
Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024.
Gymnasium extension for DarkSouls III, Elden Ring, and other Souls games
A gamification of the "powergrid problem" using grid2op that allows you to "operate" a powergrid.
a state-of-the-art-level open visual language model | multimodal pretrained model
The paper list of the 86-page paper "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et al.
An Open-source Framework for Data-centric, Self-evolving Autonomous Language Agents
Easy and fast file sharing from the command-line.
An open-source implementation of Microsoft's VALL-E X zero-shot TTS model. Demo is available at https://plachtaa.github.io/vallex/
Tutorial on neural theorem proving
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.