- Max Planck Institute for Psycholinguistics
- Nijmegen, NL
- https://lpag.de
- @[email protected]
- @LukasGalke
Stars
- Aim 💫 — An easy-to-use & supercharged open-source experiment tracker.
- An interpreter for RASP as described in the ICML 2021 paper "Thinking Like Transformers".
- Unofficial PyTorch implementation of DeepMind's PNAS 2017 paper "Overcoming Catastrophic Forgetting".
- Large Language Model (LLM) applications and tools running in real time on Apple Silicon with Apple MLX.
- Graph Neural Network library made for Apple Silicon.
- 🎉 PILOT: A Pre-trained Model-Based Continual Learning Toolbox.
- [TMLR 2024] Efficient Large Language Models: A Survey.
- A benchmark for distribution shift in tabular data.
- Build presentation slides with Markdeep and present them right in your browser.
- PyTorch implementation of various continual-learning methods (XdG, EWC, SI, LwF, FROMP, DGR, BI-R, ER, A-GEM, iCaRL, Generative Classifier) in three different scenarios.
- A collection of design patterns/idioms in Python.
- Robust recipes to align language models with human and AI preferences.
- Must-read papers on textual adversarial attack and defense.
- A privacy-first, open-source platform for knowledge management and collaboration. Download link: https://github.com/logseq/logseq/releases. Roadmap: https://trello.com/b/8txSM12G/roadmap
- Run any open-source LLM, such as Llama 3.1 or Gemma, as an OpenAI-compatible API endpoint in the cloud.
- A suite of test scenarios for multi-agent reinforcement learning.
- LaTeX template for the journal "Language Development Research".
- A LaTeX style and template for paper preprints (based on the NIPS style).
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile.
- Implementations of growing and pruning in neural networks.
- Make Zotero effective for us LaTeX holdouts.
- arXiv LaTeX Cleaner: Easily clean the LaTeX code of your paper to submit to arXiv.
- [NeurIPS '21 Outstanding Paper] Library for reliable evaluation on RL and ML benchmarks, even with only a handful of seeds.