Starred repositories
🦜🔗 Build context-aware reasoning applications
A latent text-to-image diffusion model
Companion webpage to the book "Mathematics For Machine Learning"
A small set of Python functions to draw pretty maps from OpenStreetMap data. Based on osmnx, matplotlib and shapely libraries.
A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API
Official inference library for Mistral models
A collection of infrastructure and tools for research in neural network interpretability.
An easy-to-use blogging platform, with enhanced support for Jupyter Notebooks.
A computer science textbook
Productivity Tools for Plotly + Pandas
An interactive data visualization tool which brings matplotlib graphics to the browser using D3.
The hub for EleutherAI's work on interpretability and learning dynamics
Solutions to Reinforcement Learning: An Introduction
PyHessian is a PyTorch library for second-order-based analysis and training of neural networks
TruthfulQA: Measuring How Models Imitate Human Falsehoods
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal
Mechanistic Interpretability Visualizations using React
Emergent world representations: Exploring a sequence model trained on a synthetic task
Interpreting how transformers simulate agents performing RL tasks
Repo to accompany the paper "Implicit Self-Regularization in Deep Neural Networks..."
Code for reproducing figures and results in the paper "Early stopping in deep networks: Double descent and how to eliminate it"
Code for "Unifying Grokking and Double Descent" from the NeurIPS 2022 ML Safety Workshop.
Clone of https://gitlab.cs.washington.edu/pmp10/stat570.
Using a WBIC-like method to estimate quantities that behave qualitatively like the learning coefficient for large networks.
This repository contains code to run experiments and generate plots described in the paper.