University of Liverpool - Liverpool

Stars
[ACL 2024] A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future
An open-source library designed for the evaluation of Spiking Neural Networks (SNNs).
A version of 相田くひを's The Complete Manual of Suicide (《完全自杀手册》), built on YuriMiller's translation with added annotations and with corrections for the errors pointed out in Issues; it will likely continue to be maintained, with errors raised in Issues and Discussions addressed, until the author bids farewell to the world.
A Simplified Chinese edition of The Complete Manual of Suicide (《完全自杀手册》); the author completed part of the translation, proofreading, and annotation work. Soon to bid farewell to this world, this book is offered as a final gift.
Code for the paper "Weight Expansion: A New Perspective on Dropout and Generalization"
Implementation for the paper: Uncertainty Estimation for 3D Dense Prediction via Cross-Point Embeddings
Implementation for the paper: STUN: Self-Teaching Uncertainty Estimation for Place Recognition
Implementation for the paper: AutoPlace: Robust Place Recognition with Single-chip Automotive Radar
https://arxiv.org/pdf/2304.01246.pdf
⏰ AI conference deadline countdowns
Neuromorphic paper list, automatically updated every day at 8:00 am GMT.
A simple way to deploy local llama-2 models with Docker
Graphic notes on Gilbert Strang's "Linear Algebra for Everyone"
🧑🏫 60+ Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), ga…
This repo is meant to serve as a guide for Machine Learning/AI technical interviews.
This code package implements the prototypical part network (ProtoPNet) from the paper "This Looks Like That: Deep Learning for Interpretable Image Recognition" (to appear at NeurIPS 2019), by Chaof…
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
XAI - An eXplainability toolbox for machine learning
Conversion from CNNs to SNNs using Tensorflow-Keras
The official code for the paper "Delving Deep into Label Smoothing", IEEE TIP 2021
Transformers are Sample-Efficient World Models. ICLR 2023, notable top 5%.
Glances an Eye on your system. A top/htop alternative for GNU/Linux, BSD, Mac OS and Windows operating systems.
Evaluating the robustness of neural network interpretations