Offline Multi-Agent Reinforcement Learning Implementations: Solving Overcooked Game with Data-Driven Method
This repository is the official implementation of ZSC-Eval: An Evaluation Toolkit and Benchmark for Multi-agent Zero-shot Coordination. Pre-trained Agent Zoo: https://huggingface.co/Leoxxxxh/ZSC-Ev…
A PyTorch implementation of Google Research Football.
A benchmark environment for fully cooperative human-AI performance.
Tutorial for surrogate gradient learning in spiking neural networks
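As a taste of what such tutorials cover, here is a minimal PyTorch sketch of the core trick: a hard spike threshold in the forward pass paired with a smooth surrogate derivative in the backward pass (the fast-sigmoid form shown here is one common choice; the exact surrogate varies by tutorial).

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()  # binary spike, zero gradient a.e.

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative: 1 / (1 + |u|)^2
        surrogate = 1.0 / (1.0 + membrane_potential.abs()) ** 2
        return grad_output * surrogate

spike_fn = SurrogateSpike.apply
u = torch.randn(4, requires_grad=True)  # membrane potentials
spike_fn(u).sum().backward()
print(u.grad)  # nonzero thanks to the surrogate, despite the hard threshold
```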
A paper list of spiking neural networks, including papers, code, and related websites. This repository collects spiking-neural-network papers and code from top conferences and journals and is continuously updated.
RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference,…
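To make the "RNN with fast inference" claim concrete, below is a naive, unstabilized sketch of an RWKV-style WKV recurrence. The parameter names and the plain-exp arithmetic are illustrative only; real RWKV kernels work in log space with running maxima for numerical stability. The point is the constant-size state: per-token inference cost does not grow with context length.

```python
import torch

def wkv_recurrence(k, v, w, u):
    """Naive sketch of an RWKV-style WKV recurrence, per channel.

    k, v: (T, C) key/value sequences; w: (C,) positive decay rate;
    u: (C,) "bonus" weight for the current token. Illustrative only.
    """
    T, C = k.shape
    num = torch.zeros(C)   # running weighted sum of values
    den = torch.zeros(C)   # running sum of weights
    out = torch.empty(T, C)
    decay = torch.exp(-w)
    for t in range(T):
        bonus = torch.exp(u + k[t])                 # extra weight on the current token
        out[t] = (num + bonus * v[t]) / (den + bonus)
        num = decay * num + torch.exp(k[t]) * v[t]  # fold token t into the state
        den = decay * den + torch.exp(k[t])
    return out

out = wkv_recurrence(torch.randn(8, 16), torch.randn(8, 16),
                     torch.rand(16), torch.zeros(16))
```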
Implementation of "SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks"
Collection of tutorials about methods of computational neuroscience using Python
Ananke: A theme for Hugo Sites
Spikingformer: Spike-driven Residual Learning for Transformer-based Spiking Neural Network
Official implementation of "Spike-driven Transformer V2: Meta Spiking Neural Network Architecture Inspiring the Design of Next-generation Neuromorphic Chips" (ICLR 2024)
Official implementation of "Spike-driven Transformer" (NeurIPS 2023)
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
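A hedged sketch of the typical entry point: wrapping a plain PyTorch module with `deepspeed.initialize`. The config values below are illustrative, and the script is assumed to be run under the `deepspeed` launcher, which sets up the distributed environment.

```python
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)
ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},  # partition optimizer state + gradients
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}
# Returns (engine, optimizer, dataloader, lr_scheduler); unused slots are None.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
# Training step: the engine handles fp16 loss scaling and gradient partitioning.
# loss = model_engine(batch); model_engine.backward(loss); model_engine.step()
```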
Fast and memory-efficient exact attention
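For reference, a minimal usage sketch of the flash-attn package's functional API, assuming a CUDA GPU, half precision, and the `(batch, seqlen, heads, headdim)` layout it expects:

```python
import torch
from flash_attn import flash_attn_func

# Shapes: (batch, seqlen, nheads, headdim); fp16/bf16 on GPU is required.
q, k, v = (torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
           for _ in range(3))
# Exact attention without materializing the seqlen x seqlen score matrix.
out = flash_attn_func(q, k, v, causal=True)
```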
Calculate perplexity on a text with pre-trained language models. Supports MLMs (e.g., DeBERTa), causal LMs (e.g., GPT-3), and encoder-decoder LMs (e.g., Flan-T5).
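For causal LMs, a pattern like the sketch below is what such a toolkit builds on (the model choice and text here are illustrative): perplexity is the exponential of the mean token-level negative log-likelihood.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

text = "Spiking neural networks trade dense activations for sparse events."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # With labels set, the model returns the mean cross-entropy over tokens.
    loss = model(**inputs, labels=inputs["input_ids"]).loss
print(torch.exp(loss).item())  # perplexity = exp(mean NLL)
```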
A repo illustrating the usage of Transformers, with tutorials in Chinese.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
The simplest, fastest repository for training/finetuning medium-sized GPTs.
A simple text generator built on an OpenAI GPT-2 PyTorch implementation.
Dataset of GPT-2 outputs for research in detection, biases, and more
Code for the paper "Language Models are Unsupervised Multitask Learners"
Code for "Spiking Neural Networks with Improved Inherent Recurrence Dynamics for Sequential Learning"
The official training/validation/test dataset repository for the SOTA? task (SimpleText Task 4 @ CLEF 2024).
Machine Theory of Mind reading list, built upon the EMNLP 2023 Findings paper "Towards A Holistic Landscape of Situated Theory of Mind in Large Language Models".