Stars
SWE-agent takes a GitHub issue and tries to automatically fix it, using GPT-4 or your LM of choice. It solves 12.47% of bugs in the SWE-bench evaluation set and takes just 1 minute to run.
🔥Highlighting the top ML papers every week.
Diagnostic benchmark suite to explicitly test logical relational reasoning on natural language
Leveraging BERT and c-TF-IDF to create easily interpretable topics.
A collection of topic diversity measures for topic modeling
Code & Prompts for TopicGPT: A Prompt-Based Framework for Topic Modeling
This repository introduces PIXIU, an open-source resource featuring the first financial large language models (LLMs), instruction tuning data, and evaluation benchmarks to holistically assess financial LLMs.
Multivariate Time Series Forecasting with efficient Transformers. Code for the paper "Long-Range Transformers for Dynamic Spatiotemporal Forecasting."
A professional list of Papers, Tutorials, and Surveys on AI for Time Series in top AI conferences and journals.
A professional list of Tutorials and Surveys on DL, ML, DM, CV, NLP, Speech in top AI conferences and journals.
An official implementation of PatchTST: "A Time Series is Worth 64 Words: Long-term Forecasting with Transformers." (ICLR 2023) https://arxiv.org/abs/2211.14730
FinRL-Meta: Dynamic datasets and market environments for FinRL.
Codebase for FOMC-NLP, accepted at ACL 2023 (main)
A full pipeline to fine-tune the Vicuna LLM with LoRA and RLHF on consumer hardware: an implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the Vicuna architecture.
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
Simple UI for LLM Model Finetuning
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
A list of totally open alternatives to ChatGPT
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
ChatGLM-6B: An Open Bilingual Dialogue Language Model
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.