Stars
Experimental solutions to selected exercises from the book Advances in Financial Machine Learning by Marcos Lopez de Prado.
All the answers to the exercises from Advances in Financial Machine Learning by Dr. Marcos Lopez de Prado.
Python wrapper for TA-Lib (https://ta-lib.org/); see the usage sketch after this list.
An Evaluation of ChatGPT on Information Extraction tasks, including Named Entity Recognition (NER), Relation Extraction (RE), Event Extraction (EE), and Aspect-based Sentiment Analysis (ABSA).
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
Outfit Anyone: Ultra-high quality virtual try-on for Any Clothing and Any Person
[ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691)
Qlib is an AI-oriented quantitative investment platform that aims to realize the potential, empower research, and create value using AI technologies in quantitative investment, from exploring ideas… See the initialization sketch after this list.
OpenChat: Advancing Open-source Language Models with Imperfect Data
A curated list of awesome libraries, packages, strategies, books, blogs, tutorials for systematic trading.
FinGLM: dedicated to building an open, non-profit, and lasting financial large language model project, using open source and openness to advance "AI + finance".
🩹Editing large language models within 10 seconds⚡
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
Official release of InternLM2.5 base and chat models, with 1M-token context support.
A pure C++ cross-platform LLM acceleration library with Python bindings; ChatGLM-6B-class models reach 10,000+ tokens/s on a single GPU; supports GLM, LLaMA, and MOSS base models and runs smoothly on mobile devices.
A high-throughput and memory-efficient inference and serving engine for LLMs; see the offline-inference sketch after this list.
Train transformer language models with reinforcement learning.
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs).
🧑🏫 60+ Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), ga…
ChatGLM2-6B: An Open-Source Bilingual Chat LLM
Tuning LLMs with no tears 💦; Sample Design Engineering (SDE) for more efficient downstream tuning.
A large-scale 7B pretrained language model developed by BaiChuan-Inc.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning; see the LoRA sketch after this list.
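A minimal sketch of the TA-Lib Python wrapper from the list above, assuming the `talib` package is installed on top of the native TA-Lib C library; the price series and indicator periods are illustrative:

```python
import numpy as np
import talib

# TA-Lib functions take float64 NumPy arrays and return arrays of the same
# length, NaN-padded until the indicator's lookback window is filled.
close = np.random.random(100)

sma = talib.SMA(close, timeperiod=10)                      # simple moving average
rsi = talib.RSI(close, timeperiod=14)                      # relative strength index
upper, middle, lower = talib.BBANDS(close, timeperiod=20)  # Bollinger bands

print(sma[-1], rsi[-1], upper[-1], lower[-1])
```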
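A minimal Qlib initialization and data-query sketch, assuming the CN daily data bundle has already been downloaded to the default `~/.qlib/qlib_data/cn_data` path; the instrument and date range are illustrative:

```python
import qlib
from qlib.constant import REG_CN  # older releases export REG_CN from qlib.config
from qlib.data import D

# Point Qlib at a locally prepared data bundle (path is an assumption;
# the data itself is fetched separately with Qlib's download scripts).
qlib.init(provider_uri="~/.qlib/qlib_data/cn_data", region=REG_CN)

# Query daily close prices and volume for one instrument over a date range.
df = D.features(
    instruments=["SH600000"],
    fields=["$close", "$volume"],
    start_time="2020-01-01",
    end_time="2020-12-31",
)
print(df.head())
```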
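A minimal vLLM offline-inference sketch; the model name follows vLLM's own quickstart, and the prompt and sampling settings are illustrative:

```python
from vllm import LLM, SamplingParams

# Load a HuggingFace-format causal LM (a small model keeps the sketch light).
llm = LLM(model="facebook/opt-125m")

params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
outputs = llm.generate(["The capital of France is"], params)

for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```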
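A minimal 🤗 PEFT sketch injecting LoRA adapters into a causal LM; the base model and LoRA hyperparameters are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Any HF Hub causal LM works; a small OPT checkpoint is used for illustration.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# LoRA freezes the base weights and trains small low-rank update matrices.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,               # rank of the low-rank update
    lora_alpha=16,     # scaling factor applied to the update
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights is trainable
```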