Stars
F-Eval: Assessing Fundamental Abilities with Refined Evaluation Methods
The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.
[EMNLP 2023] Plan, Verify and Switch: Integrated Reasoning with Diverse X-of-Thoughts
Code for building ConceptNet from raw data.
A high-throughput and memory-efficient inference and serving engine for LLMs
Light local website for displaying performances from different chat models.
Tutorial for creating Python/Qt GUIs with fbs
[ACL'24 Oral] Data and code for L-Eval, a comprehensive long context language models evaluation benchmark
Official release of InternLM2.5 7B base and chat models. 1M context support
code for [ACL23] An AMR-based Link Prediction Approach for Document-level Event Argument Extraction
code for [EMNLP22 findings] DORE: Document Ordered Relation Extraction based on Generative Framework
Use ChatGPT to summarize arXiv papers. Accelerates the entire research workflow with ChatGPT: full-paper summarization, professional translation, polishing, reviewing, and drafting review responses.
[ACL 23] CodeIE: Large Code Generation Models are Better Few-Shot Information Extractors
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Fast and memory-efficient exact attention
This is the code repo for the paper "UTC-IE: A Unified Token-pair Classification Architecture for Information Extraction".
Must-read papers on prompt-based tuning for pre-trained language models.
An open-source tool-augmented conversational language model from Fudan University
A comprehensive mapping database of English to Chinese technical vocabulary in the artificial intelligence domain
Interview experience notes compiled by Datawhale members, covering machine learning, CV, NLP, recommendation systems, software development, and more. Stars are welcome.
Making large AI models cheaper, faster and more accessible
Aligning pretrained language models with instruction data generated by themselves.
Code and documentation to train Stanford's Alpaca models, and generate the data.
Ongoing research training transformer models at scale