Stars
Multimodal rumor detection model using an evidence-based dataset. Current version uses CLIP embeddings for both text and image inputs.
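A minimal sketch of the kind of CLIP feature extraction described above, using the Hugging Face transformers CLIP implementation (an assumption; the repo's own wrapper and checkpoint may differ):

```python
# Extract joint CLIP embeddings for a claim's text and image.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("claim_photo.jpg")  # hypothetical input file
inputs = processor(text=["a rumor-bearing claim"], images=image,
                   return_tensors="pt", padding=True)
outputs = model(**inputs)
text_emb = outputs.text_embeds    # (1, 512) projected text embedding
image_emb = outputs.image_embeds  # (1, 512) projected image embedding
```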
🤖 A WeChat bot built on WeChaty together with AI services such as OpenAI ChatGPT, Kimi, and iFLYTEK. It can auto-reply to WeChat messages, manage WeChat groups and friends, detect "zombie" contacts who have deleted you, and more.
Taxonomy tree that lets you create models tuned with your data
DSPy: The framework for programming—not prompting—foundation models
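A minimal DSPy sketch of the programming-not-prompting idea: declare a signature and let the framework handle the prompting. The model name is an illustrative assumption:

```python
import dspy

# Any supported backend works; gpt-4o-mini is just an example.
lm = dspy.LM("openai/gpt-4o-mini")
dspy.configure(lm=lm)

# A declarative module: input/output fields instead of a hand-written prompt.
qa = dspy.ChainOfThought("question -> answer")
print(qa(question="What does DSPy optimize instead of prompts?").answer)
```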
🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering
Awesome LLM Self-Consistency: a curated list of Self-consistency in Large Language Models
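The technique itself fits in a few lines: sample several reasoning paths at nonzero temperature and majority-vote the final answers. `sample_answer` below is a hypothetical stand-in for one sampled LLM completion, not an API from the listed repo:

```python
from collections import Counter

def sample_answer(question: str) -> str:
    """Hypothetical: final answer of one sampled chain of thought."""
    raise NotImplementedError

def self_consistent_answer(question: str, n_samples: int = 10) -> str:
    # Draw several independent samples, then take the modal answer.
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]
```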
LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively.
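As a sketch, that generate, critique, refine loop might look like the following, with `llm` as a hypothetical completion function (not an API from this repo):

```python
def llm(prompt: str) -> str:
    """Hypothetical: returns one LLM completion for the prompt."""
    raise NotImplementedError

def self_refine(task: str, max_iters: int = 3) -> str:
    output = llm(f"Complete the task:\n{task}")
    for _ in range(max_iters):
        feedback = llm(f"Task:\n{task}\nOutput:\n{output}\n"
                       f"Give concrete feedback on this output.")
        if "no issues" in feedback.lower():  # hypothetical stop criterion
            break
        output = llm(f"Task:\n{task}\nOutput:\n{output}\n"
                     f"Feedback:\n{feedback}\nRewrite the output accordingly.")
    return output
```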
The data and code of "KG-FPQ: Evaluating Factuality Hallucination in LLMs with Knowledge Graph-based False Premise Questions".
Code for paper "W-RAG: Weakly Supervised Dense Retrieval in RAG for Open-domain Question Answering"
Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models
⚡FlashRAG: A Python Toolkit for Efficient RAG Research
Anserini is a Lucene toolkit for reproducible information retrieval research
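Anserini itself is Java; from Python it is usually driven through its Pyserini companion. A minimal BM25 search over one of Pyserini's prebuilt indexes might look like this (index name assumed for illustration):

```python
from pyserini.search.lucene import LuceneSearcher

# Download and open a prebuilt Lucene index, then run BM25 retrieval.
searcher = LuceneSearcher.from_prebuilt_index("msmarco-v1-passage")
for hit in searcher.search("what is dense retrieval", k=5):
    print(hit.docid, round(hit.score, 3))
```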
[ICCV 2021 - Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based model.
[CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer-based networks.
A Heterogeneous Benchmark for Information Retrieval. Easy to use, evaluate your models across 15+ diverse IR datasets.
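A minimal evaluation run following the BEIR README's pattern; the dataset (scifact) and Sentence-BERT checkpoint are illustrative choices:

```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval import models
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch

# Download one benchmark dataset and load its test split.
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

# Evaluate a dense retriever with exact (brute-force) search.
model = DenseRetrievalExactSearch(models.SentenceBERT("msmarco-distilbert-base-v3"))
retriever = EvaluateRetrieval(model, score_function="cos_sim")
results = retriever.retrieve(corpus, queries)
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
```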
Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-of-use, backed by research.
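A small RAGatouille sketch in the spirit of its README: index a toy collection with ColBERTv2 and run a late-interaction search:

```python
from ragatouille import RAGPretrainedModel

rag = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")
rag.index(
    collection=["ColBERT scores query and document token embeddings via MaxSim.",
                "Dense bi-encoders collapse each text into a single vector."],
    index_name="demo",
)
print(rag.search(query="how does late interaction scoring work?", k=1))
```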
Reference implementation for DPO (Direct Preference Optimization)
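The heart of DPO is a single loss over per-sequence log-probabilities. A self-contained PyTorch sketch of that objective (a paraphrase of the paper's Eq. 7, not the repo's exact code):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Maximize the margin between the policy's and the frozen reference
    # model's log-ratios on chosen vs. rejected responses.
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    # -log sigmoid(beta * margin), averaged over the batch.
    return -F.logsigmoid(beta * (pi_logratios - ref_logratios)).mean()
```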
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
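A minimal PEFT sketch following its README: wrap a pre-trained model with LoRA adapters so only a small fraction of parameters trains. Base model and hyperparameters are illustrative:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=8,
                    lora_alpha=32, lora_dropout=0.05)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```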
Build ChatGPT over your data, all with natural language
The source code of paper "CHEF: A Pilot Chinese Dataset for Evidence-Based Fact-Checking"
EMNLP 2021 - Pre-training architectures for dense retrieval
An annotated implementation of the Transformer paper.
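For reference, the scaled dot-product attention at the core of the Transformer, mirroring the function the annotated implementation walks through:

```python
import math
import torch

def attention(query, key, value, mask=None):
    # Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V
    d_k = query.size(-1)
    scores = query @ key.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = scores.softmax(dim=-1)  # attention distribution over keys
    return weights @ value, weights
```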
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
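Once the plugin is deployed, queries go through its /query endpoint. A hedged sketch, with the URL and bearer token as placeholders and the request shape following the plugin's README:

```python
import requests

resp = requests.post(
    "http://localhost:8000/query",  # placeholder deployment URL
    headers={"Authorization": "Bearer <BEARER_TOKEN>"},
    json={"queries": [{"query": "Q3 travel policy", "top_k": 3}]},
)
for result in resp.json()["results"]:
    print(result["query"], len(result["results"]))
```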
TensorFlow code and pre-trained models for BERT
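The released checkpoints are most conveniently loaded today through Hugging Face transformers rather than the original TensorFlow code; a sketch under that assumption:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Pre-training helps downstream tasks.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
```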
text2vec, text to vector. A text-embedding toolkit that turns text into vector matrices, with ready-to-use implementations of text representation and text similarity models such as Word2Vec, RankBM25, Sentence-BERT, and CoSENT.
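A minimal text2vec sketch: encode two sentences and score them with cosine similarity. The checkpoint name follows the project's default Chinese model:

```python
import numpy as np
from text2vec import SentenceModel

model = SentenceModel("shibing624/text2vec-base-chinese")
a, b = model.encode(["how to reset my password", "password reset steps"])
cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine)
```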
Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard