Stars
Sorted by: Recently starred
HoTPP: An Event Sequence Prediction Benchmark
A generative speech model for daily dialogue.
Official code for "HiSMatch: Historical Structure Matching based Temporal Knowledge Graph Reasoning"
Question and Answer based on Anything.
Code for paper Temporal Label Smoothing for Early Event Prediction (ICML 2023)
Official Implementation of ICLR 2024 paper: "Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning"
RUCAIBox / StructGPT
Forked from JBoRu/StructGPT. The code and data for "StructGPT: A General Framework for Large Language Model to Reason on Structured Data"
Source code of DRAGIN, ACL 2024 main conference Long Paper
Code and Data for KDD 2023 paper "Context-aware Event Forecasting via Graph Disentanglement"
[ICLR'24] Enhancing Healthcare Predictions with Personalized Knowledge Graphs
PyTorch implementation of the paper "Language Models Can Improve Event Prediction by Few-Shot Abductive Reasoning", NeurIPS 2023
Code associated with the WWW'23 paper "Event Prediction using Case-Based Reasoning over Knowledge Graphs"
Codebase for ACL 2023 paper "Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models' Memories"
Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023.
PyTorch CZSL framework containing GQA, the open-world setting, and the CGE and CompCos methods.
Official Implementation of paper: Complex Query Answering on Eventuality Knowledge Graph with Implicit Logical Constraints
PyTorch implementation of "Learning Domain-Aware Detection Head with Prompt Tuning" (NeurIPS 2023)
ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral)
[NeurIPS 2023 Main Track] This is the repository for the paper titled "Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner"
PPAT: Progressive Graph Pairwise Attention Network for Event Causality Identification
Ask Me Anything language model prompting
[EMNLP 2023 Findings] Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
[EMNLP 2022] An Open Toolkit for Knowledge Graph Extraction and Construction
Code for SetGNER: General Named Entity Recognition as Entity Set Generation, EMNLP 2022
Code for CHAI: A CHatbot AI for Task-Oriented Dialogue with Offline Reinforcement Learning
WikiWhy is a new benchmark for evaluating LLMs' ability to explain cause-effect relationships. It is a QA dataset containing 9000+ "why" question-answer-rationale triplets.
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs and parameter-efficient methods (e.g., lora, p-tuning) together for easy use. We welcome open-source enthusiasts…