Stanford University
https://cs.stanford.edu/~shirwu
@ShirleyYXWu
Stars (sorted by recently starred)
Specify what you want it to build; the AI asks for clarification, then builds it.
Large datasets for conversational AI
🌲 Code for our EMNLP 2023 paper - 🎄 "Tree of Clarifications: Answering Ambiguous Questions with Retrieval-Augmented Large Language Models"
This is a repository for sharing papers in the field of empathetic conversational AI. The related source code for each paper is linked if available.
ClariQ: SCAI Workshop data challenge on conversational search clarification.
Proactive Dialogue Systems - Paper Reading List
AvaTaR: Optimizing LLM Agents for Tool-Assisted Knowledge Retrieval (https://arxiv.org/abs/2406.11200)
Forward-Looking Active REtrieval-augmented generation (FLARE)
The official implementation of the ICML 2024 paper "MemoryLLM: Towards Self-Updatable Large Language Models"
STaRK: Benchmarking LLM Retrieval on Textual and Relational Knowledge Bases (https://stark.stanford.edu/)
Early release of the official implementation for "GraphMETRO: Mitigating Complex Graph Distribution Shifts via Mixture of Aligned Experts"
Large Language-and-Vision Assistant for Biomedicine, built towards multimodal GPT-4 level capabilities.
All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023)
AgentTuning: Enabling Generalized Agent Abilities for LLMs
[NeurIPS 2023] Reflexion: Language Agents with Verbal Reinforcement Learning
[ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference,…
Must-read Papers on Large Language Model (LLM) as Optimizers and Automatic Optimization for Prompting LLMs.
Dataset and code for EMNLP 2020 paper "HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data"
Data and code for ICLR 2020 paper "TabFact: A Large-scale Dataset for Table-based Fact Verification"
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
We introduce new zero-shot prompting magic words that improve the reasoning ability of language models: panel discussion!