Wuyxin
Stanford University
(UTC -07:00)
https://cs.stanford.edu/~shirwu
@ShirleyYXWu
Stars
HippoRAG is a novel RAG framework inspired by human long-term memory that enables LLMs to continuously integrate knowledge across external documents.
Large-scale pretrained models for goal-directed dialog
Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
Let ChatGPT teach your own chatbot in hours with a single GPU!
Specify what you want it to build, the AI asks for clarification, and then builds it.
Large datasets for conversational AI
🌲 Code for our EMNLP 2023 paper - 🎄 "Tree of Clarifications: Answering Ambiguous Questions with Retrieval-Augmented Large Language Models"
A repository of papers on empathetic conversational AI, with source code for each paper linked where available.
ClariQ: SCAI Workshop data challenge on conversational search clarification.
Proactive Dialogue Systems - Paper Reading List
AvaTaR: Optimizing LLM Agents for Tool-Assisted Knowledge Retrieval (https://arxiv.org/abs/2406.11200)
Forward-Looking Active REtrieval-augmented generation (FLARE)
The official implementation of the ICML 2024 paper "MemoryLLM: Towards Self-Updatable Large Language Models"
STaRK: Benchmarking LLM Retrieval on Textual and Relational Knowledge Bases (https://stark.stanford.edu/)
Early release of the official implementation for "GraphMETRO: Mitigating Complex Graph Distribution Shifts via Mixture of Aligned Experts"
Large Language-and-Vision Assistant for Biomedicine, built towards multimodal GPT-4 level capabilities.
All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023)
AgentTuning: Enabling Generalized Agent Abilities for LLMs
[NeurIPS 2023] Reflexion: Language Agents with Verbal Reinforcement Learning
[ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference,…