- Northeastern University, @NEUIR
- Shenyang, China (UTC +08:00)
- https://xinhaidong.top
- https://orcid.org/0009-0003-9484-3251
- @xhd0728
- https://afdian.net/a/xhd0728
Stars
This is the code repo for the paper "RAG-DDR: Optimizing Retrieval-Augmented Generation Using Differentiable Data Rewards".
A directed multi-graph library for JavaScript
Prefill LLMs only once and reuse the KV cache across instances (see the sketch after this list)
"LightRAG: Simple and Fast Retrieval-Augmented Generation"
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Hackable and optimized Transformer building blocks, supporting composable construction.
Crawlers for Xiaohongshu notes and comments, Douyin videos and comments, Kuaishou videos and comments, Bilibili videos and comments, Weibo posts and comments, Baidu Tieba posts and comment replies, and Zhihu Q&A articles and comments
CloudFlare image host: a free image-hosting solution built on CloudFlare Pages and a Telegram Bot!
Repository hosting code used to reproduce results in "Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations" (https://arxiv.org/abs/2402.17152).
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
This course is a rigorous, year-long introduction to computational social science. We cover topics spanning reproducibility and collaboration, machine learning, natural language processing, and causal inference.
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
A large-scale multimodal dataset for recommender systems
HLLM: Enhancing Sequential Recommendations via Hierarchical Large Language Models for Item and User Modeling
LLaMA 2 implemented from scratch in PyTorch
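One item above ("Prefill LLMs only once and reuse the KV cache across instances") names a common serving optimization. Below is a minimal, hypothetical sketch of the idea using Hugging Face transformers: a shared prompt prefix is run through the model once, and its past_key_values cache is copied per request so only the per-request suffix is decoded. The model name ("gpt2"), the prompts, and the continue_from_prefix helper are illustrative assumptions, not that project's actual API.

```python
# Sketch: prefill a shared prefix once, reuse its KV cache across requests.
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")          # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

shared_prefix = "You are a helpful assistant. Answer briefly.\n"
prefix_ids = tok(shared_prefix, return_tensors="pt").input_ids

# 1) Prefill the shared prefix ONCE and keep its KV cache.
with torch.no_grad():
    out = model(prefix_ids, use_cache=True)
shared_kv = out.past_key_values  # per-layer key/value tensors

def continue_from_prefix(suffix: str, max_new_tokens: int = 20) -> str:
    """Greedily decode a per-request suffix on top of the cached prefix,
    without re-running the prefix through the model."""
    kv = copy.deepcopy(shared_kv)  # copy so requests don't mutate the shared cache
    ids = tok(suffix, return_tensors="pt").input_ids
    generated = []
    with torch.no_grad():
        for _ in range(max_new_tokens):
            out = model(ids, past_key_values=kv, use_cache=True)
            kv = out.past_key_values
            next_id = out.logits[:, -1].argmax(dim=-1, keepdim=True)
            generated.append(next_id.item())
            ids = next_id  # feed only the newly generated token next step
    return tok.decode(generated)

# Two "instances" share the same prefill work:
print(continue_from_prefix("Q: What is a KV cache?\nA:"))
print(continue_from_prefix("Q: Why prefill only once?\nA:"))
```

Copying the cache keeps requests independent; production serving systems typically share the prefix KV blocks across requests instead of copying them, but a copy keeps the sketch simple.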