- Harbin Institute of Technology (Shenzhen)
- Shenzhen, China
Stars
Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning (LoRA usage sketched after this list).
Train transformer language models with reinforcement learning.
Reference implementation for DPO (Direct Preference Optimization); the loss is sketched after this list.
Robust recipes to align language models with human and AI preferences
An elegant \LaTeX\ résumé template. Mainland China mirror: https://gods.coding.net/p/resume/git
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024)
CHAIR is a rule-based metric for evaluating object hallucination in caption generation (scoring sketched after this list).
[EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models"
✨✨Latest Advances on Multimodal Large Language Models
Stanford NLP Python library for tokenization, sentence segmentation, NER, and parsing of many human languages (pipeline usage sketched after this list)
[NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
CS-BAOYAN / CS-BAOYAN-2024
Forked from CS-BAOYAN/CS-BAOYAN-2023. Experience posts and related materials for the 2024 graduate-school recommendation (保研) season.
My solutions to "Linear Algebra Done Right" by Sheldon Axler, 4th Edition (still being updated).
A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 and reasoning techniques.
Converts course timetables from the Harbin Institute of Technology (Shenzhen) academic affairs platform into ICS files
A flexible package manager that supports multiple versions, configurations, platforms, and compilers.
A modified version of ZJU-Connect-for-Windows adapted for Harbin Institute of Technology (Shenzhen)
Continuation of Clash Verge - A Clash Meta GUI based on Tauri (Windows, MacOS, Linux)
Python packaging and dependency management made easy
A Linux command-line proxy tool that supports individual nodes and subscription links, for one-click proxying in research and development without installing v2ray or anything else (Linux only)
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
A high-throughput and memory-efficient inference and serving engine for LLMs (offline generation sketched after this list)
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support (training-loop sketch below)
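A minimal LoRA sketch for the 🤗 PEFT entry above: wrap a Hugging Face causal language model so that only the low-rank adapter weights are trainable. The base checkpoint (gpt2) and every hyperparameter here are illustrative placeholders, not settings taken from any repository in this list.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Placeholder base model; any causal LM checkpoint works the same way.
model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter parameters require grad
```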
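For the DPO-related entries (the reference implementation, the alignment recipes, and the hallucination-aware variant), the core objective fits in a few lines. This is a generic sketch of the standard DPO loss, assuming per-response summed log-probabilities have already been computed; the tensor names and beta value are illustrative, not taken from those repositories.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over a batch of preference pairs.

    Each argument is a 1-D tensor of summed token log-probabilities of the
    chosen / rejected response under the trainable policy or the frozen
    reference model; beta scales the implicit reward.
    """
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # -log sigmoid(beta * (chosen log-ratio - rejected log-ratio))
    return -F.logsigmoid(beta * (chosen_logratios - rejected_logratios)).mean()

# Toy check: chosen responses more likely under the policy -> lower loss.
print(dpo_loss(torch.zeros(4), torch.full((4,), -2.0),
               torch.zeros(4), torch.zeros(4)).item())
```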
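The CHAIR entry reduces to two ratios: CHAIR_i, the fraction of mentioned object instances that are not in the image, and CHAIR_s, the fraction of captions containing at least one such object. A minimal sketch, assuming object mentions have already been extracted and mapped to the ground-truth vocabulary (the original repository additionally handles synonym lists and MSCOCO annotations):

```python
def chair_scores(captions_objects, gt_objects):
    """Compute CHAIR_i and CHAIR_s.

    captions_objects: list of sets, objects mentioned in each generated caption
                      (already normalized to the ground-truth vocabulary).
    gt_objects:       list of sets, objects actually present in each image.
    """
    hallucinated_mentions = 0
    total_mentions = 0
    hallucinated_captions = 0
    for mentioned, present in zip(captions_objects, gt_objects):
        fake = mentioned - present            # mentioned but not in the image
        hallucinated_mentions += len(fake)
        total_mentions += len(mentioned)
        hallucinated_captions += bool(fake)
    chair_i = hallucinated_mentions / max(total_mentions, 1)
    chair_s = hallucinated_captions / max(len(captions_objects), 1)
    return chair_i, chair_s

# Toy example: the second caption hallucinates a "dog".
print(chair_scores([{"cat"}, {"cat", "dog"}], [{"cat"}, {"cat"}]))
```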
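The Stanza entry in typical use: download a language model once, build a pipeline, and read annotations off the returned document. The input sentence is only an example.

```python
import stanza

stanza.download("en")  # one-time model download
nlp = stanza.Pipeline("en", processors="tokenize,pos,ner")

doc = nlp("Harbin Institute of Technology (Shenzhen) is located in Shenzhen, China.")
for ent in doc.ents:
    print(ent.text, ent.type)       # named entities
for word in doc.sentences[0].words:
    print(word.text, word.upos)     # tokens with universal POS tags
```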
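The vLLM entry in its simplest offline form: load a model, define sampling parameters, and batch-generate. The tiny OPT checkpoint is just a placeholder that keeps the sketch cheap to run.

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # placeholder checkpoint
sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The capital of France is"], sampling)
for output in outputs:
    print(output.outputs[0].text)
```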
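Finally, the Accelerate entry boils down to one pattern: create an Accelerator, prepare the model, optimizer, and dataloader, and call accelerator.backward(loss) instead of loss.backward(). The toy model and random data below are placeholders.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # picks up device, mixed precision, distributed config

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))
dataloader = DataLoader(dataset, batch_size=8)

# Accelerate moves everything to the right device(s) and wraps as needed.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for x, y in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    accelerator.backward(loss)   # instead of loss.backward()
    optimizer.step()
```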