- The Chinese University of Hong Kong
- Hong Kong SAR
- https://gregxmhu.github.io/
Starred repositories
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Code and documentation to train Stanford's Alpaca models, and generate the data.
Transfer learning / domain adaptation / domain generalization / multi-task learning etc. Papers, code, datasets, applications, tutorials.
100+ Chinese Word Vectors: over a hundred varieties of pre-trained Chinese word embeddings
Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform
A social networking service scraper in Python
Code for visualizing the loss landscape of neural nets
Set of tools to assess and improve LLM security.
Dataset of GPT-2 outputs for research in detection, biases, and more
Pyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations.
A practical and feature-rich paraphrasing framework to augment human intents in text form to build robust NLU models for conversational engines. Created by Prithiviraj Damodaran. Open to pull requests.
Implementation of ChatGPT-style RLHF (Reinforcement Learning from Human Feedback) on any generation model in Hugging Face's transformers (bloomz-176B/bloom/gpt/bart/T5/MetaICL)
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
An Open-Source Package for Information Retrieval.
PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to adversarial prompt attacks. 🏆 Best Paper Awards @ NeurIPS ML …
The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models".
[ICML 2021] Break-It-Fix-It: Unsupervised Learning for Program Repair
TAP: An automated jailbreaking method for black-box LLMs
Can AI-Generated Text be Reliably Detected?
Code for our SIGIR 2022 paper: P3 Ranker: Mitigating the Gaps between Pre-training and Ranking Fine-tuning with Prompt-based Learning and Pre-finetuning
[EMNLP 2022] This is the code repo for our EMNLP '22 paper "Dimension Reduction for Efficient Dense Retrieval via Conditional Autoencoder".
T5 Prompt Tuning
GregxmHu / promptbench
Forked from microsoft/promptbench
A robustness evaluation framework for large language models on adversarial prompts
GregxmHu / OpenMatch
Forked from thunlp/OpenMatch
An Open-Source Package for Information Retrieval.