Stars
A WebUI for Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
A high-throughput and memory-efficient inference and serving engine for LLMs
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
Fast and flexible image augmentation library. Paper about the library: https://www.mdpi.com/2078-2489/11/2/125
Tools for merging pretrained large language models.
Universal and Transferable Attacks on Aligned Language Models
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
Base pretrained models and datasets in pytorch (MNIST, SVHN, CIFAR10, CIFAR100, STL10, AlexNet, VGG16, VGG19, ResNet, Inception, SqueezeNet)
A unified evaluation framework for large language models
Official Implementation of "Graph of Thoughts: Solving Elaborate Problems with Large Language Models"
Must-read Papers on Textual Adversarial Attack and Defense
Official repository of Evolutionary Optimization of Model Merging Recipes
The Security Toolkit for LLM Interactions
BlueLM (蓝心大模型): Open large language models developed by vivo AI Lab
Codebase for Merging Language Models (ICML 2024)
Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing
Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
Tools for understanding how transformer predictions are built layer-by-layer
Paper notes and code repository on machine learning and differential privacy
Implementation of "RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation".
This repository provides an implementation to formalize and benchmark prompt injection attacks and defenses
Git Re-Basin: Merging Models modulo Permutation Symmetries in PyTorch
Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NextGenAISafety @ ICML 2024)
One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models
[NeurIPS 2023 D&B Track] Code and data for paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evaluations".