- University of Texas at Austin
- https://jyhong.gitlab.io
- @hjy836
Lists (27)
agent
alignment
Backdoor or WM
Big model
Causality
ChatBot
Contrastive Learning
Cutting-edge
Data-free distillation
Dataset
DDPM
Deep Learning
Efficiency
Face
FL (Federated Learning)
GPT
Mental Health
MIA
MTL MoE
papers
Privacy
Prompt
RecSys (Recommendation system)
Tools
UDA (Unsupervised domain adaptation)
Unlearning
Web
Stars
HuatuoGPT, Towards Taming Language Models To Be a Doctor. (An Open Medical GPT)
Odyssey: Empowering Agents with Open-World Skills
Official repository for Physics Informed Token Transformer (PITT)
Run safety benchmarks against AI models and view detailed reports showing how well they performed.
Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients.
TAP: An automated jailbreaking method for black-box LLMs
Multilingual Automatic Speech Recognition with word-level timestamps and confidence
This is the official implementation of ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting
Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding
A simple evaluation of generative language models and safety classifiers.
Retrieval-Augmented Theorem Provers for Lean
PubMedQA: A Dataset for Biomedical Research Question Answering
Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas"
Codebase for reproducing the experiments of the semantic uncertainty paper (short-phrase and sentence-length experiments).
Can large language models provide useful feedback on research papers? A large-scale empirical analysis.
Code for the paper "Poseidon: Efficient Foundation Models for PDEs"
Learning in infinite dimension with neural operators.
Collaborative book Machine Learning Systems
ICON for in-context operator learning
[ICLR 2023] "Learning to Grow Pretrained Models for Efficient Transformer Training" by Peihao Wang, Rameswar Panda, Lucas Torroba Hennigen, Philip Greengard, Leonid Karlinsky, Rogerio Feris, David …
Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR.
DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models