Someplace, Somewhere · 17:05 (UTC +08:00) · https://lfhase.win
Lists (27)
AI4Sci
alignment
BOOM
Causality
CTG
drug_ai
egnn
🔮 Future ideas
GAD
gflow
gnn-ood
graph_adv
graph-LLM
GraphOT
job
LLM
LMOOD
MLLM
OoD
quant
readlist
sparse
subgraphGNN
symmetry
temporal
tips
tools
Stars
A simple pip-installable Python tool to generate your own HTML citation world map from your Google Scholar ID.
Code for "Predicting Cellular Responses to Novel Drug Perturbations at a Single-Cell Resolution", NeurIPS 2022.
Preprint: Asymmetry in Low-Rank Adapters of Foundation Models
[ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following
MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts
Code for the 2024 arXiv publication "Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models"
A continually updated list of literature on Reinforcement Learning from AI Feedback (RLAIF)
📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM).
RichHF-18K dataset contains rich human feedback labels we collected for our CVPR'24 paper: https://arxiv.org/pdf/2312.10240, along with the file name of the associated labeled images (no urls or im…
[ICML 2024] How Interpretable Are Interpretable Graph Neural Networks?
Code for our ICML 2024 paper "Aligning Transformers with Weisfeiler-Leman"
Memory Mosaics are networks of associative memories working in concert to achieve a prediction task.
[CVPR 2024] Improving out-of-distribution generalization in graphs via hierarchical semantic environments
[ICML 2024] Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical
Benchmarking Benchmark Leakage in Large Language Models
Recipes to train reward models for RLHF.
Evaluate your LLM's response with Prometheus and GPT4 💯
Official code repo for the paper "LlaSMol: Advancing Large Language Models for Chemistry with a Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset"
Repository of paper "LLMs with Chain-of-Thought Are Non-Causal Reasoners"
[NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
A deep learning model for small molecule drug discovery and cheminformatics based on SMILES
Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data"
This repository contains information on the creation, evaluation, and benchmark models for the L+M-24 Dataset. L+M-24 will be featured as the shared task at The Language + Molecules Workshop at ACL…
[ICLR 2024] Domain-Agnostic Molecular Generation with Chemical Feedback