GFMPapers: Must-read papers on graph foundation models (GFMs)

This list is currently maintained by members of BUPT GAMMA Lab. If you like our project, please give us a star ⭐ on GitHub to stay up to date with the latest additions.

We sincerely thank all the great contributors.

⭐ We held a tutorial on graph foundation models at TheWebConf 2024! Tutorial slides: [part1][part2][part3]

Keywords Convention

Each paper is tagged with badges indicating its backbone architecture, pretraining strategy, and adaptation strategy. The meaning of each tag is described in the "Towards Graph Foundation Models: A Survey and Beyond" paper.

0. Survey Papers

  1. [arXiv 2023.08] Graph Meets LLMs: Towards Large Graph Models. [pdf][paperlist]
  2. [arXiv 2023.10] Integrating Graphs with Large Language Models: Methods and Prospects. [pdf]
  3. [arXiv 2023.10] Towards Graph Foundation Models: A Survey and Beyond. [pdf][paperlist]
  4. [arXiv 2023.11] A Survey of Graph Meets Large Language Model: Progress and Future Directions. [pdf][paperlist]
  5. [arXiv 2023.12] Large Language Models on Graphs: A Comprehensive Survey. [pdf][paperlist]
  6. [arXiv 2024.02] Graph Foundation Models. [pdf][paperlist][paperlist2]
  7. [arXiv 2024.02] Advancing Graph Representation Learning with Large Language Models: A Comprehensive Survey of Techniques. [pdf]
  8. [arXiv 2024.02] Towards Versatile Graph Learning Approach: from the Perspective of Large Language Models. [pdf]
  9. [arXiv 2024.03] A Survey on Self-Supervised Pre-Training of Graph Foundation Models: A Knowledge-Based Perspective. [pdf][paperlist]
  10. [arXiv 2024.04] A Survey of Large Language Models on Generative Graph Analytics: Query, Learning, and Applications. [pdf]
  11. [arXiv 2024.04] Graph Machine Learning in the Era of Large Language Models (LLMs). [pdf]
  12. [arXiv 2024.05] A Survey of Large Language Models for Graphs. [pdf]

1. GNN-based Papers

  1. [arXiv 2023.10] Enhancing Graph Neural Networks with Structure-Based Prompt [pdf]
  2. [arXiv 2023.11] MultiGPrompt for Multi-Task Pre-Training and Prompting on Graphs [pdf]
  3. [arXiv 2023.10] HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained Heterogeneous Graph Neural Networks [pdf]
  4. [arXiv 2023.10] Prompt Tuning for Multi-View Graph Contrastive Learning [pdf]
  5. [arXiv 2023.05] PRODIGY: Enabling In-context Learning Over Graphs. [pdf]
  6. [arXiv 2023.05] G-Adapter: Towards Structure-Aware Parameter-Efficient Transfer Learning for Graph Transformer Networks. [pdf]
  7. [arXiv 2023.04] AdapterGNN: Efficient Delta Tuning Improves Generalization Ability in Graph Neural Networks. [pdf]
  8. [arXiv 2023.02] SGL-PT: A Strong Graph Learner with Graph Prompt Tuning. [pdf]
  9. [KDD 2023] All in One: Multi-Task Prompting for Graph Neural Networks. [pdf]
  10. [KDD 2023] A Data-centric Framework to Endow Graph Neural Networks with Out-Of-Distribution Detection Ability. [pdf] [code]
  11. [AAAI 2023] MA-GCL: Model augmentation tricks for graph contrastive learning. [pdf] [code]
  12. [WWW 2023] GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner. [pdf] [code]
  13. [WWW 2023] Graphprompt: Unifying pre-training and downstream tasks for graph neural networks. [pdf] [code]
  14. [CIKM 2023] Voucher Abuse Detection with Prompt-based Fine-tuning on Graph Neural Networks. [pdf] [code]
  15. [KDD 2022] GraphMAE: Self-supervised masked graph autoencoders. [pdf] [code]
  16. [KDD 2022] GPPT: Graph pre-training and prompt tuning to generalize graph neural networks.
  17. [arXiv 2022.09] Universal Prompt Tuning for Graph Neural Networks. [pdf]
  18. [KDD 2021] Pre-training on large-scale heterogeneous graph. [pdf] [code]
  19. [CIKM 2021] Contrastive pre-training of GNNs on heterogeneous graphs. [pdf] [code]
  20. [ICML 2020] Deep graph contrastive representation learning. [pdf] [code]
  21. [NeurIPS 2020] Self-supervised graph transformer on large-scale molecular data. [pdf]
  22. [NeurIPS 2020] Graph contrastive learning with augmentations. [pdf] [code]
  23. [KDD 2020] GCC: Graph contrastive coding for graph neural network pre-training. [pdf] [code]
  24. [KDD 2020] GPT-GNN: Generative pre-training of graph neural networks. [pdf] [code]
  25. [arXiv 2020.01] Graph-BERT: Only attention is needed for learning graph representations. [pdf] [code]
  26. [ICLR 2019] Deep graph infomax. [pdf] [code]
  27. [arXiv 2016.11] Variational graph auto-encoders. [pdf] [code]
  28. [TKDE 2024] Generalized graph prompt: Toward a unification of pretraining and downstream tasks on graphs [pdf] [code]
  29. [WWW 2024] Inductive graph alignment prompt: Bridging the gap between graph pretraining and inductive fine-tuning from spectral perspective [pdf]
  30. [AAAI 2024] Hgprompt: Bridging homogeneous and heterogeneous graphs for few-shot prompt learning [pdf] [code]
  31. [NeurIPS 2023 workshop] It's all graph to me: Single-model graph representation learning on multiple domains [pdf]
  32. [WWW 2024] Graphcontrol: Adding conditional control to universal graph pre-trained models for graph domain transfer learning [pdf]
  33. [AAAI 2024] Fine-tuning graph neural networks by preserving graph generative patterns [pdf] [code]
  34. [WSDM 2024] S2GAE: Self-Supervised Graph Autoencoders are Generalizable Learners with Graph Masking [pdf]

2. LLM-based Papers

  1. [arXiv 2023.10] Talk Like a Graph: Encoding Graphs for Large Language Models [pdf]
  2. [arXiv 2023.10] GraphText: Graph reasoning in text space. [pdf]
  3. [arXiv 2023.09] Can LLMs Effectively Leverage Graph Structural Information: When and Why [pdf]
  4. [arXiv 2023.08] Natural language is all a graph needs. [pdf] [code]
  5. [arXiv 2023.08] Evaluating large language models on graphs: Performance insights and comparative analysis. [pdf] [code]
  6. [arXiv 2023.07] Can large language models empower molecular property prediction? [pdf] [code]
  7. [arXiv 2023.07] Meta-Transformer: A Unified Framework for Multimodal Learning. [pdf] [code]
  8. [arXiv 2023.07] Exploring the potential of large language models (LLMs) in learning on graphs [pdf] [code]
  9. [arXiv 2023.05] GIMLET: A unified graph-text model for instruction-based molecule zero-shot learning. [pdf]
  10. [arXiv 2023.05] Can language models solve graph problems in natural language? [pdf] [code]
  11. [arXiv 2023.05] GPT4Graph: Can large language models understand graph structured data? An empirical evaluation and benchmarking [pdf]
  12. [NeurIPS 2023] WalkLM: A Uniform Language Model Fine-tuning Framework for Attributed Graph Embedding. [pdf]
  13. [ICML 2023] Pretrained Language Models to Solve Graph Tasks in Natural Language [pdf]
  14. [KDD 2024] GraphWiz: An instruction-following language model for graph problems [pdf]
  15. [AAAI 2024] Zero-shot causal graph extrapolation from text via LLMs [pdf]
  16. [KDD 2024] LLM4DyG: Can large language models solve problems on dynamic graphs? [pdf]
  17. [arXiv 2024.06] MolecularGPT: Open Large Language Model (LLM) for Few-Shot Molecular Property Prediction [pdf]

3. GNN+LLM-based Papers

  1. [ICLR 2024] Label-free Node Classification on Graphs with Large Language Models (LLMs) [pdf]
  2. [ICLR 2023] One for All: Towards Training One Graph Model for All Classification Tasks [pdf]
  3. [arXiv 2023.09] Prompt-based Node Feature Extractor for Few-shot Learning on Text-Attributed Graphs. [pdf]
  4. [arXiv 2023.08] SimTeG: A frustratingly simple approach improves textual graph learning. [pdf]
  5. [ICLR 2024] Harnessing explanations: LLM-to-LM interpreter for enhanced text-attributed graph representation learning. [pdf]
  6. [arXiv 2023.05] ConGraT: Self-supervised contrastive pretraining for joint graph and text embeddings. [pdf]
  7. [KDD 2023] Train your own GNN teacher: Graph-aware distillation on textual graphs. [pdf]
  8. [arXiv 2023.04] Graph-ToolFormer: To empower LLMs with graph reasoning ability via prompt augmented by ChatGPT. [pdf]
  9. [ICLR 2023] Learning on large-scale text-attributed graphs via variational inference. [pdf]
  10. [SIGIR 2023] Augmenting low-resource text classification with graph-grounded pre-training and prompting. [pdf]
  11. [PMLR 2023] Enhancing activity prediction models in drug discovery with the ability to understand human language. [pdf]
  12. [ICLR 2022] Node feature extraction by self-supervised multi-scale neighborhood prediction. [pdf]
  13. [arXiv 2022.12] Multi-modal molecule structure-text model for text-based retrieval and editing. [pdf]
  14. [arXiv 2022.09] A molecular multimodal foundation model associating molecule graphs with natural language. [pdf]
  15. [NeurIPS 2021] GraphFormers: GNN-nested transformers for representation learning on textual graph. [pdf]
  16. [EMNLP 2021] Text2Mol: Cross-modal molecule retrieval with natural language queries. [pdf]
  17. [arXiv 2020.08] Graph-based modeling of online communities for fake news detection. [pdf]
  18. [arXiv 2023] GraphAgent: Exploiting Large Language Models for Interpretable Learning on Text-attributed Graphs. [pdf]
  19. [Nature] Multi-modal Molecule Structure-text Model for Text-based Retrieval and Editing. [pdf]
  20. [KDD 2024] GAugLLM: Improving Graph Contrastive Learning for Text-Attributed Graphs with Large Language Models [pdf]
  21. [arXiv 2024.06] UniGLM: Training One Unified Language Model for Text-Attributed Graphs [pdf]
  22. [KDD 2023] Graph-aware language model pre-training on a large graph corpus can help multiple graph applications [pdf]
  23. [KDD 2023] Heterformer: Transformer-based deep node representation learning on heterogeneous text-rich networks [pdf]
  24. [ICLR 2023] Edgeformers: Graph-empowered transformers for representation learning on textual-edge networks [pdf]
  25. [WSDM 2024] LLMRec: Large language models with graph augmentation for recommendation [pdf]
  26. [NeurIPS 2023] WalkLM: A uniform language model fine-tuning framework for attributed graph embedding [pdf]
  27. [NeurIPS 2023] Learning multiplex embeddings on text-rich networks with one text encoder [pdf]
  28. [EMNLP 2023] MolCA: Molecular graph-language modeling with cross-modal projector and uni-modal adapter [pdf]
  29. [ACL 2023] Patton: Language model pretraining on text-rich networks [pdf]
  30. [IJCAI 2024] Efficient tuning and inference for large language models on textual graphs [pdf]
  31. [WWW 2024] GraphTranslator: Aligning graph model to large language model for open-ended tasks [pdf]
  32. [EMNLP] Pretraining language models with text-attributed heterogeneous graphs [pdf]
  33. [SIGIR 2024] GraphGPT: Graph instruction tuning for large language models [pdf]
  34. [ACL 2024] InstructGraph: Boosting large language models via graph-centric instruction tuning and preference alignment [pdf]
  35. [EMNLP 2023] ReLM: Leveraging language models for enhanced chemical reaction prediction [pdf]
  36. [WWW 2024] Can we soft prompt LLMs for graph learning tasks? [pdf]

4. Benchmarks and Datasets

  1. [arXiv 2024.06] GraphFM: A Comprehensive Benchmark for Graph Foundation Model. [pdf]
  2. [NeurIPS 2023] A Comprehensive Study on Text-attributed Graphs: Benchmarking and Rethinking. [pdf]
  3. [ICLR 2024] Towards Foundational Models for Molecular Learning on Large-Scale Multi-Task Datasets. [pdf]
  4. [Nature] Multi-modal Molecule Structure-text Model for Text-based Retrieval and Editing. [pdf]

5. Other Papers

  1. [ICLR 2024] Thought Propagation: An Analogical Approach to Complex Reasoning with Large Language Models. [pdf]

Contributors

We thank all the contributors to this list; further contributions are very welcome.
