Deep learning and spectral embedding for graph partitioning

A deep learning model that integrates spectral embedding to partition a graph so that the normalized cut is as small as possible. Based on the paper Deep Learning and Spectral Embedding for Graph Partitioning.

Minimize the normalized cut

Given a graph G=(V,E), the goal is to find a partition of the vertex set V that minimizes the normalized cut. This problem is NP-hard, so we look for approximate solutions. The Fiedler vector is a relaxed solution of the minimum normalized cut problem, but it is expensive to compute. Our idea is to approximate the Fiedler vector with a neural network and use it as node features for the graph (embedding module). This graph is then fed to a second neural network, trained in an unsupervised fashion by minimizing the expected value of the normalized cut (partitioning module). The training and testing scripts are described below.
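To make the objective concrete, here is a minimal sketch of the classical spectral baseline the model approximates: compute the Fiedler vector of the normalized Laplacian, threshold it at zero to get a 2-way partition, and evaluate its normalized cut. The helper name `normalized_cut` and the small demo graph are ours, not from the repository.

```python
import numpy as np
import networkx as nx

def normalized_cut(G, part):
    """Normalized cut of a 2-way partition: cut(A,B) * (1/vol(A) + 1/vol(B))."""
    A = {v for v in G if part[v] == 0}
    cut = sum(1 for u, v in G.edges if part[u] != part[v])
    vol_A = sum(d for _, d in G.degree(A))
    vol_B = sum(d for _, d in G.degree(set(G) - A))
    return cut * (1.0 / vol_A + 1.0 / vol_B)

# Small demo graph: a 3x6 grid relabelled with integer node ids.
G = nx.convert_node_labels_to_integers(nx.grid_2d_graph(3, 6))

# Fiedler vector = eigenvector of the second-smallest eigenvalue of the
# normalized Laplacian; a dense eigendecomposition is fine at this size.
L = nx.normalized_laplacian_matrix(G).toarray()
_, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]

# Threshold the Fiedler vector at zero to obtain a 2-way partition.
part = {v: int(fiedler[v] >= 0) for v in G}
print(normalized_cut(G, part))
```

On large graphs the dense eigendecomposition becomes the bottleneck, which is exactly the cost the embedding module is trained to avoid.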

Training

The training dataset consists of SuiteSparse matrices, Finite Element triangulations, and random Delaunay graphs. For the SuiteSparse matrices, the user needs to download the matrices from the SuiteSparse Matrix Collection in Matrix Market format and put the .mtx files in the folder dl-spectral-graph-partitioning/suitesparse. In the paper we focus on matrices coming from 2D/3D discretizations. For the Finite Element triangulations, the user can download the matrices from here and put the 3 folders in dl-spectral-graph-partitioning/. Finally, random Delaunay graphs in the unit square and in the rectangle [0,2]x[0,1] are generated automatically. Once the folders are in place, simply run training.py: this trains the embedding and partitioning modules separately and saves the tuned weights.
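The repository generates the random Delaunay graphs automatically; the construction is not shown in this README, but a typical sketch with scipy.spatial.Delaunay looks like the following (the function name random_delaunay_graph and its parameters are our illustration):

```python
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

def random_delaunay_graph(n, width=1.0, height=1.0, seed=0):
    """Sample n uniform points in a width x height rectangle and connect
    them with the edges of their Delaunay triangulation."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, 2)) * np.array([width, height])
    tri = Delaunay(pts)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    # Each simplex is a triangle; add its three edges.
    for a, b, c in tri.simplices:
        G.add_edges_from([(int(a), int(b)), (int(b), int(c)), (int(a), int(c))])
    return G

unit_square = random_delaunay_graph(500)                  # [0,1] x [0,1]
rectangle = random_delaunay_graph(500, width=2.0, height=1.0)  # [0,2] x [0,1]
print(unit_square.number_of_edges(), rectangle.number_of_edges())
```

Delaunay triangulations of points in general position are planar and connected, which makes them convenient stand-ins for 2D mesh graphs.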

Testing

The testing results include a comparison with the normalized cut obtained with Scotch, so the user has to build Scotch themselves (the repo includes a folder for the binaries). Then run test.py with the following arguments:

  • nmin: minimum graph size (default: 100)
  • nmax: maximum graph size (default: 10000)
  • ntest: number of testing graphs (default: 50)
  • dataset: dataset type, chosen among 'delaunay', 'suitesparse', and the Finite Element triangulations 'graded_l', 'hole3', 'hole6' (default: 'delaunay')
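Assuming test.py exposes these as standard long-form command-line flags (the exact syntax is not shown in this README), a test run on Delaunay graphs might look like:

```shell
python test.py --nmin 100 --nmax 10000 --ntest 50 --dataset delaunay
```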

Required software

  • PyTorch
  • PyTorch Geometric
  • NetworkX
  • NumPy
  • SciPy
  • NetworkX-METIS
