GIA-HAO

This repo contains the sample code for reproducing the results of the ICLR'22 paper Understanding and Improving Graph Injection Attack by Promoting Unnoticeability.

Full code and instructions will be released soon.

Introduction

We provide several ways to test different GIA methods and their combinations with HAO. Specifically, by specifying an attack method, you can generate a perturbed graph and evaluate the robustness of GNNs on it. You can also evaluate the robustness of different defense models against multiple attack methods in batch. The running commands are given below. We also provide a reproduction script for grb-cora; by changing the parameters according to our paper, you can reproduce the results for the other benchmarks.

To incorporate HAO into your pipeline for evaluating the robustness of GNNs, you only need 3 steps!

import torch
import torch.nn.functional as F
# assuming PyG's gcn_norm and a torch_sparse SparseTensor adjacency
from torch_geometric.nn.conv.gcn_conv import gcn_norm

# step 1: propagate one step (including the injected nodes) without self-connection,
#         which yields the aggregated neighbor features of the injected nodes;
#         features_concat stacks the original and injected node features,
#         and n_total is the number of original nodes
with torch.no_grad():
    features_propagate = gcn_norm(adj_attack, add_self_loops=False) @ features_concat
    features_propagate = features_propagate[n_total:]
# step 2: calculate the node-centric homophily of each injected node
#         (here we implement it with cosine similarity)
homophily = F.cosine_similarity(features_attack, features_propagate)
# step 3: add homophily to your original L_atk with a proper weight, then you make it!
pred_loss += disguise_coe * homophily.mean()

Note that this is only a minimal implementation example; you can also implement HAO with different homophily measures tailored to your own case : )
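
To see where these three steps sit in an attack loop, here is a self-contained, hypothetical sketch of a PGD-style feature optimization with the HAO term. The toy surrogate model, the dense row-normalized adjacency, and all shapes below are illustrative assumptions, not the actual attack code of this repo.

# Hypothetical PGD-style loop showing where the HAO term enters the attack
# objective; the surrogate, adjacency, and shapes are toy assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_total, n_inject, n_feat, n_class = 100, 10, 16, 4
features = torch.randn(n_total, n_feat)              # original node features
labels = torch.randint(0, n_class, (n_total,))       # labels of the target nodes
n_all = n_total + n_inject
adj = (torch.rand(n_all, n_all) > 0.9).float()       # toy adjacency with injected nodes
adj = ((adj + adj.t()) > 0).float()                  # symmetrize
adj_norm = adj / adj.sum(1, keepdim=True).clamp(min=1)  # row-normalize, no self-loops

surrogate = torch.nn.Linear(n_feat, n_class)         # stand-in for a trained GNN
features_attack = (0.1 * torch.randn(n_inject, n_feat)).requires_grad_()
opt = torch.optim.Adam([features_attack], lr=0.1)
disguise_coe = 1.0                                   # weight of the HAO term

for _ in range(100):
    features_concat = torch.cat([features, features_attack], dim=0)
    # steps 1-2: aggregated neighbor features -> node-centric homophily
    features_propagate = (adj_norm @ features_concat)[n_total:]
    homophily = F.cosine_similarity(features_attack, features_propagate)
    # the attacker maximizes the prediction loss on the original (target) nodes
    logits = surrogate(adj_norm @ features_concat)[:n_total]
    pred_loss = F.cross_entropy(logits, labels)
    # step 3: the HAO term rewards injected features that stay homophilous
    pred_loss = pred_loss + disguise_coe * homophily.mean()
    opt.zero_grad()
    (-pred_loss).backward()                          # gradient ascent on pred_loss
    opt.step()

Since the attacker ascends pred_loss, the homophily term penalizes injected features that drift away from their aggregated neighborhoods, which is exactly what makes the attack harder to notice.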

Single attack tests

  • Generating Perturbed Graphs:
# Generating Perturbed Graph with PGD
python gnn_misg.py --dataset 'grb-cora'  --inductive --eval_robo --eval_attack 'gia' --grb_mode 'full' --num_layers 3 --runs 1 --disguise_coe 0

# Generating Perturbed Graph with PGD+HAO
python gnn_misg.py --dataset 'grb-cora'  --inductive --eval_robo --eval_attack 'gia' --grb_mode 'full' --num_layers 3 --runs 1 --disguise_coe 1
  • Evaluating Blackbox Test Robustness:
# Evaluating blackbox test robustness with GCN
python gnn_misg.py --dataset 'grb-cora'  --inductive --eval_robo --eval_attack 'gia' --grb_mode 'full' --num_layers 3 --runs 1 --eval_robo_blk

# Evaluating blackbox test robustness with EGuard
python gnn_misg.py --dataset 'grb-cora'  --inductive --eval_robo --eval_attack 'gia' --grb_mode 'full' --model 'egnnguard' --num_layers 3 --eval_robo_blk --runs 1
  • Evaluating with Targeted Attack: simply adding --eval_target to the running commands above will do the job.

Batch attack tests

Here we use the --batch_eval option of gnn_misg.py to enable batch evaluations. During each evaluation of a GNN model, you can use batch_attacks and report_batch to specify the attacks that you want to evaluate and report.

mkdir atkg
bash run.sh
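
If you prefer scripting your own batch over attacks and defenses instead of editing run.sh, a hypothetical driver could simply loop over invocations of gnn_misg.py with the flags from the single-attack commands above; the attack and model lists here are assumed examples, not the repo's actual configuration.

# Hypothetical batch driver; the attack/model names are assumed examples,
# and the flags mirror the single-attack commands shown above.
import itertools
import subprocess

attacks = ["gia"]          # attacks to evaluate
models = ["egnnguard"]     # defense models to evaluate

for attack, model in itertools.product(attacks, models):
    subprocess.run([
        "python", "gnn_misg.py",
        "--dataset", "grb-cora", "--inductive",
        "--eval_robo", "--eval_attack", attack,
        "--grb_mode", "full", "--model", model,
        "--num_layers", "3", "--runs", "1",
        "--eval_robo_blk",
    ], check=True)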

Citation

@inproceedings{chen2022understanding,
    title={Understanding and Improving Graph Injection Attack by Promoting Unnoticeability},
    author={Yongqiang Chen and Han Yang and Yonggang Zhang and Kaili Ma and Tongliang Liu and Bo Han and James Cheng},
    booktitle={International Conference on Learning Representations},
    year={2022},
    url={https://openreview.net/forum?id=wkMG8cdvh7-}
}
