
A PyTorch implementation of the graph-based CNN for image segmentation, extending the original work at https://github.com/samleoqh/MSCG-Net


MSCG-Net for Semantic Segmentation

Overview

This repository contains MSCG-Net models (MSCG-Net-50 and MSCG-Net-101) for semantic segmentation in the Agriculture-Vision Challenge and Workshop (CVPR 2020), along with the training and testing pipeline, implemented in PyTorch. Please refer to our paper for details: Multi-view Self-Constructing Graph Convolutional Networks with Adaptive Class Weighting Loss for Semantic Segmentation

Getting Started

Project File Structure

├── checkpoints # output checkpoints, trained weights, log files, tensorboard runs, etc.
├── logs        # model runtime log and function tracing
├── models
├── train.py    # TODO: implement CLI using Click
├── train_R101.py
├── train_R50.py
└── utils       # model blocks, losses, utility code, dataset loaders and pre-processing, config code, etc.

Dependencies

  • python 3.5+
  • pytorch 1.4.0
  • opencv 3.4+
  • tensorboardx 1.9
  • albumentations 0.4.0
  • pretrainedmodels 0.7.4
  • others (see requirements.txt)

Installation

  1. Configure your environment using a virtual environment, Anaconda, or an environment manager of your choice
  2. Run the following from the project root directory to install the mscg-net package and its dependencies (a quick sanity check follows these commands)
pip install -r requirements.txt  # install mscg-core dependencies
pip install -e .  # install mscg-core as an editable package, which resolves import path issues
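
After installation, it may help to confirm that PyTorch was installed correctly and can see a CUDA-capable GPU (training requires one, see the note under How to Train). A minimal check, not part of the repository, might look like:

# optional environment sanity check
import torch

print('torch version :', torch.__version__)
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('GPU           :', torch.cuda.get_device_name(0))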

Usage

Dataset Preparation

NOTE: the current implementation is hardcoded to support the 2021 dataset

  1. Change DATASET_ROOT to your dataset path in ../project_root/util/data/__init__.py
DATASET_ROOT = '/your/path/to/Agriculture-Vision'
  2. Keep the dataset structure the same as the official structure shown below (a verification sketch follows the tree)
.
├── test
│   ├── boundaries
│   ├── images
│   └── masks
├── train
│   ├── boundaries
│   ├── images
│   ├── labels
│   └── masks
└── val
    ├── boundaries
    ├── images
    ├── labels
    └── masks
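
Before training, it can be useful to verify that your dataset folders match this layout. The following is a minimal sketch (not part of the repository; DATASET_ROOT is the path configured above):

# verify the expected Agriculture-Vision directory layout
import os

DATASET_ROOT = '/your/path/to/Agriculture-Vision'

EXPECTED = {
    'train': ['boundaries', 'images', 'labels', 'masks'],
    'val':   ['boundaries', 'images', 'labels', 'masks'],
    'test':  ['boundaries', 'images', 'masks'],
}

for split, subdirs in EXPECTED.items():
    for sub in subdirs:
        path = os.path.join(DATASET_ROOT, split, sub)
        print('ok     ' if os.path.isdir(path) else 'MISSING', path)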

How to Train

NOTE: the current implementation requires an NVIDIA GPU

Solution to Memory Issues on a Linux Machine (Ubuntu 20.04)

  1. IMPORTANT: set up enough memory to support training. NOTE: the existing implementation requires enlarging the swap file to allow up to 150 GB of memory (a quick verification sketch follows this list)
# linux
sudo swapoff -a       # disable the current swap memory file
sudo fallocate -l <amount greater than 120>G /swapfile  # allocate a swap file of the specified size
sudo chmod 600 /swapfile  # restrict permissions on the swap file
sudo mkswap /swapfile   # format the file as swap space
sudo swapon /swapfile   # enable the newly created swap file
  2. Run one of the following from the project root to train
python ./train_R50.py
python ./train_R101.py
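
Before launching a long run, you can confirm that the enlarged swap space is actually active. A minimal check, assuming a Linux system that exposes /proc/meminfo:

# report total swap in GB (Linux only)
def swap_total_gb():
    with open('/proc/meminfo') as f:
        for line in f:
            if line.startswith('SwapTotal:'):
                return int(line.split()[1]) / 1024 ** 2  # value is in kB
    return 0.0

print(f'Swap available: {swap_total_gb():.1f} GB')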

Remarks

CUDA_VISIBLE_DEVICES=0 python ./tools/train_R50.py  # trained weights -> checkpoint1
# train_R101.py                                     # trained weights -> checkpoint2
# train_R101_k31.py                                 # trained weights -> checkpoint3

Please note that we first train these models using Adam combined with Lookahead as the optimizer for the first 10k iterations (around 7-10 epochs) and then switch the optimizer to SGD for the remaining iterations. So you will have to **manually change the code to switch the optimizer** to SGD as follows:

# Change line 48: Copy the file name ('----.pth') of the best checkpoint trained with Adam
train_args.snapshot = '-------.pth'
...
# Comment line 92
# base_optimizer = optim.Adam(params, amsgrad=True)

# uncomment line 93
base_optimizer = optim.SGD(params, momentum=train_args.momentum, nesterov=True)

Test with a single GPU

CUDA_VISIBLE_DEVICES=0 python ./tools/test_submission.py

Trained weights for the 3 models are available for download (save them to ./checkpoint before running test_submission):

checkpoint1, checkpoint2, checkpoint3
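
If you want to inspect a downloaded checkpoint before running test_submission, a minimal sketch is shown below. It assumes the files are ordinary PyTorch checkpoints saved with torch.save; the exact keys and the filename (MSCG-Net-50.pth here is only a placeholder) depend on how training serialized them.

import torch

ckpt = torch.load('./checkpoint/MSCG-Net-50.pth', map_location='cpu')  # placeholder filename
# a checkpoint may be a bare state_dict or a dict that wraps one
state_dict = ckpt.get('state_dict', ckpt) if isinstance(ckpt, dict) else ckpt
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))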

2021 Results Summary

To be added

2020 Results Summary

| Models | mIoU (%) | Background | Cloud shadow | Double plant | Planter skip | Standing water | Waterway | Weed cluster |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MSCG-Net-50 (ckpt1) | 54.7 | 78.0 | 50.7 | 46.6 | 34.3 | 68.8 | 51.3 | 53.0 |
| MSCG-Net-101 (ckpt2) | 55.0 | 79.8 | 44.8 | 55.0 | 30.5 | 65.4 | 59.2 | 50.6 |
| MSCG-Net-101_k31 (ckpt3) | 54.1 | 79.6 | 46.2 | 54.6 | 9.1 | 74.3 | 62.4 | 52.1 |
| Ensemble_TTA (ckpt1,2) | 59.9 | 80.1 | 50.3 | 57.6 | 52.0 | 69.6 | 56.0 | 53.8 |
| Ensemble_TTA (ckpt1,2,3) | 60.8 | 80.5 | 51.0 | 58.6 | 49.8 | 72.0 | 59.8 | 53.8 |
| Ensemble_TTA (new_5model) | 62.2 | 80.6 | 48.7 | 62.4 | 58.7 | 71.3 | 60.1 | 53.4 |

Please note that all our single-model scores are computed with just single-scale (512x512) and single feed-forward inference, without TTA. TTA denotes test-time augmentation (e.g. flip and mirror). Ensemble_TTA (ckpt1,2) denotes an ensemble of two models (checkpoint1 and checkpoint2) with TTA, and (ckpt1,2,3) denotes a three-model ensemble.
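
For reference, flip-and-mirror TTA as used above amounts to averaging predictions over flipped versions of the input, and the ensemble averages the TTA outputs of several models. The sketch below illustrates the idea only (it is not the repository's implementation) and assumes model(x) returns logits of shape [N, C, H, W]:

import torch

def tta_flip_mirror(model, x):
    # average predictions over identity, horizontal flip and vertical flip
    preds = model(x)
    preds = preds + torch.flip(model(torch.flip(x, dims=[3])), dims=[3])  # mirror (width)
    preds = preds + torch.flip(model(torch.flip(x, dims=[2])), dims=[2])  # flip (height)
    return preds / 3.0

def ensemble_tta(models, x):
    # average the TTA predictions of several models
    return torch.stack([tta_flip_mirror(m, x) for m in models]).mean(dim=0)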

Model Size

| Models | Backbones | Parameters (M) | GFLOPs | Inference time (CPU / GPU) |
| --- | --- | --- | --- | --- |
| MSCG-Net-50 | Se_ResNext50_32x4d | 9.59 | 18.21 | 522 / 26 ms |
| MSCG-Net-101 | Se_ResNext101_32x4d | 30.99 | 37.86 | 752 / 45 ms |
| MSCG-Net-101_k31 | Se_ResNext101_32x4d | 30.99 | 37.86 | 752 / 45 ms |

Please note that all backbones use weights pretrained on ImageNet, which can be downloaded automatically (e.g. via the pretrainedmodels package listed in the dependencies). MSCG-Net-101_k31 has exactly the same architecture as MSCG-Net-101, but is trained with an extra 1/3 of the validation set (4,431 images) in addition to the official training images (12,901).
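
The ImageNet-pretrained SE-ResNeXt backbones are available through the pretrainedmodels package listed in the dependencies. A minimal loading sketch (backbone only, not the full MSCG-Net build):

import pretrainedmodels

# downloads ImageNet weights on first use and caches them locally
backbone = pretrainedmodels.__dict__['se_resnext50_32x4d'](
    num_classes=1000, pretrained='imagenet')
backbone.eval()
print(backbone.last_linear)  # classifier head, typically dropped for segmentation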

API Documentation

Generate the Sphinx documentation (see the Makefile) by running the following

# initialize doc 
sphinx-quickstart

# build sphinx docs
sphinx-build -b html docs/source/ docs/build/

Citation

Please consider citing our work if you find the code helpful.

Multi-view Self-Constructing Graph Convolutional Networks with Adaptive Class Weighting Loss for Semantic Segmentation

@InProceedings{Liu_2020_CVPR_Workshops,
author = {Liu, Qinghui and Kampffmeyer, Michael C. and Jenssen, Robert and Salberg, Arnt-Borre},
title = {Multi-View Self-Constructing Graph Convolutional Networks With Adaptive Class Weighting Loss for Semantic Segmentation},
booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}

Self-Constructing Graph Convolutional Networks for Semantic Labeling

@inproceedings{liu2020scg,
  title={Self-Constructing Graph Convolutional Networks for Semantic Labeling},
  author={Qinghui Liu and Michael Kampffmeyer and Robert Jenssen and Arnt-Børre Salberg},
  booktitle={Proceedings of IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium},
  year={2020}
}
