
This is the official PyTorch implementation of "Rethinking the necessity of image fusion in high-level vision tasks: A practical infrared and visible image fusion network based on progressive semantic injection and scene fidelity"


PSFusion


Framework


The overall framework of the proposed PSFusion.

Recommended Environment

  • torch 1.10.0
  • cudatoolkit 11.3.1
  • torchvision 0.11.0
  • kornia 0.6.5
  • pillow 8.3.2
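
The version list above can be captured in a conda environment file. This is a hedged sketch, not provided by the authors; the channel layout and the choice to install kornia via pip are assumptions:

```yaml
# Assumed environment spec matching the recommended versions above.
# Package channels and the pip section are guesses, not from the repo.
name: psfusion
dependencies:
  - python=3.8
  - pytorch=1.10.0
  - torchvision=0.11.0
  - cudatoolkit=11.3.1
  - pillow=8.3.2
  - pip:
      - kornia==0.6.5
```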

To Test

  1. Download the pre-trained checkpoint from best_model.pth and place it in ./results/PSFusion/checkpoints.
  2. Download the MSRS dataset from MSRS and place it in ./datasets.
  3. Run python test_Fusion.py --dataroot=./datasets --dataset_name=MSRS --resume=./results/PSFusion/checkpoints/best_model.pth

If you need to test other datasets, organize them according to the dataloader and specify --dataroot and --dataset_name.
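
Before running the test script, it can help to verify that the dataset sits where the dataloader will look for it. The helper below is hypothetical (not part of the repo) and assumes the common MSRS convention of paired ir/ and vi/ sub-directories under the dataset root; adjust the expected sub-folders to match the actual dataloader:

```python
import os

def check_dataset_layout(dataroot, dataset_name="MSRS", subdirs=("ir", "vi")):
    """Return the list of expected sub-directories that are missing.

    Assumes a <dataroot>/<dataset_name>/{ir,vi} layout (infrared and
    visible images in parallel folders); this layout is an assumption,
    not taken from the official dataloader.
    """
    base = os.path.join(dataroot, dataset_name)
    expected = [os.path.join(base, sub) for sub in subdirs]
    return [d for d in expected if not os.path.isdir(d)]
```

An empty return value means the assumed layout is present; any returned paths point at folders that still need to be created or downloaded.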

To Train

Before training PSFusion, download the pre-processed MSRS dataset from MSRS and place it in ./datasets.

Then run python train.py --dataroot=./datasets/MSRS --name=PSFusion

Motivation


Comparison of fusion and segmentation results between SeAFusion and our method under harsh conditions.


Comparison of the computational complexity between feature-level fusion and image-level fusion for the semantic segmentation task.

Network Architecture

SDFM

The architecture of the superficial detail fusion module (SDFM) based on the channel-spatial attention mechanism.
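To make the idea of channel-spatial attention fusion concrete, here is a minimal PyTorch sketch. It is an illustration of the general mechanism, not the official SDFM: the layer sizes, the squeeze-excite-style channel branch, and the 7x7 spatial branch are all assumptions:

```python
import torch
import torch.nn as nn

class ChannelSpatialFusion(nn.Module):
    """Illustrative channel-then-spatial attention fusion (not the official SDFM)."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight each channel.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: conv over per-pixel channel statistics.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feat_ir, feat_vi):
        # Concatenate infrared and visible features along channels.
        x = torch.cat([feat_ir, feat_vi], dim=1)
        x = x * self.channel_mlp(x)                       # channel attention
        stats = torch.cat([x.mean(1, keepdim=True),
                           x.amax(1, keepdim=True)], dim=1)
        x = x * self.spatial_conv(stats)                  # spatial attention
        return self.project(x)                            # back to C channels
```

The design choice worth noting is the ordering: channel attention decides which feature maps matter, and spatial attention then highlights where they matter, which matches the "superficial detail" role the caption describes.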

PSFM

The architecture of the profound semantic fusion module (PSFM) based on the cross-attention mechanism.
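A cross-attention fusion of two modalities can be sketched in a few lines of PyTorch. Again this is an assumed illustration of the mechanism, not the official PSFM: it simply lets each modality's tokens query the other's via standard multi-head attention:

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Illustrative bidirectional cross-attention fusion (not the official PSFM)."""

    def __init__(self, channels, heads=4):
        super().__init__()
        self.attn_ir = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.attn_vi = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.project = nn.Linear(2 * channels, channels)

    def forward(self, feat_ir, feat_vi):
        # feat_*: (B, C, H, W) feature maps -> token sequences (B, H*W, C).
        b, c, h, w = feat_ir.shape
        ir = feat_ir.flatten(2).transpose(1, 2)
        vi = feat_vi.flatten(2).transpose(1, 2)
        # Each modality queries the other (query, key, value).
        ir_attends_vi, _ = self.attn_ir(ir, vi, vi)
        vi_attends_ir, _ = self.attn_vi(vi, ir, ir)
        # Concatenate both directions and project back to C channels.
        fused = self.project(torch.cat([ir_attends_vi, vi_attends_ir], dim=-1))
        return fused.transpose(1, 2).reshape(b, c, h, w)
```

Because every query token can attend to every position in the other modality, this kind of module captures global semantic correspondences, which is why cross-attention suits the "profound semantic" stage while the cheaper channel-spatial attention suits the detail stage.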

To Segmentation

Experiments

Qualitative fusion results

MSRS

Qualitative comparison of PSFusion with 9 state-of-the-art methods on the MSRS dataset.

M3FD

Qualitative comparison of PSFusion with 9 state-of-the-art methods on the M3FD dataset.

Quantitative fusion results

MSRS

Quantitative comparisons of the six metrics on 361 image pairs from the MSRS dataset. A point (x, y) on a curve indicates that (100·x)% of the image pairs have metric values no greater than y.
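The curves described in the caption are cumulative distributions of per-pair metric values. A small sketch (hypothetical helper, not from the repo) shows how such a curve is computed from a list of metric scores:

```python
import numpy as np

def cumulative_curve(metric_values):
    """Points (x, y) where a fraction x of image pairs score <= y."""
    y = np.sort(np.asarray(metric_values, dtype=float))
    x = np.arange(1, len(y) + 1) / len(y)   # fraction of pairs up to each y
    return x, y
```

A method whose curve sits further toward the upper-left thus scores higher on more of the image pairs, which is how the comparison in the figure is read.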

M3FD

Quantitative comparisons of the six metrics on 300 image pairs from the M3FD dataset.

Segmentation comparison

MSRS

Segmentation results of various fusion algorithms on the MSRS dataset.

MSRS

Per-class segmentation results on the MSRS dataset.

Potential of image-level fusion for high-level vision tasks

MFNet

Segmentation results of feature-level fusion-based multi-modal segmentation algorithms and our image-level fusion-based solution on the MFNet dataset.

MFNet

Per-class segmentation results of image-level fusion and feature-level fusion on the MFNet dataset.

If this work is helpful to you, please cite it as:

@article{TANG2023PSFusion,
  title={Rethinking the necessity of image fusion in high-level vision tasks: A practical infrared and visible image fusion network based on progressive semantic injection and scene fidelity},
  author={Tang, Linfeng and Zhang, Hao and Xu, Han and Ma, Jiayi},
  journal={Information Fusion},
  volume={99},
  pages={101870},
  year={2023},
  publisher={Elsevier}
}
