
Enhancing Abnormality identification:
Robust Out-Of-Distribution strategies for Deepfake Detection

Novel architectures and strategies for OOD detection employing the collaborative efforts of In-Distribution classifiers and Out-Of-Distribution detectors.

Out-Of-Distribution in Deepfake Detection domain

These techniques improve the robustness of deepfake detection. Our study integrates Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), presenting two distinct architectures that share a common strategy. The first exploits the image-reconstruction capabilities of the CNN model, while the second incorporates attention estimation into the study. Auxiliary data produced by the ID classifier and other components are exploited by the custom Abnormality module to infer whether a sample is Out-Of-Distribution.
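As an illustrative sketch only (not the repository's actual implementation), an abnormality score of this kind can combine the ID classifier's softmax confidence with the autoencoder's reconstruction error; the function name, the weighting scheme, and the `alpha` parameter below are assumptions for the example:

```python
import numpy as np

def abnormality_score(logits: np.ndarray, x: np.ndarray, x_rec: np.ndarray,
                      alpha: float = 0.5) -> float:
    """Toy OOD score: low classifier confidence and high reconstruction
    error both push the score up. `alpha` weights the two terms."""
    # Softmax confidence of the ID classifier (max class probability).
    z = np.exp(logits - logits.max())
    confidence = float((z / z.sum()).max())
    # Mean squared reconstruction error of the autoencoder branch.
    rec_error = float(np.mean((x - x_rec) ** 2))
    # Higher score = more likely Out-Of-Distribution.
    return alpha * (1.0 - confidence) + (1.0 - alpha) * rec_error
```

A confidently classified, well-reconstructed sample yields a score near 0, while an ambiguous, poorly reconstructed one scores higher.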

Proposed Architecture

The full treatment of this research study is covered in this pdf file.

Data

The CDDB dataset can be downloaded at the following link: Download

Models

You can download the pre-trained models from the following link: Download

Installation

To install the required dependencies, run the following command:

```shell
git clone https://github.com/FabrCas/master_thesis.git
cd master_thesis
pip install -r ./setup/requirements.txt
```

Run the main file once to create the necessary folders:

```shell
python main.py
```

Then unzip the downloaded archives and move the CDDB dataset into the data folder and the pre-trained models into the models folder.
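If you prefer to script this step, a minimal helper can extract each archive into the right workspace folder; the archive filenames in the commented calls are placeholders for whatever you actually downloaded:

```python
import zipfile
from pathlib import Path

def extract_into(archive_path: str, target_dir: str) -> None:
    """Extract a downloaded zip archive into the given workspace folder."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(target)

# Archive names below are examples; substitute the files you downloaded.
# extract_into("CDDB.zip", "data")
# extract_into("models.zip", "models")
```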

Workspace File System

├──  data/
├───── CDDB/
├───── cifar10/
├───── cifar100/
├───── DTD/
├───── FashionMNIST/
├───── MNIST/
├───── SVHN/
├───── tinyimagenet/
├──  models/
├───── benchmarks/
├───── bin_class/
├───── ood_detection/
├──  results/
├───── benchmarks/
├───── bin_class/
├───── ood_detection/
├──  scripts/
├───── ...
├──  setup/
├───── requirements.txt
├──  static/
├───── ...
├──  bin_classifier.py
├──  bin_ViTClassifier.py
├──  dataset.py
├──  experiments.py
├──  __init__.py
├──  main.py
├──  models_2.py
├──  models.py
├──  multi_classifier.py
├──  ood_detection.py
├──  README.md
├──  LICENSE
├──  test_dataset.py
├──  test_models.py
├──  launch_bin_classifier.py
├──  launch_bin_ViTClassifier.py
├──  launch_experiments.py
├──  launch_ood_detector.py
└──  utilities.py

Launching the Software

To launch the software and test the proposed approaches, run the following command:

```shell
python main.py
```

Use the following main parameters:

  • --help, show execution details.

  • --useGPU, run on a local GPU (True or False). Defaults to True.

  • --verbose, print additional information to the console (True or False). Defaults to True.

  • --m, choose which test to run among:

    • benchmark_synth, CIFAR10 OOD benchmark using only synthetic data.
    • benchmark, CIFAR10 OOD benchmark using synthetic and real outliers.
    • df_content, Deepfake detection in the Content scenario (Faces) with the ViT-based approach.
    • df_group, Deepfake detection in the Group scenario (GANs) with the U-Net-based approach.
    • df_mix, Deepfake detection in the Mix scenario with the U-Net-based approach.
    • abn_content_synth, OOD detection in the Content scenario (Faces) using only synthetic data.
    • abn_content, OOD detection in the Content scenario (Faces) using synthetic and real outliers.
    • abn_group, OOD detection in the Group scenario (GANs) using synthetic and real outliers.
    • abn_mix, OOD detection in the Mix scenario using synthetic and real outliers.

Example:

```shell
python main.py --verbose True --useGPU True --m benchmark
```

Individual modules for deepfake and OOD detection, e.g. ood_detection.py, can also be used directly by following the procedure in the launch_*.py files. An example:

```python
from bin_ViTClassifier import DFD_BinViTClassifier_v7
from ood_detection import Abnormality_module_ViT

# Define the In-Distribution classifier
scenario = "content"
classifier_name = "faces_ViTEA_timm_DeiT_tiny_separateTrain_v7_13-02-2024"
classifier_type = "ViTEA_timm"
autoencoder_type = "vae"
prog_model_timm = 3  # (tiny DeiT)
classifier_epoch = 25
autoencoder_epoch = 25

classifier = DFD_BinViTClassifier_v7(scenario=scenario, model_type=classifier_type,
                                     model_ae_type=autoencoder_type,
                                     prog_pretrained_model=prog_model_timm)

# Load classifier & autoencoder weights
classifier.load_both(classifier_name, classifier_epoch, autoencoder_epoch)

# Train the Abnormality module
type_encoder = "encoder_v3"
abn = Abnormality_module_ViT(classifier, scenario=scenario, model_type=type_encoder)
abn.train()
```

References

Dataset

CDDB: Continual Deepfake Detection Benchmark: Dataset, Methods, and Essentials. paper.

OOD benchmarks

  • CIFAR-10/CIFAR-100. Official page: link
  • MNIST, The MNIST Database of Handwritten Digit Images for Machine Learning Research. paper
  • FMNIST, Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. paper
  • SVHN. Official page: link
  • DTD, Describing Textures in the Wild. paper
  • Tiny ImageNet, 80 Million Tiny Images: A Large Data Set for Nonparametric Object and Scene Recognition. paper

Models

  • ResNet, Deep Residual Learning for Image Recognition. paper
  • AutoEncoder, Stacked Convolutional Auto-Encoders for Hierarchical Feature Extraction. paper
  • VAE, Auto-Encoding Variational Bayes. paper
  • U-Net, Convolutional Networks for Biomedical Image Segmentation. paper
  • ViT (Vision Transformer), An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. paper
  • DeiT, Training data-efficient image transformers & distillation through attention. paper

OOD resources

  • A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks. paper
  • CutMix, Regularization Strategy to Train Strong Classifiers with Localizable Features. paper
  • ODIN, Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks. paper
  • Confidence Branch, Learning Confidence for Out-of-Distribution Detection in Neural Networks. paper
  • Visual Attention, Leveraging Visual Attention for out-of-distribution Detection. paper

License

This project is licensed under the MIT License.
