Code for reproducing the results in the paper "Do autoencoders need a bottleneck for anomaly detection?".
Runs with `baetorch` (built on PyTorch) and the Neural Tangent Kernel.
- Code for preprocessing the benchmark datasets and industrial datasets is in the `benchmark` and `case_study` folders, respectively.
- Code for analysing the results is in the `analyse` folder.
Main codes are in the base folder:

- `01-Main-Run.py` executes the training and prediction with the deterministic AE, VAE, and BAEs; hyperparameter grids are specified in `Params_{Dataset}.py` (e.g. `Params_ZEMA.py` for the ZeMA dataset).
- `02-Run-InfiniteBAE.py` executes a similar evaluation run, but with an infinitely wide BAE using the Neural Tangent Kernel.
- The bottleneck can be adjusted by setting the `latent_factor` or `skip` hyperparameters in the grids.
- The main code creates a new `experiments` folder and saves the results as CSV files.
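As a rough illustration of how such a hyperparameter grid expands into individual runs, here is a minimal sketch. The value ranges and the expansion with `itertools.product` are assumptions for illustration only; they are not taken from the repo's actual `Params_{Dataset}.py` files, which only share the `latent_factor` and `skip` parameter names.

```python
from itertools import product

# Hypothetical grid: `latent_factor` scales the bottleneck size relative
# to the input dimension, `skip` toggles skip connections. The candidate
# values below are made up for illustration.
grid = {
    "latent_factor": [0.1, 0.5, 1.0, 2.0],
    "skip": [False, True],
}

# Expand the grid into every combination of hyperparameter values,
# one dict per experimental configuration.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]

print(len(configs))  # 4 latent factors x 2 skip settings = 8 configs
```

Each resulting config dict would then correspond to one training-and-prediction run whose results land in the `experiments` folder.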
Yong, Bang Xiang, and Alexandra Brintrup. "Do autoencoders need a bottleneck for anomaly detection?." arXiv preprint arXiv:2202.12637 (2022).
```bibtex
@article{yong2022autoencoders,
  title={Do autoencoders need a bottleneck for anomaly detection?},
  author={Yong, Bang Xiang and Brintrup, Alexandra},
  journal={arXiv preprint arXiv:2202.12637},
  year={2022}
}
```