
Fast image augmentation library and easy to use wrapper around other libraries. Documentation: https://albumentations.ai/docs/ Paper about library: https://www.mdpi.com/2078-2489/11/2/125


Albumentations


  • The library works with images in HWC (height, width, channels) format.
  • Faster than other libraries on most transformations.
  • Based on NumPy, OpenCV, and imgaug, picking the best from each.
  • Simple, flexible API that allows the library to be used in any computer vision pipeline.
  • Large, diverse set of transformations.
  • Easy to extend the library to wrap around other libraries.
  • Easy to extend to other tasks.
  • Supports transformations on images, masks, keypoints, and bounding boxes.
  • Supports Python 3.5-3.7.
  • Easy integration with PyTorch.
  • Easy transfer from torchvision.
  • Used to achieve top results in many deep learning competitions at Kaggle, Topcoder, CVPR, and MICCAI.
  • Written by Kaggle Masters.


How to use

  • All-in-one showcase notebook - showcase.ipynb
  • Classification - example.ipynb
  • Object detection - example_bboxes.ipynb
  • Non-8-bit images - example_16_bit_tiff.ipynb
  • Image segmentation - example_kaggle_salt.ipynb
  • Keypoints - example_keypoints.ipynb
  • Custom targets - example_multi_target.ipynb
  • Weather transforms - example_weather_transforms.ipynb
  • Serialization - serialization.ipynb
  • Replay/deterministic mode - replay.ipynb

You can use this Google Colaboratory notebook to adjust image augmentation parameters and see the resulting images.

(Demo images showing augmentations on different domains: parrot, inria, medical, vistas.)

Authors

Alexander Buslaev

Alex Parinov

Vladimir I. Iglovikov

Eugene Khvedchenya

Mikhail Druzhinin

Installation

PyPI

You can use pip to install albumentations:

pip install albumentations

If you want to get the latest version of the code before it is released on PyPI, you can install the library from GitHub:

pip install -U git+https://github.com/albumentations-team/albumentations

It also works in Kaggle GPU kernels (proof):

!pip install albumentations > /dev/null

Conda

To install albumentations using conda, first install imgaug from the conda-forge channel:

conda install -c conda-forge imgaug
conda install albumentations -c conda-forge

Documentation

The full documentation is available at https://albumentations.ai/docs/.

Pixel-level transforms

Pixel-level transforms change only the input image and leave any additional targets such as masks, bounding boxes, and keypoints unchanged. See the documentation for the full list of pixel-level transforms.

Spatial-level transforms

Spatial-level transforms simultaneously change the input image and additional targets such as masks, bounding boxes, and keypoints. Not every transform supports every target; see the documentation for the exact per-transform support. The spatial-level transforms:

  • CenterCrop
  • CoarseDropout
  • Crop
  • CropNonEmptyMaskIfExists
  • ElasticTransform
  • Flip
  • GridDistortion
  • GridDropout
  • HorizontalFlip
  • IAAAffine
  • IAACropAndPad
  • IAAFliplr
  • IAAFlipud
  • IAAPerspective
  • IAAPiecewiseAffine
  • Lambda
  • LongestMaxSize
  • MaskDropout
  • NoOp
  • OpticalDistortion
  • PadIfNeeded
  • RandomCrop
  • RandomCropNearBBox
  • RandomGridShuffle
  • RandomResizedCrop
  • RandomRotate90
  • RandomScale
  • RandomSizedBBoxSafeCrop
  • RandomSizedCrop
  • Resize
  • Rotate
  • ShiftScaleRotate
  • SmallestMaxSize
  • Transpose
  • VerticalFlip

Migrating from torchvision to albumentations

Migrating from torchvision to albumentations is simple - you just need to change a few lines of code. Albumentations has equivalents for common torchvision transforms, as well as plenty of transforms that are not present in torchvision. migrating_from_torchvision_to_albumentations.ipynb shows how to migrate code from torchvision to albumentations.

Benchmarking results

To run the benchmark yourself, follow the instructions in benchmark/README.md.

Results for running the benchmark on the first 2000 images from the ImageNet validation set using an Intel Xeon Platinum 8168 CPU. All outputs are converted to a contiguous NumPy array with the np.uint8 data type. The table shows how many images per second can be processed on a single core; higher is better.

| Transform | albumentations 0.4.5 | imgaug 0.4.0 | torchvision (Pillow-SIMD backend) 0.5.0 | keras 2.3.1 | augmentor 0.2.8 | solt 0.1.8 |
|---|---:|---:|---:|---:|---:|---:|
| HorizontalFlip | 3066 | 1544 | 1652 | 874 | 1658 | 853 |
| VerticalFlip | 4159 | 2014 | 1427 | 4147 | 1448 | 3788 |
| Rotate | 417 | 327 | 160 | 29 | 60 | 113 |
| ShiftScaleRotate | 703 | 471 | 144 | 30 | - | - |
| Brightness | 2210 | 997 | 397 | 210 | 396 | 2058 |
| Contrast | 2208 | 1023 | 330 | - | 331 | 2059 |
| BrightnessContrast | 2199 | 582 | 190 | - | 190 | 1051 |
| ShiftRGB | 2215 | 998 | - | 378 | - | - |
| ShiftHSV | 381 | 241 | 59 | - | - | 128 |
| Gamma | 2340 | - | 686 | - | - | 951 |
| Grayscale | 4961 | 372 | 735 | - | 1423 | 4286 |
| RandomCrop64 | 157376 | 2560 | 41448 | - | 36036 | 35454 |
| PadToSize512 | 2833 | - | 478 | - | - | 2629 |
| Resize512 | 952 | 595 | 885 | - | 873 | 881 |
| RandomSizedCrop_64_512 | 3128 | 881 | 1295 | - | 1254 | 2678 |
| Equalize | 760 | 399 | - | - | 666 | - |
| Multiply | 2184 | 1059 | - | - | - | - |
| MultiplyElementwise | 124 | 197 | - | - | - | - |

Python and library versions: Python 3.7.5 (default, Oct 19 2019, 00:03:48) [GCC 8.3.0], numpy 1.18.1, pillow-simd 7.0.0.post3, opencv-python 4.2.0.32, scikit-image 0.16.2, scipy 1.4.1.

Contributing

To create a pull request to the repository, follow the documentation at docs/contributing.rst.

Adding new transforms

If you are contributing a new transform, make sure to update the "Pixel-level transforms" and/or "Spatial-level transforms" sections of this file (README.md). To do this, run (Python 3 only):

python3 tools/make_transforms_docs.py make

and copy/paste the results into the corresponding sections. To validate your modifications, you can run:

python3 tools/make_transforms_docs.py check README.md

Building the documentation

  1. Go to docs/ directory
    cd docs
    
  2. Install required libraries
    pip install -r requirements.txt
    
  3. Build html files
    make html
    
  4. Open _build/html/index.html in browser.

Alternatively, you can start a web server that rebuilds the documentation automatically whenever a change is detected by running make livehtml.

Competitions won with the library

Albumentations is widely used in computer vision competitions at Kaggle and other platforms.

You can find their names and links to the solutions here.


Comments

On some systems, in the multi-GPU regime, PyTorch may deadlock the DataLoader if OpenCV was compiled with OpenCL optimizations. Adding the following two lines before the library import may help. For more details, see pytorch/pytorch#1355.

import cv2

cv2.setNumThreads(0)
cv2.ocl.setUseOpenCL(False)

Citing

If you find this library useful for your research, please consider citing Albumentations: Fast and Flexible Image Augmentations:

@Article{info11020125,
    AUTHOR = {Buslaev, Alexander and Iglovikov, Vladimir I. and Khvedchenya, Eugene and Parinov, Alex and Druzhinin, Mikhail and Kalinin, Alexandr A.},
    TITLE = {Albumentations: Fast and Flexible Image Augmentations},
    JOURNAL = {Information},
    VOLUME = {11},
    YEAR = {2020},
    NUMBER = {2},
    ARTICLE-NUMBER = {125},
    URL = {https://www.mdpi.com/2078-2489/11/2/125},
    ISSN = {2078-2489},
    DOI = {10.3390/info11020125}
}

You can find the full list of papers that cite Albumentations here.
