img2dataset


Easily turn a set of image urls into an image dataset.

Also supports saving captions for url+caption datasets.

Install

pip install img2dataset

Usage

First, get a list of image urls. For example:

echo 'https://placekitten.com/200/305' >> myimglist.txt
echo 'https://placekitten.com/200/304' >> myimglist.txt
echo 'https://placekitten.com/200/303' >> myimglist.txt

Then, run the tool:

img2dataset --url_list=myimglist.txt --output_folder=output_folder --thread_count=64 --image_size=256

The tool will then automatically download the images, resize them, and store them in the following format:

  • output_folder
    • 0
      • 0.jpg
      • 1.jpg
      • 2.jpg

or in this format if the webdataset output format is chosen:

  • output_folder
    • 0.tar containing:
      • 0.jpg
      • 1.jpg
      • 2.jpg

with each number being the position in the list. The subfolders avoid having too many files in a single folder.

If captions are provided, they will be saved as 0.txt, 1.txt, ...

This can then easily be fed into machine learning training or any other use case.
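
As an illustration (not part of the tool itself), here is a minimal Python sketch of iterating over the files output described above and loading image/caption pairs, assuming Pillow is installed:

from pathlib import Path
from PIL import Image

# Iterate over the "files" layout: output_folder/<subfolder>/<index>.jpg (+ optional <index>.txt caption)
for jpg_path in sorted(Path("output_folder").glob("*/*.jpg")):
    image = Image.open(jpg_path)
    caption_path = jpg_path.with_suffix(".txt")
    caption = caption_path.read_text() if caption_path.exists() else None
    # feed (image, caption) into the training pipeline here
    print(jpg_path.name, image.size, caption)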

API

This module exposes a single function, download, which takes the same arguments as the command-line tool (see the example after this list):

  • url_list A file with the list of urls of images to download, one per line (required)
  • image_size The side length to resize images to (default 256)
  • output_folder The path to the output folder (default "images")
  • thread_count The number of threads used for downloading the pictures. Setting this high is important for performance. (default 256)
  • resize_mode The way to resize pictures, can be no, border or keep_ratio (default border)
    • no doesn't resize at all
    • border will make the image image_size x image_size and add a border
    • keep_ratio will keep the ratio and make the smallest side of the picture image_size
  • resize_only_if_bigger resize pictures only if they are bigger than image_size (default False)
  • output_format decides how to save pictures (default files)
    • files saves as a set of subfolders containing pictures
    • webdataset saves as tars containing pictures
  • input_format decides how to load the urls (default txt)
    • txt loads the urls from a text file, one url per line
    • csv loads the urls and optional caption as a csv
    • parquet loads the urls and optional caption as a parquet
  • url_col the name of the url column for parquet and csv (default url)
  • caption_col the name of the caption column for parquet and csv (default None)
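
For example, a minimal sketch of calling download from Python with the arguments above, assuming download is importable from the img2dataset package and that myimglist.txt exists as in the usage section:

from img2dataset import download

# Download the urls listed in myimglist.txt, resize them to 256x256 with a border,
# and save the result as tar files (webdataset format).
download(
    url_list="myimglist.txt",
    image_size=256,
    output_folder="output_folder",
    thread_count=64,
    resize_mode="border",
    output_format="webdataset",
    input_format="txt",
)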

Road map

This tool works as is. However, future goals include:

  • support for multiple input files
  • support for csv or parquet files as input
  • benchmarks for 1M, 10M, 100M pictures

For development

Either locally, or in gitpod (run export PIP_USER=false there).

Set up a virtualenv:

python3 -m venv .env
source .env/bin/activate
pip install -e .

To run tests:

pip install -r requirements-test.txt

then

python -m pytest -v tests -s

Benchmarks

cd tests
bash benchmark.sh

1000 images/s is the currently observed performance, which is 3.6M images per hour.
