By Stefan Kahl, Thomas Wilhelm-Stein, Holger Klinck, Danny Kowerko, and Maximilian Eibl
We provide a baseline system for the LifeCLEF bird identification task BirdCLEF2018. We encourage participants to build upon the code base and share their results for future reference. We will keep the repository updated and will add improvements and submission boilerplate in the future.
If you have any questions or problems running the scripts, don't hesitate to contact us.
Contact: Stefan Kahl, Technische Universität Chemnitz, Media Informatics
E-Mail: [email protected]
This project is licensed under the terms of the MIT license.
Please cite the paper in your publications if the repository helps your research.
@article{kahl2018recognizing,
title={Recognizing Birds from Sound - The 2018 BirdCLEF Baseline System},
author={Kahl, Stefan and Wilhelm-Stein, Thomas and Klinck, Holger and Kowerko, Danny and Eibl, Maximilian},
journal={arXiv preprint arXiv:1804.07177},
year={2018}
}
You can download our paper here: https://arxiv.org/abs/1804.07177
This is a Theano/Lasagne implementation in Python 2.7 for the identification of hundreds of bird species based on their vocalizations. The code has been tested on Ubuntu 16.04 LTS but should work with other distributions as well.
Before cloning the repository, you need to install CUDA, cuDNN, OpenCV, Libav, Theano and Lasagne. You can find more detailed instructions below. After that, you can use the Python package tool pip to install any missing dependencies once you have downloaded the repository:
git clone https://github.com/kahst/BirdCLEF-Baseline.git
cd BirdCLEF-Baseline
sudo pip install -r requirements.txt
On your host system you need to ...
- Install Docker Engine Utility for NVIDIA GPUs
- Clone repository
git clone https://github.com/kahst/BirdCLEF-Baseline.git
- Run
./docker-run <path_to_datasets>
The docker-run.sh script takes care of all required tasks (see Workflow).
You can download the BirdCLEF training and test data via https://www.crowdai.org.
You need to register for the challenges to access the data. After the download, unpack the two archives and set the path in the config.py script to the resulting directory containing the "wav" and "xml" folders.
Note: The dataset is quite large; you will need ~250 GB for the training data.
Our workflow consists of four main phases: First, we sort the BirdCLEF training data. Second, we extract spectrograms from the audio recordings. Third, we train a deep neural net on the resulting spectrograms - we treat the audio classification task as an image processing problem. Finally, we test the trained net on a local validation set of unseen audio recordings.
We want to divide the dataset into a train and validation split. The validation split should comprise 10% of the entire dataset and should contain at least one sample per bird species. Additionally, we want to copy the samples into folders named after the class they represent. The training script uses subfolders as class names (labels), so the sorted dataset should look like this:
dataset
|
+--train
|  |
|  +---species1
|  |      file011.wav
|  |      file012.wav
|  |      ...
|  |
|  +---species2
|  |      file021.wav
|  |      file022.wav
|  |      ...
|  |
|  +---...
|
+--val
|  |
|  +---species1
|  |      file013.wav
|  |      ...
|  |
|  +---species2
|  |      file023.wav
|  |      ...
|  |
|  +---...
|
+--metadata
      file011.json
      file012.json
      ...
Before running the script sort_data.py, you need to adjust the path pointing to the extracted wav and xml files from the BirdCLEF training data by setting the value of TRAINSET_PATH in the config.py. We are using the scientific name of each species as label, which makes it easier to include background species in the metric for evaluation. However, you can use any class name you want - the class ID provided with the xml files would work equally well.
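For reference, the corresponding entry in the config.py might look like this (the path below is just a placeholder for your local copy of the extracted training data):

TRAINSET_PATH = '/home/user/datasets/BirdCLEF2018/'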
The metadata directory contains JSON files that store some additional information, most importantly the list of background species for each recording.
Note: You can use any other dataset for training, as long as you organize it in the same way. Simply adjust the sorting script accordingly.
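If you want to adapt the sorting to your own data, a minimal sketch of the split logic could look like the following - src_dir, dst_dir and get_label() are placeholders, and the actual implementation lives in sort_data.py:

import os
import random
import shutil

def sort_dataset(src_dir, dst_dir, get_label, val_ratio=0.1):

    # group all wav files by their class label (e.g. scientific species name)
    files_per_class = {}
    for fname in os.listdir(src_dir):
        if fname.lower().endswith('.wav'):
            files_per_class.setdefault(get_label(fname), []).append(fname)

    for label, files in files_per_class.items():
        random.shuffle(files)

        # keep ~10% for validation, but at least one recording per species
        num_val = max(1, int(len(files) * val_ratio))
        splits = {'val': files[:num_val], 'train': files[num_val:]}

        # copy the files into dataset/<split>/<label>/
        for split, split_files in splits.items():
            target = os.path.join(dst_dir, split, label)
            if not os.path.exists(target):
                os.makedirs(target)
            for fname in split_files:
                shutil.copy(os.path.join(src_dir, fname), target)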
Extracting spectrograms from audio recordings is a vital part of our system. We decided to use MEL-scale log-amplitude spectrograms, each representing one second of a recording. We are using librosa for all of the audio processing. The script utils/audio.py contains all the logic; you can run it stand-alone with the provided example wav file.
You can run the script spec.py to start the extraction - this might take a while, depending on your CPU.
The config.py contains a section with all important settings, like sample rate, chunk length and cut-off frequencies. We are using these settings as defaults:
SAMPLE_RATE = 44100
SPEC_FMIN = 300
SPEC_FMAX = 15000
SPEC_LENGTH = 1.0
SPEC_OVERLAP = 0.25
SPEC_MINLEN = 1.0
SPEC_SIGNAL_THRESHOLD = 0.001
Most monophonic recordings from the BirdCLEF dataset are sampled at 44.1 kHz; we apply a high-pass filter at 300 Hz and a low-pass filter at 15 kHz. Our signal chunks are 1 s long - you can use any other chunk length if you like. The SPEC_OVERLAP value defines the step width for the extraction; consecutive spectrograms overlap by the specified amount. The SPEC_MINLEN value excludes all chunks shorter than 1 s from the extraction.
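As an illustration of how these settings come together, a rough sketch of the chunking and MEL spectrogram extraction with librosa could look like this - the actual code lives in utils/audio.py, and n_fft, hop_length, the filename and the normalization below are assumptions rather than the exact values used there:

import librosa
import numpy as np

def get_mel_spec(sig, rate=44100, fmin=300, fmax=15000):

    # MEL-scale spectrogram of one signal chunk (e.g. 1 s of audio)
    mel = librosa.feature.melspectrogram(y=sig, sr=rate, n_fft=1024,
                                         hop_length=256, fmin=fmin, fmax=fmax)

    # log-amplitude scaling
    mel = librosa.power_to_db(mel, ref=np.max)

    # normalize to [0, 1] so the spec can be treated like a grayscale image
    mel -= mel.min()
    if mel.max() > 0:
        mel /= mel.max()

    return mel

# split a recording into overlapping 1 s chunks (step = SPEC_LENGTH - SPEC_OVERLAP)
sig, rate = librosa.load('example.wav', sr=44100, mono=True)
step = int((1.0 - 0.25) * rate)
chunks = [sig[i:i + rate] for i in range(0, len(sig) - rate + 1, step)]
specs = [get_mel_spec(c, rate) for c in chunks]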
Our rule-based spectrogram analysis rejects samples that do not contain any bird sounds. It also estimates the signal-to-noise ratio based on some simple calculations. The rejection threshold is set through the SPEC_SIGNAL_THRESHOLD value and will be preserved in the filename of the saved spectrogram file.
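The precise rejection rules are part of utils/audio.py; a heavily simplified version of such a rule-based check might look like this (the median-based masking is an illustrative assumption, not the exact rule used):

import numpy as np

def contains_signal(spec, threshold=0.001):

    # count cells that clearly exceed both the row and column medians
    row_median = np.median(spec, axis=1, keepdims=True)
    col_median = np.median(spec, axis=0, keepdims=True)
    mask = (spec > row_median * 3) & (spec > col_median * 3)

    # crude signal-to-noise estimate: share of "active" cells
    signal_ratio = mask.sum() / float(spec.size)

    return signal_ratio > threshold, signal_ratio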
If your dataset is sorted and all specs have been extracted, you can start training your own CNN. If you changed some of the paths, make sure to adjust the settings in the config.py accordingly.
There are numerous settings that you can change to adjust the net itself and the training process. Most of them can have a significant impact on the duration of the training process, memory consumption, and result quality.
All options are preceded by a comment explaining the impact of changes - if you still have questions or run into any trouble, please do not hesitate to contact us.
To start the training, simply run the script train.py. This will automatically call the following procedures:
- parsing the dataset for samples
- building a neural net
- compiling Theano test and train functions
- generating batches of samples (incl. augmentation - see the sketch below)
- training the net for some epochs
- validating the net after each epoch
- saving snapshots at certain points during training
- saving the best snapshot after training has completed
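Augmentation is applied to the spectrograms in the image domain; a minimal sketch of two typical augmentations (frequency shift and additive noise - illustrative choices, not necessarily the exact set used by the batch generator) could look like this:

import numpy as np

def augment(spec, vroll=0.05, noise=0.05):

    # randomly shift the spectrogram along the frequency axis
    max_shift = int(spec.shape[0] * vroll)
    shift = np.random.randint(-max_shift, max_shift + 1)
    spec = np.roll(spec, shift, axis=0)

    # add some gaussian noise
    spec = spec + np.random.normal(0.0, noise, spec.shape)

    return np.clip(spec, 0.0, 1.0)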
When finished (this might take a long time), you can find the best model in the snapshot/ directory, named after the run name specified in the config.py.
Note: If you run out of GPU memory, you should consider lowering the batch size and/or input size of the net, or dial down on the parameter count of the net (But hey: Who wants to do that?).
We already created a local validation split with sort_data.py. We now make use of these unseen recordings and assess the performance of the best snapshot from training (e.g. TEST_MODEL = 'BirdCLEF_TUC_CLO_EXAMPLE_model_epoch_50.pkl').
Testing includes the spectrogram extraction for each test recording (specify how many specs to use with MAX_SPECS_PER_FILE) and the prediction of class scores for each segment. Finally, we calculate the global score for the entire recording by pooling the individual scores of all specs. We use Mean Exponential Pooling for that - you can change the pooling strategy in test.py by adjusting this line:
p_pool = np.mean((p * 2) ** 2, axis=0)
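For comparison, two common alternatives would be plain average pooling and max pooling; the squaring in the line above simply lets confident segments dominate the pooled score:

p_pool = np.mean(p, axis=0)  # plain average pooling
p_pool = np.max(p, axis=0)   # max pooling - a single confident spec decides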
The local validation split from our baseline approach contains 4399 recordings - 10% of the entire training set, but at least one recording per species. The metric we use is called Mean Label Ranking Average Precision (MLRAP), and our best net scores an MLRAP of 0.000 including background species (TEST_WITH_BG_SPECIES = True).
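Label ranking average precision is also available in scikit-learn, so a sketch of the score computation could look like this - y_true and y_score are placeholders for a binary ground-truth matrix (including background species) and the pooled per-recording predictions:

from sklearn.metrics import label_ranking_average_precision_score

# y_true:  (num_recordings, num_classes) binary ground truth incl. background species
# y_score: (num_recordings, num_classes) pooled prediction scores per recording
mlrap = label_ranking_average_precision_score(y_true, y_score)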
The results are competitive, but still - there is a lot of room for improvements :)
If you want to experiment with the system and evaluate different settings or CNN layouts, you can simply change some values in the config.py and run the script evaluate.py. This will automatically run the training, save a snapshot and test the trained model using the local validation split. All you have to do is sit and wait for a couple of hours :)
We will soon provide the code that you can use to generate a valid submission for the monophone task...
Note: You will need to download the test data available from crowdai.org first.
The versions you need differ depending on your OS and GPU. The installation process listed below should work with Ubuntu 16.04 LTS and any CUDA-capable NVIDIA GPU.
First of all, you should update your system:
sudo apt-get update
sudo apt-get upgrade
Download CUDA 9.1 (you might want to use newer versions, if available):
Install CUDA:
sudo dpkg -i cuda-repo-ubuntu1604_9.1.85-1_amd64.deb
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda
Add the paths to the .bashrc:
export PATH=/usr/local/cuda-9.1/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-9.1/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
Note: You should be able to run nvidia-smi as a command and see some details about your GPU. If not, the proper drivers are missing; you can install the drivers for your GPU with e.g. sudo apt-get install nvidia-390.
Download cuDNN (you need to be registered):
https://developer.nvidia.com/cudnn
Installing from a Tar File:
Navigate to your directory containing the cuDNN Tar file. Unzip the cuDNN package.
tar -xzvf cudnn-9.0-linux-x64-v7.tgz
Copy the following files into the CUDA Toolkit directory.
sudo cp cuda/include/cudnn.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
Prerequisites (incl. Python):
sudo apt-get install python-dev python-pip libblas-dev liblapack-dev cmake
sudo pip install numpy scipy cython
Install gpuarray:
https://deeplearning.net/software/libgpuarray/installation.html
git clone https://github.com/Theano/libgpuarray.git
cd libgpuarray
mkdir Build
cd Build
cmake .. -DCMAKE_BUILD_TYPE=Release # or Debug if you are investigating a crash
make
sudo make install
cd ..
sudo python setup.py build
sudo python setup.py install
sudo ldconfig
Install Theano:
git clone https://github.com/Theano/Theano.git
cd Theano
sudo pip install -e .
Adjust the .theanorc in your home directory to select the GPU and enforce deterministic behavior:
[global]
device=cuda
floatX=float32
optimizer_excluding=low_memory
mode=FAST_RUN
[dnn.conv]
algo_bwd_filter=deterministic
algo_bwd_data=deterministic
[gpuarray]
preallocate=0
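To verify that Theano actually picks up the GPU, a quick sanity check from the command line should suffice - if everything is set up correctly, the printed device should read "cuda" and the import typically reports the detected GPU and cuDNN version:

python -c "import theano; print(theano.config.device)"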
Install the latest version of Lasagne directly from GitHub:
sudo pip install https://github.com/Lasagne/Lasagne/archive/master.zip
We use OpenCV for image processing; you can install the cv2 package for Python by running this command:
sudo apt-get install python-opencv
The audio processing library Librosa uses the Libav tools:
sudo apt-get install libav-tools
If you have trouble with some of the installation steps, you can open an issue or contact us. Theano and Lasagne offer comprehensive installation guides, too - you should consult them for further information.