
AcinoSet: A 3D pose dataset of Cheetahs in the wild

Daniel Joska, Liam Clark, Naoya Muramatsu, Ricardo Jericevich, Fred Nicolls, Alexander Mathis, Mackenzie W. Mathis, Amir Patel

AcinoSet is a dataset of 13 free-running cheetahs in the wild that contains 119,490 frames of multi-view synchronized high-speed video footage, camera calibration files, and 8,522 human-annotated frames. We utilize markerless animal pose estimation with DeepLabCut to provide 2D keypoints. Then we use three methods that can serve as strong baselines for 3D pose estimation tool development: traditional sparse bundle adjustment, an Extended Kalman Filter, and a trajectory optimization-based method we call Full Trajectory Estimation. We believe this dataset will be a useful benchmark for a diverse range of fields such as ecology, robotics, biomechanics, and computer vision.
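The 2D keypoints are stored as standard DeepLabCut H5 files (columns indexed by scorer, bodypart, and x/y/likelihood). A minimal sketch of inspecting one with pandas follows; the file path and bodypart name are placeholders, not paths guaranteed by this repo:

import pandas as pd

# Placeholder path to one of the provided DeepLabCut H5 outputs
df = pd.read_hdf("data/2019_03_09/jules/flick1/cam1.h5")

# Columns form a MultiIndex: (scorer, bodyparts, coords)
scorer = df.columns.get_level_values("scorer")[0]
bodyparts = df.columns.get_level_values("bodyparts").unique()
print(len(df), "frames;", list(bodyparts))

# x/y pixel coordinates and likelihood of one (placeholder) bodypart, first frame
print(df[scorer]["nose"].iloc[0])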

AcinoSet code by:

Prerequisites

  • Python 3 and Anaconda; all code dependencies are specified in the conda environment files.

2D --> 3D Data Pipeline:

What we provide:

The following sections document how this was created by the code within this repo:

Pre-trained DeepLabCut Model:

  • You can use the full_cheetah model provided in the DLC Model Zoo to re-create the H5 files (or to run on new videos); a usage sketch follows below.
  • We also provide the videos and the H5 outputs of all frames.
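As a rough illustration, re-creating the H5 files (or analysing new videos) with the DeepLabCut Python API could look like this; the config path and video list are placeholders and depend on how you obtained the full_cheetah model:

import deeplabcut

# Placeholder paths: point these at the full_cheetah project config and your videos
config_path = "/path/to/full_cheetah/config.yaml"
videos = ["/path/to/new_cheetah_video.mp4"]

# Run inference; by default this writes an H5 file of 2D keypoints next to each video
deeplabcut.analyze_videos(config_path, videos, save_as_csv=False)

# Optional: overlay the predictions on the video for a quick visual check
deeplabcut.create_labeled_video(config_path, videos)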
Labelling Cheetah Body Positions:

If you want to label more cheetah data, you can also do so within the DeepLabCut framework. We provide a conda file for an easy install, but please see the DeepLabCut repo for installation details and instructions for use.

$ conda env create -f conda_envs/DLC.yml -n DLC
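If you are starting a fresh labelling project rather than extending an existing one, the standard DeepLabCut workflow is roughly the following (a sketch with placeholder names and paths; see the DeepLabCut documentation for the authoritative steps):

import deeplabcut

# Create a new project around your cheetah videos (placeholder names and paths)
config_path = deeplabcut.create_new_project(
    "cheetah-labelling", "your_name",
    ["/path/to/cheetah_video.mp4"],
    copy_videos=False,
)

# Extract candidate frames from the videos, then label them in the GUI
deeplabcut.extract_frames(config_path, mode="automatic", algo="kmeans")
deeplabcut.label_frames(config_path)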

Optionally: Manually Defining the Shared Points for 3D Calibration:

You can manually define points on each video with Argus. Documentation is here.

Build the environment:

$ conda env create -f conda_envs/argus.yml -n argus

Launch Argus/Clicker:

$ python
>>> import argus_gui as ag; ag.ClickerGUI()

Keyboard Shortcuts:

  • G ... to go to a specific frame
  • X ... to switch the sync mode setting the windows to the same frame
  • A ... to use the auto-tracker
  • 7, Y, U, I ... to grow the viewfinder at the bottom right
  • O ... to bring up the options dialog
  • S ... to bring up a save dialog

Then you must convert the output data from Argus to work with the rest of the pipeline (here is an example):

$ python converter_argus.py \
    --data_dir ../data/2019_03_07/extrinsic_calib/videos

Intrinsic & Extrinsic Calibration:

Build the environment.

$ conda env create --file conda_envs/cv.yml

Launch Jupyter Lab:

$ jupyter lab

Run calib_with_gui.ipynb, and follow the instructions.
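For context, the intrinsic half of this step is standard checkerboard calibration; the notebook in this repo is the actual tool to use. A minimal OpenCV sketch of the idea, with placeholder image paths and board size:

import glob
import cv2
import numpy as np

# Placeholder: inner-corner count of the checkerboard and the calibration images
board_size = (9, 6)
images = glob.glob("calib_images/*.jpg")

# 3D coordinates of the board corners in the board's own frame (z = 0 plane)
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in images:
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Recover the camera matrix K and the lens distortion coefficients
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None
)
print("RMS reprojection error:", rms)
print("Camera matrix:\n", K)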

Full Trajectory Optimization:

Prepare the environment.

$ pyenv local anaconda3-5.2.0/envs/cv

Run full_traj_opt.py, or use the supplied Jupyter Notebook:

$ python full_traj_opt.py \
    --n_camera 6 \
    --logs_dir ../logs \
    --configs_dir ../configs \
    --data_dir ../data/2019_03_09/jules/flick1 \
    --scene_file ../data/2019_03_09/extrinsic_calib/scene_sba.json

If you want to view the 3D animation, run full_traj_optimisation.ipynb and follow the instructions!
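Conceptually, the trajectory optimization trades off agreement with the 2D detections (reprojection error) against smoothness of the 3D trajectory over time; the repo's actual formulation lives in full_traj_opt.py. The toy sketch below illustrates that trade-off for a single keypoint with synthetic cameras and scipy, not the solver or model used in this repo:

import numpy as np
from scipy.optimize import least_squares

# Synthetic ground truth: a smooth 3D trajectory over T frames, seen by C cameras
T, C = 50, 6
t = np.linspace(0, 1, T)
X_true = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), 0.1 * t], axis=1)

# Toy cameras: identity intrinsics, spread along x, pushed back along z
P = np.stack([np.hstack([np.eye(3), [[c - 2.5], [0.0], [5.0]]]) for c in range(C)])

def project(P_c, X):
    Xh = np.hstack([X, np.ones((len(X), 1))])   # homogeneous coordinates
    proj = (P_c @ Xh.T).T
    return proj[:, :2] / proj[:, 2:3]           # perspective divide

rng = np.random.default_rng(0)
obs = np.stack([project(P[c], X_true) + 0.01 * rng.normal(size=(T, 2)) for c in range(C)])

def residuals(x, smooth_weight=1.0):
    X = x.reshape(T, 3)
    # Reprojection residuals: projected 3D point vs. observed 2D keypoint
    res = [(project(P[c], X) - obs[c]).ravel() for c in range(C)]
    # Smoothness residuals: penalize large second differences (acceleration)
    res.append(smooth_weight * (X[2:] - 2 * X[1:-1] + X[:-2]).ravel())
    return np.concatenate(res)

sol = least_squares(residuals, np.zeros(T * 3))
X_est = sol.x.reshape(T, 3)
print("mean 3D error:", np.linalg.norm(X_est - X_true, axis=1).mean())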
