Autonomous video editing powered by Object Tracking and Motion Detection
The first method of video editing uses Object Tracking. A deep learning model built with PyTorch, YOLOv3, and OpenCV tracks objects in a given video. Using this model, the user specifies which objects they would like to scan for; the program then cuts the video along the frames where those objects appear and splices the cuts together to create a new scene.
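The splice step above can be sketched in a few lines: given the frame indices where a chosen object was detected, group them into contiguous segments to cut out and join. This is an illustrative sketch, not the repository's actual code; the function name and `max_gap` parameter are assumptions.

```python
def frames_to_segments(frame_indices, max_gap=1):
    """Group sorted frame indices into (start, end) segments,
    merging runs separated by at most `max_gap` frames.
    Hypothetical helper for illustration only."""
    segments = []
    for idx in sorted(frame_indices):
        if segments and idx - segments[-1][1] <= max_gap:
            segments[-1][1] = idx  # extend the current segment
        else:
            segments.append([idx, idx])  # start a new segment
    return [tuple(s) for s in segments]

# Example: the object appears in frames 3-5 and again in 10-11
print(frames_to_segments([3, 4, 5, 10, 11]))  # → [(3, 5), (10, 11)]
```

Each `(start, end)` pair marks one cut; concatenating the cuts yields the new scene.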
The other method of video editing uses Motion Detection. Consecutive frames of a given video are compared by computing the difference between the RGB channels of each pixel, and the video is cut wherever the amount of motion crosses the given threshold.
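The per-pixel comparison could look roughly like the following sketch. It is not the project's actual implementation; the function name and the per-pixel change threshold of 30 are assumptions made for illustration.

```python
import numpy as np

def motion_percentage(frame_a, frame_b, pixel_threshold=30):
    """Percentage of pixels whose largest RGB channel difference
    between two frames exceeds `pixel_threshold`.
    Illustrative sketch, not the repository's code."""
    # Cast to a signed type so the subtraction does not wrap around
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    changed = diff.max(axis=-1) > pixel_threshold
    return 100.0 * changed.mean()

# Two synthetic 4x4 RGB frames: the top half changes brightness sharply
a = np.zeros((4, 4, 3), dtype=np.uint8)
b = a.copy()
b[:2] = 200  # top two rows change
print(motion_percentage(a, b))  # → 50.0
```

A cut would then be made whenever this percentage crosses the user-supplied motion threshold.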
First, clone this repository and then install the required dependencies (preferably in a virtual environment) with pip:
pip install -r requirements.txt
- Run the shell script download_weights.sh, or run
wget https://pjreddie.com/media/files/yolov3.weights
(you only need to do this once)
- Specify the object tracking method
- Provide the path of the video you would like to read and the output path for the edited copy. (You can also choose random to have a random object in the video selected, for fun.)
- Once objects are detected, choose which ones to edit the video around from the displayed list
$ python3 main.py object videos/short-clip.mp4 out/test-output2.mp4
- Specify the motion algorithm
- Provide the path of the video you would like to read, the output filename, and the motion threshold values. (Random is also an option if you want a random motion percentage.)
$ python3 main.py motion videos/short-clip.mp4 edits/motion-test-variety-new.mp4 10 15