This project tracks the trajectory of a target ship in GPS coordinates from a single video taken by a monocular camera with unknown intrinsic and extrinsic parameters. The first stage of the pipeline detects the ship with a custom-trained YOLOv4 object detector. The trajectory is then estimated by learning a quadratic approximation to the cost function. This project was our team's entry into the AI Tracks at Sea competition held by the US Navy.
Advisor: Dr. Olugbenga Anubi
Members: Ashwin Vadivel, Boluwatife Olabiran, Muhammad Saud Ul Hassan, Yu Zheng
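To give a flavor of the trajectory-estimation step, here is a minimal stand-in sketch of fitting a quadratic by least squares. The actual cost function and fitting procedure used in this project are not shown here; the data and coefficients below are made up for illustration.

```python
import numpy as np

# Hypothetical illustration: fit a quadratic y = a*t^2 + b*t + c to
# trajectory samples via least squares. The project's real quadratic
# cost-function approximation may differ in detail.
def fit_quadratic(t, y):
    # Design matrix with columns [t^2, t, 1]
    A = np.column_stack([t**2, t, np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs  # (a, b, c)

t = np.linspace(0.0, 1.0, 20)
y = 2.0 * t**2 - 1.0 * t + 0.5          # ground-truth quadratic (no noise)
a, b, c = fit_quadratic(t, y)
print(round(a, 3), round(b, 3), round(c, 3))  # recovers 2.0 -1.0 0.5
```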
Use `git clone` to clone the repo onto your local machine. Download the model file and sample video from https://drive.google.com/drive/folders/1dE54rw5-pUVBVzzuK1zqbwRjhWruqXbx?usp=sharing. Next, add the video "19.mp4" to the root of the cloned repo, and place the .trt model file in the "yolo/" directory.
To run the demo, enter the following command:

```shell
python3 main.py \
  --video </path_to_video> \
  -n number_of_points_to_generate \
  -lat source_latitude \
  -lon source_longitude
```
The algorithm will run through every frame of the video and output detections to the terminal. Once complete, the detected trajectory will be saved in the current directory as interpolated_data.csv. The number of points in the trajectory equals the `-n number_of_points_to_generate` argument.
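The saved trajectory can be inspected with a few lines of Python. Note that the column names and coordinate values below are placeholders for illustration; the real header row written by main.py may differ.

```python
import csv
import io

# Hypothetical sample of interpolated_data.csv (column names assumed)
sample = """latitude,longitude
32.7157,-117.1611
32.7160,-117.1605
32.7164,-117.1599
"""

# Parse each row into a (lat, lon) tuple of floats
points = [(float(r["latitude"]), float(r["longitude"]))
          for r in csv.DictReader(io.StringIO(sample))]
print(len(points))   # number of trajectory points (matches the -n argument)
print(points[0])     # (32.7157, -117.1611)
```

For the real file, replace `io.StringIO(sample)` with `open("interpolated_data.csv")`.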
To view the output trajectory, we used the free website https://www.gpsvisualizer.com/.
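GPS Visualizer accepts GPX uploads, so one convenient option is to wrap the (lat, lon) points in a minimal GPX track. This helper and its point values are hypothetical, shown only as a sketch of the conversion.

```python
# Hypothetical helper: wrap (lat, lon) pairs in a minimal GPX 1.1 track,
# a format gpsvisualizer.com can plot. Coordinates below are made up.
def to_gpx(points):
    trkpts = "\n".join(
        f'      <trkpt lat="{lat}" lon="{lon}"></trkpt>'
        for lat, lon in points
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<gpx version="1.1" creator="demo">\n'
        "  <trk>\n"
        "    <trkseg>\n"
        f"{trkpts}\n"
        "    </trkseg>\n"
        "  </trk>\n"
        "</gpx>\n"
    )

gpx = to_gpx([(32.7157, -117.1611), (32.7160, -117.1605)])
print(gpx)
```

The resulting string can be written to a .gpx file and uploaded to the site.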
| Ground Truth Trajectory | Model Predicted Trajectory |
|---|---|
| ![]() | ![]() |