Agronav: Autonomous Navigation Framework for Agricultural Robots and Vehicles using Semantic Segmentation and Semantic Line Detection


Figure: Pipeline of the Agronav framework


Updates

  • 06/22/2023: Revised instructions, tested image inference code.

Instructions


Dependencies

This code has been tested on Python 3.8.

  1. After creating a virtual environment (Python 3.8), install the PyTorch and CUDA packages.

    Using conda:

    conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia

    Using pip:

    pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
  2. Install mmcv

    pip install mmcv-full==1.7.1
    cd segmentation
    pip install -v -e .
  3. Install other dependencies

    conda install numpy scipy scikit-image pqdm -y
    pip install opencv-python yml POT pudb
  4. Install deep-hough-transform

    cd ../lineDetection
    cd model/_cdht
    python setup.py build 
    python setup.py install --user
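
To confirm that the dependencies are installed correctly before moving on, a quick sanity check such as the following can be run. This is a minimal sketch, not part of the repo; it only prints the installed versions and whether PyTorch can see a CUDA device.

import torch
import torchvision
import mmcv

# Expected versions from the steps above: torch 1.13.1, torchvision 0.14.1, mmcv 1.7.1
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("mmcv:", mmcv.__version__)

# The CUDA build should report True here if the GPU and driver are set up.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))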

Training the Semantic Segmentation Model

  1. Download the Agroscapes Segmentation dataset from here and extract the images and labels to data/agronav/images and data/agronav/labels, respectively.

  2. Run python train-agronav.py to start training. Before doing so, check agronav.py, which defines the training configuration (cfg), and edit it to match the model you are training. In particular, set cfg.load_from and cfg.work_dir to your checkpoint path and output directory, respectively (a sketch of these edits follows).
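
A minimal sketch of the cfg edits described in step 2, assuming the MMSegmentation-style Config object provided by mmcv; the config and checkpoint paths below are placeholders rather than files shipped with the repo, and in this repo the cfg object is defined inside agronav.py itself.

from mmcv import Config

# Placeholder config file; in this repo cfg is built in agronav.py.
cfg = Config.fromfile('segmentation/configs/example_model_config.py')

# Point these at your own checkpoint and output directory before training.
cfg.load_from = 'segmentation/checkpoint/pretrained_model.pth'
cfg.work_dir = './work_dirs/agronav_segmentation'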


Training the Semantic Line Detection Model

  1. Download the AgroNav_LineDetection dataset from here and extract it to the data/ directory. The dataset contains images and ground-truth annotations of the semantic lines. The images are outputs of the semantic segmentation model, and each image contains a pair of semantic lines.

  2. Run the following commands for data augmentation and to generate the parametric-space labels.

cd lineDetection
python data/prepare_data_NKL.py --root './data/agroNav_LineDetection' --label './data/agroNav_LineDetection' --save-dir './data/training/agroNav_LineDetection_resized_100_100' --fixsize 400 
  3. Run the following script to obtain a list of filenames of the training data.
python data/extractFilenameList.py

This creates a .txt file with the filenames inside /training. Divide the filenames into train and validation sets (a helper sketch for this split is shown after step 5):

agroNav_LineDetection_train.txt
agroNav_LineDetection_val.txt
  4. Specify the training and validation data paths in config.yml.

  5. Train the model.

python train.py
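
The sketch below (referenced in step 3) shows one way to split the filename list into the two .txt files. It is not part of the repo; the input filename is a placeholder for the file produced by extractFilenameList.py, and the 90/10 ratio is arbitrary.

import random

# Placeholder input: the filename list produced by extractFilenameList.py.
with open('data/training/agroNav_LineDetection.txt') as f:
    names = [line.strip() for line in f if line.strip()]

random.seed(0)                   # fixed seed so the split is reproducible
random.shuffle(names)
split = int(0.9 * len(names))    # 90% train / 10% validation

with open('data/training/agroNav_LineDetection_train.txt', 'w') as f:
    f.write('\n'.join(names[:split]) + '\n')
with open('data/training/agroNav_LineDetection_val.txt', 'w') as f:
    f.write('\n'.join(names[split:]) + '\n')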

Inference

  1. Download the pre-trained checkpoints for semantic segmentation [MobileNetV3, HRNet, ResNest]. Move the downloaded files to ./segmentation/checkpoint/.
  2. Download the pre-trained checkpoints for semantic line detection here. Move the downloaded file to ./lineDetection/checkpoint/.
  3. Move the inference images to ./inference/input.
  4. Run the following command to perform end-to-end inference on the test images. End-to-end inference starts from a raw RGB image and visualizes the detected centerlines.
    python e2e_inference_image.py
  5. The final results with the centerlines are saved in ./inference/output_centerline; intermediate results are saved in ./inference/temp and ./inference/output.

To run the semantic segmentation and line detection models independently, use ./segmentation/inference_image.py and ./lineDetection/inference.py.
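
For reference, the final visualization amounts to drawing each detected centerline onto the input image. The snippet below is a minimal sketch of that idea with OpenCV, not the repo's own plotting code; the image path and endpoint coordinates are placeholders.

import os
import cv2

img = cv2.imread('inference/input/example.png')       # placeholder input image
if img is None:
    raise FileNotFoundError('example.png not found in inference/input')

# Placeholder centerline endpoints in (x, y) pixel coordinates.
(x1, y1), (x2, y2) = (120, 470), (350, 60)

cv2.line(img, (x1, y1), (x2, y2), color=(0, 0, 255), thickness=3)   # red line (BGR)
os.makedirs('inference/output_centerline', exist_ok=True)
cv2.imwrite('inference/output_centerline/example_overlay.png', img)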


Citation

If you find this work helpful, please consider citing it:

@InProceedings{Panda_2023_CVPR,
    author    = {Panda, Shivam K. and Lee, Yongkyu and Jawed, M. Khalid},
    title     = {Agronav: Autonomous Navigation Framework for Agricultural Robots and Vehicles Using Semantic Segmentation and Semantic Line Detection},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {6271-6280}
}
