[ICRA19] Crowd-aware Robot Navigation with Attention-based Deep Reinforcement Learning

CrowdNav with UWB as detection sensor

This repository is forked from CrowdNav. For my experiments, I added a UWB detection module. In the simulation, there are 3 static obstacles (humans) and 3 other robots. The controlled robot uses UWB to detect the static obstacles and also fuses the other robots' UWB readings.

Current issue

The UWB detection calculation uses the SymPy package, and its computation time is quite long.
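
If the symbolic solve is the bottleneck, one possible workaround is a closed-form least-squares trilateration of the range readings. The snippet below is a minimal sketch of that idea using NumPy; the function name and anchor layout are hypothetical, and it is not the code currently in this repository.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Estimate a 2-D position from UWB range readings via linear least squares.

    anchors: (n, 2) array of known anchor positions (n >= 3).
    ranges:  (n,) array of measured distances to each anchor.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    # Subtract the first range equation from the others to linearize the problem.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1)
         - np.sum(anchors[0] ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Example: three anchors and noiseless ranges to the point (1.0, 2.0).
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
target = np.array([1.0, 2.0])
ranges = [float(np.linalg.norm(target - np.array(a))) for a in anchors]
print(trilaterate(anchors, ranges))  # approximately [1.0, 2.0]
```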

CrowdNav

This repository contains the code for our ICRA 2019 paper. For more details, please refer to the paper Crowd-Robot Interaction: Crowd-aware Robot Navigation with Attention-based Deep Reinforcement Learning.

Please find our recent follow-up work on relational graph learning for crowd navigation.

Abstract

Mobility in an effective and socially-compliant manner is an essential yet challenging task for robots operating in crowded spaces. Recent works have shown the power of deep reinforcement learning techniques to learn socially cooperative policies. However, their cooperation ability deteriorates as the crowd grows since they typically relax the problem as a one-way Human-Robot interaction problem. In this work, we want to go beyond first-order Human-Robot interaction and more explicitly model Crowd-Robot Interaction (CRI). We propose to (i) rethink pairwise interactions with a self-attention mechanism, and (ii) jointly model Human-Robot as well as Human-Human interactions in the deep reinforcement learning framework. Our model captures the Human-Human interactions occurring in dense crowds that indirectly affect the robot's anticipation capability. Our proposed attentive pooling mechanism learns the collective importance of neighboring humans with respect to their future states. Various experiments demonstrate that our model can anticipate human dynamics and navigate in crowds with time efficiency, outperforming state-of-the-art methods.

Method Overview
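
As a rough illustration of the attentive pooling described in the abstract, below is a minimal PyTorch-style sketch that scores each human's pairwise interaction embedding and forms a softmax-weighted crowd representation. The module name, layer sizes, and tensor shapes are illustrative assumptions and do not mirror the SARL implementation in this repository exactly.

```python
import torch
import torch.nn as nn

class AttentivePooling(nn.Module):
    """Toy attention-based crowd pooling.

    Each human i is described by an interaction embedding e_i that already
    combines the robot state and that human's state. A scalar attention
    score is computed per human, and the crowd is summarized as the
    softmax-weighted sum of the embeddings.
    """

    def __init__(self, embed_dim=64, attn_hidden=32):
        super().__init__()
        self.score_net = nn.Sequential(
            nn.Linear(embed_dim, attn_hidden),
            nn.ReLU(),
            nn.Linear(attn_hidden, 1),
        )

    def forward(self, embeddings):
        # embeddings: (batch, n_humans, embed_dim)
        scores = self.score_net(embeddings).squeeze(-1)           # (batch, n_humans)
        weights = torch.softmax(scores, dim=-1)                   # attention over humans
        crowd = (weights.unsqueeze(-1) * embeddings).sum(dim=1)   # (batch, embed_dim)
        return crowd, weights

# Example: a batch of 2 scenes, 5 humans each, 64-dim interaction embeddings.
pool = AttentivePooling()
crowd, weights = pool(torch.randn(2, 5, 64))
print(crowd.shape, weights.shape)  # torch.Size([2, 64]) torch.Size([2, 5])
```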

Setup

  1. Install the Python-RVO2 library.
  2. Install crowd_sim and crowd_nav as editable pip packages:
pip install -e .

Getting Started

This repository is organized in two parts: the crowd_sim/ folder contains the simulation environment and the crowd_nav/ folder contains the code for training and testing the policies. Details of the simulation framework can be found here. Below are the instructions for training and testing policies; they should be executed inside the crowd_nav/ folder.

  1. Train a policy.
python train.py --policy sarl
  2. Test policies with 500 test cases.
python test.py --policy orca --phase test
python test.py --policy sarl --model_dir data/output --phase test
  3. Run policy for one episode and visualize the result.
python test.py --policy orca --phase test --visualize --test_case 0
python test.py --policy sarl --model_dir data/output --phase test --visualize --test_case 0
  4. Visualize a test case.
python test.py --policy sarl --model_dir data/output --phase test --visualize --test_case 0
  5. Plot training curve.
python utils/plot.py data/output/output.log

Simulation Videos

  • CADRL
  • LSTM-RL
  • SARL
  • OM-SARL

Learning Curve

Learning curve comparison between different methods in an invisible setting.

Citation

If you find the code or paper useful for your research, please cite our paper:

@misc{1809.08835,
Author = {Changan Chen and Yuejiang Liu and Sven Kreiss and Alexandre Alahi},
Title = {Crowd-Robot Interaction: Crowd-aware Robot Navigation with Attention-based Deep Reinforcement Learning},
Year = {2018},
Eprint = {arXiv:1809.08835},
}
