This repo contains unified multi-threaded inference nodes for monocular 3D object detection, depth prediction, and semantic segmentation.
You can check out the ROS1 version of each inference package:
In this repo, we fully restructure the code and message formats for ROS2 (Humble) and integrate multi-threaded inference for the three vision tasks.
- Currently, all pretrained models are trained with the visionfactory repo and thus focus on outdoor autonomous driving scenarios. However, any ONNX model that satisfies the interface can be plugged in.
This repo relies on ROS2 and onnxruntime:
pip3 install onnxruntime-gpu
For Jetson Orin, find a pip wheel at https://elinux.org/Jetson_Zoo; any version newer than 1.3 works.
Under the workspace directory, edit the launch file to set the topic names and ONNX checkpoint paths, then build the workspace (you can choose to launch any of the three tasks from the launch file):
colcon build --symlink-install
source install/setup.bash
ros2 launch ros2_vision_inference detect_seg_depth.launch.xml
Notice a known issue with ROS2 Python packages: launch/rviz files are copied, not symlinked, when we run "colcon build". So whenever we modify the launch file, we need to rebuild the package; and whenever we want to modify the rviz file, we need to save it explicitly in the src folder.
colcon build --symlink-install --packages-select=ros2_vision_inference # rebuilding only ros2_vision_inference
Subscribed topics:

- /image_raw (sensor_msgs/Image)
- /camera_info (sensor_msgs/CameraInfo)

Published topics:

- /depth_image (sensor_msgs/Image): depth image, float type.
- /seg_image (sensor_msgs/Image): RGB-colorized segmentation image.
- /mono3d/bbox (visualization_msgs/MarkerArray): 3D bounding box outputs.
- /point_cloud (sensor_msgs/PointCloud2): /seg_image colors projected with /depth_image into a point cloud.
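The depth-to-point-cloud projection is standard pinhole back-projection. A minimal sketch with NumPy (the function name and the toy intrinsics here are illustrative, not the node's actual code):

```python
"""Sketch: back-project a float depth image into 3D points using pinhole
camera intrinsics, as done when fusing /depth_image into /point_cloud."""
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Depth image [H, W] (meters) -> point array [H*W, 3] in camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# toy example: a flat plane 2 m in front of the camera
depth = np.full((4, 4), 2.0, dtype=np.float32)
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```

In the node, each point would additionally carry the RGB color of the corresponding /seg_image pixel before being packed into a PointCloud2 message.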
- MonoDepth ONNX: `def forward(self, normalized_images[1, 3, H, W], P[1, 3, 4]) -> float[1, 1, H, W]`
- Segmentation ONNX: `def forward(self, normalized_images[1, 3, H, W]) -> long[1, H, W]`
- Mono3D ONNX: `def forward(self, normalized_images[1, 3, H, W], P[1, 3, 4]) -> scores[N], bboxes[N, 12], cls_indexes[N]`
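All three interfaces take a `normalized_images[1, 3, H, W]` tensor. A minimal preprocessing sketch (the mean/std values below are the common ImageNet statistics and are an assumption; the exact normalization is defined by the training pipeline in visionfactory):

```python
"""Sketch: turn an HWC uint8 RGB image into the normalized_images[1, 3, H, W]
float32 tensor expected by the ONNX interfaces. Normalization constants are
illustrative ImageNet stats, not confirmed values from this repo."""
import numpy as np

MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # assumed per-channel mean
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)   # assumed per-channel std

def preprocess(image_hwc_uint8):
    """HWC uint8 RGB image -> NCHW float32 batch of size 1."""
    img = image_hwc_uint8.astype(np.float32) / 255.0
    img = (img - MEAN) / STD
    return img.transpose(2, 0, 1)[None]  # [1, 3, H, W]

img = np.zeros((288, 512, 3), dtype=np.uint8)
batch = preprocess(img)
print(batch.shape)  # (1, 3, 288, 512)
```

The resulting batch, together with a `[1, 3, 4]` camera projection matrix P where required, would then be fed to an onnxruntime session; the exact input names depend on how each model was exported.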
Class definitions are from the visionfactory repo.