This repository introduces the dataset associated with our work R3LIVE, termed the R3LIVE-Dataset. The dataset was collected on the campuses of the University of Hong Kong (HKU) and the Hong Kong University of Science and Technology (HKUST), and contains 14 sequences recorded by exploring both indoor and outdoor environments. R3LIVE-Dataset covers various scenes (e.g., walkways, parks, forests) at different times of day (morning, noon, and evening), so it captures both structured urban buildings and cluttered field environments under different lighting conditions. The dataset also includes three sequences (degenerate_seq_00/01/02) in which the LiDAR, the camera, or both degenerate, caused by occasionally facing the device toward a single and/or texture-less plane (e.g., a wall or the ground). The total traveling length reaches 8.4 km, with a total duration of about 2.0 hours.
Our dataset can be downloaded from Microsoft OneDrive:
https://1drv.ms/f/c/3e715b7aa136191a/EqpK7QnN4OpCqHmL2ykpZ50Bjz3pyJ0kwyvwpBBLtzR4bQ?e=TqPN8E
or from Baidu-NetDisk [百度网盘]:
Link(链接) : https://pan.baidu.com/s/1zmVxkcwOSul8oTBwaHfuFg
Code(提取码): wwxw
A brief overview of these 14 sequences is shown as follows:
Sequence | Duration (s) | Traveling Length (m) [1] | Sensor Degeneration | Return to origin [2] | ArUco marker [3] | Camera exposure time [4] | Scenarios |
---|---|---|---|---|---|---|---|
degenerate_seq_00 | 86 | 53.3 | LiDAR | Yes | | | Outdoor |
degenerate_seq_01 | 85 | 75.2 | LiDAR | Yes | | | Outdoor |
degenerate_seq_02 | 101 | 74.9 | Camera, LiDAR [5] | Yes | | Yes | Indoor |
hku_campus_seq_00 | 202 | 190.6 | ---- | Yes | | | Indoor |
hku_campus_seq_01 | 304 | 374.6 | ---- | | | | Outdoor |
hku_campus_seq_02 | 323 | 354.3 | ---- | Yes | | Yes | Indoor, Outdoor |
hku_campus_seq_03 | 173 | 181.2 | ---- | Yes | | Yes | Indoor, Outdoor |
hku_main_building | 1170 | 1036.9 | ---- | Yes | | Yes | Indoor, Outdoor |
hku_park_00 | 228 | 247.3 | ---- | Yes | | Yes | Outdoor, Cluttered |
hku_park_01 | 351 | 401.8 | ---- | Yes | | Yes | Outdoor, Cluttered |
hkust_campus_00 | 1073 | 1317.2 | ---- | Yes | | Yes | Indoor, Outdoor |
hkust_campus_01 | 1162 | 1524.3 | ---- | Yes | | Yes | Indoor, Outdoor |
hkust_campus_02 | 1618 | 2112.2 | ---- | Yes | | | Outdoor |
hkust_campus_03 | 478 | 503.8 | ---- | Yes | | Yes | Indoor, Outdoor |
Total | 7354 | 8447.6 | | | | | |
[1]: The traveling length is computed from the odometry result of the R3LIVE algorithm.
[2]: Sequences collected by traveling in a loop, starting from and ending at the same position.
[3]: Sequences with an ArUco marker board for providing a ground-truth relative pose.
[4]: Sequences with ground-truth camera exposure time read from the camera's API.
[5]: Very limited visual features are available in this scenario (see Experiment-1 of our paper).
Each of our sequences is released as a simple rosbag file, and you can view the details of each sequence with the command "rosbag info xxx.bag".
Take sequence "hku_main_building.bag" as an example:
rosbag info hku_main_building.bag
You can see:
path: hku_main_building.bag
version: 2.0
duration: 18:11s (1091s)
start: Feb 13 2022 14:55:54.90 (1644735354.90)
end: Feb 13 2022 15:14:06.70 (1644736446.70)
size: 7.3 GB
messages: 266130
compression: none [8266/8266 chunks]
types: livox_ros_driver/CustomMsg [e4d6829bdfe657cb6c21a746c86b21a6]
sensor_msgs/CameraInfo [c9a58c1b0b154e0e6da7578cb991d214]
sensor_msgs/CompressedImage [8f7a12909da2c9d3332d540a0977563f]
sensor_msgs/Imu [6a62c6daae103f4ff57a132d6f95cec2]
topics: /camera/image_color/compressed 16320 msgs : sensor_msgs/CompressedImage
/camera/image_color_frame_info 16320 msgs : sensor_msgs/CameraInfo
/livox/imu 222573 msgs : sensor_msgs/Imu
/livox/lidar 10917 msgs : livox_ros_driver/CustomMsg
The data in each topic is explained as follows:
Topic name | Sensor data | Frequency | Message type |
---|---|---|---|
/camera/image_color/compressed | The recorded color image | 15 Hz | sensor_msgs/CompressedImage |
/camera/image_color_frame_info | The ground-truth camera exposure time [1] | 15 Hz (same as image) | sensor_msgs/CameraInfo |
/livox/imu | The recorded IMU data | 200 Hz | sensor_msgs/Imu |
/livox/lidar | The recorded LiDAR data [2] | 10 Hz | livox_ros_driver/CustomMsg |
[1]: The ground-truth camera exposure time is read from the camera API (see Technical Reference for BFS-U3-13Y3.pdf)
[2]: See livox_ros_driver.
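Besides replaying the bags with "rosbag play", you can also read the messages offline with the rosbag Python API. Below is a minimal sketch under a few assumptions: the bag filename is a placeholder for your downloaded sequence, and the field names of livox_ros_driver/CustomMsg follow the livox_ros_driver message definition referenced above. It decodes the compressed color images with OpenCV and converts each LiDAR scan into an N x 4 array of (x, y, z, reflectivity):

```python
#!/usr/bin/env python
# Minimal sketch: read a R3LIVE-Dataset sequence offline with the rosbag Python API.
# "hku_main_building.bag" is a placeholder for your downloaded bag file.
import cv2
import numpy as np
import rosbag

with rosbag.Bag("hku_main_building.bag") as bag:
    for topic, msg, t in bag.read_messages(
            topics=["/camera/image_color/compressed", "/livox/imu", "/livox/lidar"]):
        if topic == "/camera/image_color/compressed":
            # sensor_msgs/CompressedImage: decode the compressed bytes into a BGR image.
            image = cv2.imdecode(np.frombuffer(msg.data, dtype=np.uint8), cv2.IMREAD_COLOR)
        elif topic == "/livox/imu":
            # sensor_msgs/Imu: angular velocity and linear acceleration of the built-in IMU.
            gyro = (msg.angular_velocity.x, msg.angular_velocity.y, msg.angular_velocity.z)
            acc = (msg.linear_acceleration.x, msg.linear_acceleration.y, msg.linear_acceleration.z)
        else:
            # livox_ros_driver/CustomMsg: one LiDAR scan; each point also carries
            # p.offset_time, the per-point time offset relative to msg.timebase.
            scan = np.array([[p.x, p.y, p.z, p.reflectivity] for p in msg.points],
                            dtype=np.float32)
        # ... process image / gyro, acc / scan here ...
```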
Notice: For the sake of convenience, the ground-truth camera exposure time is stored as a string in "/camera/image_color_frame_info/distortion_model", with its timestamp equal to that of the corresponding image.
You can see the details of camera exposure time by:
rosbag play YOUR_DOWNLOADED.bag
rostopic echo /camera/image_color_frame_info/distortion_model
and you will see output like the following:
"Camera_timestamp = 121202934544, exposure_time = 2.948 ms, gain = 8.9927 db"
In some sequences of our R3LIVE-Dataset, we placed an ArUco marker board to provide a ground-truth reference pose, as shown in the figure below:
The marker board we used was generated with create_board_charuco.cpp from the OpenCV library, using the following command:
[YOUR_BUILD_BIN_FILES] -w=10 -h=14 -sl=300 -ml=250 -d=13 [YOUR_GENERATED_IMAGE]
For example, our command was:
./create_board_charuco -w=10 -h=14 -sl=300 -ml=250 -d=13 ./aruco_maker_board.png
Ideally, you will get a generated marker image identical to the one below; each grid on our printed marker board has a size of 0.0186 m.
You can then use an ArUco marker detector to detect the board and compute ground-truth poses for our R3LIVE-Dataset, using the 0.0186 m grid size as the physical scale, as in the sketch below.
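As a reference, the board can be detected with the ChArUco utilities of opencv-contrib-python (pre-4.7 aruco API assumed). In the OpenCV samples, dictionary id 13 corresponds to DICT_7X7_100; we take the 0.0186 m grid size as the ChArUco square length and scale the marker length by the 250/300 ratio from the generation command. The camera intrinsics and image file below are placeholders that you should replace with your own calibration and data:

```python
#!/usr/bin/env python
# Minimal sketch: estimate the camera pose w.r.t. the ChArUco marker board
# (pre-4.7 cv2.aruco API from opencv-contrib-python assumed).
import cv2
import numpy as np

# Board parameters from the generation command above: 10 x 14 squares,
# dictionary id 13 (DICT_7X7_100), square size 0.0186 m, marker size 0.0186 * 250/300 m.
dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_7X7_100)
board = cv2.aruco.CharucoBoard_create(10, 14, 0.0186, 0.0186 * 250.0 / 300.0, dictionary)

camera_matrix = np.array([[800.0, 0.0, 640.0],
                          [0.0, 800.0, 512.0],
                          [0.0, 0.0, 1.0]])       # placeholder intrinsics
dist_coeffs = np.zeros(5)                         # placeholder distortion coefficients

image = cv2.imread("frame_with_marker_board.png") # placeholder image file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
if ids is not None:
    num, ch_corners, ch_ids = cv2.aruco.interpolateCornersCharuco(corners, ids, gray, board)
    if num > 3:
        ok, rvec, tvec = cv2.aruco.estimatePoseCharucoBoard(
            ch_corners, ch_ids, board, camera_matrix, dist_coeffs, None, None)
        if ok:
            # rvec/tvec give the board pose in the camera frame, which can serve as
            # a ground-truth reference pose for the frames that observe the board.
            print("rvec =", rvec.ravel(), "tvec =", tvec.ravel())
```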
In sequence "degenerate_seq_00", we collected the data while intentionally making the LiDAR sensor face a single plane (i.e., the floor), as shown in the following figure. In such a scenario, the LiDAR is well known to degenerate when estimating the full pose.
Similar to "degenerate_seq_00", sequence "degenerate_seq_01" was collected to test the robustness of R3LIVE in LiDAR-degenerated scenarios, as shown below:
In sequence "degenerate_seq_02", we collected the data by passing through a narrow "T"-shaped passage while occasionally facing the sidewalls, whose visual texture is very limited (see Fig. a and Fig. c). This sequence is used to evaluate the robustness of R3LIVE in simultaneously LiDAR-degenerated and visually texture-less environments. We refer users to Experiment-1 of our paper for more details.
In the four sequences "hku_campus_seq_00/01/02/03", we collected data on the campus of The University of Hong Kong (HKU). We use these sequences to evaluate the capability of R3LIVE in reconstructing the radiance map in real time. The mapping results of R3LIVE on these four sequences are shown as follows:
Our mapping result of sequence "hku_campus_seq_00".
Sequence "hku_campus_seq_01" are collected by walking along the drive way of the HKU campus. (a) is the birdview of the whole radiance map, with its details shown in (b~d).
Sequence ``hku_campus_seq_/02/03" are sampled at the same place but at different times of day (evening and morning, respectively) and with different traveling trajectories. (a) is the birdview of map of sequence ``hku_campus_seq_02", with the closeup view of details are shown in (b) and (c).
In the two sequences "hku_park_00/01", we collected the data in a complex and unstructured environment with many trees, bushes, and flowers. The mapping results of R3LIVE on these two sequences are shown below:
Sequence "hku_park_00" was collected by walking along the pathway of a garden at HKU. (a) is the bird's-eye view of the whole radiance map, with its details shown in (b~d).
Sequence "hku_park_01" was collected in a cluttered environment with many trees and bushes. (a) is the bird's-eye view of the whole radiance map, with its details shown in (b) and (c).
In sequence "hku_main_building", we collect the data in both interior and exterior of the main building of HKU. The radiance map reconstructed by R3LIVE is shown as below
Our reconstructed radiance map of the main building of HKU. (a) The bird's view of the map, with its details shown in (b~n). (b~g) closeup of outdoor scenarios and (h~n) closeup of indoor scenarios.
In the four sequences "hkust_campus_00/01/02/03", we collected the data within the campus of the Hong Kong University of Science and Technology (HKUST); the traveling lengths of "hkust_campus_00/01" reach 1317 and 1524 meters, respectively. We use these sequences to test the ability of R3LIVE to reconstruct the radiance map in real time in large-scale environments.
Sequences "hkust_campus_00/01" were collected within the campus of HKUST along two different traveling trajectories. In (a), we overlay the point cloud of sequence "hkust_campus_00" on the Google Earth satellite image and find that they align well. Details of our reconstructed radiance map are selectively shown in (b~d).
Sequence "hkust_campus_02" was collected by exploring the entrance piazza of HKUST, traveling through both the interior and exterior of the buildings. (a) is the bird's-eye view of the whole radiance map, with outdoor and indoor scenarios selectively shown in (b) and (c), respectively.
Sequence "hkust_campus_03" captures most of the HKUST campus, with the traveling length reaching 2.1 km. We collected the data starting from the seafront (lower left of (a)) and ending at the entrance piazza (upper right of (a)) of HKUST. In (a), we overlay our reconstructed point cloud map (points colored by height) on the Google Earth satellite image and find that they align well. (b) shows a side view of the map. (c~h) are close-up views of the areas marked in (a).
This dataset is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License and is intended for non-commercial academic use. For commercial use, please contact me <ziv.lin.ljrATgmail.com> and Dr. Fu Zhang <fuzhangAThku.hk> to negotiate a different license.
For any technical issues, please contact me (Jiarong Lin) via email < ziv.lin.ljrATgmail.com >.
For commercial use, please contact me < ziv.lin.ljrATgmail.com > and Dr. Fu Zhang < fuzhangAThku.hk >.