The RoboCup@Home VizBox is a small web server that @Home robots can run to visualise what is going on.
The main page shows:
- An outline of the current challenge and where the robot is in the story of that challenge.
- Subtitles of what the robot and operator just said: their conversation.
- Images of what the robot sees or a visualisation of the robot's world model, e.g. camera images, its map, anything that makes clear to the audience what is going on.
Additionally, the server accepts HTTP POSTs on /command, through which a command sentence can be submitted to the robot.
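For example, a command sentence could be submitted from Python. Note that the port number and the payload field name used here are assumptions; check server.py for the actual values:

```python
# Hypothetical example of submitting a command sentence to the VizBox server.
# The port and the payload format are assumptions; see server.py.
import requests

response = requests.post("http://localhost:8888/command",
                         data={"command": "Robot, go to the kitchen"})
print(response.status_code)
```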
The server abstracts over the underlying robot via a Backend. A backend accepts messages from the robot's internals and forwards each message to the web page via websockets.
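As a minimal sketch of that idea (not the actual vizbox code), a backend could look roughly like this, assuming the server keeps a collection of open Tornado websocket handlers:

```python
# Minimal sketch of the Backend idea, not the actual vizbox API: the backend
# takes messages coming from the robot's internals and broadcasts each one
# to every connected websocket client.
import json

class Backend(object):
    def __init__(self, websocket_clients):
        # Currently open tornado.websocket.WebSocketHandler instances
        self.websocket_clients = websocket_clients

    def on_robot_message(self, topic, payload):
        # Forward one message from the robot to all open web pages
        message = json.dumps({"topic": topic, "data": payload})
        for client in self.websocket_clients:
            client.write_message(message)
```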
Currently, only a ROS backend is implemented:
| Topic | Message type | Description |
|-------|--------------|-------------|
| `operator_text` | `std_msgs/String` | What the robot has heard the operator say |
| `robot_text` | `std_msgs/String` | What the robot itself is saying |
| `challenge_step` | `std_msgs/UInt32` | Index of the active item in the plan or action sequence the robot is executing |
| `image` | `sensor_msgs/Image` | Image of what the robot sees, or anything else interesting to the audience |
| `command` | `std_msgs/String` | Command sentence HTTP POSTed to the robot |
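On the robot side, consuming a POSTed command then comes down to subscribing to the command topic. A hypothetical rospy listener (the node name is arbitrary) could look like this:

```python
#!/usr/bin/env python
# Hypothetical robot-side listener for command sentences that were
# HTTP POSTed to the server and republished on the 'command' topic.
import rospy
from std_msgs.msg import String

def handle_command(msg):
    rospy.loginfo("Operator commanded: %s", msg.data)

if __name__ == "__main__":
    rospy.init_node("command_listener")
    rospy.Subscriber("command", String, handle_command)
    rospy.spin()
```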
- TODO: allow the robot to push its action sequence and the challenge name to the server. This would allow for GPSR action sequences etc.
git clone [email protected]:WalkingMachine/vizbox.git
cd vizbox
pip install -r requirements.txt --user
roscore # in separate terminal
./server.py image:=/usb_cam/image_raw # Remaps the image topic to the output of the USB cam, see below
Open the web page on localhost in a browser.
To reproduce the screenshot:
roslaunch usb_cam usb_cam-test.launch # separate terminal
rostopic pub /challenge_step std_msgs/UInt32 "data: 0" --once
rostopic pub /robot_text std_msgs/String "data: 'Hello operator'" --once
rostopic pub /operator_text std_msgs/String "data: 'Robot, follow me'" --once
rostopic pub /robot_text std_msgs/String "data: 'OK, I will follow you'" --once;
rostopic pub /challenge_step std_msgs/UInt32 "data: 1" --once
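Equivalently, the same conversation can be scripted with rospy instead of separate rostopic calls. This is a sketch; the node name is arbitrary, and the topic names match the list above:

```python
#!/usr/bin/env python
# Drive the screenshot demo from a single Python script.
import rospy
from std_msgs.msg import String, UInt32

rospy.init_node("vizbox_demo")
step = rospy.Publisher("challenge_step", UInt32, queue_size=1, latch=True)
robot_text = rospy.Publisher("robot_text", String, queue_size=1, latch=True)
operator_text = rospy.Publisher("operator_text", String, queue_size=1, latch=True)
rospy.sleep(1.0)  # give the server's subscribers time to connect

step.publish(UInt32(0))
robot_text.publish(String("Hello operator"))
operator_text.publish(String("Robot, follow me"))
robot_text.publish(String("OK, I will follow you"))
step.publish(UInt32(1))
```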