CN116160440A - Remote operation system of double-arm intelligent robot based on MR remote control - Google Patents


Info

Publication number
CN116160440A
CN116160440A (application CN202211564363.9A)
Authority
CN
China
Prior art keywords: real, virtual, scene, remote control, module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211564363.9A
Other languages
Chinese (zh)
Inventor
王芸
王玉成
李芬
赵娜娜
谢超
叶晓东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Advanced Manufacturing Technology
Original Assignee
Institute of Advanced Manufacturing Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Advanced Manufacturing Technology filed Critical Institute of Advanced Manufacturing Technology
Priority to CN202211564363.9A
Publication of CN116160440A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00: Controls for manipulators
    • B25J13/006: Controls for manipulators by means of a wireless system for controlling one or several manipulators
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J9/1605: Simulation of manipulator lay-out, design, modelling of manipulator
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1679: Programme controls characterised by the tasks executed
    • B25J9/1682: Dual arm manipulator; Coordination of several manipulators
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697: Vision controlled systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a teleoperation system for a double-arm intelligent robot based on MR remote control, comprising: a robot, in which a double-arm manipulator and a binocular camera are mounted on a walking vehicle body; an MR remote control terminal, namely an MR helmet worn on the operator's head; and an information processing unit, a central computing and processing device that establishes communication with the walking vehicle body, the double-arm manipulator, the binocular camera and the MR helmet. The binocular camera, the MR helmet and the central computing and processing device together form a virtual reality system. Through this virtual reality system, the invention achieves real-time visual teleoperation in complex environments that people cannot enter; the two arms can turn flexibly, their speed is controllable and their freedom of movement is good, effectively solving the problem that existing complex operations endanger human life.

Description

Remote operation system of double-arm intelligent robot based on MR remote control
Technical Field
The invention relates to the technical field of robot applications, and in particular to a teleoperation system for a double-arm intelligent robot based on an MR remote control terminal.
Background
To ensure personal safety, in some situations with complex environments and uncertain dangers, related operations cannot be carried out by sending people directly into the dangerous area. In the prior art, a camera is fixedly mounted on a robot, and the robot and camera are sent into the target area to carry out the operation. This arrangement greatly limits the robot's field of view: when the robot vibrates, it is difficult to keep the camera aimed at the target object. Especially in teleoperation, the robot's flexibility is poor and fast, efficient operation is hard to achieve, so this approach cannot meet the needs of the many related fields in a rapidly developing society.
Disclosure of Invention
To avoid the shortcomings of the prior art, the invention provides a teleoperation system for a double-arm intelligent robot based on an MR remote control terminal, offering good freedom of movement, flexible operation and human-machine collaborative visual operation. Using virtual reality technology, an operator controls the robot in real time to work in complex environments, solving the problem that existing complex operations endanger human life.
The invention adopts the following technical scheme for solving the technical problems:
The remote operation system of a double-arm intelligent robot based on MR remote control according to the invention comprises:
A robot: a double-arm manipulator and a binocular camera are mounted on a walking vehicle body;
An MR remote control terminal: an MR helmet worn on the operator's head;
An information processing unit: a central computing and processing device that establishes communication connections with the walking vehicle body, the double-arm manipulator, the binocular camera and the MR helmet; the binocular camera, the MR helmet and the central computing and processing device together form a virtual reality system;
The central computing and processing device builds an immersive control environment for the operator by reconstructing a three-dimensional model of the real space. After the communication connection is established, the robot acquires real external scene data in real time with the binocular camera and transmits it back to the server; a virtual scene is created on the Unity platform and deployed to the mixed reality head-mounted display via Visual Studio, and the operator visually manipulates the robot in real time to perform the relevant tasks.
The remote operation system of the double-arm intelligent robot based on MR remote control is also characterized in that: the binocular camera scans the three-dimensional environment of the site, and a digital twin environment is formed through data-fusion calculation with the robot's own state sensors. The complete three-dimensional environment around the robot and the robot's three-dimensional pose are displayed in the digital twin environment, the three-dimensional scene is presented through the immersive display function of the MR helmet, and the operator performs interactive operations such as zooming, translation and rotation with gestures, observing the task site from different viewing angles.
The remote operation system of the double-arm intelligent robot based on MR remote control is also characterized in that: the MR remote control terminal has a user intention analysis module, a mixed reality capturing module, an image preprocessing module, a communication module, a visual algorithm processing module, an output processing module, a mixed reality rendering module and a mixed reality interaction module. The system exports the reconstructed three-dimensional model in obj file format and places it into the Unity platform scene; after the real scene data is superimposed on the virtual scene through background transparentization, the system displays the robot's surveillance video, the three-dimensional virtual scene, the robot state information, the commander communication information and the control interface in an immersive interface. The operator interacts with the control interface using virtual gestures via the gesture recognition function, inputs commands, and controls the robot to perform the relevant tasks.
The remote operation system of the double-arm intelligent robot based on MR remote control is also characterized in that:
The communication between the communication module in the MR remote control terminal and the server uses the SocketAsyncEventArgs method and proceeds in the following steps:
Step 1: create a SocketAsyncEventArgs object and a Socket object;
Step 2: set the callback method, the buffer and the UserToken property of the SocketAsyncEventArgs object;
Step 3: using the SocketAsyncEventArgs object as a parameter, create an asynchronous connection with the ConnectAsync method of the Socket object to connect to the server;
Step 4: receive messages asynchronously with the ReceiveAsync method;
Step 5: send messages asynchronously with the SendAsync method.
The remote operation system of the double-arm intelligent robot based on MR remote control is also characterized in that:
Before virtual-real interaction, the mixed reality interaction module in the MR remote control terminal first establishes an accurate mapping between the virtual scene and the real environment: it converts the virtual space and the real space into a unified coordinate system, achieving scene registration and ensuring accurate position matching between the virtual scene and the real environment, as follows:

Let $P_1(x_1, z_1)$ and $P_2(x_2, z_2)$ be two points in the virtual space, and let their one-to-one corresponding points in the real space be $P'_1(x'_1, z'_1)$ and $P'_2(x'_2, z'_2)$. The vectors $\vec{V}$ and $\vec{V}'$ are obtained from formulas (a) and (b):

$$\vec{V} = (x, z) \qquad (a)$$
$$\vec{V}' = (x', z') \qquad (b)$$

where $x = x_2 - x_1$, $z = z_2 - z_1$, $x' = x'_2 - x'_1$, $z' = z'_2 - z'_1$.

Let $V(V.x, V.z)$ be any point in the virtual space, and let $R(R.x, R.z)$ be its corresponding point in the real space. The coordinate relationship between $V$ and $R$ is characterized by formulas (c) and (d):

$$R.x = ratio \cdot [\cos\theta \cdot (V.x - x_1) - \sin\theta \cdot (V.z - z_1)] + x'_1 \qquad (c)$$
$$R.z = ratio \cdot [\sin\theta \cdot (V.x - x_1) + \cos\theta \cdot (V.z - z_1)] + z'_1 \qquad (d)$$

where $\theta$ is the angle between $\vec{V}$ and $\vec{V}'$.

$\sin\theta$, $\cos\theta$ and $ratio$ are obtained from formulas (e), (f) and (g):

$$\sin\theta = \frac{x \cdot z' - z \cdot x'}{\sqrt{x^2 + z^2}\,\sqrt{x'^2 + z'^2}} \qquad (e)$$
$$\cos\theta = \frac{x \cdot x' + z \cdot z'}{\sqrt{x^2 + z^2}\,\sqrt{x'^2 + z'^2}} \qquad (f)$$
$$ratio = \frac{\sqrt{x'^2 + z'^2}}{\sqrt{x^2 + z^2}} \qquad (g)$$

Formulas (c) and (d) realize the interconversion of the virtual coordinate system and the real-space coordinate system. By traversing the information of each node in the 2D structure diagram, the position in the real-space coordinate system is found and the virtual model is constructed; the scene content is arranged into the MR helmet in real time through this mapping.
The remote operation system of the double-arm intelligent robot based on MR remote control is also characterized in that:
The mixed reality interaction module in the MR remote control terminal supports three types of interaction: static superposition, real-time tracking interaction and real-time modeling interaction.
Static superposition superimposes virtual content in a static real scene; the virtual content is either text describing a target real object or a preset virtual object.
Real-time tracking interaction identifies and tracks a real object by tracking preset features or models, and performs real-time superposition or position-based collision interaction on it.
Real-time modeling interaction models a moving real object in real time to obtain its pose and contour information, and creates a point-cloud copy of the object from this information, so that the point-cloud copy and the virtual object can interact finely.
The remote operation system of the double-arm intelligent robot based on MR remote control is also characterized in that: the MR helmet has mixed reality display glasses, which include:
a real scene acquisition module, which captures real-time real scene data of the outside world;
a virtual scene generation module, which generates the virtual scene;
an image fusion module, which fuses the real scene and the virtual scene by superposition;
a spatial positioning module, which captures real-time position information of the space;
an image display module, which displays the combined virtual and real image.
By superimposing the real-time real scene data on the virtual scene through background transparentization, the MR helmet displays the surveillance video of the binocular camera, the three-dimensional virtual scene, the robot state information, the operator communication information and the control interface in an immersive interface.
The remote operation system of the double-arm intelligent robot based on MR remote control is also characterized in that:
The walking vehicle body adopts a crawler-type walking mechanism and is equipped with a communication module and a central controller. The communication module links the crawler-type walking mechanism with the central controller; the central controller receives control instructions from the MR helmet and the central computing and processing device and performs the corresponding task operations on the walking vehicle body.
The remote operation system of the double-arm intelligent robot based on MR remote control is also characterized in that:
The double-arm manipulator comprises hands and mechanical arms; the proximal end of each mechanical arm is mounted on the walking vehicle body, and the hand is mounted at the distal end of the arm.
Each mechanical arm has six degrees of freedom: shoulder roll, shoulder pitch, elbow roll, elbow pitch, wrist roll and wrist pitch.
The hand, driven by the mechanical arm, also has six degrees of freedom. Each hand has its own palm, and the fingers on the palm comprise a thumb and four fingers; the thumb and the four fingers each have finger joints, and driving motors are arranged in the finger joints and at the connections between the palm and the fingers, so that the hand can perform grasping actions.
Compared with the prior art, the invention has the following beneficial effects:
1. By coordinating the MR helmet, the binocular camera and the central computing and processing device, the invention achieves visual teleoperation of the robot and greatly reduces the difficulty that distance and the surrounding environment impose on robot operation.
2. The central computing and processing device receives, processes and synchronously stores the relevant information from the MR helmet and the robot. The robot communicates with the MR helmet and the central computing and processing device over a wireless network, and the information collected by the various devices is transmitted to them in real time, establishing a good feedback channel between the operator and the robot and making control accurate.
3. The robot has good freedom of movement, can turn flexibly and has controllable speed, which, combined with visual teleoperation, greatly benefits dangerous outdoor operations.
4. The crawler-type mobile vehicle enables the robot to cope with complex road environments and reach its destination quickly.
5. The double-arm manipulator of the invention not only gives the robot good freedom of movement and a wide range of motion, but also allows the operator, cooperating with the robot through the virtual reality system, to grasp objects accurately.
Drawings
FIG. 1 is a schematic diagram of the overall architecture of the system of the present invention;
FIG. 2 is a diagram of a system functional architecture of the present invention;
fig. 3 is a schematic diagram of a dual-arm intelligent robot in the system of the present invention.
Reference numerals in the drawings: 1 a walking car body, 2 a double-arm manipulator, 3 a binocular camera, 4 an MR helmet and 5 a central computing processing device.
Detailed Description
Referring to fig. 1, the double-arm intelligent robot teleoperation system based on MR remote control in this embodiment includes a robot, an MR remote control terminal and an information processing unit. The robot has a double-arm manipulator 2 and a binocular camera 3 mounted on a walking vehicle body 1. The MR remote control terminal is an MR helmet 4 worn on the operator's head; through it the operator controls the robot's forward and backward movement, the pose of the mechanical arms and the grasping posture of the mechanical fingers. The MR helmet can also communicate with the commander in real time over the network, sending relevant video and other images to the commander's side. The information processing unit is a central computing and processing device 5 that establishes communication with the walking vehicle body 1, the double-arm manipulator 2, the binocular camera 3 and the MR helmet 4; the binocular camera 3, the MR helmet 4 and the central computing and processing device 5 form a virtual reality system. The central computing and processing device 5 builds an immersive control environment for the operator by reconstructing a three-dimensional model of the real space. After the communication connection is established, the robot acquires real external scene data in real time with the binocular camera 3 and transmits it back to the server; a virtual scene is created on the Unity platform and deployed to the mixed reality head-mounted display via Visual Studio, and the operator visually manipulates the robot in real time to perform the relevant tasks.
In this embodiment, the binocular camera 3 scans the three-dimensional environment of the site, and a digital twin environment is formed through data-fusion calculation with the robot's own state sensors. The complete three-dimensional environment around the robot and the robot's three-dimensional pose are displayed in the digital twin environment, the three-dimensional scene is presented through the immersive display function of the MR helmet, and the operator performs interactive operations such as zooming, translation and rotation with gestures, observing the task site from different viewing angles.
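As a rough illustration of these gesture-driven view operations, the sketch below keeps a minimal view-transform state and applies zoom, translation and rotation to it. This is a hypothetical simplification: the class and method names are not from the patent, and a real MR scene would apply full 3D transforms rather than a single yaw angle.

```python
class SceneView:
    """Minimal stand-in for the digital-twin scene's view transform."""

    def __init__(self):
        self.scale = 1.0                  # zoom factor
        self.offset = (0.0, 0.0, 0.0)    # translation of the scene
        self.yaw_deg = 0.0               # rotation about the vertical axis

    def zoom(self, factor):
        # A pinch gesture scales the whole scene.
        self.scale *= factor

    def translate(self, dx, dy, dz):
        # A drag gesture moves the scene relative to the viewer.
        ox, oy, oz = self.offset
        self.offset = (ox + dx, oy + dy, oz + dz)

    def rotate(self, degrees):
        # A rotate gesture spins the scene to view the site from another angle.
        self.yaw_deg = (self.yaw_deg + degrees) % 360.0
```

Each gesture recognized by the helmet would map to one of these calls before the scene is re-rendered.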
Referring to fig. 2, the MR remote control terminal in this embodiment has a user intention analysis module, a mixed reality capturing module, an image preprocessing module, a communication module, a visual algorithm processing module, an output processing module, a mixed reality rendering module and a mixed reality interaction module. The system exports the reconstructed three-dimensional model in obj file format and places it into the Unity platform scene; after the real scene data is superimposed on the virtual scene through background transparentization, the system displays the robot's surveillance video, the three-dimensional virtual scene, the robot state information, the commander communication information and the control interface in an immersive interface. The operator interacts with the control interface using virtual gestures via the gesture recognition function, inputs commands, and controls the robot to perform the relevant tasks.
The software interaction system of the MR remote control terminal mainly comprises three functional modules: panoramic virtual scene construction, instruction capture and recognition, and target detection and tracking. The software is divided into a data acquisition layer, a data processing layer and an application layer. The data acquisition layer collects image, sound and coordinate data, providing basic data for subsequent panorama generation and display; the data processing layer performs virtual image generation, gesture recognition, voice recognition and target tracking; the application layer is responsible for virtual-real interaction with the scene.
The system shown in fig. 2 can record the information collected by the robot for analysis and review after the task, and can also record the operator's actions for backtracking after the task.
In this embodiment, the communication between the communication module in the MR remote control terminal and the server uses the SocketAsyncEventArgs method and proceeds in the following steps:
Step 1: create a SocketAsyncEventArgs object and a Socket object;
Step 2: set the callback method, the buffer and the UserToken property of the SocketAsyncEventArgs object;
Step 3: using the SocketAsyncEventArgs object as a parameter, create an asynchronous connection with the ConnectAsync method of the Socket object to connect to the server;
Step 4: receive messages asynchronously with the ReceiveAsync method;
Step 5: send messages asynchronously with the SendAsync method.
SocketAsyncEventArgs represents asynchronous socket operation arguments and lives in the System.Net.Sockets namespace. It belongs to a set of asynchronous methods in the .NET library designed for high concurrency: it allows socket objects to be reused, avoids the object-reallocation and synchronization problems that arise under high concurrency, and improves system performance.
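The patent's client uses the .NET SocketAsyncEventArgs API directly. As a loose, hypothetical Python analogue of steps 1–5 (asynchronous connect, send and receive), the sketch below runs an asyncio client against a stand-in echo server; all function names, the port-selection trick and the echoed command string are illustrative, not from the patent.

```python
import asyncio

async def handle_client(reader, writer):
    # Stand-in server handler: echo each received message back.
    data = await reader.read(1024)
    writer.write(data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def client_roundtrip(message, host, port):
    # Analogue of steps 3-5: asynchronous connect (ConnectAsync),
    # asynchronous send (SendAsync) and asynchronous receive (ReceiveAsync).
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(message.encode())
    await writer.drain()
    reply = (await reader.read(1024)).decode()
    writer.close()
    await writer.wait_closed()
    return reply

async def main():
    # Port 0 lets the OS pick a free port for the stand-in server.
    server = await asyncio.start_server(handle_client, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    try:
        return await client_roundtrip("forward 0.5", "127.0.0.1", port)
    finally:
        server.close()
        await server.wait_closed()
```

Running `asyncio.run(main())` performs one round trip; the same non-blocking pattern is what the SocketAsyncEventArgs callbacks achieve on the .NET side.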
Before virtual-real interaction, the mixed reality interaction module in the MR remote control terminal first establishes an accurate mapping between the virtual scene and the real environment: it converts the virtual space and the real space into a unified coordinate system, achieving scene registration and ensuring accurate position matching between the virtual scene and the real environment, as follows:

Let $P_1(x_1, z_1)$ and $P_2(x_2, z_2)$ be two points in the virtual space, and let their one-to-one corresponding points in the real space be $P'_1(x'_1, z'_1)$ and $P'_2(x'_2, z'_2)$. The vectors $\vec{V}$ and $\vec{V}'$ are obtained from formulas (a) and (b):

$$\vec{V} = (x, z) \qquad (a)$$
$$\vec{V}' = (x', z') \qquad (b)$$

where $x = x_2 - x_1$, $z = z_2 - z_1$, $x' = x'_2 - x'_1$, $z' = z'_2 - z'_1$.

Let $V(V.x, V.z)$ be any point in the virtual space, and let $R(R.x, R.z)$ be its corresponding point in the real space. The coordinate relationship between $V$ and $R$ is characterized by formulas (c) and (d):

$$R.x = ratio \cdot [\cos\theta \cdot (V.x - x_1) - \sin\theta \cdot (V.z - z_1)] + x'_1 \qquad (c)$$
$$R.z = ratio \cdot [\sin\theta \cdot (V.x - x_1) + \cos\theta \cdot (V.z - z_1)] + z'_1 \qquad (d)$$

where $\theta$ is the angle between $\vec{V}$ and $\vec{V}'$.

$\sin\theta$, $\cos\theta$ and $ratio$ are obtained from formulas (e), (f) and (g):

$$\sin\theta = \frac{x \cdot z' - z \cdot x'}{\sqrt{x^2 + z^2}\,\sqrt{x'^2 + z'^2}} \qquad (e)$$
$$\cos\theta = \frac{x \cdot x' + z \cdot z'}{\sqrt{x^2 + z^2}\,\sqrt{x'^2 + z'^2}} \qquad (f)$$
$$ratio = \frac{\sqrt{x'^2 + z'^2}}{\sqrt{x^2 + z^2}} \qquad (g)$$

Formulas (c) and (d) realize the interconversion of the virtual coordinate system and the real-space coordinate system. By traversing the information of each node in the 2D structure diagram, the position in the real-space coordinate system is found and the virtual model is constructed; the scene content is arranged into the MR helmet in real time through this mapping.
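Taking the scene registration to be the 2D similarity transform fixed by the two corresponding point pairs — an assumption consistent with the quantities θ, sinθ, cosθ and ratio defined above — it can be sketched in a few lines of Python. The function and variable names are illustrative, not from the patent.

```python
import math

def make_registration(p1, p2, q1, q2):
    """Build a virtual-to-real mapping from two corresponding point pairs.

    p1, p2: virtual-space points (x, z); q1, q2: their real-space
    counterparts (x', z'). Returns a function applying formulas (c)/(d).
    """
    x, z = p2[0] - p1[0], p2[1] - p1[1]       # vector V
    xp, zp = q2[0] - q1[0], q2[1] - q1[1]     # vector V'
    nv = math.hypot(x, z)                     # |V|
    nr = math.hypot(xp, zp)                   # |V'|
    sin_t = (x * zp - z * xp) / (nv * nr)     # formula (e)
    cos_t = (x * xp + z * zp) / (nv * nr)     # formula (f)
    ratio = nr / nv                           # formula (g)

    def to_real(v):
        dx, dz = v[0] - p1[0], v[1] - p1[1]
        rx = ratio * (cos_t * dx - sin_t * dz) + q1[0]   # formula (c)
        rz = ratio * (sin_t * dx + cos_t * dz) + q1[1]   # formula (d)
        return rx, rz

    return to_real
```

By construction the mapping sends p1 to q1 and p2 to q2, and any other virtual point is rotated by θ, scaled by ratio and re-anchored accordingly.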
The mixed reality interaction module in the MR remote control terminal in this embodiment supports three types of interaction: static superposition, real-time tracking interaction and real-time modeling interaction. Static superposition superimposes virtual content in a static real scene; the virtual content is either text describing a target real object or a preset virtual object. Real-time tracking interaction identifies and tracks a real object by tracking preset features or models, and performs real-time superposition or position-based collision interaction on it. Real-time modeling interaction models a moving real object in real time to obtain its pose and contour information, and creates a point-cloud copy of the object from this information, so that the point-cloud copy and the virtual object can interact finely.
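For the real-time modeling interaction, one minimal reading of "create a point-cloud copy from pose and contour information" is to place the object's contour points at its tracked pose. The sketch below is hypothetical: the names are illustrative, and the pose is taken as translation-only for brevity, which is an assumption rather than the patent's method.

```python
def point_cloud_copy(contour, pose):
    """Place contour points (object-local x, y, z) at the tracked pose.

    `pose` is the object's tracked position; rotation is omitted here,
    a simplifying assumption not taken from the patent.
    """
    px, py, pz = pose
    return [(x + px, y + py, z + pz) for (x, y, z) in contour]
```

The resulting copy can then stand in for the moving real object when computing collisions with virtual objects.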
In this embodiment, the MR helmet 4 has mixed reality display glasses, which include:
a real scene acquisition module, which captures real-time real scene data of the outside world;
a virtual scene generation module, which generates the virtual scene;
an image fusion module, which fuses the real scene and the virtual scene by superposition;
a spatial positioning module, which captures real-time position information of the space;
an image display module, which displays the combined virtual and real image.
By superimposing the real-time real scene data on the virtual scene through background transparentization, the MR helmet 4 displays the surveillance video of the binocular camera 3, the three-dimensional virtual scene, the robot state information, the operator communication information and the control interface in an immersive interface.
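One plausible reading of the "background transparentization" step is a key-colour overlay: wherever the rendered virtual frame shows only its empty background, the real camera pixel passes through, and elsewhere the virtual pixel is drawn on top. The toy sketch below uses nested lists of RGB tuples; the key colour and all names are assumptions, not details from the patent.

```python
BG = (0, 0, 0)  # assumed key colour marking the virtual frame's empty background

def overlay(real_frame, virt_frame):
    # Wherever the virtual frame is background, keep the real video pixel;
    # elsewhere the virtual content is drawn over the real scene.
    return [
        [r if v == BG else v for r, v in zip(real_row, virt_row)]
        for real_row, virt_row in zip(real_frame, virt_frame)
    ]
```

A production system would composite per-pixel alpha on the GPU, but the pass-through logic is the same.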
Referring to fig. 3, in this embodiment the walking vehicle body 1 adopts a crawler-type walking mechanism, so that it can cope with complex road environments and the intelligent robot can reach the work site quickly and accurately. A communication module and a central controller are provided; the communication module links the crawler-type walking mechanism with the central controller, and the central controller receives control instructions from the MR helmet 4 and the central computing and processing device 5 and performs the corresponding task operations on the walking vehicle body 1.
In this embodiment, a double-arm manipulator gives the robot more degrees of freedom and a wider range of motion. The double-arm manipulator 2 comprises hands and mechanical arms that imitate human hands; the proximal end of each mechanical arm is mounted on the walking vehicle body 1, and the hand is mounted at the distal end of the arm. Each mechanical arm has six degrees of freedom: shoulder roll, shoulder pitch, elbow roll, elbow pitch, wrist roll and wrist pitch. The hand, driven by the mechanical arm, also has six degrees of freedom. Each hand has its own palm, and the fingers on the palm comprise a thumb and four fingers; the thumb and the four fingers each have finger joints, and driving motors are arranged in the finger joints and at the connections between the palm and the fingers, so that the hand can perform grasping actions and grasp objects accurately.
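The degree-of-freedom layout described above can be captured in a small data structure. The sketch below is hypothetical (class and joint-handling names are illustrative); only the six joint names themselves come from the text.

```python
from dataclasses import dataclass, field

# The six arm degrees of freedom named in the text.
ARM_JOINTS = ("shoulder_roll", "shoulder_pitch", "elbow_roll",
              "elbow_pitch", "wrist_roll", "wrist_pitch")

def _zero_angles():
    return {name: 0.0 for name in ARM_JOINTS}

@dataclass
class Arm:
    angles: dict = field(default_factory=_zero_angles)  # radians per joint

    def set_joint(self, name, value):
        # Reject joints outside the six listed degrees of freedom.
        if name not in self.angles:
            raise KeyError(f"unknown joint: {name}")
        self.angles[name] = value

@dataclass
class DualArmRobot:
    left: Arm = field(default_factory=Arm)
    right: Arm = field(default_factory=Arm)
```

A teleoperation command from the helmet would then reduce to a sequence of `set_joint` calls on one arm or the other.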
The intelligent robot of the invention enables real-time visual control, conveys depth and immersion in three-dimensional space well, and its complete communication modules keep the operator well connected to the robot.
Finally, it should be noted that the foregoing description covers only preferred embodiments of the invention. Although the invention has been described in detail with reference to these embodiments, those skilled in the art may still modify them or substitute equivalents for some of their elements, and any modification, equivalent replacement or improvement made without departing from the spirit and principles of the invention falls within its scope.

Claims (9)

1. A teleoperation system of a double-arm intelligent robot based on MR remote control, characterized in that the system comprises:
and (3) a robot: a double-arm manipulator (2) and a binocular camera (3) are arranged on a walking vehicle body (1);
MR remote control terminal: is an MR helmet (4) worn on the head by an operator;
an information processing unit: the system is a central computing processing device (5) which establishes communication with the walking vehicle body (1), the double-arm manipulator (2), the binocular camera (3) and the MR helmet (4), and a virtual reality system is formed by the binocular camera (3), the MR helmet (4) and the central computing processing device (5);
the central computing processing equipment (5) builds an immersive control environment for an operator by reconstructing a three-dimensional model of a real space; after the communication connection is established, the robot utilizes a binocular camera (3) to acquire real external display scene data in real time and transmit the real external display scene data back to the server, a virtual scene is created in the Unity platform, the virtual scene is deployed into a mixed reality head display through Visual Studio, and the operator visually manipulates the robot to execute related tasks in real time.
2. The MR remote control-based dual-arm intelligent robot teleoperation system according to claim 1, characterized in that: the binocular camera (3) scans the three-dimensional environment of the site, and a digital twin environment is formed by fusing these data with the robot's own state sensors; the complete three-dimensional environment around the robot and the robot's three-dimensional pose are displayed in the digital twin environment, and the three-dimensional scene is presented through the immersive display function of the MR helmet; the operator performs interactive operations such as zooming, translation and rotation through gestures, and observes the task site from different viewing angles.
3. The MR remote control-based dual-arm intelligent robot teleoperation system according to claim 1, characterized in that: the MR remote control terminal is provided with a user intention analysis module, a mixed reality capturing module, an image preprocessing module, a communication module, a visual algorithm processing module, an output processing module, a mixed reality rendering module and a mixed reality interaction module; the system exports the reconstructed three-dimensional model in the obj file format and imports it into a Unity platform scene; after the real-scene data is superimposed on the virtual scene through background transparentization, the system displays the robot's monitoring video, the three-dimensional virtual scene, robot state information, director communication information and the control operation interface in an immersive interface; the operator interacts with the control interface using virtual gestures via the gesture recognition function, inputs commands, and controls the robot to execute related tasks.
4. The MR remote control-based dual-arm intelligent robot teleoperation system according to claim 3, characterized in that:
the communication between the communication module in the MR remote control terminal and the server is carried out by adopting a SocketAsyncEventArgs method according to the following steps:
step 1, create a SocketAsyncEventArgs object and a Socket object;
step 2, set the callback method, buffer area and UserToken attribute of the SocketAsyncEventArgs object;
step 3, using the SocketAsyncEventArgs object as a parameter, create an asynchronous connection with the ConnectAsync method of the Socket object to connect to the server;
step 4, receive messages asynchronously using the ReceiveAsync method;
step 5, send messages asynchronously using the SendAsync method.
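The five steps above describe the asynchronous connect/receive/send pattern of the .NET SocketAsyncEventArgs class. As a language-neutral sketch of the same flow (not the patented implementation), the following Python asyncio analogue runs a local stand-in echo server; the host, the use of port 0 to pick a free port, and the line-based message framing are all assumptions for illustration:

```python
import asyncio

async def start_stub_server(host="127.0.0.1", port=0):
    # Stand-in for the teleoperation server: echoes each line back.
    async def handle(reader, writer):
        data = await reader.readline()
        writer.write(data)            # echo the received message
        await writer.drain()
        writer.close()
        await writer.wait_closed()
    return await asyncio.start_server(handle, host, port)

async def mr_client(message, host="127.0.0.1", port=0):
    # Step 3 analogue: create the asynchronous connection to the server.
    reader, writer = await asyncio.open_connection(host, port)
    # Step 5 analogue: send the message asynchronously.
    writer.write(message)
    await writer.drain()
    # Step 4 analogue: receive the reply asynchronously.
    reply = await reader.readline()
    writer.close()
    await writer.wait_closed()
    return reply

async def demo():
    server = await start_stub_server()
    port = server.sockets[0].getsockname()[1]  # actual port chosen by the OS
    try:
        return await mr_client(b"hello\n", port=port)
    finally:
        server.close()
        await server.wait_closed()
```

In the .NET version of this flow, the callback set in step 2 plays the role that `await` plays here: it is invoked when the pending Connect/Receive/Send operation completes.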
5. The MR remote control-based dual-arm intelligent robot teleoperation system according to claim 3, characterized in that:
before virtual-real interaction, the mixed reality interaction module in the MR remote control terminal first establishes an accurate mapping between the virtual scene and the real environment, converting the virtual space and the real space into a unified coordinate system to achieve scene registration and ensure accurate position matching between the virtual scene and the real environment, as follows:
let P 1 (x 1 ,z 1 ) And P 2 (x 2 ,z 2 ) The two points in the virtual space, and the points in the real space corresponding to one-to-one are respectively denoted as P' 1 (x′ 1 ,z′ 1 ) And P' 2 (x′ 2 ,z′ 2 ) The method comprises the steps of carrying out a first treatment on the surface of the The vector is calculated by the formula (a) and the formula (b) respectively
Figure FDA0003986020990000021
And->
Figure FDA0003986020990000022
The method comprises the following steps: />
Figure FDA0003986020990000023
Figure FDA0003986020990000024
Wherein: x=x 2 -x 1 ;z=z 2 -z 1 ;x′=x′ 2 -x′ 1 ;z′=z′ 2 -z′ 1
V (v.x, V.z) represents any point in the virtual space, and the point in the real space corresponding to the point is R (R.x, R.z);
then there are: the V (v.x, V.z) point and R (R.x, R.z) point coordinate relationships are characterized by formulas (c) and (d):
Figure FDA0003986020990000025
Figure FDA0003986020990000026
wherein: theta is a vector
Figure FDA0003986020990000027
And->
Figure FDA0003986020990000028
Is included in the plane of the first part;
and (3) respectively calculating from the formula (e), the formula (f) and the formula (g) to obtain sin theta, cos theta and ratio as follows:
Figure FDA0003986020990000029
Figure FDA00039860209900000210
Figure FDA00039860209900000211
interconversion between the virtual coordinate system and the real-space coordinate system is realized by formulas (c) and (d); the position of each node in the real-space coordinate system is found by traversing the node information in the 2D structure diagram, so that the virtual model is constructed; and the scene contents are deployed into the MR helmet in real time through this mapping relation.
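This registration is a 2-D similarity transform (rotation, uniform scale, translation) fixed by one pair of corresponding point pairs. A minimal sketch under that reading follows; the function names are illustrative, not from the patent:

```python
import math

def fit_similarity(p1, p2, q1, q2):
    """Fit the rotation (sin/cos of theta) and scale ratio that map the
    virtual points p1, p2 onto the real points q1, q2 (formulas (e)-(g))."""
    x, z = p2[0] - p1[0], p2[1] - p1[1]        # virtual-space vector v
    xp, zp = q2[0] - q1[0], q2[1] - q1[1]      # real-space vector v'
    nv, nvp = math.hypot(x, z), math.hypot(xp, zp)
    sin_t = (x * zp - z * xp) / (nv * nvp)     # formula (e)
    cos_t = (x * xp + z * zp) / (nv * nvp)     # formula (f)
    ratio = nvp / nv                           # formula (g)
    return sin_t, cos_t, ratio

def virtual_to_real(v, p1, q1, sin_t, cos_t, ratio):
    """Map a virtual point V to its real-space point R (formulas (c), (d)):
    rotate V - P1 by theta, scale by ratio, then translate to P'1."""
    dx, dz = v[0] - p1[0], v[1] - p1[1]
    rx = ratio * (dx * cos_t - dz * sin_t) + q1[0]
    rz = ratio * (dx * sin_t + dz * cos_t) + q1[1]
    return rx, rz
```

For example, if the real frame is the virtual frame rotated 90 degrees and doubled in scale, fitting from the pair (0,0)->(0,0), (1,0)->(0,2) gives sin(theta)=1, cos(theta)=0, ratio=2, and the virtual point (2,0) maps to the real point (0,4).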
6. The MR remote control-based dual-arm intelligent robot teleoperation system according to claim 3, characterized in that:
the mixed reality interaction module in the MR remote control terminal supports three interaction types, namely static superposition, real-time tracking interaction and real-time modeling interaction;
static superposition superimposes virtual content in a static real scene, the virtual content being textual description information of a target real object or a preset virtual object;
real-time tracking interaction identifies and tracks a real object by tracking preset features or models, and performs real-time superposition on it or position-based collision interaction with it;
real-time modeling interaction models a moving real object in real time to obtain its pose and contour information, and creates a point cloud copy of the object from this information, so that the point cloud copy can interact finely with the virtual object.
7. The MR remote control-based dual-arm intelligent robot teleoperation system according to claim 1, characterized in that: the MR helmet (4) is equipped with mixed reality display glasses; the mixed reality display glasses comprise:
a real scene acquisition module, which acquires real-time real-scene data of the external environment;
a virtual scene generation module, which generates the virtual scene;
an image fusion module, which fuses the real scene and the virtual scene by superposition;
a space positioning module, which acquires real-time position information of the acquisition space;
and an image display module, which displays the virtual-real combined image.
The MR helmet (4) superimposes the real-time real-scene data on the virtual scene through background transparentization, and displays the monitoring video of the binocular camera (3), the three-dimensional virtual scene, the robot state information, the operator communication information and the control operation interface in the immersive interface.
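The superposition described above amounts to per-pixel alpha blending: pixels belonging to the virtual layer's transparent background let the real scene show through. A minimal pure-Python sketch of that rule follows; the RGBA pixel representation and the blend formula are illustrative assumptions (a real head-mounted display would composite on the GPU):

```python
def blend_pixel(real_rgb, virtual_rgba):
    """Overlay one virtual RGBA pixel on one real RGB pixel.
    Alpha 0 (transparent background) lets the real scene show through."""
    r, g, b, a = virtual_rgba
    alpha = a / 255.0
    return tuple(round(alpha * v + (1.0 - alpha) * w)
                 for v, w in zip((r, g, b), real_rgb))

def composite(real_frame, virtual_frame):
    """Fuse two equally sized frames, given as lists of rows of pixels."""
    return [[blend_pixel(rp, vp) for rp, vp in zip(rrow, vrow)]
            for rrow, vrow in zip(real_frame, virtual_frame)]
```

An opaque virtual pixel (alpha 255) replaces the real pixel entirely, while a fully transparent one (alpha 0) leaves the camera image unchanged, which is exactly the effect of the background transparentization step in the claim.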
8. The MR remote control-based dual-arm intelligent robot teleoperation system according to claim 1, characterized in that: the walking vehicle body (1) adopts a crawler-type walking mechanism and is configured with a communication module and a central controller; the communication module is used to establish the connection, and the central controller receives control instructions from the MR helmet (4) and the central computing processing device (5) and performs the corresponding task operations with the walking vehicle body (1).
9. The MR remote control-based dual-arm intelligent robot teleoperation system according to claim 1, characterized in that: the double-arm manipulator (2) comprises a hand and a mechanical arm; the proximal end of the mechanical arm is mounted on the walking vehicle body (1), and the hand is mounted at the distal end of the mechanical arm;
the mechanical arm has six degrees of freedom: shoulder roll, shoulder pitch, elbow roll, elbow pitch, wrist roll and wrist pitch;
the hand, driven by the mechanical arm, has six degrees of freedom; the hand has its own palm, and the fingers on the palm comprise a thumb and four fingers; the thumb and the four fingers each have finger joints, and driving motors are arranged in the finger joints and at the connections between the palm and the fingers, so that the hand can perform grasping actions.
CN202211564363.9A 2022-12-07 2022-12-07 Remote operation system of double-arm intelligent robot based on MR remote control Pending CN116160440A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211564363.9A CN116160440A (en) 2022-12-07 2022-12-07 Remote operation system of double-arm intelligent robot based on MR remote control

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211564363.9A CN116160440A (en) 2022-12-07 2022-12-07 Remote operation system of double-arm intelligent robot based on MR remote control

Publications (1)

Publication Number Publication Date
CN116160440A true CN116160440A (en) 2023-05-26

Family

ID=86415377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211564363.9A Pending CN116160440A (en) 2022-12-07 2022-12-07 Remote operation system of double-arm intelligent robot based on MR remote control

Country Status (1)

Country Link
CN (1) CN116160440A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117655601A (en) * 2023-12-12 2024-03-08 中船舰客教育科技(北京)有限公司 MR-based intelligent welding method, MR-based intelligent welding device, MR-based intelligent welding computer equipment and MR-based intelligent welding medium
CN118192811A (en) * 2024-05-16 2024-06-14 苏州易普趣软件有限公司 Interaction method, system, device and storage medium based on MR technology


Similar Documents

Publication Publication Date Title
CN108422435B (en) Remote monitoring and control system based on augmented reality
US7714895B2 (en) Interactive and shared augmented reality system and method having local and remote access
CN109164829B (en) Flying mechanical arm system based on force feedback device and VR sensing and control method
CN112634318B (en) Teleoperation system and method for underwater maintenance robot
CN116160440A (en) Remote operation system of double-arm intelligent robot based on MR remote control
CN111438673B (en) High-altitude operation teleoperation method and system based on stereoscopic vision and gesture control
CN113842165B (en) Portable remote ultrasonic scanning system and safe ultrasonic scanning compliance control method
CN113021357A (en) Master-slave underwater double-arm robot convenient to move
CN113103230A (en) Human-computer interaction system and method based on remote operation of treatment robot
CN108828996A (en) A kind of the mechanical arm remote control system and method for view-based access control model information
CN114791765B (en) ROS intelligent vehicle interaction method based on mixed reality technology
Krupke et al. Prototyping of immersive HRI scenarios
Schwarz et al. Low-latency immersive 6D televisualization with spherical rendering
CN115157261A (en) Flexible mechanical arm teleoperation man-machine interaction device and method based on mixed reality
CN110695990A (en) Mechanical arm control system based on Kinect gesture recognition
CN110539315A (en) Construction robot based on virtual reality control
CN111702787B (en) Man-machine cooperation control system and control method
CN113888723A (en) Ultrahigh-definition diagnosis-level medical data MR panoramic display system and method
CN106527720A (en) Immersive interaction control method and system
CN209648706U (en) A kind of robot control system
CN115556115B (en) Collaborative robot control system based on MR technology
Lenz et al. The BERT2 infrastructure: An integrated system for the study of human-robot interaction
Bai et al. Kinect-based hand tracking for first-person-perspective robotic arm teleoperation
Zhou et al. Development of a synchronized human-robot-virtuality interaction system using cooperative robot and motion capture device
CN115359222A (en) Unmanned interaction control method and system based on augmented reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination