CN115139292B - Robot remote control method, system, equipment and medium with hand feeling enhancement - Google Patents

Robot remote control method, system, equipment and medium with hand feeling enhancement

Info

Publication number
CN115139292B
CN115139292B
Authority
CN
China
Prior art keywords
robot
target object
hand
information
joint
Prior art date
Legal status
Active
Application number
CN202110346871.9A
Other languages
Chinese (zh)
Other versions
CN115139292A
Inventor
黄碧丹
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110346871.9A
Publication of CN115139292A
Application granted
Publication of CN115139292B

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J13/00 Controls for manipulators
    • B25J13/006 Controls for manipulators by means of a wireless system for controlling one or several manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/081 Touching devices, e.g. pressure-sensitive
    • B25J13/084 Tactile sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

Disclosed are a robot remote control method, system, device and medium with enhanced hand feel. The robot remote control method comprises: obtaining tactile information of each fingertip of a robot hand on a target object, the tactile information being a detection value of the contact force between the corresponding fingertip and the target object; obtaining pose information of each joint of the robot hand, the pose information of each joint including opening angle information of the joint; generating a pose estimate of the target object based on the tactile information of each fingertip of the robot hand and the pose information of each joint; determining a contact force feedback value of each fingertip of the robot hand based on the pose estimate of the target object and the pose information of each joint of the robot hand; and providing haptic feedback corresponding to the contact force feedback value of each fingertip of the robot hand to the human hand through a haptic remote control device matched with the robot hand.

Description

Robot remote control method, system, equipment and medium with hand feeling enhancement
Technical Field
The present disclosure relates to the field of intelligent control, and more particularly, to a robot remote control method, system, device, and computer readable storage medium with enhanced hand feel.
Background
With the wide application of intelligent control in civil and commercial fields, robot remote control systems and methods play an important role in automatic control and human-machine interaction, and at the same time face increasingly high requirements.
Currently, in remote control scenarios such as robot teleoperation, a robot is generally operated to perform corresponding actions based on visual information, while tactile information of the robot hand (e.g., contact force information between the robot hand and a target object) is sensed by tactile sensors arranged directly at the fingertips of the robot hand; the user then judges and further manipulates the operation of the robot hand based on the visual information and the tactile information. However, on the one hand, the tactile information directly measured by the tactile sensors has low accuracy and reliability because of sensing noise; on the other hand, during sensing and transmission the sensors may be limited by the environment and suffer from poor contact or communication delays, so that data are lost or change abruptly. This may injure the user's hand and cause the user to misjudge the contact state between the robot hand and the target object, so that correct control operations cannot be performed.
Therefore, there is a need for a robot remote control method that, while sensing the tactile information of the robot hand, can provide the user with good tactile feedback with high accuracy and in real time, and that has high reliability and robustness.
Disclosure of Invention
In view of the above, the present disclosure provides a robot remote control method, system, device, and computer-readable storage medium with enhanced hand feel. The robot remote control method provided by the present disclosure can provide good tactile feedback to the user while sensing the tactile information of the robot hand; the tactile feedback has high accuracy and real-time performance, and the method has high reliability and robustness.
According to an aspect of the present disclosure, a robot remote control method with hand feel enhancement is provided, including: obtaining touch information of each fingertip of a robot to a target object, wherein the touch information is a detection value of contact force between the corresponding fingertip and the target object; acquiring the posture information of each joint of the robot, wherein the posture information of each joint comprises the opening angle information of the joint; generating an attitude estimate of the target object based on tactile information of each fingertip of the robot and attitude information of each joint; determining a contact force feedback value of each fingertip of the robot based on the posture estimation of the target object and the posture information of each joint of the robot; and providing haptic feedback corresponding to the contact force feedback values of the fingertips of the robot hand to the human hand through a haptic remote control device matched with the robot hand.
In some embodiments, generating the pose estimate of the target object based on the haptic information of each fingertip of the robot hand and the pose information of each joint comprises: based on a robot hand holding gesture model, gesture prediction data of the target object are generated according to touch information of each fingertip and gesture information of each joint; determining the distribution probability that the gesture prediction data of the target object belong to a robot hand holding gesture model based on the touch information of each fingertip and the gesture information of each joint, wherein the robot hand holding gesture model is a nonlinear model; comparing the determined distribution probability with a preset probability threshold value, and determining the gesture estimation of the target object based on the comparison result.
In some embodiments, comparing the determined distribution probability with a preset probability threshold and determining the pose estimate of the target object based on the comparison result comprises: determining the pose prediction data of the target object at the current moment as the pose estimate of the target object at the current moment when the determined distribution probability is greater than the preset probability threshold; and determining the pose estimate of the target object at the moment immediately before the current moment as the pose estimate of the target object at the current moment when the determined distribution probability is less than or equal to the preset probability threshold.
In some embodiments, the robot hand gripping pose model is a gaussian mixture model comprising a plurality of gaussian distribution models, wherein determining the distribution probability that the pose prediction data of the target object belongs to the robot hand gripping pose model comprises: determining, for each gaussian distribution model in the gaussian mixture model, a gaussian distribution probability that the pose prediction data of the target object belongs to the gaussian distribution model; and carrying out weighted summation on the Gaussian distribution probabilities of the gesture prediction data of the target object belonging to each Gaussian distribution model to generate the distribution probability of the gesture prediction data of the target object belonging to the robot hand holding gesture model.
In some embodiments, the pose information of each joint includes opening angle information of a plurality of joints in a thumb, an index finger and a middle finger of the robot, and the pose estimate of the target object includes position data and euler angle data of the target object.
In some embodiments, the robot hand gripping pose model is trained via the following steps: setting visual tracking marks on a target object and setting tactile sensors on the fingertips of the robot hand; manipulating the robot hand to adjust the pose of the target object, and recording, during the pose adjustment of the target object, the pose information of each joint of the robot hand and the tactile information of each fingertip on the target object acquired by the tactile sensors so as to generate a plurality of training data, each training data comprising the pose information of each joint and the corresponding tactile information of each fingertip on the target object; performing visual tracking of the target object to generate standard target pose data corresponding to each of the plurality of training data; for each of the plurality of training data, generating pose prediction data of the target object based on the training data using the robot hand gripping pose model; generating a loss function based on the pose prediction data of the target object and the standard target pose data; and training the robot hand gripping pose model based on the loss function.
In some embodiments, the robot hand gripping pose model is trained such that the error between the pose prediction data of the target object and the standard target pose data is less than a preset error threshold.
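Purely as an illustration of how such a gripping pose model could be fitted offline from the recorded training data, the following sketch assumes the scikit-learn library, synthetic stand-in arrays, and dimensions that are not specified in the disclosure; it is not the patented training procedure itself.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in data; in the training process described above these arrays
# would come from the recorded joint poses, fingertip tactile readings and the
# visually tracked (standard) object poses. All shapes here are assumptions.
rng = np.random.default_rng(0)
N = 2000
joint_angles = rng.normal(size=(N, 3))   # assumed: opening angles of 3 joints
tactile = rng.normal(size=(N, 36))       # assumed: 3 fingertips x 12 taxels, flattened
object_pose = rng.normal(size=(N, 6))    # assumed: xyz position + Euler angles (ground truth)

# Fit a Gaussian mixture over the concatenated (object pose, joint pose, tactile) relation.
samples = np.hstack([object_pose, joint_angles, tactile])
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0).fit(samples)

# gmm.weights_, gmm.means_ and gmm.covariances_ then play the roles of the
# mixture priors, expectations and covariance matrices of the gripping pose model.
```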
In some embodiments, determining the contact force feedback value for each fingertip of the robot based on the pose estimate of the target object and the pose information for each joint of the robot comprises: determining a target contact state based on the posture estimation of the target object and the posture information of each joint of the robot; and determining a contact force feedback value of each fingertip of the robot hand based on the target contact state and the physical attribute data of the target object.
In some embodiments, the target contact state comprises: non-contact, start contact, continuous contact, end contact.
In some embodiments, determining the contact force feedback value for each fingertip of the robot further comprises: based on the posture estimation of the target object and the posture information of each joint of the robot, generating a virtual view of the target object and the robot, and transmitting the virtual view to a user.
In some embodiments, the method further comprises a pre-calibration step comprising: while the hand performing the remote control action performs a predetermined hand motion, acquiring joint data of the robot hand and joint data of the hand performing the remote control action; determining a mapping relation between the joints of the robot hand and the joints of the hand performing the remote control action based on the joint data of the robot hand and the joint data of the user's hand; where the mapping relation satisfies a preset condition, taking the mapping relation as the target mapping relation between the robot hand and the human hand; and where the mapping relation does not satisfy the preset condition, adjusting the mapping ratio parameter between the robot hand and the human hand in the mapping relation and taking the adjusted mapping relation as the target mapping relation.
In some embodiments, the predetermined hand motion includes a hand open limit motion and a hand closed limit motion.
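As a rough sketch only, such a pre-calibration could derive a per-joint mapping ratio from the open-limit and closed-limit readings; the linear form and all names below are illustrative assumptions rather than the mapping actually claimed.

```python
def calibrate_joint_mapping(human_open, human_closed, robot_open, robot_closed,
                            min_span=1e-3):
    """Return per-joint (scale, offset) so that robot_angle = scale * human_angle + offset.

    human_open / human_closed hold the human joint angles recorded during the
    hand-open and hand-closed limit motions; robot_open / robot_closed hold the
    matching robot joint angles. The linear form is an illustrative assumption.
    """
    mapping = []
    for h_o, h_c, r_o, r_c in zip(human_open, human_closed, robot_open, robot_closed):
        span = h_o - h_c
        if abs(span) < min_span:
            # Mapping does not satisfy the preset condition: fall back to a unit
            # ratio so the mapping ratio parameter can be adjusted afterwards.
            scale = 1.0
        else:
            scale = (r_o - r_c) / span
        mapping.append((scale, r_c - scale * h_c))
    return mapping

# Example: three joints, then mapping a measured human angle of 0.8 rad per joint.
mapping = calibrate_joint_mapping([1.6, 1.5, 1.4], [0.1, 0.1, 0.1],
                                  [1.2, 1.2, 1.2], [0.0, 0.0, 0.0])
robot_command = [scale * 0.8 + offset for scale, offset in mapping]
```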
In some embodiments, the haptic remote control device is configured to obtain hand pose information and generate a pose control signal for the robot based on the hand pose information.
According to another aspect of the present disclosure, a robot remote control system with enhanced hand feel is provided, comprising a slave end and a master end capable of communicating with each other, wherein the slave end comprises: a robot hand; a tactile data acquisition device provided at each fingertip of the robot hand and configured to acquire tactile information of each fingertip of the robot hand on a target object, the tactile information being a detection value of the contact force between the corresponding fingertip and the target object; a joint pose information acquisition device configured to acquire pose information of each joint of the robot hand, the pose information of each joint including opening angle information of the joint; a target object pose estimation device configured to generate a pose estimate of the target object based on the tactile information of each fingertip of the robot hand and the pose information of each joint; and a slave-end data transmission device configured to transmit the pose estimate of the target object and the pose information of each joint of the robot hand to the master end; and the master end comprises: a haptic force calculation device configured to determine a contact force feedback value of each fingertip of the robot hand based on the pose estimate of the target object and the pose information of each joint of the robot hand; and a haptic feedback generation device matched with the robot hand and configured to provide haptic feedback corresponding to the contact force feedback value of each fingertip of the robot hand to the human hand.
In some embodiments, the master end further includes a visual feedback device disposed at the head of the user, the visual feedback device configured to generate a virtual view of the target object and the robot based on the pose estimate of the target object and the pose information of each joint of the robot, and display the virtual view.
According to another aspect of the present disclosure, a robot remote control device with enhanced hand feel is provided. In some embodiments, the device includes a processor and a memory containing a set of instructions that, when executed by the processor, cause the device to perform operations comprising: obtaining tactile information of each fingertip of a robot hand on a target object, the tactile information being a detection value of the contact force between the corresponding fingertip and the target object; obtaining pose information of each joint of the robot hand, the pose information of each joint including opening angle information of the joint; generating a pose estimate of the target object based on the tactile information of each fingertip of the robot hand and the pose information of each joint; determining a contact force feedback value of each fingertip of the robot hand based on the pose estimate of the target object and the pose information of each joint of the robot hand; and providing haptic feedback corresponding to the contact force feedback value of each fingertip of the robot hand to the human hand through a haptic remote control device matched with the robot hand.
According to another aspect of the present disclosure, a computer-readable storage medium is presented, characterized in that it has stored thereon computer-readable instructions, which when executed by a computer perform the method as described before.
With the robot remote control method, system, device and medium described above, the sensing of the tactile information (contact force information) of the robot hand can be accomplished well; in particular, good tactile feedback can be provided to the user, the tactile feedback has high accuracy and real-time performance, and the method has high reliability and robustness.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present disclosure, and that other drawings may be obtained from these drawings by one of ordinary skill in the art without creative effort. The following drawings are not necessarily drawn to scale; emphasis instead is placed upon illustrating the principles of the disclosure.
FIG. 1A shows a schematic diagram of a robot remote control process;
FIG. 1B is a schematic diagram of a process for robot remote control based on visual information;
FIG. 2A illustrates an exemplary flow chart of a robot remote control method 100 with enhanced hand feel according to an embodiment of the disclosure;
FIG. 2B illustrates a schematic diagram of a robot remote control method with enhanced hand feel in accordance with an embodiment of the present disclosure;
FIG. 3 illustrates an exemplary flowchart of a process S102 of generating a pose estimate of the target object according to embodiments of the present disclosure;
FIG. 4 illustrates an exemplary flowchart of a process S1022 for determining a probability of a distribution of pose prediction data of the target object belonging to a robot hand gripping pose model, in accordance with an embodiment of the disclosure;
FIG. 5A illustrates an exemplary flowchart of a training process 200 for a robot hand gripping gesture model in accordance with embodiments of the present disclosure;
FIG. 5B illustrates a schematic diagram of a robot hand and a region of a human hand in training of a robot hand gripping gesture model in accordance with embodiments of the present disclosure;
FIG. 5C shows a schematic diagram of a target object for a training process in accordance with an embodiment of the present disclosure;
FIG. 5D illustrates a schematic diagram of a robot gripping gesture training process in accordance with an embodiment of the present disclosure;
FIG. 5E illustrates an error histogram for applying a trained robotic hand grip pose model according to embodiments of the present disclosure;
FIG. 6 illustrates an exemplary flowchart of a process S103 of determining contact force feedback values for various fingertips of a robot hand in accordance with an embodiment of the present disclosure;
FIG. 7A illustrates an exemplary flowchart of a robot pre-calibration step 300, according to an embodiment of the present disclosure;
FIG. 7B shows a schematic diagram of a robot pre-calibration process according to an embodiment of the disclosure;
FIG. 7C illustrates a schematic diagram of a correspondence of a robotic joint and a human hand joint, according to an embodiment of the disclosure;
FIG. 8A illustrates a schematic diagram of a robot remote control method controlling a robot in accordance with an embodiment of the disclosure;
FIG. 8B illustrates a comparison of haptic signals detected by a robotic fingertip haptic sensor when a robot remote control method is compared to a conventional robot remote control method in accordance with an embodiment of the present disclosure;
FIG. 9 illustrates an exemplary block diagram of a robotic remote control system 500 with enhanced hand feel according to an embodiment of the disclosure;
FIG. 10 illustrates an exemplary flow chart of a robot remote control system 950 with enhanced hand feel according to embodiments of the disclosure.
Detailed Description
The following description of the embodiments of the present disclosure will be made clearly and fully with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some, but not all embodiments of the disclosure. All other embodiments, which can be made by one of ordinary skill in the art without undue burden based on the embodiments of the present disclosure, are also within the scope of the present disclosure.
As used in this disclosure and in the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may also include other steps or elements.
While the present disclosure makes various references to certain modules in a system according to embodiments of the present disclosure, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
A flowchart is used in this disclosure to describe the operations performed by a system according to embodiments of the present disclosure. It should be understood that the preceding or following operations are not necessarily performed in order precisely. Rather, the various steps may be processed in reverse order or simultaneously, as desired. Also, other operations may be added to or removed from these processes.
Artificial intelligence (AI) is the theory, method, technology, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the study of the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
The present disclosure relates to the intelligent control aspect of artificial intelligence technology, and in particular to a specific application of artificial intelligence technology in a robot remote control process. Specifically, the present disclosure provides a robot remote control method with hand feel enhancement: a pose estimate of the target object is constructed by a model based on the tactile information and the pose information of each joint of the robot hand, and the pose estimate of the target object and the pose information of each joint of the robot hand are used jointly to determine the contact force feedback value of each fingertip of the robot hand. In this way, noise in the tactile information is effectively filtered out during robot remote control, the accuracy of the generated contact force feedback values is improved, and the contact force borne by the robot hand can be fed back flexibly, dynamically and in real time.
It should be understood that the robot remote control method described in the present disclosure refers to a method for remotely controlling a robot to perform a predetermined action or to complete a predetermined operation. The robot hand described in the present disclosure refers to a robot part for performing a specific operation, which is, for example, a robot part having a human hand structure.
Referring to FIG. 1A, a schematic diagram of a robot remote control process is shown. The robot hand can be manipulated by a user to perform a specific action on, or interact with, a target object, and can communicate bidirectionally with the user. Specifically, on the one hand, the robot hand can be remotely controlled by the user via a remote control device at the user side to perform a corresponding operation (e.g., perform the same action as the user's hand); on the other hand, the robot hand can also send the acquired detection information (e.g., contact information of the fingertips) and its own data information (e.g., pose information of each joint) to the user side so as to feed back the current operation execution state to the user in real time.
In existing robot remote control methods, the robot is generally operated to perform corresponding actions mainly on the basis of visual information. FIG. 1B is a schematic diagram showing a process of robot remote control based on visual information. Referring to FIG. 1B, position and pose information of the target object and the robot hand are obtained by providing a camera, tactile sensors for sensing the tactile information of the robot fingertips (e.g., contact force information between the robot hand and the target object) are arranged directly at the fingertips of the robot hand, and the user judges and further manipulates the operation of the robot hand based on the visual information and the tactile information. However, on the one hand, the tactile information directly measured by the tactile sensors has low accuracy and reliability because of sensing noise; on the other hand, during sensing and transmission the sensors may be limited by the environment and suffer from poor contact or communication delays, so that data are lost or change abruptly. This may injure the user's hand and cause the user to misjudge the contact state between the robot hand and the target object, so that correct control operations cannot be performed.
Based on the above, a robot remote control method with hand feel enhancement is provided in the present disclosure. "Hand feel enhancement" here means that the remote control method can enhance the precision and reliability of the tactile feedback transmitted from the robot hand to the user, so that the contact force between the robot hand and the target object is fed back accurately and in real time. FIG. 2A illustrates an exemplary flow chart of a robot remote control method 100 with enhanced hand feel according to an embodiment of the present disclosure. FIG. 2B shows a schematic diagram of a robot remote control method with enhanced hand feel according to an embodiment of the present disclosure. Next, with reference to FIGS. 2A and 2B, the process and steps of robot remote control will be briefly described.
First, in step S101, tactile information of each fingertip of the robot hand on a target object is acquired, the tactile information being a detection value of a contact force between the corresponding fingertip and the target object.
As previously described, the robot hand refers to a robot part for performing a specific operation, which is, for example, a robot part having a human hand structure, and which has a plurality of finger sub-parts each having a plurality of finger joints. It should be appreciated that embodiments of the present disclosure are not limited by the particular hand joint arrangement of the robot and the number of arrangements thereof.
The target object refers to a target object which is interacted with the robot when the robot performs a specific task or action. For example, when the robot is operated to perform an operation of picking up a wood block, the wood block is the target object. Embodiments of the present disclosure are not limited by the type of the target object and the specific content of the particular task.
Acquiring the tactile information of each fingertip of the robot hand on the target object means acquiring the tactile information of the one or more fingers of the robot hand that are used to execute the task when they interact with the target object. For example, in some embodiments, if only the thumb, index finger and middle finger of the robot hand are employed to perform a particular task, only the tactile information of the tips of the thumb, index finger and middle finger of the robot hand on the target object may be collected.
The tactile information is a detection value of the force acting between the target object and a fingertip of the robot hand under the current hand pose of the robot hand, i.e., a detection value of the contact force of that fingertip on the target object. For example, the tactile information may be measured via one or more tactile sensors provided at the tips of the respective fingers of the robot hand, or may be collected by other means as the case requires. Embodiments of the present disclosure are not limited by the manner in which the tactile information of each fingertip of the robot hand is collected or by its specific numerical values.
After the tactile information is obtained, in step S102, pose information of each joint of the robot hand is obtained, the pose information of each joint including opening angle information of the joint.
Referring to fig. 2B, in which part of the joints in the robot hand are schematically shown in gray filled boxes, the posture information of each joint refers to the posture state of each joint of the robot hand, which is used to reflect the current posture of the robot hand itself.
For example, the posture information of each joint includes, for example, opening angle information of the joint, where the opening angle information refers to an angle value of an included angle formed between two adjacent joints, and is used to characterize the opening and closing degree of the joint.
It should be appreciated that the opening angle information of the joint may also include other data content according to actual needs, and embodiments of the present disclosure are not limited by the specific composition of the pose information of the joint.
It should be appreciated that the steps S101 and S102 may be performed sequentially, in reverse order, or simultaneously. Embodiments of the present disclosure are not limited by the specific order of execution of steps S101 and S102.
After the tactile information of each fingertip of the robot to the target object and the posture information of each joint of the robot are obtained, in step S103, the posture estimation of the target object is generated based on the tactile information of each fingertip of the robot and the posture information of each joint.
The pose estimate of the target object refers to predicted data of the pose state of the target object. The pose estimate may include, for example, an estimate of the relative position of the target object (relative to the position of the robot hand) and an estimate of the pose angles of the target object, or may also include an estimate of the absolute position of the target object (e.g., in a three-dimensional rectangular coordinate system). It should be appreciated that embodiments of the present disclosure are not limited by the specific composition of the pose estimate of the target object or the manner in which it is expressed.
Referring to fig. 2B, the pose estimation of the target object may be implemented, for example, based on a pre-trained robot hand gripping pose model, or the pose estimation process may be implemented based on a preset algorithm or system of equations.
After the posture estimation of the target object is obtained, in step S104, a contact force feedback value of each fingertip of the robot hand is determined based on the posture estimation of the target object and the posture information of each joint of the robot hand.
Referring to FIG. 2B, for example, based on a simulation model in a virtual reality system, the target object and the robot hand are simulated according to the pose estimate of the target object and the pose information of each joint of the robot hand to obtain a simulated view of the target object and the robot hand. From this simulation, the contact state between the target object and the robot hand is determined, the interaction force between the target object and the robot hand is determined based on the contact state, and the contact force feedback value of each fingertip of the robot hand is further determined.
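One simple way such an interaction force could be computed from the simulation is a spring-like contact model; the sketch below is an assumed illustration (function name, stiffness attribute and clipping limit are all hypothetical) and not the implementation defined by the disclosure.

```python
def contact_force_feedback(penetration_depth, stiffness, max_force=10.0):
    """Assumed spring-like contact model: feedback force grows with penetration depth.

    penetration_depth: simulated overlap (in metres) between a fingertip and the
    object surface, computed from the pose estimate; values <= 0 mean no contact.
    stiffness: a physical attribute of the target object (N/m).
    max_force: clipping limit so that abnormal values cannot hurt the user's hand.
    """
    if penetration_depth <= 0.0:
        return 0.0                       # contact state: non-contact
    return min(stiffness * penetration_depth, max_force)

# Example: a 2 mm penetration against an object of stiffness 800 N/m gives 1.6 N.
print(contact_force_feedback(0.002, 800.0))
```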
After determining the contact force feedback values of the respective fingertips of the robot hand, in step S105, haptic feedback corresponding to the contact force feedback values of the respective fingertips of the robot hand is provided to the human hand by a haptic remote control device (i.e., the aforementioned remote control device) that is matched with the robot hand.
The haptic remote control device is used to realize remote control of the robot hand based on user instructions (e.g., the hand pose of the user) and can provide the haptic feedback or other data information of the robot hand to the user, so as to realize bilateral teleoperation. Depending on actual needs, it may be, for example, a wearable haptic device such as an exoskeleton data glove, or another kind of haptic device; embodiments of the present disclosure are not limited by the particular type of wearable haptic device or its composition.
For example, the haptic remote control device may be an exoskeleton data glove that has a hand structure corresponding to the human hand with corresponding joint parts and is worn on the outside of the user's hand, so that each joint of the exoskeleton glove corresponds to a corresponding joint of the human hand. The haptic feedback process is, for example, as follows: the determined contact force feedback value of each fingertip of the robot hand is applied to the corresponding fingertip of the exoskeleton data glove, and the exoskeleton data glove then applies the contact force feedback value to the corresponding fingertip of the user, so that the user perceives the contact force currently between the robot hand and the target object (i.e., haptic feedback is provided). Based on this haptic feedback, the user can judge the current interaction state between the robot hand and the target object accurately and in real time, and thereby further adjust the operation of the robot hand (for example, adjust the hand pose, so that the exoskeleton data glove records the new hand pose and sends it to the robot hand to control the robot hand to make a corresponding pose adjustment).
Based on the above, in the present disclosure, in the process of performing robot remote control, after the tactile information at the fingertips of the robot hand is detected, a pose estimate of the target object is constructed based on the detected fingertip tactile information and the pose information of each joint of the robot hand, and the pose estimate of the target object and the pose information of each joint of the robot hand are combined to jointly determine the contact force feedback value of each fingertip of the robot hand. Compared with directly transmitting the contact force information detected by the tactile sensors, this hand-feel-enhanced robot remote control method, by determining the contact force based on the target object pose and the robot hand pose, can effectively filter out noise in the detected contact force information, improve the accuracy of the generated contact force feedback values, and feed back the contact force borne by the robot hand flexibly and dynamically. It can also effectively avoid injury to the user's hand when a sensor signal becomes abnormal under the influence of the surrounding physical environment (e.g., an abnormal peak in the detected contact force caused by a sudden signal change), and effectively avoid erroneous control caused by misjudging the operating state of the robot hand when the sensor signal is delayed or interrupted. In this way, the accuracy and reliability of the tactile feedback from the robot hand to the user during remote control are enhanced, the human-machine interaction experience of the tactile feedback is optimized, and a hand-feel-enhanced robot remote control process is realized.
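For orientation only, the following sketch strings steps S101 to S105 together into one control cycle; every object, method name and call in it is a hypothetical placeholder rather than an interface defined by the disclosure.

```python
def teleoperation_step(robot_hand, grip_pose_model, haptic_glove, prev_pose_estimate):
    """One cycle of the remote control loop; every attribute call is a placeholder."""
    tactile = robot_hand.read_fingertip_tactile()        # S101: fingertip contact force detections
    joint_pose = robot_hand.read_joint_angles()          # S102: joint opening angles
    pose_estimate = grip_pose_model.estimate(            # S103: target object pose estimate
        tactile, joint_pose, prev_pose_estimate)
    forces = grip_pose_model.contact_forces(             # S104: per-fingertip feedback values
        pose_estimate, joint_pose)
    haptic_glove.render_forces(forces)                   # S105: haptic feedback on matched device
    return pose_estimate
```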
In some embodiments, the above-described process S102 of generating the pose estimate of the target object based on the haptic information of each fingertip of the robot hand and the pose information of each joint may be described in more detail, for example. Fig. 3 shows an exemplary flowchart of a process S102 of generating a pose estimate of the target object according to an embodiment of the disclosure.
Referring to fig. 3, first, in step S1021, based on a robot hand holding posture model, posture prediction data of the target object is generated from tactile information of each fingertip and posture information of each joint.
It should be appreciated that in some embodiments, the robot hand gripping gesture model may include, for example, a plurality of robot hand gripping gesture sub-models, each robot hand gripping gesture sub-model corresponding to a different type of target object. Then, for example, a grip gesture sub-model corresponding to the target object may be first determined in the robot grip gesture model based on the type of the target object. In some embodiments, the robot hand gripping gesture model may also be a single model for the current target object. It should be appreciated that the above only gives a composition example of the robot hand gripping gesture model. Embodiments of the present disclosure are not limited by the specific composition of the robotic hand gripping gesture model.
The robot hand holding gesture model is a pre-trained model, and the model can well describe the corresponding relation between the fingertip touch information and the joint gesture information of the robot hand and the gesture of the target object, so that the gesture estimation of the target object can be generated based on the fingertip touch information and the joint gesture information of the robot hand by using the model.
According to practical needs, the robot hand holding gesture model may be a nonlinear model, for example, a gaussian mixture model, where gesture prediction data of the target object can be calculated based on a gaussian mixture regression function in the gaussian mixture model.
The attitude prediction data of the target object refers to a predicted value of the attitude position of the target object, and may include, for example, spatial position coordinates (for example, absolute coordinates in an absolute coordinate system or relative coordinates in a relative coordinate system) of the target object, attitude angle data (for example, euler angle data) of the target object, and the like.
After the gesture prediction data of the target object is obtained, in step S1022, based on the tactile information of each fingertip and the gesture information of each joint, a distribution probability that the gesture prediction data of the target object belongs to a robot hand holding gesture model is determined, wherein the robot hand holding gesture model is a nonlinear model.
The distribution probability that the gesture prediction data of the target object belongs to the robot hand holding gesture model refers to a probability value that the calculated gesture prediction data of the target object belongs to the robot hand holding gesture model, and the probability value is used for representing the confidence coefficient of the calculated gesture prediction data of the target object.
For example, if the robot hand holding gesture model is a gaussian mixture model trained by using fingertip tactile information, joint gesture information and target object gesture as core parameters, for example, the currently acquired tactile information, gesture information of each joint and calculated gesture prediction data of the target object may be substituted into the gaussian mixture model, and a gaussian distribution probability of the gesture prediction data of the current target object in the gaussian mixture model may be calculated.
After the distribution probability is calculated, in step S1023, the determined distribution probability is compared with a preset probability threshold, and the pose estimation of the target object is determined based on the comparison result.
The preset probability threshold is a preset lower limit value of distribution probability, and is used for effectively judging the calculated gesture prediction data of the target object and determining the gesture estimation of the target object based on the effectiveness judgment.
The preset probability threshold can be set, for example, to 0.5 or 0.7 based on the actual situation. Embodiments of the present disclosure are not limited by the specific values of the preset probability threshold.
For example, the above process of comparing the distribution probability with a preset probability threshold and determining the pose estimate of the target object may be described in more detail. Suppose the preset probability threshold is set to 0.6. If the distribution probability that the currently calculated pose prediction data of the target object belongs to the robot hand gripping pose model is 0.8, which is greater than the preset probability threshold, the currently calculated pose prediction data of the target object is judged to be valid data and is determined to be the pose estimate of the target object. If, instead, the calculated distribution probability that the pose prediction data of the target object belongs to the robot hand gripping pose model is 0.4, which is smaller than the preset probability threshold, this indicates that the confidence of the calculated pose prediction data of the target object is low; it is judged to be invalid data (an invalid target pose) and is not determined to be the pose estimate of the target object. In this case, for example, the pose estimate of the target object at the previous moment may still be used as the pose estimate at the current moment, or the pose estimate of the target object may be set to a preset pose value in the system.
Based on the above, in the present disclosure, in the process of estimating the pose of the target object based on the tactile information of each fingertip and the pose information of each joint of the robot hand, the pre-trained robot hand gripping pose model is used to generate pose prediction data of the target object from the tactile information of each fingertip and the pose information of each joint, the distribution probability that the pose prediction data of the target object belongs to the robot hand gripping pose model is obtained, and the pose estimate of the target object is determined by comparing the distribution probability with a preset probability threshold. In this way, the pose of the target object can be estimated accurately and in real time from the current tactile information and joint pose information of the robot hand, which is beneficial for obtaining the contact force feedback values of the robot fingertips based on the pose of the target object, thereby improving the precision and reliability of the tactile feedback of the robot remote control system.
In some embodiments, for example, at each moment during the operation of the robot hand, the tactile information of each fingertip of the robot hand and the pose information of each joint of the robot hand at that moment are acquired according to the foregoing process, and a pose estimate of the target object at that moment is determined therefrom. Then, for the current moment, after the distribution probability that the pose prediction data of the target object at that moment belongs to the robot hand gripping pose model has been calculated (step S1022), the step S1023 of comparing the determined distribution probability with a preset probability threshold and determining the pose estimate of the target object based on the comparison result may more specifically include: determining the pose prediction data of the target object at the current moment as the pose estimate of the target object at the current moment when the determined distribution probability is greater than the preset probability threshold; and determining the pose estimate of the target object at the moment immediately before the current moment as the pose estimate of the target object at the current moment when the determined distribution probability is less than or equal to the preset probability threshold.
Based on the above, in the present disclosure, the prediction data at the current moment is determined as the pose estimate of the target object at the current moment when the distribution probability is greater than the preset threshold (i.e., when the confidence of the current pose prediction data is high), and the pose estimate of the target object at the current moment is carried over from the pose estimate at the immediately preceding moment when the distribution probability is less than or equal to the preset threshold (i.e., when the confidence of the current pose prediction data is low). In this way, the confidence of the current prediction data can be well determined based on the calculated distribution probability, the optimal pose estimate of the target object can be determined flexibly based on that confidence, and the reliability and accuracy of the pose estimate of the target object can be improved.
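A minimal sketch of this validity check (the threshold value of 0.6 is taken from the example above; the names are assumptions):

```python
def select_pose_estimate(pose_prediction, distribution_probability,
                         previous_estimate, probability_threshold=0.6):
    """Keep the new prediction only when its distribution probability is high enough;
    otherwise carry the previous pose estimate forward to the current moment."""
    if distribution_probability > probability_threshold:
        return pose_prediction
    return previous_estimate

# Example: a low-confidence prediction (probability 0.4) falls back to the
# pose estimate from the previous moment.
print(select_pose_estimate([0.10, 0.20, 0.30, 0.0, 0.0, 0.0], 0.4,
                           [0.10, 0.20, 0.29, 0.0, 0.0, 0.0]))
```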
In some embodiments, the robot hand gripping pose model is a gaussian mixture model. The gaussian mixture model (Gaussian Mixture Models) refers to a model formed by mixing (i.e., superimposing) a plurality of gaussian distribution models, which includes a plurality of gaussian distribution models, and each gaussian distribution model is independent of the other gaussian distribution model. It should be appreciated that embodiments of the present disclosure are not limited by the number of specific gaussian distribution models included in the gaussian mixture model.
A Gaussian distribution model quantizes a phenomenon precisely by means of a Gaussian probability density function (normal distribution curve), decomposing it into several models based on Gaussian probability density functions (normal distribution curves). The specific structure of a Gaussian distribution model is mainly determined by parameters such as its variance and mean. Based on this, parameters such as the mean and variance of each Gaussian distribution model in the robot hand gripping pose model can be tuned with training data during pre-training, so that the generated robot hand gripping pose model (Gaussian mixture model) can well describe the correspondence between the fingertip tactile information and joint pose information of the robot hand and the pose of the target object. The expression of the Gaussian mixture model obtained by training is, for example, shown in the following formula 1):
p(h, j, t \mid G) = \sum_{m=1}^{M} \gamma_m \, p_m(h, j, t \mid \mu_m, \Sigma_m)    (1)

Wherein (h, j, t) is a data relationship set of the robot hand and the target object, composed of the pose information j of each joint of the robot hand, the tactile information t of each fingertip of the robot hand and the pose data h of the target object at a specific moment; p(h, j, t | G) is the distribution probability of the data relationship set (h, j, t) belonging to the Gaussian mixture model G. M is the total number of Gaussian distribution models included in the Gaussian mixture model, m denotes the m-th Gaussian distribution model in the Gaussian mixture model, and m is a positive integer greater than or equal to 1 and less than or equal to M. γ_m represents the prior condition of the m-th Gaussian distribution model, which is a value set according to actual needs; p_m(h, j, t | μ_m, Σ_m) characterizes the corresponding conditional probability density of the m-th Gaussian distribution model, where μ_m is the expectation of the m-th Gaussian distribution model and Σ_m is the variance of the m-th Gaussian distribution model, both of which are specifically defined as in the following formula 2):

\mu_m = \begin{bmatrix} \mu_{h,m} \\ \mu_{q,m} \end{bmatrix}, \qquad \Sigma_m = \begin{bmatrix} \Sigma_{hh,m} & \Sigma_{hq,m} \\ \Sigma_{qh,m} & \Sigma_{qq,m} \end{bmatrix}    (2)

Wherein μ_{h,m} is the expected value of h in the m-th Gaussian distribution and μ_{q,m} is the expected value of q in the m-th Gaussian distribution, where q = {j, t}, j is the pose information of each joint of the robot hand, and t is the tactile information of each fingertip of the robot hand. Σ_{hh,m} is the hh submatrix of the covariance matrix of the m-th Gaussian distribution, Σ_{hq,m} is the hq submatrix, Σ_{qh,m} is the qh submatrix, and Σ_{qq,m} is the qq submatrix of the covariance matrix of the m-th Gaussian distribution.
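A small numerical sketch of formula 1) as reconstructed above, assuming NumPy and SciPy are available; the component parameters below are placeholders rather than values from the disclosure.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_probability(x, priors, means, covariances):
    """Formula 1): sum over components of gamma_m * N(x | mu_m, Sigma_m),
    where x is the concatenated (h, j, t) vector."""
    return sum(g * multivariate_normal.pdf(x, mean=mu, cov=cov)
               for g, mu, cov in zip(priors, means, covariances))

# Toy 2-component mixture over a 4-dimensional (h, j, t) vector (placeholder sizes).
rng = np.random.default_rng(1)
priors = [0.6, 0.4]
means = [rng.normal(size=4), rng.normal(size=4)]
covariances = [np.eye(4), 2.0 * np.eye(4)]
print(mixture_probability(np.zeros(4), priors, means, covariances))
```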
Further, from the trained gaussian mixture model, an expression of a gaussian mixture regression function (Gaussian Mixture Regression) of the gaussian mixture model can be obtained. This expression has, for example, the form of the following formula 3).
\hat{h}(\hat{q}) = \sum_{m=1}^{M} \beta_m(\hat{q}) \, \hat{h}_m(\hat{q})    (3)

Wherein \hat{q} = \{\hat{j}, \hat{t}\} is the set of parameters input to the model. Specifically, \hat{j} is the pose information of each joint of the current robot hand, for example the opening angle data of each current joint of the robot hand, which may be, for example, a 3-dimensional vector; \hat{t} is the tactile information of each fingertip of the current robot hand; for example, when the fingertips of the thumb, the index finger and the middle finger of the robot hand are each provided with 12 tactile sensors (i.e., 12 tactile signal data values are collected at each fingertip), \hat{t} may, for example, take the form of a matrix of 3 rows and 12 columns. \hat{h}(\hat{q}) is the pose prediction data of the target object calculated based on the model input parameter set \hat{q}. M is the total number of Gaussian distribution models included in the Gaussian mixture model, m denotes the m-th Gaussian distribution model in the Gaussian mixture model, and m is a positive integer greater than or equal to 1 and less than or equal to M.

Wherein \hat{h}_m(\hat{q}), for example, has the expression shown in the following formula 4):

\hat{h}_m(\hat{q}) = \mu_{h,m} + \Sigma_{hq,m} \, \Sigma_{qq,m}^{-1} \, (\hat{q} - \mu_{q,m})    (4)

wherein μ_{h,m} represents the expected value of h in the m-th Gaussian distribution, Σ_{hq,m} represents the hq submatrix of the covariance matrix of the m-th Gaussian distribution, Σ_{qq,m} represents the qq submatrix of the covariance matrix of the m-th Gaussian distribution, and μ_{q,m} represents the expected value of q in the m-th Gaussian distribution.

The corresponding conditional covariance is, for example,

\hat{\Sigma}_{hh,m} = \Sigma_{hh,m} - \Sigma_{hq,m} \, \Sigma_{qq,m}^{-1} \, \Sigma_{qh,m}

wherein Σ_{hh,m} represents the hh submatrix of the covariance matrix of the m-th Gaussian distribution, Σ_{hq,m} represents the hq submatrix, Σ_{qq,m} represents the qq submatrix, and Σ_{qh,m} represents the qh submatrix of the covariance matrix of the m-th Gaussian distribution.

The mixing weight \beta_m(\hat{q}) is, for example,

\beta_m(\hat{q}) = \frac{\gamma_m \, p(\hat{q} \mid \mu_{q,m}, \Sigma_{qq,m})}{\sum_{k=1}^{M} \gamma_k \, p(\hat{q} \mid \mu_{q,k}, \Sigma_{qq,k})}

wherein γ_m represents the prior condition of the m-th Gaussian distribution model, which is a value set according to actual needs, p(\hat{q} \mid \mu_{q,m}, \Sigma_{qq,m}) characterizes the corresponding conditional probability density of the m-th Gaussian distribution model, and wherein μ_{q,m} characterizes the expected value of q in the m-th Gaussian distribution and Σ_{qq,m} characterizes the qq submatrix of the covariance matrix of the m-th Gaussian distribution.
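The regression of formulas 3) and 4) as reconstructed above could be sketched numerically as follows; the parameter names mirror the formulas, and all data here are placeholders rather than values from the disclosure.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmr_predict(q, gammas, mu_h, mu_q, sigma_hq, sigma_qq):
    """Gaussian mixture regression: predict the object pose h_hat from q = (j, t).

    Per component m:  h_m(q)    = mu_h[m] + sigma_hq[m] @ inv(sigma_qq[m]) @ (q - mu_q[m])
                      beta_m(q) = gamma_m * N(q | mu_q[m], sigma_qq[m]) / normalizer
    and h_hat = sum_m beta_m(q) * h_m(q).
    """
    densities = np.array([g * multivariate_normal.pdf(q, mean=mq, cov=sq)
                          for g, mq, sq in zip(gammas, mu_q, sigma_qq)])
    betas = densities / densities.sum()
    h_hat = np.zeros(len(mu_h[0]))
    for m, beta in enumerate(betas):
        h_m = mu_h[m] + sigma_hq[m] @ np.linalg.solve(sigma_qq[m], q - mu_q[m])
        h_hat += beta * h_m
    return h_hat

# Toy example: 2 components, 3-dimensional q (joints + tactile), 6-dimensional pose h.
rng = np.random.default_rng(2)
gammas = [0.5, 0.5]
mu_h = [rng.normal(size=6), rng.normal(size=6)]
mu_q = [rng.normal(size=3), rng.normal(size=3)]
sigma_hq = [rng.normal(size=(6, 3)), rng.normal(size=(6, 3))]
sigma_qq = [np.eye(3), np.eye(3)]
print(gmr_predict(np.zeros(3), gammas, mu_h, mu_q, sigma_hq, sigma_qq))
```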
In the case where the robot hand gripping gesture model is a gaussian mixture model, the process S1022 of determining that the gesture prediction data of the target object belongs to the distribution probability of the robot hand gripping gesture model may be described in more detail. Fig. 4 illustrates an exemplary flowchart of a process S1022 of determining a distribution probability that the pose prediction data of the target object belongs to a robot hand gripping pose model according to an embodiment of the disclosure.
Referring to fig. 4, first, in step S1022-1, for each of the gaussian distribution models, a gaussian distribution probability that the posture prediction data of the target object belongs to the gaussian distribution model is determined.
After the gaussian distribution probability of the gesture prediction data of the target object in each gaussian distribution model is obtained, in step S1022-2, the gaussian distribution probabilities of the gesture prediction data of the target object belonging to each gaussian distribution model are weighted and summed, so as to generate the distribution probability of the gesture prediction data of the target object belonging to the robot hand holding gesture model.
It should be appreciated that in the weighted summation process, the weight value allocated to the gaussian probability distribution under each gaussian distribution model may be selected according to the actual needs and the actually performed task types, or may be a weight value preset for the system or preset for a user. Embodiments of the present disclosure are not limited by the particular weight values assigned to each gaussian probability in the weighted summation process.
For example, when the robot hand holding posture model is a gaussian mixture model having M gaussian distribution models, the distribution probability that the posture prediction data of the target object belongs to the robot hand holding posture model may be calculated, for example, according to the following formula 8).
\hat{p} = p(\hat{h}, \hat{j}, \hat{t} \mid G) = \sum_{m=1}^{M} \gamma_m \, p_m(\hat{h}, \hat{j}, \hat{t} \mid \mu_m, \Sigma_m)    (8)

Wherein (\hat{h}, \hat{j}, \hat{t}) is the data relationship set of the robot hand and the target object composed of the pose information \hat{j} of each joint of the current robot hand, the tactile information \hat{t} of each fingertip of the current robot hand, and the pose prediction data \hat{h} of the target object calculated based on the model; p(\hat{h}, \hat{j}, \hat{t} \mid G) is the distribution probability of this data relationship set belonging to the Gaussian mixture model G. m is the m-th Gaussian distribution model in the Gaussian mixture model, and is a positive integer greater than or equal to 1 and less than or equal to M. And wherein μ_m is the expectation of the m-th Gaussian distribution model and Σ_m is the variance of the m-th Gaussian distribution model, the specific definitions of both of which are shown in the foregoing formula 2).
Based on the above, on the one hand, by setting the robot hand holding gesture model to be a gaussian mixture model including a plurality of gaussian distribution models, the relation among the joint gesture information, the fingertip tactile information and the target gesture of the robot hand can be well and accurately described based on the set gaussian mixture model, thereby generating high-precision gesture prediction data of the real-time target object; on the other hand, the probability of the distribution of the gesture predicted data of the target object belonging to the robot hand holding gesture model is generated by calculating the Gaussian distribution probability of the gesture predicted data of the target object belonging to the Gaussian distribution model and carrying out weighted summation on the Gaussian distribution probabilities of the gesture predicted data of the target object belonging to the Gaussian distribution models, so that the probability of membership of the predicted data in each Gaussian distribution model and the probability of the Gaussian mixture model can be judged, the reliability and the accuracy of the finally calculated distribution probability are better, and the method is favorable for realizing good judgment of the confidence coefficient of the predicted data.
In some embodiments, the pose information of each joint includes opening angle information of a plurality of joints in a thumb, an index finger and a middle finger of the robot, and the pose estimate of the target object includes position data and euler angle data of the target object.
For example, the opening angle information is, for example, an angle measurement value obtained by detecting an included angle formed between two adjacent joints of the robot arm, and is used for representing the opening and closing degree of the joints.
For example, the euler angle data is pitch angle, yaw angle and roll angle data of the target object. The position data is, for example, the relative coordinate position of the target object with respect to the hand center point of the robot hand.
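Purely as an illustration of the data just described, a possible in-memory representation of the joint posture information and the target object posture estimation might look as follows; the class and field names are assumptions introduced for this sketch, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class JointPose:
    # Opening angles (e.g. in radians) of the joints involved in the task,
    # grouped per finger of the robot hand.
    thumb: List[float] = field(default_factory=list)
    index: List[float] = field(default_factory=list)
    middle: List[float] = field(default_factory=list)

@dataclass
class TargetPose:
    # Position of the target object relative to the hand center point,
    # plus its Euler angles (pitch, yaw, roll).
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    pitch: float = 0.0
    yaw: float = 0.0
    roll: float = 0.0
```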
Based on the above, by setting that the posture information of the joints includes opening angle information of a plurality of joints in the thumb, the index finger and the middle finger of the robot, the posture estimation of the target object includes the position data and the euler angle data of the target object, so that the posture of the target object and the joint posture of the robot can be more comprehensively described based on actual needs, thereby being beneficial to improving the accuracy of the contact force feedback value generated based on the posture estimation of the target object and the joint posture information of the robot.
In some embodiments, the training process of the robot hand holding gesture model may be described in more detail, for example. FIG. 5A illustrates an exemplary flowchart of a training process 200 for a robot hand gripping gesture model in accordance with embodiments of the present disclosure. Fig. 5B shows a schematic diagram of a robot hand and a human hand region in training of a robot hand gripping gesture model according to an embodiment of the present disclosure. Fig. 5D shows a schematic diagram of a robot gripping gesture training process according to an embodiment of the present disclosure.
Referring to fig. 5A, 5B, and 5D, first, in step S201, a visual tracking mark is provided on a target object, and a tactile sensor is provided on a fingertip of a robot hand.
Referring to fig. 5B, the visual tracking mark refers to a mark point for realizing visual tracking. The marked point can be identified by a visual tracking device, thereby realizing the tracking of the position and the gesture of the marked point. It should be appreciated that one or more marker points may be provided at different locations of the target object to enable visual tracking of the position and pose of the target object (the visual markers provided on the target object are identified as white filled boxes in fig. 5B), as desired. Embodiments of the present disclosure are not limited by the particular number of visual tracking marks provided on the target object and the particular placement locations of the visual marks.
It should be appreciated that in some embodiments, visual tracking indicia may also be provided on a hand area of the robot hand (e.g., a hand center point or corresponding task finger joints and fingertips), with the visual tracking indicia provided on the finger portion shown in fig. 5B, for example, to track the change in pose and position of a particular portion of the robot hand during performance of a predetermined operational task.
The target object may be described in more detail, for example. A schematic diagram of a target object for the training process according to an embodiment of the present disclosure is shown in fig. 5C. As can be seen from fig. 5C, the target object may include, for example, a circular bottle cap (a), a square object (b), and a hexagonal object (c); however, it will be appreciated that target objects of other shapes and physical characteristics may be used in the actual training process as desired. Embodiments of the present disclosure are not limited by the particular composition of the selected target object.
The tactile sensor is a means for detecting the contact force of the respective fingertip against the target object. According to actual needs, for example, a plurality of tactile sensors may be disposed at each fingertip of the robot, for example, 12 tactile sensors may be disposed, or 15 tactile sensors may be disposed, and the embodiments of the present disclosure are not limited by the specific disposition positions and the number of the tactile sensors at the fingertips.
In step S202, the robot hand is manipulated to adjust the posture of the target object. During the posture adjustment of the target object, the posture information of each joint of the robot hand and the tactile information of each fingertip of the robot hand on the target object acquired by the tactile sensors are recorded, so as to generate a plurality of training data, each of which comprises the posture information of each joint and the corresponding tactile information of each fingertip of the robot hand on the target object.
As described above, the process of manipulating the robot hand to adjust the posture of the target object is, for example, as follows: the user performs preset motions with his or her own hand, hand posture information of the user (for example, posture information of each joint of the hand) is acquired by the haptic remote control device arranged at the user's hand, and posture control signals for the robot hand are generated based on the hand posture information, so as to realize remote control of the robot hand.
The gesture information of the robot joint and the meaning of the touch information of each fingertip of the robot to the target object are the same as those described previously, and are not described here again.
In step S203, visual tracking processing is performed on the target object, and standard target posture data corresponding to each of the plurality of training data of the target object is generated.
For example, during the posture adjustment of the target object, the visual tracking marks provided on the target object are tracked by one or more vision cameras (for example, stereoscopic cameras) arranged in the surrounding environment of the robot hand, and visual processing is then performed based on the track information obtained by the tracking, so that target posture data of the target object is obtained and used as the standard target posture data. The standard target posture data corresponds to the posture information of the robot hand joints acquired at that moment and the tactile information of each fingertip of the robot hand on the target object acquired by the tactile sensors.
It should be appreciated that the steps S202 and S203 may be performed sequentially, in reverse order, or simultaneously. Embodiments of the present disclosure are not limited by the specific order of execution of steps S202 and S203.
Thereafter, in step S204, for each of the plurality of training data, posture prediction data of the target object is generated by the robot hand holding posture model based on that training data.
For example, when the robot hand gesture model is a gaussian mixture model, gesture prediction data of the target object may be generated based on the aforementioned formula 3), for example.
After generating the posture prediction data of the target object, in step S205, a loss function is generated for each of the plurality of training data based on the posture prediction data of the target object and the standard target posture data.
The loss function is intended to characterize the degree of deviation between the predicted value and the true value. When the loss function has a minimum, it is intended to characterize that the predicted value deviates minimally from the true value, i.e. that the similarity between the two is maximized. The loss function may be designed according to practical needs, for example, and embodiments of the present disclosure are not limited by the specific expression of the loss function.
Thereafter, in step S206, the robot hand gripping posture model is trained based on the loss function.
For example, the values of the parameters of the robot hand holding posture model may be adjusted based on the loss function such that the loss function takes a minimum value. In some embodiments, when the robot hand holding posture model is a Gaussian mixture model, the expectation, variance and Gaussian prior parameters of each Gaussian distribution model in the Gaussian mixture model may be adjusted according to the loss function, so that the Gaussian mixture model can accurately reflect the relationship among the joint posture information, the fingertip tactile information and the posture of the target object of the robot hand.
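As a hedged illustration of steps S204-S206, the following sketch computes a mean-squared-error loss between the model's posture predictions and the standard target posture data and uses it as a training/acceptance criterion. The disclosure does not fix a specific loss expression, and `model.predict_pose` and `model.update_parameters` are hypothetical stand-ins for the robot hand holding posture model.

```python
import numpy as np

def training_loss(model, training_data, standard_poses):
    """Mean squared error between the model's posture predictions and the
    visually tracked standard target posture data (one possible loss; the
    disclosure does not fix a particular expression)."""
    errors = []
    for data, standard in zip(training_data, standard_poses):
        predicted = model.predict_pose(data)        # hypothetical GMR-style predictor
        errors.append(np.sum((np.asarray(predicted) - np.asarray(standard)) ** 2))
    return float(np.mean(errors))

# Sketch of the training criterion: adjust the expectations, variances and
# Gaussian priors of the mixture until the loss drops below the preset
# error threshold discussed later in the text.
# while training_loss(model, training_data, standard_poses) >= error_threshold:
#     model.update_parameters(training_data, standard_poses)  # hypothetical update step
```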
In some embodiments, a virtual view of the target object and the robot arm will also be generated based on the pose estimate of the target object and pose information of each joint of the robot arm, and sent to a head mounted display worn by the user.
The training process and its effects will be described in more detail below in connection with the detailed description.
For example, the robot hand holding posture model may be trained with the three target objects in fig. 5C (the bottle cap (a), the square object (b), and the hexagonal object (c)), and is a Gaussian mixture model as described above. In the training process, a robot hand holding posture model corresponding to each of the three target objects is first trained based on the method described above; the specific training steps and the process of generating the robot hand holding posture model for each target object correspond to the training steps described above with reference to fig. 5B and 5D, and are not repeated here.
After model training is completed, in order to evaluate the accuracy of the trained robot hand holding posture model, 1000 manipulation controls were performed on each of the three target objects. For each manipulation, a posture estimation value of the target object based on tactile information was generated by the trained robot hand holding posture model from the tactile information of each fingertip of the robot hand and the posture information of each joint. At the same time, visual tracking marks were provided on the target object and visual tracking processing was performed, and a posture estimation value of the target object based on visual information was generated from the visual tracking processing (this visual-information-based posture estimation value is taken as the reference value). Both the tactile-information-based posture estimation value and the visual-information-based posture estimation value include a position estimation value and an attitude angle (Euler angle) estimation value of the target object.
Thereafter, an error between the haptic information-based posture estimation value and the visual information-based posture estimation value is calculated, and an error histogram is generated. Fig. 5E illustrates an error histogram applying a trained robotic hand grip pose model according to embodiments of the present disclosure.
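For illustration, the error and histogram computation described above could be realized along the following lines; synthetic data is used here as a stand-in for the 1000 recorded estimates, and the array names are assumptions of this sketch.

```python
import numpy as np

# Stand-ins for the 1000 recorded estimates: positions in metres, Euler
# angles in degrees (synthetic data, for illustration only).
rng = np.random.default_rng(1)
tactile_pos, visual_pos = rng.normal(size=(1000, 3)), rng.normal(size=(1000, 3))
tactile_eul, visual_eul = rng.normal(size=(1000, 3)), rng.normal(size=(1000, 3))

pos_err = np.linalg.norm(tactile_pos - visual_pos, axis=1)   # position error, metres
eul_err = np.linalg.norm(tactile_eul - visual_eul, axis=1)   # Euler angle error, degrees

# Normalized histograms corresponding to the upper/lower plots of fig. 5E.
pos_hist, pos_edges = np.histogram(pos_err, bins=20, density=True)
eul_hist, eul_edges = np.histogram(eul_err, bins=20, density=True)
```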
Fig. 5E includes three histogram sets (the histograms at the upper and lower positions are one histogram set) that respectively indicate the distribution states of errors in the posture estimation values based on the tactile information with respect to the posture estimation values based on the visual information in the case where the target objects are the cap (a), the square object (b), and the hexagonal object (c). Wherein the upper histogram in each set of histograms characterizes an error of the position estimate of the haptic information-based target object relative to the position estimate of the visual information-based target object, wherein the abscissa is a distance unit (here, e.g., meters) and the ordinate is a probability value characterizing a probability that the position estimate of the haptic information-based target object has a particular error value with the position estimate of the visual information-based target object in the 1000 pose estimation comparisons; the lower histogram in each set of histograms characterizes the error of the euler angle estimate of the haptic information-based target object relative to the euler angle estimate of the visual information-based target object, wherein the abscissa characterizes the angle value and the ordinate is the probability value, which characterizes the probability that the euler angle estimate of the haptic information-based target object has a specific error value with the euler angle estimate of the visual information-based target object in the 1000 pose estimation comparisons.
For example, taking the first histogram set as an example, it shows the distribution of the errors of the tactile-information-based posture estimation values relative to the visual-information-based posture estimation values over 1000 posture estimations when the target object is the bottle cap (a). From the upper histogram it can be seen that, in most of the 1000 posture estimations, the error between the position estimation value of the target object based on tactile information and the position estimation value based on visual information is less than 2 mm. From the lower histogram it can be seen that, in most of the 1000 posture estimations, the error of the Euler angle estimation value of the target object based on tactile information relative to the Euler angle estimation value based on visual information is 0. It can thus be seen that the trained robot hand holding posture model can accurately reflect the relationship among the joint posture information, the fingertip tactile information and the posture of the target object of the robot hand, and can realize high-accuracy posture estimation of the target object.
Further, combining the three sets of histograms of the three target objects, it can be seen that the average error of the position estimates for the three objects using the haptic information is about 1mm, and the average error of the estimated values for the euler angles for the three objects is less than 10 degrees. The result shows that under the condition of vision shielding, the object gesture in the operation process of the robot can be estimated with high precision and high reliability by applying the robot hand holding gesture model trained in the method.
Based on the above, in the present disclosure, by setting a visual tracking mark on a target object and performing visual tracking processing, taking pose information of the target object obtained by the visual tracking processing as standard target pose data, generating a loss function by using the standard target pose data and pose prediction data of the target object generated by a robot hand holding pose model, and training the robot hand holding pose model via the loss function, the parameter in the robot hand holding pose model can be accurately adjusted based on the target pose obtained by the visual tracking, so that the trained robot hand holding pose model can accurately reflect the relationship between joint pose information, fingertip touch information and pose of the target object of the robot hand, and accurately generate pose estimation of the corresponding target object in real time according to the joint pose information and fingertip touch information of the robot hand.
In some embodiments, the robot hand holding posture model is trained such that the error between the posture prediction data of the target object and the standard target posture data is less than a preset error threshold.
The preset error threshold represents the maximum boundary value of the error between the gesture prediction data of the target object and the standard target gesture data, and is used for judging whether the trained robot hand holding gesture model can well describe the relationship among the joint gesture information, the fingertip touch information and the gesture of the target object. And when the errors of the gesture prediction data of the target object and the standard target gesture data are smaller than a preset error threshold value, the robot hand holding gesture model is characterized to reach the expected training standard.
It should be appreciated that the preset error threshold may be set according to actual needs, and embodiments of the present disclosure are not limited by the specific data value of the preset error threshold.
Based on the above, in the present disclosure, by setting the preset error threshold, it is possible to clearly and intuitively understand whether the current training process reaches the preset standard by comparing the error between the posture prediction data of the target object and the standard target posture data with the preset error threshold. The preset error threshold value can be adjusted based on different operation task requirements of the robot hand or different requirements of a user, so that the accuracy and precision of the hand-held gesture model of the trained robot hand can be flexibly adjusted.
In some embodiments, the above-described process S103 of determining the contact force feedback value of each fingertip of the robot hand based on the pose estimation of the target object and the pose information of each joint of the robot hand may be described in more detail, for example. Fig. 6 shows an exemplary flowchart of a process S103 of determining contact force feedback values for respective fingertips of a robot hand according to an embodiment of the present disclosure.
Referring to fig. 6, first, in step S1031, a target contact state is determined based on the posture estimation of the target object and the posture information of each joint of the robot.
The target contact state is used for representing the contact degree of the target object and the robot. For example, a virtual three-dimensional view of the target object and the robot may be established based on the pose estimation of the target object and pose information of each joint of the robot, so as to determine a target contact state of the target object and the robot from among a plurality of preset target contact states based on the virtual three-dimensional view.
It should be appreciated that a plurality of preset target contact states may be provided, such as untouched, contacted, etc. Embodiments of the present disclosure are not limited by the number of target contact states set.
Thereafter, in step S1032, a contact force feedback value for each fingertip of the robot is determined based on the target contact state and the physical attribute data of the target object.
The physical properties of the target object refer to physical property information of materials, structures, hardness, strength, rigidity and the like of the target object. Embodiments of the present disclosure are not limited by the specific composition of the physical attributes.
For example, the process of determining the contact force feedback value of each fingertip according to the physical properties of the target contact state and the target object may be, for example: according to the determined contact state, if contact is generated, the generation position and the contact force direction of the interaction force (contact force) between the two are further calculated, and the physical property and the contact state of the target object are comprehensively considered to determine the magnitude of the interaction force (contact force feedback value).
It should be appreciated that the foregoing only provides a specific way to determine the fingertip contact force feedback value, and the fingertip contact force feedback value may be determined based on the target contact state and the physical attribute data of the target object in other ways according to practical situations.
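As one such alternative illustration (not the method fixed by the disclosure), the following sketch scales a simulated penetration depth by a stiffness value taken from the object's physical attribute data, so that harder objects yield larger feedback values for the same contact geometry; the function name, state labels and numbers are assumptions introduced only for this example.

```python
def contact_force_feedback(contact_state, penetration_depth, stiffness):
    """One illustrative way to turn the target contact state and a physical
    property (stiffness, from the object's material data) into a contact
    force feedback value; the disclosure allows other schemes as well."""
    if contact_state == "non-contact":
        return 0.0
    # Simple spring-like model: harder (stiffer) objects yield larger
    # feedback values for the same simulated penetration depth.
    return stiffness * max(penetration_depth, 0.0)

# A rigid cap (high stiffness) gives a larger feedback value than a soft,
# flexible object, matching the behaviour described in the text.
print(contact_force_feedback("continuous contact", 0.002, 5000.0))  # rigid material
print(contact_force_feedback("continuous contact", 0.002, 300.0))   # flexible material
```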
Based on the above, in the present disclosure, in determining the contact force feedback value of each fingertip of the robot hand, the target contact state is first determined according to the posture estimation of the target object and the posture information of each joint of the robot hand, and the contact force feedback value of each fingertip of the robot hand is then determined from the target contact state together with the physical attribute data of the target object. In determining the contact force feedback value, both the dynamic interaction state of the target object and the robot hand and the influence of physical characteristics such as the structural material of the target object are therefore considered, which improves the accuracy and reliability of the contact force feedback value compared with taking only the tactile signal collected at the fingertip as the contact force feedback value. For example, for the same determined target contact state, if the target object is made of a rigid material with higher hardness, the determined contact force feedback value is larger; if the target object is made of a flexible material with lower hardness, the determined contact force feedback value is smaller. A dynamic and flexible contact force feedback value can thereby be generated.
In some embodiments, the target contact state comprises: non-contact (NC), contact start (EC), continuous contact (SC), and contact end (CE).
For example, the non-contact state refers to a state in which the target object is not in contact with any fingertip of the robot hand. The contact start state refers to the state in which the target object and a fingertip of the robot hand change abruptly from non-contact to being in contact. The continuous contact state refers to the state in which the target object remains in contact with the fingertip of the robot hand. The contact end state refers to the state in which the robot hand and the target object separate from contact after having been in contact and the corresponding operation having been performed.
It should be understood that the above definitions and descriptions of states are merely exemplary. Other ways of representing and defining the contact state may be used as desired. Embodiments of the present disclosure are not limited by the specific characterization meaning and representation of the individual contact states.
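For illustration, one simple way to encode the four contact states and derive them from the contact flags of two consecutive time steps (obtained, for example, from the virtual three-dimensional view mentioned in step S1031) is sketched below; the enumeration and function names are assumptions introduced only for this example.

```python
from enum import Enum

class ContactState(Enum):
    NC = "non-contact"
    EC = "contact start"
    SC = "continuous contact"
    CE = "contact end"

def classify_contact(was_in_contact: bool, is_in_contact: bool) -> ContactState:
    """Derive the target contact state from the contact flags of two
    consecutive time steps (one simple encoding of the four states above)."""
    if is_in_contact:
        return ContactState.SC if was_in_contact else ContactState.EC
    return ContactState.CE if was_in_contact else ContactState.NC

print(classify_contact(False, True))   # ContactState.EC: contact just started
```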
Based on the above, in the present disclosure, by setting multiple contact states, the robot hand and the target object can be well judged and represented under the condition that they are in multiple different contact states. Thereby being beneficial to accurately generating the corresponding contact force feedback value of the fingertip based on the contact state and the physical attribute data of the target object.
In some embodiments, determining the contact force feedback value for each fingertip of the robot further comprises: based on the posture estimation of the target object and the posture information of each joint of the robot, generating a virtual view of the target object and the robot, and transmitting the virtual view to a user.
The virtual view of the target object and the robot may be generated, for example, via a simulation system in a virtual reality system. Or the virtual view may be generated based on a preset algorithm or system, and embodiments of the present disclosure are not limited by the manner in which the virtual view is generated.
The virtual view refers to a visual image for representing the relative positions of the robot and the target object and the attitudes of the robot and the target object, and may be, for example, a three-dimensional view of the target object and the robot.
The virtual view may be sent to the user, for example, via virtual reality glasses worn on the eyes of the user or a head mounted display, for example, presented on the screen of the virtual reality glasses for viewing by the user. Or the virtual view may be displayed on a display screen or cell phone screen for viewing by the user. Embodiments of the present disclosure are not limited by the particular manner in which the virtual view is sent to the user and its final presentation.
Based on the above, in the present disclosure, in the process of generating the contact force feedback value, by generating the virtual view of the target object and the robot based on the pose estimation of the target object and the pose information of each joint of the robot, on one hand, compared with the video image captured by the camera (the camera is located at a specific position point), the present disclosure can provide the virtual view of the target object and the robot under various viewing angles to the user according to the actual needs, and the virtual view does not have the problem of incomplete visual information due to the blind point of the camera or the shielding of the captured image of the camera by the human hand, so that the positional relationship between the target object and the robot can be reflected in an omnibearing manner. On the other hand, compared with the transmission of video stream data shot by a camera, the virtual view transmitted by the method has smaller data volume, thereby being capable of remarkably reducing the transmission bandwidth between a host and a slave and improving the transmission speed and reliability. In addition, the virtual view also improves the man-machine interaction experience of the user, so that the user can perform mutual verification through the visual feedback and the tactile feedback, and the reliability of the remote control process is further improved.
In some embodiments, the robot remote control method further comprises the step of pre-calibrating the robot before obtaining the tactile information of each fingertip of the robot on the target object. Fig. 7A shows an exemplary flowchart of a robot pre-calibration step 300 according to an embodiment of the present disclosure. Fig. 7B shows a schematic diagram of a robot pre-calibration process according to an embodiment of the disclosure. The pre-calibration step will be described in more detail below with reference to fig. 7A and 7B.
Referring to fig. 7A, first, in step S301, when a hand performing a remote control operation performs a preset hand operation, joint data of the robot hand and joint data of the hand performing the remote control operation are acquired.
The preset hand motion is a hand motion set for realizing the calibration of the robot, and the motion is designed for determining the mapping proportion relation between the positions and the opening degrees of corresponding joints of the robot and the human hand. Referring to fig. 7B, the motion may include, for example, an open limit of the hand and a closed limit of the hand, or may also include other hand gesture motions as may be desired. Embodiments of the present disclosure are not limited by the specific composition of the preset hand motion.
It should be understood that the joint data of the robot arm herein refers to data representing the joint layout conditions of the robot arm, such as the joint positions, the joint arrangement, the joint postures, and the like. Specifically, it may include, for example, the aforementioned posture information (for example, joint opening angle) of each joint, and it may also include data such as the arrangement structure of the joints on the robot and their specific master-slave joint association relationship. It should be appreciated that embodiments of the present disclosure are not limited by the specific composition of the joint data.
The joint data of the human hand performing the remote control action refers to data representing joint layout conditions such as joint positions, joint setting modes, joint postures and the like of the human hand. Specifically, it may include, for example, hand posture information (joint opening angle of a hand), and it may also include data of a corresponding coordinate position of a joint on the hand, a joint distance, and the like. It should be appreciated that embodiments of the present disclosure are not limited by the specific composition of the joint data of a human hand.
For example, joint data of a human hand may be acquired by a haptic remote control device or control detection means provided at the human hand. Acquisition of the data of the joints of the human hand may be achieved, for example, by an exoskeleton data glove placed on the hand of the human. However, it should be appreciated that the acquisition of joint data of a human hand may also be accomplished in other ways, as desired.
Thereafter, in step S302, a mapping relationship between the joints of the robot and the joints of the hand performing the remote control operation is determined based on the joint data of the robot and the joint data of the user' S hand.
The mapping relation refers to a mapping relation between joint data of a human hand and joint data of a robot hand, and a mapping proportion parameter. Which will be described in more detail below in connection with specific embodiments.
For example, the process of determining the mapping relationship of the joints of the robot hand and the joints of the human hand may be: first, according to a preset corresponding relation, the joints of the human hand and the joints of the robot hand are corresponding. Fig. 7C illustrates a schematic diagram of a correspondence of a robot joint and a human hand joint according to an embodiment of the present disclosure.
Referring to fig. 7C, in which the joints of the human hand are characterized by the exoskeleton data glove provided on the human hand, it can be seen that the human hand joints involved in the task include D1-D5, and the robot hand is provided with robot hand joints J1-J11 corresponding to the human hand joints D1-D5. A specific correspondence relationship is, for example: the joint D1 corresponds to the joint J1, the joint D2 corresponds to the joints J2-J4, the joint D3 corresponds to the joints J5-J7, the joint D4 corresponds to the joints J8-J10, and the joint D5 corresponds to the joint J11. Fig. 7C also shows a plurality of tactile sensors arranged at the fingertips of the robot hand (tactile sensors T1-T12 are arranged at the fingertips in the figure).
However, it should be appreciated that the above only gives an example of one correspondence of a robot joint to a human hand joint. Other corresponding modes can be set according to actual needs. Embodiments of the present disclosure are not limited by the correspondence of the robotic joints to the human hand joints.
Then, based on the preset correspondence relation, the joint data of the robot, and the joint data of the hand, a mapping ratio parameter between a certain joint of the hand and the corresponding joint of the robot can be calculated, for example, when the unified preset action is performed, the opening angle of the corresponding joint of the robot is calculated to be K times (where K is a positive integer greater than 0) the opening angle of the joint of the hand, the distance between adjacent joints of the robot is L times (where L is a positive integer greater than 0) the distance between adjacent joints of the hand, and the like, thereby obtaining the mapping relation between the joint data of the hand and the joint data of the robot.
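For illustration, the correspondence of fig. 7C and the computation of the K-type mapping ratio parameters from calibration readings could be expressed as follows; the dictionary layout, the choice of using the first corresponding robot joint, and all numeric values are assumptions made for this sketch.

```python
# Correspondence between human hand joints (on the exoskeleton data glove)
# and robot hand joints, as described for fig. 7C.
HAND_TO_ROBOT = {
    "D1": ["J1"],
    "D2": ["J2", "J3", "J4"],
    "D3": ["J5", "J6", "J7"],
    "D4": ["J8", "J9", "J10"],
    "D5": ["J11"],
}

def mapping_ratios(hand_angles, robot_angles):
    """Mapping ratio K per human hand joint: the opening angle of the
    corresponding robot joint divided by the opening angle of the human
    joint, measured while the same preset motion is performed.
    (Using only the first corresponding robot joint is a simplification.)"""
    return {d: robot_angles[j_list[0]] / hand_angles[d]
            for d, j_list in HAND_TO_ROBOT.items()}

# Illustrative calibration readings (radians) taken at the hand opening limit.
ratios = mapping_ratios(
    {"D1": 0.80, "D2": 1.20, "D3": 1.10, "D4": 1.00, "D5": 0.90},
    {"J1": 0.80, "J2": 1.30, "J5": 1.00, "J8": 1.10, "J11": 0.95},
)
print(ratios)
```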
Based on the above, in step S303, a target mapping relationship is determined based on the mapping relationship. Specifically, when the mapping relationship satisfies a predetermined condition, the mapping relationship is taken as the target mapping relationship between the robot hand and the human hand; when the mapping relationship does not satisfy the predetermined condition, the mapping ratio parameters between the robot hand and the human hand in the mapping relationship are adjusted, and the adjusted mapping relationship is taken as the target mapping relationship.
The predetermined condition is used to determine whether the mapping relationship between the current robot hand and the human hand meets the standard (here, meeting the standard means that joint motions can be correctly mapped based on the mapping relationship). The predetermined condition may be, for example, a manually set mapping relationship range (for example, including an upper limit, a lower limit and a relational expression for each mapping ratio parameter), or a mapping relationship range determined from the mapping relationship obtained after calibration the last time the robot hand was operated; in addition, the predetermined condition may also be calculated by a preset algorithm according to actual needs. Embodiments of the present disclosure are not limited by the specific content of the predetermined condition.
For example, if the predetermined condition is a manually set upper and lower limit range of the mapping ratio parameters and a relational expression thereof, the current mapping relationship is compared with the predetermined condition. If the mapping ratio parameters in the current mapping relationship are within the upper and lower limit ranges defined by the predetermined condition and satisfy the relational expression, the mapping relationship is determined to be a correct mapping relationship and is taken as the target mapping relationship for the current operation task of the robot hand. If some of the mapping ratio parameters in the current mapping relationship fall outside the upper and lower limit ranges, for example a certain mapping ratio parameter is far greater than the preset upper limit, the mapping relationship is judged to be incorrect, and the mapping ratio parameters are adjusted based on the upper and lower limit ranges and the relational expression defined by the predetermined condition (in this case, for example, the corresponding mapping ratio parameter is reduced) until the mapping relationship satisfies the predetermined condition; the adjusted mapping relationship is then taken as the target mapping relationship.
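As a minimal illustration of the check and adjustment in step S303, the following sketch clamps each calibrated mapping ratio to a preset upper/lower bound; treating the predetermined condition as a simple per-parameter range, and clamping as the adjustment, is an assumption of this sketch rather than the only scheme allowed by the disclosure.

```python
def target_mapping(ratios, lower=0.5, upper=2.0):
    """Compare each calibrated mapping ratio with preset lower/upper bounds
    (one simple form of the predetermined condition) and clamp any ratio
    that falls outside them; the result is used as the target mapping."""
    return {joint: min(max(k, lower), upper) for joint, k in ratios.items()}

print(target_mapping({"D1": 1.0, "D2": 1.08, "D3": 3.7}))  # D3 is reduced to 2.0
```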
It should be appreciated that the pre-calibration step of the robot may be performed, for example, before the step S101 of acquiring the tactile information of each fingertip of the robot on the target object by the robot remote control method, after the step S105 of completing one gesture remote control process and acquiring the tactile feedback by the robot remote control method, or during the execution of the robot remote control method based on a false alarm of the system or an active trigger of the user. Embodiments of the present disclosure are not limited by the timing of execution of the pre-calibration step in the robot remote control method.
Based on the above, in the present disclosure, by setting the pre-calibration step, under the condition of executing the pre-set hand motion, joint data of the robot and joint data of a hand executing the remote control motion are collected and a mapping relationship between the two is determined, and by comparing the mapping relationship with the predetermined condition, the mapping relationship between the robot and the hand is determined, so that for different users (for example, different joint distances and different finger opening and closing limit degrees), the robot can well adapt to hand features of the user, and accurately reproduce the manipulation gesture of the hand of the user, thereby being beneficial to realizing accurate and reliable robot remote control, and being beneficial to improving man-machine interaction experience.
In some embodiments, the preset hand motion includes a hand open limit motion a and a hand close limit motion b.
Referring to fig. 7B, the hand opening limit motion is a gesture in which the fingers of the user's hand and of the robot hand are spread open to their limit so that the fingers lie approximately in the same plane as the palm. The hand closing limit motion is a gesture in which the fingertips of the user's hand and of the robot hand are brought into contact, with the fingers approximately perpendicular to the palm.
Based on the above, by setting that the preset hand motion includes a hand opening limit motion and a hand holding limit motion, the joint data characteristics of a hand and a robot hand under the limit of the opening and closing limit can be well detected, so that the mapping relationship between the robot hand and the hand can be accurately constructed, and the follow-up accurate remote control operation process of the robot can be realized.
In some embodiments, the haptic remote control device is configured to obtain hand pose information and generate a pose control signal for the robot based on the hand pose information. Based on the method, the touch remote control equipment can realize the real-time remote control process of the robot based on the human hand, and can generate a contact force feedback value according to the touch information of the fingertips of the robot and the joint posture information of each joint of the robot so as to realize good touch feedback of the human hand, thereby being beneficial to realizing the accurate and reliable closed-loop control process of the robot.
Next, a specific application scenario and an application procedure of the robot remote control method will be described in detail with reference to specific embodiments. Fig. 8A shows a schematic diagram of a robot remote control method controlling a robot according to an embodiment of the disclosure.
Referring to fig. 8A, a robot remote control system capable of performing the robot remote control method described above is shown. In this scenario, for example, the haptic remote control device located at the hand of the master-end user senses the posture of the human hand joints and maps it to the joint postures of the corresponding joints of the robot hand based on the mapping relationship between the human hand joint postures and the robot hand joint postures (this mapping relationship may be generated, for example, by the pre-calibration step described above), so as to control the hand of the humanoid robot iCub at the slave end to perform the corresponding operation. In this application scenario, the remotely controlled humanoid robot iCub performs the action of unscrewing a plastic bottle cap, the target object being a round plastic bottle cap, and during this manipulation the user makes the cap complete one full rotation (one full revolution).
In the application scenario, the haptic remote control device is an exoskeleton data glove provided for a human hand, the exoskeleton data glove collects the gesture of the human hand and generates a robot gesture control signal, and meanwhile, the exoskeleton data glove can also receive haptic feedback (robot fingertip contact force) from the robot hand and provide the haptic feedback to the corresponding fingertip of the human hand.
The generation process of the haptic feedback in this application scenario will be described in detail. First, initial pose information of each target object is provided by a visual detection device. Then, in the process of manipulating the robot, based on the tactile information of the robot finger tip detected by the robot finger tip tactile sensor and the posture information of each joint of the robot hand, a posture estimation of the target object is constructed via a pre-trained gaussian mixture model, and using the posture estimation of the target object and the posture information of each joint of the robot hand, a modeling simulation is performed in a virtual reality system (VR system), and a contact force feedback value of each finger tip of the robot hand is determined based on the simulation result. Thereafter, the contact force feedback value is transmitted to the haptic remote control device for haptic rendering, specifically, the signal range of the finally generated contact force feedback value of each fingertip is 0 to 100, and the intensity range of the moment generated by the motor in the haptic remote control device is 0 to 1. Therefore, the signal of the contact force feedback value is linearly mapped to the strength of the moment of the motor, the motor is controlled to generate the moment in the corresponding strength range, and the moment is acted on the finger tip of the user, so that the user can sense the current stress of the robot and continue to carry out gesture adjustment control, and the remote control of the robot is continued based on the adjusted gesture.
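The linear mapping described above, from the contact force feedback range (0 to 100) to the motor torque intensity range (0 to 1), can be illustrated directly; the function name below is introduced only for this sketch.

```python
def force_to_torque(feedback_value, force_max=100.0, torque_max=1.0):
    """Linearly map a fingertip contact force feedback value in [0, 100]
    to a motor torque intensity in [0, 1], as described above."""
    clipped = min(max(feedback_value, 0.0), force_max)
    return torque_max * clipped / force_max

print(force_to_torque(37.5))  # -> 0.375
```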
In some embodiments, the virtual view of the robot arm and the target object obtained by the simulation of the virtual reality system may be transmitted to a head-mounted display HMD located on the head of the person, and the virtual view may be displayed on a display screen of the display.
In addition, in this operation scenario, the task performance of implementing the operation task by applying the robot remote control method in the present disclosure will be further compared with the task performance of the existing conventional robot remote control method (for example, the robot is operated based on the visual signal and the directly detected tactile signal as shown in fig. 1B), to evaluate the performance of the robot remote control method in actual operation.
Fig. 8B illustrates a comparison of the tactile signals detected by the robot fingertip tactile sensors when using the robot remote control method according to an embodiment of the present disclosure and when using a conventional robot remote control method. Both methods perform the remote control operation of rotating the round bottle cap through a full revolution.
In the comparison, the tactile information of the tactile sensors of the thumb, the index finger and the middle finger of the robot hand (that is, the tactile reading values of the tactile sensors) was first collected while operating with each of the two remote control methods, and the tactile information of the corresponding fingertips was then compared, giving three groups of comparison graphs (the waveform graphs at the upper and lower positions form one group). In the comparison graphs, from left to right, the tactile information acquired under the two remote control methods by the tactile sensors of the thumb, the index finger and the middle finger of the robot hand is shown respectively. For each group of comparison graphs, the upper waveform graph is the fingertip tactile information obtained by applying the hand-feel-enhanced robot remote control method of the present disclosure, and the lower waveform graph is the fingertip tactile information obtained by applying the conventional remote control method. For each waveform graph, the horizontal axis represents time units and the vertical axis represents the tactile reading of the tactile sensor, and the mean value of the tactile readings (Average) and the variance of the tactile readings (Variance) are also noted.
As shown in fig. 8B, the conventional remote control method generally produces large tactile reading values, with a number of abrupt points (for example, peak points of instantaneous readings). This means that in the conventional remote control method the contact force exerted by the robot fingers on the target object is generally large and unstable. On the one hand, the excessive readings at the abrupt points may damage the robot hand, cause damage to related parts and reduce the service life of the robot hand; on the other hand, the tactile feedback generated from such readings is extremely unstable, so that the user cannot correctly judge the current state of the operation based on rapidly changing tactile feedback, which may lead to erroneous judgments and erroneous manipulation decisions. Compared with the conventional remote control method, with the method of the present disclosure the tactile readings of the tactile sensors are more continuous and balanced, with almost no abrupt changes; and the tactile readings are gentle and generally smaller, which enables the robot hand to realize the preset operation in a manner that is gentle with the target object. This operation mode is more consistent with the operation process of the user's hand, protects the robot hand parts well, effectively prolongs the service life of the robot hand, and allows high-precision remote control operation.
Based on the above, by using the robot hand grip posture model composed of the gaussian mixture model in this application scene, the noise and uncertainty of the tactile sensor of the tip of the robot finger can be modeled, and the posture of the target object can be estimated stably. On the one hand, the user can make better judgment on the contact state of the current target object and the robot hand based on the tactile feedback of the method in the disclosure in terms of user experience. In particular, the haptic feedback (contact force feedback) provided by the robot remote control method of the present disclosure is more accurate and continuous, and thus has a better manipulation experience.
To further analyze the performance of the bottle cap unscrewing task under the two different methods, we also compared the time required to complete a full rotation (360 degrees) of the bottle cap. With the conventional robot remote control method, the average time to complete a full rotation of the cap is 180 seconds, whereas with the hand-feel-enhanced robot remote control method the action is completed in only about 108 seconds, so the completion efficiency of the remote control operation task is greatly improved while the operation task is still well accomplished.
According to an embodiment of the present disclosure, there is also provided a robot remote control system 500 with enhanced hand feel, which includes a slave end 510 and a master end 520 capable of communicating with each other, and which is capable of performing, for example, the robot remote control method as described above and has the functions as described above. Fig. 9 illustrates an exemplary block diagram of a robotic remote control system 500 with hand feel enhancement in accordance with an embodiment of the present disclosure.
Referring to fig. 9, the slave 510 includes: a robot 511, a haptic data acquisition device 512, a joint posture information acquisition device 513, a target object posture estimation device 514, a slave data transmission device 515.
The robot hand 511 refers to a robot part for performing a specific operation, for example, having a plurality of finger sub-parts each having a plurality of finger joints. The embodiment of the disclosure is not limited by the specific hand joint arrangement mode and the arrangement number of the hand joints of the robot.
The tactile data obtaining device 512 is disposed at each fingertip of the robot hand, and is configured to obtain tactile information of each fingertip of the robot hand to a target object, the tactile information being a detected value of a contact force of the corresponding fingertip with the target object.
The target object refers to a target object which is interacted with the robot when the robot performs a specific task or action. Embodiments of the present disclosure are not limited by the type of the target object and the specific content of the particular task.
The step of acquiring the touch information of each fingertip of the robot hand on the target object is to acquire the touch information when one or more fingers of the robot hand for executing tasks interact with the target object.
The tactile information is a detection value of acting force between a target object and a fingertip of the robot hand, namely a detection value of contact force of the fingertip to the target object, under the current hand gesture of the robot hand. Embodiments of the present disclosure are not limited by the manner in which the tactile information is collected for each fingertip of the robotic arm and its specific numerical values.
The joint posture information acquisition means 513 is configured to acquire posture information of each joint of the robot hand, the posture information of each joint including opening angle information of the joint. The opening angle information refers to an angle value of an included angle formed between two adjacent joints, and is used for representing the opening and closing degree of the joints.
It should be appreciated that the opening angle information of the joint may also include other data content according to actual needs, and embodiments of the present disclosure are not limited by the specific composition of the pose information of the joint.
The target object pose estimation device 514 is configured to generate a pose estimation of the target object based on the haptic information of each fingertip of the robot hand and the pose information of each joint.
The posture estimation of the target object refers to the predicted data of the posture state of the target object. It should be appreciated that embodiments of the present disclosure are not limited by the specific composition of the pose estimation of the target object and its manner of expression.
The slave-end data transmission device 515 is configured to transmit the posture estimation of the target object and the posture information of each joint of the robot arm to the master end.
The main end 520 includes: a haptic force calculation means 521, a haptic feedback generation means 522.
The tactile-force calculating means 521 is configured to determine a contact-force feedback value of each fingertip of the robot hand based on the posture estimation of the target object and the posture information of each joint of the robot hand.
The haptic force calculation device 521 may be, for example, a simulation model in a virtual reality system, and embodiments of the present disclosure are not limited by the specific composition of the haptic force calculation device.
The haptic feedback generation means 522 is matched to the robot hand 511 and configured to provide haptic feedback corresponding to the contact force feedback values of the respective fingertips of the robot hand to the human hand.
The tactile feedback generating apparatus 522 refers to a device that can provide the user with the tactile feedback or other data information of the robot hand to implement bilateral teleoperation. Embodiments of the present disclosure are not limited by the particular type of haptic feedback generation device 522 and its composition.
In some embodiments, the master may further comprise a master remote control 523, for example, configured to obtain hand pose information and generate pose control signals for the robot hand based on the hand pose information. The master remote control 523 is a device for realizing remote control of the robot based on a user instruction (e.g., a user hand gesture). Embodiments of the present disclosure are not limited by the particular type of master remote control 523 and its composition. For example, the haptic feedback generation device 522 and the master remote control device 523 may be the same device, such as an exoskeleton data glove provided on a human hand.
Based on the above, in the present disclosure, in the process of performing robot remote control, after detecting the tactile information at the tip of the robot, the posture estimation of the target object is constructed based on the detected tactile information of the tip and the posture information of each joint of the robot, and the posture estimation of the target object and the posture information of each joint of the robot are integrated to determine the contact force feedback value of each tip of the robot together. Compared with the method for directly transmitting the contact force information detected by the touch sensor, the robot remote control method for enhancing the hand feeling can effectively filter noise in the detected touch force information by determining the contact force feedback value based on the target object gesture and the robot gesture, improve the accuracy of the generated contact force feedback value, flexibly and dynamically feed back the touch force born by the robot, effectively avoid the injury to the hand of a user when the sensor is influenced by surrounding physical environment to generate signal abnormality (such as abnormal peak value of the detection value of the contact force generated by signal mutation), effectively avoid the situation of error control caused by misjudgment on the operation state of the robot when the sensor signal is delayed or interrupted, enhance the accuracy and reliability of the touch feedback of the robot to the user in the robot remote control process, optimize the man-machine interaction experience of the touch feedback, and realize the robot remote control process for enhancing the hand feeling.
In some embodiments, the master end 520 further includes a visual feedback device 524 disposed at the head of the user, the visual feedback device 524 being configured to generate a virtual view of the target object and the robot based on the pose estimate of the target object and the pose information of each joint of the robot, and display the virtual view.
The virtual view refers to a visual image for representing the relative positions of the robot and the target object and the attitudes of the robot and the target object, and may be, for example, a three-dimensional view of the target object and the robot. Embodiments of the present disclosure are not limited by the particular manner in which the virtual view is sent to the user and its final presentation.
Based on the above, in the present disclosure, in the process of generating the contact force feedback value, by generating the virtual view of the target object and the robot based on the pose estimation of the target object and the pose information of each joint of the robot, the user can intuitively and conveniently understand the position and the pose state of the current target object and the robot, and visual feedback is added beyond tactile feedback, so that on one hand, the man-machine interaction experience of the user is improved, and on the other hand, the user can mutually verify the tactile feedback through the visual feedback, and further, the reliability of the remote control process is improved.
In accordance with another aspect of the present disclosure, a robotic remote control device 950 having enhanced hand feel is presented. Fig. 10 illustrates an exemplary flow chart of a hand-feel enhanced robotic remote control device 950 according to an embodiment of the disclosure.
The robotic remote control device 950 shown in fig. 10 may be implemented as one or more special purpose or general purpose computer system modules or components, such as a personal computer, notebook computer, tablet computer, cell phone, personal digital assistant (PDA), or any smart portable device. The robot remote control device 950 may include at least one processor 960 and a memory 970.
The at least one processor is configured to execute program instructions. The memory 970 may be present in the robot remote control device 950 as various forms of program storage units and data storage units, such as a hard disk, read-only memory (ROM), or random access memory (RAM), which can be used to store the various data files used by the processor in processing and/or executing the robot remote control process, as well as the program instructions executed by the processor. Although not shown in the figures, the robot remote control device 950 may also include an input/output component that supports input/output data flow between the robot remote control device 950 and other components. The robot remote control device 950 may also send information and data to, and receive information and data from, a network via a communication port.
In some embodiments, the set of instructions stored by the memory 970, when executed by the processor 960, causes the robotic remote control device 950 to perform operations comprising: obtaining touch information of each fingertip of a robot to a target object, wherein the touch information is a detection value of contact force between the corresponding fingertip and the target object; acquiring the posture information of each joint of the robot, wherein the posture information of each joint comprises the opening angle information of the joint; generating an attitude estimate of the target object based on tactile information of each fingertip of the robot and attitude information of each joint; determining a contact force feedback value of each fingertip of the robot based on the posture estimation of the target object and the posture information of each joint of the robot; and providing haptic feedback corresponding to the contact force feedback values of the fingertips of the robot hand to the human hand through a haptic remote control device matched with the robot hand.
In some embodiments, the robot remote control device 950 with enhanced hand feel may receive instructions transmitted from an external device or user and perform the robot remote control method described above on the robot, thereby implementing the functions of the robot remote control system described above.
Although in fig. 10, the processor 960 and the memory 970 are presented as separate modules, it will be appreciated by those skilled in the art that the above-described device modules may be implemented as separate hardware devices or may be integrated as one or more hardware devices. The specific implementation of the different hardware devices should not be taken as a factor limiting the scope of protection of the present disclosure, as long as the principles described in this disclosure can be implemented.
According to another aspect of the present disclosure, there is also provided a non-volatile computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a computer, can perform the method as described above.
Program portions of the technology may be considered to be "products" or "articles of manufacture" in the form of executable code and/or associated data, embodied in or carried by a computer-readable medium. A tangible, persistent storage medium may include any memory or storage used by a computer, processor, or similar device or related module, such as the various semiconductor memories, tape drives, and disk drives capable of providing storage functionality for software.
All or part of the software may at times be communicated over a network, such as the Internet or another communication network. Such communication can load the software from one computer device or processor into another, for example from a server or host computer of the robot remote control device into the hardware platform of a computer environment, or into another computer environment implementing the system, or into a system with similar functions related to providing the information required for robot remote control. Accordingly, another type of medium capable of carrying the software elements is the physical connections between local devices, such as optical, electrical, and electromagnetic waves propagating through cables, optical fiber, or the air. Physical media used for such carrier waves, whether electrical, wireless, or optical, may also be regarded as media bearing the software. As used herein, unless limited to a tangible "storage" medium, other terms referring to a computer or machine "readable medium" mean any medium that participates in the execution of any instructions by a processor.
The present disclosure uses specific words to describe embodiments of the disclosure. Terms such as "first/second embodiment," "an embodiment," and/or "some embodiments" refer to a particular feature, structure, or characteristic associated with at least one embodiment of the present disclosure. Thus, it should be emphasized and appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places in this specification do not necessarily refer to the same embodiment. Furthermore, particular features, structures, or characteristics of one or more embodiments of the present disclosure may be combined as appropriate.
Furthermore, those skilled in the art will appreciate that the various aspects of the disclosure can be illustrated and described in terms of several patentable categories or circumstances, including any novel and useful process, machine, product, or material, or any novel and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present disclosure may take the form of a computer program product comprising computer-readable program code embodied in one or more computer-readable media.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present invention and is not to be construed as limiting thereof. Although a few exemplary embodiments of this invention have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this invention; accordingly, all such modifications are intended to be included within the scope of this invention as defined in the claims. It is to be understood that the invention is not limited to the specific embodiments disclosed, and that modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The invention is defined by the claims and their equivalents.

Claims (17)

1. A robot remote control method with enhanced hand feel, comprising:
Obtaining touch information of each fingertip of a robot to a target object, wherein the touch information is a detection value of contact force between the corresponding fingertip and the target object;
acquiring the posture information of each joint of the robot, wherein the posture information of each joint comprises the opening angle information of the joint;
Generating an attitude estimate of the target object based on tactile information of each fingertip of the robot and attitude information of each joint;
Determining a contact force feedback value of each fingertip of the robot based on the posture estimation of the target object and the posture information of each joint of the robot; and
and providing haptic feedback corresponding to the contact force feedback value of each fingertip of the robot hand to the human hand through a haptic remote control device matched with the robot hand.
2. The robot remote control method according to claim 1, wherein generating the pose estimation of the target object based on the haptic information of each fingertip of the robot and the pose information of each joint comprises:
generating, based on a robot hand holding gesture model, gesture prediction data of the target object according to the touch information of each fingertip and the gesture information of each joint;
determining the distribution probability that the gesture prediction data of the target object belongs to the robot hand holding gesture model based on the touch information of each fingertip and the gesture information of each joint, wherein the robot hand holding gesture model is a nonlinear model;
comparing the determined distribution probability with a preset probability threshold, and determining the gesture estimation of the target object based on the comparison result.
3. The robot remote control method of claim 2, wherein the robot hand gripping pose model is a gaussian mixture model comprising a plurality of gaussian distribution models, wherein determining a distribution probability that the pose prediction data of the target object belongs to the robot hand gripping pose model comprises:
Determining, for each gaussian distribution model in the gaussian mixture model, a gaussian distribution probability that the pose prediction data of the target object belongs to the gaussian distribution model;
and performing a weighted summation of the Gaussian distribution probabilities that the gesture prediction data of the target object belongs to each Gaussian distribution model, to generate the distribution probability that the gesture prediction data of the target object belongs to the robot hand holding gesture model.
4. The robot remote control method of claim 2, wherein said comparing the determined distribution probability with a preset probability threshold value, determining an attitude estimate of the target object based on the comparison result comprises:
Under the condition that the determined distribution probability is larger than the preset probability threshold, determining the gesture prediction data of the target object at the current moment as the gesture estimation of the target object at the current moment; and determining the posture estimation of the target object at the moment before the current moment as the posture estimation of the target object at the current moment under the condition that the determined distribution probability is smaller than or equal to the preset probability threshold.
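For readers who find it easier to follow in code, the acceptance test described in claims 2-4 can be read as the sketch below: the distribution probability is the weighted sum of the component densities of a Gaussian mixture, and the estimate from the previous moment is kept whenever the new prediction falls at or below the threshold. The mixture parameters, the 6-D pose layout, and the threshold value are illustrative assumptions, not values from the disclosure.

import numpy as np
from scipy.stats import multivariate_normal

def mixture_probability(pose, weights, means, covs):
    # p(pose) = sum_k w_k * N(pose; mu_k, Sigma_k): weighted sum over the Gaussian components.
    return sum(w * multivariate_normal.pdf(pose, mean=m, cov=c)
               for w, m, c in zip(weights, means, covs))

def accept_pose(pose_pred, prev_estimate, weights, means, covs, threshold=1e-2):
    # Keep the new prediction only if it is plausible under the grasp-pose mixture;
    # otherwise fall back to the estimate from the previous moment, as in claim 4.
    if mixture_probability(pose_pred, weights, means, covs) > threshold:
        return pose_pred
    return prev_estimate

# Toy two-component mixture over a 6-D pose (x, y, z and three Euler angles).
weights = [0.6, 0.4]
means = [np.zeros(6), 0.1 * np.ones(6)]
covs = [0.01 * np.eye(6), 0.02 * np.eye(6)]
print(accept_pose(0.05 * np.ones(6), np.zeros(6), weights, means, covs))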
5. The robot remote control method according to claim 1, wherein the posture information of each joint includes opening angle information of a plurality of joints among a thumb, an index finger, and a middle finger of the robot, and the posture estimation of the target object includes position data and euler angle data of the target object.
6. The robot remote control method of claim 1, wherein the robot hand grip pose model is trained via the steps of:
setting a visual tracking mark on a target object, and setting a touch sensor on the fingertip of the robot hand;
manipulating the robot hand to adjust the gesture of the target object, wherein, during the gesture adjustment of the target object, the gesture information of each joint of the robot hand and the touch information of each fingertip of the robot hand with respect to the target object acquired by the touch sensors are recorded so as to generate a plurality of training data, each training data comprising the gesture information of each joint and the touch information of the corresponding fingertips with respect to the target object;
performing visual tracking processing on the target object to generate standard target posture data corresponding to each of the plurality of training data of the target object;
for each of the plurality of training data,
using the robot hand holding gesture model, generating gesture prediction data of the target object based on the training data;
generating a loss function based on the attitude prediction data of the target object and the standard target attitude data;
based on the loss function, the robot hand gripping gesture model is trained.
7. The robot remote control method of claim 6, wherein the robot hand holding gesture model is trained such that the error between the gesture prediction data of the target object and the standard target gesture data is less than a preset error threshold.
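Claims 6 and 7 describe collecting samples of joint gestures and fingertip touch information whose ground-truth object poses come from visual tracking of a marker, and training the holding gesture model until its prediction error drops below a preset threshold. The loop below follows that recipe on synthetic data; for brevity a plain linear regressor stands in for the holding gesture model (which the disclosure describes as a Gaussian mixture), and all sizes, names, and numbers are fabricated for illustration.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 9))                        # per-sample features: 3 fingertip forces + 6 joint angles
W_true = rng.normal(size=(9, 6))
Y = X @ W_true + 0.01 * rng.normal(size=(500, 6))    # "standard" target poses from visual tracking

W = np.zeros((9, 6))                                 # stand-in model parameters
lr, err_threshold = 0.01, 1e-3
for step in range(10_000):
    pred = X @ W                                     # gesture prediction data of the target object
    err = pred - Y
    loss = np.mean(err ** 2)                         # loss against the standard target gesture data
    if loss < err_threshold:                         # claim 7: stop once the error is below the preset threshold
        break
    W -= lr * (X.T @ err) / len(X)                   # gradient step on the loss
print(f"stopped at step {step} with loss {loss:.2e}")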
8. The robot remote control method according to claim 1, wherein determining the contact force feedback value of each fingertip of the robot based on the posture estimation of the target object and the posture information of each joint of the robot comprises:
determining a target contact state based on the posture estimation of the target object and the posture information of each joint of the robot;
and determining a contact force feedback value of each fingertip of the robot hand based on the target contact state and the physical attribute data of the target object.
9. The robot remote control method of claim 8, wherein the target contact state comprises: non-contact, start contact, continuous contact, end contact.
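One way to picture claims 8 and 9 is the small sketch below: the contact state is derived from whether a fingertip currently intersects the estimated object and whether it did at the previous moment, and the feedback value is scaled by a physical attribute of the target object (here, a stiffness). The state names follow claim 9; everything else is an illustrative assumption.

def contact_state(was_in_contact: bool, is_in_contact: bool) -> str:
    # Transition logic over the four states listed in claim 9.
    if is_in_contact and not was_in_contact:
        return "start contact"
    if is_in_contact and was_in_contact:
        return "continuous contact"
    if not is_in_contact and was_in_contact:
        return "end contact"
    return "non-contact"

def feedback_value(state: str, penetration_m: float, stiffness_n_per_m: float) -> float:
    # Only contacting states produce a force; its magnitude uses the object's stiffness.
    if state in ("start contact", "continuous contact"):
        return stiffness_n_per_m * penetration_m
    return 0.0

print(contact_state(False, True), feedback_value("start contact", 0.002, 500.0))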
10. The robot remote control method of claim 1, wherein determining the contact force feedback value for each fingertip of the robot further comprises:
Based on the posture estimation of the target object and the posture information of each joint of the robot, generating a virtual view of the target object and the robot, and transmitting the virtual view to a user.
11. The robot remote control method of claim 1, wherein the method further comprises a pre-calibration step comprising:
under the condition that a human hand performing the remote control action performs a preset hand motion, acquiring joint data of the robot hand and joint data of the human hand performing the remote control action;
determining a mapping relation between the joints of the robot hand and the joints of the human hand performing the remote control action based on the joint data of the robot hand and the joint data of the human hand; and
taking the mapping relation as a target mapping relation between the robot hand and the human hand under the condition that the mapping relation meets a preset condition; and under the condition that the mapping relation does not meet the preset condition, adjusting the mapping proportion parameter between the robot hand and the human hand in the mapping relation, and taking the adjusted mapping relation as the target mapping relation.
12. The robot remote control method of claim 11, wherein the preset hand motion comprises a hand open limit motion and a hand closed limit motion.
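The pre-calibration of claims 11 and 12 can be sketched as fitting, per joint, a linear map from the human hand's open-limit and closed-limit poses to the robot hand's corresponding limits, with the scale guarded where the recorded joint span is too small to be usable (one simple reading of "adjusting the mapping proportion parameter"). All numbers and names below are assumptions made for the example.

import numpy as np

def calibrate(human_open, human_closed, robot_open, robot_closed, min_span=1e-3):
    # Per-joint linear map robot_angle = scale * human_angle + offset,
    # fitted from the open-limit and closed-limit hand motions.
    span = human_closed - human_open
    safe_span = np.where(np.abs(span) < min_span, 1.0, span)   # guard joints with no usable range
    scale = (robot_closed - robot_open) / safe_span
    offset = robot_open - scale * human_open
    return scale, offset

def map_to_robot(human_angles, scale, offset):
    return scale * np.asarray(human_angles) + offset

# Example limit poses (radians) for three human-hand joints and their robot-hand counterparts.
scale, offset = calibrate(np.zeros(3), np.array([1.4, 1.5, 1.3]),
                          np.zeros(3), np.array([1.0, 1.0, 1.0]))
print(map_to_robot([0.7, 0.75, 0.65], scale, offset))           # roughly mid-range on the robot hand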
13. The robot remote control method of claim 1, wherein the haptic remote control device is configured to acquire hand gesture information and generate a gesture control signal for the robot hand based on the hand gesture information.
14. A robot remote control system with enhanced hand feel, comprising a slave end and a master end capable of communicating with each other, and wherein,
The slave comprises:
a robot hand;
a tactile data acquisition device provided at each fingertip of the robot hand and configured to acquire tactile information of each fingertip of the robot hand to a target object, the tactile information being a detection value of a contact force of the corresponding fingertip with the target object;
A joint posture information acquisition device configured to acquire posture information of each joint of the robot hand, the posture information of each joint including opening angle information of the joint;
a target object posture estimation device configured to generate a posture estimation of the target object based on tactile information of each fingertip of the robot hand and posture information of each joint;
a slave-end data transmission device configured to transmit the posture estimation of the target object and the posture information of each joint of the robot hand to the master end;
The main end comprises:
A haptic force calculation device configured to determine a contact force feedback value of each fingertip of the robot based on the posture estimation of the target object and the posture information of each joint of the robot;
and a haptic feedback generation device matched with the robot hand and configured to provide haptic feedback corresponding to the contact force feedback values of the fingertips of the robot hand to the human hand.
15. The robot remote control system of claim 14, wherein the master end further comprises a visual feedback device disposed at the head of the user, the visual feedback device configured to generate a virtual view of the target object and the robot based on the pose estimate of the target object and pose information of each joint of the robot, and to display the virtual view.
16. A robotic remote control device having enhanced hand feel, wherein the robotic remote control device comprises a processor and a memory containing a set of instructions that, when executed by the processor, cause the robotic remote control device to perform operations comprising:
Obtaining touch information of each fingertip of a robot to a target object, wherein the touch information is a detection value of contact force between the corresponding fingertip and the target object;
acquiring the posture information of each joint of the robot, wherein the posture information of each joint comprises the opening angle information of the joint;
Generating an attitude estimate of the target object based on tactile information of each fingertip of the robot and attitude information of each joint;
Determining a contact force feedback value of each fingertip of the robot based on the posture estimation of the target object and the posture information of each joint of the robot; and
and providing haptic feedback corresponding to the contact force feedback values of the fingertips of the robot hand to the human hand through a haptic remote control device matched with the robot hand.
17. A computer readable storage medium having stored thereon computer readable instructions which when executed by a computer perform the method of any of the preceding claims 1-13.
CN202110346871.9A 2021-03-31 2021-03-31 Robot remote control method, system, equipment and medium with hand feeling enhancement Active CN115139292B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110346871.9A CN115139292B (en) 2021-03-31 2021-03-31 Robot remote control method, system, equipment and medium with hand feeling enhancement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110346871.9A CN115139292B (en) 2021-03-31 2021-03-31 Robot remote control method, system, equipment and medium with hand feeling enhancement

Publications (2)

Publication Number Publication Date
CN115139292A CN115139292A (en) 2022-10-04
CN115139292B true CN115139292B (en) 2024-08-20

Family

ID=83404645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110346871.9A Active CN115139292B (en) 2021-03-31 2021-03-31 Robot remote control method, system, equipment and medium with hand feeling enhancement

Country Status (1)

Country Link
CN (1) CN115139292B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106527738A (en) * 2016-12-08 2017-03-22 东北大学 Multi-information somatosensory interaction glove system and method for virtual reality system
CN109202942A (en) * 2017-06-30 2019-01-15 发那科株式会社 The simulator of hand control device, hand control method and hand

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019059364A1 (en) * 2017-09-22 2019-03-28 三菱電機株式会社 Remote control manipulator system and control device
CN110174949A (en) * 2019-05-28 2019-08-27 欣旺达电子股份有限公司 Virtual reality device and posture perception and tactile sense reproduction control method
CN111444459A (en) * 2020-02-21 2020-07-24 哈尔滨工业大学 Method and system for determining contact force of teleoperation system
CN112506339A (en) * 2020-11-30 2021-03-16 北京航空航天大学 Virtual hand force sense synthesis method and system for wearable touch sense interaction


Also Published As

Publication number Publication date
CN115139292A (en) 2022-10-04

Similar Documents

Publication Publication Date Title
CN109960880B (en) Industrial robot obstacle avoidance path planning method based on machine learning
Qian et al. Developing a gesture based remote human-robot interaction system using kinect
US11413748B2 (en) System and method of direct teaching a robot
Qu et al. Human-like coordination motion learning for a redundant dual-arm robot
Sutanto et al. Learning latent space dynamics for tactile servoing
Lepora et al. Pose-based tactile servoing: Controlled soft touch using deep learning
JP2023542055A (en) Interactive tactile perception method for object instance classification and recognition
CN106406518A (en) Gesture control device and gesture recognition method
CN102830798A (en) Mark-free hand tracking method of single-arm robot based on Kinect
Huang et al. Grasping novel objects with a dexterous robotic hand through neuroevolution
CN113829357A (en) Teleoperation method, device, system and medium for robot arm
Chen et al. A human–robot interface for mobile manipulator
CN112936282B (en) Method and system for improving motion sensing control accuracy of industrial robot
Amatya et al. Real time kinect based robotic arm manipulation with five degree of freedom
Lu et al. Surface following using deep reinforcement learning and a GelSight tactile sensor
CN115139292B (en) Robot remote control method, system, equipment and medium with hand feeling enhancement
CN113561172B (en) Dexterous hand control method and device based on binocular vision acquisition
CN115741671A (en) Manipulator teleoperation method and related equipment
Sun et al. Digital-Twin-Assisted Skill Learning for 3C Assembly Tasks
Cretu et al. Estimation of deformable object properties from shape and force measurements for virtualized reality applications
Chen et al. Differentiable Discrete Elastic Rods for Real-Time Modeling of Deformable Linear Objects
Falco et al. Improvement of human hand motion observation by exploiting contact force measurements
CN118163118B (en) Visual and tactile fusion display method, device and system and robot control method, device and system
CN117961916B (en) Object grabbing performance judgment method, object grabbing device and object grabbing system
Tao et al. Human-Computer Interaction Using Fingertip Based on Kinect

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant