CN118123847B - Man-machine cooperation system, operation control method and device - Google Patents
Man-machine cooperation system, operation control method and device Download PDFInfo
- Publication number
- CN118123847B CN118123847B CN202410558115.6A CN202410558115A CN118123847B CN 118123847 B CN118123847 B CN 118123847B CN 202410558115 A CN202410558115 A CN 202410558115A CN 118123847 B CN118123847 B CN 118123847B
- Authority
- CN
- China
- Prior art keywords
- representing
- speed control
- robot
- fusion
- control amount
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 49
- 230000033001 locomotion Effects 0.000 claims abstract description 114
- 239000012636 effector Substances 0.000 claims abstract description 23
- 230000004927 fusion Effects 0.000 claims description 90
- 238000004364 calculation method Methods 0.000 claims description 31
- 230000003044 adaptive effect Effects 0.000 claims description 29
- 239000011159 matrix material Substances 0.000 claims description 20
- 230000001133 acceleration Effects 0.000 claims description 13
- 238000013016 damping Methods 0.000 claims description 10
- 238000004590 computer program Methods 0.000 claims description 7
- 238000013473 artificial intelligence Methods 0.000 abstract 1
- 230000003287 optical effect Effects 0.000 description 9
- 230000008569 process Effects 0.000 description 8
- 210000004247 hand Anatomy 0.000 description 6
- 238000005259 measurement Methods 0.000 description 5
- 230000006870 function Effects 0.000 description 4
- 238000012545 processing Methods 0.000 description 4
- 230000009471 action Effects 0.000 description 3
- 238000010586 diagram Methods 0.000 description 3
- 230000003993 interaction Effects 0.000 description 3
- 239000003550 marker Substances 0.000 description 3
- 210000000707 wrist Anatomy 0.000 description 3
- 238000013528 artificial neural network Methods 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 2
- 230000010354 integration Effects 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 238000004088 simulation Methods 0.000 description 2
- 238000012549 training Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 239000002131 composite material Substances 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 238000009499 grossing Methods 0.000 description 1
- 230000010365 information processing Effects 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003183 myoelectrical effect Effects 0.000 description 1
- 238000000554 physical therapy Methods 0.000 description 1
- 238000012805 post-processing Methods 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- B25J13/081—Touching devices, e.g. pressure-sensitive
- B25J13/084—Tactile sensors
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Human Computer Interaction (AREA)
- Automation & Control Theory (AREA)
- Manipulator (AREA)
Abstract
The embodiment of the application belongs to the technical field of artificial intelligence, and relates to a human-computer cooperation system, an operation control method and a device, wherein the system comprises the following components: the robot comprises a positioner, a robot, a self-adaptive cooperation module and a whole body control module; the end effector of the robot is provided with a force/touch sensor; a force/tactile sensor for measuring force information applied by the end effector; a positioner for measuring movement data of a movement part of an operator; the self-adaptive cooperation module is used for solving the speed control quantity and the pose control quantity of the tail end of the robot based on the motion data and the force information; and the whole body control module is used for calculating the joint speed control quantity of the robot based on the speed control quantity and the pose control quantity. The technical scheme adopted by the application can improve the operability of man-machine cooperation aiming at the deformation target object.
Description
Technical Field
The present application relates to the field of robots, and in particular, to a system for human-computer collaboration, and an operation control method and apparatus.
Background
With the popularization of robot applications, in order to enable robots to better simulate human operations, a robot control mode of human-machine cooperation is gaining more and more attention.
However, in the existing man-machine cooperation control mode, taking man-machine cooperation for carrying the object as an example, when the deformable object is operated, the object is easy to deform in the operation process, so that the difficulty of man-machine cooperation operation is improved.
Disclosure of Invention
The embodiment of the application aims to provide a man-machine cooperation system, an operation control method and a device, so as to improve the operability of man-machine cooperation for a deformation target object.
In a first aspect, an embodiment of the present application provides a system for human-computer collaboration, including the following technical solutions:
A system of human-machine collaboration, the system comprising: the system comprises a positioner, a robot, a force/touch sensor, a self-adaptive cooperation module and a whole body control module; the end effector of the robot sets the force/tactile sensor;
The force/touch sensor is used for measuring force information of acting force applied by the end effector to a target object;
the locator is used for measuring the movement data of the movement part of the operator;
The self-adaptive cooperation module is used for solving the speed control quantity and the pose control quantity of the tail end of the robot based on the motion data and the force information;
The whole body control module is used for solving joint speed control quantity of a robot joint based on the speed control quantity and the pose control quantity so as to indicate the robot to move based on the joint speed control quantity;
Wherein, the self-adaptive cooperation module includes: an admittance control sub-module and a fusion control sub-module; the fusion control submodule comprises: a fusion calculation unit and a variable calculation unit;
The admittance control sub-module is used for solving the initial speed control quantity of the tail end of the robot based on the force information;
the fusion calculation unit is configured to calculate, based on formula (3), a fusion speed control amount after the initial speed control amount and the motion data are fused:
(3)
The variable calculation unit is used for converting the fusion speed control quantity into the speed control quantity and the pose control quantity based on formulas (4) and (5);
(4)
(5)
Wherein, Representing an initial speed control amount; Representing the adaptive fusion coefficient; Representing motion data; Representing a fusion speed control amount; Is a speed control amount; The control quantity of the pose is the pose control quantity.
Further, in one embodiment, the fusion calculation unit obtains the adaptive fusion coefficient by the following formula;
(6)
Wherein, Representing the adaptive fusion coefficient; representing the current time; Representing a moving window length; representing a custom minute value for avoiding zero computation; representing an initial speed control amount; Representing the motion data; representing the time obtained by subtracting the moving window from the current time; wherein,
The saidCharacterizing variability properties of the manipulated object;
When (when) When equal to 0, the object to be operated is not deformable;
When (when) When equal to 1, the object to be operated is completely deformable;
When (when) When the object portion representing the operation is deformed.
Further, in one embodiment, the admittance control submodule includes:
An admittance control unit for calculating an initial speed control amount of the tip of the robot based on the force information by combining the formula (1) and the formula (2);
(1)
(2)
Wherein, Representing an inertia matrix; Representing a damping matrix; Representing force information or other force information derived based on the force information; an acceleration indicating the tip of the robot; An initial speed control amount indicating the tip of the robot; representing a time step in the control system; Representing the robot control period.
In a second aspect, an embodiment of the present application provides a method for controlling operation of human-computer collaboration, the method including the steps of:
Based on the motion data and the force information, solving the speed control quantity and the pose control quantity of the tail end of the robot; wherein the motion data is of a motion part of an operator; the force information is force information of acting force applied by the end effector to the target object;
Based on the speed control amount and the pose control amount, calculating a joint speed control amount of a robot joint to indicate the robot motion based on the joint speed control amount;
wherein, based on the motion data and the force information, the method for calculating the speed control amount and the pose control amount of the tail end of the robot comprises the following steps:
Based on the force information, an initial speed control amount of the tail end of the robot is obtained;
Based on the formula (3), the fusion speed control quantity after the initial speed control quantity and the motion data are fused is obtained:
(3)
converting the fusion speed control amount into the speed control amount and the pose control amount based on formulas (4) and (5);
(4)
(5)
Wherein, Representing an initial speed control amount; Representing the adaptive fusion coefficient; Representing motion data; Representing a fusion speed control amount; Is a speed control amount; The control quantity of the pose is the pose control quantity.
In a third aspect, an embodiment of the present application provides a human-computer collaborative operation control apparatus, the apparatus including:
the self-adaptive cooperation module is used for solving the speed control quantity and the pose control quantity of the tail end of the robot based on the motion data and the force information; wherein the motion data is of a motion part of an operator; the force information is force information of acting force applied by the end effector to the target object;
The whole body control module is used for solving joint speed control quantity of a robot joint based on the speed control quantity and the pose control quantity so as to indicate the robot to move based on the joint speed control quantity;
Wherein, the self-adaptive cooperation module includes: an admittance control sub-module and a fusion control sub-module; the fusion control submodule comprises: a fusion calculation unit and a variable calculation unit;
The admittance control sub-module is used for solving the initial speed control quantity of the tail end of the robot based on the force information;
the fusion calculation unit is configured to calculate, based on formula (3), a fusion speed control amount after the initial speed control amount and the motion data are fused:
(3)
The variable calculation unit is used for converting the fusion speed control quantity into the speed control quantity and the pose control quantity based on formulas (4) and (5);
(4)
(5)
Wherein, Representing an initial speed control amount; Representing the adaptive fusion coefficient; Representing motion data; Representing a fusion speed control amount; Is a speed control amount; The control quantity of the pose is the pose control quantity.
In a fourth aspect, an embodiment of the present application provides a controller, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the above-described operation control method for man-machine cooperation when executing the computer program.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the human-machine collaborative operation control method described above.
Compared with the prior art, the embodiment of the application has the following main beneficial effects:
According to the embodiment of the application, the speed control quantity and the pose control quantity of the tail end of the robot are obtained after the force information of acting force applied to the target object by the tail end executor of the robot is fused with the multidimensional information of the motion data of the motion part of the operator; and based on the speed control quantity and the pose control quantity, the joint speed control quantity of the robot is obtained, and the deformation of the target object under different stress conditions is fully considered, so that the operability of man-machine cooperation aiming at the deformed target object can be improved, and the working efficiency and the safety are improved.
Drawings
In order to more clearly illustrate the solution of the present application, a brief description will be given below of the drawings required for the description of the embodiments of the present application, it being apparent that the drawings in the following description are some embodiments of the present application, and that other drawings may be obtained from these drawings without the exercise of inventive effort for a person of ordinary skill in the art.
FIG. 1 is a block diagram of one embodiment of a system to which the present application is applied;
FIG. 2 is a schematic frame construction of one embodiment of a human-machine collaboration system of the present application;
FIG. 3 is a flow chart of one embodiment of a human-machine cooperative operation control method of the present application;
FIG. 4 is a schematic structural view of an embodiment of the human-machine cooperative operation control device of the present application;
FIG. 5 is a schematic diagram of an embodiment of a computer device of the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the applications herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "comprising" and "having" and any variations thereof in the description of the application and the claims and the description of the drawings above are intended to cover a non-exclusive inclusion. The terms first, second and the like in the description and in the claims or in the above-described figures, are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to make the person skilled in the art better understand the solution of the present application, the technical solution of the embodiment of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in FIG. 1, FIG. 1 is a block diagram of one embodiment of a system to which the present application is applied.
An embodiment of the present application provides a system 100 for human-computer collaboration, the system including: a positioner 110, a robot 120, a force/touch sensor 130, and a controller 140; the end effector of the robot 110 is provided with a force/tactile sensor 130.
And (3) a robot:
specifically, the robot 120 located at the slave may be, but is not limited to: a humanoid robot; or by a series or parallel connection of robotic arms (e.g., delta, four-axis, or six-axis).
Since the end effector is provided at the end of the robot 120, the initial speed control amount, acceleration, speed, etc. of the end of the robot according to the embodiment of the present application are also regarded as the initial speed control amount, acceleration, speed, etc. of the end effector of the robot. The contact surface of the end effector with the target is provided with a force/tactile sensor 130.
Force/touch sensor force:
a force/tactile sensor for measuring force information of the end effector exerting force on the target object.
Wherein the force/tactile sensor 120 refers to a force sensor and/or a tactile sensor.
In particular, the force sensor may be, but is not limited to: a two-dimensional or multi-dimensional force sensor for measuring data of a two-dimensional or multi-dimensional force.
Specifically, the touch sensor is used for measuring and obtaining touch information on the premise of matching with various man-machine interaction functions such as grabbing and the like of the end effector.
To achieve gripping objects of different shapes and softness, etc., the tactile sensor and object contact surface is typically flexible and resilient. Tactile information includes, but is not limited to: array type multidimensional force information, surface deformation information, temperature information, texture information and the like.
Implementations of the tactile sensor include flexible contact surfaces, sensing circuitry, computing devices, and tactile information parsing algorithms.
It should be noted that, for convenience of understanding, the data signals about the force measured by the force/touch sensor may be collectively referred to as force information in the embodiments of the present application.
In a preferred embodiment, the force/tactile sensor is a three-dimensional tactile sensor so that force sensing data closer to the human body's touch can be obtained from multiple dimensions.
Specifically, the force/touch sensor is calibrated in advance, so that the pose conversion relationship between the force/touch sensor and the end effector can be obtained.
By taking a robot as an example, the tail ends of the two arms of the robot are provided with smart hands (namely end effectors) with touch sensors, and the force/touch sensors are distributed on the fingers and the palms of the smart hands, so that the force information of the embodiment of the application can be measured through the force/touch sensors; or the tail end of the mechanical arm is provided with a clamping jaw, and the contact surface of the clamping jaw and the target object is distributed with a force/touch sensor.
A positioner:
A positioner 110 for measuring movement data of a movement portion of an operator.
The motion data may be various data information reflecting a motion state of a motion part of an operator, for example: speed information and/or acceleration information.
In particular, the positioner may employ various devices that may be used for positioning, now or later developed, such as: an optical tracking positioning device, an IMU, an image sensor, a position encoder, a biochemical sensor (e.g., myoelectric slave sensor), or an actuator body (e.g., wearable device, exoskeleton) provided with the above-described positioner. The IMU is an inertial measurement unit that measures movement data (e.g., acceleration and angular velocity) of one or more joints of an operator.
In one embodiment, the positioner has a preset calibration relation with the operator, so that the movement condition of the operator can be directly or indirectly reflected based on the movement data collected by the positioner.
By way of example, taking an arm exoskeleton as an example, a plurality of connecting rods and the like form an actuator main body, and the actuator main body can be provided with an IMU corresponding to the arm joint position, and the arm skeleton is worn on the arm of an operator, so that motion data of corresponding joints in the arm motion process of the operator can be acquired through the IMU.
For example, to obtain the motion data of the hand described in the following embodiments, the optical tracking positioning device may be calibrated in advance, then a calibration plate (Marker) of the optical tracking positioning device is fixed on the wrist of the hand, and the device system is used to capture the marked motion data, which is usually three-dimensional space coordinates with a time stamp; performing discrete differential processing on the motion data to obtain motion data (such as speed information); and then filtering, smoothing and other post-processing are carried out on the motion data, so that the noise can be further reduced, the accuracy is improved, and the final motion data is obtained.
And (3) a controller:
The controller 140 includes: an adaptive collaboration module and a whole body control module. Further details will be provided in the operation control device of the human-computer cooperation of the following embodiments of the adaptive coordination module and the whole body control module.
The controller 140 is communicatively connected to the positioner 110, the robot 120, the force/touch sensor 130, and the like, respectively, by wired or wireless means.
It should be noted that the wireless connection may include, but is not limited to, 3G/4G/5G connection, wiFi connection, bluetooth connection, wiMAX connection, zigbee connection, UWB (ultra wideband) connection, and other now known or later developed wireless connection.
The controller in the embodiment of the application can be, but is not limited to: a computer terminal (Personal Computer, PC); an industrial control computer terminal (Industrial Personal Computer, IPC); a mobile terminal; a server; the system comprises a terminal and a server, and is realized through interaction between the terminal and the server; a programmable logic controller (Programmable Logic Controller, PLC); a Field-Programmable gate array (Field-Programmable GATE ARRAY, FPGA); a Digital signal processor (Digital SignalProcesser, DSP) or a micro-control unit (Microcontroller unit, MCU). The controller generates program instructions based on a pre-fixed program in combination with data output from the positioner, robot, force/touch sensor, etc. By way of example, may be applied to a computer device as shown in fig. 5.
The controller 140 according to the embodiment of the present application may be a separate controller, or may be fully or partially integrated into a robot, a positioner, a force/touch sensor, etc., which is not limited by the present application.
In one embodiment, the system according to the embodiment of the present application may further include: an image sensor and/or a presenter (omitted from the figures), etc. Wherein, the image sensor is used for shooting the image data of the robot operation state of the slave end, etc. The demonstrator is used for generating a space virtual image of the target object based on the image data of the operation state and displaying the virtual image to an operator; or directly display the image data captured by the image sensor to an operator. In particular, the presenter may be a VR, AR, or display device.
It should be noted that, the positioner, the robot, the force/touch sensor, and the like mentioned in the embodiment of the present application may be a real object in a real environment, or may be a virtual object in a simulation platform, so as to achieve the effect of connecting the real object through the simulation environment. The controller which depends on the virtual environment to complete training can be transplanted to the real environment to control or retrain the real object, and resources and time of the training process can be saved.
As shown in fig. 2 and 4, fig. 2 is a schematic frame structure of an embodiment of the human-computer collaboration system of the present application; fig. 4 is a schematic structural view of an embodiment of the human-computer cooperative operation control apparatus of the present application.
Based on the human-computer cooperation system described in the above embodiment, the embodiment of the present application provides a human-computer cooperation operation control device, where the embodiment of the device corresponds to the embodiment of the human-computer cooperation operation control method shown in fig. 3, and the device may be specifically applied to the controller 140.
The adaptive coordination module 141 is configured to calculate a speed control amount and a pose control amount of the end of the robot based on the motion data and the force information; the motion data are motion data of the motion part of the operator measured based on the positioner; the force information is force information based on the force exerted by the end effector of the robot on the target object as measured by the force/touch sensor.
The whole body control module 142 is configured to calculate a joint velocity control amount of the robot joint based on the velocity control amount and the pose control amount, so as to instruct the robot to move based on the joint velocity control amount.
According to the embodiment of the application, the speed control quantity and the pose control quantity of the tail end of the robot are obtained after the force information of acting force applied to the target object by the tail end executor of the robot is fused with the multidimensional information of the motion data of the motion part of the operator; and based on the speed control quantity and the pose control quantity, the joint speed control quantity of the robot is obtained, and the deformation of the target object under different stress conditions is fully considered, so that the operability of man-machine cooperation aiming at the deformed target object can be improved, and the working efficiency and the safety are improved.
For ease of understanding, the above-described individual modules are described in further detail below.
In one embodiment, the adaptive collaboration module 141 includes: admittance control submodule 1411 and fusion control submodule 1412.
An admittance control submodule 1411 is used for obtaining the initial speed control quantity of the tail end of the robot based on the force information.
In one embodiment, the admittance control sub-module 1411 in the controller calculates an initial velocity control amount to obtain the tip of the robot based on force information output by force/touch sensor measurements or force information after some preprocessing.
In one embodiment, admittance control submodule 1411 may include: admittance control unit.
And an admittance control unit for calculating an initial speed control amount of the tip of the robot based on the force information in combination with the formulas (1) and (2).
Constant force control can be generally achieved using admittance control laws: the formula may be rewritten as a velocity and acceleration calculation formula for the tip of the robot:
(1)
(2)
Wherein, Representing an inertia matrix; Representing a damping matrix; Representing force information directly based on force/touch sensor measurements (e.g., examples in robotic grasping ropes referred to later), or other force information derived based on force information (e.g., examples of robotic massage referred to later); an acceleration indicating the tip of the robot; Representing the speed of the tip of the robot (i.e., the initial speed control amount of the tip of the robot according to the embodiment of the present application); representing a time step in the control system; Representing the robot control period.
According to the embodiment of the application, the admittance control submodule 1411 can realize the conversion from the external force to the initial speed control quantity of the robot under the condition that the environmental characteristics are not clarified in advance, so that the robot obtains the capability of compliant external force.
A fusion control submodule 1412, configured to fuse the initial speed control amount and the motion data to obtain a fused speed control amount; the fusion speed control quantity is converted into a terminal speed control quantity and a control quantity.
In particular, the motion data may include, but is not limited to: the amount of movement speed of the operator's movement part.
In one embodiment, the fusion control sub-module may include: a fusion calculation unit and a variable calculation unit;
The fusion calculation unit is used for calculating the fusion speed control quantity after the fusion of the initial speed control quantity and the motion speed quantity (namely the motion data in the embodiment of the application) based on the formula (3);
(3)
Wherein, Representing an initial speed control amount; Representing the adaptive fusion coefficient; Representing the amount of motion speed; Indicating the amount of fusion speed control.
Wherein the adaptive fusion coefficientInitial speed control amount used to adjust the admittance control submodule 1411 output onlineWith the movement speed of the operator's movement part, e.g. handWeights in the fusion calculation to obtain a fused speed control quantity after fusion。
A variable calculation unit for converting the fusion speed control amount into a speed control amount and a pose control amount of the tip of the robot based on formulas (4) and (5) using integration;
(4)
(5)
Wherein, Is a speed control amount; The control quantity of the pose is the pose control quantity.
The embodiment of the application controls the quantity through the speedOnly the linear velocity is describedComprisesAnd a posture speed, the posture speed being 0. Then the speedThe pose control quantity can be obtained through an integral formula. The method of the embodiment of the application can simply realize the calculation of the speed control quantity and the pose control quantity.
It should be noted that, in addition to the above embodiment of converting the fusion speed control amount into the terminal speed control amount and the terminal control amount by using integration, the variable calculation unit may also convert the fusion speed control amount into the terminal speed control amount and the terminal control amount by using a neural network or other existing or future developed methods as required, which are all within the scope of the present application.
Further, in one embodiment, the fusion calculation unit fuses coefficients for the adaptationThe equation (6) can be used to calculate:
(6)
Wherein, Representing the adaptive fusion coefficient; representing the current time; Representing a moving window length; representing a custom minute value for avoiding zero computation; representing an initial speed control amount; Representing the adaptive fusion coefficient; Representing the amount of motion speed; Indicating the time obtained by subtracting the moving window from the current time, for example, the current time is 10 th second, and the moving window is set to 1 second, then Then it is the 9 th second.
The adaptive fusion coefficient is as followsThe deformable properties of the manipulated object may be characterized. When the adaptive fusion coefficient is equal to 0, the operated target object is not deformable;
When (when) When equal to 1, the operated object is completely deformable;
When meeting the requirements When the object to be operated is deformed, the object to be operated is deformed.
Taking a man-machine interactive handling string (which is a common deformable object) as an example, the force transfer characteristics on the two-end connection of the string can be described simply as:
Wherein, Representing the force of the string conduction; indicating the elastic modulus of the string when fully stretched and overstretched; Representing the elastic coefficient of the string when under stretched; Representing the length of a straight line at both ends of the string; Indicating the length of the string when fully extended.
Since the force transmission characteristics of the string are close to those of the rigid object when the string is fully stretched and overstretched, the elastic coefficient at that time isThe numerical value is larger. The string has weak force transmission capability in understretching, and the elasticity coefficient at the moment can be knownThe value is smaller. In practice, it isAnd (3) withSatisfy the following requirementsIs a relationship of (3).
If only the force conducted by the string is used for driving the robot to move through the admittance controller to realize the man-machine cooperation carrying, the following formula can be used for describing:
Wherein, Representing an inertia matrix; Representing a damping matrix; representing the force conducted by the string (i.e., force information of an embodiment of the present application); an acceleration indicating the tip of the robot; The initial speed control amount of the tip of the robot is represented.
The above equation can be rewritten as a calculation equation of the initial speed control amount of the tip of the robot:
(1)
(2)
And then fusing the motion data with the motion data of the hand to generate a fusion speed control quantity:
If the string is under-stretched, carrying the string by man-machine cooperation, and setting the hand movement speed of the operator to be 0.5m/s. The speed control amount obtained by the admittance control submodule 1141 at this time was 0.1m/s. On the basis of the admittance control sub-module 1141, the fusion controller also considers the motion speed of the hands of the operator, and if the adaptive fusion coefficient is 0.5, the control speed output by the fusion control sub-module 1142 is 0.35m/s. It can be seen that the fusion control sub-module 1142 can better track and conform to the intention of the operator and reduce the operation difficulty compared with the simple admittance control sub-module 1141.
For convenience of understanding, another example is described by taking a scene of massaging or physiotherapy of a human body by using a robot, and the calculation formula of the initial speed control amount of the end of the robot is as follows:
(1)
(2)
Wherein, Representing an inertia matrix; Representing a damping matrix; Representing the difference between the actual acting force and the expected acting force of the end effector of the robot on the human body (namely, other force information obtained based on the force information in the embodiment of the application); an acceleration indicating the tip of the robot; The initial speed control amount of the tip of the robot is represented.
If the hand movement information of the person to be massaged is considered on the basis, the person to be massaged can have the capability of actively adjusting the massage force. The initial speed control quantity output by the admittance control submodule and the motion speed quantity of the hands of the person are fused to obtain the fusion speed control quantity:
If force control is only achieved in the Z-axis of the end tool coordinate system during massage, the rate control via admittance control sub-module 1141 is 0.1m/s. On the basis of the admittance control sub-module 1141, the fusion controller also considers the hand movement speed of the person to be massaged, and if the adaptive fusion coefficient is 0.8 and the hand movement speed of the person to be massaged is 0.2m/s, the control speed output by the fusion control sub-module 1142 is 0.26m/s. It can be known that, compared with the simple admittance control submodule 1141, the fusion control submodule 1142 can enable the person to be massaged to actively adjust the massage force, so as to improve the user experience.
According to the embodiment of the application, the initial speed control quantity of the tail end of the robot is obtained based on the force information of the acting force applied by the tail end actuator to the target object; fusing the initial speed control quantity and the motion data of the motion part of the operator to obtain a fused speed control quantity; the fusion speed control quantity is converted into the terminal speed control quantity and the terminal control quantity, and the deformation conditions of the target object under different stress conditions are fully considered, so that the operability of man-machine cooperation aiming at the deformed target object can be improved, and the working efficiency and the safety are improved.
A whole body control module 142 for obtaining joint velocity control amounts of the robot joints based on the velocity control amounts and the pose control amountsTo indicate robot motion based on the joint velocity control amount.
In one embodiment, the whole body control module can calculate the joint speed control quantity of the robot based on the speed control quantity and the pose control quantity based on quadratic programming. The cost function of the quadratic programming is as follows:
Wherein, Representing the whole body freedom degree of the robot; Representing chassis degrees of freedom; Representing the degree of freedom of the mechanical arm; the joint speed control quantity of each joint is represented and is an optimized variable; Representing a whole body jacobian matrix; Representing the current pose of the tail end of the robot; Representing a desired pose of the tip of the robot; Representing a desired speed of the tip of the robot; ,, Are all a diagonal positive coefficient matrix, Is the damping coefficient.
By solving the quadratic programming problem, the joint speed control quantity of each joint of the robot can be obtained。
It should be noted that, in addition to the method described in the above embodiment, the whole body control module may also be any existing or future developed module such as a neural network, if necessary, so long as the joint speed control amount of each joint of the robot can be obtained based on the speed control amount and the pose control amount, which falls within the scope of the present application.
As shown in fig. 3, fig. 3 is a flow chart of an embodiment of the human-machine cooperative operation control method of the present application. It should be noted that the operation control method of human-computer cooperation provided in the embodiment of the present application is generally performed by the controller 140 described in fig. 1 or 2 in the above embodiment.
The embodiment of the application provides a man-machine cooperation operation control method, which can comprise the following method steps:
Step 210, based on the motion data and the force information, calculating the speed control amount and the pose control amount of the tail end of the robot; the motion data are motion data of a motion part of an operator; the force information is force information of acting force applied by the end effector to the target object;
step 220 obtains a joint velocity control amount of the robot joint based on the velocity control amount and the pose control amount to instruct the robot to move based on the joint velocity control amount.
According to the embodiment of the application, the speed control quantity and the pose control quantity of the tail end of the robot are obtained after the force information of acting force applied to the target object by the tail end executor of the robot is fused with the multidimensional information of the motion data of the motion part of the operator; and based on the speed control quantity and the pose control quantity, the joint speed control quantity of the robot is obtained, and the deformation of the target object under different stress conditions is fully considered, so that the operability of man-machine cooperation aiming at the deformed target object can be improved, and the working efficiency and the safety are improved.
For ease of understanding, the method steps described above are described in further detail below.
Step 210, based on the motion data and the force information, calculating the speed control amount and the pose control amount of the tail end of the robot; the motion data are motion data of a motion part of an operator; the force information is force information of the end effector exerting a force on the target object.
In one embodiment, prior to step 210, the method steps may be included as follows:
Step 230 obtains force information.
In one embodiment, the controller obtains force information of the force applied to the target by the end effector or force information after some preprocessing, output by the force/touch sensor measurement, from the memory or server at a preset address.
Step 240 obtains movement data of the movement portion of the operator.
In one embodiment, control retrieves motion data of the operator's motion part (e.g., hand) or some pre-processed motion data output by the positioner measurement from a memory or server at a preset address.
In one embodiment, step 210 may include the following method steps:
step 211 obtains an initial speed control amount of the tip of the robot based on the force information.
It should be noted that step 211 may include: directly obtaining the initial speed control quantity of the tail end of the robot based on the force information; alternatively, other force information may be obtained based on the force information, and the initial speed control amount of the distal end of the robot may be obtained based on the other force information.
Further, in one embodiment, step 211 may comprise the following method steps:
Step 2111, based on the force information, combines the formula (1) and the formula (2) to obtain an initial speed control amount of the end of the robot;
(1)
(2)
Wherein, Representing an inertia matrix; Representing a damping matrix; Representing force information or other force information derived based on the force information; an acceleration indicating the tip of the robot; The initial speed control amount of the tip of the robot is represented.
Step 212, fusing the initial speed control quantity and the motion data to obtain a fused speed control quantity; the fusion speed control quantity is converted into a terminal speed control quantity and a control quantity.
Further, in one embodiment, step 212 may include the following method steps:
Step 2121 obtains a fusion speed control amount obtained by fusing the initial speed control amount and the movement speed amount based on the formula (3):
(3)
Step 2122 converts the fusion speed control amount into a speed control amount and a pose control amount based on formulas (4) and (5);
(4)
(5)
Wherein, Representing an initial speed control amount; Representing the adaptive fusion coefficient; Representing motion data; Representing a fusion speed control amount; Is a speed control amount; The control quantity of the pose is the pose control quantity.
Further, in one embodiment, the adaptive fusion coefficients may be found by the following formula;
(6)
Wherein, Representing the adaptive fusion coefficient; representing the current time; Representing a moving window length; representing a custom minute value for avoiding zero computation; representing an initial speed control amount; Representing motion data; representing the time obtained by subtracting the moving window from the current time; wherein,
Characterizing variability properties of the manipulated object;
When (when) When equal to 0, the object to be operated is not deformable;
When (when) When equal to 1, the object to be operated is completely deformable;
When (when) When the object portion representing the operation is deformed.
Step 220 obtains a joint velocity control amount of the robot joint based on the velocity control amount and the pose control amount to instruct the robot to move through the joint velocity control amount of the joint.
In one embodiment, step 220 may comprise the following method steps:
Step 221, combining the cost function of quadratic programming, and calculating the joint speed control quantity based on the speed control quantity and the pose control quantity; the cost function of the quadratic programming is as follows:
Wherein, Representing the whole body freedom degree of the robot; Representing chassis degrees of freedom; Representing the degree of freedom of the mechanical arm; the joint speed control quantity of each joint is represented and is an optimized variable; Representing a whole body jacobian matrix; Representing the current pose of the tail end of the robot; Representing a desired pose of the tip of the robot; Representing a desired speed of the tip of the robot; ,, Are all a diagonal positive coefficient matrix, Is the damping coefficient.
According to the embodiment of the application, the speed control quantity and the pose control quantity of the tail end of the robot are obtained after the force information of acting force applied to the target object by the tail end executor of the robot is fused with the multidimensional information of the motion data of the motion part of the operator; and based on the speed control quantity and the pose control quantity, the joint speed control quantity of the robot is obtained, and the deformation of the target object under different stress conditions is fully considered, so that the operability of man-machine cooperation aiming at the deformed target object can be improved, and the working efficiency and the safety are improved.
For ease of understanding, the above method steps are described below by taking an example of an operator-human collaboration robot handling carpets. According to the carpet storage device, the carpet is stored by the man-machine cooperation control robot, so that the subsequent cooperative conveying operation is convenient.
The carpet is used as an operated object, the unfolded outline is rectangular, the size is 1.6mx1.2m, the outline is cylindrical after being accommodated, the radius is 0.1m, and the length is 1.6m. The carpet after storage is not rigid and is a part of deformable target object.
The operating system is configured as follows:
The robot is a humanoid robot, the tail ends of two arms of the humanoid robot are provided with smart hands, and the inner sides and palms of the smart hands are provided with touch sensors (namely force/touch sensors).
The locator adopts optical motion tracking equipment to sense the hand motion of an operator. Specifically, a Marker array may be disposed at the wrist of the operator, and an optical motion tracking system built up of 3 cameras may be disposed in the operator's room.
In the operation process, an operator gives an instruction for carrying the carpet in a composite manner to the robot, and the robot analyzes the instruction after receiving the instruction to generate a control sequence and execute the control sequence.
In the execution process, the robot first identifies a target object on the ground in a scene through the image sensor, and after the target object with semantic information being a carpet is matched, a double-arm clamping action sequence is further generated. Then, the robot finishes the clamping and lifting actions of one end of the carpet, reports waiting voice and waits for further instructions of operators.
And the operator observes that the robot finishes the clamping and lifting actions of one end of the carpet, and after hearing the voice broadcast waiting of the robot, lifts the other end of the carpet, and triggers an IO signal for continuously executing the next instruction by using a wireless switch. The robot system enters a cooperative work state, namely, the steps of executing the operation control method of the man-machine cooperation are started, and the method specifically comprises the following steps:
step 210 obtains a speed control amount and a pose control amount of the tip of the robot based on the motion data and the force information.
In the cooperative operation control process, the controller processes the data output by the touch sensor and calculates to obtain the external force applied by the dexterous hand; And acquire hand motion informationThe hand movement information is obtained by an optical movement tracker through tracking a Marker of the wrist part。
In one implementation, an adaptive collaboration module in the controller converts the two information into a tip speed control quantity for the robotAnd pose control amount. Wherein, fuse in the controllerThe setting is made to be 0.001,Set to 0.25s.
Step 220 obtains a joint velocity control amount of the robot joint based on the velocity control amount and the pose control amount to instruct the robot to move through the joint velocity control amount of the joint.
In one embodiment, the overall control module of the controller then controls the speed by an amountAnd pose control amountThe control amount of the joint speed of each joint of the robot is converted into。
Finally, controlling the speed of each jointAnd the robot servo system is issued to execute, so that man-machine cooperative transportation is completed.
For other relevant descriptions of the operation control method of man-machine cooperation in the embodiment of the present application, refer to the relevant descriptions of the adaptive cooperation module and the whole body control module in the above embodiment, and the detailed description is not repeated here.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored in a computer-readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. The storage medium may be a nonvolatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a random access Memory (Random Access Memory, RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited in order and may be performed in other orders, unless explicitly stated herein. Moreover, at least some of the steps in the flowcharts of the figures may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order of their execution not necessarily being sequential, but may be performed in turn or alternately with other steps or at least a portion of the other steps or stages.
Referring specifically to fig. 5, in order to solve the above technical problem, an embodiment of the present application further provides a controller (taking the computer device 5 as an example).
The computer device 5 comprises a memory 61, a processor 62, a network interface 63 communicatively connected to each other via a system bus. It is noted that only computer device 6 having components 61-63 is shown in the figures, but it should be understood that not all of the illustrated components are required to be implemented and that more or fewer components may be implemented instead. It will be appreciated by those skilled in the art that the computer device herein is a device capable of automatically performing numerical calculation and/or information processing according to a preset or stored instruction, and its hardware includes, but is not limited to, a microprocessor, an Application SPECIFIC INTEGRATED Circuit (ASIC), a Programmable gate array (Field-Programmable GATE ARRAY, FPGA), a digital Processor (DIGITAL SIGNAL Processor, DSP), an embedded device, and the like.
The computer equipment can be a desktop computer, a notebook computer, a palm computer, a cloud server and other computing equipment. The computer equipment can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad or voice control equipment and the like.
The memory 61 includes at least one type of readable storage media including flash memory, hard disk, multimedia card, card memory (e.g., SD or DX memory, etc.), random Access Memory (RAM), static Random Access Memory (SRAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), programmable Read Only Memory (PROM), magnetic memory, magnetic disk, optical disk, etc. In some embodiments, the storage 61 may be an internal storage unit of the computer device 6, such as a hard disk or a memory of the computer device 6. In other embodiments, the memory 61 may also be an external storage device of the computer device 6, such as a plug-in hard disk, a smart memory card (SMART MEDIA CARD, SMC), a Secure Digital (SD) card, a flash memory card (FLASH CARD) or the like, which are provided on the computer device 6. Of course, the memory 61 may also comprise both an internal memory unit of the computer device 6 and an external memory device. In this embodiment, the memory 61 is typically used for storing an operating system and various application software installed on the computer device 6, such as program codes of an operation control method of man-machine cooperation, and the like. Further, the memory 61 may be used to temporarily store various types of data that have been output or are to be output.
The processor 62 may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor 62 is typically used to control the overall operation of the computer device 6. In this embodiment, the processor 62 is configured to execute a program code stored in the memory 61 or process data, such as a program code for executing an operation control method of man-machine cooperation.
The network interface 63 may comprise a wireless network interface or a wired network interface, which network interface 63 is typically used for establishing a communication connection between the computer device 6 and other electronic devices.
The present application also provides another embodiment, namely, a computer-readable storage medium storing a man-machine cooperation operation program executable by at least one processor to cause the at least one processor to perform the steps of the man-machine cooperation operation control method as described above.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present application.
It is apparent that the above-described embodiments are only some embodiments of the present application, but not all embodiments, and the preferred embodiments of the present application are shown in the drawings, which do not limit the scope of the patent claims. This application may be embodied in many different forms, but rather, embodiments are provided in order to provide a thorough and complete understanding of the present disclosure. Although the application has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments described in the foregoing description, or equivalents may be substituted for elements thereof. All equivalent structures made by the content of the specification and the drawings of the application are directly or indirectly applied to other related technical fields, and are also within the scope of the application.
Claims (5)
1. A system for human-machine collaboration, the system comprising: a positioner, a robot, a force and/or haptic sensor, an adaptive collaboration module, and a whole body control module; the end effector of the robot sets the force and/or tactile sensor;
the force and/or touch sensor is used for measuring force information of acting force applied by the end effector to a target object;
the locator is used for measuring the movement data of the movement part of the operator;
The self-adaptive cooperation module is used for solving the speed control quantity and the pose control quantity of the tail end of the robot based on the motion data and the force information;
The whole body control module is used for solving joint speed control quantity of a robot joint based on the speed control quantity and the pose control quantity so as to indicate the robot to move based on the joint speed control quantity;
Wherein, the self-adaptive cooperation module includes: an admittance control sub-module and a fusion control sub-module; the fusion control submodule comprises: a fusion calculation unit and a variable calculation unit;
the admittance control sub-module is used for solving the initial speed control quantity of the tail end of the robot based on the force information by combining the formula (1) and the formula (2);
(1)
(2)
Wherein, Representing an inertia matrix; Representing a damping matrix; Representing force information or other force information derived based on the force information; an acceleration indicating the tip of the robot; An initial speed control amount indicating the tip of the robot; representing a time step in the control system; Representing a robot control period;
the fusion calculation unit is configured to calculate, based on formula (3), a fusion speed control amount after the initial speed control amount and the motion data are fused:
(3)
The variable calculation unit is used for converting the fusion speed control quantity into the speed control quantity and the pose control quantity based on formulas (4) and (5);
(4)
(5)
Wherein, Representing an initial speed control amount; Representing the adaptive fusion coefficient; Representing motion data; Representing a fusion speed control amount; Is a speed control amount; The control quantity is the pose control quantity;
The fusion calculation unit obtains the self-adaptive fusion coefficient through the following formula;
(6)
Wherein, Representing the adaptive fusion coefficient; representing the current time; Representing a moving window length; representing a custom minute value for avoiding zero computation; representing an initial speed control amount; Representing the motion data; representing the time obtained by subtracting the moving window from the current time; wherein,
The saidCharacterizing variability properties of the manipulated object;
When (when) When equal to 0, the object to be operated is not deformable;
When (when) When equal to 1, the object to be operated is completely deformable;
When (when) When the object portion representing the operation is deformed.
2. A human-computer collaborative operation control method, characterized in that the method comprises the following steps:
Based on the motion data and the force information, solving the speed control quantity and the pose control quantity of the tail end of the robot; wherein the motion data is of a motion part of an operator; the force information is force information of acting force applied by the end effector to the target object;
Based on the speed control amount and the pose control amount, calculating a joint speed control amount of a robot joint to indicate the robot motion based on the joint speed control amount;
wherein, based on the motion data and the force information, the method for calculating the speed control amount and the pose control amount of the tail end of the robot comprises the following steps:
Based on the force information, combining the formula (1) and the formula (2), and solving an initial speed control amount of the tail end of the robot;
(1)
(2)
Wherein, Representing an inertia matrix; Representing a damping matrix; Representing force information or other force information derived based on the force information; an acceleration indicating the tip of the robot; An initial speed control amount indicating the tip of the robot; representing a time step in the control system; Representing a robot control period;
Based on the formula (3), the fusion speed control quantity after the initial speed control quantity and the motion data are fused is obtained:
(3)
converting the fusion speed control amount into the speed control amount and the pose control amount based on formulas (4) and (5);
(4)
(5)
Wherein, Representing an initial speed control amount; Representing the adaptive fusion coefficient; Representing motion data; Representing a fusion speed control amount; Is a speed control amount; The control quantity is the pose control quantity;
The self-adaptive fusion coefficient is obtained through the following formula;
(6)
Wherein, Representing the adaptive fusion coefficient; representing the current time; Representing a moving window length; representing a custom minute value for avoiding zero computation; representing an initial speed control amount; Representing the motion data; representing the time obtained by subtracting the moving window from the current time; wherein,
The saidCharacterizing variability properties of the manipulated object;
When (when) When equal to 0, the object to be operated is not deformable;
When (when) When equal to 1, the object to be operated is completely deformable;
When (when) When the object portion representing the operation is deformed.
3. An operation control device for man-machine cooperation, the device comprising:
the self-adaptive cooperation module is used for solving the speed control quantity and the pose control quantity of the tail end of the robot based on the motion data and the force information; wherein the motion data is of a motion part of an operator; the force information is force information of acting force applied by the end effector to the target object;
The whole body control module is used for solving joint speed control quantity of a robot joint based on the speed control quantity and the pose control quantity so as to indicate the robot to move based on the joint speed control quantity;
Wherein, the self-adaptive cooperation module includes: an admittance control sub-module and a fusion control sub-module; the fusion control submodule comprises: a fusion calculation unit and a variable calculation unit;
the admittance control sub-module is used for solving the initial speed control quantity of the tail end of the robot based on the force information by combining the formula (1) and the formula (2);
(1)
(2)
Wherein, Representing an inertia matrix; Representing a damping matrix; Representing force information or other force information derived based on the force information; an acceleration indicating the tip of the robot; An initial speed control amount indicating the tip of the robot; representing a time step in the control system; Representing a robot control period;
the fusion calculation unit is configured to calculate, based on formula (3), a fusion speed control amount after the initial speed control amount and the motion data are fused:
(3)
The variable calculation unit is used for converting the fusion speed control quantity into the speed control quantity and the pose control quantity based on formulas (4) and (5);
(4)
(5)
Wherein, Representing an initial speed control amount; Representing the adaptive fusion coefficient; Representing motion data; Representing a fusion speed control amount; Is a speed control amount; The control quantity is the pose control quantity;
The fusion calculation unit obtains the self-adaptive fusion coefficient through the following formula;
(6)
Wherein, Representing the adaptive fusion coefficient; representing the current time; Representing a moving window length; representing a custom minute value for avoiding zero computation; representing an initial speed control amount; Representing the motion data; representing the time obtained by subtracting the moving window from the current time; wherein,
The saidCharacterizing variability properties of the manipulated object;
When (when) When equal to 0, the object to be operated is not deformable;
When (when) When equal to 1, the object to be operated is completely deformable;
When (when) When the object portion representing the operation is deformed.
4. A controller comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the human-machine cooperative operation control method of claim 2 when the computer program is executed.
5. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the operation control method of human-computer collaboration as claimed in claim 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410558115.6A CN118123847B (en) | 2024-05-08 | 2024-05-08 | Man-machine cooperation system, operation control method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410558115.6A CN118123847B (en) | 2024-05-08 | 2024-05-08 | Man-machine cooperation system, operation control method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118123847A CN118123847A (en) | 2024-06-04 |
CN118123847B true CN118123847B (en) | 2024-07-02 |
Family
ID=91240746
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410558115.6A Active CN118123847B (en) | 2024-05-08 | 2024-05-08 | Man-machine cooperation system, operation control method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118123847B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109848983A (en) * | 2018-12-10 | 2019-06-07 | 华中科技大学 | A kind of method of highly conforming properties people guided robot work compound |
CN113579476A (en) * | 2021-08-25 | 2021-11-02 | 清华大学 | Device and method for detecting absolute spatial attitude of surface of to-be-welded workpiece based on fusion of gravity sensing and visual sensing |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102456872B1 (en) * | 2021-12-30 | 2022-10-21 | 서울대학교산학협력단 | System and method for tracking hand motion using strong coupling fusion of image sensor and inertial sensor |
CN114474072B (en) * | 2022-03-18 | 2023-07-04 | 中科新松有限公司 | Track fusion method, device, equipment and storage medium |
CN114905508B (en) * | 2022-04-19 | 2023-08-22 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | Robot grabbing method based on heterogeneous feature fusion |
-
2024
- 2024-05-08 CN CN202410558115.6A patent/CN118123847B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109848983A (en) * | 2018-12-10 | 2019-06-07 | 华中科技大学 | A kind of method of highly conforming properties people guided robot work compound |
CN113579476A (en) * | 2021-08-25 | 2021-11-02 | 清华大学 | Device and method for detecting absolute spatial attitude of surface of to-be-welded workpiece based on fusion of gravity sensing and visual sensing |
Also Published As
Publication number | Publication date |
---|---|
CN118123847A (en) | 2024-06-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106826838B (en) | Interaction bionic mechanical arm control method based on Kinect visual depth sensor | |
CN106313049A (en) | Somatosensory control system and control method for apery mechanical arm | |
Dou et al. | Inverse kinematics for a 7-DOF humanoid robotic arm with joint limit and end pose coupling | |
CN115469576B (en) | Teleoperation system based on human-mechanical arm heterogeneous motion space hybrid mapping | |
CN111645093B (en) | Force sense feedback data glove for teleoperation | |
CN113119104B (en) | Mechanical arm control method, mechanical arm control device, computing equipment and system | |
Yang et al. | A study of the human-robot synchronous control system based on skeletal tracking technology | |
Liu et al. | HIT prosthetic hand based on tendon-driven mechanism | |
Pepe et al. | Development of an haptic interface based on twisted string actuators | |
Zhang et al. | One-DOF six-bar space gripper with multiple operation modes and force adaptability | |
CN118123847B (en) | Man-machine cooperation system, operation control method and device | |
Ji et al. | Self-identification of cable-driven exoskeleton based on asynchronous iterative method | |
CN116394276B (en) | Sample generation and model training method, device and system | |
Karam et al. | Design and implementation of a wireless robotic human hand motion-controlled using arduino | |
CN116629373A (en) | Model training system, training method, training device and storage medium | |
Wang et al. | Design and implementation of humanoid robot behavior imitation system based on skeleton tracking | |
CN112894794A (en) | Human body arm action simulation method and device, terminal equipment and storage medium | |
CN116383667B (en) | Model training and motion instruction prediction method, device and system | |
Imran et al. | Open Arms: Open-Source Arms, Hands & Control | |
CN116394265B (en) | Attitude sensor calibration method, attitude sensor calibration device, attitude sensor calibration equipment and storage medium | |
Nguyen et al. | Performance evaluation of an inverse kinematic based control system of a humanoid robot arm using MS Kinect | |
CN116542310B (en) | Model training and motion instruction prediction method, device and system for robot | |
Graziano et al. | A wireless haptic data suit for controlling humanoid robots | |
CN118061200B (en) | Force feedback method and device for teleoperation system based on vibration induction | |
CN118046394B (en) | Teleoperation motion control method, device, system and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |