CN117283555A - Method and device for autonomously calibrating tool center point of robot
- Publication number: CN117283555A
- Application number: CN202311416497.0A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- B25J9/1679—Programme controls characterised by the tasks executed
Abstract
The application discloses a method and a device for a robot to autonomously calibrate its tool center point. The method comprises: acquiring a plurality of groups of images captured by a vision sensor, where the groups of images are obtained by the vision sensor photographing the tool in different poses under different configurations of the robot; identifying preset feature points of the tool in the images corresponding to the different configurations and outputting multiple sets of feature point data corresponding to the different configurations, where the feature points include the tool center point; and calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the multiple sets of feature point data and the pose data of the corresponding sets of tool mounting positions in the world coordinate system. The tool mounting pose or trajectory can be specified arbitrarily; the only requirement is that the vision sensor can acquire image data of the tool, and the pose of the vision sensor relative to the robot base coordinate system does not need to be calibrated separately.
Description
Technical Field
The present application relates to the field of robotics, and more particularly, to a method and apparatus for autonomous calibration of a tool center point by a robot.
Background
With the development of robot technology, robots are being applied in more and more fields, such as industrial and agricultural production, social services, and home services. Industrial robots, for example, are widely used in welding, assembly, handling, painting, polishing, and similar tasks.
Robots often need a tool installed in order to perform certain tasks, for example the welding gun of a welding robot, the glue gun of a gluing robot, or the gripper of a handling robot. Because tools differ in size and shape, a tool center point must be determined to represent the tool and to describe the pose relation between the tool and the robot, so that the robot can accurately execute a trajectory with the tool center point as reference. The tool mounting pose is described in the robot base coordinate system.
the existing scheme I is as follows: the manual operation robot changes the mounting position and the pose of the tool, so that the tool points to the same point in the space in different poses; based on the constraint condition that the position of the point is unchanged relative to the robot base coordinate system and the pose data of the tool mounting position when the tool points to the point, the position of the tool center point relative to the coordinate system where the mounting surface of the tool is located can be calibrated.
Existing scheme II: one or more laser sensors detect the signal produced when the center point of a tool mounted on the robot blocks the laser beam while moving or at rest. Based on the line or plane constraint formed by the laser beam, together with the tool mounting pose data recorded when the beam is blocked, the pose of the tool center point relative to the coordinate system of its mounting surface can be calibrated.
However, the above schemes have the following disadvantages:
Disadvantages of scheme I: the robot must be operated manually to change the tool mounting pose and point the tool center point at the same position, which is tedious and inefficient, and the operator must enter the robot's working area, creating a safety hazard. After each change of the tool mounting pose, whether the tool center point still points at the same position must be judged by the naked eye, which introduces large errors. Moreover, a point constraint can only calibrate the position, not the orientation.
Disadvantages of scheme II: after the sensors are deployed, the sensor coordinate system must be defined and described in the robot base coordinate system. The tool mounting pose must be recorded at the exact moment the laser beam is blocked, which places high demands on the sensor and the robot control system, such as communication frequency, feedback delay, and timestamp synchronization. The robot must explore according to a fixed search logic in order to trigger the blocking signal, which is inefficient. And if the tool collides and deforms during use, the calibration becomes invalid.
Disclosure of Invention
In order to overcome the problems in the related art, embodiments of the invention provide a method and a device for a robot to autonomously calibrate its tool center point. The technical scheme is as follows:
according to a first aspect of an embodiment of the present invention, there is provided a method for autonomous calibration of a tool center point by a robot, comprising:
acquiring a plurality of groups of images captured by a vision sensor, where the groups of images are obtained by the vision sensor photographing the tool in different poses under different configurations of the robot;
identifying preset feature points of the tool in the images corresponding to the different configurations and outputting multiple sets of feature point data corresponding to the different configurations, where the feature points include the tool center point; and
calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the multiple sets of feature point data corresponding to the different configurations and the pose data of the corresponding sets of tool mounting positions in the world coordinate system.
Optionally, identifying the preset feature points of the tool from the image includes:
using a convolutional neural network model to identify and extract the preset feature points of the tool from the image.
Optionally, identifying and extracting the preset feature points of the tool from the image includes:
processing the left eye image and the right eye image captured by a binocular stereo vision sensor separately: scaling each image to a preset size, inputting it into the convolutional neural network model for inference, and outputting the feature point data.
Optionally, calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the multiple sets of feature point data corresponding to different configurations and the pose data of the corresponding sets of tool mounting positions in the world coordinate system includes:
obtaining the parameters of the vision sensor through calibration;
acquiring pose data of the sets of feature points in the vision sensor coordinate system based on the parameters of the vision sensor, where the pose data include the pose of the tool center point in the vision sensor coordinate system; and
calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data.
Optionally, calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data includes:
determining the following pose relation:

$${}^{W}T_{C}\;{}^{C}T_{i}^{k} = {}^{W}T_{E}^{k}\;{}^{E}T_{i};$$

transforming the pose relation into:

$$\left({}^{W}T_{E}^{k}\right)^{-1}{}^{W}T_{C}\;{}^{C}T_{i}^{k} = {}^{E}T_{i};$$

and obtaining, by a least squares method based on the Kronecker product transformation, the pose relations ${}^{E}T_{i}$ and ${}^{W}T_{C}$;

where ${}^{W}T_{C}$ is the pose description of the vision sensor in the world coordinate system, ${}^{W}T_{E}^{k}$ is the pose description of the tool mounting position in the world coordinate system, ${}^{E}T_{i}$ is the pose description of a tool feature point in the tool mounting position coordinate system, and ${}^{C}T_{i}^{k}$ is the pose description of a tool feature point in the vision sensor coordinate system; $k$ denotes the configuration of the robot, $k = 1, 2, \ldots, m$; $i$ denotes a feature point, $i = 0, 1, \ldots, n$, where $i = 0$ denotes the tool center point.
According to a second aspect of embodiments of the present invention, there is provided an apparatus for autonomous calibration of a tool center point by a robot, the apparatus comprising:
an acquisition module, used to acquire a plurality of groups of images captured by the vision sensor, where the groups of images are obtained by the vision sensor photographing the tool in different poses under different configurations of the robot;
a first processing module, used to identify preset feature points of the tool in the images corresponding to the different configurations and to output multiple sets of feature point data corresponding to the different configurations, where the feature points include the tool center point; and
a second processing module, used to calibrate the pose data of the tool center point in the tool mounting position coordinate system based on the multiple sets of feature point data corresponding to the different configurations and the pose data of the corresponding sets of tool mounting positions in the world coordinate system.
Optionally, the first processing module is configured to:
the convolutional neural network model is used to identify and extract the preset feature points of the tool from the image.
Optionally, the first processing module is configured to:
process the left eye image and the right eye image captured by a binocular stereo vision sensor separately: scale each image to a preset size, input it into the convolutional neural network model for inference, and output the feature point data.
Optionally, the second processing module is configured to:
obtain the parameters of the vision sensor through calibration;
acquire pose data of the sets of feature points in the vision sensor coordinate system based on the parameters of the vision sensor, where the pose data include the pose of the tool center point in the vision sensor coordinate system; and
calibrate the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data.
Optionally, calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data includes:
determining the following pose relation:

$${}^{W}T_{C}\;{}^{C}T_{i}^{k} = {}^{W}T_{E}^{k}\;{}^{E}T_{i};$$

transforming the pose relation into:

$$\left({}^{W}T_{E}^{k}\right)^{-1}{}^{W}T_{C}\;{}^{C}T_{i}^{k} = {}^{E}T_{i};$$

and obtaining, by a least squares method based on the Kronecker product transformation, the pose relations ${}^{E}T_{i}$ and ${}^{W}T_{C}$;

where ${}^{W}T_{C}$ is the pose description of the vision sensor in the world coordinate system, ${}^{W}T_{E}^{k}$ is the pose description of the tool mounting position in the world coordinate system, ${}^{E}T_{i}$ is the pose description of a tool feature point in the tool mounting position coordinate system, and ${}^{C}T_{i}^{k}$ is the pose description of a tool feature point in the vision sensor coordinate system; $k$ denotes the configuration of the robot, $k = 1, 2, \ldots, m$; $i$ denotes a feature point, $i = 0, 1, \ldots, n$, where $i = 0$ denotes the tool center point.
According to a third aspect of embodiments of the present invention, there is provided an apparatus for autonomous calibration of a tool center point by a robot, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a plurality of groups of images captured by a vision sensor, where the groups of images are obtained by the vision sensor photographing the tool in different poses under different configurations of the robot;
identifying preset feature points of the tool in the images corresponding to the different configurations and outputting multiple sets of feature point data corresponding to the different configurations, where the feature points include the tool center point; and
calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the multiple sets of feature point data corresponding to the different configurations and the pose data of the corresponding sets of tool mounting positions in the world coordinate system.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method of any one of the first aspects of the embodiments of the present invention.
According to the technical scheme provided by the embodiments of the invention, the tool mounting pose or trajectory can be changed arbitrarily; the only condition is that the vision sensor can acquire image data of the tool, and the pose of the vision sensor relative to the robot base coordinate system does not need to be calibrated separately.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for autonomous calibration of a tool center point by a robot according to an embodiment of the present application;
fig. 2 is a schematic diagram of a CNN model provided in an embodiment of the present application;
FIG. 3 is a schematic view of a tool feature point provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of calibrating parameters of a vision sensor;
fig. 5 is a schematic structural diagram of an apparatus for autonomous calibration of a tool center point of a robot according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the present disclosure without inventive effort fall within the scope of protection of the present disclosure.
The terms "first", "second", and the like in the description, the claims, and the above drawings are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "comprise" and "have", and any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to the listed steps or elements, but may include steps or elements not expressly listed.
The application applies to robots, which may include a mobile base and a robotic arm. The mobile base can drive the robot as a whole, and the robotic arm can move relative to the mobile base. The end of the robotic arm includes a tool mounting position for mounting different tools to complete different operations, such as the welding gun of a welding robot, the glue gun of a gluing robot, or the gripper of a handling robot; the corresponding tool can be mounted according to the use scenario.
Fig. 1 shows a method for autonomously calibrating a tool center point of a robot according to an embodiment of the present invention, which can be applied to a robot or an industrial personal computer of a robot. The method comprises the following steps S101 to S103:
in step S101, a plurality of sets of images captured by the vision sensor are acquired, where the plurality of sets of images are obtained by the vision sensor capturing images of tools of different poses under different configurations of the robot.
The vision sensor may be part of the robot body or independent of it. By changing the configuration of the robot, the tool mounting position of the robot changes, so that a tool mounted on the tool mounting position takes on different poses.
In step S102, preset feature points of the tool are identified from the images corresponding to different configurations, and multiple sets of feature point data corresponding to different configurations are output, where the feature points include a tool center point.
In step S103, the pose data of the tool center point in the tool mounting position coordinate system are calibrated based on the multiple sets of feature point data corresponding to different configurations and the pose data of the corresponding sets of tool mounting positions in the world coordinate system.
With the method for autonomously calibrating the tool center point of a robot provided by this embodiment, the tool mounting pose or trajectory can be specified arbitrarily; the only requirement is that the vision sensor can acquire image data of the tool, and the pose of the vision sensor relative to the robot base coordinate system (i.e., the world coordinate system) does not need to be calibrated separately.
In an embodiment of the present application, step S102 identifies preset feature points of the tool in the images corresponding to different configurations and outputs multiple sets of feature point data corresponding to the different configurations, where the feature points include the tool center point. Identifying the preset feature points of the tool from an image includes:
using a convolutional neural network model to identify and extract the preset feature points of the tool from the image.
This embodiment adopts a feature point identification method based on deep learning, using a convolutional neural network (CNN) model to automatically identify and extract the feature points of the tool from the image data acquired by the vision sensor.
Fig. 2 is a schematic diagram of an exemplary CNN model. The model adopts a multi-layer architecture comprising multiple convolution layers, each of which extracts features of the image at a different level, and multiple head layers (classifiers), each of which outputs features of a different dimension, with interaction among the heads. A feature point identification method based on an AI algorithm can iterate and upgrade itself, improving generalization and calibration accuracy.
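To make this structure concrete, here is a minimal sketch in PyTorch of such a multi-layer, multi-head network. The layer sizes, the two head types (a heatmap head for feature point positions and a score head for per-point confidence), and all names are illustrative assumptions, not the architecture actually disclosed in Fig. 2.

```python
import torch
import torch.nn as nn

class ToolKeypointNet(nn.Module):
    """Hypothetical sketch of the multi-layer CNN described above: a stack of
    convolution layers extracts features at different levels, and several
    head layers output features of different dimensions."""

    def __init__(self, num_keypoints: int = 3):
        super().__init__()
        # convolution layers: each level halves resolution and deepens features
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # head 1: per-keypoint heatmaps (where each feature point is)
        self.heatmap_head = nn.Conv2d(128, num_keypoints, kernel_size=1)
        # head 2: per-keypoint confidence scores (whether each point is visible)
        self.score_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, num_keypoints), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x)        # shared features feed both heads
        return self.heatmap_head(feats), self.score_head(feats)
```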
In an embodiment of the present application, step S102 identifies and extracts the preset feature points of the tool from the images as follows: the left eye image and the right eye image captured by a binocular stereo vision sensor are processed separately; each image is scaled to a preset size, input into the convolutional neural network model for inference, and the feature point data are output.
Taking a welding gun head as an example tool, the specific steps for outputting the feature point data are as follows:
Step A1: photograph the gun head of the welding gun with a binocular stereo vision sensor.
Step A2: acquire the left eye image and scale it to 256×256.
The preset size for image scaling can be set according to the available computing power; for example, when computing power is sufficient, the image may be scaled to 512×512.
Step A3: input the scaled left eye image into the convolutional neural network for inference; it outputs feature point positions such as the gun head contour position 31, the wire start position 32, and the wire end position 33. A schematic diagram of the feature points is shown in Fig. 3.
Step A4: acquire the right eye image and scale it to 256×256.
Step A5: input the scaled right eye image into the convolutional neural network for inference; it outputs feature point data such as the welding gun head contour, the wire start position, and the wire end position.
In this way, feature point data of the preset feature points are obtained from the left eye image and from the right eye image respectively. The feature points may include the tool center point and may also include points representing the tool's characteristic contour. The feature point data include position information and semantic information: the position information is the coordinates of the feature points in the left/right image coordinate system, and the semantic information is a human-understandable label such as "gun head contour", "wire start position", or "wire end position".
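Steps A1–A5 can be sketched as the following hypothetical pipeline, reusing the ToolKeypointNet sketch above; cv2.resize does the scaling, and keypoint pixel coordinates are read off the heatmap argmax and mapped back to the original image. Everything beyond the 256×256 preset size is an assumption.

```python
import cv2
import numpy as np
import torch

def detect_feature_points(image_bgr: np.ndarray, model, size: int = 256):
    """Scale one eye's image to the preset size, run the CNN, and return
    feature point coordinates in the original image's pixel frame."""
    h, w = image_bgr.shape[:2]
    resized = cv2.resize(image_bgr, (size, size)).astype(np.float32) / 255.0
    x = torch.from_numpy(resized).permute(2, 0, 1).unsqueeze(0)  # 1x3xHxW
    with torch.no_grad():
        heatmaps, scores = model(x)
    hm_h, hm_w = heatmaps.shape[-2:]
    points = []
    for hm in heatmaps[0]:                 # one heatmap per feature point
        idx = int(torch.argmax(hm))
        v_hm, u_hm = divmod(idx, hm_w)
        # map heatmap cell back to original pixel coordinates
        points.append((u_hm * w / hm_w, v_hm * h / hm_h))
    return points, scores[0].tolist()

# The two eyes are processed independently, as in steps A2-A3 and A4-A5:
# left_pts, _ = detect_feature_points(left_img, model)
# right_pts, _ = detect_feature_points(right_img, model)
```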
In an embodiment of the present application, step S103 calibrates the pose data of the tool center point in the tool mounting position coordinate system based on the multiple sets of feature point data corresponding to different configurations and the pose data of the corresponding sets of tool mounting positions in the world coordinate system, through steps B1 to B3:
step B1: and obtaining parameters of the visual sensor through calibration.
Step B2: and respectively acquiring pose data of a plurality of groups of characteristic points under a visual sensor coordinate system based on parameters of the visual sensor, wherein the pose data comprise poses of tool center points under the visual sensor.
Step B3: and calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data.
The coordinates of a feature point on the tool in the left and right image coordinate systems of the vision sensor are, respectively:

$${}^{L}p_i = [{}^{L}x_i,\ {}^{L}y_i], \qquad {}^{R}p_i = [{}^{R}x_i,\ {}^{R}y_i].$$

The pose of the feature point in the vision sensor coordinate system is denoted ${}^{C}p_i = [{}^{C}t_i,\ {}^{C}r_i]$, where the position vector is ${}^{C}t_i = [{}^{C}t_{x_i},\ {}^{C}t_{y_i},\ {}^{C}t_{z_i}]$ and the attitude vector is ${}^{C}r_i = [{}^{C}r_{x_i},\ {}^{C}r_{y_i},\ {}^{C}r_{z_i}]$. Here $i$ denotes a feature point, $i = 0, 1, \ldots, n$; ${}^{C}p_0$ is the pose of the tool center point, and ${}^{C}p_{i=1,\ldots,n}$ are the poses of the other feature points.
The parameters of the binocular camera are obtained through calibration: the baseline $b$ and the focal length $f$, as shown in Fig. 4. Let $\delta_i = |{}^{L}x_i - {}^{R}x_i|$ be the disparity of a feature point between the two eyes. Based on the principle of similar triangles:

$${}^{C}t_{z_i} = \frac{b\,f}{\delta_i}.$$

Therefore, taking image coordinates relative to the principal point, the position of the feature point in the vision sensor coordinate system can be deduced as:

$${}^{C}t_{x_i} = \frac{{}^{L}x_i\;{}^{C}t_{z_i}}{f}, \qquad {}^{C}t_{y_i} = \frac{{}^{L}y_i\;{}^{C}t_{z_i}}{f}, \qquad {}^{C}t_{z_i} = \frac{b\,f}{\delta_i}.$$
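As a minimal numpy sketch of this similar-triangle relation (assuming rectified images and pixel coordinates measured relative to the principal point $(c_x, c_y)$, which goes slightly beyond what the text states):

```python
import numpy as np

def feature_position_in_camera(pL, pR, b, f, cx, cy):
    """Position of one feature point in the vision sensor frame from the
    similar-triangle relation: depth tz = b*f/disparity."""
    Lx, Ly = pL
    Rx, _ = pR
    delta = abs(Lx - Rx)        # disparity of this feature point
    tz = b * f / delta          # depth along the optical axis
    tx = (Lx - cx) * tz / f     # lateral offset along the image x axis
    ty = (Ly - cy) * tz / f     # lateral offset along the image y axis
    return np.array([tx, ty, tz])
```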
in an embodiment of the present application, step B3 of calibrating pose data of a tool center point in a tool mounting position coordinate system based on pose data of the tool center point in a vision sensor coordinate system and tool mounting pose data, includes the following steps:
a plurality of sets of images have been previously acquired by step S101, the plurality of sets of images being captured by changing the configuration of the robot so that the tool appears in different poses within the field of view of the vision sensor. According to the steps, a plurality of sets of pose data of the tool feature point pose (calculated based on parallax as described above) in the visual sensor coordinate system and the pose data of the tool mounting position in the world coordinate system can be obtained. The tool mounting pose data can be obtained based on forward kinematics solution of the robot.
When the configuration of the robot is $k$:

the pose of a tool feature point in the vision sensor coordinate system is ${}^{C}p_i^k$, whose SE(3) description is written ${}^{C}T_i^k$;

the pose of the tool mounting position is ${}^{W}p_E^k$, whose SE(3) description is written ${}^{W}T_E^k$;

where $k = 1, 2, \ldots, m$.

The robot base coordinate system is taken as the world coordinate system, denoted $W$. ${}^{W}T_C$ is the pose description of the vision sensor in the world coordinate system; ${}^{W}T_E^k$ is the pose description of the tool mounting position in the world coordinate system; ${}^{E}T_i$ is the pose description of a tool feature point in the tool mounting position coordinate system; ${}^{C}T_i^k$ is the pose description of a tool feature point in the vision sensor coordinate system, whose inverse is $({}^{C}T_i^k)^{-1}$. The value of ${}^{W}T_C$ is independent of both $i$ and $k$, and the value of ${}^{E}T_i$ is independent of $k$: the vision sensor is fixed in the world, and the feature points are fixed relative to the tool mounting surface.
The closed kinematic chain formed by these transforms can be expressed as:

$${}^{W}T_C\;{}^{C}T_i^k = {}^{W}T_E^k\;{}^{E}T_i.$$

Left-multiplying both sides by $({}^{W}T_E^k)^{-1}$ gives:

$$\left({}^{W}T_E^k\right)^{-1}{}^{W}T_C\;{}^{C}T_i^k = {}^{E}T_i.$$

When $m$ groups of data have been acquired, ${}^{E}T_i$ and ${}^{W}T_C$ can be obtained by a least squares method based on the Kronecker product transformation, i.e. on the identity $\mathrm{vec}(AXB) = (B^{\top} \otimes A)\,\mathrm{vec}(X)$.

Thus the pose description of each tool feature point in the tool mounting position coordinate system is obtained, and in particular the pose of the tool center point relative to the coordinate system of its mounting surface (the tool mounting position coordinate system) is calibrated.
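A sketch of this least squares step with numpy follows: the $m$ relations $A_k X B_k = Y$, with $A_k = ({}^{W}T_E^k)^{-1}$, $B_k = {}^{C}T_i^k$, $X = {}^{W}T_C$ and $Y = {}^{E}T_i$, are stacked into one homogeneous linear system via the Kronecker identity and solved by SVD. This is one standard way to apply the identity, not necessarily the patent's exact solver; with noisy data the rotation blocks should additionally be projected onto SO(3).

```python
import numpy as np

def solve_AXB_eq_Y(As, Bs):
    """Find 4x4 matrices X, Y minimizing ||A_k X B_k - Y|| over all k,
    using vec(A X B) = (B^T kron A) vec(X). Returns X = W_T_C, Y = E_T_i."""
    rows = [np.hstack([np.kron(B.T, A), -np.eye(16)]) for A, B in zip(As, Bs)]
    M = np.vstack(rows)                  # (16*m) x 32 homogeneous system
    _, _, Vt = np.linalg.svd(M)
    z = Vt[-1]                           # right singular vector with M z ~ 0
    X = z[:16].reshape(4, 4, order="F")  # vec() stacks columns
    Y = z[16:].reshape(4, 4, order="F")
    X /= X[3, 3]                         # fix the common scale so that the
    Y /= Y[3, 3]                         # homogeneous bottom rows end in 1
    return X, Y

# usage sketch: A_k = np.linalg.inv(W_T_E[k]), B_k = C_T_i[k] for k = 1..m
```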
In an embodiment of the application, the robot can periodically place the tool in the field of view of the vision sensor and autonomously detect whether the pose of the tool center point has changed; if it has, the calibration function is triggered and calibration and correction are carried out autonomously, as sketched below. The vision sensor in this application can be used not only for tool center point calibration but also for correcting the robot's kinematic parameters, as well as for positioning, navigation, and obstacle avoidance.
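A sketch of this periodic self-check (the function names and the tolerance are assumptions):

```python
import numpy as np

def periodic_tcp_check(calibrate_tcp, stored_E_T_0, tol_m=0.0005):
    """Re-estimate E_T_0 (tool center point in the mounting frame) and
    re-calibrate only if it moved beyond a tolerance, e.g. after a
    collision deformed the tool."""
    new_E_T_0 = calibrate_tcp()          # fresh vision-based estimate
    drift = np.linalg.norm(new_E_T_0[:3, 3] - stored_E_T_0[:3, 3])
    if drift > tol_m:
        return new_E_T_0, True           # adopt the corrected calibration
    return stored_E_T_0, False           # calibration unchanged
```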
The following are embodiments of the apparatus of the invention, which may be used to perform the method embodiments of the invention.
Fig. 5 is a block diagram of an apparatus for a robot to autonomously calibrate a tool center point, according to an exemplary embodiment. The apparatus may be a terminal or part of a terminal, and may be implemented as part or all of an electronic device by software, hardware, or a combination of the two. As shown in Fig. 5, the apparatus includes:
the acquisition module 501 is configured to acquire a plurality of groups of images captured by a vision sensor, where the plurality of groups of images are obtained by the vision sensor capturing tools of different poses under different configurations of the robot respectively;
the first processing module 502 is configured to identify preset feature points of a tool in images corresponding to different configurations, and output multiple sets of feature point data corresponding to different configurations, where the feature points include a tool center point;
the second processing module 503 is configured to calibrate pose data of the tool center point in the tool installation position coordinate system based on multiple sets of feature point data corresponding to different configurations and pose data of multiple corresponding sets of tool installation positions in the world coordinate system.
In an embodiment, the first processing module 502 is configured to:
use a convolutional neural network model to identify and extract the preset feature points of the tool from the image.
In an embodiment, the first processing module 502 is configured to:
process the left eye image and the right eye image captured by a binocular stereo vision sensor separately: scale each image to a preset size, input it into the convolutional neural network model for inference, and output the feature point data.
In an embodiment, the second processing module 503 is configured to:
obtain the parameters of the vision sensor through calibration;
acquire pose data of the sets of feature points in the vision sensor coordinate system based on the parameters of the vision sensor, where the pose data include the pose of the tool center point in the vision sensor coordinate system;
calibrate the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data.
In an embodiment, calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data includes:
determining the following pose relation:

$${}^{W}T_{C}\;{}^{C}T_{i}^{k} = {}^{W}T_{E}^{k}\;{}^{E}T_{i};$$

transforming the pose relation into:

$$\left({}^{W}T_{E}^{k}\right)^{-1}{}^{W}T_{C}\;{}^{C}T_{i}^{k} = {}^{E}T_{i};$$

and obtaining, by a least squares method based on the Kronecker product transformation, the pose relations ${}^{E}T_{i}$ and ${}^{W}T_{C}$;

where ${}^{W}T_{C}$ is the pose description of the vision sensor in the world coordinate system, ${}^{W}T_{E}^{k}$ is the pose description of the tool mounting position in the world coordinate system, ${}^{E}T_{i}$ is the pose description of a tool feature point in the tool mounting position coordinate system, and ${}^{C}T_{i}^{k}$ is the pose description of a tool feature point in the vision sensor coordinate system; $k$ denotes the configuration of the robot, $k = 1, 2, \ldots, m$; $i$ denotes a feature point, $i = 0, 1, \ldots, n$, where $i = 0$ denotes the tool center point.
In another embodiment of the present application, there is also provided a readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for autonomous calibration of a tool center point for a robot as described in any of the above.
In another embodiment of the present application, there is also provided an apparatus for autonomous calibration of a tool center point by a robot, the apparatus may include:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a plurality of groups of images captured by a vision sensor, where the groups of images are obtained by the vision sensor photographing the tool in different poses under different configurations of the robot;
identifying preset feature points of the tool in the images corresponding to the different configurations and outputting multiple sets of feature point data corresponding to the different configurations, where the feature points include the tool center point; and
calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the multiple sets of feature point data corresponding to the different configurations and the pose data of the corresponding sets of tool mounting positions in the world coordinate system.
It should be noted that, the specific implementation of the processor in this embodiment may refer to the corresponding content in the foregoing, which is not described in detail herein.
In the present specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may be referred to one another. Since the apparatus disclosed in an embodiment corresponds to the method disclosed in the same embodiment, its description is relatively brief; for relevant details, refer to the description of the method.
Those skilled in the art will further appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (12)
1. A method for a robot to autonomously calibrate a tool center point, comprising:
acquiring a plurality of groups of images captured by a vision sensor, wherein the groups of images are obtained by the vision sensor photographing the tool in different poses under different configurations of the robot;
identifying preset feature points of the tool in the images corresponding to the different configurations and outputting multiple sets of feature point data corresponding to the different configurations, wherein the feature points comprise the tool center point; and
calibrating pose data of the tool center point in the tool mounting position coordinate system based on the multiple sets of feature point data corresponding to the different configurations and the pose data of the corresponding sets of tool mounting positions in the world coordinate system.
2. The method of claim 1, wherein identifying the preset feature points of the tool from the image comprises:
using a convolutional neural network model to identify and extract the preset feature points of the tool from the image.
3. The method of claim 2, wherein identifying and extracting the preset feature points of the tool from the image comprises:
processing the left eye image and the right eye image captured by a binocular stereo vision sensor separately: scaling each image to a preset size, inputting it into the convolutional neural network model for inference, and outputting the feature point data.
4. The method according to claim 3, wherein calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the sets of feature point data corresponding to different configurations and the pose data of the corresponding sets of tool mounting positions in the world coordinate system comprises:
obtaining the parameters of the vision sensor through calibration;
acquiring pose data of the sets of feature points in the vision sensor coordinate system based on the parameters of the vision sensor, wherein the pose data comprise the pose of the tool center point in the vision sensor coordinate system; and
calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data.
5. The method of claim 4, wherein calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data comprises:
determining the following pose relation:

$${}^{W}T_{C}\;{}^{C}T_{i}^{k} = {}^{W}T_{E}^{k}\;{}^{E}T_{i};$$

transforming the pose relation into:

$$\left({}^{W}T_{E}^{k}\right)^{-1}{}^{W}T_{C}\;{}^{C}T_{i}^{k} = {}^{E}T_{i};$$

and obtaining, by a least squares method based on the Kronecker product transformation, the pose relations ${}^{E}T_{i}$ and ${}^{W}T_{C}$;

wherein ${}^{W}T_{C}$ is the pose description of the vision sensor in the world coordinate system, ${}^{W}T_{E}^{k}$ is the pose description of the tool mounting position in the world coordinate system, ${}^{E}T_{i}$ is the pose description of a tool feature point in the tool mounting position coordinate system, and ${}^{C}T_{i}^{k}$ is the pose description of a tool feature point in the vision sensor coordinate system; $k$ denotes the configuration of the robot, $k = 1, 2, \ldots, m$; $i$ denotes a feature point, $i = 0, 1, \ldots, n$, where $i = 0$ denotes the tool center point.
6. An apparatus for a robot to autonomously calibrate a tool center point, the apparatus comprising:
an acquisition module, configured to acquire a plurality of groups of images captured by the vision sensor, wherein the groups of images are obtained by the vision sensor photographing the tool in different poses under different configurations of the robot;
a first processing module, configured to identify preset feature points of the tool in the images corresponding to the different configurations and to output multiple sets of feature point data corresponding to the different configurations, wherein the feature points comprise the tool center point; and
a second processing module, configured to calibrate the pose data of the tool center point in the tool mounting position coordinate system based on the multiple sets of feature point data corresponding to the different configurations and the pose data of the corresponding sets of tool mounting positions in the world coordinate system.
7. The apparatus of claim 6, wherein the first processing module is configured to:
use a convolutional neural network model to identify and extract the preset feature points of the tool from the image.
8. The apparatus of claim 7, wherein the first processing module is configured to:
process the left eye image and the right eye image captured by a binocular stereo vision sensor separately: scale each image to a preset size, input it into the convolutional neural network model for inference, and output the feature point data.
9. The apparatus of claim 6, wherein the second processing module is configured to:
obtain the parameters of the vision sensor through calibration;
acquire pose data of the sets of feature points in the vision sensor coordinate system based on the parameters of the vision sensor, wherein the pose data comprise the pose of the tool center point in the vision sensor coordinate system; and
calibrate the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data.
10. The apparatus of claim 9, wherein calibrating the pose data of the tool center point in the tool mounting position coordinate system based on the pose data of the tool center point in the vision sensor coordinate system and the tool mounting pose data comprises:
determining the following pose relation:

$${}^{W}T_{C}\;{}^{C}T_{i}^{k} = {}^{W}T_{E}^{k}\;{}^{E}T_{i};$$

transforming the pose relation into:

$$\left({}^{W}T_{E}^{k}\right)^{-1}{}^{W}T_{C}\;{}^{C}T_{i}^{k} = {}^{E}T_{i};$$

and obtaining, by a least squares method based on the Kronecker product transformation, the pose relations ${}^{E}T_{i}$ and ${}^{W}T_{C}$;

wherein ${}^{W}T_{C}$ is the pose description of the vision sensor in the world coordinate system, ${}^{W}T_{E}^{k}$ is the pose description of the tool mounting position in the world coordinate system, ${}^{E}T_{i}$ is the pose description of a tool feature point in the tool mounting position coordinate system, and ${}^{C}T_{i}^{k}$ is the pose description of a tool feature point in the vision sensor coordinate system; $k$ denotes the configuration of the robot, $k = 1, 2, \ldots, m$; $i$ denotes a feature point, $i = 0, 1, \ldots, n$, where $i = 0$ denotes the tool center point.
11. An apparatus for autonomously calibrating a tool center point of a robot, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a plurality of groups of images captured by a vision sensor, wherein the groups of images are obtained by the vision sensor photographing the tool in different poses under different configurations of the robot;
identifying preset feature points of the tool in the images corresponding to the different configurations and outputting multiple sets of feature point data corresponding to the different configurations, wherein the feature points comprise the tool center point; and
calibrating pose data of the tool center point in the tool mounting position coordinate system based on the multiple sets of feature point data corresponding to the different configurations and the pose data of the corresponding sets of tool mounting positions in the world coordinate system.
12. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 5.
Priority Applications (1)
- CN202311416497.0A | Priority date: 2023-10-29 | Filing date: 2023-10-29 | Method and device for autonomously calibrating tool center point of robot (granted as CN117283555B)
Publications (2)
- CN117283555A | 2023-12-26
- CN117283555B | 2024-06-11
Family
- ID: 89239128
Citations (13)
(Format: publication number | priority date | publication date | assignee, where listed | title.)
- US20080188983A1 | 2007-02-05 | 2008-08-07 | Fanuc Ltd | Calibration device and method for robot mechanism
- CN104827480A | 2014-02-11 | 2015-08-12 | Automatic calibration method of robot system
- CN107192331A | 2017-06-20 | 2017-09-22 | Workpiece grabbing method based on binocular vision
- CN108527360A | 2018-02-07 | 2018-09-14 | Location calibration system and method
- CN109029257A | 2018-07-12 | 2018-12-18 | Large-scale workpiece pose measurement system and method based on stereoscopic vision and structured light vision
- CN109900207A | 2019-03-12 | 2019-06-18 | Tool center point calibration method and system for a robot vision tool
- CN110640745A | 2019-11-01 | 2020-01-03 | Vision-based robot automatic calibration method, equipment and storage medium
- CN113524194A | 2021-04-28 | 2021-10-22 | Target grabbing method of a robot vision grabbing system based on multi-modal feature deep learning
- CN114001653A | 2021-11-01 | 2022-02-01 | Calibration method for the center point of a robot tool
- CN114310880A | 2021-12-23 | 2022-04-12 | Mechanical arm calibration method and device
- CN114310881A | 2021-12-23 | 2022-04-12 | Calibration method and system for a mechanical arm quick-change device, and electronic equipment
- CN115179294A | 2022-08-02 | 2022-10-14 | Robot control method, system, computer device, and storage medium
- US20230278224A1 | 2022-03-07 | 2023-09-07 | Path Robotics, Inc. | Tool calibration for manufacturing robots
Also Published As
- CN117283555B | 2024-06-11
Similar Documents
- CN111452040B | System and method for associating machine vision coordinate space in a pilot assembly environment
- JP5835926B2 | Information processing apparatus, information processing apparatus control method, and program
- CN110268358B | Position control device and position control method
- JP3834297B2 | Image processing device
- EP3222393B1 | Automated guidance system and method for a coordinated movement machine
- EP2682711B1 | Apparatus and method for three-dimensional measurement and robot system comprising said apparatus
- JP6855492B2 | Robot system, robot system control device, and robot system control method
- WO2017087521A1 | Three-dimensional visual servoing for robot positioning
- US20150120047A1 | Control device, robot, robot system, and control method
- CN111897349A | Underwater robot autonomous obstacle avoidance method based on binocular vision
- CN111151463A | Mechanical arm sorting and grabbing system and method based on 3D vision
- CN114102585A | Article grabbing planning method and system
- EP3250346A1 | 3D segmentation for robotic applications
- CN114742883B | Automatic assembly method and system based on plane workpiece positioning algorithm
- CN113878588B | Robot compliant assembly method based on tactile feedback and oriented to buckle type connection
- CN113524183A | Relative position obtaining method, robot arm control method, and robot arm system
- CN111275758B | Hybrid 3D visual positioning method, device, computer equipment and storage medium
- CN114851209B | Industrial robot working path planning optimization method and system based on vision
- CN111360851A | Hybrid servo control device and method for robot integrating touch and vision
- CN114299039B | Robot and collision detection device and method thereof
- JP2006224291A | Robot system
- CN114800524A | System and method for actively avoiding collision of human-computer interaction cooperative robot
- CN108724183B | Control method, system and related device of carrying mechanical arm
- CN112338922B | Five-axis mechanical arm grabbing and placing method and related device
- CN117283555B | Method and device for autonomously calibrating tool center point of robot
Legal Events
- PB01 | Publication
- SE01 | Entry into force of request for substantive examination
- GR01 | Patent grant