CN115617043A - Robot and positioning method, device, equipment, server and storage medium thereof - Google Patents
- Publication number
- CN115617043A (application No. CN202211281454.1A)
- Authority
- CN
- China
- Prior art keywords
- map
- robot
- positioning information
- positioning
- frame image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G05D1/024 — Control of position or course in two dimensions specially adapted to land vehicles, using optical position detecting means, using obstacle or wall sensors in combination with a laser
- G05D1/0251 — Control of position or course in two dimensions specially adapted to land vehicles, using optical position detecting means, using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
- G06T17/05 — Geographic models (Three dimensional [3D] modelling, e.g. data description of 3D objects)
Abstract
The application discloses a robot positioning method. The robot positioning method comprises the following steps: acquiring current environmental data while the robot moves in a physical space based on a first map; matching the environmental data with the first map so as to select, according to the matching result, either acquiring positioning information of the robot mapped in the first map based on the environmental data and the first map, or acquiring the positioning information of the robot mapped in the first map based on the environmental data and a second map; wherein the second map has an association relationship with the first map, and acquiring the positioning information of the robot mapped in the first map based on the environmental data and the second map comprises: converting the positioning information of the robot mapped in the second map into positioning information in the first map based on the association relationship.
Description
Technical Field
The present disclosure relates to the field of robotics, and in particular, to a robot, a positioning method thereof, a robot control device, a computer storage medium, an electronic device, and a server.
Background
With the development of automation technology and artificial intelligence, robots are widely used in various settings to replace manual work, for example, cleaning floor surfaces. A robot is a machine that performs work automatically and, while executing a task, needs to determine its own position and posture before carrying out the next operation.
Robots generally use SLAM (Simultaneous Localization and Mapping) technology to construct maps and localize themselves. However, SLAM performs positioning and mapping simultaneously; to reduce the amount of calculation and improve positioning efficiency when the surrounding environment changes little or infrequently, a map built once is usually saved and used for positioning in subsequent tasks. In some scenarios, the stored map is, for example, a visual map constructed based on visual SLAM technology; since visual SLAM depends on image data, when the lighting conditions in the environment change, the robot has difficulty localizing itself through the stored visual map. In other scenarios, the robot is equipped with a laser device and the stored map is, for example, a laser map constructed based on laser SLAM technology; although the data acquired by the laser device can better reflect the real physical environment, when an object in the environment changes, for example when the position of an original object changes or a new object is added to the environment, the robot cannot localize itself from the stored laser map, or the positioning result is unreliable.
Therefore, it is desirable to provide a positioning method that can not only improve the positioning accuracy but also ensure the positioning efficiency.
Disclosure of Invention
In view of the above-mentioned shortcomings of the related art, the present application aims to provide a robot and a positioning method thereof, a control device of the robot, a computer storage medium, an electronic device, and a server, so as to overcome the technical problems of poor positioning accuracy and low efficiency in the related art.
To achieve the above and other related objects, a first aspect of the present disclosure discloses a robot positioning method, including: acquiring current environmental data while the robot moves in a physical space based on a first map; matching the environmental data with the first map so as to select, according to the matching result, either acquiring positioning information of the robot currently mapped in the first map based on the environmental data and the first map, or acquiring the positioning information of the robot currently mapped in the first map based on the environmental data and a second map; wherein the second map has an association relationship with the first map, and acquiring the positioning information of the robot mapped in the first map based on the environmental data and the second map comprises: converting the positioning information of the robot mapped in the second map into positioning information in the first map based on the association relationship.
A second aspect of the present disclosure discloses a control apparatus of a robot, including an interface device for performing data communication with the robot; a storage device storing at least one program; processing means, connected to said storage means and said interface means, for executing said at least one program to perform and implement the robot positioning method as described in any of the embodiments disclosed in the first aspect of the present application.
A third aspect of the present disclosure discloses a robot comprising: sensor means for acquiring environmental data; a moving means for performing a moving operation; a storage device to store at least one program; processing means connected to said sensor means, moving means, and storage means for executing said at least one program to perform a robot positioning method as described in any of the embodiments disclosed in the first aspect of the present application.
A fourth aspect of the present disclosure discloses a computer storage medium storing at least one computer program, where the computer program, when executed by a processor, controls a device where the storage medium is located to execute the robot positioning method as described in any one of the embodiments disclosed in the first aspect of the present disclosure.
A fifth aspect of the present disclosure discloses an electronic device, comprising: an interface device for data communication with the robot; a storage device storing at least one program; processing means, connected to said storage means and said interface means, for executing said at least one program to perform and implement the robot positioning method as described in any of the embodiments disclosed in the first aspect of the present application.
A sixth aspect of the present disclosure discloses a server, comprising: an interface device for data communication with the robot; a storage device storing at least one program; processing means, connected to said storage means and said interface means, for executing said at least one program to perform and implement the robot positioning method as described in any of the embodiments disclosed in the first aspect of the present application.
In summary, the present application discloses a robot and a positioning method thereof, a control device of the robot, a computer storage medium, an electronic device, and a server. A first map is set as a base map to provide the basis for robot navigation. When the positioning accuracy of the first map is high, that is, when the environmental data matches the features of the first map with high consistency, the environmental data is matched against the first map for positioning and navigation. When the positioning accuracy of the first map is low, that is, when the matching consistency between the environmental data and the first map is low, the robot obtains positioning information based on the environmental data and a second map and, using the association relationship between the first map and the second map, maps the relatively accurate positioning information on the second map to positioning information in the first map, so that the robot continues to navigate and move based on the first map with relatively accurate positioning. In this way, when positioning based on the first map is inaccurate, the positioning information obtained from the second map serves as a supplement, which improves positioning efficiency, reduces the amount of calculation, and lowers the probability of positioning failure or error.
Other aspects and advantages of the present application will be readily apparent to those skilled in the art from the following detailed description. Only exemplary embodiments of the present application are shown and described in the following detailed description. As those skilled in the art will recognize, the disclosure of the present application enables changes to be made to the specific embodiments disclosed without departing from the spirit and scope of the invention to which the present application is directed. Accordingly, the descriptions in the drawings and the specification of the present application are illustrative only and not limiting.
Drawings
The specific features of the invention to which this application relates are set forth in the appended claims. The features and advantages of the invention to which this application relates will be better understood by reference to the exemplary embodiments described in detail below and the accompanying drawings. The brief description of the drawings is as follows:
fig. 1 is a flowchart illustrating a robot positioning method according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating step S120 according to an embodiment of the present application.
FIG. 3 is a flow chart illustrating the process of determining the coordinate transformation relationship between the first map and the second map according to an embodiment of the present application.
Fig. 4 is a schematic diagram illustrating the repositioning of key frames in a first map in a second map according to an embodiment of the present application.
Fig. 5 is a schematic diagram illustrating the relocation of key frames in a second map in a first map according to an embodiment of the present application.
FIG. 6 is a flow chart illustrating a method for mapping a robot according to an embodiment of the present application.
Fig. 7 is a flowchart illustrating step S320 in an embodiment of the present application.
Fig. 8 is a flowchart illustrating step S330 in an embodiment of the present application.
Fig. 9 is a schematic diagram illustrating the determination of a keyframe image and the mapping of the robot to the first map according to an embodiment of the present disclosure.
FIG. 10 is a diagram illustrating a reprojection error according to an embodiment of the present disclosure.
FIG. 11 is a flowchart illustrating a method for updating a map for a robot according to an embodiment of the present application.
Fig. 12 is a flowchart illustrating step S430 in an embodiment of the present application.
Fig. 13 is a schematic structural diagram of a robot according to an embodiment of the present disclosure.
Fig. 14 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Fig. 15 is a schematic diagram of a server in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is provided for illustrative purposes, and other advantages and capabilities of the present application will become apparent to those skilled in the art from the present disclosure.
In the following description, reference is made to the accompanying drawings that describe several embodiments of the application. It is to be understood that other embodiments may be utilized and that structural (module or unit composition), electrical, and operational changes may be made without departing from the spirit and scope of the present disclosure. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
Although the terms first, second, etc. may be used herein to describe various elements or parameters in some instances, these elements or parameters should not be limited by these terms. These terms are only used to distinguish one element or parameter from another element or parameter. For example, a first map may be referred to as a second map, and similarly, a second map may be referred to as a first map, without departing from the scope of the various described embodiments. The first map and the second map are both describing one map, but they are not the same map unless the context clearly dictates otherwise.
Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
As described in the Background, in robot positioning methods that use a previously constructed and stored map, chosen in consideration of the calculation amount and positioning efficiency, the positioning result is often unreliable, or positioning fails altogether, due to environmental changes. Specifically, when positioning against a stored visual map constructed based on SLAM technology, the visual map is sensitive to lighting changes in the environment: the frame images acquired by the robot differ for the same scene under different lighting, making positioning difficult. When positioning against a stored laser map, the robot is sensitive to position changes of objects: once an object in the space moves or a new object is added, the robot cannot determine its own position from the data detected by the laser device.
In order to reduce the influence of lighting changes on positioning based on a visual map, in some embodiments the map that is constructed and stored in advance is built by extracting image features with a machine learning method, and during positioning the frame images are likewise processed by machine learning. Although this reduces the influence of lighting changes on positioning, the amount of calculation is large and the positioning speed is slow, causing a heavy computational load and non-real-time positioning.
In view of this, the present application discloses a robot and a positioning method thereof, a control device of the robot, a computer storage medium, an electronic device, and a server. A first map is set as a base map to provide the basis for robot navigation. When the positioning accuracy of the first map is high, that is, when the environmental data matches the features of the first map with high consistency, the environmental data is matched against the first map for positioning and navigation. When the positioning accuracy of the first map is low, that is, when the matching consistency between the environmental data and the first map is low, the positioning information obtained by the robot based on the environmental data and a second map is mapped, via the association relationship between the first map and the second map, to positioning information in the first map, so that the robot continues to navigate and move based on the first map with relatively accurate positioning. In this way, when positioning based on the first map is inaccurate, the positioning information obtained from the second map serves as a supplement, which improves positioning efficiency, reduces the amount of calculation, and lowers the probability of positioning failure or error.
The physical space in the present application refers to the actual three-dimensional space in which the mobile robot is located, and may be described by abstract data constructed in a spatial coordinate system. For example, the physical space includes, but is not limited to, a home residence, a public place (e.g., an office, a mall, a hospital, an underground parking lot, and a bank), and the like. For a mobile robot, the physical space generally refers to an indoor space, i.e., a space having boundaries in the length, width, and height directions. This particularly includes physical spaces such as shopping malls and waiting halls, which are characterized by a large spatial range, a high degree of scene repetition, and the like.
The first map described in the present application may be, for example, a laser map or a visual map of a corresponding physical space constructed in advance. The second map described in the present application may also be, for example, a laser map or a visual map corresponding to the physical space. In order to distinguish the first map from the second map, in some embodiments or examples where the first map corresponds to a laser map or a visual map, the laser map is also referred to as a first laser map, and the visual map is also referred to as a first visual map. In some embodiments or examples where the second map corresponds to a laser map or a visual map, the laser map is also referred to as a second laser map and the visual map is also referred to as a second visual map.
In some embodiments, the first laser map is a map, such as a grid map or a feature map, that the robot builds based on laser SLAM techniques in the movement of the physical space. For example, the robot is configured with a laser device, when the robot needs to construct a first map, such as when the robot is first placed in a current physical space, the robot controls the laser device to scan the surrounding environment during movement, constructs the first laser map of the current physical space based on point cloud data detected by the laser device, and stores the first laser map after the first laser map is constructed for use in subsequent work of the robot. Of course, the first laser map may be constructed by using other techniques, which is not limited in this application. Because the laser map can accurately reflect the real physical space and has high positioning efficiency, in the embodiment that the first map is set as the first laser map, the robot moves according to the first map during navigation, and the efficient and high-precision positioning of most positions in the physical space can be realized based on the first map and the environmental data.
In some embodiments, the first visual map is a map constructed based on VSLAM (Visual Simultaneous Localization and Mapping, also referred to as visual SLAM) technology. Specifically, the robot is provided with an image pickup device; when the robot needs to construct a first map, for example when the robot is placed in the current physical space for the first time, the robot controls the image pickup device to obtain image data of the surrounding environment during movement, constructs the first visual map of the current physical space based on the image data, and stores the first visual map after it is constructed for use in subsequent work of the robot. Therefore, in the example in which the first map is set as the first visual map, since a map constructed based on VSLAM has the advantage of fast robot positioning, the robot moves based on this map as the first map during navigation, and efficient positioning based on the first map and the environmental data can be ensured in most scenes.
In some embodiments, the second map is a second laser map or a second visual map that is pre-constructed. In this embodiment, the coordinate transformation relationship between the second map and the first map is determined as the association relationship between the first map and the second map. The specific construction of the second laser map may refer to the description of the first laser map and is not repeated here. In some examples, the specific construction of the second visual map may also refer to the description of the first visual map; in other examples, the second visual map is constructed by extracting image features through machine learning, so that the map is highly robust to environmental changes, but because the frame images are processed by machine learning, the amount of calculation during mapping and positioning is large. Therefore, using the second visual map as a supplement to positioning on the first map can improve the accuracy of robot positioning.
In other embodiments, the second map is a second laser map or a second visual map constructed based on the first map. Thus, in the present embodiment, the association relationship between the first map and the second map can be formed in the process of constructing the second map.
In some embodiments, the key frame described in this application refers to the frame at which a key action occurs in the robot's motion, or a frame representative of a motion change. For example, the redundancy of information between nearby frames is high, and a key frame is a representative frame among locally nearby frames; if the robot stays in place, it still acquires ordinary frames, but no new key frame is added because there is no motion change. Specifically, key frames of different types correspond to different types of acquired environment data: for example, if the acquired environment data includes image data acquired by an image pickup device, the key frames are visual key frames, also referred to as key frame images in some embodiments or examples disclosed in the present application; for another example, if the collected environmental data includes point cloud data acquired by a laser device, the key frames are key frame point clouds. It should be noted that the point cloud data constituting a key frame point cloud is not generated at a single time point; the point cloud data accumulated within a fixed time duration (e.g., 100 ms) or within a fixed rotation angle (e.g., one rotation of a single-line laser device) is generally used as one frame of point cloud, and the key frame point cloud is the frame of point cloud at which the key action occurs or a representative frame of point cloud.
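By way of illustration only, the following Python sketch shows one common way to decide whether the current frame is representative of a motion change and should therefore be recorded as a key frame. The 2D pose representation (x, y, theta), the threshold values, and the helper name are assumptions for the example and are not taken from the present application.

```python
import numpy as np

def should_insert_keyframe(last_kf_pose, current_pose,
                           min_translation=0.3, min_rotation_rad=np.deg2rad(15)):
    """Decide whether the current frame represents enough motion change to be a key frame.

    Poses are assumed to be (x, y, theta) tuples in the map frame; the thresholds
    are illustrative values, not taken from the patent text.
    """
    dx = current_pose[0] - last_kf_pose[0]
    dy = current_pose[1] - last_kf_pose[1]
    translation = np.hypot(dx, dy)
    # Wrap the heading difference into [-pi, pi] before comparing.
    dtheta = (current_pose[2] - last_kf_pose[2] + np.pi) % (2 * np.pi) - np.pi
    # A stationary robot keeps producing ordinary frames but no new key frames.
    return translation > min_translation or abs(dtheta) > min_rotation_rad
```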
In some embodiments, the feature information described herein includes feature points, feature lines, and the like, which may be represented by descriptors, for example. The positioning information is used for representing the pose of the robot in the physical space. In consideration of the fact that the robots have the same or different poses in the physical space at different times, the positioning information of the frame image, the frame point cloud, the key frame image, the key frame point cloud, or the like is also used to represent the pose of the robot in the physical space at a certain time, for example, the positioning information of the frame image is used to represent the pose of the robot in the physical space at the time corresponding to the frame image, or it can be understood that the positioning information of the frame image is used to represent the pose of the robot in the physical space when the frame image is acquired. The positioning information mapped on the first map or the second map also represents the map on which the pose of the robot is mapped in the physical space, considering the difference of reference bases. Meanwhile, in consideration of the above different times and different reference conditions, the positioning information of the frame image, the frame point cloud, the key frame image, or the key frame point cloud on the first map or the second map may also indicate the pose of the robot in the physical space at a certain time to be mapped onto the first map or the second map, for example, the positioning information of the key frame image on the first map indicates the pose of the robot in the physical space mapped onto the first map at the time corresponding to the key frame image, or it may also be understood that the pose of the robot is mapped onto the first map when the robot acquires the key frame image.
The robot is a mobile device equipped with a laser device and/or an image pickup device, and is used for executing operations based on the laser device and/or the image pickup device, wherein the executed operations comprise work tasks in an application scene, and the robot positioning method, the robot map building method, the robot map updating method and the like disclosed by the application. The robot refers to an autonomous mobile device with the ability to construct maps in physical space, including but not limited to: an unmanned aerial vehicle, an industrial robot, a home-accompanying mobile device, a medical mobile device, a home cleaning robot, a commercial cleaning robot, an intelligent vehicle, a patrol robot, and the like. In different application scenarios, the robot may be configured to perform corresponding tasks, for example, the robot is used indoors to perform floor cleaning work, and in this application scenario, the robot may also be referred to as a cleaning robot, a floor washing robot, an automatic cleaning robot, and the like. In other indoor scenarios, the robot may be a family accompanying mobile robot, a patrol robot, or a robot for delivering food/goods.
In some embodiments, the present application discloses a robot positioning method. In some examples, the robot positioning method is performed by a robot, and further, may be performed by a control device provided on the robot. In some examples, the robot positioning method may also be performed by a processing device configured with a server in data communication with the robot to control the robot to perform corresponding actions when performing the robot positioning method disclosed herein. In other examples, the robot positioning method may also be performed by a processing device configured on an electronic device, the electronic device being in data communication with the robot to control the robot to perform corresponding operations when performing the robot positioning method disclosed herein. The following embodiments are described as examples in which the robot positioning method is executed by a control device disposed in a robot.
Referring to fig. 1, a flowchart of a robot positioning method according to an embodiment of the present application is shown, where the robot positioning method includes: step S110 and step S120.
In step S110, the control device acquires current environmental data while the robot moves in the physical space based on the first map.
Depending on the types of the first map and the second map, the environmental data may include at least one of point cloud data and image data. In one embodiment, the first map and the second map are both set as laser maps, and the environment data includes point cloud data acquired by a laser device. In another embodiment, the first map and the second map are both set as visual maps, and the environment data includes image data acquired by the image pickup apparatus. In still other embodiments, one of the first map and the second map is set as a laser map and the other as a visual map, and the environment data includes point cloud data and image data acquired by the laser device and the image pickup device, respectively.
In one embodiment, the laser device is configured on the robot, the laser device comprising: single line lidar or multiline lidar, etc. The laser device can be controlled by the control device, and during the movement of the robot, the control device controls the laser device to emit laser beams at a certain rotation angle, so that the point cloud data of the surrounding environment can be acquired. Considering that single-point cloud data corresponding to one laser beam cannot be used alone to reflect a real physical space condition, in some examples, point cloud data accumulated within a fixed time duration (e.g., 100 ms) or point cloud data accumulated within a fixed rotation angle (e.g., one rotation of a single-line laser radar) is used as one frame of point cloud, and a timestamp is set for each frame of point cloud, and the control device acquires the point cloud data in a frame manner and also acquires the timestamp corresponding to each frame of point cloud while acquiring the point cloud.
In one embodiment, the image pickup device is disposed on the robot and includes: cameras, video cameras, camera modules integrated with an optical system or a CCD chip, camera modules integrated with an optical system and a CMOS chip, and the like. The image pickup device may be controlled by the control device: during the movement of the robot, the control device controls the image pickup device to photograph the surrounding environment, thereby acquiring image data of the surrounding environment. The image data comprises frame images, and each frame image carries a timestamp; the control device acquires the image data frame by frame, that is, the image pickup device sends the acquired image data to the control device in frames. In some examples, the image pickup apparatus transmits each acquired frame image to the control apparatus, for example every time it acquires one frame image; in some examples, the image pickup apparatus transmits frame images to the control apparatus at fixed time intervals, for example it acquires one frame image at each of times t0, t1, t2, t3, and t4, but transmits only the frame images acquired at times t0, t2, and t4; in other examples, the image pickup apparatus transmits frame images to the control apparatus at fixed frame intervals, for example it acquires consecutive frame images and transmits the first, fifth, ninth, thirteenth frame image, and so on, at an interval of three frames. The above is merely an example, and the present application does not limit how the image pickup apparatus transmits frame images to the control apparatus.
It should be understood that the current environment data in step S110 includes at least one of current point cloud data and current image data. The current point cloud data includes at least one frame of point cloud, and the current image data includes at least one frame of image, that is, the current environment data includes at least one of the at least one frame of point cloud and the at least one frame of image. In embodiments where both the first map and the second map are provided as laser maps, the current environmental data comprises at least one frame of point cloud. In an embodiment where both the first map and the second map are provided as visual maps, the current environmental data comprises at least one image. In an embodiment where one of the first map and the second map is configured as a laser map and the other is configured as a visual map, the current environmental data includes at least one frame of point cloud and at least one frame of image.
Because the laser device and the image pickup device have their own sampling frequency characteristics, the at least one frame of point cloud and/or at least one frame of image included in the current environmental data are not necessarily obtained at the current time, nor necessarily at the same time; it only needs to be ensured that the current environmental data is obtained within a time period whose cut-off point is the current time. In other words, during the movement of the robot in the physical space based on the first map, the current environmental data acquired by the control device includes at least one frame of point cloud and/or at least one frame of image acquired within a period of time ending at the current time. The time period may, for example, be a fixed value. In one example, the current environment data includes at least one frame of point cloud and at least one frame of image, the fixed value may be set to the maximum of the frame sampling periods of the laser device and the image pickup device, and the frame of point cloud and the frame of image acquired closest to the current time within that range are used as the current environment data. Of course, those skilled in the art can design the range of the time period according to different hardware devices and different application scenarios, and the present application is not limited thereto.
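As a non-limiting illustration of taking the current time as the cut-off point, the sketch below picks, from a buffered list of timestamped frames, the frame acquired closest to the current time within the time period. The data layout, the function name, and the commented usage are assumptions for the example.

```python
def latest_within_window(frames, current_time, window):
    """Return the frame acquired closest to current_time within [current_time - window, current_time].

    `frames` is assumed to be a list of (timestamp, data) pairs; returns None
    when nothing falls inside the window.
    """
    candidates = [f for f in frames if current_time - window <= f[0] <= current_time]
    return max(candidates, key=lambda f: f[0]) if candidates else None

# Hypothetical usage: the window is set to the larger of the two frame sampling periods.
# window = max(laser_frame_period, camera_frame_period)
# current_cloud = latest_within_window(point_cloud_frames, now, window)
# current_image = latest_within_window(image_frames, now, window)
```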
It should be noted that, in some embodiments or examples of the present application, in order to indicate a time relationship between environment data at a certain time period in the historical environment data and current environment data, an identifier representing a precedence order is added to the environment data, where, for example, the previous environment data is used to indicate the historical environment data acquired before the current environment data. Further, according to different types of the environment data, the environment data may be replaced with corresponding data included therein, such as a previous frame image, a previous frame point cloud, and the like, without changing the added identifiers representing the sequence.
It should be noted that the types of data included in the current environment data are not limited to the above embodiment, and the environment data may include data respectively corresponding to the first map and the second map.
Referring to fig. 1, in step S120, the control device matches the environment data with the first map, so as to select to obtain the positioning information of the robot currently mapped in the first map based on the environment data and the first map, or obtain the positioning information of the robot currently mapped in the first map based on the environment data and the second map according to the matching result.
Referring to fig. 2, which is a flowchart illustrating step S120 according to an embodiment of the present disclosure, as shown in the figure, the step S120 includes step S121, step S122, and step S123 or step S124.
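For orientation, the following Python sketch outlines the branching of step S120 only. The helper names (match_with_map, meets_matching_condition, localize_with_map) and the association object that converts poses between the two maps are hypothetical; this is a sketch of the control flow, not the implementation of the present application.

```python
def locate_in_first_map(env_data, first_map, second_map, map_association):
    """Select between step S123 and step S124 according to the matching result of S121/S122."""
    match_result = match_with_map(env_data, first_map)          # step S121
    if meets_matching_condition(match_result):                  # step S122
        # Step S123: the first map alone is reliable enough for positioning.
        return localize_with_map(env_data, first_map)
    # Step S124: fall back to the second map and convert the resulting pose
    # back into first-map coordinates through the stored association relation.
    pose_in_second = localize_with_map(env_data, second_map)
    return map_association.to_first_map(pose_in_second)
```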
In step S121, the control device matches the current environmental data with the first map. Specifically, in the matching of the current environment data with the first map, the control device matches data of the type that is consistent with the first map in the current environment data with the first map. For example, in an example where the first map is a laser map, the control device matches at least one frame of point cloud in the current environmental data with the first map; in an example where the first map is a visual map, the control means matches at least one frame of image in the current environment data with the first map.
Taking the first map as the first visual map as an example, the control device matches the current environmental data with the first map, and performs feature matching on the at least one frame of image and the positioning data set of the first map. The set of localization data includes keyframe images and landmark points. The key frame image includes feature information and positioning information. The positioning information is used for representing the pose of the robot when acquiring the key frame image, in other words, the positioning information can represent the observation relationship between the landmark point and the corresponding key frame image. The landmark points are three-dimensional points mapped to a physical space by feature points in the key frame images, and the feature points of different key frame images can be mapped to the same landmark point.
Specifically, the control device performs feature matching on the at least one frame of image and the positioning data set of the first map to determine, in the positioning data set, a key frame image that matches the frame image and a feature matching relationship between the frame image and the key frame image. The matched key frame image refers to a key frame that has a similarity with the at least one frame image; for example, a key frame that has an overlapping area with the at least one frame image is determined to be similar and is therefore taken as a matched key frame image. The feature matching relationship includes a one-to-one matching relationship between feature points or a one-to-one matching relationship between descriptors representing feature information, which is not limited in the present application.
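As one possible realization of the one-to-one feature matching relationship, the sketch below matches binary descriptors of a frame image against those of a key frame image using OpenCV's brute-force matcher and a ratio test. The descriptor type (ORB-style binary descriptors) and the ratio value are assumptions, since the present application does not prescribe a particular feature or matcher.

```python
import cv2

def match_frame_to_keyframe(frame_desc, keyframe_desc, ratio=0.75):
    """Return one-to-one descriptor matches between a frame image and a key frame image.

    `frame_desc` and `keyframe_desc` are assumed to be binary descriptor arrays
    (e.g. ORB) extracted from the two images.
    """
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(frame_desc, keyframe_desc, k=2)
    good = []
    for pair in knn:
        # Keep a match only when it is clearly better than the second-best candidate.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good  # each cv2.DMatch pairs one frame feature with one key-frame feature
```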
Taking the first map as the first laser map as an example, the current environmental data acquired by the control device includes at least one frame of point cloud, and the control device matches the current environmental data with the first map, that is, matches the at least one frame of point cloud with the first map. In one example, the laser map is set as a grid map, and the control device compares the at least one frame of point cloud with the grid map, for example by means of brute-force matching, so as to obtain a confidence level of the frame of point cloud on the grid map, wherein the confidence level represents the probability that an accurate pose is obtained from the comparison of the frame of point cloud with the grid map.
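A minimal sketch of how a confidence level for one candidate pose might be computed against a grid map is given below, assuming an occupancy array, a scan given as 2D points in the sensor frame, and a simple hit-ratio score; all of these are illustrative assumptions. Brute-force matching would evaluate this score over a window of candidate poses and keep the best one.

```python
import numpy as np

def scan_confidence(grid, scan_points, pose, resolution, origin):
    """Score a candidate pose by the fraction of scan points that land on occupied cells.

    `grid` is a 2D occupancy array (rows = y, cols = x), `scan_points` an (N, 2)
    array in the sensor frame, `pose` an (x, y, theta) candidate, `origin` the
    world coordinate of grid cell (0, 0).
    """
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    # Rotate the scan into the map frame and translate it to the candidate position.
    world = scan_points @ np.array([[c, s], [-s, c]]) + np.array([x, y])
    cells = np.floor((world - origin) / resolution).astype(int)
    inside = (cells[:, 0] >= 0) & (cells[:, 0] < grid.shape[1]) & \
             (cells[:, 1] >= 0) & (cells[:, 1] < grid.shape[0])
    hits = grid[cells[inside, 1], cells[inside, 0]] > 0.5
    return hits.mean() if len(hits) else 0.0
```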
Referring to fig. 2, in step S122, the control device determines whether a matching result of the feature matching between the current environment data and the first map meets a preset matching condition. If it is determined that the matching condition is satisfied, step S123 is executed, and if it is determined that the matching condition is not satisfied, step S124 is executed.
In some embodiments, the matching condition includes that the number of key frames in the first map matching the current environment data reaches a set threshold. For example, in the embodiment in which the first map is set as the first visual map in step S121, matching key frame images can be obtained, and the control device compares the number of matching key frame images with a set threshold value, and considers that the matching condition is reached when the number reaches the threshold value, and considers that the matching condition is not reached when the number does not reach the threshold value. Of course, the first map may also be set as the first laser map, and the control device compares the number of the matched keyframe point clouds with a set threshold, and when the number reaches the threshold, it is determined that the matching condition is reached, and when the number does not reach the threshold, it is determined that the matching condition is not reached.
In some embodiments, the matching condition includes that the number of pairs of feature points reaches a set threshold. For example, in the embodiment where the first map is set as the first visual map in step S121, the feature matching relationship between the first visual map and the key frame image in the one frame image may be determined, and the control device determines that the number of pairs of feature points reaches the set threshold value according to the feature matching relationship, that is, the matching condition is considered to be reached, otherwise, the matching condition is considered to be not reached. For example, if the set threshold is 100 pairs, it is determined that the matching condition is reached when the number of pairs of feature points reaches 100 pairs, and it is determined that the matching condition is not reached when the number of pairs of feature points does not reach 100 pairs.
In some embodiments, the matching condition comprises a confidence reaching a set threshold. For example, in the embodiment that the first map is set as the grid map in step S121, the confidence of the frame of point cloud in the current environment data on the first map may be obtained, the control device further compares the confidence with a set threshold, and when the confidence reaches the set threshold, it is determined that the matching condition is reached, otherwise, it is determined that the matching condition is not reached.
The matching conditions described in any embodiment of step S122 above are merely examples, and a person skilled in the art may set different matching conditions according to different types and matching manners of actually setting a map or combine the matching conditions of any embodiment above to set a new matching condition, which is not limited in this application.
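The following sketch merely combines the example matching conditions above into a single check. The field names and the default thresholds (other than the 100 feature-point pairs mentioned in the example) are placeholders; in practice the applicable condition depends on the type of the first map and how the matching result is represented.

```python
def meets_matching_condition(match_result,
                             min_keyframes=1, min_feature_pairs=100, min_confidence=0.6):
    """Evaluate the matching result of step S122 against preset matching conditions.

    `match_result` is assumed to be a dict whose optional fields depend on whether
    the first map is a visual map or a laser grid map.
    """
    if match_result.get("matched_keyframes") is not None:
        if len(match_result["matched_keyframes"]) < min_keyframes:
            return False
    if match_result.get("feature_pairs") is not None:
        if match_result["feature_pairs"] < min_feature_pairs:
            return False
    if match_result.get("confidence") is not None:
        if match_result["confidence"] < min_confidence:
            return False
    return True
```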
Referring to fig. 2, when it is determined based on step S122 that the matching result of the current environment data and the first map meets the matching condition, that is, the positioning of the robot according to the first map is reliable, step S123 is performed. In step S123, the control device acquires the positioning information of the robot currently mapped on the first map based on the current environment data and the first map.
In one embodiment, in step S123, the control device matches the current environment data with the first map to obtain the positioning information of the robot mapped in the first map. Here, the process of matching the current environment data with the first map may include part of the process of step S121. For example, in the embodiment where the first map is set as the first visual map in step S121, a matched key frame image and a feature matching relationship may be obtained; then, in step S123, the control device may further determine, by a PnP (Perspective-n-Point) method and according to the matched key frame image, the feature matching relationship, and the landmark points corresponding to the key frame image in the positioning data set of the first map, the pose of the robot when acquiring the current environment data, that is, the positioning information of the current robot mapped in the first map. For another example, in the embodiment where the first map is set as the first laser map in step S121, the current environmental data is compared with the first map, and then, in step S123, the control device further determines the positioning information of the current robot mapped on the first map according to the comparison result.
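As an illustration of the PnP step, the sketch below recovers a camera pose from matched 2D feature points and the corresponding 3D landmark points of the positioning data set using OpenCV's RANSAC-based PnP solver. The use of RANSAC and the exact parameterization of the returned pose are implementation choices assumed for the example, not mandated by the present application.

```python
import cv2
import numpy as np

def pose_from_pnp(landmark_points_3d, image_points_2d, camera_matrix, dist_coeffs=None):
    """Solve the camera pose from 2D-3D correspondences (Perspective-n-Point).

    `landmark_points_3d` are landmark coordinates from the positioning data set and
    `image_points_2d` the matched feature points in the current frame image.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(landmark_points_3d, dtype=np.float64),
        np.asarray(image_points_2d, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)   # rotation taking map (world) points into the camera frame
    return R, tvec
```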
It should be noted that, in some embodiments, the step S123 further includes optimizing the positioning information of the current robot mapping in the first map. In some examples, the control device further optimizes the positioning information mapped in the first map by the current robot by using the reprojection error as a constraint term, so that the optimized positioning information minimizes the reprojection error, wherein the definition of the reprojection error is described in detail later.
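For reference, the reprojection error used as a constraint term can be sketched as the pixel distance between an observed feature point and the projection of its landmark under the current pose estimate. The simple pinhole model without distortion is an assumption for the example; pose optimization would adjust R and t so that the sum of these errors over all matches is minimized.

```python
import numpy as np

def reprojection_error(landmark_3d, observed_2d, camera_matrix, R, t):
    """Pixel distance between an observed feature point and the projection of its landmark."""
    p_cam = R @ np.asarray(landmark_3d) + t.reshape(3)   # landmark in the camera frame
    uvw = camera_matrix @ p_cam
    projected = uvw[:2] / uvw[2]                          # perspective division
    return float(np.linalg.norm(projected - np.asarray(observed_2d)))
```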
Referring to fig. 2, when it is determined that the matching condition is not met according to step S122, that is, it indicates that the positioning of the robot according to the first map is not reliable, step S124 is performed. In step S124, the control device acquires the positioning information of the robot currently mapped on the first map based on the current environment data and the second map. Wherein the second map has an association relation with the first map.
In one embodiment, the step S124 includes: converting the positioning information of the robot mapped in the second map into the positioning information in the first map based on the association relationship. The positioning information of the robot mapped in the second map is determined by the control device based on matching the current environment data with the second map.
In one embodiment, the control device matches the data of the type consistent with the second map in the current environment data with the second map. For example, in the example where the second map is a laser map, the control device matches at least one frame of point cloud in the current environment data with the second map; in an example where the second map is a visual map, the control device matches at least one image in the current environment data with the second map.
In one embodiment, the second map is set as a second visual map, and the control device matches the current environment data with the second map by feature matching the at least one image with a positioning data set of the second map. Specifically, the control device performs feature matching on the at least one frame of image and a positioning data set of a second map, so as to obtain the pose of the at least one frame of image in the current environment data acquired by the robot based on the PnP method, that is, the positioning information mapped in the second map for the robot at present. In another embodiment, in which the second map is set as the second visual map, the control device matches the at least one image with the second map in a machine learning manner, so as to obtain the positioning information of the robot currently mapped in the second map.
In an embodiment, the second map is set as a second laser map, and the control device matches the current environment data with the second map, that is, matches the at least one frame of point cloud with the second map. In one example, the second laser map is set as a grid map, and the control device compares the at least one frame of point cloud with the grid map, for example by means of brute-force matching, so as to obtain the position information of the frame of point cloud on the grid map, that is, the positioning information of the current robot mapped in the second map.
It should be understood that, the manner of obtaining the positioning information of the robot mapped in the second map according to the current environment data and the second map is various, and the above embodiments of the present application are only examples and should not be construed as limiting the present application.
In an embodiment, the association relationship between the first map and the second map is set as a coordinate transformation relationship between the first map and the second map, and in this embodiment, the control device converts the current positioning information of the robot in the second map into the positioning information in the first map according to the coordinate transformation relationship. The coordinate transformation relationship is determined by repositioning key frames of the first map and key frames of the second map. In view of this, in some embodiments, the robot positioning method disclosed herein further comprises: and repositioning each key frame of the first map and each key frame of the second map to determine the coordinate transformation relation between the first map and the second map.
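Assuming the coordinate transformation relationship is stored as a 3×3 homogeneous transform between the two 2D map frames, converting positioning information from the second map into the first map can be sketched as follows; the pose representation (x, y, theta) and the variable names are assumptions for the example.

```python
import numpy as np

def pose_to_first_map(pose_in_second, T_first_from_second):
    """Convert a 2D pose (x, y, theta) from second-map coordinates into first-map coordinates.

    `T_first_from_second` is a 3x3 homogeneous transform representing the
    coordinate transformation relationship between the two maps.
    """
    x, y, theta = pose_in_second
    p = T_first_from_second @ np.array([x, y, 1.0])
    # The heading changes by the rotation angle encoded in the transform.
    dtheta = np.arctan2(T_first_from_second[1, 0], T_first_from_second[0, 0])
    return p[0], p[1], theta + dtheta
```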
Referring to fig. 3, which is a flowchart illustrating the determination of the coordinate transformation relationship between the first map and the second map according to an embodiment of the present application, as shown in the figure, the step of repositioning the key frames of the first map and the key frames of the second map to determine the coordinate transformation relationship between the first map and the second map includes step S210 and step S220.
In step S210, the control device relocates each key frame in the first map in the second map, and relocates each key frame in the second map in the first map, so as to obtain the paired data set. Each key frame in the first map and the second map can be, for example, a key frame image or a key frame point cloud, respectively, according to the type of the first map and the second map.
Wherein the pairing data set comprises positioning information of the successfully relocated key frame on the first map and positioning information of the successfully relocated key frame on the second map. In view of performing the relocation in the second map and the relocation in the first map respectively in step S210, in some embodiments, the pairing data set includes first pairing data and second pairing data, the first pairing data includes positioning information on the first map and the second map respectively of the keyframes successfully relocated in the second map, and the second pairing data includes positioning information on the first map and the second map respectively of the keyframes successfully relocated in the first map.
The following describes the relocation on the first map and the second map, respectively, with reference to the drawings.
Referring to fig. 4, a schematic diagram of the present application is shown for repositioning key frames in a first map in a second map in an embodiment, where the key frames in the first map are illustrated as three frames, as shown in the figure, a node 1 in the first map is used to represent a pose (i.e., positioning information) of the first key frame in the first map, and a pose (i.e., positioning information on the second map) of the first key frame after repositioning in the second map is represented by a node 1'; node 2 is used to represent the pose of the second key frame in the first map, and the pose of the second key frame after repositioning in the second map (i.e. the positioning information of the second key frame on the second map) is represented by node 2'; node 3 is used to represent the pose of the third key frame in the first map, and the pose of the third key frame after repositioning in the second map (i.e. the positioning information of the third key frame on the second map) is represented by node 3', then the first pairing data in fig. 4 includes: (node 1, node 1 '), (node 2, node 2 '), (node 3, node 3 ').
Please refer to fig. 5, which is a schematic diagram of repositioning the key frames of the second map in the first map according to an embodiment of the present application; the second map is likewise illustrated with three key frames. As shown in the figure, node 4 in the second map represents the pose (i.e., positioning information) of the fourth key frame in the second map, and the pose of the fourth key frame after repositioning in the first map (i.e., its positioning information on the first map) is represented by node 4'; node 5 represents the pose of the fifth key frame in the second map, and the pose of the fifth key frame after repositioning in the first map is represented by node 5'; node 6 represents the pose of the sixth key frame in the second map, and the pose of the sixth key frame after repositioning in the first map is represented by node 6'. The second pairing data in fig. 5 thus includes: (node 4, node 4'), (node 5, node 5'), (node 6, node 6').
With continued reference to fig. 3, in step S220, the control device determines the coordinate transformation relationship between the first map and the second map based on the paired data sets.
In one embodiment, the control device determines the coordinate transformation relationship between the first map and the second map based on the positioning information of the key frames in the pairing data set on the first map and their positioning information on the second map. For example, the control device determines, by an ICP (Iterative Closest Point) algorithm, a transformation matrix between the point pairs (each pair consisting of the positioning information of a key frame on the first map and its positioning information on the second map) and uses it as the coordinate transformation relationship between the first map and the second map. Of course, other data registration algorithms can be used by those skilled in the art, in light of the present application, to determine the coordinate transformation relationship between the two maps, and the present application is not limited to this calculation manner.
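Because the pairing data set already supplies the correspondences, the closed-form least-squares alignment that ICP would otherwise iterate can be applied to the point pairs directly. The following NumPy sketch is our own illustration, with made-up coordinates standing in for the node positions of figs. 4 and 5; it estimates such a rigid transform with the SVD-based Kabsch method.

```python
import numpy as np

def estimate_rigid_transform(pts_second: np.ndarray, pts_first: np.ndarray):
    """Least-squares rigid transform (R, t) mapping points in the second map
    onto their paired points in the first map (Kabsch/SVD solution)."""
    c2, c1 = pts_second.mean(axis=0), pts_first.mean(axis=0)
    H = (pts_second - c2).T @ (pts_first - c1)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c1 - R @ c2
    return R, t

# Illustrative paired positions (second-map poses vs. their first-map poses).
second = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [2.0, 1.5]])
first = np.array([[0.3, 0.1], [1.3, 0.15], [1.25, 1.1], [2.2, 1.7]])
R, t = estimate_rigid_transform(second, first)
print(R, t)
```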
In the case where the first map and the second map differ too much, even if the coordinate conversion relationship between the two maps is obtained and used to convert the positioning information on the second map into positioning information on the first map, the resulting positioning information on the first map is not reliable. In view of this, in some embodiments, before step S220, the method further includes a step of judging whether the pairing data set meets a preset condition, and step S220 is executed only when the pairing data set is judged to meet the preset condition. Specifically, in an example, the preset condition is that the number of successfully relocated key frames in the pairing data set reaches a preset threshold. When it is determined from the pairing data set that the number of successfully relocated key frames reaches the preset threshold, the control device continues to perform step S220; when the number does not reach the preset threshold, the control device does not perform step S220. Further, in step S120, when the association relationship between the second map and the first map is not available (i.e., there is no coordinate transformation relationship between the first map and the second map), the control device still selects to obtain the positioning information of the robot currently mapped in the first map based on the environment data and the first map.
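A sketch of the preset-condition check might look as follows; the threshold value and the function names are assumptions made for illustration only.

```python
def pairing_set_is_reliable(pairing_data_set, preset_threshold=10):
    """Preset condition: enough key frames were successfully relocated for the
    coordinate transformation relationship to be trusted (step S220);
    otherwise the robot keeps positioning directly against the first map."""
    return len(pairing_data_set) >= preset_threshold
```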
In another embodiment, the association relationship between the first map and the second map is established during construction of the second map based on the first map, so that the control device can use the positioning information of the robot currently mapped in the second map as the positioning information in the first map.
In view of this, in some embodiments, the present application further discloses a method for constructing a map by a robot. The method may be performed before step S110, or while step S110 and step S120 are being performed, as part of the robot positioning method disclosed in the present application, or it may be performed separately, with the successfully constructed second map then used as the second map in the robot positioning method disclosed herein. In some examples, the method for constructing a map by the robot is performed by the robot, and further, may be performed by a control device provided on the robot. In some examples, the method may also be performed by a processing device configured on a server, with the server in data communication with the robot so as to control the robot to perform the corresponding actions when the method for constructing a map by the robot disclosed in the present application is performed. In other examples, the method may also be performed by a processing device configured on an electronic device that is in data communication with the robot so as to control the robot to perform the corresponding operations. The following embodiments are described by taking the case in which the method for constructing a map by the robot is executed by a control device disposed on the robot as an example.
Referring to fig. 6, a flowchart of a method for a robot to construct a map according to an embodiment of the present disclosure is shown, and as shown in the drawing, the method for the robot to construct the map includes steps S310, S320, and S330.
In step S310, the control device acquires environmental data while the robot moves in the physical space based on the first map. The data type and the obtaining manner of the environmental data refer to the related description in step S110 of the robot positioning method, which is not described herein again.
Wherein positioning information of the robot mapped onto the first map at different times is available based on the environment data, so that the robot can move in the physical space based on the first map. In view of this, the method for the robot to construct a map further includes: matching the acquired current environment data with the first map to obtain the positioning information of the robot currently mapped onto the first map. The process and manner of obtaining the positioning information of the robot currently mapped on the first map in this step are similar to those in step S123 of the robot positioning method; please refer to the description of step S123 in any of the embodiments above, which is not repeated herein.
Referring to fig. 6, in step S320, the control device analyzes the moving pose of the robot in the physical space based on the environment data to obtain a key frame and positioning information thereof.
In an embodiment, before step S320, the method further includes: constructing an initialization map when it is judged from the current environment data that no initialization map exists. Specifically, taking the second map to be constructed as a visual map as an example, the step of constructing the initialization map when it is judged from the current environment data that no initialization map exists includes: constructing the initialization map based on at least two frame images when it is judged from the frame image in the current environment data that no initialization map exists. The initialization map is the initial map that is constructed first; it provides the initial coordinate system of the second map, the initial landmark points, and the like, and may also provide the correspondence between the coordinate system of the second map and the image coordinate system, and the like. In one example, the control device may perform feature recognition and matching on a frame image in the current environment data (which may also be referred to as the current frame image) to determine whether an initialization map exists. When judging that no initialization map exists, the control device constructs the initialization map according to the positions of the matched features of the current frame image and the previous frame image and the movement information obtained from the previous frame image to the current frame image. It should be understood that, when the current frame image is the first frame image, the pose of the robot corresponding to the first frame image should be used as the starting point of the coordinate system for constructing the second map; the control device continues to acquire current environment data during the movement, at which point the first frame image becomes the previous frame image and the newly acquired frame image in the current environment data becomes the current frame image, and the control device then constructs the initialization map based on this current frame image and the previous frame image.
In order to make the coordinate system of the second map consistent with the coordinate system of the first map, in some embodiments, the step of constructing the initialization map based on at least two frame images includes: taking the positioning information of the robot mapped onto the first map at the moment corresponding to the first frame image as the positioning information of the first frame image. Specifically, as described above, the pose of the robot corresponding to the first frame image is the starting point of the coordinate system for constructing the second map. By using the positioning information of the robot mapped onto the first map at the moment corresponding to the first frame image as the positioning information of the first frame image when constructing the initialization map, the initial coordinate system of the initialization map corresponding to the second map is made consistent with the coordinate system of the first map, which in turn ensures that the coordinate system of the local map that is continuously constructed from the initialization map, and that finally forms the second map, can remain consistent with the coordinate system of the first map.
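A minimal sketch of this seeding step, under the assumption that the second map is a visual map and that key frames and landmarks are kept in a simple in-memory structure (the layout below is ours, not the patent's):

```python
def initialize_second_map(first_frame_image, pose_on_first_map):
    """Seed the second (visual) map so that its coordinate system coincides
    with the first map's: the first key frame adopts the robot's pose on the
    first map at the corresponding moment instead of an arbitrary origin."""
    return {
        "keyframes": [{"image": first_frame_image, "pose": pose_on_first_map}],
        "landmarks": [],
    }
```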
Referring to fig. 7, which is a flowchart illustrating step S320 in an embodiment of the present application, as shown in the figure, the step S320 includes: step S321 and step S322.
In step S321, the control device analyzes the robot movement pose based on the current frame image and the previous frame image to determine the positioning information of the current frame image. In some examples, the control device first obtains initial positioning information corresponding to the current frame image based on the current frame image and the previous frame image, and then optimizes the initial positioning information to obtain the positioning information of the current frame image. In some examples, the control device may track the previous frame image based on the current frame image to obtain the initial positioning information corresponding to the current frame image, or may determine the relative displacement between the current frame image and the previous frame image from the movement information of the robot to determine the initial positioning information corresponding to the current frame image. In some examples, optimizing the initial positioning information may be performed by matching the current frame image with the constructed portion of the second map (also referred to as the local map), so that the initial positioning information is refined against the map to obtain the positioning information corresponding to the current frame image.
In step S322, the control means makes a judgment on the current frame image to determine whether to take it as the key frame image.
Specifically, in some examples, the control device uses the current frame image as a key frame image if the gap between the current frame image and the most recently determined key frame image reaches a preset frame number or a preset duration; for example, if the preset frame number is 20 frames, the current frame image is used as a key frame image when it is 20 frames away from the most recently determined key frame image. In this example, setting a preset frame number or a preset duration as the determination condition can indicate that the scene the robot moves through has changed, and therefore a key frame needs to be added. In other examples, the control device uses the current frame image as a key frame image when judging that the number of feature points in the current frame image exceeds a preset feature point number. In still other examples, the control device may further determine whether to use the current frame image as a key frame image according to how busy the local map construction thread is, in which case the current frame image is used as a key frame image only when the local map construction thread is idle. It should be noted that the determination manners in the above examples are all examples; a person skilled in the art may design the criterion for using a current frame image as a key frame image according to actual requirements, and may also combine the determination conditions in the above examples, as shown in the sketch below, which is not limited in this application.
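The sketch below shows one possible combination of these criteria; the thresholds and the way the conditions are combined are assumptions for illustration, not the patent's prescribed rule.

```python
def should_insert_keyframe(frame_index, last_keyframe_index, num_feature_points,
                           mapping_thread_idle,
                           preset_frame_gap=20, preset_feature_count=100):
    """Insert a key frame when enough frames have passed since the last key
    frame, or the current image carries enough feature points, provided the
    local map construction thread is idle."""
    gap_reached = (frame_index - last_keyframe_index) >= preset_frame_gap
    enough_features = num_feature_points >= preset_feature_count
    return (gap_reached or enough_features) and mapping_thread_idle
```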
Referring to fig. 6, in step S330, the control device constructs a local map of the physical space based on the keyframe and the positioning information thereof, and optimizes the local map according to a constraint term, so that a coordinate system of the local map is consistent with a coordinate system of the first map.
The local map is a way of describing the intermediate form of the second map before its construction is complete; it refers to the map, corresponding to a part of the physical space, that is constructed incrementally from the initialization map while the robot moves in the physical space. In other words, the local map reflects part of the information in the physical space.
Referring to fig. 8, which is a flowchart illustrating step S330 in an embodiment of the present application, as shown, step S330 includes: step S331 and step S332.
In step S331, the control device inserts the key frame into the local map, and updates landmark points in the local map based on the inserted key frame and positioning information thereof. Each embodiment or example of step S330 is described below by taking the key frame as a key frame image as an example.
Inserting key frame images into the local map prepares for updating the landmark points in the local map. In one embodiment, updating landmark points in the local map comprises: deleting landmark points and generating new landmark points. In one example, the control device cleans up useless landmark points in the local map; for example, if a landmark point is observed by fewer than 3 key frame images, the landmark point is considered useless and the control device deletes it. In one example, the control device performs feature matching between the inserted key frame image and the historical key frame images in the local map and performs triangulation according to the positioning information of the key frame images to generate new landmark points.
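The two update operations can be sketched as follows (our own illustration: landmark records are simplified to observation lists, and the triangulation is a plain linear DLT from two key frame images):

```python
import numpy as np

def prune_landmarks(landmarks: dict, min_observations: int = 3) -> dict:
    """Delete landmark points observed by fewer than `min_observations`
    key frame images (the 'fewer than 3 frames' rule described above)."""
    return {lid: lm for lid, lm in landmarks.items()
            if len(lm["observations"]) >= min_observations}

def triangulate_point(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one matched feature from two key frame
    images, given their 3x4 projection matrices P1 and P2."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # homogeneous -> Euclidean landmark point
```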
In step S332, the control device optimizes the local map according to the constraint term, so that the positioning information of the key frames in the optimized local map keeps the constraint term within the ideal threshold. The constraint term at least comprises the error between the positioning information of a key frame in the local map and the positioning information of the robot mapped onto the first map at the same moment. The description is continued by taking the case where the key frame is a key frame image as an example.
To obtain the constraint term in step S332, in some embodiments, the method for the robot to construct the map further includes: determining, for a key frame, the positioning information of the robot mapped onto the first map at the same moment. The description is continued by taking the case where the key frame is a key frame image as an example.
In an embodiment, the first map and the second map to be constructed are the same type of map, for example both are visual maps, and the image data on which the two maps are based are frame images from the same image capturing device. Since the robot continuously determines its positioning mapped onto the first map from the acquired frame images while moving in the physical space based on the first map, the control device only needs to locate, according to the timestamp of a key frame image in the local map, the frame image in the environmental data at the same moment, and can then directly obtain the positioning information of the robot mapped onto the first map at that moment.
It is contemplated that in some embodiments the first map and the second map are different types of maps. In that case the acquired environmental data may include data of the same type or of different types; different types of data are typically acquired by different types of devices, and different types of devices typically have different sampling frequencies. Therefore, another type of data in the environmental data, such as a frame point cloud acquired at the same moment as the key frame image, may not be directly obtainable through the timestamp of the key frame image in the local map, that is, the positioning information of the robot mapped onto the first map at that same moment may not be directly obtainable.
In view of this, in an embodiment in which the first map is a laser map and the second map is a visual map, the step of determining, for a key frame image, the positioning information of the robot mapped onto the first map at the same moment comprises: determining, based on the timestamp of the key frame image, the two frame point clouds in the environmental data that are adjacent to that timestamp, so as to determine, based on the two adjacent frame point clouds, the positioning information of the robot mapped onto the first map at the same moment as the key frame image.
Referring to fig. 9, which is a schematic diagram of determining the positioning information of the robot mapped onto the first map at the same moment as a key frame image according to an embodiment of the present disclosure, as shown in the figure, during the movement of the robot in the physical space, the acquired environmental data include frame point clouds acquired by the laser device at a frequency 1/T1 (i.e., with period T1) and frame images acquired by the image capturing device at a frequency 1/T2 (i.e., with period T2). The timestamp of the key frame image in the local map is T2_n. Based on the timestamp T2_n, the control device determines the two frame point clouds in the environmental data that are temporally adjacent to T2_n, namely the frame point cloud with timestamp T1_n-1 and the frame point cloud with timestamp T1_n in fig. 9, then determines the positioning information of the robot mapped onto the first map at the moments corresponding to these two frame point clouds, and derives the positioning information of the robot mapped onto the first map at moment T2_n through an interpolation algorithm.
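A simple linear interpolation of the two first-map poses at the adjacent point cloud timestamps could look like the sketch below; it is our own illustration with planar poses and made-up timestamps, and in practice the interpolation may be carried out on whatever pose representation the first map uses.

```python
import numpy as np

def interpolate_pose(t_query, t_before, pose_before, t_after, pose_after):
    """Interpolate the robot pose on the first map at the key frame timestamp
    t_query from the poses at the two adjacent point cloud timestamps; the
    heading is interpolated along the shorter arc."""
    alpha = (t_query - t_before) / (t_after - t_before)
    x = (1 - alpha) * pose_before[0] + alpha * pose_after[0]
    y = (1 - alpha) * pose_before[1] + alpha * pose_after[1]
    dtheta = np.arctan2(np.sin(pose_after[2] - pose_before[2]),
                        np.cos(pose_after[2] - pose_before[2]))
    return (x, y, pose_before[2] + alpha * dtheta)

# Key frame at T2_n = 0.37 s, adjacent point clouds at T1_n-1 = 0.30 s and T1_n = 0.40 s.
print(interpolate_pose(0.37, 0.30, (1.00, 0.00, 0.00), 0.40, (1.10, 0.02, 0.05)))
```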
It should be noted that, in some embodiments, if the control device cannot obtain from the environmental data two frame point clouds adjacent to the timestamp of the key frame image, the control device either waits for a period of time and then acquires the point clouds, or deletes the current key frame image. In some examples, when the frame point cloud adjacent to and after the timestamp has not yet been obtained from the environmental data, the control device waits for a period of time before acquiring it; for example, when the control device controls the laser device and the image capturing device to acquire current environmental data respectively and the frame image is obtained before the frame point cloud, the control device waits for a period of time and then obtains the adjacent frame point clouds before and after the timestamp of the key frame image. In other examples, the control device deletes the current key frame image when the frame point cloud adjacent to and before the timestamp cannot be obtained from the environmental data; for example, if the key frame image is the very first environmental data acquired, that is, there is no frame point cloud data before the timestamp of the key frame image, the control device discards the key frame image.
In one embodiment, optimizing the local map in step S332 includes: optimizing the positioning information of the key frame images in the local map. Specifically, the control device performs the optimization with the goal of keeping the constraint term (i.e., including the error between the positioning information of each key frame image and the positioning information of the robot mapped onto the first map at the same moment) within the ideal threshold. In other words, during construction of the local map, the positioning error between the second map and the first map, that is, the error between the coordinate system of the local map and that of the first map, is continuously corrected, so that the positioning information on the second map formed once the local map is fully constructed can be used directly as positioning information on the first map. For example, if the local map contains 10 key frame images including the newly inserted key frame image, the control device may obtain, based on the timestamps of the 10 key frame images, the positioning information of the robot mapped onto the first map at the corresponding moments, and use the errors between all pairs of positioning information paired at the same moment (i.e., the positioning information of a key frame image in the local map and the positioning information of the robot mapped onto the first map at the same moment) as the constraint term for optimizing the local map, so as to obtain the optimal positioning information of each key frame image such that the error of each pair of positioning information is within the ideal threshold, as illustrated in the sketch below. The ideal threshold indicates a tolerance range for the constraint term; an optimal solution (i.e., the optimal positioning information of the key frame images) obtained within this range is within the accuracy range.
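The sketch below illustrates such an optimization with SciPy's least-squares solver. It is our own simplification: poses are planar, orientation errors use a small-angle treatment, and a relative-motion term between consecutive key frames is added so that the alignment term alone does not decide the result; all weights and values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(flat_poses, first_map_poses, odom_deltas, w_align=1.0, w_odom=1.0):
    """Constraint terms: (a) error between each key frame pose in the local map
    and the robot pose mapped onto the first map at the same moment, and
    (b) a relative-motion term between consecutive key frames."""
    poses = flat_poses.reshape(-1, 3)
    align = w_align * (poses - first_map_poses)
    odom = w_odom * ((poses[1:] - poses[:-1]) - odom_deltas)
    return np.concatenate([align.ravel(), odom.ravel()])

# 10 key frames, as in the example above (all values are illustrative).
rng = np.random.default_rng(0)
first_map_poses = np.cumsum(np.tile([0.1, 0.0, 0.01], (10, 1)), axis=0)
odom_deltas = np.diff(first_map_poses, axis=0) + rng.normal(0, 0.005, (9, 3))
initial = first_map_poses + rng.normal(0, 0.02, first_map_poses.shape)

result = least_squares(residuals, initial.ravel(), args=(first_map_poses, odom_deltas))
optimized_keyframe_poses = result.x.reshape(-1, 3)
print(optimized_keyframe_poses)
```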
In one embodiment, the constraint term further comprises: the reprojection errors of the landmark points in the local map. Please refer to fig. 10, which is a schematic diagram of the reprojection error in an embodiment of the present application. The reprojection error is the error between the projection of a real landmark point Pj in three-dimensional space onto an image, for example the projection points X1j and X2j on the two frame images in fig. 10 (the projection points are also referred to as pixel points), and the reprojection points, for example X1j' and X2j' in fig. 10, obtained by projecting the calculated landmark point Pj' onto the images again according to the positioning information of the corresponding frame images. That is, in this embodiment, the reprojection error is further included in the constraint term during optimization of the local map, so that the optimized positioning information of each key frame image keeps the sum of the reprojection errors and the errors between the positioning information of each key frame image and the positioning information of the robot mapped onto the first map at the same moment within the ideal threshold.
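A reprojection error for a single landmark observation can be computed as in the following sketch (our illustration; the intrinsic matrix and all values are hypothetical):

```python
import numpy as np

def reprojection_error(K, R, t, landmark, observed_uv):
    """Project the computed 3-D landmark with the frame's pose (R, t) and
    intrinsics K, then compare with the observed pixel location
    (e.g. the reprojected X1j' versus the observed X1j)."""
    p_cam = R @ landmark + t
    uv_hom = K @ p_cam
    return uv_hom[:2] / uv_hom[2] - observed_uv

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])                     # hypothetical intrinsics
err = reprojection_error(K, np.eye(3), np.zeros(3),
                         np.array([0.2, -0.1, 2.0]), np.array([370.0, 215.0]))
print(err)
```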
In an embodiment, on the basis that the constraint term includes the error between the positioning information of the key frame images and the positioning information of the robot mapped onto the first map, the constraint term further includes: the error between the inertial navigation positioning information corresponding to a key frame image in the local map and the positioning information of that key frame image. That is, in this embodiment, during optimization of the local map the constraint term additionally contains the error between the inertial navigation positioning information and the positioning information corresponding to each key frame image, so that the optimized positioning information of each key frame image keeps the sum of this error and the error between the positioning information of each key frame image and the positioning information of the robot mapped onto the first map at the same moment within the ideal threshold. The constraint terms mentioned above are only examples; of course, in other embodiments, the constraint term may include all three errors described in the foregoing embodiments, that is, during optimization of the local map the optimized positioning information of each key frame image keeps the sum of the three errors within the ideal threshold. A person skilled in the art may add other constraint terms according to actual needs, and the application is not limited thereto. It should be understood that the inertial navigation in "inertial navigation positioning information" refers to an inertial navigation system (INS), which uses an accelerometer and a gyroscope to measure the acceleration and angular velocity of an object and continuously estimates the position, attitude and velocity of the moving object.
In other embodiments, in step S332, optimizing the local map further includes optimizing landmark points in the local map based on the positioning information including optimizing the key frame images in the local map. The optimization of the landmark points in the local map is to optimize the spatial coordinate positions of the landmark points, for example, the control device optimizes the landmark points based on the optimized positioning information of each key frame.
In some embodiments, the step S330 further includes a step S333, and in the step S333, the control device deletes a part of the key frames based on the feature matching result of the inserted key frames and the historical key frames in the local map. Therefore, data redundancy can be avoided, and the calculation efficiency is improved. The explanation is continued with the key frame set as a key frame image. For example, the control device performs feature matching on the inserted key frame image and the historical key frame image, and selects to delete the key frame image or one of the three historical key frame images if the control device judges that 90% of landmark points corresponding to the inserted key frame image can be observed by at least three historical key frame images.
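One way to express the 90% / three-observer rule is sketched below; the data layout (key frames holding lists of landmark identifiers) and the decision of which frame to delete are simplifications of ours.

```python
def is_redundant_keyframe(new_kf, historical_kfs,
                          covisibility_ratio=0.9, min_observers=3):
    """Return True when at least `covisibility_ratio` of the inserted key
    frame's landmark points are already observed by `min_observers` or more
    historical key frames; the caller may then delete this key frame or one
    of the observing historical key frames."""
    landmarks = new_kf["landmarks"]
    if not landmarks:
        return False
    covered = sum(
        1 for lm in landmarks
        if sum(lm in kf["landmarks"] for kf in historical_kfs) >= min_observers
    )
    return covered >= covisibility_ratio * len(landmarks)
```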
In order to obtain the second map corresponding to the physical space, the method for constructing a map by the robot disclosed in the present application performs incremental construction based on the local map. In view of this, the method comprises: repeating step S310, step S320, and step S330 in any of the above embodiments to obtain the second map corresponding to the physical space. Since the local map is aligned with the first map throughout the loop of steps S310, S320, and S330, the coordinate system of the resulting second map matches the coordinate system of the first map; in other words, the error between the two coordinate systems is within the accuracy range, and the positioning information of the robot mapped onto the second map can be used as its positioning information mapped onto the first map.
As described in the robot positioning method of any of the foregoing embodiments, the control device may select, based on the result of matching the environmental data against the first map, whether to position the robot using the first map or the second map. In some cases, however, the robot does not yet have a complete second map while moving in the physical space, for example when the control device is still constructing the second map according to the method for constructing a map by the robot disclosed in the present application; positioning based on the second map is then not reliable.
In view of this, in some embodiments, the robot positioning method described herein further comprises: judging the integrity of the second map. In some examples, this step may be performed before step S120: the control device performs step S120 only when the second map is judged to be complete, and when the second map is judged to be incomplete, the control device directly matches the environmental data against the first map to obtain the positioning information of the robot in the first map, without making the selection according to the matching result. In some examples, this step may also be performed within step S120, as long as it is performed before the selection according to the matching result; when the second map is judged incomplete in step S120, the control device selects to acquire the positioning information of the current robot in the first map based on the environmental data and the first map, and when the second map is judged complete in step S120, the control device continues to make the selection according to the matching result.
In one embodiment, determining the integrity of the second map includes determining the integrity of the complete second map, but this may result in an excessive amount of computation. Therefore, in order to reduce the amount of calculation, in an embodiment, determining the integrity of the second map includes determining the integrity of the sub-area map onto which the current robot is mapped, and the second map is considered complete when that sub-area map is judged to be complete. In view of this, in some embodiments, the robot positioning method disclosed herein further comprises: dividing the second map into a plurality of sub-area maps. For example, the second map may be divided according to the first map, such as according to the size of the first map, the distribution of obstacles, and the like; the second map may also be divided according to a preset size; the second map may also be divided according to the amount of computation, which is not limited in this application.
In an embodiment, determining the integrity of the sub-area map corresponding to the current position of the robot includes: determining the sub-area map corresponding to the current position of the robot. In an example, the control device first determines the positioning information of the current robot on the first map based on the environmental data and the first map, then converts that positioning information into positioning information on the second map according to the association relationship between the first map and the second map, and determines from the positioning information of the current robot on the second map the corresponding sub-area map, that is, the sub-area map corresponding to the position where the robot currently is. Please refer to the description of any of the embodiments above for details, which are not repeated herein.
The integrity of the map in the above embodiments may be determined, for example, by the number of key frames, e.g., the map is considered completely constructed when the number of key frames reaches a certain scale; it may likewise be determined by reference to the first map, and the like, and the present application is not limited thereto.
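Under the assumption that the second map is divided into a regular grid of preset size, the sub-area lookup and a key-frame-count completeness test described in the preceding paragraphs might be sketched as follows (cell size and threshold are illustrative):

```python
def sub_area_index(x, y, cell_size=5.0):
    """Map a position on the second map to the index of the sub-area map it
    falls in, when the map is divided into a regular grid of preset size."""
    return (int(x // cell_size), int(y // cell_size))

def sub_area_is_complete(sub_area_keyframes, min_keyframes=30):
    """Example integrity test for one sub-area map: enough key frames have
    been accumulated (the 'certain scale' criterion mentioned above)."""
    return len(sub_area_keyframes) >= min_keyframes
```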
In some embodiments, in the process of executing the robot positioning method of any of the above embodiments, such as while executing step S110 or step S120 above, in order to ensure the consistency of the coordinate systems of the second map and the first map, the robot positioning method disclosed in the present application further includes: updating the second map.
In view of this, in some embodiments, the present application further discloses a method for updating a map by a robot, which may be performed as part of the robot positioning method disclosed in the present application in the process of performing step S110 and step S120, or may be performed separately, and the updated second map may be used as the second map in the robot positioning method disclosed in the present application. In some examples, the method for updating the map by the robot is executed by the robot, and further, may be executed by a control device configured on the robot. In some examples, the method for updating a map by a robot may also be performed by a processing device configured on a server, where the server and the robot may be in data communication to control the robot to perform corresponding actions when performing the method for updating a map by a robot disclosed herein. In other examples, the method for updating the map by the robot may also be performed by a processing device configured on an electronic device, where the electronic device is in data communication with the robot to control the robot to perform corresponding operations when performing the method for updating the map by the robot disclosed in the present application. The following embodiments describe an example in which a method for updating a map by a robot is executed by a control device disposed in the robot.
Referring to fig. 11, a flowchart of a method for updating a map by a robot according to an embodiment of the present application is shown, where the method for updating a map by a robot includes steps S410, S420, and S430.
In step S410, the control device acquires environmental data while the robot moves in the physical space based on the first map. In some embodiments, step S410 and step S110 are a single, multiplexed step: the environmental data obtained in step S110 can be used for positioning the robot in step S120, while step S110 can also serve as step S410 of the method for updating a map by the robot, with the obtained environmental data used in the subsequent steps of that method. For the data type and the obtaining manner of the environmental data in step S410, please refer to the related description in any embodiment of step S110 of the robot positioning method above, which is not repeated herein.
In step S420, the control device analyzes the moving pose of the robot in the physical space based on the environment data to obtain a key frame and its positioning information.
In some embodiments, step S420 is performed in a manner similar to step S320 of the method for constructing a map by the robot described in any of the foregoing embodiments, and its specific execution can be understood with reference to any embodiment of step S320. The difference is that, since the robot is here updating the second map rather than constructing it, the judgment on the existence of an initialization map and the related construction steps that precede step S320 may not be necessary before step S420, or may be adapted into a judgment on the existence of the second map and its construction.
In step S430, the control device updates the second map based on the keyframe and the positioning information thereof, so that the coordinate system of the updated second map coincides with the coordinate system of the first map. Wherein updating the second map based on the key frames and their location information comprises: optimizing the second map according to a constraint term.
In an embodiment, updating the second map in step S430 includes updating the complete second map; however, this results in an excessive amount of calculation, and since the parts of the second map corresponding to the parts of the physical space that the robot has not moved through are unchanged, updating the complete second map also wastes calculation resources.
In view of this, in one embodiment, updating the second map in step S430 includes: the control device updates, based on the key frame and its positioning information, the sub-area map onto which the current robot is mapped. The sub-area maps are obtained by dividing the second map; for example, the second map may be divided according to the first map, such as according to the size of the first map, the distribution of obstacles, and the like; the second map may also be divided according to a preset size; the second map may also be divided according to the amount of computation, which is not limited in this application.
In an embodiment, the step in which the control device updates, based on the key frame and its positioning information, the sub-area map onto which the current robot is mapped includes: determining the sub-area map corresponding to the current position of the robot. In an example, the control device first determines the positioning information of the current robot on the first map based on the environmental data and the first map, then converts that positioning information into positioning information on the second map according to the association relationship between the first map and the second map, and determines from the positioning information of the current robot on the second map the corresponding sub-area map, that is, the sub-area map corresponding to the position where the robot currently is. For the association relationship between the first map and the second map and the conversion of positioning information between them, please refer to the description of any embodiment of the robot positioning method and of the method for constructing a map by the robot, which is not repeated herein.
As mentioned above, updating the second map in any embodiment of step S430, above or below, may be understood as updating the complete second map or as updating the sub-area map thereof; this is not repeated below for the sake of brevity.
In an embodiment, please refer to fig. 12, which is a flowchart illustrating step S430 in an embodiment of the present application, wherein, as shown, step S430 includes: step S431 and step S432.
In step S431, the control device inserts the key frame into the second map, and updates the landmark points in the second map based on the inserted key frame and its positioning information. In some embodiments, step S431 is performed in a manner similar to step S331 of the method for constructing a map by the robot described in any of the foregoing embodiments, and its specific execution can be understood with reference to any embodiment of step S331. The difference is that the map being operated on here is the second map itself, whereas in the embodiments of step S331 the map in the course of constructing the second map is referred to as the local map; the descriptions relating to constructing the local map in step S331 are therefore adapted, in step S431 of the method for updating a map by the robot, into descriptions relating to updating the second map.
In step S432, the control device optimizes the second map according to the constraint term, so that the location information of the keyframes in the optimized second map can make the constraint term within the ideal threshold. The constraint term includes at least an error of the positioning information of the keyframe in the second map and the positioning information of the robot mapped on the first map at the same time.
In some embodiments, step S432 is performed in a manner similar to step S332 of the method for constructing a map by the robot described in any of the foregoing embodiments, and its specific execution can be understood with reference to any embodiment of step S332. The difference is, likewise, that the map being operated on here is the second map itself, whereas in the embodiments of step S332 the map in the course of constructing the second map is referred to as the local map; the descriptions relating to constructing the local map in step S332 are therefore adapted, in step S432 of the method for updating a map by the robot, into descriptions relating to updating the second map.
In some embodiments, the step S430 further includes a step S433, in which the control device deletes a part of the key frames based on the feature matching result of the inserted key frames and the historical key frames in the second map in the step S433. Therefore, data redundancy can be avoided, and the calculation efficiency is improved. The description is made with the key frame set as a key frame image. For example, the control device performs feature matching on the inserted key frame image and the historical key frame image, and selects to delete the key frame image or one of the three historical key frame images if the control device judges that 90% of landmark points corresponding to the inserted key frame image can be observed by at least three historical key frame images.
The application also discloses a robot, which is used for executing the robot positioning method, the robot map building method or the robot map updating method in any embodiment. Referring to fig. 13, which is a schematic structural diagram of a robot according to an embodiment of the present disclosure, as shown in the figure, the robot 1 includes a moving device 10, a control device 11, and a sensor device 12.
In an embodiment, the moving device 10 is used for performing a moving operation, for example, the moving device 10 is disposed at the bottom of the robot 1 to move the robot 1. In some embodiments, the moving device 10 includes a driving assembly and driving wheels disposed on two opposite sides of the bottom of the robot 1, and the driving wheels are driven by the driving assembly to move the robot 1. Specifically, the driving wheels are driven to drive the robot 1 to perform back-and-forth reciprocating motion, rotational motion, curvilinear motion or the like according to a planned movement trajectory, or drive the robot 1 to perform posture adjustment, and provide two contact points of the robot 1 with a walking surface. In other embodiments, the moving device 10 further includes a driven wheel located in front of the driving wheel, and the driven wheel and the driving wheel maintain the balance of the robot 1 in the motion state.
In one embodiment, the sensor device 12 is used to collect environmental data, including: a laser device. The laser device is horizontally arranged on the top of the robot 1, so that the control device 11 of the robot 1 can control the laser device to rotate and project laser lines without being shielded by the body of the robot 1, and the laser device can scan the surrounding environment in the largest range. Of course, in other embodiments, the laser device may be disposed at a certain inclination angle at the front or the top of the robot 1 according to different application scenarios or different functions provided, which is not limited in this application. For example, the laser device may be arranged as a single line lidar or a multiline lidar.
In one embodiment, the sensor device 12 further comprises: an image pickup apparatus. The image pickup apparatus is disposed on the robot and includes: cameras, video cameras, camera modules integrated with optical systems or CCD chips, camera modules integrated with optical systems and CMOS chips, and the like. The image pickup apparatus may be controlled by the control device; during the movement of the robot, the control device controls the image pickup apparatus to photograph the surrounding environment, thereby acquiring image data of the surrounding environment. The power supply system of the image pickup apparatus may be controlled by the power supply system of the robot; during the powered movement of the robot, the image pickup apparatus starts capturing image frames and supplies them to the control device 11. The image pickup apparatus may be disposed on top of the robot. Taking a cleaning robot as an example, in some examples the image pickup apparatus of the robot is disposed on the middle or edge of its top cover. The optical axis of the field of view of the image pickup apparatus is at ±30° with respect to the vertical or at 60°-120° with respect to the horizontal. In some examples, the angle between the optical axis of the camera of the cleaning robot and the vertical is -30°, -29°, -28°, -27°, ..., -1°, 0°, 1°, 2°, ..., 29°, or 30°. In still other examples, the angle between the optical axis of the camera of the robot and the horizontal is 60°, 61°, 62°, ..., 119°, or 120°. It should be noted that those skilled in the art will understand that the above angles between the optical axis and the vertical or horizontal are only examples and are not limited to an angular precision of 1°; according to the design requirements of the actual robot, the angular precision may be higher, such as 0.1°, 0.01°, or finer, and exhaustive examples are not given here.
In an embodiment, the control device 11 is disposed on the robot 1, and is configured to control the moving device 10 to drive the robot 1 to move so as to execute the robot positioning method, the method for constructing a map by the robot, or the method for updating a map by the robot described in any of the above embodiments. In some examples, the control device 11 cooperates with the moving device 10 by controlling at least one of the laser device and the image capturing device to perform the robot positioning method, the method for constructing the map by the robot, or the method for updating the map by the robot described in any of the above embodiments. The control device 11 may also control the robot 1 to perform work tasks, such as cleaning the ground, performing cyclic security inspections, and the like.
In some embodiments, the control device 11 of the robot disclosed in the present application comprises interface means, memory, and processing means, etc. Wherein the interface device is used for data communication with the robot 1, for example, with a laser device and an image pickup device of the robot 1. The storage device is used to store at least one program, and in some examples, the storage device may also store data acquired through the interface device, such as image data, point cloud data, and the like. The processing device is connected with the storage device and the interface device, and is configured to execute the at least one program, so as to coordinate the storage device and the interface device to execute and implement the robot positioning method, the method for constructing a map by the robot, or the method for updating a map by the robot as described in any one of the above embodiments.
In an embodiment, the processing device may be configured to read and execute computer readable instructions. In a specific implementation, the processing device may mainly include a controller, an operator, and a register. The controller is mainly responsible for instruction decoding and sending out control signals for operations corresponding to the instructions. The arithmetic unit is mainly responsible for executing fixed-point or floating-point arithmetic operation, shift operation, logic operation and the like, and can also execute address operation and conversion. The register is mainly responsible for storing register operands, intermediate operation results and the like temporarily stored in the instruction execution process. In a specific implementation, the hardware architecture of the processing device may be an Application Specific Integrated Circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, or an NP architecture, etc. The processing means may comprise one or more processing units, such as: the processing device may include an application processing device (AP), a modem processing device, a graphics processing device (GPU), an image signal processing device (ISP), a controller, a video codec, a digital signal processing Device (DSP), a baseband processing device, and/or a neural-network processing device (NPU), and the like. Wherein the different processing units may be separate devices or may be integrated in one or more processing devices.
In an embodiment, the storage device is coupled to the processing device for storing various software programs and/or sets of instructions. In particular implementations, the storage may include high speed random access storage and may also include non-volatile storage, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid state storage devices. The storage device can store an operating system, such as an embedded operating system of uCOS, vxWorks, RTLinux and the like. The storage device may also store a communication program that may be used to communicate with the smart terminal, the electronic device, one or more servers, or additional devices.
The application also discloses an electronic device, which is used for communicating with a robot to control the robot to move and realize the robot positioning method, the method for constructing the map by the robot, or the method for updating the map by the robot as described in any embodiment above.
In some embodiments, the electronic device is a device capable of digital computation, logic processing, and information processing of data, including but not limited to: personal computers, servers, server clusters, intelligent terminals, cloud-based server systems, and the like.
Referring to fig. 14, which is a schematic structural diagram of an electronic device in an embodiment of the present application, the electronic device 2 at least includes an interface device 20, a storage device 21, and a processing device 22. In some embodiments, the electronic device 2 may further include a display device (not shown) or the like in data connection through the interface device.
In some embodiments, the storage device 21 is used for storing at least one program, which can be executed by the processing device 22 to coordinate the storage device 21 and the interface device 20 to implement the robot positioning method, the method for constructing the map by the robot, or the method for updating the map by the robot described in any of the above embodiments. Here, the storage device 21 includes, but is not limited to: read-Only Memory (ROM), random Access Memory (RAM), and non-volatile Memory (NVRAM). For example, the storage 21 includes a flash memory device or other non-volatile solid-state storage device. In certain embodiments, the storage device 21 may also include memory remote from the one or more processing devices 22, such as network-attached memory accessed via RF circuitry or external ports and a communication network, which may be the internet, one or more intranets, local area networks, wide area networks, storage area networks, and the like, or suitable combinations thereof. The memory controller may control access to the memory by other components of the device, such as the CPU and peripheral interfaces.
In some embodiments, the interface device 20 includes at least one interface unit, and each interface unit is used for outputting a visual interface, receiving a human-computer interaction event generated according to the operation of a technician, and the like. For example, the interface device 20 includes, but is not limited to: a serial interface such as an HDMI interface or a USB interface, or a parallel interface, etc. In one embodiment, the interface device 20 further comprises a network communication unit, which is a device for data transmission using a wired or wireless network, such as but not limited to: an integrated circuit including a network card, a local area network module such as a WiFi module or a bluetooth module, a wide area network module such as a mobile network, and the like.
In some embodiments, the display device is used for displaying a visual interface, i.e. an operation interface. Examples of the display device include a display, which may be a hardware device for displaying and generating input events in case of integrating a touch sensor. The display device may be in data connection with a processing device 22 through an interface device 20.
The processing means 22 are connected to said interface means 20, storage means 21 and display means, according to the hardware means actually comprised by the computer device. The processing device 22 includes one or more processors. The processing means 22 is operable to perform data read and write operations with the storage means 21. The processing device 22 includes one or more general purpose microprocessors, one or more application specific processors (ASICs), one or more Digital Signal Processors (DSPs), one or more Field Programmable logic arrays (FPGAs), or any combination thereof.
The application also discloses a server, which is used for communicating with a robot to control the robot to move and to implement the robot positioning method, the method for constructing a map by the robot, or the method for updating a map by the robot as described in any of the above embodiments. Here, the server system is an electronic device capable of performing digital computation, logic processing, and information processing on data, which includes but is not limited to: a central computer, a server, a cluster of servers, a cloud-based server system, and the like. In an example where the server system is a cloud server system, the cloud server system is, for example, one or more entity servers arranged according to various factors such as functions and loads; for example, a server based on a cloud architecture includes public cloud servers and private cloud servers, where the public or private cloud servers provide SaaS, PaaS, IaaS, and the like. The private cloud service end comprises a Mei Tuo cloud computing service platform, an Array cloud computing service platform, an Amazon cloud computing service platform, a Baidu cloud computing platform, a Tencent cloud computing platform, and the like. In an example where the server system is formed by a distributed or centralized server cluster, the server cluster is formed by at least one physical server, each physical server is configured with a plurality of virtual servers, each virtual server runs at least some of the steps of the robot positioning method, the method for constructing a map by the robot, or the method for updating a map by the robot in any of the above embodiments, and the virtual servers communicate with each other through a network.
Referring to fig. 15, which is a schematic structural diagram of a server according to an embodiment of the present application, the server 3 at least includes an interface device 30, a storage device 31, and a processing device 32. In some embodiments, the server 3 may also include a display device (not shown) or the like in data connection through the interface device.
In an embodiment, the at least one storage device 31 is configured to store at least one program. In an embodiment, the storage device 31 comprises a storage server or memory, which may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. In certain embodiments, the memory may also include memory located remotely from the one or more processors, such as network-attached storage accessed via RF circuitry or external ports and a communication network (not shown), which may be the Internet, one or more intranets, local area networks, wide area networks, storage area networks, or a suitable combination thereof. A memory controller may control access to the memory by other components of the device, such as the CPU and peripheral interfaces.
In an embodiment, the processing device 32 is connected to the storage device 31 and is configured, when the at least one program is executed, to perform and implement the robot positioning method, the method for constructing a map by the robot, or the method for updating a map by the robot described in any of the above embodiments. The processing device 32 is, for example, implemented by a server, such as an application server, that includes a processor operatively coupled with memory and/or a non-volatile storage device.
In an embodiment, the interface device 30 comprises at least one interface unit, each of which is used to output a visual interface, receive human-computer interaction events generated by a technician's operations, and the like. For example, the interface device 30 includes, but is not limited to, a serial interface such as an HDMI or USB interface, or a parallel interface. In one embodiment, the interface device 30 further comprises a network communication unit, i.e., a device for data transmission over a wired or wireless network, such as, but not limited to, an integrated circuit including a network card, a local area network module such as a WiFi or Bluetooth module, or a wide area network module such as a mobile network module.
The present application also provides a computer storage medium storing at least one program which, when executed by a processor, controls an apparatus in which the storage medium is located to perform the robot positioning method, the method for constructing a map by the robot, or the method for updating a map by the robot as described in any one of the above embodiments.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium, which includes instructions for enabling a robot equipped with the storage medium to perform all or part of the steps of the method according to the embodiments of the present application.
In the embodiments provided herein, the computer storage media may include read-only memory, random-access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, U-disk, removable disk, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but instead refer to non-transitory, tangible storage media. Disk and disc, as used in this application, include Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
In one or more exemplary aspects, the functions described in the computer program of the robot positioning method, or the robot map building method, or the robot map updating method described herein may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The steps of a method or algorithm disclosed herein may be embodied in processor-executable software modules, which may be located on tangible, non-transitory computer storage media. Tangible, non-transitory computer storage media may be any available media that can be accessed by a computer.
The flowchart and block diagrams in the figures described above illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In summary, in the present application, a first map serves as the basic map on which the robot navigates and moves. When the first map provides high positioning accuracy, that is, when the matching consistency between the environmental data and the features of the first map is high, the environmental data is matched against the first map for positioning and navigation. When the first map provides low positioning accuracy, that is, when the matching consistency between the environmental data and the first map is low, the robot obtains positioning information based on the environmental data, the second map, and the association relationship between the first map and the second map, and maps the accurate positioning information obtained on the second map into positioning information in the first map, so that the robot can continue to navigate and move based on the first map with accurate positioning. In this way, when positioning based on the first map is inaccurate, the positioning information obtained from the second map serves as a supplement, which improves positioning efficiency (that is, reduces the amount of calculation) and reduces the probability of positioning failures or errors.
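By way of a purely editorial, non-limiting illustration of the selection logic summarised above (it is not part of the original disclosure), the fallback between the two maps can be sketched as follows. The `match` methods, the scalar consistency score, the threshold value, and the homogeneous-matrix pose representation are all assumptions introduced for the example.

```python
MATCH_THRESHOLD = 0.6  # assumed consistency threshold between environmental data and the first map


def localize_on_first_map(env_data, first_map, second_map, T_first_from_second):
    """Return the robot's current positioning information in the first map's frame.

    `first_map.match` / `second_map.match` are assumed APIs returning a
    (consistency_score, pose) pair; poses and the association relationship are
    represented here as 3x3 homogeneous SE(2) matrices.
    """
    score, pose_in_first = first_map.match(env_data)
    if score >= MATCH_THRESHOLD:
        # High matching consistency: the first map localizes the robot directly.
        return pose_in_first

    # Low consistency: localize on the second map, then map the result back
    # into the first map through the association relationship.
    _, pose_in_second = second_map.match(env_data)
    return T_first_from_second @ pose_in_second
```

In this sketch the association relationship is reduced to a single homogeneous transformation; when the second map is built directly in the first map's coordinate system, that transformation is simply the identity.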
The above embodiments are merely illustrative of the principles and utilities of the present application and are not intended to limit the application. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present application. Accordingly, all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical ideas disclosed herein are intended to be covered by the claims of the present application.
Claims (30)
1. A method of robot positioning, comprising the steps of:
acquiring current environmental data during movement of the robot in a physical space based on a first map;
matching the environment data with the first map, so as to select, according to a matching result, either to acquire positioning information of the robot currently mapped in the first map based on the environment data and the first map, or to acquire positioning information of the robot currently mapped in the first map based on the environment data and a second map;
wherein the second map has an association relationship with the first map, and acquiring the positioning information of the robot mapped in the first map based on the environment data and the second map comprises: converting the positioning information of the robot currently mapped in the second map into positioning information in the first map based on the association relationship.
2. The robot positioning method according to claim 1, wherein the first map is provided as a laser map that is previously constructed corresponding to the physical space, and the second map is provided as a visual map corresponding to the physical space.
3. The robot positioning method according to claim 1, wherein the first map is provided as a first visual map previously constructed corresponding to the physical space, and the second map is provided as a second visual map corresponding to the physical space.
4. The robot positioning method according to claim 3, wherein the first visual map is a map constructed based on a VSLAM technique, and the second visual map is a map constructed based on a machine learning image feature extraction method.
5. The robot positioning method according to claim 1, wherein the environment data includes point cloud data and image data acquired by a laser device and an image pickup device, respectively.
6. The robot positioning method according to claim 1, wherein the environment data includes image data acquired by an image pickup device.
7. The robot positioning method according to claim 1, wherein the association relationship between the first map and the second map is established by any one of:
constructing the second map based on the first map; or
relocating each key frame of the first map and each key frame of the second map, so as to determine a coordinate transformation relationship between the first map and the second map.
8. The robot positioning method according to claim 7, wherein relocating the key frames of the first map and the second map to determine the coordinate transformation relationship between the first map and the second map comprises:
relocating each key frame of the first map in the second map, and relocating each key frame of the second map in the first map, to obtain a paired data set; wherein the paired data set comprises, for each successfully relocated key frame, its positioning information on the first map and its positioning information on the second map;
determining the coordinate transformation relationship between the first map and the second map based on the paired data set.
9. The robot positioning method of claim 8, wherein the step of determining the coordinate transformation relationship between the first map and the second map based on the paired data set further comprises: judging whether the paired data set satisfies a preset condition.
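Purely as an editorial sketch of claims 7 to 9 (not forming part of the claims), the coordinate transformation between the two maps can be estimated from the paired data set by rigidly aligning the paired key-frame positions. The two-dimensional point representation, the Kabsch/Umeyama-style alignment, and the minimum pair count standing in for the "preset condition" are assumptions made for the example.

```python
import numpy as np

MIN_PAIRS = 3  # assumed "preset condition": enough successfully relocated key frames


def estimate_map_transform(pts_first, pts_second):
    """Estimate the rigid transform mapping second-map coordinates to first-map
    coordinates from paired key-frame positions (two N x 2 arrays)."""
    p = np.asarray(pts_first, dtype=float)
    q = np.asarray(pts_second, dtype=float)
    if len(p) < MIN_PAIRS or len(p) != len(q):
        raise ValueError("paired data set does not satisfy the preset condition")

    p_c, q_c = p - p.mean(axis=0), q - q.mean(axis=0)

    # Kabsch/Umeyama: the rotation comes from the SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(q_c.T @ p_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against an accidental reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = p.mean(axis=0) - R @ q.mean(axis=0)

    T = np.eye(3)                   # homogeneous 3x3 transform: first <- second
    T[:2, :2], T[:2, 2] = R, t
    return T
```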
10. A robot positioning method according to claim 7, characterized in that said building of said second map based on said first map comprises the steps of:
acquiring environmental data during movement of the robot in the physical space based on the first map;
analyzing the moving pose of the robot in the physical space based on the environment data to obtain a key frame and positioning information thereof;
constructing a local map of the physical space based on the key frame and the positioning information thereof, and optimizing the local map according to a constraint item so that a coordinate system of the local map is consistent with a coordinate system of the first map; wherein the constraint item at least comprises an error between the positioning information of the key frame in the local map and the positioning information of the robot mapped on the first map at the same moment; and
repeating the above steps to obtain the second map corresponding to the physical space.
11. The robot positioning method according to claim 10, further comprising: matching the acquired current environment data with the first map to obtain the positioning information of the robot currently mapped on the first map.
12. The robot positioning method according to claim 10, further comprising: acquiring, at the moment when the key frame is determined, the positioning information of the robot mapped on the first map.
13. The method according to claim 10, further comprising, before the step of analyzing the moving pose of the robot in the physical space based on the environment data to obtain keyframes and their positioning information:
and constructing an initialization map based on at least two frame images when the frame images in the current environment data are judged to be not initialized.
14. The robot positioning method of claim 13, wherein the step of constructing an initialization map based on at least two frame images comprises: taking the positioning information of the robot mapped on the first map at the moment corresponding to the first frame image as the positioning information of the first frame image.
15. The robot positioning method according to claim 10, wherein the step of analyzing the moving pose of the robot in the physical space based on the environment data to obtain the keyframes and the positioning information thereof comprises:
analyzing the moving pose of the robot based on the current frame image and the previous frame image to determine the positioning information of the current frame image; and
judging the current frame image to determine whether to take the current frame image as a key frame image.
16. The robot positioning method according to claim 15, wherein the step of analyzing the moving pose of the robot based on the current frame image and the previous frame image to determine the positioning information of the current frame image comprises:
acquiring initial positioning information of the current frame image based on the current frame image and the previous frame image;
and optimizing the initial positioning information to obtain the positioning information of the current frame image.
17. The method according to claim 10, wherein the step of constructing a local map of the physical space based on the keyframes and their positioning information and optimizing the local map according to a constraint so that the coordinate system of the local map coincides with the coordinate system of the first map comprises:
inserting the key frame into the local map, and updating landmark points in the local map based on the inserted key frame and positioning information thereof;
and optimizing the local map according to the constraint item, so that the positioning information of the key frames in the optimized local map can ensure that the constraint item is within an ideal threshold value.
18. The method according to claim 17, wherein the step of constructing a local map of the physical space based on the keyframes and their positioning information and optimizing the local map according to a constraint so that the coordinate system of the local map coincides with the coordinate system of the first map further comprises:
and deleting part of the key frames based on the feature matching results of the inserted key frames and the historical key frames in the local map.
19. The robot positioning method of claim 10, wherein the constraint item further comprises: a reprojection error of landmark points in the local map.
20. The robot positioning method of claim 10, wherein the constraint item further comprises: an error between inertial navigation positioning information corresponding to the key frame in the local map and the positioning information of the key frame.
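Gathering the constraint items of claims 10, 19, and 20 into a single illustrative objective (an editorial addition, with the weights and error norms assumed), the local-map optimisation can be written as:

```latex
\min_{\{T_k\},\,\{p_j\}} \;
  w_1 \sum_{k} \bigl\lVert \log\!\bigl(\hat{T}^{\text{first}}_k{}^{-1}\, T_k\bigr) \bigr\rVert^2
+ w_2 \sum_{(k,j)} \bigl\lVert u_{kj} - \pi\!\bigl(T_k^{-1} p_j\bigr) \bigr\rVert^2
+ w_3 \sum_{k} \bigl\lVert \log\!\bigl(\hat{T}^{\text{imu}}_k{}^{-1}\, T_k\bigr) \bigr\rVert^2
```

Here $T_k$ is the pose of key frame $k$ in the local map, $\hat{T}^{\text{first}}_k$ the robot's positioning information mapped on the first map at the same moment, $p_j$ a landmark point, $u_{kj}$ its observation in key frame $k$, $\pi(\cdot)$ the camera projection, $\hat{T}^{\text{imu}}_k$ the inertial-navigation positioning information, and $w_1$, $w_2$, $w_3$ assumed weights.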
21. The robot positioning method according to claim 1, wherein converting the positioning information of the robot currently mapped in the second map into the positioning information in the first map based on the association relationship comprises any one of the following:
taking the positioning information of the robot currently mapped in the second map as the positioning information in the first map; or
converting, according to the coordinate transformation relationship between the first map and the second map, the positioning information of the robot currently mapped in the second map into the positioning information in the first map.
22. The robot positioning method according to claim 1, further comprising: judging the integrity of the second map.
23. The robot positioning method of claim 22, wherein judging the integrity of the second map comprises: judging the integrity of the sub-area map onto which the robot is currently mapped.
24. The robot positioning method of claim 23, further comprising: dividing the second map into a plurality of sub-area maps.
25. A robot positioning method according to claim 1, characterized in that the robot is a cleaning robot.
26. A control device for a robot, comprising:
an interface device for data communication with the robot;
a storage device storing at least one program;
a processing device, connected to the storage device and the interface device, for executing the at least one program to perform and implement the robot positioning method according to any one of claims 1-25.
27. A robot, comprising:
a sensor device for acquiring environmental data;
a moving device for performing a moving operation;
a storage device for storing at least one program;
a processing device connected to the sensor device, the moving device, and the storage device, for executing the at least one program to perform the robot positioning method according to any one of claims 1-25.
28. A computer storage medium, in which at least one computer program is stored which, when being executed by a processor, controls an apparatus in which the storage medium is located to carry out a robot positioning method according to any one of claims 1-25.
29. An electronic device, comprising:
an interface device for data communication with the robot;
a storage device storing at least one program;
a processing device, connected to the storage device and the interface device, for executing the at least one program to perform and implement the robot positioning method according to any one of claims 1-25.
30. A server, comprising:
an interface device for data communication with the robot;
a storage device storing at least one program;
a processing device, connected to the storage device and the interface device, for executing the at least one program to perform and implement the robot positioning method according to any one of claims 1-25.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211281454.1A CN115617043A (en) | 2022-09-30 | 2022-09-30 | Robot and positioning method, device, equipment, server and storage medium thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211281454.1A CN115617043A (en) | 2022-09-30 | 2022-09-30 | Robot and positioning method, device, equipment, server and storage medium thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115617043A true CN115617043A (en) | 2023-01-17 |
Family
ID=84865267
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211281454.1A Pending CN115617043A (en) | 2022-09-30 | 2022-09-30 | Robot and positioning method, device, equipment, server and storage medium thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115617043A (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011137695A (en) * | 2009-12-28 | 2011-07-14 | Clarion Co Ltd | Map display apparatus and map distribution system |
CN107328419A (en) * | 2017-06-21 | 2017-11-07 | 上海斐讯数据通信技术有限公司 | The planing method and sweeping robot in a kind of cleaning path of sweeping robot |
CN109974722A (en) * | 2019-04-12 | 2019-07-05 | 珠海市一微半导体有限公司 | A kind of the map rejuvenation control method and map rejuvenation control system of vision robot |
CN110000786A (en) * | 2019-04-12 | 2019-07-12 | 珠海市一微半导体有限公司 | A kind of historical map or atlas of view-based access control model robot utilizes method |
WO2020107772A1 (en) * | 2018-11-30 | 2020-06-04 | 上海肇观电子科技有限公司 | Map building method and localization method for robot |
WO2020155543A1 (en) * | 2019-02-01 | 2020-08-06 | 广州小鹏汽车科技有限公司 | Slam map joining method and system |
US20200269844A1 (en) * | 2019-02-27 | 2020-08-27 | Honda Motor Co., Ltd. | Vehicle control device |
US20200306989A1 (en) * | 2017-07-28 | 2020-10-01 | RobArt GmbH | Magnetometer for robot navigation |
US20210131821A1 (en) * | 2019-03-08 | 2021-05-06 | SZ DJI Technology Co., Ltd. | Techniques for collaborative map construction between an unmanned aerial vehicle and a ground vehicle |
CN112904365A (en) * | 2021-02-10 | 2021-06-04 | 广州视源电子科技股份有限公司 | Map updating method and device |
US20210268652A1 (en) * | 2020-02-28 | 2021-09-02 | Irobot Corporation | Systems and methods for managing a semantic map in a mobile robot |
CN113674351A (en) * | 2021-07-27 | 2021-11-19 | 追觅创新科技(苏州)有限公司 | Robot and drawing establishing method thereof |
WO2021233452A1 (en) * | 2020-05-22 | 2021-11-25 | 杭州海康机器人技术有限公司 | Map updating method and apparatus |
US20220080600A1 (en) * | 2020-09-15 | 2022-03-17 | Irobot Corporation | Particle filters and wifi robot localization and mapping |
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011137695A (en) * | 2009-12-28 | 2011-07-14 | Clarion Co Ltd | Map display apparatus and map distribution system |
CN107328419A (en) * | 2017-06-21 | 2017-11-07 | 上海斐讯数据通信技术有限公司 | The planing method and sweeping robot in a kind of cleaning path of sweeping robot |
US20200306989A1 (en) * | 2017-07-28 | 2020-10-01 | RobArt GmbH | Magnetometer for robot navigation |
WO2020107772A1 (en) * | 2018-11-30 | 2020-06-04 | 上海肇观电子科技有限公司 | Map building method and localization method for robot |
WO2020155543A1 (en) * | 2019-02-01 | 2020-08-06 | 广州小鹏汽车科技有限公司 | Slam map joining method and system |
US20200269844A1 (en) * | 2019-02-27 | 2020-08-27 | Honda Motor Co., Ltd. | Vehicle control device |
US20210131821A1 (en) * | 2019-03-08 | 2021-05-06 | SZ DJI Technology Co., Ltd. | Techniques for collaborative map construction between an unmanned aerial vehicle and a ground vehicle |
CN110000786A (en) * | 2019-04-12 | 2019-07-12 | 珠海市一微半导体有限公司 | A kind of historical map or atlas of view-based access control model robot utilizes method |
WO2020207007A1 (en) * | 2019-04-12 | 2020-10-15 | 珠海市一微半导体有限公司 | Visual robot-based historical map utilization method |
CN109974722A (en) * | 2019-04-12 | 2019-07-05 | 珠海市一微半导体有限公司 | A kind of the map rejuvenation control method and map rejuvenation control system of vision robot |
US20210268652A1 (en) * | 2020-02-28 | 2021-09-02 | Irobot Corporation | Systems and methods for managing a semantic map in a mobile robot |
WO2021233452A1 (en) * | 2020-05-22 | 2021-11-25 | 杭州海康机器人技术有限公司 | Map updating method and apparatus |
US20220080600A1 (en) * | 2020-09-15 | 2022-03-17 | Irobot Corporation | Particle filters and wifi robot localization and mapping |
CN112904365A (en) * | 2021-02-10 | 2021-06-04 | 广州视源电子科技股份有限公司 | Map updating method and device |
CN113674351A (en) * | 2021-07-27 | 2021-11-19 | 追觅创新科技(苏州)有限公司 | Robot and drawing establishing method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108717710B (en) | Positioning method, device and system in indoor environment | |
CN110874100B (en) | System and method for autonomous navigation using visual sparse maps | |
WO2020223974A1 (en) | Method for updating map and mobile robot | |
CN108986161B (en) | Three-dimensional space coordinate estimation method, device, terminal and storage medium | |
JP6571274B2 (en) | System and method for laser depth map sampling | |
CN107160395B (en) | Map construction method and robot control system | |
JP6694169B2 (en) | System and method for capturing still and / or moving scenes using a multiple camera network | |
KR101725060B1 (en) | Apparatus for recognizing location mobile robot using key point based on gradient and method thereof | |
JP6732746B2 (en) | System for performing simultaneous localization mapping using a machine vision system | |
WO2020023982A9 (en) | Method and apparatus for combining data to construct a floor plan | |
KR101950558B1 (en) | Pose estimation apparatus and vacuum cleaner system | |
WO2022160790A1 (en) | Three-dimensional map construction method and apparatus | |
JP7422105B2 (en) | Obtaining method, device, electronic device, computer-readable storage medium, and computer program for obtaining three-dimensional position of an obstacle for use in roadside computing device | |
CN108789421B (en) | Cloud robot interaction method based on cloud platform, cloud robot and cloud platform | |
CN111220148A (en) | Mobile robot positioning method, system and device and mobile robot | |
JP7351892B2 (en) | Obstacle detection method, electronic equipment, roadside equipment, and cloud control platform | |
CN113052907B (en) | Positioning method of mobile robot in dynamic environment | |
CN112967340A (en) | Simultaneous positioning and map construction method and device, electronic equipment and storage medium | |
CN112041634A (en) | Mobile robot positioning method, map building method and mobile robot | |
JP2022502791A (en) | Systems and methods for estimating robot posture, robots, and storage media | |
KR20200143228A (en) | Method and Apparatus for localization in real space using 3D virtual space model | |
WO2024021340A1 (en) | Robot following method and apparatus, and robot and computer-readable storage medium | |
WO2021208015A1 (en) | Map construction and positioning method, client, mobile robot, and storage medium | |
CN111168685A (en) | Robot control method, robot, and readable storage medium | |
WO2022246812A1 (en) | Positioning method and apparatus, electronic device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |