CN116560373A - Robot obstacle avoidance method, device, equipment and medium based on blind area obstacle - Google Patents
Robot obstacle avoidance method, device, equipment and medium based on blind area obstacle
- Publication number
- CN116560373A (application number CN202310601378.6A)
- Authority
- CN
- China
- Prior art keywords
- obstacle
- movement
- robot
- point cloud
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G05—CONTROLLING; REGULATING; G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0214—… in accordance with safety or protection criteria, e.g. avoiding hazardous areas
- G05D1/0219—… ensuring the processing of the whole working surface
- G05D1/0221—… involving a learning process
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0238—… using obstacle or wall sensors
- G05D1/024—… using obstacle or wall sensors in combination with a laser
- G05D1/0246—… using a video camera in combination with image processing means
- G05D1/0253—… extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Electromagnetism (AREA)
- Optics & Photonics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention discloses a robot obstacle avoidance method, device, equipment and medium based on blind area obstacles. The method comprises: determining speed information of a movement obstacle within the operation range of the robot according to current frame point cloud information and historical frame point cloud information acquired by a sensor; determining, according to the speed information, whether the movement obstacle has a motion persistence characteristic; if so, controlling the robot to perform movement obstacle avoidance according to the overlap between the predicted motion trajectory of the movement obstacle and the blind area, the blind area being determined from the detection range of the sensor and the operation range of the robot; otherwise, controlling the robot to perform static obstacle avoidance. By determining the motion persistence characteristic from the speed information of a movement obstacle, the robot is controlled to execute different obstacle avoidance strategies according to the obstacle's behaviour, ensuring that the robot travels safely while executing its tasks.
Description
Technical Field
The invention relates to the technical field of robots, in particular to a robot obstacle avoidance method, device, equipment and medium based on blind area obstacles.
Background
Robots are widely used in fields such as express logistics, smart warehousing and industry to perform tasks including navigation, obstacle avoidance, and picking and placing goods. To perform these tasks, a robot must accurately acquire information about its surroundings in real time.
At present, a robot generally uses a laser radar or a 3D camera to acquire environmental point cloud information. However, because the sensors installed on the robot have visual blind areas, safe obstacle avoidance requires the robot not only to accurately detect obstacles within the sensor's detection range, but also to maintain the state of, i.e. memorize, obstacles that enter the sensor's blind area, so that tasks can be executed more reliably.
However, the motion state of a movement obstacle changes over time, such state changes are hard to anticipate, and the robot's memorized state is therefore prone to error, which affects the robot's task execution.
Disclosure of Invention
The invention provides a robot obstacle avoidance method, device, equipment and medium based on blind area obstacles, which are used to solve the safe-driving problem caused by the uncertainty of a movement obstacle entering the robot's blind area.
According to an aspect of the present invention, there is provided a robot obstacle avoidance method based on a blind area obstacle, wherein at least one sensor for acquiring environmental point cloud information is configured on a robot, including:
Determining speed information of the movement obstacle in the running range of the robot according to the current frame point cloud information and the historical frame point cloud information acquired by the sensor;
determining whether the movement obstacle has a movement persistence characteristic according to the speed information of the movement obstacle;
if yes, controlling the robot to move to avoid the obstacle according to the overlapping range of the predicted movement track of the movement obstacle and the blind area; the blind area is determined according to the detection range of the sensor and the operation range of the robot;
otherwise, controlling the robot to conduct static obstacle avoidance.
According to another aspect of the present invention, there is provided a robot obstacle avoidance apparatus based on a blind area obstacle, wherein at least one sensor for acquiring environmental point cloud information is configured on the robot, including:
the obstacle speed determining module is used for determining speed information of the moving obstacle in the running range of the robot according to the current frame point cloud information and the historical frame point cloud information acquired by the sensor;
the obstacle movement characteristic determining module is used for determining whether the movement obstacle has movement persistence characteristics according to the speed information of the movement obstacle;
The movement obstacle avoidance module is used for controlling the robot to perform movement obstacle avoidance according to the overlapping range of the predicted movement track of the movement obstacle and the blind area if the movement obstacle has the movement persistence characteristic; the blind area is determined according to the detection range of the sensor and the operation range of the robot;
and the static obstacle avoidance module is used for controlling the robot to perform static obstacle avoidance if the movement obstacle does not have the movement continuity characteristic.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the robot obstacle avoidance method based on blind area obstacles according to any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to implement the robot obstacle avoidance method based on blind area obstacles according to any one of the embodiments of the present invention when executed.
According to the technical scheme provided by the embodiment of the invention, the movement persistence characteristic of the movement obstacle is determined through the speed information of the movement obstacle, so that the robot is controlled to execute different obstacle avoidance strategies according to the running condition of the movement obstacle, and the safe running of the robot when executing tasks is ensured.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a robot obstacle avoidance method based on a blind area obstacle according to a first embodiment of the present invention;
fig. 2 is a flowchart of another robot obstacle avoidance method based on a blind area obstacle according to a second embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a robot obstacle avoidance device based on a blind area obstacle according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device for implementing a robot obstacle avoidance method based on a blind area obstacle according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
It should be noted that the terms "candidate," "target," and the like in the description and claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
Fig. 1 is a flowchart of a robot obstacle avoidance method based on a blind area obstacle according to a first embodiment of the present invention. This embodiment is applicable to guiding a robot around blind area obstacles while it performs a task. The method may be performed by a robot obstacle avoidance apparatus based on blind area obstacles, which may be implemented in hardware and/or software and configured in a device with communication and computing capabilities, such as a robot or a server, so as to control the operation of the robot. As shown in fig. 1, the method includes:
s110, determining speed information of the movement obstacle in the running range of the robot according to the current frame point cloud information and the historical frame point cloud information acquired by the sensor.
The sensor may be a laser radar device or a 3D camera arranged on the robot, or the like, used for acquiring point cloud information of the robot's surroundings. The current frame point cloud information refers to the environmental information acquired by the robot at the current moment, for example the point cloud scanned by the laser at the current moment or the current 3D image shot by the 3D camera. The historical frame point cloud information refers to the multiple frames of point cloud information acquired by the robot before the current moment; the number of historical frames can be determined according to the actual situation and is not limited here, but to fully reflect the motion characteristics of a movement obstacle, the history comprises at least two frames. The robot operation range refers to the preset area in which the robot travels; for example, the task execution scene is determined as the robot operation range according to the robot's task information: if the robot executes a goods-picking task, the goods placement area is the robot operation range.
Specifically, the multiple frames of point cloud information currently acquired by the sensor on the robot are obtained, obstacle information is extracted from them, static obstacles and movement obstacles are distinguished according to the obstacles' moving speeds, and the speed information of each movement obstacle across the frames, comprising speed magnitude and speed direction, is determined.
For example, after the point cloud data is acquired, it is converted into the global coordinate system using the robot's pose information, and the converted point cloud data is then processed to obtain obstacle information. Voxel-based downsampling is applied to the converted point cloud to reduce the amount of data to process and improve obstacle avoidance efficiency, and filtering operations, such as radius filtering, remove noise and ground points from the point cloud. Finally, the point cloud data is clustered by a clustering algorithm to obtain the obstacle information, and the moving speed of each obstacle, comprising a speed direction and a speed magnitude, is determined from the obstacle information in the current frame and the previous frame. When the speed magnitudes determined over several consecutive frames are all smaller than a preset speed threshold, the obstacle is determined to be a static obstacle; otherwise it is considered a movement obstacle. The preset speed threshold may be determined according to the actual situation; for example, to allow for measurement errors, it may be set to 0.2 m/s.
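As a rough illustration of this pipeline, the sketch below (Python with NumPy) shows voxel downsampling, centroid-based speed estimation between frames, and the static/moving decision using the 0.2 m/s example threshold. It is a minimal sketch under assumptions not stated in the patent: obstacles are assumed already clustered per frame and matched across frames, and the function names, voxel size and frame period are illustrative.

```python
import numpy as np

STATIC_SPEED_THRESHOLD = 0.2  # m/s, example value from the text above

def voxel_downsample(points: np.ndarray, voxel: float = 0.05) -> np.ndarray:
    """Keep one point per occupied voxel to reduce the data volume
    (points: Nx3 array already converted to the global coordinate system)."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def obstacle_velocity(centroid_prev: np.ndarray, centroid_curr: np.ndarray,
                      dt: float):
    """Per-frame velocity of one obstacle from its cluster centroids."""
    v = (centroid_curr - centroid_prev) / dt
    speed = float(np.linalg.norm(v[:2]))     # planar speed magnitude (vx_i)
    heading = float(np.arctan2(v[1], v[0]))  # speed direction (vth_i), radians
    return speed, heading

def is_static(speeds) -> bool:
    """Classify an obstacle as static when its speed stays below the
    threshold over several consecutive frames."""
    return all(s < STATIC_SPEED_THRESHOLD for s in speeds)
```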
S120, determining whether the movement obstacle has the movement continuity characteristic according to the speed information of the movement obstacle. If yes, step S130 is executed, otherwise step S140 is executed.
The motion persistence feature characterizes whether the motion of a movement obstacle is free of intermittent stops, i.e. whether the obstacle moves in a stop-and-go manner. If the speed of the movement obstacle shows no intermittent stopping, the obstacle has the motion persistence feature; otherwise it does not, so a stop-and-go movement obstacle has no motion persistence feature. The motion persistence feature can also be described as the degree of certainty of the obstacle's motion trend: if the obstacle has the feature, its motion trend is predictable; otherwise its stop-and-go behaviour follows no regular pattern, so its motion trend is uncertain and a prediction may be inaccurate.
Specifically, the speed change of a movement obstacle with the motion persistence feature stays within a certain deviation range, i.e. the speed neither drops nor rises abruptly. Therefore, speed deviation values between adjacent frames are determined from the speed information of the movement obstacle over consecutive frames. If at least one pair of adjacent frames has a speed deviation value exceeding a preset speed deviation threshold, the movement obstacle is determined not to have the motion persistence feature; if the speed deviation values of all adjacent frames are smaller than the preset speed deviation threshold, the movement obstacle is determined to have the motion persistence feature. Executing different obstacle avoidance strategies according to the behaviour of movement obstacles in the robot's operating environment improves the accuracy and safety of the robot's obstacle avoidance.
In one possible embodiment, the speed information includes a speed direction and a speed magnitude;
accordingly, determining whether the movement obstacle has a movement continuation feature according to the speed information of the movement obstacle includes:
determining direction change information of the movement obstacle in multi-frame point cloud information according to the speed direction of the movement obstacle in the current frame point cloud information and the historical frame point cloud information, and determining speed change information of the movement obstacle in the multi-frame point cloud information according to the speed of the movement obstacle in the current frame point cloud information and the historical frame point cloud information;
and determining whether the movement obstacle has the movement persistence characteristic according to the direction change information and the speed change information of the movement obstacle in the multi-frame point cloud information.
In order to ensure the accuracy of the motion continuity characteristic determination of the movement obstacle, the speed change information and the speed direction change information of the movement obstacle are combined for simultaneous determination.
The speed deviation values and speed direction deviation values of a number of adjacent frames are determined from the speed information of the movement obstacle over consecutive frames. The overall speed deviation is determined from the per-pair speed deviation values, for example by taking their average as the rate change information; likewise, the overall speed direction deviation is determined from the per-pair direction deviation values, for example by taking their average as the direction change information.
If the direction change exceeds a preset direction threshold or the rate change exceeds a preset rate threshold, the movement obstacle is determined not to have the motion persistence feature; if the direction change is smaller than or equal to the preset direction threshold and the rate change is smaller than or equal to the preset rate threshold, the movement obstacle is determined to have the motion persistence feature.
In one possible embodiment, the direction change information of the movement obstacle is determined according to the following formula:
CLth_i = 100 - 100 * |vth_{i+1} - vth_i| / π
where CLth_i represents the direction change information of the movement obstacle in the i-th frame point cloud information, and vth_i represents the speed direction (in radians) of the movement obstacle in the i-th frame point cloud information.
The rate change information of the movement obstacle is determined according to the following formula:
CLx_i = 100 - 100 * |vx_{i+1} - vx_i| / vx_i
where CLx_i represents the rate change information of the movement obstacle in the i-th frame point cloud information, and vx_i represents the speed magnitude of the movement obstacle in the i-th frame point cloud information.
Specifically, the direction change information and rate change information are determined from the speed magnitude and speed direction of the movement obstacle over the current frames. For example, if the speed information of the obstacle is determined from the last 10 frames, then i = 1, 2, …, 9, and the direction change information and rate change information of the movement obstacle in each frame of point cloud information are determined from the speed magnitudes and speed directions in those 10 frames of point cloud information.
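A direct transcription of the two formulas above might look as follows. This is a sketch: the angle-wrapping step is an assumption (the patent applies the raw difference), and the speed magnitudes are assumed nonzero. With 10 frames of input it yields the 9 per-frame values of the example.

```python
import numpy as np

def direction_change_info(vth: np.ndarray) -> np.ndarray:
    """CLth_i = 100 - 100 * |vth_{i+1} - vth_i| / pi.
    vth: speed directions (radians) over consecutive frames."""
    dth = np.abs(np.diff(vth))
    dth = np.minimum(dth, 2.0 * np.pi - dth)  # assumption: wrap to [0, pi]
    return 100.0 - 100.0 * dth / np.pi

def rate_change_info(vx: np.ndarray) -> np.ndarray:
    """CLx_i = 100 - 100 * |vx_{i+1} - vx_i| / vx_i.
    vx: speed magnitudes over consecutive frames (assumed nonzero)."""
    vx = np.asarray(vx, dtype=float)
    return 100.0 - 100.0 * np.abs(np.diff(vx)) / vx[:-1]
```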
In one possible embodiment, determining whether the movement obstacle has a movement persistence feature according to direction change information and rate change information of the movement obstacle in the multi-frame point cloud information includes:
determining a motion state parameter of the motion obstacle in the target frame point cloud information according to the direction change information and the speed change information of the motion obstacle in the target frame point cloud information;
determining a motion persistence parameter of the movement obstacle according to the motion state parameter of the movement obstacle in the multi-frame point cloud information;
and if the movement persistence parameter is greater than a preset persistence threshold value, determining that the movement obstacle has movement persistence characteristics.
That is, the direction change information and rate change information of the movement obstacle are determined for each of the current frame and the historical frames, and the motion state parameter of each frame is determined from that frame's direction change and rate change information. Here, target frame point cloud information refers to any one frame among the current frame and historical frame point cloud information, and the motion state parameter of the movement obstacle in the target frame characterizes the speed change between the target frame and its adjacent frame.
The overall motion persistence parameter of the movement obstacle is then determined from the per-frame motion state parameters over the current frame and the historical frames, which avoids misjudging the obstacle's motion characteristics because of sensor measurement errors or other errors, further improving the accuracy and safety of the robot's obstacle avoidance. If the motion persistence parameter is greater than a preset persistence threshold, the speed of the movement obstacle shows no abrupt change, i.e. it has remained within a certain deviation range, and the movement obstacle has the motion persistence characteristic.
Wherein the motion state parameter is determined according to the following formula:
CL_i = k * CLth_i + (1 - k) * CLx_i
where CL_i represents the motion state parameter of the movement obstacle in the i-th frame point cloud information, k represents the weight of the speed direction, CLth_i represents the direction change information, and CLx_i represents the rate change information of the movement obstacle in the i-th frame point cloud information.
The motion persistence parameter is determined according to the following formula:
CL = (1/n) * Σ_{i=1}^{n} CL_i
where CL represents the motion persistence parameter of the movement obstacle, and n is the total number of frames of point cloud information for which motion state parameters are determined.
Since direction changes and rate changes influence the determination of the obstacle's motion characteristics to different degrees, weights are applied to the direction change information and rate change information when determining the motion state parameter in each frame of point cloud information, where the weight k ∈ [0, 1]. The specific value of the weight can be chosen according to how strongly the speed and direction of obstacles affect the driving safety of the robot in the actual task scenario, and is not limited here. Finally, the motion persistence parameter of the movement obstacle is determined as the average of the motion state parameters over the current frame and historical frame point cloud information. For example, if the motion persistence parameter is determined from 10 consecutive frames of point cloud information, the total number of frames yielding motion state parameters is n = 9; when the motion persistence parameter is greater than 75, the movement obstacle is determined to have the motion persistence characteristic.
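Continuing the sketch, the per-frame state parameter and its average follow directly from the formulas above. Here k = 0.5 is an arbitrary illustrative weight and the threshold of 75 is taken from the example; for 10 consecutive frames, clth and clx each hold n = 9 entries.

```python
import numpy as np

def motion_persistence(clth, clx, k: float = 0.5, threshold: float = 75.0):
    """CL_i = k*CLth_i + (1-k)*CLx_i per frame, then CL = (1/n) * sum(CL_i).
    Returns the persistence parameter CL and whether it exceeds the
    threshold, i.e. whether the obstacle has the motion persistence feature."""
    cl = k * np.asarray(clth) + (1.0 - k) * np.asarray(clx)
    CL = float(cl.mean())  # n = number of per-frame state parameters
    return CL, CL > threshold
```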
S130, controlling the robot to move to avoid the obstacle according to the overlapping range of the predicted movement track of the movement obstacle and the blind area.
The blind area is determined according to the detection range of the sensor and the operation range of the robot: the part of the robot's operation range not covered by the sensor's detection range is the blind area. Because the robot keeps moving, the extent of the blind area changes continuously.
If every movement obstacle within the sensor's detection range has the motion persistence characteristic, the robot is controlled to execute a movement obstacle avoidance strategy, i.e. to avoid obstacles according to their predicted motion trajectories. Because such an obstacle moves persistently, its trajectory follows a discernible pattern; the predicted trajectory is determined from this pattern, and the robot is then controlled to avoid the obstacle according to the overlap between the predicted trajectory and the blind area, improving the robot's obstacle avoidance efficiency and safety.
Specifically, if a target movement obstacle is detected entering the blind area, the overlap between the obstacle and the blind area is determined from its predicted motion trajectory, the overlapping range is set as a travel prohibition area for the robot, and the robot determines its travel path from the current sensor detections together with the travel prohibition area, so as to travel safely despite the detection blind area.
For example, for a movement obstacle with the motion persistence characteristic, the obstacle's trajectory is predicted from the information of the current frame and the previous two frames (i.e. the last three frames). Assume the trajectory of the movement obstacle is a quadratic curve, y = a*x^2 + b*x + c. If the coordinates of the obstacle in the global coordinate system over the last three frames are (x1, y1), (x2, y2) and (x3, y3), substituting them into the quadratic equation yields a system of three linear equations in a, b and c, which is solved to obtain the predicted motion trajectory of the movement obstacle.
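The three-point fit amounts to one 3 × 3 linear solve. A sketch, assuming the three x-coordinates are distinct so the system is nonsingular:

```python
import numpy as np

def fit_quadratic_trajectory(p1, p2, p3):
    """Solve y = a*x^2 + b*x + c through three global-frame positions."""
    xs = np.array([p1[0], p2[0], p3[0]], dtype=float)
    ys = np.array([p1[1], p2[1], p3[1]], dtype=float)
    A = np.stack([xs**2, xs, np.ones(3)], axis=1)  # Vandermonde-style matrix
    a, b, c = np.linalg.solve(A, ys)
    return a, b, c

def predict_y(a, b, c, x):
    """Evaluate the predicted trajectory at a future x position."""
    return a * x**2 + b * x + c
```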
S140, controlling the robot to conduct static obstacle avoidance.
If at least one movement obstacle within the sensor's detection range lacks the motion persistence characteristic, that obstacle is designated a reference movement obstacle, the robot is controlled to execute a static obstacle avoidance strategy, i.e. to stop travelling, and the moment at which the stopped state is released is determined according to the behaviour of the reference movement obstacle.
Specifically, if a reference movement obstacle with uncertain movement exists in the detection range of the robot, the robot is controlled to stop waiting, and a subsequent obstacle avoidance strategy is determined according to the updated movement state of the reference movement obstacle.
In one possible embodiment, after controlling the robot to make a stationary obstacle avoidance, the method further comprises:
monitoring the speed information of the movement obstacle lacking the motion persistence characteristic;
if the movement obstacle is determined to meet at least one of the following conditions: it is stationary, it has the motion persistence characteristic, it leaves the operation range of the robot, or it enters the blind area, then the robot is controlled to perform movement obstacle avoidance.
After the robot stops to wait, the reference movement obstacle lacking the motion persistence feature is continuously monitored; if it is detected to be stationary, to have acquired the motion persistence feature, to have left the robot's operation range, or to have entered the blind area, the movement obstacle avoidance strategy is resumed. For example, if the reference movement obstacle becomes stationary, it is reclassified as a static obstacle and its position is memorized, to avoid obstacle avoidance errors when the static obstacle later falls inside the blind area as the robot travels. If the reference movement obstacle acquires the motion persistence characteristic, the robot is controlled to avoid it according to the movement obstacle avoidance strategy of step S130. If the reference movement obstacle leaves the robot's operation range, the robot is controlled to avoid obstacles according to the motion characteristics of the remaining movement obstacles. If the reference movement obstacle enters the blind area, the robot treats the current blind area as a travel prohibition area, determines the expected blind area departure time from the historical motion data of the reference movement obstacle, and lifts the travel restriction on the blind area once the expected departure time has passed.
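The four release conditions and their follow-up behaviours can be summarized as a small dispatch table. This is a sketch with all names invented for illustration; the real controller is more involved.

```python
from enum import Enum, auto

class ObstacleEvent(Enum):
    BECAME_STATIC = auto()         # memorize position as a static obstacle
    GAINED_PERSISTENCE = auto()    # fit trajectory and avoid as in step S130
    LEFT_OPERATION_RANGE = auto()  # plan around the remaining obstacles
    ENTERED_BLIND_AREA = auto()    # forbid blind area until expected departure

def next_strategy(event: ObstacleEvent) -> str:
    """Follow-up behaviour once the monitored reference obstacle changes
    state; until one of these events occurs the robot keeps waiting."""
    return {
        ObstacleEvent.BECAME_STATIC: "memorize static obstacle, resume",
        ObstacleEvent.GAINED_PERSISTENCE: "movement obstacle avoidance (S130)",
        ObstacleEvent.LEFT_OPERATION_RANGE: "avoid remaining obstacles",
        ObstacleEvent.ENTERED_BLIND_AREA: "prohibit blind area, timed release",
    }[event]
```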
Because the trajectory of a movement obstacle without the motion persistence feature follows no fixed pattern, and to prevent such an obstacle from endangering the robot's operation, the robot executes the static obstacle avoidance strategy whenever such an obstacle exists within its operation range, avoiding the safety risk of the obstacle suddenly entering the blind area.
According to the technical scheme provided by the embodiment of the invention, the movement persistence characteristic of the movement obstacle is determined through the speed information of the movement obstacle, so that the robot is controlled to execute different obstacle avoidance strategies according to the running condition of the movement obstacle, and the safe running of the robot when executing tasks is ensured.
Example two
Fig. 2 is a flowchart of a robot obstacle avoidance method based on a blind area obstacle according to a second embodiment of the present invention, and the present embodiment further describes "control robot performs motion obstacle avoidance according to an overlapping range of a predicted motion track of a motion obstacle and a blind area" in the above embodiment. As shown in fig. 2, the method includes:
s210, dividing the operation range of the robot to obtain a plurality of candidate areas.
The robot operation range refers to the preset area in which the robot travels. For example, the task execution scene is determined as the robot operation range according to the robot's task information: if the robot executes a goods-picking task, the goods placement area is the robot operation range.
Specifically, the operation range of the robot is acquired and an occupancy grid map of the area is generated. For example, assume the placement area is W metres wide and L metres long and each candidate area is a square cell with a side length of 0.01 m; the occupancy grid map of the placement area then consists of 10000 × W × L cells. When the occupancy grid map is generated, each cell is given an initial occupancy parameter of 0; the occupancy parameter characterizes the probability that the cell is occupied by an obstacle.
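A sketch of the grid construction under the dimensions in the example (cell side 0.01 m, so a W × L m area yields 10000·W·L cells); the axis-aligned origin convention is an assumption.

```python
import numpy as np

CELL_SIZE = 0.01  # m, cell side length from the example above

def make_occupancy_grid(width_m: float, length_m: float) -> np.ndarray:
    """All cells start at occupancy 0: no evidence of an obstacle yet."""
    return np.zeros((int(round(length_m / CELL_SIZE)),
                     int(round(width_m / CELL_SIZE))), dtype=np.float32)

def world_to_cell(x: float, y: float, origin=(0.0, 0.0)):
    """Map a global-frame position to its (row, col) cell index."""
    row = int((y - origin[1]) / CELL_SIZE)
    col = int((x - origin[0]) / CELL_SIZE)
    return row, col
```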
S220, determining a target area from the candidate areas according to the overlapping range of the predicted motion trail and the blind area of the motion obstacle, and determining the occupation parameter of the target area as a first value.
The position of the movement obstacle at a future moment is determined from its predicted motion trajectory, the candidate areas corresponding to that position are designated the target area, and the occupancy parameter of the target area is set to a first value, the first value being greater than 1 so as to represent the probability that the area is occupied by the obstacle.
For example, for a movement obstacle entering the field-of-view blind area of a 3D camera, the position of the obstacle is estimated from its predicted motion trajectory, and the obstacle placed at that estimated position is called a virtual obstacle. The virtual obstacle is converted into the map coordinate system by a coordinate transformation, and the candidate areas occupied by the virtual obstacle are assigned a large occupancy parameter to represent the obstacle occupancy of those areas inside the blind area.
S230, determining an obstacle avoidance area of the robot according to a comparison result of the occupation parameter of the candidate area and a preset parameter threshold, and controlling the robot to move to avoid the obstacle according to the obstacle avoidance area.
The first value is greater than or equal to the preset parameter threshold. By assigning the candidate areas occupied by the virtual obstacle in the blind area an occupancy parameter no smaller than the preset parameter threshold, the robot avoids, while travelling, the areas that may be occupied by the movement obstacle inside the blind area, further improving the robot's travel safety within the blind area.
Specifically, if the occupation parameter of the candidate area in the blind area is greater than a preset parameter threshold, the area is set as a robot travel prohibition area, and the robot determines a travel path according to the detection result of the current sensor and the travel prohibition area, so that safe travel under the condition that the detection blind area exists is realized.
Because the robot cannot detect the specific situation inside the blind area, a future travel path determined directly from the point cloud information currently detected by the sensor could pass through the blind area; since the state of obstacles inside the blind area is unknown, this could compromise the robot's travel safety.
For example, if the movement obstacle is detected to have left the robot operation area, or has been inside the blind area longer than the expected blind area departure time, the occupancy parameter of the target area is reset to its initial value. The expected blind area departure time is determined from the historical motion data of the movement obstacle.
In one possible embodiment, before the sensor acquires the current frame point cloud data, the method further includes:
determining obstacle position information of the movement obstacle according to the historical frame point cloud information;
determining a reference area from the candidate areas according to the obstacle position information, and determining the occupation parameter of the reference area as a second numerical value; wherein the second value is less than the first value;
Correspondingly, after the sensor acquires the current frame point cloud data, the method further comprises the steps of:
determining updated obstacle position information of the movement obstacle according to the current frame point cloud information;
and updating the occupation parameters of the reference area according to the updated obstacle position information so that the robot can determine the obstacle avoidance area of the robot according to the occupation parameters of the reference area.
After the sensor acquires the historical frame point cloud data, a reference area is determined from the positions of the movement obstacles detected in that data, and the occupancy parameter of the reference area is set to a second value, for example 1. The second value is smaller than the first value: the first value represents a prediction of an obstacle inside the blind area, so to keep the robot safe there it is set greater than or equal to the preset parameter threshold, whereas the reference area records an actually observed occupancy, so the second value is smaller than the preset parameter threshold.
After the sensor acquires the current frame point cloud data, the updated reference area is determined from the positions of the movement obstacles detected in the current frame. If a reference area from the historical frames does not appear in the updated reference area, a third value is subtracted from its occupancy parameter, the third value being smaller than or equal to the second value. If a reference area reappears in the updated reference area, the third value is added to its occupancy parameter. If an area of the updated reference area did not appear among the historical reference areas, its occupancy parameter is set to the second value. The occupancy parameters of the candidate areas are thus continuously updated from the new current frame point cloud information acquired by the sensor, so that they represent the probability of occupancy by an obstacle. When a candidate area falls inside the blind area as the robot travels, its occupancy parameter indicates how likely the area is to be occupied. If the occupancy parameter of a candidate area inside the blind area is greater than the preset parameter threshold, the area was probably occupied by an obstacle in the historical frames and may still be occupied inside the blind area, so it is designated a travel prohibition area for the robot. Because the occupancy parameters are updated from the point cloud information the robot keeps acquiring, the accuracy and safety of travelling according to the blind area occupancy parameters are improved.
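The update rule above can be sketched as set operations over the cells reported occupied in the previous and current frames. The concrete numbers are illustrative assumptions: the patent only requires the second value to be below the threshold, the first value at or above it, and the third value at most the second.

```python
import numpy as np

FIRST_VALUE = 100.0   # virtual obstacle predicted inside the blind area
SECOND_VALUE = 1.0    # obstacle actually observed in a frame
THIRD_VALUE = 1.0     # per-frame increment/decrement (<= SECOND_VALUE)
THRESHOLD = 50.0      # occupancy above this marks a no-travel cell

def update_reference_cells(grid: np.ndarray, prev_cells, curr_cells):
    """Per-frame occupancy evidence update for observed obstacle cells."""
    prev_cells, curr_cells = set(prev_cells), set(curr_cells)
    for cell in curr_cells - prev_cells:        # newly occupied
        grid[cell] = SECOND_VALUE
    for cell in curr_cells & prev_cells:        # occupied again
        grid[cell] += THIRD_VALUE
    for cell in prev_cells - curr_cells:        # no longer occupied
        grid[cell] = max(0.0, grid[cell] - THIRD_VALUE)

def forbidden_cells(grid: np.ndarray) -> np.ndarray:
    """Cells the planner must treat as travel prohibition areas."""
    return np.argwhere(grid > THRESHOLD)
```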
Optionally, static obstacle position information is determined from the acquired current frame and historical frame point cloud information, a static area is determined from the candidate areas according to the static obstacle positions, and the occupancy parameter of the static area is set to the first value. The occupancy parameters of static areas are not updated; in particular, the occupancy parameters of static areas within a preset distance of the robot are not updated. Because the blind area lies at close range to the sensor, obstacles at close range cannot be perceived accurately, so the occupancy parameters of static areas within the preset distance are left unchanged to avoid safety problems caused by untimely updates. The preset distance is determined by the sensor's detection range and is typically set to 0.1–0.5 m.
By determining occupancy parameters for the candidate areas inside the blind area, the embodiment of the invention lets the robot travel through the blind area while avoiding the risk of colliding with an obstacle there, ensuring travel safety, improving the utilization of the robot's travel space, and avoiding setting the entire blind area as a travel prohibition area. Updating the occupancy parameters of the candidate areas across the operation area further improves the safety of the robot's overall operation.
Example III
Fig. 3 is a schematic structural diagram of a robot obstacle avoidance device based on a blind area obstacle according to a third embodiment of the present invention, where at least one sensor for acquiring environmental point cloud information is configured on the robot. As shown in fig. 3, the apparatus includes:
an obstacle speed determining module 310, configured to determine speed information of a moving obstacle in a robot operation range according to the current frame point cloud information and the historical frame point cloud information acquired by the sensor;
an obstacle movement characteristic determining module 320, configured to determine whether the movement obstacle has a movement continuation characteristic according to the speed information of the movement obstacle; if yes, then the motion obstacle avoidance module 330 is executed, otherwise, the stationary obstacle avoidance module 340 is executed.
The motion obstacle avoidance module 330 is configured to control the robot to perform motion obstacle avoidance according to an overlapping range of a predicted motion trajectory of the motion obstacle and a blind area if the motion obstacle has a motion persistence feature; the blind area is determined according to the detection range of the sensor and the operation range of the robot;
the static obstacle avoidance module 340 is configured to control the robot to perform static obstacle avoidance if the movement obstacle does not have a movement continuation feature.
optionally, the speed information includes a speed direction and a speed size;
correspondingly, the obstacle movement characteristic determining module comprises:
a change information determining unit, configured to determine direction change information of the movement obstacle in multi-frame point cloud information according to a speed direction of the movement obstacle in the current frame point cloud information and the historical frame point cloud information, and determine speed change information of the movement obstacle in multi-frame point cloud information according to a speed of the movement obstacle in the current frame point cloud information and the historical frame point cloud information;
and the motion continuation characteristic determining unit is used for determining whether the motion obstacle has the motion continuation characteristic according to the direction change information and the speed change information of the motion obstacle in the multi-frame point cloud information.
Optionally, the direction change information of the movement obstacle is determined according to the following formula:
CLth_i = 100 - 100 * |vth_{i+1} - vth_i| / π
where CLth_i represents the direction change information of the movement obstacle in the i-th frame point cloud information, and vth_i represents the speed direction of the movement obstacle in the i-th frame point cloud information.
The rate change information of the movement obstacle is determined according to the following formula:
CLx_i = 100 - 100 * |vx_{i+1} - vx_i| / vx_i
where CLx_i represents the rate change information of the movement obstacle in the i-th frame point cloud information, and vx_i represents the speed magnitude of the movement obstacle in the i-th frame point cloud information.
Optionally, the motion continuation feature determination unit is specifically configured to:
determining a motion state parameter of the motion obstacle in the target frame point cloud information according to the direction change information and the speed change information of the motion obstacle in the target frame point cloud information;
determining a motion persistence parameter of the motion obstacle according to the motion state parameter of the motion obstacle in the multi-frame point cloud information;
if the motion persistence parameter is greater than a preset persistence threshold value, determining that the motion barrier has motion persistence characteristics;
wherein the motion state parameter is determined according to the following formula:
CL_i = k * CLth_i + (1 - k) * CLx_i
where CL_i represents the motion state parameter of the movement obstacle in the i-th frame point cloud information, k represents the weight of the speed direction, CLth_i represents the direction change information of the movement obstacle in the i-th frame point cloud information, and CLx_i represents the rate change information of the movement obstacle in the i-th frame point cloud information;
and the motion persistence parameter is determined according to the following formula:
CL = (1/n) * Σ_{i=1}^{n} CL_i
where CL represents the motion persistence parameter of the movement obstacle, and n is the total number of frames of point cloud information for which motion state parameters are determined.
Optionally, the device further comprises a static obstacle avoidance release module, configured to monitor, after the robot is controlled to perform static obstacle avoidance, the speed information of the movement obstacle lacking the motion persistence characteristic;
and, if the movement obstacle is determined to meet at least one of the following conditions: it is stationary, it has the motion persistence characteristic, it leaves the operation range of the robot, or it enters the blind area, to control the robot to perform movement obstacle avoidance.
Optionally, the apparatus further includes a region dividing module, configured to:
before controlling the robot to move according to the overlapping range of the predicted movement track of the movement obstacle and the blind area to avoid the obstacle, dividing the running range of the robot to obtain a plurality of candidate areas;
correspondingly, the motion obstacle avoidance module is specifically used for:
determining a target area from the candidate areas according to the overlapping range, and determining the occupation parameter of the target area as a first numerical value;
determining an obstacle avoidance area of the robot according to a comparison result of the occupation parameter of the candidate area and a preset parameter threshold value, and controlling the robot to move and avoid an obstacle according to the obstacle avoidance area; wherein the first value is greater than or equal to the preset parameter threshold.
Optionally, the device further includes an area occupation parameter determining module, configured to determine, before the sensor acquires the current frame point cloud data, obstacle location information of the movement obstacle according to the historical frame point cloud information;
determining a reference area from the candidate areas according to the obstacle position information, and determining the occupation parameter of the reference area as a second numerical value; wherein the second value is less than the first value;
Correspondingly, the device further comprises an area occupancy parameter updating module, configured to:
determine, after the sensor acquires the current frame point cloud data, updated obstacle position information of the movement obstacle according to the current frame point cloud information;
and updating the occupation parameters of the reference area according to the updated obstacle position information so that the robot can determine the obstacle avoidance area of the robot according to the occupation parameters of the reference area.
The robot obstacle avoidance device based on the blind area obstacle provided by the embodiment of the invention can execute the robot obstacle avoidance method based on the blind area obstacle provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
The acquisition, storage, use and processing of data in the technical scheme comply with the relevant provisions of national laws and regulations and do not violate public order and good morals.
Example IV
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 4 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 4, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13. The memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the ROM 12 or loaded from the storage unit 18 into the RAM 13. The RAM 13 may also store various programs and data required for the operation of the electronic device 10. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be any of various general and/or special purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any other suitable processor, controller, or microcontroller. The processor 11 performs the various methods and processes described above, such as the robot obstacle avoidance method based on the blind area obstacle.
In some embodiments, the robot obstacle avoidance method based on the blind area obstacle may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the robot obstacle avoidance method based on the blind area obstacle described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the robot obstacle avoidance method based on the blind area obstacle by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above can be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, a host product in the cloud computing service system that overcomes the drawbacks of high management difficulty and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.
Claims (10)
1. A robot obstacle avoidance method based on a blind area obstacle, characterized in that at least one sensor for acquiring environment point cloud information is arranged on the robot, the method comprising:
determining speed information of the movement obstacle in the running range of the robot according to the current frame point cloud information and the historical frame point cloud information acquired by the sensor;
determining whether the movement obstacle has a movement persistence characteristic according to the speed information of the movement obstacle;
if yes, controlling the robot to perform motion obstacle avoidance according to the overlapping range of the predicted movement track of the movement obstacle and the blind area, wherein the blind area is determined according to the detection range of the sensor and the running range of the robot;
otherwise, controlling the robot to perform static obstacle avoidance.
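As an illustration of the overall flow of claim 1, the following is a minimal Python sketch; the centroid-based velocity estimate and all names are assumptions standing in for the per-cluster tracking an actual implementation would use.

```python
import numpy as np

def estimate_velocity(prev_points, curr_points, dt):
    """Estimate an obstacle's velocity from the centroid shift between two
    point cloud frames (a simplification of real per-cluster tracking)."""
    v = (curr_points.mean(axis=0) - prev_points.mean(axis=0)) / dt
    speed = float(np.linalg.norm(v))           # speed magnitude, vx_i in claim 3
    direction = float(np.arctan2(v[1], v[0]))  # speed direction, vth_i in claim 3
    return speed, direction

def choose_strategy(persistent):
    """The branch of claim 1: motion obstacle avoidance when the movement
    persistence characteristic is present, static obstacle avoidance otherwise."""
    return "motion_avoidance" if persistent else "static_avoidance"
```

For example, two frames taken 0.1 s apart whose obstacle centroids differ by 0.05 m give a speed estimate of 0.5 m/s; the persistence test of claims 2-4 then decides which branch is taken.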
2. The method of claim 1, wherein the speed information includes a speed direction and a speed magnitude;
correspondingly, determining whether the movement obstacle has the movement persistence characteristic according to the speed information of the movement obstacle comprises:
determining direction change information of the movement obstacle in multi-frame point cloud information according to the speed direction of the movement obstacle in the current frame point cloud information and the historical frame point cloud information, and determining rate change information of the movement obstacle in the multi-frame point cloud information according to the speed magnitude of the movement obstacle in the current frame point cloud information and the historical frame point cloud information;
and determining whether the movement obstacle has the movement persistence characteristic according to the direction change information and the rate change information of the movement obstacle in the multi-frame point cloud information.
3. The method according to claim 2, wherein the direction change information of the movement obstacle is determined according to the following formula:
CLth_i = 100 - 100 * |vth_(i+1) - vth_i| / π;
wherein CLth_i represents the direction change information of the movement obstacle in the i-th frame point cloud information, and vth_i represents the speed direction of the movement obstacle in the i-th frame point cloud information;
the rate change information of the movement obstacle is determined according to the following formula:
CLx_i = 100 - 100 * |vx_(i+1) - vx_i| / vx_i;
wherein CLx_i represents the rate change information of the movement obstacle in the i-th frame point cloud information, and vx_i represents the speed magnitude of the movement obstacle in the i-th frame point cloud information.
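The two formulas of claim 3 translate directly into code. This sketch assumes speed directions are expressed in radians and that vx_i is nonzero, as the formulas themselves require.

```python
import math

def direction_change(vth_next, vth_curr):
    """CLth_i = 100 - 100 * |vth_(i+1) - vth_i| / pi; equals 100 for an
    unchanged heading and decreases as the heading swings."""
    return 100.0 - 100.0 * abs(vth_next - vth_curr) / math.pi

def rate_change(vx_next, vx_curr):
    """CLx_i = 100 - 100 * |vx_(i+1) - vx_i| / vx_i; equals 100 for an
    unchanged speed (vx_curr must be nonzero)."""
    return 100.0 - 100.0 * abs(vx_next - vx_curr) / vx_curr
```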
4. The method according to claim 3, wherein determining whether the movement obstacle has the movement persistence characteristic according to the direction change information and the rate change information of the movement obstacle in the multi-frame point cloud information comprises:
determining a motion state parameter of the movement obstacle in target frame point cloud information according to the direction change information and the rate change information of the movement obstacle in the target frame point cloud information;
determining a motion persistence parameter of the movement obstacle according to the motion state parameters of the movement obstacle in the multi-frame point cloud information;
if the motion persistence parameter is greater than a preset persistence threshold, determining that the movement obstacle has the movement persistence characteristic;
wherein the motion state parameter is determined according to the following formula:
CL_i = k * CLth_i + (1 - k) * CLx_i;
wherein CL_i represents the motion state parameter of the movement obstacle in the i-th frame point cloud information, k represents the weight of the speed direction, CLth_i represents the direction change information of the movement obstacle in the i-th frame point cloud information, and CLx_i represents the rate change information of the movement obstacle in the i-th frame point cloud information;
the motion persistence parameter is determined according to the following formula:
CL = (CL_1 + CL_2 + ... + CL_n) / n;
wherein CL represents the motion persistence parameter of the movement obstacle, and n is the total number of frames of point cloud information used for determining the motion state parameters.
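A minimal Python sketch of claim 4 follows. The weight k and the persistence threshold are illustrative values, and the mean in motion_persistence reflects the reconstruction of the formula above rather than text reproduced verbatim from the publication.

```python
def motion_state(clth_i, clx_i, k=0.5):
    """CL_i = k * CLth_i + (1 - k) * CLx_i: weight direction stability
    against rate stability (k = 0.5 is an illustrative choice)."""
    return k * clth_i + (1.0 - k) * clx_i

def motion_persistence(cl_values):
    """Aggregate the per-frame motion state parameters over n frames
    (assumed here to be their mean)."""
    return sum(cl_values) / len(cl_values)

def has_movement_persistence(cl_values, threshold=80.0):
    """Claim 4's test: the movement persistence characteristic is present
    if CL exceeds the preset persistence threshold (value illustrative)."""
    return motion_persistence(cl_values) > threshold
```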
5. The method according to claim 1, wherein after controlling the robot to perform static obstacle avoidance, the method further comprises:
monitoring the speed information of the movement obstacle that does not have the movement persistence characteristic;
if the movement obstacle is determined to meet at least one of the following conditions, controlling the robot to perform motion obstacle avoidance: the movement obstacle becomes stationary, acquires the movement persistence characteristic, leaves the running range of the robot, or enters the blind area.
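Claim 5's trigger reads as a disjunction of four conditions. A hypothetical sketch, with every field name invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ObstacleState:
    speed: float            # current speed estimate, m/s
    has_persistence: bool   # outcome of the claim 4 test
    in_running_range: bool  # inside the robot's running range
    in_blind_zone: bool     # inside the sensor blind area

def should_switch_to_motion_avoidance(s: ObstacleState, eps: float = 1e-3) -> bool:
    """Any single condition of claim 5 suffices to end static obstacle
    avoidance and control the robot to perform motion obstacle avoidance."""
    return (s.speed < eps
            or s.has_persistence
            or not s.in_running_range
            or s.in_blind_zone)
```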
6. The method according to claim 1, wherein before controlling the robot to perform motion obstacle avoidance according to the overlapping range of the predicted movement track of the movement obstacle and the blind area, the method further comprises:
dividing the operation range of the robot to obtain a plurality of candidate areas;
correspondingly, controlling the robot to perform motion obstacle avoidance according to the overlapping range of the predicted movement track of the movement obstacle and the blind area comprises:
determining a target area from the candidate areas according to the overlapping range, and determining the occupation parameter of the target area as a first numerical value;
determining an obstacle avoidance area of the robot according to a comparison result between the occupation parameters of the candidate areas and a preset parameter threshold, and controlling the robot to perform motion obstacle avoidance according to the obstacle avoidance area; wherein the first value is greater than or equal to the preset parameter threshold.
7. The method of claim 6, wherein prior to the sensor acquiring current frame point cloud data, the method further comprises:
determining obstacle position information of the movement obstacle according to the historical frame point cloud information;
determining a reference area from the candidate areas according to the obstacle position information, and determining the occupation parameter of the reference area as a second numerical value; wherein the second value is less than the first value;
Correspondingly, after the sensor acquires the current frame point cloud data, the method further comprises:
determining updated obstacle position information of the movement obstacle according to the current frame point cloud information;
and updating the occupation parameters of the reference area according to the updated obstacle position information so that the robot can determine the obstacle avoidance area of the robot according to the occupation parameters of the reference area.
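Continuing the illustrative OccupancyGrid sketch given after the device description above, the per-frame cycle of claim 7 might look like this (the positions are invented sample data):

```python
import numpy as np

# Assumes the OccupancyGrid class from the earlier sketch is in scope.
grid = OccupancyGrid(x_range=(0.0, 10.0), y_range=(0.0, 10.0), cell_size=0.5)

# Before the current frame arrives: reference areas from the historical frames.
grid.mark_reference(np.array([[2.0, 3.0], [2.2, 3.1]]))

# After the current frame arrives: update the reference areas with the
# obstacle's new position (a real implementation would also clear or decay
# cells the obstacle has left).
grid.mark_reference(np.array([[2.5, 3.4]]))

# The planner reads the obstacle avoidance area off the grid.
avoid_cells = grid.avoidance_areas()
```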
8. A robot obstacle avoidance device based on a blind area obstacle, characterized in that at least one sensor for acquiring environmental point cloud information is arranged on the robot, the device comprising:
the obstacle speed determining module is used for determining speed information of the moving obstacle in the running range of the robot according to the current frame point cloud information and the historical frame point cloud information acquired by the sensor;
the obstacle movement characteristic determining module is used for determining whether the movement obstacle has the movement persistence characteristic according to the speed information of the movement obstacle;
the motion obstacle avoidance module is used for controlling the robot to perform motion obstacle avoidance according to the overlapping range of the predicted movement track of the movement obstacle and the blind area if the movement obstacle has the movement persistence characteristic, wherein the blind area is determined according to the detection range of the sensor and the running range of the robot;
and the static obstacle avoidance module is used for controlling the robot to perform static obstacle avoidance if the movement obstacle does not have the movement persistence characteristic.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the robot obstacle avoidance method based on the blind area obstacle according to any one of claims 1-7.
10. A computer readable storage medium storing computer instructions for causing a processor to implement the robot obstacle avoidance method based on the blind area obstacle according to any one of claims 1-7 when executed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310601378.6A CN116560373A (en) | 2023-05-25 | 2023-05-25 | Robot obstacle avoidance method, device, equipment and medium based on blind area obstacle |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116560373A true CN116560373A (en) | 2023-08-08 |
Family
ID=87501727
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310601378.6A Pending CN116560373A (en) | 2023-05-25 | 2023-05-25 | Robot obstacle avoidance method, device, equipment and medium based on blind area obstacle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116560373A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117148837A (en) * | 2023-08-31 | 2023-12-01 | 上海木蚁机器人科技有限公司 | Dynamic obstacle determination method, device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112526999B (en) | Speed planning method, device, electronic equipment and storage medium | |
CN114771572A (en) | Automatic driving track prediction method, device, equipment and storage medium | |
CN114815851A (en) | Robot following method, robot following device, electronic device, and storage medium | |
CN114987546A (en) | Training method, device and equipment of trajectory prediction model and storage medium | |
CN116560373A (en) | Robot obstacle avoidance method, device, equipment and medium based on blind area obstacle | |
EP4145408A1 (en) | Obstacle detection method and apparatus, autonomous vehicle, device and storage medium | |
CN114030483B (en) | Vehicle control method, device, electronic equipment and medium | |
CN115139303A (en) | Grid well lid detection method, device, equipment and storage medium | |
CN116499487B (en) | Vehicle path planning method, device, equipment and medium | |
CN117168488A (en) | Vehicle path planning method, device, equipment and medium | |
CN116358584A (en) | Automatic driving vehicle path planning method, device, equipment and medium | |
CN114919570A (en) | Parking obstacle avoidance method and device, electronic equipment and storage medium | |
CN114842305A (en) | Depth prediction model training method, depth prediction method and related device | |
CN116879921A (en) | Laser radar sensing method, device, equipment and medium based on grid | |
CN114475657B (en) | Control method and device for automatic driving vehicle and electronic equipment | |
CN114590248B (en) | Method and device for determining driving strategy, electronic equipment and automatic driving vehicle | |
CN117589188B (en) | Driving path planning method, driving path planning device, electronic equipment and storage medium | |
CN117109595B (en) | Ship channel planning method and device, electronic equipment and storage medium | |
CN118605529A (en) | Automatic driving control method and device, electronic equipment and storage medium | |
CN118269965A (en) | Collision cost determination method, device, equipment and medium | |
CN116224991A (en) | Motion state identification method, device, equipment and storage medium | |
CN116930914A (en) | Anti-drop detection method, device, equipment and medium for unmanned vehicle | |
CN116503831A (en) | Obstacle screening method, obstacle screening device, electronic equipment and storage medium | |
CN118683572A (en) | Automatic driving method, device, equipment and medium for vehicle | |
CN117742243A (en) | Equipment action collision prediction method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||