SUMMARY OF THE UTILITY MODEL
Therefore, in order to solve the above technical problems, it is necessary to provide a navigation and positioning system for an unmanned vehicle that addresses the problems of inflexible driving routes and the high cost of unmanned vehicles.
An embodiment of the utility model provides an unmanned vehicle navigation and positioning system, which comprises:
a processing device, at least one unmanned vehicle, and a plurality of image acquisition devices arranged above the working area of the unmanned vehicle,
wherein the image acquisition devices are communicatively connected to the processing device, and a two-dimensional code label is attached to the unmanned vehicle;
the image acquisition devices are configured to acquire a two-dimensional code label image of the unmanned vehicle when the unmanned vehicle passes through their acquisition ranges and to send the two-dimensional code label image to the processing device; and the processing device is configured to determine the position of the unmanned vehicle according to the two-dimensional code label image.
In one embodiment, the plurality of image acquisition devices are evenly spaced above the working area of the unmanned vehicle.
In one embodiment, the system further comprises a plurality of monitoring devices; at least some of the monitoring devices are reused as the image acquisition devices.
In one embodiment, the acquisition ranges of a plurality of said image acquisition devices do not overlap.
In one embodiment, the acquisition ranges of at least some of the image acquisition devices overlap;
the processing device is further configured to, when two-dimensional code label images of the same unmanned vehicle acquired by different image acquisition devices are received, calculate the position of the unmanned vehicle from each of the two-dimensional code label images, and to determine that positioning has failed when the calculated positions differ.
In one embodiment, the processing device is further configured to trigger the image acquisition device to acquire the two-dimensional code label image again after determining that positioning has failed.
In one embodiment, the acquisition ranges of at least a portion of the image acquisition devices overlap;
the processing device is further configured to, when two-dimensional code label images of the same unmanned vehicle acquired by different image acquisition devices are received, calculate the position of the unmanned vehicle from each of the two-dimensional code label images, and, when the difference between any two calculated positions is smaller than a preset threshold value, to take the average of the calculated positions as the positioning position of the unmanned vehicle.
In one embodiment, the unmanned vehicle comprises an automated guided vehicle.
In one embodiment, the two-dimensional code label is attached to the roof of the unmanned vehicle.
According to the unmanned vehicle navigation and positioning system provided by the embodiment of the utility model, a two-dimensional code label is attached to the unmanned vehicle, two-dimensional code label images of the unmanned vehicle are acquired by the plurality of image acquisition devices arranged above the working area of the unmanned vehicle, and the processing device then determines the position of the unmanned vehicle according to the two-dimensional code label image. Therefore, the unmanned vehicle only needs to carry a two-dimensional code label and does not need to be fitted with various sensors for assisting positioning; the unmanned vehicle is positioned and sensed in a unified manner by the processing device and the image acquisition devices above the working area, and no tracks need to be arranged in the working area, so the cost is low and the navigation path is flexible.
DETAILED DESCRIPTION
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure; however, the present disclosure may be practiced in ways other than those described herein. It should be understood that the embodiments disclosed in the specification are only some of the embodiments of the present disclosure, and not all of them.
In the transportation management of warehouses and workshops in the prior art, unmanned vehicles are generally adopted in place of manually driven vehicles in order to reduce the demand for workers. An unmanned vehicle system used in a factory is generally equipped with an automatic guiding device, such as an electromagnetic or optical device, and can travel along a predetermined guide path. Because such systems offer a high degree of automation and intelligence, they are used in more and more settings, such as warehousing, logistics and port transportation, bring great convenience to production and transportation, and have good application and development prospects.
Traditional unmanned vehicle navigation and positioning methods mainly include magnetic stripe navigation based on electromagnetic induction, radio frequency tag positioning based on RFID, laser positioning, and the like. Magnetic stripe navigation makes it difficult to change the path, RFID positioning accuracy is insufficient, and laser positioning is costly. To address these problems of positioning accuracy, cost and path flexibility, an embodiment of the utility model provides an unmanned vehicle navigation and positioning system.
Fig. 1 is a schematic structural diagram of an unmanned vehicle navigation and positioning system according to an embodiment of the utility model. As shown in fig. 1, the system includes a processing device 10, at least one unmanned vehicle 20, and a plurality of image acquisition devices 30 disposed above the working area of the unmanned vehicle. The image acquisition devices 30 are all communicatively connected to the processing device 10, for example by wired or wireless communication, for data interaction. A two-dimensional code label 21 is attached to the unmanned vehicle 20. The processing device 10 may be, for example, a terminal processing device or a cloud server. In fig. 1, the embodiment is illustrated by way of example with 9 image acquisition devices 30 and two unmanned vehicles 20.
The two-dimensional code label 21 is attached to the unmanned vehicle 20 in a fixed orientation. For example, as shown in fig. 1, one side of the two-dimensional code label 21 is parallel to the traveling direction of the unmanned vehicle 20 (the direction of the dotted arrow in fig. 1). A two-dimensional code label is a pattern of specific geometric figures, alternating black and white, distributed on a plane (in two dimensions) according to a certain rule, that records data symbol information. Its coding makes use of the '0' and '1' bit-stream concept underlying computer logic, with geometric shapes corresponding to binary values used to represent textual and numerical information. Since the mounting orientation of the two-dimensional code label 21 on the unmanned vehicle is fixed, the position of the unmanned vehicle can be determined once the position of the two-dimensional code label is known.
The plurality of image acquisition devices 30 are disposed above the working area of the unmanned vehicle, for example on the ceiling or in the upper corners of a factory. When the unmanned vehicle 20 passes through the acquisition range of an image acquisition device 30, that device acquires the two-dimensional code label image of the unmanned vehicle and transmits it to the processing device 10. The processing device 10 determines the position of the unmanned vehicle from the two-dimensional code label image.
According to the embodiment of the utility model, no complex sensors need to be fitted to the unmanned vehicle; only the two-dimensional code label needs to be attached. The two-dimensional code label image on the unmanned vehicle is acquired by the plurality of image acquisition devices 30 above the working area, and the processing device can determine the position of the unmanned vehicle from the two-dimensional code label image. Since no complex sensors are required on the unmanned vehicle, the hardware cost of the system can be reduced. In addition, in the embodiment of the utility model, no tracks need to be arranged in the working area: the unmanned vehicle is positioned and sensed in a unified manner by the processing device and the plurality of image acquisition devices above the working area, so the driving track of the unmanned vehicle is more flexible and can meet the requirements of complex paths.
On the basis of the above embodiment, optionally, the image acquisition device 30 may include, for example, a camera. The processing device 10 may determine the position of the unmanned vehicle from the two-dimensional code label image in the following manner: the identity information of the unmanned vehicle is extracted from the two-dimensional code label image using an image processing algorithm, and the position of the two-dimensional code label is calculated from the size and shape of the label in the image using geometric relations. The position of the unmanned vehicle includes information such as its coordinates and orientation.
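By way of a non-limiting illustration only, the identity-extraction step described above could be sketched with OpenCV's built-in QR code detector, assuming the two-dimensional code label encodes the vehicle identity as plain text and that at most one label appears per frame; the function name read_vehicle_tag is hypothetical:

import cv2

def read_vehicle_tag(frame):
    # Decode the two-dimensional code label in one camera frame.
    # Returns (vehicle_id, corners): the decoded identity string and the
    # four corner points of the label in pixel coordinates, or (None, None)
    # if no label is found.
    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(frame)
    if not data or points is None:
        return None, None
    return data, points.reshape(-1, 2)

The corner points returned by the detector can then feed the geometric position calculation described below.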
Specifically, the processing device 10 may determine the position of the unmanned vehicle as follows:
extracting the contour of each position detection pattern in the two-dimensional code label image;
calculating the center coordinate of each position detection pattern from its contour;
calculating the center coordinate and the reference rotation axis of the two-dimensional code label image from the center coordinates of the position detection patterns;
calculating the relative position between the unmanned vehicle corresponding to the two-dimensional code label image and the image acquisition device that acquired the image, from the center coordinate of the two-dimensional code label image and the reference rotation axis;
and calculating the position of the unmanned vehicle corresponding to the two-dimensional code label image from the absolute position of the image acquisition device that acquired the image and the relative position between that unmanned vehicle and that image acquisition device.
A two-dimensional code label image generally contains three position detection patterns. As shown in fig. 2, the three rectangles in the figure are the contours of the three position detection patterns of the two-dimensional code label image, X1, X2 and X3 are the center coordinates of the three position detection patterns, O is the center point of the two-dimensional code label image, and the X1X2 vector is taken as the reference rotation axis vector. The contours of the three position detection patterns can be screened out according to attribute parameters (in pixel units) such as the perimeter, area and size of the contour, and the center coordinate of each contour is then calculated.
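As a minimal, non-limiting sketch of the geometric steps above, the following assumes that the centers X1, X2 and X3 have already been obtained by contour screening, that X1 is the detection pattern adjacent to the other two (so that X2 and X3 lie on the diagonal through the center O), and that the image acquisition device faces straight down so that pixel offsets map to ground offsets by a constant scale; the function names and the meters-per-pixel model are illustrative assumptions:

import numpy as np

def tag_pose_from_detection_patterns(x1, x2, x3, label_side_m, label_side_px):
    # x1, x2, x3: pixel-coordinate centers of the three position detection
    # patterns, with x1 assumed to be the corner pattern adjacent to both
    # others, so that x2 and x3 lie on the diagonal through the label center O.
    # Returns the label center O (pixels), the heading angle of the reference
    # rotation axis X1X2 (radians), and an assumed meters-per-pixel scale.
    x1, x2, x3 = (np.asarray(p, dtype=float) for p in (x1, x2, x3))
    center = (x2 + x3) / 2.0                  # O: midpoint of the X2X3 diagonal
    axis = x2 - x1                            # reference rotation axis vector X1X2
    heading = np.arctan2(axis[1], axis[0])    # orientation of the unmanned vehicle
    scale = label_side_m / label_side_px      # camera assumed to face straight down
    return center, heading, scale

def vehicle_position(camera_xy, image_center_px, label_center_px, scale):
    # Absolute position = absolute position of the image acquisition device
    # plus the pixel offset of the label from the image center, scaled to meters.
    offset = (np.asarray(label_center_px) - np.asarray(image_center_px)) * scale
    return np.asarray(camera_xy, dtype=float) + offset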
In one embodiment, optionally, the plurality of image acquisition devices are evenly spaced above the working area of the unmanned vehicle. So that the two-dimensional code label image on the unmanned vehicle can be acquired accurately anywhere in the working area, the embodiment of the utility model arranges the image acquisition devices at even intervals above the working area. For example, a plurality of image acquisition devices arranged in a matrix covers the whole area above the working area of the unmanned vehicle, with equal distances between adjacent devices.
In an embodiment, optionally, the unmanned vehicle navigation and positioning system provided in the embodiment of the utility model further includes a plurality of monitoring devices, at least some of which are reused as image acquisition devices. By reusing monitoring devices already deployed in the working area of the unmanned vehicle to acquire the two-dimensional code label images, no additional image acquisition devices need to be deployed, which further reduces cost.
In one embodiment, optionally, the acquisition ranges of the plurality of image acquisition devices may not overlap. Provided that the two-dimensional code label image on the unmanned vehicle can be acquired throughout the whole working area, the distance between the image acquisition devices is made as large as possible and their number is reduced, which further reduces the cost of the system.
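Purely as an illustrative aid, the largest spacing at which downward-facing image acquisition devices still leave no gaps in coverage can be estimated from the mounting height and field of view; the square-footprint model and the numerical values below are assumptions, not part of the embodiment:

import math

def max_camera_spacing(mount_height_m, fov_deg):
    # Ground footprint width of a downward-facing image acquisition device,
    # which equals the largest grid spacing that still leaves no gaps between
    # adjacent, non-overlapping acquisition ranges (square footprint assumed).
    return 2.0 * mount_height_m * math.tan(math.radians(fov_deg) / 2.0)

# Illustrative values only: devices mounted 6 m above the floor with a
# 60 degree field of view give roughly 6.9 m between adjacent devices.
print(round(max_camera_spacing(6.0, 60.0), 2))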
In one embodiment, optionally, the acquisition ranges of at least some of the image acquisition devices may be set to overlap. For the sake of navigation and positioning accuracy, the embodiment of the utility model may set the acquisition ranges of at least some of the image acquisition devices to overlap, so that several image acquisition devices can simultaneously acquire the two-dimensional code label image on the same unmanned vehicle. If several image acquisition devices simultaneously acquire two-dimensional code label images of the same unmanned vehicle, the processing device calculates the position of the unmanned vehicle from each of the images and determines that positioning has failed when the calculated positions differ. For example, image acquisition devices A1 and A2 simultaneously acquire two-dimensional code label images of the same unmanned vehicle. The position calculated by the processing device from the image acquired by device A1 is B1, and the position calculated from the image acquired by device A2 is B2. When position B1 differs from position B2, an image acquisition error may have occurred in one of the devices, and positioning is therefore determined to have failed.
In an embodiment, optionally, the processing device is further configured to trigger the image acquisition device to acquire the two-dimensional code label image again after determining that positioning has failed. After a positioning failure, the image acquisition device is triggered to acquire a new two-dimensional code label image, and the position of the unmanned vehicle is determined from the newly acquired image.
In one embodiment, optionally, the acquisition ranges of at least some of the image acquisition devices are set to overlap; the processing device is further configured to, when two-dimensional code label images of the same unmanned vehicle acquired by different image acquisition devices are received, calculate the position of the unmanned vehicle from each of the images, and, when the difference between any two calculated positions is smaller than a preset threshold value, to take the average of the calculated positions as the positioning position of the unmanned vehicle.
If several image acquisition devices simultaneously acquire two-dimensional code label images of the same unmanned vehicle, the processing device calculates the position of the unmanned vehicle from each image. For example, image acquisition devices A1 and A2 simultaneously acquire two-dimensional code label images of the same unmanned vehicle; the position calculated from the image acquired by device A1 is B1 and the position calculated from the image acquired by device A2 is B2. If the difference between B1 and B2 is smaller than the preset threshold value, the results of the image acquisition devices are considered accurate and the position of the unmanned vehicle can be calculated from them, so the average of B1 and B2 is taken as the positioning position of the unmanned vehicle. As another example, image acquisition devices A1, A2 and A3 simultaneously acquire two-dimensional code label images of the same unmanned vehicle, giving calculated positions B1, B2 and B3 respectively. If the differences between B1 and B2, between B1 and B3, and between B2 and B3 are all smaller than the preset threshold value, the average of B1, B2 and B3 is taken as the positioning position of the unmanned vehicle.
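The following minimal sketch, given purely for illustration, covers both behaviors described above for overlapping acquisition ranges: positioning fails if any two position estimates of the same unmanned vehicle differ by at least the preset threshold, and otherwise the average of the estimates is taken as the positioning position. The function name fuse_positions and the 0.10 m threshold are illustrative assumptions:

import itertools
import numpy as np

def fuse_positions(positions, threshold_m=0.10):
    # positions: list of (x, y) estimates of the same unmanned vehicle, one per
    # image acquisition device. Returns the mean position if every pairwise
    # difference is below the preset threshold; returns None (positioning
    # failure, so re-acquisition should be triggered) otherwise.
    pts = [np.asarray(p, dtype=float) for p in positions]
    for a, b in itertools.combinations(pts, 2):
        if np.linalg.norm(a - b) >= threshold_m:
            return None
    return np.mean(pts, axis=0)

# Example: devices A1 and A2 observe the same vehicle.
print(fuse_positions([(3.02, 7.98), (3.05, 8.01)]))  # close estimates -> averaged
print(fuse_positions([(3.02, 7.98), (3.60, 8.40)]))  # disagreement -> None (failure)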
In an embodiment, optionally, the unmanned vehicle in the embodiment of the utility model includes an automated guided vehicle. An Automated Guided Vehicle (AGV) is a key device in intelligent logistics and warehousing systems and can serve as a transfer robot, greatly saving labor cost and improving work efficiency. The unmanned vehicle navigation and positioning system provided by the embodiment of the utility model can accurately position an AGV so that it can move smoothly to a target point to complete delivery, pickup or other tasks.
In one embodiment, optionally, the two-dimensional code label can be attached to the roof of the unmanned vehicle. This prevents the two-dimensional code label from being blocked by other obstacles while the unmanned vehicle travels through its working area, which would otherwise prevent the image acquisition devices above the working area from acquiring the two-dimensional code label image.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.