CN111814752B - Indoor positioning realization method, server, intelligent mobile device and storage medium - Google Patents
Info
- Publication number: CN111814752B
- Application number: CN202010817857.8A
- Authority: CN (China)
- Prior art keywords
- intelligent mobile
- candidate
- positioning
- mobile equipment
- scoring result
- Prior art date
- Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/35—Categorising the entire scene, e.g. birthday party or wedding scene
- G06V20/38—Outdoor scenes
- G06F16/29—Geographical information databases
- G06F18/22—Matching criteria, e.g. proximity measures
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
Abstract
The invention provides an indoor positioning realization method, a server, an intelligent mobile device and a storage medium, wherein the method comprises the following steps: acquiring an identification position list of the identified intelligent mobile devices according to video image data acquired from each monitoring device in a preset space; searching out the intelligent mobile devices to be positioned and obtaining a corresponding candidate position list according to the identification position list and the known spatial positions of the intelligent mobile devices that have already been positioned; and sending the candidate position list to the corresponding intelligent mobile device to be positioned, and completing the positioning of that device according to the matching scoring result it sends back. The invention realizes indoor positioning of intelligent mobile devices, avoids the loss of positioning caused when an intelligent mobile device is manually moved away, and improves the accuracy and effectiveness of indoor positioning.
Description
Technical Field
The invention relates to the technical field of indoor positioning, in particular to an indoor positioning realization method, a server, intelligent mobile equipment and a storage medium.
Background
With the rapid development of science and technology, intelligent mobile devices such as mobile robots and unmanned vehicles are applied in more and more fields, such as industry, agriculture and medical treatment. With this wide application, further intelligentization is an important direction of development, and one key capability is navigation and obstacle avoidance during motion. An essential link when a computer controls an intelligent mobile device in motion is that the computer needs to know where the device is, namely the problem of positioning the intelligent mobile device.
Generally, an intelligent mobile device that depends on laser SLAM or visual SLAM technology suffers from the problems of self-positioning at start-up and of "kidnapping", namely the loss of position after starting up or after being moved by a person. At present, WIFI positioning, UWB positioning, two-dimensional-code positioning, starting from a fixed position and other modes are used to assist in achieving self-positioning at start-up. On the one hand, these methods need additional hardware, which increases the deployment cost of the intelligent mobile device; on the other hand, they increase the difficulty of using it.
Disclosure of Invention
The invention aims to provide an indoor positioning realization method, a server, an intelligent mobile device and a storage medium, so as to realize indoor positioning of the intelligent mobile device, avoid the loss of positioning caused by manually moving the intelligent mobile device away, and improve the accuracy and effectiveness of indoor positioning.
The technical scheme provided by the invention is as follows:
The invention provides an indoor positioning realization method, applied to a server and comprising the following steps:
acquiring an identification position list of the identified intelligent mobile device according to video image data acquired from each monitoring device in a preset space;
according to the identification position list and the known spatial position of the intelligent mobile equipment which is positioned, the intelligent mobile equipment to be positioned is searched out, and a corresponding candidate position list is obtained;
and sending the candidate position list to the corresponding intelligent mobile equipment to be positioned, and completing the positioning of the intelligent mobile equipment to be positioned according to the matching scoring result sent by the intelligent mobile equipment to be positioned.
Further, the step of acquiring the identification position list of the identified intelligent mobile devices according to the video image data acquired from each monitoring device in the preset space includes the steps of:
Acquiring video image data of all monitoring devices distributed in a preset space, performing image recognition on the video image data, and screening out target monitoring devices which are recognized to the intelligent mobile device;
acquiring pose information corresponding to each target monitoring device;
and calculating the position coordinates of the intelligent mobile device according to the pose information corresponding to the target monitoring device, and summarizing all the position coordinates to obtain the identification position list.
Further, the step of sending the candidate location list to the corresponding intelligent mobile device to be located, and completing the location of the intelligent mobile device to be located according to the matching scoring result sent by the intelligent mobile device to be located includes the steps of:
respectively sending the candidate position lists to corresponding intelligent mobile equipment to be positioned;
receiving a preliminary matching scoring result sent by the intelligent mobile equipment to be positioned; the preliminary matching scoring result is obtained by respectively carrying out matching calculation on the intelligent mobile equipment to be positioned according to the candidate information in the candidate position list; the candidate information comprises a candidate floor map and a corresponding candidate position coordinate;
determining the candidate floor map with the maximum score value and its candidate position coordinate as the positioning result corresponding to the intelligent mobile device to be positioned; the positioning result comprises the floor and the position of the intelligent mobile device to be positioned.
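The selection step above can be sketched as follows; the data shapes and field names are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch: the server receives one matching score per
# candidate and keeps the candidate (floor map + coordinate) whose
# preliminary matching score is largest.

def select_positioning_result(scored_candidates):
    """scored_candidates: list of dicts with keys 'floor_map',
    'position', 'score'. Returns the floor and position of the
    best-scoring candidate, or None when the list is empty."""
    if not scored_candidates:
        return None
    best = max(scored_candidates, key=lambda c: c["score"])
    return {"floor": best["floor_map"], "position": best["position"]}
```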
Further, after the step of determining the candidate floor map with the maximum score value and its candidate position coordinate as the positioning result corresponding to the intelligent mobile device to be positioned, the method includes the steps of:
acquiring a verification matching scoring result sent again by the intelligent mobile device after it has moved a preset distance;
if the change between the verification matching scoring result and the preliminary matching scoring result exceeds a threshold value, determining that a positioning error has occurred and repositioning;
the preliminary matching scoring result and the verification matching scoring result are the similarity between the environment sensing data acquired by the intelligent mobile equipment and the corresponding candidate floor map.
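A minimal sketch of the verification check described above, assuming the scores are similarity values and using a purely illustrative threshold:

```python
def needs_repositioning(preliminary_score, verification_score, threshold=0.2):
    """Return True when the score measured after moving the preset
    distance differs from the preliminary score by more than the
    threshold, which the method treats as a positioning error.
    The threshold value here is an assumption for illustration."""
    return abs(verification_score - preliminary_score) > threshold
```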
The invention also provides an indoor positioning realization method, applied to an intelligent mobile device and comprising the following steps:
receiving a candidate position list sent by a server; the candidate position list is obtained by the server by matching and screening the identification position list against the known spatial positions of the intelligent mobile devices that have already been positioned, after the server has obtained the identification position list of the identified intelligent mobile devices from the video image data; the video image data are acquired from each monitoring device in a preset space;
if self-positioning has not been finished, scanning the surrounding environment to obtain environment sensing data, and sequentially loading the candidate information of the candidate position list to perform matching calculation and respectively obtain the preliminary matching scoring results;
determining, according to the preliminary matching scoring results, the candidate floor map with the maximum score value and its candidate position coordinate as the positioning result of the device itself, thereby finishing self-positioning; the positioning result comprises the floor and the position of the device itself.
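The device-side loop — load each candidate in turn, score it against the current scan, keep the best — might be sketched as below; `score_fn` is a hypothetical placeholder for whatever global matching calculation the device uses:

```python
def self_locate(candidate_list, env_scan, score_fn):
    """Device-side sketch: score every candidate (floor map + position)
    against the current environment scan and return the best candidate
    together with its score. Field names are illustrative."""
    best, best_score = None, float("-inf")
    for cand in candidate_list:
        s = score_fn(env_scan, cand["floor_map"], cand["position"])
        if s > best_score:
            best, best_score = cand, s
    return best, best_score
```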
Further, the step of scanning the surrounding environment to obtain environment sensing data if self-positioning has not been finished, and sequentially loading the candidate information of the candidate position list to perform matching calculation and respectively obtain the preliminary matching scoring results, includes the steps of:
scanning the current surrounding environment by means of an environment sensing sensor, acquiring the environment sensing data and extracting environmental features from it; the environment sensing data comprises image observation data and laser observation data;
and respectively carrying out matching calculation on the structural features corresponding to the candidate floor maps in the candidate information and the environmental features to obtain the preliminary matching scoring result.
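The matching calculation between environmental features and a candidate map's structural features is not specified in detail here; as a toy stand-in, one could score the fraction of sensed feature points that land near some map feature point. Function name, feature representation and tolerance are all assumptions:

```python
import math

def match_score(env_features, map_features, tol=0.5):
    """Fraction of environment feature points (x, y) that find a map
    feature within `tol` metres — a toy stand-in for the similarity
    between sensed data and a candidate floor map."""
    if not env_features:
        return 0.0
    hits = 0
    for ex, ey in env_features:
        if any(math.hypot(ex - mx, ey - my) <= tol for mx, my in map_features):
            hits += 1
    return hits / len(env_features)
```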
Further, after determining, according to the preliminary matching scoring results, the candidate floor map with the maximum score value and its candidate position coordinate as the positioning result, the method includes the steps of:
scanning again to obtain environment sensing data after moving a preset distance, and calculating a verification matching scoring result;
if the change between the verification matching scoring result and the preliminary matching scoring result exceeds a threshold value, determining that a positioning error has occurred, returning to the to-be-positioned state and repositioning;
the preliminary matching scoring result and the verification matching scoring result are the similarity between the environment sensing data acquired by the intelligent mobile equipment and the corresponding candidate floor map.
The invention also provides a server, which comprises a processor, a memory and a computer program stored in the memory and capable of running on the processor, wherein the processor is used for executing the computer program stored in the memory to realize the operation executed by the indoor positioning realization method.
The present invention also provides a storage medium having at least one instruction stored therein, the instruction being loaded and executed by a processor to implement operations performed by the indoor positioning implementation method.
The invention also provides intelligent mobile equipment, which comprises a processor, a memory and a computer program stored in the memory and capable of running on the processor, wherein the processor is used for executing the computer program stored in the memory to realize the operation executed by the indoor positioning realization method.
The indoor positioning realization method, the server, the intelligent mobile device and the storage medium described above realize indoor positioning of the intelligent mobile device, avoid the loss of positioning caused by manually moving the intelligent mobile device away, and improve the accuracy and effectiveness of indoor positioning.
Drawings
The above features, technical features, advantages and implementation manners of the indoor positioning implementation method, the server, the intelligent mobile device and the storage medium will be further described in a clear and understandable manner by describing the preferred embodiments with reference to the accompanying drawings.
FIG. 1 is a flow chart of one embodiment of a method for implementing indoor positioning of the present invention;
FIG. 2 is a flow chart of another embodiment of a method for implementing indoor positioning according to the present invention;
FIG. 3 is a schematic representation of the conversion of a camera coordinate system, a world coordinate system, and an imaging plane coordinate system;
fig. 4 is a flowchart of another embodiment of an indoor positioning implementation method of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
For the sake of simplicity of the drawing, the parts relevant to the present invention are shown only schematically in the figures, which do not represent the actual structure thereof as a product. Additionally, in order to simplify the drawing for ease of understanding, components having the same structure or function in some of the drawings are shown schematically with only one of them, or only one of them is labeled. Herein, "a" means not only "only this one" but also "more than one" case.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
In addition, in the description of the present application, the terms "first," "second," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will explain the specific embodiments of the present invention with reference to the accompanying drawings. It is evident that the drawings in the following description are only examples of the invention, from which other drawings and other embodiments can be obtained by a person skilled in the art without inventive effort.
In one embodiment of the present invention, as shown in fig. 1, an indoor positioning implementation method is applied to a server, and includes the steps of:
s110, acquiring an identification position list of the identified intelligent mobile device according to video image data acquired from each monitoring device in a preset space;
Specifically, preset spaces such as hospitals, office buildings and shopping malls are the active areas of intelligent mobile devices. In order to better protect property safety and to provide good video evidence for restoring the scene when disputes occur, monitoring devices such as surveillance cameras are commonly installed at all corners of such sites as required.
A wireless communication chip can be installed on the monitoring device and a WIFI wireless network arranged in the field, so that the monitoring device can transmit video image data to the server over a wireless communication connection; alternatively, the server can be connected to each monitoring device by wire, for example through a 485 bus. In this way the server can obtain the video image data collected by each monitoring device. The server then frames the video image data into a plurality of image frames and performs image recognition on them through deep learning, judging whether an intelligent mobile device is detected and recognized; if so, the image frames in which the intelligent mobile device is recognized are marked as target image frames. The server obtains the position coordinates of the recognized intelligent mobile device from the target image frames, and since the installation position of the monitoring device that produced a target image frame is known, the floor where the recognized intelligent mobile device is located can also be obtained. Therefore, the position coordinates of each recognized intelligent mobile device are bound with the floor map corresponding to those coordinates to form list elements; the identification position list comprises a plurality of such elements, each containing a position coordinate and its corresponding floor map.
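The server-side flow just described (frame the video, detect devices, convert detections to floor coordinates using each camera's known installation position) could be sketched as follows; `detect` and `locate` are hypothetical stand-ins for the deep-learning recognizer and the coordinate calculation:

```python
def build_identification_list(frames, detect, locate):
    """Sketch of the server-side flow. `frames` yields
    (image, camera_info) pairs; `detect(image)` says whether an
    intelligent mobile device appears in the frame;
    `locate(image, camera_info)` turns a detection into floor
    coordinates using the camera's known installation position.
    All three names are illustrative assumptions."""
    identified = []
    for image, camera_info in frames:
        if detect(image):
            identified.append({
                "position": locate(image, camera_info),
                # the camera's floor is known from its installation position
                "floor_map": camera_info["floor_map"],
            })
    return identified
```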
S120, according to the identification position list and the known spatial position of the intelligent mobile equipment which is already positioned, searching the intelligent mobile equipment to be positioned and obtaining a corresponding candidate position list;
specifically, the intelligent mobile device with the completed positioning refers to an intelligent mobile device which determines the spatial position of the intelligent mobile device at the current moment, that is, the spatial position of the intelligent mobile device with the completed positioning at the same moment is one. The candidate location list comprises a plurality of candidate information, and each candidate information comprises a candidate location and a corresponding candidate floor map.
Specifically, an intelligent mobile device in a fixed area (e.g. a charging pile or a preset staging area) can accurately learn its own known spatial position (a known spatial position includes a known floor and a known position). Alternatively, a staff member inputs the known spatial position on the interactive interface of the intelligent mobile device, and the device reports this known spatial position to the server once its positioning is complete. It is also possible that, when the intelligent mobile device passes through a fixed area provided with infrared sensors, the server estimates its motion trajectory from the motion sensors arranged on the device and thereby obtains the known spatial position of the device passing through the fixed area. Of course, the positioning result obtained for a device to be positioned through this embodiment can itself serve as one of the ways of establishing a positioned intelligent device and its known spatial position.
After acquiring the identification position list of the recognized intelligent mobile devices and the known spatial positions of the already-positioned intelligent mobile devices through the methods above, the server searches, for each known spatial position, the position coordinate on the known floor that has the greatest matching degree with the known position, takes it as a target coordinate position, and deletes that target coordinate position from the identification position list. After all target coordinate positions have been matched and deleted in this way, any intelligent mobile device that has not reported a known spatial position to the server, i.e. has reported itself as being in the un-positioned state, is an intelligent mobile device to be positioned. The position coordinates remaining in the identification position list after the known spatial positions have been screened out are then taken as the candidate positions of the intelligent mobile devices to be positioned, and the candidate positions and the corresponding candidate floor maps are summarized to obtain the candidate position list.
For example, the video image data of three robots a, b and c are captured by monitoring devices A, B and C at the first-floor hoistway. Through image recognition the server can acquire three position coordinates, namely D1, D2 and D3. Robot a informs the server over its communication link that it belongs to the already-positioned intelligent mobile devices and reports its known spatial position, while robots b and c inform the server that they are intelligent mobile devices to be positioned. The server compares the position coordinates with the known spatial position reported by robot a, finds the position coordinate D1 that is most similar to it, and deletes D1 from the identification position list. The position coordinates D2 and D3 are then the candidate positions, the first-floor map corresponding to them is the candidate floor map, and the candidate position list consists of first candidate information (position coordinate D2 + the first-floor map) and second candidate information (position coordinate D3 + the first-floor map).
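The screening in this example — delete, for each reported known position, the closest recognized coordinate, and keep the rest as candidates — can be sketched as below; the data shapes are illustrative assumptions:

```python
import math

def build_candidate_list(identified, known_positions):
    """For each known spatial position reported by an already-positioned
    device, delete the closest recognized coordinate from the
    identification list; whatever remains becomes the candidate list
    for the devices still to be positioned."""
    remaining = list(identified)
    for kx, ky in known_positions:
        if not remaining:
            break
        best = min(remaining,
                   key=lambda r: math.hypot(r["position"][0] - kx,
                                            r["position"][1] - ky))
        remaining.remove(best)
    return remaining
```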
S130, sending the candidate position list to the corresponding intelligent mobile device to be positioned, and completing the positioning of the intelligent mobile device to be positioned according to the matching scoring result it sends back.
Specifically, after obtaining the candidate position list, the server sends it to each intelligent mobile device to be positioned. A device to be positioned can automatically detect and obtain environment sensing data of the environment around its position, and after receiving the candidate position list it performs matching evaluation according to the candidate position list, the floor map corresponding to each floor, and the environment sensing data, obtaining a preliminary matching scoring result. Of course, the device to be positioned can also first receive the candidate position list and then automatically detect and acquire the environment sensing data of its surroundings before performing the matching evaluation; the order in which the device obtains the environment sensing data and receives the candidate position list is not limited here, and both orders fall within the protection scope of the present invention. After acquiring the preliminary matching scoring result in this way, the device to be positioned sends it to the server, and the server determines the positioning result corresponding to the device according to the preliminary matching scoring result.
In this embodiment, the method is combined with the existing indoor monitoring devices: the approximate positions of all intelligent mobile devices in the field are identified through object recognition technology, and the global positioning capability of each intelligent mobile device is then used to realize indoor positioning, so that the problem of positioning loss caused by manually moving a device away is avoided, the indoor positioning result is more accurate, and indoor positioning accuracy is improved. In addition, recognizing the approximate positions of all intelligent mobile devices through the existing indoor monitoring devices realizes a preliminary screening, so that the already-positioned devices are screened out and only the devices to be positioned are positioned; this greatly reduces the positioning matching range and improves the overall positioning efficiency of all intelligent mobile devices in the field.
In one embodiment of the present invention, as shown in fig. 2, an indoor positioning implementation method is applied to a server, and includes the steps of:
S111, obtaining video image data of all monitoring devices arranged in a preset space, performing image recognition on the video image data, and screening out the target monitoring devices that have recognized an intelligent mobile device;
Specifically, each monitoring device collects video image data in its monitoring area, and each piece of video image data carries the unique identification information of the monitoring device that captured it; the unique identification information includes, but is not limited to, the deployment information, the monitoring device code and the device serial number of the monitoring device. After the server acquires the video image data, it frames them into image frames, each carrying shooting time information and the unique identification information. Then, the server performs image preprocessing such as graying and binarization on each image frame to obtain images to be identified; each image to be identified carries shooting time information and unique identification information, i.e. each image to be identified is bound and associated with them, so that the unique identification information corresponding to an image to be identified can be looked up from that image. One piece of unique identification information may correspond to several images to be identified, while one image to be identified corresponds to only one piece of unique identification information.
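As a toy illustration of the binarization preprocessing step (a real system would use an image-processing library, and possibly adaptive thresholds; the fixed threshold here is an assumption):

```python
def binarize(gray, threshold=128):
    """Map a grid of greyscale pixel values (0-255) to 0/1 by a fixed
    threshold — a minimal stand-in for the graying/binarization
    preprocessing applied to each image frame."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]
```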
After the server acquires the images to be identified, it runs each one through a neural network model trained in advance and judges whether an intelligent mobile device is recognized in it; recognition relies not only on the overall outline of the intelligent mobile device but also on its partial outline. If an intelligent mobile device is recognized in the current image to be identified, the monitoring device corresponding to the unique identification information associated with that image is determined to be a target monitoring device, and in the same way all target monitoring devices in whose video an intelligent mobile device was recognized are screened out.
S112, acquiring posture information corresponding to each target monitoring device;
Specifically, the posture information includes the installation position and the shooting angle of the monitoring device. The installation position of each monitoring device is known, that is, the world coordinates (X, Y, Z) of each monitoring device are known, where X is the X-axis coordinate of the monitoring device relative to a preset origin, Y is the Y-axis coordinate relative to the preset origin, and Z is the Z-axis coordinate relative to the preset origin, i.e., the height value; the installation position of each monitoring device relative to the preset origin can therefore be determined from its world coordinates (X, Y, Z). If a monitoring device is fixedly mounted and its shooting field of view cannot be rotationally adjusted, its shooting angle relative to the preset origin can be determined from its world coordinates (X, Y, Z), where the shooting angle includes a pitch angle and a shooting direction. If a monitoring device is fixedly mounted but can rotationally adjust its shooting field of view, its pitch angle α and rotation angle β can be calculated from an acceleration sensor or a gyroscope arranged on the monitoring device, and its shooting angle can then be calculated from the world coordinates (X, Y, Z), the pitch angle α, and the rotation angle β, so that the posture information corresponding to each target monitoring device is obtained.
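For the fixed-mount case, deriving a pitch angle and shooting direction from world coordinates might look like the following sketch. The choice of a known view-center point and the angle conventions are assumptions for illustration, not part of the patent.

```python
import math

def shooting_angle(cam_xyz, view_center_xyz):
    """Pitch angle (downward tilt) and horizontal shooting direction of a
    fixed monitoring device, from its world coordinates (X, Y, Z) and a
    point at the centre of its fixed field of view."""
    dx = view_center_xyz[0] - cam_xyz[0]
    dy = view_center_xyz[1] - cam_xyz[1]
    dz = view_center_xyz[2] - cam_xyz[2]
    direction = math.degrees(math.atan2(dy, dx))               # shooting direction
    pitch = math.degrees(math.atan2(-dz, math.hypot(dx, dy)))  # pitch angle
    return pitch, direction

# camera 3 m up, looking at a floor point 3 m away along the x-axis
pitch, direction = shooting_angle((0.0, 0.0, 3.0), (3.0, 0.0, 0.0))
print(round(pitch, 1), round(direction, 1))  # 45.0 0.0
```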
S113, calculating to obtain the position coordinates of the intelligent mobile device according to the posture information corresponding to the target monitoring device, and summarizing all the position coordinates to obtain an identification position list;
Specifically, a preset feature point of the intelligent mobile device is selected (the center point of the intelligent mobile device, or another preset point such as the center point of its camera or the center point of its head), and the pixel coordinates of the projection of this preset feature point on the imaging plane are obtained.
The coordinate origin of the world coordinate system is set as required, and any point in the preset space can serve as it; the world coordinate system represents the spatial coordinates of objects within the preset space. As shown in fig. 3, the camera coordinate system (Xc, Yc, Zc) takes the optical center Fc of the monitoring device as its origin, and its z-axis coincides with the optical axis OA, that is, the z-axis of the camera coordinate system (Xc, Yc, Zc) points in front of the monitoring device C, while the positive directions of its x-axis and y-axis are parallel to those of the world coordinate system. The imaging plane coordinate system (u, v) represents pixel positions, and its coordinate origin is the intersection of the optical axis OA of the monitoring device with the imaging plane. The coordinate origin of the pixel coordinate system is in the upper-left corner. According to the conversion relationships among the pixel coordinate system, the world coordinate system, the camera coordinate system (Xc, Yc, Zc), and the imaging plane coordinate system (u, v), together with the posture information corresponding to the target monitoring device, the position coordinates of the preset feature point P of the intelligent mobile device can be calculated, and these are then taken as the position coordinates of the intelligent mobile device. The conversion relationships among the pixel coordinate system, the world coordinate system, the camera coordinate system (Xc, Yc, Zc), and the imaging plane coordinate system (u, v) are conventional and are not described in detail here.
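The conversion chain above can be sketched for one simplified case: known intrinsics K, a downward-looking camera at a known height, and a ground-plane constraint (the robot's base point has Z = 0). All numbers, and the downward-looking orientation, are illustrative assumptions; a real system would use the calibrated pose of each camera.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],     # intrinsic matrix (assumed)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R_cw = np.diag([1.0, -1.0, -1.0])      # camera-to-world rotation: looks straight down
C = np.array([0.0, 0.0, 3.0])          # optical centre 3 m above the floor

def pixel_to_world(u, v):
    """Back-project pixel (u, v) to a ray and intersect it with the
    ground plane Z = 0, yielding the feature point's world coordinates."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R_cw @ ray_cam
    s = -C[2] / ray_world[2]           # scale so the ray reaches Z = 0
    return C + s * ray_world

p = pixel_to_world(320.0, 240.0)       # principal point -> directly below camera
print(np.round(p, 3))
```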
In addition, since the installation position of each monitoring device is known, the installation floor information of the monitoring device can be obtained according to the installation position of the monitoring device, and the floor where the intelligent mobile device is identified in the video image data obtained by the monitoring device is the same as the installation floor information of the monitoring device, therefore, the server can obtain the floor map where the identified intelligent mobile device is according to the target image frame. Each position coordinate corresponds to a floor map of a floor where the floor is located, and one floor map corresponds to a plurality of position coordinates. After the position coordinates of the identified intelligent mobile devices and the floor maps corresponding to the position coordinates are obtained according to the embodiment, the position coordinates of each identified intelligent mobile device and the floor maps corresponding to the position coordinates are respectively bound to obtain an identification position list.
The floor map and the position coordinates are taken together as one element of the identification position list. Continuing the example above, the monitoring devices A, B, and C at the first-floor hoistway capture video image data of three robots a, b, and c. The server obtains three position coordinates in the manner described in this embodiment; because the monitoring devices are installed on the first floor, the three position coordinates correspond to the floor map of the first floor, and each position coordinate is bound to that floor map to obtain the identification position list entries corresponding to the first floor. The identification position list for each floor is obtained in the same way, which is not repeated here.
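The binding described above (one floor map to many coordinates, each coordinate to exactly one floor map) can be sketched with a plain mapping; the names and coordinates below are illustrative only.

```python
from collections import defaultdict

# (floor map, recognized position coordinate) pairs from the example:
# robots a, b, c seen by cameras A, B, C on the first floor
detections = [
    ("floor-1-map", (2.0, 3.5)),
    ("floor-1-map", (4.1, 0.8)),
    ("floor-1-map", (7.6, 5.2)),
    ("floor-2-map", (1.0, 1.0)),
]

# identification position list: floor map -> all bound position coordinates
identification_position_list = defaultdict(list)
for floor_map, coord in detections:
    identification_position_list[floor_map].append(coord)

print(len(identification_position_list["floor-1-map"]))  # 3
```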
S120, searching for the intelligent mobile devices to be positioned and obtaining a corresponding candidate position list, according to the identification position list and the known spatial positions of the intelligent mobile devices that have already been positioned;
S131, sending the candidate position list to each corresponding intelligent mobile device to be positioned;
S132, receiving the preliminary matching scoring result sent by each intelligent mobile device to be positioned; the preliminary matching scoring result is obtained by the intelligent mobile device to be positioned performing matching calculation against each piece of candidate information in the candidate position list; the candidate information includes a candidate floor map and a corresponding candidate position coordinate;
Specifically, the floor map has known structural features and color features, where the structural features include, but are not limited to, straight line segments, corners, points, vertical lines, and the like; corresponding examples are walls, corners, pillars, doors, and the like. The environmental features include geometric feature information, which likewise includes, but is not limited to, straight line segments, corners, points, and vertical lines, as well as color feature information; the corresponding examples are again walls, corners, pillars, doors, and the like.
After the server acquires the candidate position list, it sends the list to each intelligent mobile device to be positioned. Each device then acquires image observation data through a vision sensor, or laser observation data through a laser sensor, and performs feature extraction on the observation data to obtain the environmental features around its current position. Because the candidate position list contains the corresponding candidate floor maps, the structural features of each candidate floor map can be retrieved; the intelligent mobile device to be positioned then matches its environmental features against the structural features of each candidate floor map in the list, obtains the similarity between the environmental features and each candidate floor map, and thereby obtains a preliminary matching scoring result, which it sends to the server.
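The preliminary matching score is a similarity between the device's environmental features and each candidate floor map's structural features. The patent does not fix a metric, so the sketch below uses a simple set-overlap (Jaccard) similarity as a stand-in; the feature labels are illustrative.

```python
def similarity(env_features, map_features):
    """Jaccard similarity between two feature sets (stand-in metric)."""
    env, m = set(env_features), set(map_features)
    return len(env & m) / len(env | m) if env | m else 0.0

# structural features of each candidate floor map (illustrative)
candidate_list = {
    "floor-1-map": {"wall", "corner", "door"},
    "floor-2-map": {"wall", "pillar"},
}
# environmental features extracted by the robot's sensors
env = {"wall", "corner", "door", "pillar"}

# preliminary matching scoring result: one score per candidate
scores = {name: similarity(env, feats) for name, feats in candidate_list.items()}
print(scores)  # floor-1-map scores higher
```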
The process of extracting features from laser observation data to obtain environmental features is as follows: the laser observation data are acquired and segmented into regions, and the geometric feature information they contain is extracted through a corner detection algorithm and a straight-line fitting algorithm; extracting geometric feature information from laser observation data is prior art and is not described in detail here. The geometric feature information characterizes the environmental features of the position where the intelligent mobile device acquired the laser observation data; the geometric feature information obtained by scanning from the current position of the intelligent mobile device to be positioned (or its starting position, which may be any position in the preset space) is taken as the environmental features.
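The straight-line fitting step can be sketched as a least-squares fit over one segmented region of 2-D laser points. Real pipelines combine this with split-and-merge or RANSAC segmentation and corner detection, as the text notes; this fragment shows only the fit itself, on illustrative noise-free points.

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b to a region of 2-D laser points;
    (a, b) is one piece of geometric feature information (a wall segment)."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# points lying on a wall segment y = 2x + 1
wall = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
a, b = fit_line(wall)
print(round(a, 3), round(b, 3))  # 2.0 1.0
```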
The process of extracting features from image observation data to obtain environmental features is as follows: the image observation data, i.e., the captured image, is acquired and subjected to graying and binarization, and the geometric feature information contained in it is extracted using feature and edge detection algorithms such as SIFT, the Sobel operator, and the Prewitt operator; extracting geometric feature information from an image is prior art and is not described in detail here. The geometric feature information characterizes one of the environmental features corresponding to the position where the intelligent mobile device acquired the image observation data. In addition, if the camera installed on the intelligent mobile device is a depth camera, the intelligent mobile device can also, through an image recognition algorithm, take the color attribute corresponding to each piece of geometric feature information in the captured image as one of the environmental features.
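A minimal Sobel-operator sketch of the edge-detection step above, operating on a tiny grayscale image represented as a list of rows; purely illustrative of how a strong response marks a geometric feature (here a vertical edge).

```python
def sobel_magnitude(img, x, y):
    """Approximate gradient magnitude |Gx| + |Gy| of the Sobel operator
    at pixel (x, y); large values indicate an edge (geometric feature)."""
    gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
          - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
    gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
          - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
    return abs(gx) + abs(gy)

# vertical edge between dark (0) and bright (255) columns
img = [[0, 0, 0, 255, 255, 255]] * 3
edge = sobel_magnitude(img, 2, 1)  # at the boundary -> strong response
flat = sobel_magnitude(img, 1, 1)  # in a uniform region -> zero response
print(edge, flat)  # 1020 0
```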
S133, determining the candidate floor map with the maximum score value and its candidate position coordinate as the positioning result of the corresponding intelligent mobile device to be positioned; the positioning result includes the floor where the intelligent mobile device to be positioned is located and its position.
Specifically, after receiving the preliminary matching scoring results sent by each intelligent mobile device to be positioned, the server compares the magnitudes of the multiple similarities according to the preliminary matching scoring results, and determines a candidate floor map with the maximum similarity and corresponding candidate position coordinates to obtain the positioning result of the intelligent mobile device to be positioned; the positioning result comprises the floor and the position of the intelligent mobile equipment to be positioned.
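The selection in step S133 amounts to taking the candidate with the largest similarity; the sketch below illustrates this with invented scores.

```python
# preliminary matching scoring result: (candidate floor map, candidate
# position coordinate) -> similarity (values are illustrative)
scores = {
    ("floor-1-map", (2.0, 3.5)): 0.42,
    ("floor-2-map", (4.1, 0.8)): 0.87,
    ("floor-3-map", (7.6, 5.2)): 0.31,
}

# the server picks the maximum-similarity candidate as the positioning result
floor_map, position = max(scores, key=scores.get)
print(floor_map, position)  # floor-2-map (4.1, 0.8)
```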
In this embodiment, the invention works with the existing indoor monitoring equipment. Object recognition technology provides a preliminary fix on the rough position of each intelligent mobile device in the site, which serves as a preliminary screening; the intelligent mobile device then collects environmental sensing data, that is, laser observation data from a laser sensor or image observation data from a vision sensor, and performs quick matching against each candidate floor map according to this environmental sensing data (laser observation data and/or image observation data) to achieve preliminary indoor positioning. This avoids the loss of positioning caused by manually moving the intelligent mobile device, makes the indoor positioning result more accurate, and improves indoor positioning accuracy.
Secondly, recognizing the general positions of all intelligent mobile devices in the site through the existing indoor monitoring equipment provides a preliminary screening: already-positioned devices are filtered out and only the devices still to be positioned are processed, which greatly narrows the positioning matching range and improves the overall positioning efficiency of all intelligent mobile devices in the site.
Finally, after an intelligent mobile device is started, the candidate floor map with the maximum similarity value and the corresponding position coordinates are found by enumeration matching, according to the laser observation data collected by the laser sensor or the image observation data collected by the vision sensor, and taken as the positioning result of the intelligent mobile device to be positioned, thereby completing initial position determination. This embodiment requires no modification of the environment, such as the labels and reflective strips added in traditional methods, so it has wide applicability. After the initial position has been determined, the motion trajectory of the intelligent mobile device is monitored using its motion data, so that its position during movement can be tracked and obtained in real time, greatly improving the accuracy and reliability of indoor positioning.
In summary, the identification position list is obtained from video image data collected by monitoring devices arranged in advance in the preset space; the identification position list is then matched against the known spatial positions of the already-positioned intelligent mobile devices to screen out all intelligent mobile devices to be positioned together with a candidate position list; finally, laser scanning equipment (lidar, millimeter-wave radar, and the like) or visual scanning equipment (including cameras, depth cameras, binocular cameras, and the like) arranged on each intelligent mobile device scans the surroundings of its current position, extracts features, and matches them against the structural features corresponding to the candidate floor maps in the candidate position list to complete positioning.
The intelligent mobile device thus has global positioning and positioning recovery capability, while the real-time performance of positioning recovery is well guaranteed. By acquiring the environmental features and, when they match several sets of structural features, performing similarity matching between the environmental features and each set, the candidate floor map whose structural features have the greatest similarity is determined to correspond to the environment where the intelligent mobile device to be positioned is located. By exploiting the many straight-line features in the environment, when such features are obvious, extracting them greatly reduces the number of matching points; when the floor and position of the intelligent mobile device are determined from these matching points, the algorithm converges quickly, algorithm efficiency is improved, and the positioning result of the intelligent mobile device to be positioned can be obtained quickly.
S140, acquiring the verification matching scoring result sent again by the intelligent mobile device after it has moved a preset distance;
S150, if the change between the verification matching scoring result and the preliminary matching scoring result exceeds a threshold, determining that a positioning error has occurred and that repositioning is required;
the preliminary matching scoring result and the verification matching scoring result are each the similarity between the environmental sensing data collected by the intelligent mobile device and the corresponding candidate floor map.
Specifically, after the positioning result of the intelligent mobile device to be positioned has been obtained from the preliminary matching scoring result in the manner above, the intelligent mobile device moves a certain distance, that is, it advances a preset distance (for example, 0.5 m or 1 m) or changes its orientation in place. Following the same procedure used to obtain the preliminary matching scoring result, the intelligent mobile device again collects image observation data through the vision sensor, or laser observation data through the laser sensor, performs matching on these observation data to obtain its verification matching scoring result, and sends it to the server. The server then compares the verification matching scoring result with the preliminary matching scoring result: if the change between them does not exceed the threshold, the positioning result is determined to be correct; if the change exceeds the threshold, the positioning result is determined to be wrong, and the intelligent mobile device with the positioning error is reset to the unpositioned state and repositioned.
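The verification check of steps S140/S150 can be sketched as a simple threshold comparison; the threshold value and score values are illustrative assumptions.

```python
def positioning_still_valid(preliminary_score, verification_score, threshold=0.2):
    """Return True if the change between the verification matching score
    and the preliminary matching score stays within the threshold, i.e.
    the earlier positioning result is kept; False means relocate."""
    return abs(verification_score - preliminary_score) <= threshold

ok = positioning_still_valid(0.85, 0.80)   # small change -> result kept
bad = positioning_still_valid(0.85, 0.40)  # large drop  -> positioning error
print(ok, bad)  # True False
```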
In this embodiment, the positioning result obtained by initially positioning the intelligent mobile device is verified by comparing the verification matching scoring result with the preliminary matching scoring result, which improves the accuracy and reliability of positioning the intelligent mobile device in a building activity scene.
As shown in fig. 4, an indoor positioning implementation method of an embodiment of the present invention is applied to an intelligent mobile device, and includes:
S210, receiving a candidate position list sent by a server; the candidate position list is obtained by the server, after it derives from the video image data the identification position list of the recognized intelligent mobile devices, by matching and screening that list against the known spatial positions of the already-positioned intelligent mobile devices; the video image data are acquired from each monitoring device in a preset space;
specifically, how the server obtains the identified location list of the identified intelligent mobile device after obtaining the video image data may refer to the embodiment corresponding to fig. 2, which is not described herein.
S220, if the device has not completed its own positioning, scanning the surrounding environment to obtain environmental sensing data, and sequentially loading the candidate information of the candidate position list to perform matching calculation and obtain the preliminary matching scoring result;
Specifically, the intelligent mobile device can query its own log data: if the log data contain no positioning result for the current moment, the device determines that it has not completed positioning; if they do, the device determines that it has completed positioning.
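The self-check above can be sketched as a lookup in the device's own log; the log structure and field names are invented for illustration.

```python
def has_completed_positioning(log_data, now):
    """True if the log holds a positioning result for the current moment,
    meaning the device has completed its own positioning."""
    return any(entry["time"] == now and "position" in entry
               for entry in log_data)

log = [
    {"time": 10, "position": (1.0, 2.0)},  # positioned at t=10
    {"time": 11},                          # no result at t=11
]
print(has_completed_positioning(log, 10), has_completed_positioning(log, 11))
```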
In this embodiment, an intelligent mobile device that has completed positioning reports its known spatial position to the server, while an intelligent mobile device to be positioned can report to the server that it has not completed positioning, so that the server can send the candidate position list directly to the devices to be positioned. Alternatively, after obtaining the candidate position list, the server can send it to every intelligent mobile device, and each device judges for itself whether it has completed positioning: if it has, it ignores the candidate position list; if it has not, it detects and acquires environmental sensing data of the surroundings of its own position, and after receiving the candidate position list it performs matching evaluation according to the candidate position list, the floor map corresponding to each floor, and the environmental sensing data to obtain a preliminary matching scoring result.
Of course, an intelligent mobile device that has determined it has not completed positioning can also detect and acquire environmental sensing data of its surroundings after receiving the candidate position list, and then perform matching evaluation according to the candidate position list and the environmental sensing data to obtain a preliminary matching scoring result. The order in which the intelligent mobile device obtains the environmental sensing data and receives the candidate position list is not limited here, and both orders fall within the scope of the present invention.
S230, determining, according to the preliminary matching scoring result, the candidate floor map with the maximum score value and its candidate position coordinates as the positioning result, thereby completing self-positioning; the positioning result includes the floor where the device is located and its position.
Specifically, the intelligent mobile device compares the magnitudes of the multiple similarities in the preliminary matching scoring result and determines the candidate floor map with the maximum similarity together with the corresponding candidate position coordinates, obtaining the positioning result of the intelligent mobile device to be positioned, that is, the previously unpositioned device; the positioning result includes the floor where it is located and its position.
In this embodiment, the method works with the existing indoor monitoring equipment: the approximate positions of all intelligent mobile devices in the site are first identified through object recognition technology, and the global positioning capability of each intelligent mobile device is then used to complete its indoor positioning. This avoids the loss of positioning caused when an intelligent mobile device is manually moved away, makes the indoor positioning result more accurate, and improves indoor positioning accuracy. In addition, recognizing the general positions of all intelligent mobile devices in the site through the existing indoor monitoring equipment provides a preliminary screening: already-positioned devices are filtered out and only the devices still to be positioned are processed, which greatly narrows the positioning matching range and improves the overall positioning efficiency of all intelligent mobile devices in the site.
An embodiment of the invention provides an indoor positioning implementation method, which is applied to intelligent mobile equipment and comprises the following steps:
S210, receiving a candidate position list sent by a server; the candidate position list is obtained by the server, after it derives from the video image data the identification position list of the recognized intelligent mobile devices, by matching and screening that list against the known spatial positions of the already-positioned intelligent mobile devices; the video image data are acquired from each monitoring device in a preset space;
S221, scanning the current surroundings of the device through an environmental sensing sensor, acquiring environmental sensing data, and extracting environmental features from them; the environmental sensing data include image observation data and laser observation data;
Specifically, the floor map has known structural features and color features, where the structural features include, but are not limited to, straight line segments, corners, points, vertical lines, and the like; corresponding examples are walls, corners, pillars, doors, and the like. The environmental features include geometric feature information, which likewise includes, but is not limited to, straight line segments, corners, points, and vertical lines, as well as color feature information; the corresponding examples are again walls, corners, pillars, doors, and the like.
After the server acquires the candidate position list, it sends the list to each intelligent mobile device to be positioned; each device then acquires image observation data through a vision sensor, or laser observation data through a laser sensor, and performs feature extraction on the observation data to obtain the environmental features around its current position.
The process of extracting features from laser observation data to obtain environmental features is as follows: the laser observation data are acquired and segmented into regions, and the geometric feature information they contain is extracted through a corner detection algorithm and a straight-line fitting algorithm; extracting geometric feature information from laser observation data is prior art and is not described in detail here. The geometric feature information characterizes the environmental features of the position where the intelligent mobile device acquired the laser observation data; the geometric feature information obtained by scanning from the current position of the intelligent mobile device to be positioned (or its starting position, which may be any position in the preset space) is taken as the environmental features.
The process of extracting features from image observation data to obtain environmental features is as follows: the image observation data, i.e., the captured image, is acquired and subjected to graying and binarization, and the geometric feature information contained in it is extracted using feature and edge detection algorithms such as SIFT, the Sobel operator, and the Prewitt operator; extracting geometric feature information from an image is prior art and is not described in detail here. The geometric feature information characterizes one of the environmental features corresponding to the position where the intelligent mobile device acquired the image observation data. In addition, if the camera installed on the intelligent mobile device is a depth camera, the intelligent mobile device can also, through an image recognition algorithm, take the color attribute corresponding to each piece of geometric feature information in the captured image as one of the environmental features.
S222, performing matching calculation between the structural features corresponding to the candidate floor map in each piece of candidate information and the environmental features, to obtain the preliminary matching scoring result;
Specifically, an intelligent mobile device that has completed positioning is one whose spatial position at the current moment has been determined, that is, an already-positioned intelligent mobile device has exactly one spatial position at any given moment. The candidate position list includes multiple pieces of candidate information, and each piece of candidate information includes a candidate position and a corresponding candidate floor map.
Because the candidate position list includes candidate positions and the corresponding candidate floor maps, the intelligent mobile device can obtain the structural features of each candidate floor map, match its environmental features against the structural features corresponding to each candidate floor map in the list, and obtain the similarity between the environmental features and each candidate floor map, thereby obtaining the preliminary matching scoring result. The unpositioned intelligent mobile device, that is, the intelligent mobile device to be positioned, then compares the magnitudes of the multiple similarities in the preliminary matching scoring result and determines the candidate floor map with the maximum similarity together with the corresponding candidate position coordinates, obtaining its positioning result, which includes the floor where it is located and its position.
In this embodiment, the invention works with the existing indoor monitoring equipment. Object recognition technology provides a preliminary fix on the rough position of each intelligent mobile device in the site, which serves as a preliminary screening; the intelligent mobile device then collects environmental sensing data, that is, laser observation data from a laser sensor or image observation data from a vision sensor, and performs quick matching against each candidate floor map according to this environmental sensing data (laser observation data and/or image observation data) to achieve preliminary indoor positioning. This avoids the loss of positioning caused by manually moving the intelligent mobile device, makes the indoor positioning result more accurate, and improves indoor positioning accuracy.
Secondly, the rough positions of all intelligent mobile devices on the site are recognized through the existing indoor monitoring equipment to realize preliminary screening, so that the devices that are already positioned are filtered out and only the devices still to be positioned are processed. This greatly narrows the positioning matching range and improves the overall positioning efficiency of all intelligent mobile devices on the site.
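The preliminary screening step — dropping detections that coincide with devices whose positions are already known — could look like the following sketch. The matching rule (a fixed distance radius) and the function name are assumptions for illustration only.

```python
def screen_unlocated(recognized, located, radius=0.5):
    """Hypothetical sketch of preliminary screening: any recognized
    detection lying within `radius` metres of an already-positioned
    device is treated as that device and removed; the remainder become
    the set of devices to be positioned.

    recognized, located: lists of (x, y) positions in metres."""
    to_locate = []
    for det in recognized:
        near_known = any((det[0] - kx) ** 2 + (det[1] - ky) ** 2 <= radius ** 2
                         for kx, ky in located)
        if not near_known:
            # No known device nearby -> this one still needs positioning.
            to_locate.append(det)
    return to_locate
```

Only the surviving detections are sent candidate position lists, which is what narrows the matching range.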
Finally, after the intelligent mobile device is started, the candidate floor map with the largest similarity value and the corresponding position coordinates are found by enumeration matching, according to the laser observation data acquired by the laser sensor or the image observation data acquired by the vision sensor, and taken as the positioning result of the intelligent mobile device to be positioned, thereby completing the positioning of the initial position. In this embodiment the environment does not need to be modified: unlike traditional methods, no markers, reflective strips or similar fixtures need to be installed in the environment, so the applicability is wide. After the initial position is determined, the motion trajectory of the intelligent mobile device is monitored using the device's motion data, so the position of the device can be tracked and acquired in real time while it moves, greatly improving the accuracy and reliability of indoor positioning.
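The trajectory monitoring from motion data can be illustrated with a simple dead-reckoning sketch. This is an assumed odometry model (per-step distance and heading change), not the patent's specified method; `track_pose` is a hypothetical name.

```python
import math

def track_pose(pose, odometry):
    """Accumulate motion-data increments onto an initial fix.

    pose: (x, y, heading) from the initial positioning result.
    odometry: iterable of (distance, heading_change) increments, e.g.
    from wheel encoders -- an assumed motion-data format."""
    x, y, theta = pose
    for dist, dtheta in odometry:
        theta += dtheta                # apply the turn first
        x += dist * math.cos(theta)    # then advance along the new heading
        y += dist * math.sin(theta)
    return (x, y, theta)
```

Each new increment updates the pose immediately, which is what allows the device's position to be tracked in real time after the initial fix.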
The method comprises the following steps: a recognition position list is acquired from video image data collected by monitoring equipment pre-arranged in a preset space; the recognition position list is then matched against the known spatial positions of the intelligent mobile devices that are already positioned, to screen out all the intelligent devices to be positioned together with a candidate position list; then laser scanning equipment (lidar, millimeter-wave radar, etc.) or visual scanning equipment (cameras, depth cameras, binocular cameras, etc.) mounted on the intelligent mobile device scans the surroundings of the device's current position, extracts features, and matches them against the structural features corresponding to the candidate floor maps in the candidate position list to perform positioning.
The intelligent mobile device thus has global positioning and positioning-recovery capability, while the real-time performance of positioning recovery is well guaranteed. The environmental features are acquired and, when they correspond to multiple structural features, similarity matching is performed between the environmental features and those structural features; the structural feature with the largest similarity determines the candidate floor map corresponding to the environment in which the device to be positioned is located. The method exploits the many linear features present in indoor environments: when these linear features are distinct, extracting them greatly reduces the number of matching points, so that when the floor and position of the intelligent mobile device are determined from the matching points the algorithm converges quickly, the algorithm efficiency is improved, and the positioning result of the device to be positioned is obtained rapidly.
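The reduction from raw scan points to a few linear features can be sketched with a simple segment-growing pass over an ordered 2-D scan. The collinearity tolerance, the incremental growing strategy, and the function name are all assumptions; the patent does not prescribe a particular line-extraction algorithm.

```python
import math

def extract_line_features(points, tol=0.05):
    """Compress an ordered 2-D laser scan into line segments: grow a
    segment while every intermediate point stays within `tol` metres of
    the chord from the segment start to the current point.

    Returns (start_point, end_point) pairs -- far fewer primitives to
    match than the raw points."""
    segments, start = [], 0
    for i in range(2, len(points)):
        x0, y0 = points[start]
        x1, y1 = points[i]
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy) or 1e-9
        # Perpendicular distance of each intermediate point from the chord.
        if any(abs(dy * (px - x0) - dx * (py - y0)) / norm > tol
               for px, py in points[start + 1:i]):
            segments.append((points[start], points[i - 1]))
            start = i - 1  # new segment starts at the corner point
    segments.append((points[start], points[-1]))
    return segments
```

A straight wall collapses to a single segment and a corner yields two, so the matcher handles a handful of segments instead of hundreds of scan points — the reduction the passage above relies on.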
S230, determining, according to the preliminary matching scoring result, the candidate floor map with the largest score value and its candidate position coordinates as the positioning result, thereby completing self-positioning; the positioning result comprises the device's own floor and position;
S240, after the device moves a preset distance, scanning the environment sensing data again and calculating the verification matching scoring result.
In this embodiment, the present invention works with the existing indoor monitoring equipment: object recognition technology is first used to perform preliminary positioning and obtain the rough position of each intelligent mobile device on the site, and preliminary screening is carried out. The intelligent mobile device then collects environmental sensing data, that is, laser observation data collected by a laser sensor and/or image observation data collected by a vision sensor, and performs quick matching against each candidate floor map according to that environmental sensing data. This realizes preliminary indoor positioning, avoids the loss of positioning caused by the intelligent mobile device being moved away manually, makes the indoor positioning result more accurate, and improves the accuracy of the intelligent mobile device's indoor self-positioning.
In addition, after the intelligent mobile device is started, the candidate floor map with the largest score value and the corresponding positioning result are found by enumeration matching, according to the laser observation data acquired by the laser sensor or the image observation data acquired by the vision sensor, so that the initial position is determined. In this embodiment the environment does not need to be modified: unlike traditional methods, no markers, reflective strips or similar fixtures need to be installed in the environment, so the applicability is wide. After the initial position is determined, the motion trajectory of the intelligent mobile device is monitored using the device's motion data, so the position of the device can be tracked and acquired in real time while it moves, greatly improving the accuracy and reliability of the device's indoor self-positioning.
S250, if the change between the verification matching scoring result and the preliminary matching scoring result exceeds a threshold value, determining that the device is in a to-be-positioned state and repositioning;
the preliminary matching scoring result and the verification matching scoring result are the similarity between the environment sensing data acquired by the intelligent mobile equipment and the corresponding candidate floor map.
Specifically, after the positioning result of the intelligent mobile device to be positioned is obtained by preliminary positioning according to the preliminary matching scoring result as described above, the intelligent mobile device moves a distance, that is, advances a preset distance (for example, 0.5 m or 1 m) or changes its orientation in place. Following the same procedure used to obtain the preliminary matching scoring result, the device again acquires image observation data through the vision sensor, or laser observation data through the laser sensor, and performs matching on this data to obtain its verification matching scoring result. The verification matching scoring result is then compared with the preliminary matching scoring result: if the change between the two does not exceed a threshold value, the positioning result is determined to be correct; if the change exceeds the threshold value, the positioning result is determined to be wrong, and the device is controlled to reposition itself.
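The verification rule described above reduces to a single threshold comparison between the two scores. A minimal sketch, assuming the scores are scalar similarities and the default threshold value is illustrative:

```python
def verify_localization(preliminary_score, verification_score, threshold=0.2):
    """Compare the verification matching score (taken after the device has
    moved a preset distance) against the preliminary matching score.

    threshold: maximum allowed score change -- an assumed value; the
    patent leaves the threshold unspecified."""
    if abs(verification_score - preliminary_score) > threshold:
        return "relocalize"   # positioning result judged wrong; re-enter to-be-positioned state
    return "confirmed"        # positioning result judged correct
```

If the score stays stable after the move, the initial fix is confirmed; a large drop (or jump) indicates the device matched the wrong candidate floor map.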
In this embodiment, the positioning result obtained by initially positioning the intelligent mobile device is verified by comparing the verification matching scoring result with the preliminary matching scoring result, which improves the accuracy and reliability of the device's self-positioning in building scenarios.
It will be apparent to those skilled in the art that the above division into program modules is merely illustrative, for convenience and brevity of description; in practical applications, the above functions may be allocated to different program modules as needed, i.e. the internal structure of the apparatus may be divided into different program units or modules to perform all or part of the functions described above. The program modules in the embodiments may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one processing unit; the integrated units may be implemented in the form of hardware or in the form of software program units. In addition, the specific names of the program modules serve only to distinguish them from one another and do not limit the protection scope of the present application.
One embodiment of the invention is a server comprising a processor, a memory, wherein the memory is used for storing a computer program; and the processor is used for executing the computer program stored in the memory to realize the indoor positioning realization method in any method embodiment corresponding to the figures 1-2.
The server may be a desktop computer, laptop, palmtop computer, tablet computer, human-machine interaction screen or other such device.
The intelligent mobile device comprises a device body, a moving mechanism, a processor and a memory. The moving mechanism is arranged at the lower part of the device body and comprises a plurality of travelling wheels connected to the lower part of the device body so as to move the device body. The memory is used for storing a computer program, and the processor is used for executing the computer program stored in the memory to realize the indoor positioning implementation method in any method embodiment corresponding to fig. 4.
The server or intelligent mobile device may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that the foregoing is merely an example of a server or intelligent mobile device and is not limiting; either may include more or fewer components than those described above, combine certain components, or use different components. For example, the server or intelligent mobile device may also include input/output interfaces, display devices, network access devices, communication buses, communication interfaces and the like. The processor, the memory, the input/output interface and the communication interface communicate with one another through the communication bus. The memory stores a computer program, and the processor is configured to execute the computer program stored in the memory to implement the indoor positioning implementation method in the corresponding method embodiment.
The processor may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory may be an internal storage unit of the server or smart mobile device, for example: a hard disk or a memory of the terminal equipment. The memory may also be an external storage device of the terminal device, for example: a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) and the like, which are provided on the terminal device. Further, the memory may also include both an internal storage unit and an external storage device of the server or smart mobile device. The memory is used for storing the computer program and other programs and data required by the server or the intelligent mobile device. The memory may also be used to temporarily store data that has been output or is to be output.
A communication bus is a circuit that connects the described elements and enables transmission between them. For example, the processor receives commands from other elements through the communication bus, decrypts the received commands, and performs calculations or data processing based on the decrypted commands. The memory may include program modules such as a kernel, middleware, application programming interfaces (APIs) and applications. The program modules may be composed of software, firmware or hardware, or at least two of these. The input/output interface forwards commands or data entered by a user through an input/output device (e.g., a sensor, keyboard or touch screen). The communication interface connects the server or intelligent mobile device with other network devices, user devices and networks. For example, the communication interface may be connected to a network by wire or wirelessly to reach other external network devices or user devices. The wireless communication may include at least one of: wireless fidelity (WiFi), Bluetooth (BT), near field communication (NFC), Global Positioning System (GPS), cellular communication and the like. The wired communication may include at least one of: Universal Serial Bus (USB), High Definition Multimedia Interface (HDMI), recommended standard 232 (RS-232) and the like. The network may be a telecommunications network or a communication network; the communication network may be a computer network, the Internet of Things or a telephone network. The server or intelligent mobile device may be connected to the network through the communication interface, and the protocols it uses to communicate with other network devices may be supported by at least one of the applications, application programming interfaces (APIs), middleware, kernel and communication interface.
In one embodiment of the present invention, a storage medium has at least one instruction stored therein, where the instruction is loaded and executed by a processor to implement the operations performed by the corresponding embodiments of the indoor positioning implementation method in any of fig. 1-2. For example, the storage medium may be read-only memory (ROM), random-access memory (RAM), compact disk read-only (CD-ROM), magnetic tape, floppy disk, optical data storage device, etc.
They may be implemented in program code that is executable by a computing device such that they may be stored in a memory device for execution by the computing device, or they may be separately fabricated into individual integrated circuit modules, or a plurality of modules or steps in them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and the parts of a certain embodiment that are not described or depicted in detail may be referred to in the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated modules/units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a storage medium. Based on this understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by means of a computer program instructing related hardware. The computer program may be stored in a storage medium, and when executed by a processor it can implement the steps of each of the method embodiments described above. The computer program may be in source code form, object code form, an executable file, some intermediate form, etc. The storage medium may include: any entity or device capable of carrying the computer program, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable storage media do not include electrical carrier signals and telecommunication signals.
It should be noted that the above embodiments can be freely combined as needed. The foregoing is merely a preferred embodiment of the present invention and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present invention, which are intended to be comprehended within the scope of the present invention.
Claims (9)
1. The indoor positioning implementation method is characterized by being applied to a server and comprising the following steps:
acquiring an identification position list of the identified intelligent mobile device according to video image data acquired from each monitoring device in a preset space;
according to the identification position list and the known spatial position of the intelligent mobile equipment which is positioned, the intelligent mobile equipment to be positioned is searched out, and a corresponding candidate position list is obtained;
the candidate position list is sent to the corresponding intelligent mobile equipment to be positioned, and the positioning of the intelligent mobile equipment to be positioned is completed according to the matching scoring result sent by the intelligent mobile equipment to be positioned, and the method specifically comprises the following steps:
respectively sending the candidate position lists to corresponding intelligent mobile equipment to be positioned;
receiving a preliminary matching scoring result sent by the intelligent mobile equipment to be positioned; the preliminary matching scoring result is obtained by respectively carrying out matching calculation on the intelligent mobile equipment to be positioned according to the candidate information in the candidate position list; the candidate information comprises a candidate floor map and a corresponding candidate position coordinate;
Determining a candidate floor map with the maximum score value and a candidate position coordinate as a positioning result corresponding to the intelligent mobile equipment to be positioned; the positioning result comprises the floor and the position of the intelligent mobile equipment to be positioned; the preliminary matching scoring result is obtained by respectively carrying out matching calculation by the intelligent mobile equipment to be positioned according to candidate information in a candidate position list, and the method specifically comprises the following steps:
invoking and acquiring structural features of each candidate floor map, and respectively matching the environmental features with the structural features corresponding to each candidate floor map in the candidate position list by the intelligent mobile equipment to be positioned, and respectively acquiring the similarity between the environmental features and each candidate floor map so as to acquire a preliminary matching scoring result;
the determining the candidate floor map with the largest score value and the candidate position coordinate as the positioning result corresponding to the intelligent mobile equipment to be positioned specifically comprises the following steps:
and comparing the magnitudes of the multiple similarities in the preliminary matching scoring result, and determining a candidate floor map with the maximum similarity and corresponding candidate position coordinates to obtain a positioning result of the intelligent mobile device to be positioned.
2. The indoor positioning implementation method according to claim 1, wherein the acquiring the identified location list of the identified intelligent mobile device according to the video image data acquired from each monitoring device in the preset space comprises the steps of:
Acquiring video image data of all monitoring devices distributed in a preset space, performing image recognition on the video image data, and screening out target monitoring devices which are recognized to the intelligent mobile device;
acquiring gesture information corresponding to each target monitoring device;
and calculating the position coordinates of the intelligent mobile equipment according to the gesture information corresponding to the target monitoring equipment, and summarizing all the position coordinates to obtain the identification position list.
3. The indoor positioning implementation method according to claim 2, wherein after the determining of the candidate floor map with the largest score value and the candidate position coordinates as the positioning result corresponding to the intelligent mobile device to be positioned, the method comprises the steps of:
acquiring a verification matching scoring result which is sent again by the intelligent mobile equipment after moving a preset distance;
if the change between the verification matching scoring result and the preliminary matching scoring result exceeds a threshold value, determining that the positioning is wrong and repositioning;
the preliminary matching scoring result and the verification matching scoring result are the similarity between the environment sensing data acquired by the intelligent mobile equipment and the corresponding candidate floor map.
4. The indoor positioning implementation method is characterized by being applied to intelligent mobile equipment and comprising the following steps of:
receiving a candidate position list sent by a server; the candidate position list is obtained by a server according to the matching screening of the identification position list and the known spatial position of the intelligent mobile equipment which is already positioned after the identification position list of the intelligent mobile equipment which is identified is obtained according to the video image data; the video image data are acquired from each monitoring device in a preset space;
if the positioning is not completed, scanning the surrounding environment to obtain environment sensing data, and sequentially loading candidate information of the candidate position list to respectively perform matching calculation to obtain preliminary matching scoring results, wherein the preliminary matching scoring results are respectively obtained by matching calculation of the intelligent mobile equipment to be positioned according to the candidate information in the candidate position list; the candidate information comprises a candidate floor map and a corresponding candidate position coordinate;
the preliminary matching scoring result is obtained by respectively carrying out matching calculation by the intelligent mobile equipment to be positioned according to candidate information in a candidate position list, and the method specifically comprises the following steps:
invoking and acquiring structural features of each candidate floor map, and respectively matching the environmental features with the structural features corresponding to each candidate floor map in the candidate position list by the intelligent mobile equipment to be positioned, and respectively acquiring the similarity between the environmental features and each candidate floor map so as to acquire a preliminary matching scoring result;
determining, according to the preliminary matching scoring result, the candidate floor map with the largest score value and its candidate position coordinates as the positioning result, thereby completing self-positioning; the positioning result comprises the device's own floor and position,
wherein the determining, according to the preliminary matching scoring result, of the candidate floor map with the largest score value and its candidate position coordinates as the positioning result specifically comprises:
and comparing the magnitudes of the multiple similarities in the preliminary matching scoring result, and determining a candidate floor map with the maximum similarity and corresponding candidate position coordinates to obtain a positioning result of the intelligent mobile device to be positioned.
5. The indoor positioning implementation method according to claim 4, wherein the step of scanning the surrounding environment to obtain environment sensing data if the positioning is not completed, and sequentially loading the candidate information of the candidate position list to perform matching calculation respectively to obtain the preliminary matching scoring result, comprises the steps of:
scanning the current surrounding environment through an environment sensing sensor, acquiring the environment sensing data and extracting environmental features therefrom; the environment sensing data comprises image observation data and laser observation data;
And respectively carrying out matching calculation on the structural features corresponding to the candidate floor maps in the candidate information and the environmental features to obtain the preliminary matching scoring result.
6. The indoor positioning implementation method according to claim 4 or 5, wherein after the determining, according to the preliminary matching scoring result, of the candidate floor map with the largest score value and its candidate position coordinates as the positioning result and the completing of self-positioning, the positioning result comprising the device's own floor and position, the method comprises the steps of:
after moving a preset distance, scanning again to obtain environment sensing data and calculating a verification matching scoring result;
if the change between the verification matching scoring result and the preliminary matching scoring result exceeds a threshold value, determining that the device is in a to-be-positioned state and repositioning;
the preliminary matching scoring result and the verification matching scoring result are the similarity between the environment sensing data acquired by the intelligent mobile equipment and the corresponding candidate floor map.
7. A server comprising a processor, a memory and a computer program stored in the memory and executable on the processor, the processor being configured to execute the computer program stored on the memory to perform the operations performed by the indoor positioning implementation method according to any one of claims 1 to 4.
8. A storage medium having stored therein at least one instruction loaded and executed by a processor to implement the operations performed by the indoor positioning implementation method of any of claims 1 to 4.
9. An intelligent mobile device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, the processor being configured to execute the computer program stored on the memory to perform the operations performed by the indoor positioning implementation method according to any one of claims 5 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010817857.8A CN111814752B (en) | 2020-08-14 | 2020-08-14 | Indoor positioning realization method, server, intelligent mobile device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010817857.8A CN111814752B (en) | 2020-08-14 | 2020-08-14 | Indoor positioning realization method, server, intelligent mobile device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111814752A CN111814752A (en) | 2020-10-23 |
CN111814752B true CN111814752B (en) | 2024-03-12 |
Family
ID=72859047
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010817857.8A Active CN111814752B (en) | 2020-08-14 | 2020-08-14 | Indoor positioning realization method, server, intelligent mobile device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111814752B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022121606A1 (en) * | 2020-12-08 | 2022-06-16 | 北京外号信息技术有限公司 | Method and system for obtaining identification information of device or user thereof in scenario |
CN112850388B (en) * | 2020-12-31 | 2022-03-11 | 济宁市海富电子科技有限公司 | Method and device for service robot to enter elevator |
CN113177054B (en) * | 2021-05-28 | 2024-08-16 | 广州南方卫星导航仪器有限公司 | Device position updating method and device, electronic device and storage medium |
CN115484342A (en) * | 2021-06-15 | 2022-12-16 | 南宁富联富桂精密工业有限公司 | Indoor positioning method, mobile terminal and computer readable storage medium |
CN113587917A (en) * | 2021-07-28 | 2021-11-02 | 北京百度网讯科技有限公司 | Indoor positioning method, device, equipment, storage medium and computer program product |
CN114413903B (en) * | 2021-12-08 | 2024-07-09 | 上海擎朗智能科技有限公司 | Positioning method for multiple robots, robot distribution system and computer readable storage medium |
CN114582476A (en) * | 2022-02-21 | 2022-06-03 | 北京融威众邦电子技术有限公司 | Intelligent assessment triage system and method based on ANN |
CN117191021B (en) * | 2023-08-21 | 2024-06-04 | 深圳市晅夏机器人有限公司 | Indoor vision line-following navigation method, device, equipment and storage medium |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101646067A (en) * | 2009-05-26 | 2010-02-10 | 华中师范大学 | Digital full-space intelligent monitoring system and method |
CN105246039A (en) * | 2015-10-20 | 2016-01-13 | 深圳大学 | Image processing-based indoor positioning method and system |
CN106455050A (en) * | 2016-09-23 | 2017-02-22 | 微梦创科网络科技(中国)有限公司 | Bluetooth and Wifi-based indoor positioning method, apparatus and system |
EP3299917A4 (en) * | 2015-05-18 | 2018-03-28 | TLV Co., Ltd. | Device management system and device management method |
CN107920386A (en) * | 2017-10-10 | 2018-04-17 | 深圳数位传媒科技有限公司 | Sparse independent positioning method, server, system and computer-readable recording medium |
CN108573268A (en) * | 2017-03-10 | 2018-09-25 | 北京旷视科技有限公司 | Image-recognizing method and device, image processing method and device and storage medium |
CN108717710A (en) * | 2018-05-18 | 2018-10-30 | 京东方科技集团股份有限公司 | Localization method, apparatus and system under indoor environment |
CN110428449A (en) * | 2019-07-31 | 2019-11-08 | 腾讯科技(深圳)有限公司 | Target detection tracking method, device, equipment and storage medium |
CN110579215A (en) * | 2019-10-22 | 2019-12-17 | 上海木木机器人技术有限公司 | Positioning method based on environmental feature description, mobile robot and storage medium
WO2020052319A1 (en) * | 2018-09-14 | 2020-03-19 | 腾讯科技(深圳)有限公司 | Target tracking method, apparatus, medium, and device |
CN111145223A (en) * | 2019-12-16 | 2020-05-12 | 盐城吉大智能终端产业研究院有限公司 | Multi-camera personnel behavior track identification analysis method |
CN111339363A (en) * | 2020-02-28 | 2020-06-26 | 钱秀华 | Image recognition method and device and server |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8515669B2 (en) * | 2010-06-25 | 2013-08-20 | Microsoft Corporation | Providing an improved view of a location in a spatial environment |
Non-Patent Citations (1)
Title |
---|
High-Precision Positioning Algorithm for Intelligent Vehicles Based on GPS and Image Fusion; Li Cheng; Hu Zhaozheng; Hu Yuezhi; Wu Huawei; Journal of Transportation Systems Engineering and Information Technology (No. 03); full text *
Also Published As
Publication number | Publication date |
---|---|
CN111814752A (en) | 2020-10-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111814752B (en) | Indoor positioning realization method, server, intelligent mobile device and storage medium | |
US11320833B2 (en) | Data processing method, apparatus and terminal | |
EP4044146A1 (en) | Method and apparatus for detecting parking space and direction and angle thereof, device and medium | |
CN109325456B (en) | Target identification method, target identification device, target identification equipment and storage medium | |
CN112528831B (en) | Multi-target attitude estimation method, multi-target attitude estimation device and terminal equipment | |
CN110587597B (en) | SLAM closed loop detection method and detection system based on laser radar | |
CN108303096B (en) | Vision-assisted laser positioning system and method | |
KR101769601B1 (en) | Unmanned aerial vehicle having Automatic Tracking | |
US20200219281A1 (en) | Vehicle external recognition apparatus | |
JP2021530821A (en) | Methods, equipment and computer programs for performing 3D wireless model construction | |
CN111045000A (en) | Monitoring system and method | |
CN113936198A (en) | Low-beam laser radar and camera fusion method, storage medium and device | |
CN113838125B (en) | Target position determining method, device, electronic equipment and storage medium | |
CN117115784A (en) | Vehicle detection method and device for target data fusion | |
Lin et al. | Pedestrian detection by fusing 3D points and color images | |
CN110992424A (en) | Positioning method and system based on binocular vision | |
CN111935641B (en) | Indoor self-positioning realization method, intelligent mobile device and storage medium | |
CN115223135B (en) | Parking space tracking method and device, vehicle and storage medium | |
CN114155557B (en) | Positioning method, positioning device, robot and computer-readable storage medium | |
WO2022083529A1 (en) | Data processing method and apparatus | |
CN110673607A (en) | Feature point extraction method and device in dynamic scene and terminal equipment | |
CN114740867A (en) | Intelligent obstacle avoidance method and device based on binocular vision, robot and medium | |
CN113673288A (en) | Idle parking space detection method and device, computer equipment and storage medium | |
CN212044739U (en) | Positioning device and robot based on inertial data and visual characteristics | |
Nowak et al. | Vision-based positioning of electric buses for assisted docking to charging stations |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |