CN114519739A - Direction positioning method and device based on recognition device and storage medium - Google Patents
- Publication number
- CN114519739A CN114519739A CN202210417590.2A CN202210417590A CN114519739A CN 114519739 A CN114519739 A CN 114519739A CN 202210417590 A CN202210417590 A CN 202210417590A CN 114519739 A CN114519739 A CN 114519739A
- Authority
- CN
- China
- Prior art keywords: determining, information, identification, position information, image
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1408—Methods for optical code recognition the method being specifically adapted for the type of code
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1408—Methods for optical code recognition the method being specifically adapted for the type of code
- G06K7/1417—2D bar codes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/14—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
- G06K7/1404—Methods for optical code recognition
- G06K7/1439—Methods for optical code recognition including a method step for retrieval of the optical code
- G06K7/1443—Methods for optical code recognition including a method step for retrieval of the optical code locating of the code in an image
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Toxicology (AREA)
- Electromagnetism (AREA)
- Health & Medical Sciences (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Image Analysis (AREA)
Abstract
The present disclosure provides a direction positioning method, device, and storage medium based on a recognition device. The method is applied to a mobile device and comprises: acquiring a captured image; performing image analysis on the captured image to determine at least two identification devices in the image and their corresponding device position information; and performing relative-position analysis on the device position information to determine the current position information of the mobile device. The method achieves accurate short-range positioning, is simple to implement and highly reliable, and can be applied to accurate short-range positioning in a variety of scenes.
Description
Technical Field
The present disclosure relates to the field of visual positioning technologies, and in particular to a direction positioning method and apparatus based on an identification device, and a storage medium.
Background
With the development of computing, sensing, and other intelligent technologies, smart household appliances such as sweeping robots have become widespread in daily life. A sweeping robot can plan a cleaning path with a map-based positioning scheme to cover the floor to be cleaned, and must return to its charging dock when cleaning is finished or the battery runs low. Because map-based positioning is not highly reliable, however, it is difficult to align the robot with the dock dependably. At present, alignment between the robot and the dock is commonly performed with multiple groups of infrared emitters and receivers, yet deviations still occur with this approach.
Disclosure of Invention
The present disclosure provides a direction positioning method, apparatus, and storage medium based on an identification device, so as to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided a direction positioning method based on an identification device, applied to a mobile device, the method including: acquiring a captured image; performing image analysis on the captured image to determine at least two identification devices in the captured image and their corresponding device position information; and performing relative-position analysis on the device position information to determine the current position information of the mobile device.
In an embodiment, performing image analysis on the captured image to determine at least two identification devices and corresponding device position information includes: performing device recognition on the captured image to obtain a first identification device and a second identification device; performing identifier recognition on the first and second identification devices to obtain first labeling information and second labeling information; determining first position information corresponding to the first identification device according to the first labeling information; and determining second position information corresponding to the second identification device according to the second labeling information.
In an embodiment, performing relative-position analysis on the device position information to determine the current position information of the mobile device includes: determining a current relative position according to the first position information and the second position information; matching the current relative position against a preset relative position to obtain a matching result; and determining the current position information of the mobile device according to the matching result.
In an embodiment, determining the current position information of the mobile device according to the matching result includes: if the matching result is that the current relative position is consistent with the front-view preset relative position, determining that the mobile device is currently in the front-view direction of the identification devices; if the current relative position is consistent with the side-view preset, determining that the mobile device is in the side-view direction; and if the current relative position is consistent with the top-view preset, determining that the mobile device is in the top-view direction.
In an embodiment, obtaining the captured image includes: capturing an image with an image acquisition apparatus to obtain the captured image.
In an embodiment, the method further comprises: and controlling the mobile equipment to move to a target position according to the current position information of the mobile equipment.
In an implementation, the identification device is provided with an identifier, and the identifier is a two-dimensional code and/or an ArUco code.
According to a second aspect of the present disclosure, there is provided a direction positioning apparatus based on an identification apparatus, the apparatus being applied to a mobile device, the apparatus including: the image acquisition module is used for acquiring an acquired image; the image processing module is used for carrying out image analysis on the acquired image and determining at least two identification devices and corresponding device position information in the acquired image; and the first position determining module is used for carrying out relative position analysis on the device position information and determining the current position information of the mobile equipment.
In an embodiment, the image processing module includes: a device recognition module, configured to perform device recognition on the captured image to obtain a first identification device and a second identification device; an identifier recognition module, configured to perform identifier recognition on the first and second identification devices to obtain first labeling information and second labeling information; a first position information determining module, configured to determine first position information corresponding to the first identification device according to the first labeling information; and a second position information determining module, configured to determine second position information corresponding to the second identification device according to the second labeling information.
In one embodiment, the first position determining module includes: a relative position determining module, configured to determine a current relative position according to the first position information and the second position information; the matching module is used for matching the current relative position with a preset relative position to obtain a matching result; and the second position determining module is used for determining the current position information of the mobile equipment according to the matching result.
In one embodiment, the second position determination module comprises: a first position determining submodule, configured to determine that the current position information is in the front view direction of the identification device if the matching result is that the current relative position is consistent with the front view position corresponding to the preset relative position; a second position determining submodule, configured to determine that the current position information is in a side view direction of the recognition device if the matching result is that the current relative position is consistent with the side view position corresponding to the preset relative position; and a third position determining submodule, configured to determine that the current position information is in the top view direction of the identification device if the matching result is that the current relative position is consistent with the top view position corresponding to the preset relative position.
In one embodiment, the apparatus further comprises: and the image acquisition module is used for acquiring images through the image acquisition device to obtain acquired images.
In one embodiment, the apparatus further comprises: and the control module is used for controlling the mobile equipment to move to the target position according to the current position information of the mobile equipment.
According to a third aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the present disclosure.
With the method of the present disclosure, the mobile device analyzes the captured image to determine at least two identification devices in it and the device position information corresponding to each, performs relative-position analysis on that information, and determines its own current position information. This achieves accurate short-range positioning, can be used for accurate short-range positioning in a variety of scenes, and is simple to implement and highly reliable.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1 is a schematic diagram illustrating a first implementation flow of a direction positioning method based on an identification device according to a first embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating a second implementation flow of a direction positioning method based on an identification device according to a first embodiment of the present disclosure;
fig. 3 is a schematic diagram illustrating a third implementation flow of a direction positioning method based on an identification device according to a first embodiment of the present disclosure;
figs. 4a to 4c are schematic diagrams illustrating a first implementation scenario of a direction positioning method based on an identification device according to a second embodiment of the present disclosure;
figs. 5a to 5c are schematic diagrams illustrating a second implementation scenario of a direction positioning method based on an identification device according to a second embodiment of the present disclosure;
fig. 6 is a block diagram of a direction positioning apparatus based on an identification device according to a third embodiment of the present disclosure;
fig. 7 is a schematic diagram illustrating a composition structure of an electronic device according to a fourth embodiment of the disclosure.
Detailed Description
In order to make the objects, features and advantages of the present disclosure more apparent and understandable, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only a part of the embodiments of the present disclosure, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
Fig. 1 is a schematic flow chart illustrating an implementation of a direction positioning method based on an identification device according to a first embodiment of the present disclosure.
Referring to fig. 1, according to a first aspect of the embodiments of the present disclosure, there is provided a direction positioning method based on an identification device, the method including: step 101, acquiring a collected image; 102, carrying out image analysis on the collected image, and determining at least two identification devices and corresponding device position information in the collected image; and 103, analyzing the relative position of the device position information, and determining the current position information of the mobile equipment.
The direction positioning method based on an identification device is applied to a mobile device. The mobile device analyzes the captured image to determine at least two identification devices in it and the device position information corresponding to each, then performs relative-position analysis on the obtained device position information to determine its own current position information. This achieves accurate short-range positioning, can be used for accurate short-range positioning in a variety of scenes, and is simple to implement and highly reliable.
The mobile device referred to in this method may be a smart household appliance such as a sweeping or cleaning robot, or it may be an automobile or another movable vehicle.
In step 101, a captured image is first obtained. The mobile device may acquire it by photographing with a camera or by collecting the surrounding environment through optical scanning or the like. The format of the captured image may be JPEG (Joint Photographic Experts Group), BMP (Bitmap), GIF (Graphics Interchange Format), PNG (Portable Network Graphics), or the like.
In step 102, after obtaining the captured image, the mobile device analyzes it to extract the information it contains. This may be general information such as the image's size, resolution, or color, or information about recognizable objects in the image: for example, an algorithm may identify the recognizable objects and determine their position, attributes, or other properties within the image. When at least two identification devices are recognized in the captured image, the devices are marked and their corresponding position information is obtained.
In an implementation, the identification devices may be the same or different in size and type, and each carries labeling information: a unique identifier used to distinguish it from the other devices, for example a distinct ArUco code per device. The mobile device analyzes the captured image, recognizes the ArUco code on each identification device, and determines which device the code belongs to from the information the code contains (such as its id), so that the position of that device in the captured image can be determined.
In an implementation, the captured image may be preprocessed during analysis to discard unusable frames, such as images in which no identification device can be recognized or images with low definition, thereby reducing their interference with subsequent operations.
In step 103, once at least two identification devices and their positions in the captured image have been determined, relative position information is analyzed from the identified position of each device: for example, the position of each device relative to the captured image as a whole, and the relative position between the devices within the image. The current position information of the mobile device can then be determined from the relative position between the identification devices in the captured image.
Fig. 2 is a schematic diagram illustrating a second implementation flow of a direction positioning method based on an identification device according to a first embodiment of the present disclosure.
Referring to fig. 2, in this embodiment of the present disclosure, step 102 of performing image analysis on the captured image to determine at least two identification devices and corresponding device position information includes: step 1021, performing device recognition on the captured image to obtain a first identification device and a second identification device; step 1022, performing identifier recognition on the first and second identification devices to obtain first labeling information and second labeling information; step 1023, determining first position information corresponding to the first identification device according to the first labeling information; and step 1024, determining second position information corresponding to the second identification device according to the second labeling information.
Specifically, after a captured image containing at least two identification devices is acquired, device recognition is performed on it. The identification devices in the image can be detected and recognized with an image recognition algorithm, for example using the open-source computer vision library OpenCV. The first and second identification devices recognized in the image are marked, yielding first labeling information for the first device and second labeling information for the second. The mobile device then determines the first position information of the first identification device in the captured image from the first labeling information, and the second position information of the second identification device from the second labeling information.
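A minimal sketch of steps 1021 to 1024 — pairing detected markers with the two labeled devices and reducing each to a pixel position — might look as follows. The id-to-device mapping (ids 7 and 8) and the corner coordinates are invented for illustration; the `(4, 2)` corner layout mirrors what ArUco-style detectors return.

```python
# Sketch: derive per-device position information (here, the pixel centre of
# each marker) from detector output. Ids 7 and 8 standing for the first and
# second identification devices are illustrative assumptions.

DEVICE_BY_ID = {7: "first", 8: "second"}  # labeling info -> device

def marker_center(corners):
    """Mean of the four corner points = marker centre in pixels."""
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    return (sum(xs) / 4.0, sum(ys) / 4.0)

def device_positions(ids, corner_sets):
    """Map each recognised device to its position in the captured image."""
    positions = {}
    for marker_id, corners in zip(ids, corner_sets):
        device = DEVICE_BY_ID.get(marker_id)
        if device is not None:
            positions[device] = marker_center(corners)
    return positions

# Simulated detections: marker id -> four (x, y) corner points.
detections = {
    7: [(100, 50), (140, 50), (140, 90), (100, 90)],
    8: [(100, 150), (140, 150), (140, 190), (100, 190)],
}
pos = device_positions(list(detections), list(detections.values()))
print(pos)  # {'first': (120.0, 70.0), 'second': (120.0, 170.0)}
```

Reducing each marker to its centre point is one simple choice; the full corner set could equally be kept if orientation is needed later.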
Fig. 3 is a schematic flow chart showing an implementation of a direction positioning method based on an identification device according to a first embodiment of the present disclosure.
Referring to fig. 3, in this embodiment of the present disclosure, step 103 of performing relative-position analysis on the device position information to determine the current position information of the mobile device includes: step 1031, determining a current relative position according to the first position information and the second position information; step 1032, matching the current relative position against a preset relative position to obtain a matching result; and step 1033, determining the current position information of the mobile device according to the matching result.
Specifically, the first and second identification devices are placed at the target position in a fixed arrangement. For each viewing direction, the mobile device stores a stored image showing the two devices as they appear from that direction, together with the corresponding direction.
The preset relative position is a relative position between the first recognition device and the second recognition device in the stored image, and is also stored in the mobile equipment.
The current relative position refers to a relative position between the first recognition device and the second recognition device in the captured image, and is determined by first position information of the first recognition device and second position information of the second recognition device in the captured image.
The current relative position is matched against the preset relative positions stored in the mobile device. If it successfully matches a preset relative position, the direction from which the mobile device captured the image is taken to be the direction of the stored image corresponding to that preset, which determines the relative positional relationship between the mobile device and the target position.
In an implementation, determining the current position information of the mobile device according to the matching result includes: if the matching result is that the current relative position is consistent with the front-view preset relative position, determining that the mobile device is currently in the front-view direction of the identification devices; if it is consistent with the side-view preset, determining that the mobile device is in the side-view direction; and if it is consistent with the top-view preset, determining that the mobile device is in the top-view direction.
In one embodiment, the mobile device stores at least three stored images — front view, side view, and top view — of the first and second identification devices in their fixed arrangement at the target position. Specifically, if the matching result is that the current relative position is consistent with the front-view preset, the current position information is determined to be the front-view direction of the identification devices; the side-view and top-view cases are judged in the same way and are not repeated here.
If the matching result is that the current relative position is consistent with none of the front-view, side-view, and top-view presets, the first and second identification devices in the captured image are compared with the stored images to determine which preset relative position lies in the same plane as the captured image, and the current position information of the mobile device is determined from how the current relative position deviates from that preset.
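The matching in steps 1031 to 1033 can be illustrated with a simple tolerance comparison. The preset offsets and the pixel tolerance below are invented for the sketch; the disclosure does not give concrete values.

```python
# Sketch: classify the current relative position (offset of the second
# device from the first, in image pixels) against stored preset relative
# positions. Preset offsets and the tolerance are illustrative assumptions.

PRESETS = {
    "front view": (0.0, 100.0),   # second device directly below the first
    "side view":  (40.0, 100.0),  # apparent horizontal shift seen from the side
    "top view":   (0.0, 30.0),    # foreshortened vertical gap seen from above
}
TOLERANCE = 10.0  # pixels

def match_view(first_pos, second_pos):
    """Return the preset view whose stored offset matches, else None."""
    dx = second_pos[0] - first_pos[0]
    dy = second_pos[1] - first_pos[1]
    for view, (px, py) in PRESETS.items():
        if abs(dx - px) <= TOLERANCE and abs(dy - py) <= TOLERANCE:
            return view
    return None

print(match_view((120.0, 70.0), (120.0, 170.0)))  # front view
print(match_view((120.0, 70.0), (158.0, 172.0)))  # side view
```

A per-axis tolerance is the simplest matching rule; a real system might normalise by marker size first so the comparison is robust to viewing distance.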
To facilitate a further understanding of the above embodiments, a specific implementation scenario is provided below.
Fig. 4a to 4c are schematic diagrams illustrating a first implementation scenario of a direction positioning method based on an identification device according to a second embodiment of the present disclosure.
Referring to figs. 4a to 4c, in an embodiment the first identification device 401 and the second identification device 402 are placed at the target position with both a vertical and a front-back offset. In one such arrangement, the first identification device 401 is above and behind the second identification device 402, and the two appear vertically aligned when seen from the front. Fig. 4a shows the front view of devices 401 and 402 as actually placed, fig. 4b the side view, and fig. 4c the top view. Figs. 4a to 4c are the stored images kept by the mobile device, and the relative positional relationship between devices 401 and 402 in these figures constitutes the preset relative positions.
Fig. 5a to 5c are schematic diagrams illustrating a second implementation scenario of a direction positioning method based on an identification device according to a second embodiment of the present disclosure.
Referring to figs. 5a to 5c, the current relative position of the first identification device 501 and the second identification device 502 in the captured image is matched against the preset relative positions. If the matching result is that device 501 appears directly above device 502 — consistent with the front-view preset — the mobile device is directly in front of the devices' actual placement position. If none of the front-view, side-view, and top-view presets match, the devices 501 and 502 in the captured image are compared with the stored images; if the stored image determined to lie in the same plane as the captured image is the front view of the devices as actually placed, as shown in fig. 4a, the direction in which the current relative position deviates from that preset is examined: when device 501 appears to the left of device 502 in the captured image, the mobile device is to the left front of the actual placement position of the identification devices, and when device 501 appears to the right of device 502, the mobile device is to the right front of it.
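The left/right deviation described above reduces to comparing the horizontal coordinates of the two markers. A hedged sketch follows; the dead-band threshold is an assumption, not a value from the disclosure.

```python
# Sketch: when no preset view matches exactly, infer whether the mobile
# device sits left-front, right-front, or directly in front of the target
# from the horizontal offset between the two markers. In the scenario above
# the markers are vertically aligned when viewed head-on, so any horizontal
# offset signals a sideways displacement. The dead-band is an assumption.

DEAD_BAND = 5.0  # pixels within which the markers count as aligned

def heading_from_offset(first_x, second_x):
    dx = first_x - second_x
    if abs(dx) <= DEAD_BAND:
        return "directly in front"
    # First marker appearing left of the second => viewer is to the left-front.
    return "left front" if dx < 0 else "right front"

print(heading_from_offset(120.0, 120.0))  # directly in front
print(heading_from_offset(100.0, 120.0))  # left front
print(heading_from_offset(140.0, 120.0))  # right front
```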
In one embodiment, step 101, obtaining a captured image comprises: and acquiring an image through an image acquisition device to obtain an acquired image.
Specifically, the mobile device is provided with an image acquisition apparatus, such as a camera or a scanner, and can obtain the captured image by photographing the surrounding environment with the camera or scanning it with the scanner.
In one embodiment, after performing the relative position analysis on the device position information to determine the current position information of the mobile device, the method further comprises: controlling the mobile device to move to a target position according to the current position information of the mobile device.
Specifically, the target position is the position where the identification devices are actually placed, and is also the position to which the mobile device is to move; the first identification device and the second identification device may be placed at the target position in a certain manner. The mobile device determines its positional relationship to the target position from the current relative positions of the first recognition device and the second recognition device in the captured image, and is controlled to move to the target position according to that relationship. The mobile device may move directly to the target position, or first move to the position directly in front of the target position and then approach it; this embodiment does not limit the specific manner in which the mobile device moves.
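The two movement strategies mentioned above (move directly, or align in front of the target first) can be sketched as a simple waypoint planner. This is a hypothetical illustration, not the patent's control method: the coordinate frame, the standoff distance, and the bearing labels are assumptions.

```python
def plan_waypoints(current_bearing, target_xy, standoff=1.0):
    """Plan a simple approach to the target position.

    current_bearing: "front", "front-left", or "front-right", as derived
    from the markers' relative positions in the captured image.
    target_xy: (x, y) of the spot where the recognition devices are
    placed, in a frame where "in front of" means smaller y (assumed).
    """
    x, y = target_xy
    front_point = (x, y - standoff)  # a point directly in front of the target
    if current_bearing == "front":
        return [target_xy]           # already head-on: approach directly
    return [front_point, target_xy]  # align in front first, then approach
```

With the default standoff of 1.0, a device seeing the markers from the front left would first move to the point one unit in front of the target before closing in.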
In an embodiment, the identification device is provided with a mark, and the mark is a two-dimensional code and/or an Aruco code.
Specifically, to make the identification device easy to recognize, it is provided with an identifier, which may be a two-dimensional code and/or an Aruco code. Different identification devices may carry different identifiers: for example, both the first identification device and the second identification device may carry two-dimensional codes, or the first identification device may carry a two-dimensional code and the second identification device an Aruco code. The two-dimensional code and/or the Aruco code may be affixed or otherwise attached to the identification device.
In an embodiment, the two-dimensional code and/or the Aruco code may also be used directly as the identification device.
In an embodiment, the identifier may alternatively be another picture or recognizable object that uniquely identifies the identification device.
A two-dimensional code is a machine-readable bar code in which data are recorded by black-and-white patterns distributed over a plane (in two dimensions) according to a certain rule using specific geometric figures; it has the advantages of large information capacity, high reliability and the like.
An Aruco code is similar to a two-dimensional code: it is a synthetic square marker consisting of a wide black border and an internal binary matrix, and that internal binary matrix determines the id of the Aruco code.
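A detector that reads such markers typically reports each marker's id together with its four corner points; the marker's pixel center, from which the relative positions above are computed, then follows directly. A minimal pure-Python sketch, assuming the corner order commonly used by ArUco-style detectors (the patent does not specify one):

```python
def marker_center(corners):
    """Pixel center of a square marker from its four corner points.

    `corners` is a list of four (x, y) pairs, in the order ArUco-style
    detectors commonly report them: top-left, top-right, bottom-right,
    bottom-left. In a real pipeline these corners would come from a
    marker detector such as OpenCV's cv2.aruco module (not shown here).
    """
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    return (sum(xs) / 4.0, sum(ys) / 4.0)
```

For an axis-aligned 10×10 marker at the origin, the center comes out at (5.0, 5.0); the same average also works for rotated or perspective-distorted quadrilaterals as an approximation.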
In short-distance positioning of the mobile device, the captured image is analyzed to determine the first position information and the second position information of the first identification device and the second identification device. The current relative position of the two identification devices in the captured image is determined from the first position information and the second position information, the current relative position is matched against the preset relative position, and the current position information of the mobile device is determined from the matching result. Short-distance accurate positioning of the mobile device is thus achieved through the identification devices; and because each identification device carries identification information and can therefore be uniquely determined, the positioning reliability is very high.
Fig. 6 shows a block diagram of a direction positioning device based on an identification device according to a third embodiment of the present disclosure.
Referring to fig. 6, according to a second aspect of the embodiments of the present disclosure, there is provided a direction positioning apparatus based on an identification device, the apparatus including: an image acquisition module 602, configured to obtain a captured image; the image processing module 603 is configured to perform image analysis on the captured image, and determine at least two recognition devices and corresponding device position information in the captured image; a first position determining module 604, configured to perform a relative position analysis on the device position information to determine current position information of the mobile device.
In one embodiment, the image processing module 603 includes: a device identification module 6031, configured to perform device identification on the acquired image to obtain a first identification device and a second identification device; an identification recognition module 6032, configured to perform identification recognition on the first recognition device and the second recognition device to obtain first labeling information and second labeling information; a first location information determining module 6033, configured to determine, according to the first label information, first location information corresponding to the first identification device; a second location information determining module 6034, configured to determine second location information corresponding to the second identifying device according to the second label information.
In one embodiment, the first position determining module 604 includes: a relative position determining module 6041, configured to determine a current relative position according to the first position information and the second position information; a matching module 6042, configured to match the current relative position with a preset relative position to obtain a matching result; and a second location determining module 6043, configured to determine current location information of the mobile device according to the matching result.
In one possible implementation, the second position determining module 6043 includes: a first position determining submodule, configured to determine that the current position information is in the front-view direction of the recognition device if the matching result is that the current relative position is consistent with the front-view position corresponding to the preset relative position; a second position determining submodule, configured to determine that the current position information is in the side-view direction of the recognition device if the matching result is that the current relative position is consistent with the side-view position corresponding to the preset relative position; and a third position determining submodule, configured to determine that the current position information is in the top-view direction of the recognition device if the matching result is that the current relative position is consistent with the top-view position corresponding to the preset relative position.
In one embodiment, the apparatus further comprises: the image acquisition module 601 is configured to acquire an image through an image acquisition device to obtain an acquired image.
In one embodiment, the apparatus further comprises: and the control module 605 is configured to control the mobile device to move to the target location according to the current location information of the mobile device.
The present disclosure also provides an electronic device and a readable storage medium according to an embodiment of the present disclosure.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information or data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combining a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means two or more unless specifically limited otherwise.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (10)
1. A direction positioning method based on a recognition device is applied to mobile equipment, and comprises the following steps:
acquiring a collected image;
carrying out image analysis on the collected image, and determining at least two identification devices and corresponding device position information in the collected image;
and carrying out relative position analysis on the device position information to determine the current position information of the mobile equipment.
2. The method of claim 1, wherein the image analyzing the captured image to determine at least two recognition devices and corresponding device location information in the captured image comprises:
carrying out device identification on the acquired image to obtain a first identification device and a second identification device;
identifying the first identification device and the second identification device to obtain first labeling information and second labeling information;
determining first position information corresponding to the first identification device according to the first labeling information;
and determining second position information corresponding to the second identification device according to the second labeling information.
3. The method of claim 2, wherein performing a relative location analysis on the device location information to determine current location information of the mobile device comprises:
determining a current relative position according to the first position information and the second position information;
matching the current relative position with a preset relative position to obtain a matching result;
and determining the current position information of the mobile equipment according to the matching result.
4. The method of claim 3, wherein the determining the current location information of the mobile device according to the matching result comprises:
if the matching result is that the current relative position is consistent with the front-view position corresponding to the preset relative position, determining that the current position information is in the front-view direction of the identification device;
if the matching result is that the current relative position is consistent with the side-view position corresponding to the preset relative position, determining that the current position information is in the side-view direction of the identification device;
and if the matching result is that the current relative position is consistent with the top-view position corresponding to the preset relative position, determining that the current position information is in the top-view direction of the identification device.
5. The method of claim 1, wherein the obtaining an acquired image comprises:
acquiring an image through an image acquisition device to obtain the acquired image.
6. The method of claim 1, further comprising:
controlling the mobile device to move to a target position according to the current position information of the mobile device.
7. The method according to claim 1, wherein the identification device is provided with a mark, and the mark is a two-dimensional code and/or an Aruco code.
8. A direction positioning device based on an identification device, which is applied to a mobile device, and comprises:
the image acquisition module is used for acquiring an acquired image;
the image processing module is used for carrying out image analysis on the acquired image and determining at least two identification devices and corresponding device position information in the acquired image;
and the first position determining module is used for carrying out relative position analysis on the device position information and determining the current position information of the mobile equipment.
9. The apparatus of claim 8, wherein the image processing module comprises:
the device identification module is used for carrying out device identification on the acquired image to obtain a first identification device and a second identification device;
the identification recognition module is used for identifying the first recognition device and the second recognition device to obtain first labeling information and second labeling information;
the first position information determining module is used for determining first position information corresponding to the first identification device according to the first marking information;
and the second position information determining module is used for determining second position information corresponding to the second identification device according to the second marking information.
10. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210417590.2A CN114519739A (en) | 2022-04-21 | 2022-04-21 | Direction positioning method and device based on recognition device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114519739A true CN114519739A (en) | 2022-05-20 |
Family
ID=81600606
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210417590.2A Pending CN114519739A (en) | 2022-04-21 | 2022-04-21 | Direction positioning method and device based on recognition device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114519739A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116185046A (en) * | 2023-04-27 | 2023-05-30 | 北京宸普豪新科技有限公司 | Mobile robot positioning method, mobile robot and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105825160A (en) * | 2015-01-05 | 2016-08-03 | 苏州宝时得电动工具有限公司 | Positioning device based on image recognition and positioning method thereof |
CN106990776A (en) * | 2017-02-27 | 2017-07-28 | 广东省智能制造研究所 | Robot goes home localization method and system |
US20180367966A1 (en) * | 2016-09-09 | 2018-12-20 | International Business Machines Corporation | Providing visualization data to a co-located plurality of mobile devices |
US20210209367A1 (en) * | 2018-05-22 | 2021-07-08 | Starship Technologies Oü | Method and system for analyzing robot surroundings |
CN113907645A (en) * | 2021-09-23 | 2022-01-11 | 追觅创新科技(苏州)有限公司 | Mobile robot positioning method and device, storage medium and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||