CN114677425A - Method and device for determining depth of field of object

Method and device for determining depth of field of object

Info

Publication number
CN114677425A
CN114677425A (application number CN202210266067.4A)
Authority
CN
China
Prior art keywords: image data, real, field, historical, depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210266067.4A
Other languages
Chinese (zh)
Inventor
程大治
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaoma Huixing Technology Co ltd
Original Assignee
Beijing Xiaoma Huixing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaoma Huixing Technology Co ltd filed Critical Beijing Xiaoma Huixing Technology Co ltd
Priority to CN202210266067.4A priority Critical patent/CN114677425A/en
Publication of CN114677425A publication Critical patent/CN114677425A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/55 Image analysis; depth or shape recovery from multiple images
    • G06N 3/045 Neural networks; combinations of networks
    • G06N 3/08 Neural networks; learning methods
    • G06T 5/70 Image enhancement or restoration; denoising, smoothing
    • G06T 7/136 Segmentation; edge detection involving thresholding
    • G06T 2207/10004 Image acquisition modality; still image, photographic image
    • G06T 2207/30252 Subject of image; vehicle exterior, vicinity of vehicle


Abstract

The application provides a method and a device for determining the depth of field of an object. The method comprises the following steps: acquiring a plurality of pieces of historical image data, the pieces being obtained by using a plurality of image acquisition devices with different field angles to shoot historical target objects at the same distance and/or different distances from a vehicle during a historical time period; training a normalization model using each piece of historical image data and the depth of field of the historical target object corresponding to it; acquiring real-time image data and using the normalization model to determine the calculated depth of field of the real-time target object corresponding to the real-time image data; and performing inverse normalization processing on the calculated depth of field using the relevant parameters of the image acquisition device that shot the real-time image data, to obtain the real depth of field of the real-time target object. Because it is trained on data from devices with different field angles, the normalization model of this scheme is applicable to image acquisition devices with different field angles and to target objects at different distances.

Description

Method and device for determining depth of field of object
Technical Field
The present application relates to the field of computer vision, and in particular, to a method, apparatus, computer-readable storage medium, processor, vehicle and system for determining a depth of field of an object.
Background
Computer vision is primarily the simulation of biological vision by a computer and associated vision sensors. A vision sensor first captures an external image, which is then converted into a digital signal so that the image data can be processed.
Estimating the depth of field of an object is an important branch of computer vision. Existing schemes usually adopt simple monocular or binocular scene depth estimation methods: a monocular camera yields fewer features, while a binocular camera requires stereo image matching and is computationally more complex. Both the monocular and the binocular scene depth estimation methods therefore suffer from low accuracy.
Disclosure of Invention
A primary objective of the present application is to provide a method, an apparatus, a computer-readable storage medium, a processor, a vehicle and a system for determining a depth of field of an object, so as to at least solve the problem of low accuracy of a method for estimating the depth of field of the object.
In order to achieve the above object, according to one aspect of the present application, there is provided a method of determining a depth of field of an object, the method being applied to a vehicle driving system including a vehicle and a plurality of image acquisition devices mounted on the vehicle and having different angles of view, the method including: acquiring a plurality of pieces of historical image data, wherein the plurality of pieces of historical image data are obtained by using the plurality of image acquisition devices with different field angles to shoot historical target objects at the same distance and/or different distances from the vehicle during a historical time period; training a normalization model using each piece of historical image data and the depth of field of the historical target object corresponding to each piece of historical image data; acquiring real-time image data, and using the normalization model to determine the calculated depth of field of the real-time target object corresponding to the real-time image data; and performing inverse normalization processing on the calculated depth of field of the real-time target object using the relevant parameters of the image acquisition device that shot the real-time image data, to obtain the real depth of field of the real-time target object corresponding to the real-time image data.
Further, performing inverse normalization processing on the calculated depth of field of the real-time target object using the relevant parameters of the image acquisition device that shot the real-time image data, to obtain the real depth of field of the real-time target object corresponding to the real-time image data, includes: determining, according to the relevant parameters of the image acquisition device, a ratio relation between the calculated depth of field of the real-time target object and the real depth of field of the real-time target object to be determined; and determining the real depth of field of the real-time target object according to the ratio relation and the calculated depth of field of the real-time target object.
Further, training to obtain the normalization model by using each piece of the historical image data and the depth of field of the historical target corresponding to it includes: performing filtering and threshold segmentation processing on the historical image data to obtain the processed historical image data corresponding to each piece; and training the normalization model using each piece of processed historical image data and the depth of field of the historical target corresponding to it.
Further, training by using each processed historical image data and the depth of field of the historical target corresponding to each processed historical image data to obtain a normalized model, including: extracting a plurality of different characteristic parameters of the processed historical image data, wherein one color channel of the processed historical image data represents one characteristic parameter; and training to obtain a normalized model by adopting a plurality of different characteristic parameters of each processed historical image data and the depth of field of the historical target corresponding to each processed historical image data.
Further, in the case where there are three image acquisition devices, training a normalization model using each piece of the historical image data and the corresponding depth of field of the historical target object includes: constructing a training set, wherein the training set includes a first number of pieces of historical image data captured by a first field angle image acquisition device, a second number captured by a second field angle image acquisition device, and a third number captured by a third field angle image acquisition device, wherein the first number is determined at least by the size of the first field angle, the relative positional relationship between the first field angle image acquisition device and the vehicle, and the relative positional relationship between the historical target object and the vehicle; the second number is determined at least by the size of the second field angle, the relative positional relationship between the second field angle image acquisition device and the vehicle, and the relative positional relationship between the historical target object and the vehicle; and the third number is determined at least by the size of the third field angle, the relative positional relationship between the third field angle image acquisition device and the vehicle, and the relative positional relationship between the historical target object and the vehicle; and training the normalization model using the training set.
Further, in the process of training by using each of the historical image data and the depth of field of the historical target corresponding to each of the historical image data, the method further includes: acquiring an error between an output result obtained by the normalized model and the depth of field of the historical target object; adjusting at least one of the first number, the second number, and the third number based on the error.
Further, after the inverse normalization processing is performed on the calculated depth of field of the real-time target object by using the parameters related to the image acquisition device for shooting the real-time image data, and the real depth of field of the real-time target object corresponding to the real-time image data is obtained, the method further includes: and determining the running speed and the running acceleration of the vehicle according to the real depth of field of the real-time target object.
Further, the relevant parameters of the image acquisition device include a relative pose between coordinates of the image acquisition device and world coordinates, an optical center position of the image acquisition device, and a distortion amount of the image acquisition device.
Further, the angle of view is one of 30 °, 60 °, 90 °, and 120 °.
Further, the normalization model is a convolutional neural network model, and the convolutional neural network model comprises an input layer, an output layer and a hidden layer.
According to another aspect of the present application, there is provided an apparatus for determining the depth of field of an object, the apparatus being applied to a vehicle driving system including a vehicle and a plurality of image acquisition devices mounted on the vehicle and having different angles of view, the apparatus including: a first acquisition unit configured to acquire a plurality of pieces of historical image data, wherein the plurality of pieces of historical image data are obtained by using the plurality of image acquisition devices with different field angles to shoot historical target objects at the same distance and/or different distances from the vehicle during a historical time period; a training unit configured to train a normalization model using each piece of historical image data and the depth of field of the historical target object corresponding to it; a second acquisition unit configured to acquire real-time image data and use the normalization model to determine the calculated depth of field of the real-time target object corresponding to the real-time image data; and a processing unit configured to perform inverse normalization processing on the calculated depth of field of the real-time target object using the relevant parameters of the image acquisition device that shot the real-time image data, to obtain the real depth of field of the real-time target object corresponding to the real-time image data.
According to another aspect of the application, there is provided a computer readable storage medium comprising a stored program, wherein the program when executed controls an apparatus in which the computer readable storage medium is located to perform any of the methods.
According to another aspect of the application, a processor for running a program is provided, wherein the program when running performs any one of the methods.
According to yet another aspect of the application, there is provided a vehicle comprising one or more processors, memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods described.
According to yet another aspect of the present application, there is provided a system comprising the vehicle and a plurality of image capturing devices having different angles of view, the image capturing devices being mounted on the vehicle, the image capturing devices being in communication with the vehicle.
By applying the technical scheme of the application, a plurality of pieces of historical image data are acquired by using a plurality of image acquisition devices with different field angles to shoot historical target objects at the same distance and/or different distances from the vehicle during a historical time period; a normalization model is trained using each piece of historical image data and the corresponding depth of field of the historical target object; real-time image data are acquired, and the normalization model determines the calculated depth of field of the corresponding real-time target object; finally, inverse normalization processing is performed on the calculated depth of field using the relevant parameters of the image acquisition device that shot the real-time image data, yielding the real depth of field of the real-time target object corresponding to the real-time image data. Because the normalization model is trained on historical target objects at the same and/or different distances from the vehicle, captured by devices with different field angles, the resulting model is applicable to image acquisition devices with different field angles and to target objects at different distances. A universal model is first trained to obtain the normalized depth of field of target objects in images acquired by devices with any of the field angles, and inverse normalization then recovers the real depth of field of the real-time target object.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. In the drawings:
fig. 1 shows a flow chart of a method of determining a depth of field of an object according to an embodiment of the application;
FIG. 2 shows a schematic diagram of an apparatus for determining depth of field of an object according to an embodiment of the present application;
FIG. 3 shows a schematic diagram of solving for a normalized multiplier according to an embodiment of the application.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the accompanying drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances in order to facilitate the description of the embodiments of the application herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It will be understood that when an element such as a layer, film, region, or substrate is referred to as being "on" another element, it can be directly on the other element or intervening elements may also be present. Also, in the specification and claims, when an element is described as being "connected" to another element, the element may be "directly connected" to the other element or "connected" to the other element through a third element.
As described in the background art, the method for estimating the depth of field of an object in the prior art has low accuracy, and to solve the problem of low accuracy of the method for estimating the depth of field of an object, embodiments of the present application provide a method, an apparatus, a computer-readable storage medium, a processor, a vehicle, and a system for determining the depth of field of an object.
According to an embodiment of the present application, a method of determining a depth of field of an object is provided.
Fig. 1 is a flow chart of a method of determining a depth of field of an object according to an embodiment of the present application. The method is applied to a vehicle driving system which comprises a vehicle and a plurality of image acquisition devices which are arranged on the vehicle and have different visual field angles, and as shown in fig. 1, the method comprises the following steps:
step S101, acquiring a plurality of pieces of historical image data, wherein the plurality of pieces of historical image data are obtained by shooting historical target objects at the same distance and/or different distances away from the vehicle in a historical time period by adopting a plurality of image acquisition devices with different field angles;
step S102, training by adopting each historical image data and the depth of field of the historical target corresponding to each historical image data to obtain a normalized model;
Step S103, acquiring real-time image data, and determining a real-time target object corresponding to the real-time image data by adopting the normalization model to calculate the depth of field;
step S104, performing inverse normalization processing on the calculated depth of field of the real-time target object by using the related parameters of the image acquisition equipment for shooting the real-time image data to obtain the real depth of field of the real-time target object corresponding to the real-time image data.
Specifically, the image capturing device may be a camera.
In the scheme, a plurality of pieces of historical image data are obtained by using a plurality of image acquisition devices with different field angles to shoot historical target objects at the same distance and/or different distances from the vehicle during a historical time period; a normalization model is trained using each piece of historical image data and the corresponding depth of field of the historical target object; real-time image data are acquired, and the normalization model determines the calculated depth of field of the corresponding real-time target object; the calculated depth of field is then inverse-normalized using the relevant parameters of the image acquisition device that shot the real-time image data, yielding the real depth of field of the real-time target object corresponding to the real-time image data. Because the normalization model is trained on historical target objects at the same and/or different distances from the vehicle, captured by devices with different field angles, the resulting model is applicable to image acquisition devices with different field angles and to target objects at different distances. A universal model is first trained to obtain the normalized depth of field of target objects in images acquired by devices with different field angles, and inverse normalization then recovers the real depth of field of the real-time target object.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
In an optional embodiment, performing inverse normalization processing on the calculated depth of field of the real-time target object by using parameters of the image acquisition device that captured the real-time image data, to obtain the true depth of field of the real-time target object corresponding to the real-time image data, includes: determining, according to the relevant parameters of the image acquisition device, a ratio relation between the calculated depth of field of the real-time target object and the true depth of field of the real-time target object to be determined; and determining the true depth of field of the real-time target object according to the ratio relation and the calculated depth of field. Because depth of field is distance information, the correspondence obtained through inverse normalization reduces to a ratio between the depth of field calculated by the normalization model and the true depth of field, so the true depth of field of the real-time target object can be determined accurately from that ratio and the model output.
In a specific embodiment of the present application, the depth of field of an object is obtained through the following steps:
Step 1: multiply the homogeneous coordinate of the center point of the 3D target object by the extrinsic matrix of the camera, and then by the intrinsic matrix of the camera, to obtain the two-dimensional coordinate of the center point of the 3D target object in the image plane;
Step 2: normalize the two-dimensional coordinate of the center point of the 3D target object in the image plane through the intrinsic matrix to obtain the coordinate of the center point on the normalized image plane;
Step 3: as shown in fig. 3, emit a ray from the camera center M to the two-dimensional coordinate N of the center point of the 3D target on the normalized image plane Z, intercept the ray at a predetermined depth (for example, 10 meters) to obtain a point P, construct a sphere B (for example, with a diameter of 1 cm) centered on the point P, project the sphere B onto the normalized image plane Z to obtain a circle C, obtain the number of pixels occupied by the width of the circle, and divide that number of pixels by the diameter of the sphere to obtain the normalization multiplier;
Step 4: divide the labeled depth of the 3D target object (namely the real depth) by the normalization multiplier to obtain the normalized object depth, which serves as the regression target for depth learning;
Step 5: inverse normalization: for a two-dimensional bounding box of an object on an image, the normalization model estimates a normalized depth. During inverse normalization, the center point of the box is taken as the two-dimensional coordinate, and steps 2 and 3 are repeated to obtain the normalization multiplier. Multiplying the depth estimated by the model by the normalization multiplier yields the inverse-normalized depth, namely the real depth.
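The normalization-multiplier computation in steps 2 and 3 can be sketched as follows. This is an illustrative reconstruction, not code from the patent: it assumes a standard pinhole intrinsic matrix `K`, and it approximates the projected sphere width by projecting the sphere's two horizontal extreme points; the function name and parameters are hypothetical.

```python
import numpy as np

def normalization_multiplier(K, u, v, depth=10.0, sphere_diameter=0.01):
    """Sketch of steps 2-3: cast a ray through pixel (u, v), stop it at
    `depth` metres to get point P, project a small sphere at P back into
    the image, and return its pixel width divided by its diameter."""
    K_inv = np.linalg.inv(K)
    ray = K_inv @ np.array([u, v, 1.0])        # ray direction in camera coords
    P = depth * ray / ray[2]                   # point P at the chosen depth
    # approximate the sphere by its two horizontal extreme points
    left = P + np.array([-sphere_diameter / 2.0, 0.0, 0.0])
    right = P + np.array([sphere_diameter / 2.0, 0.0, 0.0])

    def project(X):
        x = K @ X
        return x[:2] / x[2]

    width_px = np.linalg.norm(project(right) - project(left))
    return width_px / sphere_diameter          # pixels per metre of object size
```

For a pixel on the optical axis this reduces to the focal length divided by the depth: with a 1000-pixel focal length and the 10 m example depth, the multiplier is about 100 pixels per metre, so step 4 divides the labeled depth by a quantity that grows as objects appear larger in the image.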
In an optional embodiment, training to obtain the normalization model by using each piece of the historical image data and the depth of field of the historical target corresponding to it includes: performing filtering processing and threshold segmentation processing on each piece of historical image data to obtain the corresponding processed historical image data; and training the normalization model using each piece of processed historical image data and the depth of field of the historical target corresponding to it. Specifically, the filtering process filters noise out of the historical image data, and the threshold segmentation process performs binarization segmentation. The processed historical image data facilitate the training of the model and help ensure the accuracy of the trained normalization model.
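The filtering and binarization segmentation step might look like the following minimal sketch. The patent does not specify the filter or the threshold, so the mean filter and fixed threshold here are assumptions purely for illustration:

```python
import numpy as np

def preprocess(image, kernel=3, threshold=128):
    """Illustrative sketch: mean-filter the image to suppress noise,
    then binarize it with a fixed threshold (both choices assumed)."""
    pad = kernel // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    smoothed = np.zeros((h, w), dtype=float)
    for dy in range(kernel):                   # accumulate the k*k neighbourhood
        for dx in range(kernel):
            smoothed += padded[dy:dy + h, dx:dx + w]
    smoothed /= kernel * kernel
    return (smoothed >= threshold).astype(np.uint8)  # binary segmentation mask
```

In practice a library routine (for example a Gaussian filter plus Otsu thresholding) would replace this hand-rolled loop; the sketch only shows the order of operations the embodiment describes.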
In an optional embodiment, the training to obtain the normalized model by using each processed historical image data and the depth of field of the historical target corresponding to each processed historical image data includes: extracting a plurality of different characteristic parameters of the processed historical image data, wherein one color channel of the processed historical image data represents one characteristic parameter; and training to obtain a normalized model by adopting a plurality of different characteristic parameters of the processed historical image data and the depth of field of the historical target corresponding to the processed historical image data. The training of the model is to train parameters, extract various different characteristic parameters of the processed historical image data, and train the parameters to obtain an accurate normalized model.
Specifically, the processed historical image data has ten color channels, each color channel can represent one characteristic parameter, and then images corresponding to different color channels can be adopted for training to obtain the normalized model.
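The idea that each color channel of the processed image represents one characteristic parameter can be illustrated with a short sketch; reducing each channel to its mean activation is an assumption made here for illustration, not a method stated in the patent:

```python
import numpy as np

def channel_features(image):
    """Illustrative sketch: treat each colour channel of the processed
    image as one characteristic parameter, summarised by its mean."""
    # image shape: (H, W, C), one feature per channel
    return image.reshape(-1, image.shape[-1]).mean(axis=0)
```

For the ten-channel processed data described above, this yields a ten-element feature vector, one entry per channel, which could then feed the training of the normalization model.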
In an optional embodiment, when there are three image acquisition devices, training a normalization model using each piece of the historical image data and the corresponding depth of field of the historical target object includes: constructing a training set including a first number of pieces of historical image data captured by a first field angle image acquisition device, a second number captured by a second field angle image acquisition device, and a third number captured by a third field angle image acquisition device, wherein the first number is determined at least by the size of the first field angle, the relative positional relationship between the first field angle image acquisition device and the vehicle, and the relative positional relationship between the historical target object and the vehicle; the second number is determined at least by the size of the second field angle, the relative positional relationship between the second field angle image acquisition device and the vehicle, and the relative positional relationship between the historical target object and the vehicle; and the third number is determined at least by the size of the third field angle, the relative positional relationship between the third field angle image acquisition device and the vehicle, and the relative positional relationship between the historical target object and the vehicle; and training the normalization model using the training set. The amount of data in the training set can thus be determined according to the size of each field angle, the relative positional relationship between each image acquisition device and the vehicle, and the relative positional relationship between the historical target object and the vehicle; this adaptive adjustment helps ensure the good applicability and high accuracy of the trained normalization model.
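The training-set construction above can be sketched as follows, assuming the historical images are grouped by field angle and that the first, second, and third numbers are supplied as per-field-angle counts; all names here are hypothetical:

```python
def build_training_set(images_by_fov, counts):
    """Illustrative sketch of the three-camera training-set construction.

    images_by_fov: dict mapping a field-angle label to its image list.
    counts: dict mapping the same labels to the first/second/third
    numbers described in the text (how each count is derived from the
    field angle and positional relationships is left outside this sketch).
    """
    training_set = []
    for fov, n in counts.items():
        training_set.extend(images_by_fov[fov][:n])  # take n samples per camera
    return training_set
```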
In an optional embodiment, in the course of training with each piece of the historical image data and the depth of field of the historical target object corresponding to each piece of the historical image data, the method further includes: acquiring an error between an output result obtained by the normalized model and the depth of field of the historical target object; and adjusting at least one of the first number, the second number, and the third number according to the error. That is, to ensure the accuracy of the model parameters during training, at least one of the first number, the second number, and the third number may be adjusted according to the error between the output result of the normalized model and the depth of field of the historical target object.
In another embodiment, the parameters of the model itself may also be adjusted according to the error between the output result of the normalized model and the depth of field of the historical target object, for example, the number of layers in the network.
In an optional embodiment, after performing inverse normalization processing on the calculated depth of field of the real-time target object by using the relevant parameters of the image capturing device that captured the real-time image data to obtain the real depth of field of the real-time target object corresponding to the real-time image data, the method further includes: determining the running speed and running acceleration of the vehicle according to the real depth of field of the real-time target object. That is, real-time navigation is guided by the real depth of field of the real-time target object.
In an optional embodiment, the relevant parameters of the image capturing device include the relative pose between the coordinate system of the image capturing device and the world coordinate system, the optical center position of the image capturing device, and the distortion amount of the image capturing device. Of course, the relevant parameters may include other parameters besides these, which those skilled in the art can select according to actual requirements.
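A hypothetical container for the relevant parameters named above might look as follows; the field names and array shapes are illustrative assumptions, since the patent does not fix a representation:

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical container for the relevant parameters named in the text:
# camera-to-world pose, optical-center position, and distortion amount.
@dataclass
class CameraParams:
    rotation: np.ndarray      # 3x3 rotation, camera frame -> world frame
    translation: np.ndarray   # 3-vector camera position in the world frame
    optical_center: tuple     # (cx, cy) principal point, in pixels
    distortion: np.ndarray    # e.g. radial/tangential coefficients

params = CameraParams(np.eye(3), np.zeros(3), (640.0, 360.0), np.zeros(5))
print(params.optical_center)  # (640.0, 360.0)
```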
In an alternative embodiment, the field angle is one of: 30°, 60°, 90° and 120°. Of course, the field angle may also take values other than these.
In an alternative embodiment, the normalization model is a convolutional neural network model, and the convolutional neural network model includes an input layer, an output layer, and a hidden layer.
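A minimal numpy sketch of that stated structure (input layer, one hidden convolutional layer, output layer) is below; the kernel size, ReLU activation, and scalar regression head are illustrative assumptions, not details from the text:

```python
import numpy as np

# Minimal sketch of a CNN with input, one hidden convolutional layer,
# and a scalar output. All hyperparameters here are assumptions.
rng = np.random.default_rng(0)

def conv2d_valid(x, k):
    """Single-channel 'valid' 2-D convolution."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def forward(image, kernel, weights):
    hidden = np.maximum(conv2d_valid(image, kernel), 0.0)  # hidden layer, ReLU
    return float(hidden.ravel() @ weights)                 # output: scalar depth

img = rng.random((8, 8))
kern = rng.random((3, 3))
w = rng.random(36)  # (8 - 3 + 1)**2 = 36 flattened hidden units
depth = forward(img, kern, w)
print(depth > 0.0)  # True
```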
The embodiment of the present application further provides an apparatus for determining the depth of field of an object. It should be noted that the apparatus according to the embodiment of the present application may be used to execute the method for determining the depth of field of an object according to the embodiment of the present application. The apparatus is described below.
Fig. 2 is a schematic diagram of an apparatus for determining depth of field of an object according to an embodiment of the present application. The above-mentioned apparatus is applied to a vehicle driving system including a vehicle and a plurality of image capturing devices mounted on the vehicle and having different field angles, as shown in fig. 2, the apparatus includes:
a first acquiring unit 10, configured to acquire a plurality of pieces of historical image data, where the plurality of pieces of historical image data are obtained by shooting historical target objects at the same distance and/or different distances from the vehicle in a historical time period by using a plurality of image capturing devices with different field angles;
a training unit 20, configured to train to obtain a normalized model by using each piece of the historical image data and the depth of field of the historical target object corresponding to each piece of the historical image data;
a second acquiring unit 30, configured to acquire real-time image data and determine, by using the normalized model, the calculated depth of field of a real-time target object corresponding to the real-time image data;
and the processing unit 40, configured to perform inverse normalization processing on the calculated depth of field of the real-time target object by using the relevant parameters of the image capturing device that captured the real-time image data, so as to obtain the real depth of field of the real-time target object corresponding to the real-time image data.
In the above-mentioned solution, the first acquiring unit acquires a plurality of pieces of historical image data captured by a plurality of image capturing devices with different field angles; the training unit trains with each piece of historical image data and the depth of field of the historical target object corresponding to it to obtain a normalized model; the second acquiring unit acquires real-time image data and determines, by using the normalized model, the calculated depth of field of a real-time target object corresponding to the real-time image data; and the processing unit performs inverse normalization on the calculated depth of field using the relevant parameters of the image capturing device that captured the real-time image data, obtaining the real depth of field of the real-time target object. Because the normalized model is trained on historical target objects at the same and/or different distances from the vehicle, captured by image capturing devices with different field angles, it is applicable to image capturing devices with different field angles and to target objects at different distances. That is, a universal model is trained, the depth of field of target objects in images captured by devices with different field angles is obtained from it, and inverse normalization then yields the real depth of field of the real-time target object.
In an optional embodiment, the processing unit includes a first determining module and a second determining module. The first determining module is configured to determine, according to the relevant parameters of the image capturing device, a ratio between the calculated depth of field of the real-time target object and the real depth of field to be determined; the second determining module is configured to determine the real depth of field of the real-time target object according to the ratio and the calculated depth of field. Since depth of field is distance information, inverse normalization reduces to a ratio between the calculated depth of field output by the normalized model and the real depth of field, so the real depth of field of the real-time target object can be determined accurately from the calculated depth of field and this ratio.
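A minimal sketch of the ratio-based inverse normalization described above follows. How the ratio is derived from the camera parameters is not specified in the text; scaling by the focal length relative to a reference focal length is an assumption for illustration:

```python
# Assumed ratio: focal length relative to a reference focal length.
# This is an illustrative stand-in, not the patent's derivation.
def denormalize_depth(calculated_depth, focal_length_px, reference_focal_px=1000.0):
    scale = focal_length_px / reference_focal_px  # assumed ratio
    return calculated_depth * scale

real_depth = denormalize_depth(12.5, focal_length_px=2000.0)
print(real_depth)  # 25.0
```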
In an optional embodiment, the training unit includes a processing module and a first training module. The processing module is configured to perform filtering processing and threshold-segmentation processing on each piece of historical image data to obtain corresponding processed historical image data; the first training module is configured to train with the processed historical image data and the corresponding depths of field of the historical target objects to obtain the normalized model. Specifically, the filtering removes noise from the historical image data, and the threshold segmentation performs binarization. The processed historical image data facilitates model training and helps ensure the accuracy of the trained normalized model.
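The preprocessing step above can be sketched as noise filtering followed by binarization; the 3x3 mean filter and the fixed threshold below are illustrative choices, since the patent does not specify the filter or the threshold value:

```python
import numpy as np

# Sketch of the preprocessing: mean filtering (noise removal) then
# binarization (threshold segmentation). Filter and threshold are
# illustrative assumptions.
def preprocess(image, threshold=0.5):
    padded = np.pad(image, 1, mode="edge")
    blurred = sum(
        padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return (blurred > threshold).astype(np.uint8)  # binary segmentation

img = np.random.rand(8, 8)
mask = preprocess(img)
print(mask.shape, mask.dtype)  # (8, 8) uint8
```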
In an optional embodiment, the first training module includes an extraction submodule and a training submodule. The extraction submodule is configured to extract a plurality of different characteristic parameters from the processed historical image data, with one color channel of the processed historical image data representing one characteristic parameter; the training submodule is configured to train with the plurality of different characteristic parameters of the processed historical image data and the corresponding depths of field of the historical target objects to obtain the normalized model. Since training a model means training its parameters, extracting a variety of different characteristic parameters from the processed historical image data allows an accurate normalized model to be obtained.
In an optional embodiment, when there are three image capturing devices, the training unit includes a construction module and a second training module. The construction module is configured to construct a training set including a first number of pieces of the historical image data captured by a first field-angle image capturing device, a second number of pieces captured by a second field-angle image capturing device, and a third number of pieces captured by a third field-angle image capturing device, wherein the first number is determined by at least the size of the first field angle, the relative positional relationship between the first field-angle image capturing device and the vehicle, and the relative positional relationship between the historical target object and the vehicle; the second number is determined by at least the size of the second field angle, the relative positional relationship between the second field-angle image capturing device and the vehicle, and the relative positional relationship between the historical target object and the vehicle; and the third number is determined by at least the size of the third field angle, the relative positional relationship between the third field-angle image capturing device and the vehicle, and the relative positional relationship between the historical target object and the vehicle. The second training module is configured to train with the training set to obtain the normalized model.
The amount of data in the training set can be determined according to parameters such as the size of each field angle, the relative positional relationship between the image capturing device and the vehicle, and the relative positional relationship between the historical target object and the vehicle; this adaptive adjustment helps ensure the applicability and accuracy of the normalized model obtained through training.
In an optional embodiment, the apparatus further includes a third acquiring unit and an adjusting unit. The third acquiring unit is configured to acquire, during training with each piece of the historical image data and the corresponding depth of field of the historical target object, an error between an output result obtained by the normalized model and the depth of field of the historical target object; the adjusting unit is configured to adjust at least one of the first number, the second number, and the third number according to the error. That is, to ensure the accuracy of the model parameters during training, at least one of the first number, the second number, and the third number may be adjusted according to the error between the output result of the normalized model and the depth of field of the historical target object.
In an optional embodiment, the apparatus further includes a determining unit, configured to determine the running speed and running acceleration of the vehicle according to the real depth of field of the real-time target object, after the inverse normalization processing has been performed on the calculated depth of field using the relevant parameters of the image capturing device that captured the real-time image data to obtain the real depth of field of the real-time target object corresponding to the real-time image data. That is, real-time navigation is guided by the real depth of field of the real-time target object.
The device for determining the depth of field of the object comprises a processor and a memory, wherein the first acquisition unit, the training unit, the second acquisition unit, the processing unit and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to realize corresponding functions.
The processor includes a kernel, and the kernel calls the corresponding program unit from the memory. One or more kernels may be provided, and the depth of field of the object can be determined accurately by adjusting the kernel parameters.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
The embodiment of the invention provides a computer-readable storage medium, which includes a stored program, wherein when the program runs, a device where the computer-readable storage medium is located is controlled to execute the method for determining the depth of field of an object.
An embodiment of the present invention provides a processor, where the processor is configured to execute a program, where the program executes the method for determining the depth of field of an object when running.
Embodiments of the present invention provide a vehicle comprising one or more processors, memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for performing any of the above-described methods.
The embodiment of the invention provides a system, which comprises the vehicle and a plurality of image acquisition devices with different field angles, wherein the image acquisition devices are installed on the vehicle, and the image acquisition devices are communicated with the vehicle.
The embodiment of the invention provides equipment, which comprises a processor, a memory and a program which is stored on the memory and can run on the processor, wherein when the processor executes the program, at least the following steps are realized:
step S101, acquiring a plurality of pieces of historical image data, wherein the plurality of pieces of historical image data are obtained by shooting historical target objects at the same distance and/or different distances away from the vehicle in a historical time period by adopting a plurality of image acquisition devices with different field angles;
step S102, training by adopting each historical image data and the depth of field of the historical target corresponding to each historical image data to obtain a normalized model;
step S103, acquiring real-time image data, and determining, by using the normalized model, the calculated depth of field of a real-time target object corresponding to the real-time image data;
and step S104, performing inverse normalization processing on the calculated depth of field of the real-time target object by using the relevant parameters of the image capturing device that captured the real-time image data, to obtain the real depth of field of the real-time target object corresponding to the real-time image data.
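The steps above can be sketched as the following control-flow skeleton. The model and the inverse-normalization ratio are placeholders (the patent does not specify either); only the sequence of steps mirrors the text:

```python
# Hedged skeleton of steps S101-S104; all numeric details are assumed.
def train_normalized_model(historical_data):
    # Placeholder for CNN training on (image, depth-of-field) pairs.
    avg = sum(depth for _, depth in historical_data) / len(historical_data)
    return lambda image: avg  # S102: "normalized model"

def determine_real_depth(historical_data, realtime_image, camera_params):
    model = train_normalized_model(historical_data)   # S101-S102
    calculated = model(realtime_image)                # S103: calculated depth
    ratio = camera_params["focal_px"] / 1000.0        # S104: assumed ratio
    return calculated * ratio                         # real depth of field

hist = [("img1", 10.0), ("img2", 20.0)]
result = determine_real_depth(hist, "img", {"focal_px": 2000.0})
print(result)  # 30.0
```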
The device herein may be a server, a PC, a tablet (PAD), a mobile phone, or the like.
The present application further provides a computer program product adapted to perform a program initialized with at least the following method steps when executed on a data processing device:
step S101, acquiring a plurality of pieces of historical image data, wherein the plurality of pieces of historical image data are obtained by shooting historical target objects at the same distance and/or different distances away from the vehicle in a historical time period by adopting a plurality of image acquisition devices with different field angles;
step S102, training by adopting each historical image data and the depth of field of the historical target corresponding to each historical image data to obtain a normalized model;
step S103, acquiring real-time image data, and determining, by using the normalized model, the calculated depth of field of a real-time target object corresponding to the real-time image data;
and step S104, performing inverse normalization processing on the calculated depth of field of the real-time target object by using the relevant parameters of the image capturing device that captured the real-time image data, to obtain the real depth of field of the real-time target object corresponding to the real-time image data.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement the information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n)" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
From the above description, it can be seen that the above-mentioned embodiments of the present application achieve the following technical effects:
1) The method for determining the depth of field of an object acquires a plurality of pieces of historical image data, the plurality of pieces being obtained by capturing, with a plurality of image capturing devices having different field angles, historical target objects at the same distance and/or different distances from the vehicle within a historical time period; trains with each piece of historical image data and the depth of field of the historical target object corresponding to it to obtain a normalized model; acquires real-time image data and determines, by using the normalized model, the calculated depth of field of a real-time target object corresponding to the real-time image data; and performs inverse normalization processing on the calculated depth of field by using the relevant parameters of the image capturing device that captured the real-time image data, to obtain the real depth of field of the real-time target object. Because the normalized model is trained on historical target objects at the same and/or different distances from the vehicle, captured by image capturing devices with different field angles, it is applicable to image capturing devices with different field angles and to target objects at different distances. That is, a universal model is trained, the depths of field of target objects in images captured by devices with different field angles are obtained from it, and inverse normalization then yields the real depth of field of the real-time target object.
2) The apparatus for determining the depth of field of an object includes a first acquiring unit, a training unit, a second acquiring unit, and a processing unit. The first acquiring unit acquires a plurality of pieces of historical image data, obtained by capturing, with a plurality of image capturing devices having different field angles, historical target objects at the same distance and/or different distances from the vehicle within a historical time period; the training unit trains with each piece of historical image data and the corresponding depth of field of the historical target object to obtain a normalized model; the second acquiring unit acquires real-time image data and determines, by using the normalized model, the calculated depth of field of a real-time target object corresponding to the real-time image data; and the processing unit performs inverse normalization processing on the calculated depth of field by using the relevant parameters of the image capturing device that captured the real-time image data, to obtain the real depth of field of the real-time target object. Because the normalized model is trained on historical target objects at the same and/or different distances from the vehicle, captured by image capturing devices with different field angles, it is applicable to image capturing devices with different field angles and to target objects at different distances. That is, a universal model is trained, the depths of field of target objects in images captured by devices with different field angles are obtained from it, and inverse normalization then yields the real depth of field of the real-time target object.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (15)

1. A method for determining a depth of field of an object, the method being applied to a vehicle driving system including a vehicle and a plurality of image capturing devices mounted on the vehicle and having different angles of view, comprising:
acquiring a plurality of pieces of historical image data, wherein the plurality of pieces of historical image data are obtained by shooting historical target objects at the same distance and/or different distances away from the vehicle in a historical time period by adopting a plurality of image acquisition devices with different field angles;
training with each piece of the historical image data and the depth of field of the historical target object corresponding to each piece of the historical image data to obtain a normalized model;
acquiring real-time image data, and determining, by using the normalized model, a calculated depth of field of a real-time target object corresponding to the real-time image data;
and performing inverse normalization processing on the calculated depth of field of the real-time target object by using relevant parameters of the image capturing device that captured the real-time image data, to obtain a real depth of field of the real-time target object corresponding to the real-time image data.
2. The method according to claim 1, wherein the denormalizing the real-time object computed depth of field using parameters associated with the image capture device capturing the real-time image data to obtain the real depth of field of the real-time object corresponding to the real-time image data comprises:
determining, according to the relevant parameters of the image capturing device, a ratio between the calculated depth of field of the real-time target object and the real depth of field of the real-time target object to be determined;
and determining the real depth of field of the real-time target object according to the ratio and the calculated depth of field.
3. The method of claim 1, wherein training using each of the historical image data and the depth of field of the historical object corresponding to each of the historical image data to obtain a normalized model comprises:
performing filtering processing and threshold-segmentation processing on each piece of the historical image data to obtain processed historical image data corresponding to each piece of the historical image data;
and training with each piece of the processed historical image data and the depth of field of the historical target object corresponding to each piece of the processed historical image data to obtain the normalized model.
4. The method of claim 3, wherein training to obtain a normalized model using each of the processed historical image data and the depth of field of the historical object corresponding to each of the processed historical image data comprises:
extracting a plurality of different characteristic parameters of the processed historical image data, wherein one color channel of the processed historical image data represents one characteristic parameter;
and training to obtain a normalized model by adopting a plurality of different characteristic parameters of each processed historical image data and the depth of field of the historical target corresponding to each processed historical image data.
5. The method according to claim 1, wherein in a case where there are three image capturing devices, training using each of the historical image data and the depth of field of the historical target corresponding to each of the historical image data to obtain a normalized model includes:
constructing a training set, wherein the training set comprises a first number of pieces of the historical image data captured by a first field-angle image capturing device, a second number of pieces captured by a second field-angle image capturing device, and a third number of pieces captured by a third field-angle image capturing device, wherein the first number is determined by at least the size of the first field angle, the relative positional relationship between the first field-angle image capturing device and the vehicle, and the relative positional relationship between the historical target object and the vehicle; the second number is determined by at least the size of the second field angle, the relative positional relationship between the second field-angle image capturing device and the vehicle, and the relative positional relationship between the historical target object and the vehicle; and the third number is determined by at least the size of the third field angle, the relative positional relationship between the third field-angle image capturing device and the vehicle, and the relative positional relationship between the historical target object and the vehicle;
and training by adopting the training set to obtain the normalized model.
6. The method of claim 5, wherein during training using each of the historical image data and the depth of field of the historical object corresponding to each of the historical image data, the method further comprises:
acquiring an error between an output result obtained by the normalized model and the depth of field of the historical target object;
adjusting at least one of the first number, the second number, and the third number based on the error.
7. The method according to any one of claims 1 to 6, wherein, after performing denormalization processing on the calculated depth of field of the real-time target object using the relevant parameters of the image capture device that captured the real-time image data, to obtain the true depth of field of the real-time target object corresponding to the real-time image data, the method further comprises:
and determining the traveling speed and the traveling acceleration of the vehicle according to the true depth of field of the real-time target object.
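Claim 7's speed and acceleration follow from finite differences over the true depths sampled at successive timestamps. A minimal sketch, assuming uniform sampling (the interval `dt` and variable names are not from the patent):

```python
# Illustrative sketch of claim 7: successive true depths of a real-time
# target yield relative speed by first differences and acceleration by
# differencing the speeds.

def speed_and_acceleration(depths, dt):
    """depths: true depths of the target in metres at uniform interval
    dt (seconds). Returns (speeds, accelerations) as finite differences."""
    speeds = [(d1 - d0) / dt for d0, d1 in zip(depths, depths[1:])]
    accels = [(v1 - v0) / dt for v0, v1 in zip(speeds, speeds[1:])]
    return speeds, accels
```

A shrinking depth (negative speed) indicates the target is closing on the vehicle, which is the quantity a driving system would act on.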
8. The method according to any one of claims 1 to 6, wherein the relevant parameters of the image capture device comprise: the relative pose between the image capture device's coordinate system and the world coordinate system, the position of the image capture device's optical center, and the amount of distortion of the image capture device.
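The patent does not disclose the exact denormalization mapping from claims 7 and 8, but one plausible form is to have the model predict depth in a reference-camera frame and rescale it with the actual camera's parameters. The focal-length-ratio formula below is purely an assumption for illustration:

```python
# Assumed (not disclosed) form of the inverse normalization: the model's
# output lives in a reference-camera frame and is rescaled to the
# geometry of the camera that actually captured the real-time image.

def denormalize_depth(normalized_depth, focal_px, ref_focal_px=1000.0):
    """Rescale a model-space depth to the capturing camera's geometry.
    focal_px: focal length (pixels) of the capturing camera;
    ref_focal_px: focal length of the assumed reference camera."""
    return normalized_depth * focal_px / ref_focal_px
```

A full implementation would also apply the relative pose and undo lens distortion per claim 8; those steps are omitted here since the patent gives no formula for them.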
9. The method according to any one of claims 1 to 6, wherein the field angle is one of: 30°, 60°, 90°, or 120°.
10. The method according to any one of claims 1 to 6, wherein the normalized model is a convolutional neural network model comprising an input layer, an output layer, and a hidden layer.
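The input/hidden/output layer shape named in claim 10 can be sketched in pure Python. The kernel values, ReLU activation, and mean-pooled scalar output are placeholders, not the patent's actual architecture, which is not disclosed:

```python
# Minimal sketch of the claim-10 network shape: an input layer (the
# image), one hidden convolutional layer, and a scalar output layer
# standing in for the normalized depth prediction.

def conv2d_valid(image, kernel):
    """2-D 'valid' convolution of a one-channel image (list of rows)
    with a rectangular kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def forward(image, kernel):
    hidden = conv2d_valid(image, kernel)                      # hidden layer
    hidden = [[max(0.0, v) for v in row] for row in hidden]   # ReLU
    flat = [v for row in hidden for v in row]
    return sum(flat) / len(flat)               # output layer: mean pool
```

In practice such a model would be built with a deep-learning framework; this sketch only makes the three-layer structure of the claim concrete.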
11. An apparatus for determining the depth of field of an object, applied to a vehicle driving system comprising a vehicle and a plurality of image capture devices with different field angles mounted on the vehicle, the apparatus comprising:
a first acquisition unit, configured to acquire a plurality of pieces of historical image data, wherein the plurality of pieces of historical image data are obtained by photographing, during a historical time period, historical target objects at the same and/or different distances from the vehicle using the plurality of image capture devices with different field angles;
a training unit, configured to train with the historical image data and the depth of field of the historical target object corresponding to the historical image data to obtain a normalized model;
a second acquisition unit, configured to acquire real-time image data and calculate, using the normalized model, the depth of field of the real-time target object corresponding to the real-time image data;
and a processing unit, configured to perform denormalization processing on the calculated depth of field of the real-time target object using the relevant parameters of the image capture device that captured the real-time image data, to obtain the true depth of field of the real-time target object corresponding to the real-time image data.
12. A computer-readable storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the method of any one of claims 1 to 10.
13. A processor configured to run a program, wherein the program when executed performs the method of any one of claims 1 to 10.
14. A vehicle comprising one or more processors, memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any of claims 1-10.
15. A system comprising the vehicle of claim 14 and a plurality of image capturing devices of different field angles, the image capturing devices being mounted on the vehicle, the image capturing devices being in communication with the vehicle.
CN202210266067.4A 2022-03-17 2022-03-17 Method and device for determining depth of field of object Pending CN114677425A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210266067.4A CN114677425A (en) 2022-03-17 2022-03-17 Method and device for determining depth of field of object

Publications (1)

Publication Number Publication Date
CN114677425A true CN114677425A (en) 2022-06-28

Family

ID=82075108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210266067.4A Pending CN114677425A (en) 2022-03-17 2022-03-17 Method and device for determining depth of field of object

Country Status (1)

Country Link
CN (1) CN114677425A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109118532A (en) * 2017-06-23 2019-01-01 百度在线网络技术(北京)有限公司 Vision depth of field estimation method, device, equipment and storage medium
WO2019137081A1 (en) * 2018-01-11 2019-07-18 华为技术有限公司 Image processing method, image processing apparatus, and photographing device
CN111426299A (en) * 2020-06-15 2020-07-17 北京三快在线科技有限公司 Method and device for ranging based on depth of field of target object
CN112417967A (en) * 2020-10-22 2021-02-26 腾讯科技(深圳)有限公司 Obstacle detection method, obstacle detection device, computer device, and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination