CN116129069A - Method and device for calculating area of planar area, electronic equipment and storage medium - Google Patents

Method and device for calculating area of planar area, electronic equipment and storage medium

Info

Publication number
CN116129069A
CN116129069A CN202310024752.0A
Authority
CN
China
Prior art keywords
area
plane
point cloud
dimensional point
target
Prior art date
Legal status
Pending
Application number
CN202310024752.0A
Other languages
Chinese (zh)
Inventor
陈曲
叶晓青
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202310024752.0A priority Critical patent/CN116129069A/en
Publication of CN116129069A publication Critical patent/CN116129069A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method, an apparatus, an electronic device and a storage medium for calculating the area of a planar area. It relates to the technical field of artificial intelligence, in particular to computer vision, image processing and deep learning, and can be applied to scenes such as smart cities and the metaverse. The specific scheme is as follows: acquire a binocular image of a scene shot by a binocular camera; determine the three-dimensional point cloud corresponding to the scene in the camera coordinate system from the camera internal parameters of the binocular camera and the binocular image; fit a plane equation of the target plane in the scene based on that point cloud; obtain the three-dimensional point cloud corresponding to a mask map of the target area in the target plane by combining the camera internal parameters and the mask map with the plane equation; and project the point cloud corresponding to the mask map onto a two-dimensional plane and calculate the area of the target area. Because the target area is determined jointly from the camera internal parameters, the plane equation and the mask map of the target area, the accuracy of the calculated planar area is improved.

Description

Method and device for calculating area of planar area, electronic equipment and storage medium
Technical Field
The application relates to the technical field of artificial intelligence, in particular to the technical fields of computer vision, image processing and deep learning, can be applied to scenes such as smart cities and the metaverse, and specifically concerns a method and an apparatus for calculating the area of a planar area, an electronic device and a storage medium.
Background
In practical applications, it is often necessary to calculate the area of some planar region. In ground-damage identification, for example, the area of the damaged region of the ground must be calculated.
In the related art, a pseudo point cloud can be obtained by binocular stereo matching, and the area of the corresponding region is calculated from the pseudo point cloud. However, pseudo point clouds obtained by binocular stereo matching can be incomplete or inaccurate, so the recovered object edges are imprecise, which degrades the accuracy of the calculated planar area.
Disclosure of Invention
The application provides a method and device for calculating area of a plane area, electronic equipment and a storage medium. The specific scheme is as follows:
according to an aspect of the present application, there is provided a method for calculating an area of a planar area, including:
acquiring binocular images shot by a binocular camera on a scene;
determining a three-dimensional point cloud corresponding to a scene under a camera coordinate system according to camera internal parameters and binocular images of the binocular camera;
fitting to obtain a plane equation of a target plane in the scene based on the three-dimensional point cloud corresponding to the scene;
obtaining a three-dimensional point cloud corresponding to a mask map of a target area in the target plane by combining the camera internal parameters and the mask map with the plane equation;
and projecting the three-dimensional point cloud corresponding to the mask map to a two-dimensional plane, and calculating the area of the target area.
According to another aspect of the present application, there is provided an apparatus for calculating the area of a planar area, including:
the first acquisition module is used for acquiring binocular images shot by the binocular camera on a scene;
the determining module is used for determining a three-dimensional point cloud corresponding to a scene under a camera coordinate system according to the camera internal parameters and the binocular images of the binocular camera;
the fitting module is used for fitting to obtain a plane equation of a target plane in the scene based on the three-dimensional point cloud corresponding to the scene;
the second acquisition module is used for obtaining the three-dimensional point cloud corresponding to the mask map by combining the camera internal parameters and the mask map of the target area in the target plane with the plane equation;
and the calculation module is used for projecting the three-dimensional point cloud corresponding to the mask map to the two-dimensional plane and calculating the area of the target area.
According to another aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the above embodiments.
According to another aspect of the present application, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method according to the above-described embodiments.
According to another aspect of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method described in the above embodiments.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
FIG. 1 is a flow chart of a method for calculating a planar area according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a method for calculating a planar area according to another embodiment of the present disclosure;
fig. 3 is a flowchart illustrating a method for calculating a planar area according to another embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating a calculation process of a planar area according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a planar area calculating device according to an embodiment of the present disclosure;
fig. 6 is a block diagram of an electronic device for implementing a method of calculating a planar area according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The following describes a method, an apparatus, an electronic device, and a storage medium for calculating a planar area according to embodiments of the present application with reference to the accompanying drawings.
Artificial intelligence is the discipline that uses computers to study and simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking and planning); it spans both hardware and software technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage and big data processing; artificial intelligence software technologies include computer vision, speech recognition, natural language processing, deep learning, big data processing and knowledge graph technologies.
Computer vision is the science of studying how to make machines "see": cameras and computers are used in place of human eyes to recognize, track and measure targets, and the results are further processed into images better suited for human observation or for transmission to instruments for detection.
Deep learning is a new research direction in the field of machine learning. It learns the inherent regularities and levels of representation of sample data, and the information obtained during such learning helps in interpreting data such as text, images and sounds. Its ultimate goal is to give machines human-like analytical learning abilities, enabling them to recognize text, image and sound data.
Fig. 1 is a flow chart of a method for calculating a planar area according to an embodiment of the present application.
The method for calculating the area of the plane area can be executed by the device for calculating the area of the plane area, and the device can be configured in the electronic equipment to jointly calculate the area of the target area based on the camera internal parameters of the binocular camera, the plane equation of the target plane and the mask map of the target area in the target plane, so that the accuracy of calculating the area of the plane area is improved.
The electronic device may be any device with computing capability, for example, may be a personal computer, a mobile terminal, a server, etc., and the mobile terminal may be, for example, a vehicle-mounted device, a mobile phone, a tablet computer, a personal digital assistant, a wearable device, etc., which have various operating systems, touch screens, and/or display screens.
As shown in fig. 1, the method for calculating the area of the planar area includes:
step 101, obtaining a binocular image shot by a binocular camera on a scene.
In the application, a binocular camera can be utilized to shoot a scene to obtain a binocular image. The binocular images may include a left image of a scene captured by a left camera of the binocular cameras and a right image of the scene captured by a right camera of the binocular cameras.
In the present application, a scene may refer to a scene where a planar area to be calculated is located, such as a road scene, a road sign scene, and the like.
For example, to calculate the area of a damaged area of a certain road, a binocular camera may be used to photograph the scene of the road to obtain a corresponding binocular image. For another example, to calculate the area of a certain road sign area, a binocular camera may be used to shoot the scene where the sign is located, so as to obtain a corresponding binocular image.
Step 102, determining a three-dimensional point cloud corresponding to a scene under a camera coordinate system according to the camera internal parameters and the binocular image of the binocular camera.
In the application, the three-dimensional point cloud of each pixel point under the camera coordinate system, namely the three-dimensional point cloud corresponding to the scene under the camera coordinate system, can be determined according to the camera internal parameters of the binocular camera and the coordinates of each pixel point in the binocular image.
Alternatively, a depth map corresponding to the scene may be determined from the binocular image and converted from the image coordinate system to the camera coordinate system using the camera internal parameters, yielding the three-dimensional point cloud corresponding to the scene in the camera coordinate system. Determining the point cloud from the depth map and the camera internal parameters in this way improves the accuracy of the point cloud.
When determining the depth map, a disparity map can first be determined from the binocular image using a binocular stereo matching algorithm, and the depth map is then determined from the disparity map together with camera parameters such as the baseline (the distance between the optical centers of the two cameras) and the focal length. Alternatively, the binocular image may be input into a pre-trained neural network model, which predicts and outputs the depth map; predicting with a neural network model can improve the accuracy of the depth map.
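The disparity-to-depth-to-point-cloud step above can be sketched as follows, under the standard rectified pinhole stereo model (depth Z = f_x · baseline / disparity, then back-projection with the intrinsics). All function and parameter names here are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def disparity_to_point_cloud(disparity, fx, fy, cx, cy, baseline):
    """Convert a disparity map from binocular stereo matching into a
    3-D point cloud in the camera coordinate system.

    Z = fx * baseline / disparity (rectified pinhole stereo), then each
    pixel (u, v) is back-projected with the camera intrinsics."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                      # zero disparity: no match
    z = np.zeros_like(disparity, dtype=np.float64)
    z[valid] = fx * baseline / disparity[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)  # (N, 3)
```

A constant disparity map thus yields a fronto-parallel slab of points at a single depth, which is a convenient sanity check for the intrinsics and baseline.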
And 103, fitting to obtain a plane equation of a target plane in the scene based on the three-dimensional point cloud corresponding to the scene.
In this application, the plane can be fitted as follows: randomly select 3 distinct points from the three-dimensional point cloud corresponding to the scene as a sample; fit a candidate plane equation for the target plane to the sample; test all points against the candidate equation, record the number of inlier points, and update the best plane equation and its inlier count if the candidate is better; repeat these steps until the maximum number of iterations is reached; then screen all points with the best plane equation to obtain all inliers, and fit the plane equation again on all the inliers to obtain the final plane equation.
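The procedure described above is essentially RANSAC followed by a least-squares refit on the inliers. A compact sketch follows; the iteration count, inlier threshold and all names are illustrative assumptions, not values from the patent:

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.01, seed=0):
    """Fit a plane a*x + b*y + c*z + d = 0 to a point cloud (N, 3):
    sample 3 distinct points, count inliers within `thresh` of the
    candidate plane, keep the best candidate, then refit on all its
    inliers by least squares (SVD)."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                      # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p1
        dist = np.abs(points @ n + d)         # point-to-plane distances
        inliers = dist < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on all inliers: plane through the centroid, normal given by
    # the smallest singular vector of the centered inlier matrix
    pts = points[best_inliers]
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return np.append(n, -n @ centroid)        # (a, b, c, d), unit normal
```

The final SVD refit averages out sensor noise over all inliers, which a plane through only 3 sampled points cannot do.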
And 104, obtaining a three-dimensional point cloud corresponding to the mask map by combining the camera internal parameters and a mask map of the target area in the target plane with the plane equation.
In this application, classification detection can be performed on either image of the binocular pair to determine the target area in that image, and the mask map of the target area is then determined from it. Alternatively, semantic segmentation can be applied to either image; for example, the image can be input into a pre-trained semantic segmentation model to obtain the mask map of the target area in that image. Obtaining the mask map by semantic segmentation of the scene image improves processing efficiency.
In this application, the two-dimensional coordinate points in the mask map of the target area lie in the pixel coordinate system, whereas the plane equation is expressed in the camera coordinate system; therefore, the three-dimensional point cloud corresponding to the mask map can be obtained from the camera internal parameters and the mask map of the target area together with the plane equation.
And 105, projecting the three-dimensional point cloud corresponding to the mask map to a two-dimensional plane, and calculating the area of the target area.
Because the three-dimensional coordinates of the target area all lie in one plane, namely the target plane, the three-dimensional point cloud corresponding to the mask map can, for convenience of calculation, be projected onto a two-dimensional plane to obtain the projection area of the target area; the area of that projection area is then the area of the target area.
In this way, the target area is determined jointly from the camera internal parameters, the plane equation and the mask map of the target area in the target plane, and its area is calculated, which improves the accuracy of the calculated planar area.
In the embodiment of the application, the three-dimensional point cloud corresponding to a scene in the camera coordinate system is determined from the camera internal parameters of a binocular camera and the binocular image it captures of the scene; the plane equation of the target plane in the scene is obtained by fitting based on that point cloud; the camera internal parameters and a mask map of the target area in the target plane are combined with the plane equation to obtain the three-dimensional point cloud corresponding to the mask map; and that point cloud is projected onto a two-dimensional plane and the area of the target area is calculated. Determining the target area jointly from the camera internal parameters of the binocular camera, the plane equation of the target plane and the mask map of the target area yields an exact analytical solution for the target area, which improves the accuracy of the target area and hence the accuracy of the calculated planar area.
Fig. 2 is a flowchart of a method for calculating a planar area according to another embodiment of the present application.
As shown in fig. 2, the method for calculating the area of the planar area includes:
in step 201, a binocular image of a scene captured by a binocular camera is acquired.
Step 202, determining a three-dimensional point cloud corresponding to a scene under a camera coordinate system according to camera internal parameters and binocular images of the binocular camera.
And 203, fitting to obtain a plane equation of a target plane in the scene based on the three-dimensional point cloud corresponding to the scene.
In this application, steps 201 to 203 may be implemented in any manner in each embodiment of the present application, so that details are not repeated here.
And 204, determining a mapping relation between the two-dimensional coordinate points and the three-dimensional point cloud in the mask map according to the camera internal parameters and the plane equation.
In the application, the mapping relation between the two-dimensional coordinate points and the three-dimensional point cloud in the mask map can be determined according to the conversion relation between the pixel coordinate system and the camera coordinate system, the camera internal parameters and the plane equation.
For example, let the plane equation of the target plane be aX + bY + cZ + d = 0. A point (X, Y, Z) in the camera coordinate system and its image point (u, v) in the pixel coordinate system are related by the pinhole projection equations:

u = f_x · X / Z + c_x
v = f_y · Y / Z + c_y

where f_x, f_y, c_x and c_y are the camera internal parameters: f_x and f_y are the focal length expressed in pixels along the x-axis and y-axis directions, and c_x and c_y are the offsets of the principal point along the x-axis and y-axis directions.
Combining the plane equation aX + bY + cZ + d = 0 with the two projection equations, the mapping relation between a two-dimensional coordinate point (u, v) in the mask map and its three-dimensional point (X, Y, Z) is obtained as:

X' = (u - c_x) / f_x
Y' = (v - c_y) / f_y
Z = -d / (a · X' + b · Y' + c)
X = X' · Z
Y = Y' · Z
and 205, substituting coordinate values of the two-dimensional coordinate points in the mask map into the mapping relation to obtain a three-dimensional point cloud corresponding to the mask map.
In the application, coordinate values of two-dimensional coordinate points in the mask map are substituted into the mapping relation to obtain three-dimensional point clouds corresponding to each two-dimensional coordinate point, so that the three-dimensional point clouds corresponding to the mask map are obtained.
For example, with the mapping relation obtained from the simultaneous equations, the values of u and v of a two-dimensional coordinate point in the mask map are substituted into the mapping relation to obtain the value of Z, and Z is then substituted back to obtain the values of X and Y, giving the three-dimensional point corresponding to that two-dimensional coordinate point.
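A minimal sketch of this back-projection of mask pixels onto the fitted plane, assuming pinhole intrinsics f_x, f_y, c_x, c_y and plane coefficients (a, b, c, d) from the fitting step; the function and parameter names are illustrative, not from the patent:

```python
import numpy as np

def mask_to_point_cloud(mask, plane, fx, fy, cx, cy):
    """Back-project the foreground pixels of a mask map onto the plane
    a*x + b*y + c*z + d = 0. Each pixel (u, v) defines the viewing ray
    (X', Y', 1) with X' = (u - cx)/fx and Y' = (v - cy)/fy; intersecting
    the ray with the plane gives Z = -d / (a*X' + b*Y' + c), and then
    X = X'*Z, Y = Y'*Z."""
    a, b, c, d = plane
    v, u = np.nonzero(mask)                   # rows are v, columns are u
    xp = (u - cx) / fx
    yp = (v - cy) / fy
    z = -d / (a * xp + b * yp + c)
    return np.stack([xp * z, yp * z, z], axis=1)  # (N, 3)
```

Because every returned point satisfies the plane equation exactly, the result is an analytical solution rather than a noisy stereo estimate, which is the accuracy gain described above.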
And 206, projecting the three-dimensional point cloud corresponding to the mask map to a two-dimensional plane, and calculating the area of the target area.
In this application, step 206 may be implemented in any manner of embodiments of the present application, so that details are not repeated here.
In the embodiment of the application, when the three-dimensional point cloud corresponding to the mask map is obtained from the camera internal parameters and the mask map of the target area in the target plane, the mapping relation between two-dimensional coordinate points in the mask map and three-dimensional points can be determined from the camera internal parameters and the plane equation, and the coordinate values of the two-dimensional coordinate points in the mask map are substituted into that mapping relation to obtain the point cloud. Obtaining the point cloud corresponding to the mask map from the camera internal parameters, the plane equation and the mask map in this way improves its accuracy.
Fig. 3 is a flowchart illustrating a method for calculating a planar area according to another embodiment of the present application.
As shown in fig. 3, the method for calculating the area of the planar area includes:
step 301, obtaining a binocular image of a scene shot by a binocular camera
Step 302, determining a three-dimensional point cloud corresponding to a scene under a camera coordinate system according to the camera internal parameters and the binocular image of the binocular camera.
Step 303, fitting to obtain a plane equation of the target plane in the scene based on the three-dimensional point cloud corresponding to the scene.
And 304, obtaining a three-dimensional point cloud corresponding to the mask map by combining the camera internal parameters and a mask map of the target area in the target plane with the plane equation.
In this application, steps 301 to 304 may be implemented in any manner in each embodiment of the present application, so that no further description is given here.
Step 305, determining a two-dimensional plane according to any vector and plane equation in the target plane.
In the present application, the normal vector of the target plane may be determined according to a plane equation, for example, the plane equation of the target plane is ax+by+cz+d=0, and the normal vector of the plane is (a, b, c).
In this application, any vector in the target plane can be taken as the x direction, and the cross product of the normal vector of the target plane with that vector gives the y direction; the plane spanned by this xy coordinate system is the two-dimensional plane onto which to project.
And 306, projecting the three-dimensional point cloud corresponding to the mask map to a two-dimensional plane to obtain a projection area corresponding to the mask map, and calculating the area of the projection area.
In the application, the three-dimensional point cloud corresponding to the mask map can be projected to a two-dimensional plane to obtain a projection area corresponding to the mask map, and then the area of the projection area is calculated.
For example, taking any vector in the target plane as the x direction and the cross product of the normal vector of the target plane with that vector as the y direction, the three-dimensional point cloud corresponding to the mask map is projected into the resulting xy coordinate system to obtain two-dimensional coordinates, and the area of the projection region is calculated.
During projection, usually only a set of discrete three-dimensional points is projected onto the two-dimensional plane, so the projection area consists of sparse points. To further improve the accuracy of the calculation result, in this application the projection area can be dilated so that the points in it become connected, and then eroded so that the contour of the projection area becomes sharper; the area of the processed projection area is then calculated.
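The project-then-measure step above can be sketched as follows: build an in-plane (x, y) basis from the plane normal, project the points into it, rasterize them onto a grid, apply a 3x3 dilation followed by a 3x3 erosion to connect sparse points and sharpen the contour, and count occupied cells. The grid cell size, the 3x3 structuring element and all names are illustrative choices, not from the patent:

```python
import numpy as np

def plane_area(points, normal, cell=0.05):
    """Estimate the area covered by points lying in a plane with the
    given normal, by rasterizing their in-plane projection onto a grid
    of cell x cell squares and applying dilation then erosion."""
    n = normal / np.linalg.norm(normal)
    # any reference vector not parallel to the normal yields an x axis
    ref = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x_axis = np.cross(n, ref)
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(n, x_axis)              # cross product completes the frame
    uv = np.stack([points @ x_axis, points @ y_axis], axis=1)
    ij = np.floor((uv - uv.min(axis=0)) / cell).astype(int)
    h, w = ij.max(axis=0) + 1
    grid = np.zeros((h + 4, w + 4), dtype=bool)   # 2-cell border for morphology
    grid[ij[:, 0] + 2, ij[:, 1] + 2] = True

    def dilate(g):                            # 3x3 binary dilation
        out = g.copy()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out |= np.roll(np.roll(g, dy, 0), dx, 1)
        return out

    def erode(g):                             # 3x3 binary erosion
        out = g.copy()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out &= np.roll(np.roll(g, dy, 0), dx, 1)
        return out

    return erode(dilate(grid)).sum() * cell * cell
```

Dilation followed by erosion is a morphological closing: it fills single-cell gaps left by the sparse point cloud without inflating the overall footprint of a solid region.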
Step 307, determining the area of the target area according to the area of the projection area.
In the present application, the calculated area of the projection area is the area of the target area.
In the embodiment of the application, when the three-dimensional point cloud corresponding to the mask map is projected onto a two-dimensional plane and the area of the target area is calculated, the two-dimensional plane is determined from any vector in the target plane and the plane equation; the point cloud corresponding to the mask map is projected onto it to obtain the projection area corresponding to the mask map; the area of the projection area is calculated; and the area of the target area is then determined from it. Determining the projection plane from an in-plane vector and the plane equation and projecting the point cloud onto it means the area of the target area can be obtained simply by calculating the area of the projection area, which is convenient to compute and highly accurate.
For ease of understanding, fig. 4 is a schematic diagram illustrating a calculation process of the planar area according to an embodiment of the present application.
As shown in fig. 4, a three-dimensional point cloud in the camera coordinate system can be obtained from the camera internal parameters and a depth map predicted by a network from images captured by a binocular camera, and ground-plane fitting is performed on the point cloud to obtain the ground plane equation.
Thereafter, based on the camera internal parameters and the mask map of the specified area, the projection equations u = f_x · X / Z + c_x and v = f_y · Y / Z + c_y can be combined with the ground plane equation aX + bY + cZ + d = 0 to solve for the mapping relation between the two-dimensional coordinate points (u, v) in the mask map and the three-dimensional point cloud (X, Y, Z); the coordinate values of the two-dimensional coordinate points in the mask map are then substituted into the mapping relation to obtain the three-dimensional point cloud corresponding to the mask map.
Then, the three-dimensional point cloud corresponding to the mask map is projected onto a two-dimensional plane. That plane can be obtained by taking a vector in the ground plane as the x direction and its cross product with the normal vector of the ground plane as the y direction; the point cloud is projected into this xy coordinate system to obtain the projection area, dilation and erosion are applied to the projection area, and the area of the processed projection area is calculated, giving the area of the specified area on the ground plane.
In order to achieve the above embodiments, the embodiments of the present application further provide a device for calculating the area of the planar area. Fig. 5 is a schematic structural diagram of a planar area calculating device according to an embodiment of the present application.
As shown in fig. 5, the planar area calculating device 500 includes:
a first obtaining module 510, configured to obtain a binocular image captured by a binocular camera on a scene;
the determining module 520 is configured to determine a three-dimensional point cloud corresponding to a scene in a camera coordinate system according to the camera internal parameters and the binocular image of the binocular camera;
the fitting module 530 is configured to obtain a plane equation of the target plane in the scene based on the three-dimensional point cloud corresponding to the scene;
a second obtaining module 540, configured to obtain the three-dimensional point cloud corresponding to the mask map by combining the camera internal parameters and the mask map of the target area in the target plane with the plane equation;
the calculating module 550 is configured to project the three-dimensional point cloud corresponding to the mask map onto a two-dimensional plane, and calculate an area of the target area.
In one possible implementation manner of the embodiment of the present application, the second obtaining module 540 is configured to:
determining a mapping relation between two-dimensional coordinate points and three-dimensional point clouds in the mask map according to camera internal parameters and plane equations;
substituting coordinate values of two-dimensional coordinate points in the mask map into the mapping relation to obtain a three-dimensional point cloud corresponding to the mask map.
In one possible implementation manner of the embodiment of the present application, the calculating module 550 is configured to:
determining a two-dimensional plane according to any vector and plane equation in the target plane;
projecting the three-dimensional point cloud corresponding to the mask map to a two-dimensional plane to obtain a projection area corresponding to the mask map, and calculating the area of the projection area;
the area of the target area is determined according to the area of the projection area.
In one possible implementation manner of the embodiment of the present application, the calculating module 550 is configured to:
performing dilation and erosion on the projection area to obtain a processed projection area;
calculating the area of the processed projection area.
In one possible implementation manner of the embodiment of the application, the apparatus may further include:
and the third acquisition module is used for performing semantic segmentation processing on any image in the binocular images to obtain the mask map.
In one possible implementation manner of the embodiment of the present application, the determining module 520 is configured to:
determining a depth map corresponding to the scene according to the binocular image;
and determining a three-dimensional point cloud corresponding to the scene according to the camera internal parameters and the depth map.
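The two steps above are standard stereo processing: for a rectified binocular pair the depth map typically comes from disparity (Z = fx · baseline / disparity), and the depth map is then back-projected through the pinhole model. A sketch of the second step, with an assumed function name:

```python
import numpy as np

def depth_to_point_cloud(depth, K):
    """Back-project a depth map into a camera-frame point cloud.

    depth : (H, W) depth values (Z along the optical axis)
    K     : (3, 3) intrinsics [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    v, u = np.indices(depth.shape)   # pixel row (v) and column (u) grids
    z = depth
    x = (u - cx) * z / fx            # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy            # Y = (v - cy) * Z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Each pixel (u, v) with depth Z maps to ((u − cx)·Z/fx, (v − cy)·Z/fy, Z) in the camera frame.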
It should be noted that, the explanation of the foregoing embodiment of the method for calculating the area of the planar area is also applicable to the apparatus for calculating the area of the planar area in this embodiment, and thus will not be repeated herein.
In the embodiments of the present application, a three-dimensional point cloud corresponding to the scene in the camera coordinate system is determined according to the camera internal parameters of a binocular camera and the binocular image the binocular camera captures of the scene; a plane equation of the target plane in the scene is obtained by fitting based on the three-dimensional point cloud corresponding to the scene; the three-dimensional point cloud corresponding to the mask map is obtained by combining the camera internal parameters and the mask map of the target area in the target plane with the plane equation; and the three-dimensional point cloud corresponding to the mask map is projected onto a two-dimensional plane to calculate the area of the target area. Because the target area is determined jointly from the camera internal parameters of the binocular camera, the plane equation of the target plane, and the mask map of the target area in the target plane, an exact analytical solution for the points of the target area is obtained, which improves the accuracy of the target area and thus the accuracy of the planar area calculation.
According to embodiments of the present application, there is also provided an electronic device, a readable storage medium and a computer program product.
Fig. 6 shows a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 6, the apparatus 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a ROM (Read-Only Memory) 602 or a computer program loaded from a storage unit 608 into a RAM (Random Access Memory) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An I/O (Input/Output) interface 605 is also connected to bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), various dedicated AI (Artificial Intelligence) computing chips, various computing units running machine learning model algorithms, DSPs (Digital Signal Processors), and any suitable processors, controllers, microcontrollers, and the like. The computing unit 601 performs the methods and processes described above, for example the method for calculating the area of a planar region. For example, in some embodiments, the method for calculating the area of a planar region may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the above-described method may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the method for calculating the area of a planar region by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, an integrated circuit system, an FPGA (Field-Programmable Gate Array), an ASIC (Application-Specific Integrated Circuit), an ASSP (Application-Specific Standard Product), an SOC (System on Chip), a CPLD (Complex Programmable Logic Device), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, where the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present application may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this application, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, RAM, ROM, EPROM (Erasable Programmable Read-Only Memory) or flash memory, an optical fiber, a CD-ROM (Compact Disc Read-Only Memory), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., CRT (Cathode-Ray Tube) or LCD (Liquid Crystal Display ) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: LAN (Local Area Network), WAN (Wide Area Network), the internet, and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
According to an embodiment of the present application, there is further provided a computer program product; when the instructions in the computer program product are executed by a processor, the method for calculating the area of the planar region according to the above embodiments of the present application is performed.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (15)

1. A method of calculating the area of a planar region, comprising:
acquiring binocular images shot by a binocular camera on a scene;
determining a three-dimensional point cloud corresponding to the scene under a camera coordinate system according to the camera internal parameters of the binocular camera and the binocular image;
fitting to obtain a plane equation of a target plane in the scene based on the three-dimensional point cloud corresponding to the scene;
obtaining a three-dimensional point cloud corresponding to a mask map of a target area in the target plane by combining the camera internal parameters and the mask map with the plane equation;
and projecting the three-dimensional point cloud corresponding to the mask map to a two-dimensional plane, and calculating the area of the target area.
2. The method of claim 1, wherein the obtaining of the three-dimensional point cloud corresponding to the mask map by combining the camera internal parameters and the mask map of the target area in the target plane with the plane equation comprises:
determining a mapping relation between two-dimensional coordinate points and three-dimensional point clouds in the mask map according to the camera internal parameters and the plane equation;
substituting the coordinate values of the two-dimensional coordinate points in the mask map into the mapping relation to obtain the three-dimensional point cloud corresponding to the mask map.
3. The method of claim 1, wherein the projecting the three-dimensional point cloud corresponding to the mask map onto a two-dimensional plane and calculating the area of the target region comprises:
determining the two-dimensional plane according to any vector in the target plane and the plane equation;
projecting the three-dimensional point cloud corresponding to the mask map to the two-dimensional plane to obtain a projection area corresponding to the mask map, and calculating the area of the projection area;
and determining the area of the target area according to the area of the projection area.
4. The method of claim 3, wherein the calculating the area of the projection area comprises:
performing dilation and erosion on the projection area to obtain a processed projection area;
and calculating the area of the processed projection area.
5. The method of claim 1, wherein before obtaining the three-dimensional point cloud corresponding to the mask map by combining the camera internal parameters and the mask map of the target area in the target plane with the plane equation, the method further comprises:
and carrying out semantic segmentation processing on any image in the binocular images to obtain the mask image.
6. The method of claim 1, wherein the determining a three-dimensional point cloud corresponding to the scene in a camera coordinate system from the camera intrinsic to the binocular camera and the binocular image comprises:
determining a depth map corresponding to the scene according to the binocular image;
and determining a three-dimensional point cloud corresponding to the scene according to the camera internal parameters and the depth map.
7. An apparatus for calculating the area of a planar region, comprising:
the first acquisition module is used for acquiring binocular images shot by the binocular camera on a scene;
the determining module is used for determining a three-dimensional point cloud corresponding to the scene under a camera coordinate system according to the camera internal parameters of the binocular camera and the binocular image;
the fitting module is used for fitting to obtain a plane equation of a target plane in the scene based on the three-dimensional point cloud corresponding to the scene;
the second acquisition module is used for obtaining a three-dimensional point cloud corresponding to the mask map by combining the camera internal parameters and the mask map of the target area in the target plane with the plane equation;
and the calculation module is used for projecting the three-dimensional point cloud corresponding to the mask map to a two-dimensional plane and calculating the area of the target area.
8. The apparatus of claim 7, wherein the second acquisition module is configured to:
determining a mapping relation between two-dimensional coordinate points and three-dimensional point clouds in the mask map according to the camera internal parameters and the plane equation;
substituting the coordinate values of the two-dimensional coordinate points in the mask map into the mapping relation to obtain the three-dimensional point cloud corresponding to the mask map.
9. The apparatus of claim 7, wherein the computing module is to:
determining the two-dimensional plane according to any vector in the target plane and the plane equation;
projecting the three-dimensional point cloud corresponding to the mask map to the two-dimensional plane to obtain a projection area corresponding to the mask map, and calculating the area of the projection area;
and determining the area of the target area according to the area of the projection area.
10. The apparatus of claim 9, wherein the computing module is to:
performing dilation and erosion on the projection area to obtain a processed projection area;
and calculating the area of the processed projection area.
11. The apparatus of claim 7, further comprising:
and the third acquisition module is used for carrying out semantic segmentation processing on any image in the binocular images to obtain the mask image.
12. The apparatus of claim 7, wherein the means for determining is configured to:
determining a depth map corresponding to the scene according to the binocular image;
and determining a three-dimensional point cloud corresponding to the scene according to the camera internal parameters and the depth map.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of any of claims 1-6.
CN202310024752.0A 2023-01-09 2023-01-09 Method and device for calculating area of planar area, electronic equipment and storage medium Pending CN116129069A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310024752.0A CN116129069A (en) 2023-01-09 2023-01-09 Method and device for calculating area of planar area, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310024752.0A CN116129069A (en) 2023-01-09 2023-01-09 Method and device for calculating area of planar area, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116129069A true CN116129069A (en) 2023-05-16

Family

ID=86295119

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310024752.0A Pending CN116129069A (en) 2023-01-09 2023-01-09 Method and device for calculating area of planar area, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116129069A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118674943A (en) * 2024-08-20 2024-09-20 杭州灵西机器人智能科技有限公司 Automatic method, system, device and medium for vault


Similar Documents

Publication Publication Date Title
CN113362444B (en) Point cloud data generation method and device, electronic equipment and storage medium
CN112785625B (en) Target tracking method, device, electronic equipment and storage medium
EP3506161A1 (en) Method and apparatus for recovering point cloud data
CN114550177B (en) Image processing method, text recognition method and device
CN113902897A (en) Training of target detection model, target detection method, device, equipment and medium
CN113378712B (en) Training method of object detection model, image detection method and device thereof
CN114186632A (en) Method, device, equipment and storage medium for training key point detection model
CN113177472A (en) Dynamic gesture recognition method, device, equipment and storage medium
CN115330940B (en) Three-dimensional reconstruction method, device, equipment and medium
CN112364843A (en) Plug-in aerial image target positioning detection method, system and equipment
EP4020387A2 (en) Target tracking method and device, and electronic apparatus
CN111192312B (en) Depth image acquisition method, device, equipment and medium based on deep learning
CN113947188A (en) Training method of target detection network and vehicle detection method
CN114332977A (en) Key point detection method and device, electronic equipment and storage medium
EP4123605A2 (en) Method of transferring image, and method and apparatus of training image transfer model
CN113344862A (en) Defect detection method, defect detection device, electronic equipment and storage medium
US20220392251A1 (en) Method and apparatus for generating object model, electronic device and storage medium
CN116129069A (en) Method and device for calculating area of planar area, electronic equipment and storage medium
CN113902898A (en) Training of target detection model, target detection method, device, equipment and medium
CN113569707A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
US20230142243A1 (en) Device environment identification method and apparatus, electronic device, and autonomous vehicle
CN114419564B (en) Vehicle pose detection method, device, equipment, medium and automatic driving vehicle
CN113920273B (en) Image processing method, device, electronic equipment and storage medium
CN111968030B (en) Information generation method, apparatus, electronic device and computer readable medium
CN115937950A (en) Multi-angle face data acquisition method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination