CN118158530A - Image processing method, image processing apparatus, electronic device, and storage medium - Google Patents
- Publication number
- CN118158530A (application number CN202211552155.7A)
- Authority
- CN
- China
- Prior art keywords: image, depth, dimensional image, value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The application discloses an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The image processing method includes: acquiring a two-dimensional image and a depth image when the two-dimensional image is captured by the electronic device during surround shooting; identifying a ground area from the depth image and obtaining a ground plane equation; obtaining a subject region in the depth image according to the ground plane equation; extracting a target region in the two-dimensional image using the subject region in the depth image as a mask; obtaining, from gyroscope data, the posture difference of the electronic device between capturing each frame of the two-dimensional image and capturing the previous frame, where the two-dimensional image is frame-synchronized with the depth image and the depth image includes a plurality of depth values; and performing anti-shake on the two-dimensional image according to the normal of the ground plane equation and the posture difference to obtain an anti-shake image. The application can correct images captured by the electronic device during surround shooting, ensuring the image correction effect and improving the imaging quality of the electronic device.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer readable storage medium.
Background
In recent years, with the continuous development of electronic devices such as mobile phones and video cameras, the demand for images keeps increasing, as does the pursuit of a high-quality visual experience. For example, users expect an electronic device to obtain high-quality images in some special scenarios. However, shake of the electronic device or of the external environment during shooting affects the quality of the final image, and current electronic devices correct the image content according to gyroscope data to improve their imaging quality.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, electronic equipment and a computer readable storage medium.
The image processing method of the embodiment of the application includes: acquiring a two-dimensional image and a depth image when the two-dimensional image is captured by the electronic device during surround shooting; identifying a ground area from the depth image and obtaining a ground plane equation; obtaining a subject region in the depth image according to the ground plane equation; extracting a target region in the two-dimensional image using the subject region in the depth image as a mask; obtaining, from gyroscope data, the posture difference of the electronic device between capturing each frame of the two-dimensional image and capturing the previous frame, where the two-dimensional image is frame-synchronized with the depth image, each frame of the two-dimensional image corresponds to gyroscope data collected by the gyroscope, and the depth image includes a plurality of depth values; and performing anti-shake on the two-dimensional image according to the normal of the ground plane equation and the posture difference to obtain an anti-shake image.
The image processing apparatus of the embodiment of the application includes a first acquisition module, an identification module, a second acquisition module, an extraction module, a third acquisition module, and an execution module. The first acquisition module is configured to acquire the two-dimensional image and the depth image when the two-dimensional image is captured by the electronic device during surround shooting. The identification module is configured to identify a ground area from the depth image and obtain a ground plane equation. The second acquisition module is configured to obtain a subject region in the depth image according to the ground plane equation. The extraction module is configured to extract a target region in the two-dimensional image using the subject region in the depth image as a mask. The third acquisition module is configured to obtain, from gyroscope data, the posture difference of the electronic device between capturing each frame of the two-dimensional image and capturing the previous frame, where the two-dimensional image is frame-synchronized with the depth image, each frame of the two-dimensional image corresponds to gyroscope data collected by the gyroscope, and the depth image includes a plurality of depth values. The execution module is configured to perform anti-shake on the two-dimensional image according to the normal of the ground plane equation and the posture difference to obtain an anti-shake image.
An electronic device of an embodiment of the application includes one or more processors, a memory, and one or more computer programs. The one or more computer programs are stored in the memory and, when executed by the processors, implement the image processing method of the embodiments of the present application. The image processing method includes: acquiring a two-dimensional image and a depth image when the two-dimensional image is captured by the electronic device during surround shooting; identifying a ground area from the depth image and obtaining a ground plane equation; obtaining a subject region in the depth image according to the ground plane equation; extracting a target region in the two-dimensional image using the subject region in the depth image as a mask; obtaining, from gyroscope data, the posture difference of the electronic device between capturing each frame of the two-dimensional image and capturing the previous frame, where the two-dimensional image is frame-synchronized with the depth image, each frame of the two-dimensional image corresponds to gyroscope data collected by the gyroscope, and the depth image includes a plurality of depth values; and performing anti-shake on the two-dimensional image according to the normal of the ground plane equation and the posture difference to obtain an anti-shake image.
The computer-readable storage medium of the embodiment of the present application stores a computer program that, when executed by a processor, implements the image processing method of the embodiment of the present application. The image processing method includes: acquiring a two-dimensional image and a depth image when the two-dimensional image is captured by the electronic device during surround shooting; identifying a ground area from the depth image and obtaining a ground plane equation; obtaining a subject region in the depth image according to the ground plane equation; extracting a target region in the two-dimensional image using the subject region in the depth image as a mask; obtaining, from gyroscope data, the posture difference of the electronic device between capturing each frame of the two-dimensional image and capturing the previous frame, where the two-dimensional image is frame-synchronized with the depth image, each frame of the two-dimensional image corresponds to gyroscope data collected by the gyroscope, and the depth image includes a plurality of depth values; and performing anti-shake on the two-dimensional image according to the normal of the ground plane equation and the posture difference to obtain an anti-shake image.
In the image processing method, the image processing apparatus, the electronic device, and the computer-readable storage medium of the embodiments of the application, when the two-dimensional image is captured by the electronic device during surround shooting, the acquired depth image is processed to obtain a ground plane equation; a subject region in the depth image is obtained according to the ground plane equation, and a target region in the two-dimensional image is extracted according to the subject region. The posture difference of the electronic device between capturing each frame of the two-dimensional image and capturing the previous frame is then obtained from gyroscope data, and finally anti-shake is performed on the target region of the two-dimensional image according to the normal of the ground plane equation and the posture difference to obtain an anti-shake image. The whole image processing procedure can thus correct images captured by the electronic device during surround shooting, ensuring the image correction effect and improving the imaging quality of the electronic device.
Additional aspects and advantages of embodiments of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 2 is a schematic diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 3 is a schematic diagram of the structure of an electronic device in accordance with certain embodiments of the present application;
FIG. 4 is a schematic diagram of an image processing method according to some embodiments of the present application;
FIG. 5 is a schematic diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 6 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 7 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 8 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 9 is a flow chart of an image processing method of some embodiments of the present application;
FIG. 10 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 11 is a schematic diagram of an image processing method according to some embodiments of the present application;
FIG. 12 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 13 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 14 is a schematic diagram of an image processing method according to some embodiments of the application;
FIG. 15 is a flow chart of an image processing method of some embodiments of the present application;
FIG. 16 is a flow chart of an image processing method of some embodiments of the present application;
FIG. 17 is a flow chart of an image processing method of some embodiments of the present application;
FIG. 18 is a flow chart of an image processing method of some embodiments of the present application;
FIG. 19 is a flow chart of an image processing method of some embodiments of the present application;
FIG. 20 is a schematic diagram of an image processing method according to some embodiments of the present application;
FIG. 21 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 22 is a schematic diagram of an image processing method according to some embodiments of the present application;
FIG. 23 is a schematic diagram of a connection state of a computer readable storage medium and a processor according to some embodiments of the present application.
Detailed Description
Embodiments of the present application are further described below with reference to the accompanying drawings. The same or similar reference numbers in the drawings refer to the same or similar elements or elements having the same or similar functions throughout. In addition, the embodiments of the present application described below with reference to the drawings are exemplary only for explaining the embodiments of the present application and are not to be construed as limiting the present application.
In recent years, with the continuous development of electronic devices such as mobile phones and video cameras, the demand for images keeps increasing, as does the pursuit of a high-quality visual experience. For example, users expect an electronic device to obtain high-quality images in some special scenarios. However, shake of the electronic device or of the external environment during shooting affects the quality of the final image, and current electronic devices correct the image content according to gyroscope data to improve their imaging quality. To address this problem, the embodiments of the present application provide an image processing method (shown in fig. 1), an image processing apparatus 10 (shown in fig. 2), an electronic device 100 (shown in fig. 3), and a computer-readable storage medium 200 (shown in fig. 23).
Referring to fig. 1, an image processing method according to an embodiment of the present application includes:
02: in the case where the two-dimensional image is obtained by performing surround shooting for the electronic apparatus 100 (shown in fig. 3), the two-dimensional image and the depth image are acquired;
03: identifying a ground area according to the depth image, and acquiring a ground plane equation;
04: acquiring a main body region in the depth image according to a ground plane equation;
05: taking a main body region in the depth image as a mask, and extracting a target region in the two-dimensional image;
06: according to the gyroscope data, acquiring the attitude difference of the electronic equipment 100 when shooting each frame of two-dimensional image and shooting the previous frame of two-dimensional image, wherein the two-dimensional image is synchronous with the depth image frame, each frame of two-dimensional image corresponds to the gyroscope data acquired by the gyroscope, and the depth image comprises a plurality of depth values;
07: and executing anti-shake on the two-dimensional image according to the normal line and the attitude difference of the ground plane equation to obtain an anti-shake image.
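The ground-plane and subject-extraction steps above can be sketched in outline as follows. This is a minimal illustration assuming the depth image has already been converted into 3D points; the plane-fitting and thresholding choices (an SVD least-squares fit and a fixed height threshold) are assumptions of this sketch, not the patent's prescribed algorithm, and all function names are hypothetical.

```python
import numpy as np

def fit_ground_plane(points):
    # Fit a plane n·p + d = 0 to candidate ground points: the right singular
    # vector with the smallest singular value of the centered point cloud is
    # the plane normal (least-squares plane fit).
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = -normal.dot(centroid)
    return normal, d

def subject_mask(points, normal, d, height_thresh=0.05):
    # Points whose distance to the ground plane exceeds the threshold are
    # treated as belonging to the subject region (threshold is illustrative).
    dist = points @ normal + d
    return np.abs(dist) > height_thresh
```

The resulting boolean mask would then be used, as in step 05, to cut the target region out of the frame-synchronized two-dimensional image.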
Referring to fig. 2, the image processing method described above may be applied to the image processing apparatus 10. The image processing apparatus 10 according to the embodiment of the application includes a first acquisition module 12, an identification module 13, a second acquisition module 14, an extraction module 15, a third acquisition module 16, and an execution module 17. The first acquisition module 12 is configured to acquire the two-dimensional image and the depth image when the two-dimensional image is captured by the electronic device 100 during surround shooting. The identification module 13 is configured to identify the ground area from the depth image and obtain a ground plane equation. The second acquisition module 14 is configured to obtain a subject region in the depth image according to the ground plane equation. The extraction module 15 is configured to extract a target region in the two-dimensional image using the subject region in the depth image as a mask. The third acquisition module 16 is configured to obtain, from gyroscope data, the posture difference of the electronic device 100 between capturing each frame of the two-dimensional image and capturing the previous frame, where the two-dimensional image is frame-synchronized with the depth image, each frame of the two-dimensional image corresponds to gyroscope data collected by the gyroscope, and the depth image includes a plurality of depth values. The execution module 17 is configured to perform anti-shake on the two-dimensional image according to the normal of the ground plane equation and the posture difference to obtain an anti-shake image.
Referring to fig. 3, the image processing method described above may be applied to an electronic device 100. The electronic device 100 according to an embodiment of the application includes a main body 20, one or more processors 30, a memory 40, and one or more programs. The one or more processors 30 and the memory 40 are installed in the main body 20; the one or more programs are stored in the memory 40 and executed by the one or more processors 30, and include instructions for performing the image processing method in 02, 03, 04, 05, 06, and 07. That is, the one or more processors 30 are configured to: acquire the two-dimensional image and the depth image when the two-dimensional image is captured by the electronic device 100 during surround shooting; identify a ground area from the depth image and obtain a ground plane equation; obtain a subject region in the depth image according to the ground plane equation; extract a target region in the two-dimensional image using the subject region in the depth image as a mask; obtain, from gyroscope data, the posture difference of the electronic device 100 between capturing each frame of the two-dimensional image and capturing the previous frame, where the two-dimensional image is frame-synchronized with the depth image, each frame of the two-dimensional image corresponds to gyroscope data collected by the gyroscope, and the depth image includes a plurality of depth values; and perform anti-shake on the two-dimensional image according to the normal of the ground plane equation and the posture difference to obtain an anti-shake image.
The electronic device 100 according to another embodiment of the present application may include a main body 20 and the image processing apparatus 10 according to the embodiment of the present application, and the image processing apparatus 10 is mounted in the main body 20.
The electronic device 100 according to the embodiment of the present application includes, but is not limited to, a mobile phone, a tablet computer, a camera, a video camera, a personal digital assistant, a wearable device, an intelligent robot, an intelligent vehicle, and the like. The wearable devices include smart bracelets, smart watches, smart glasses, and the like. The camera may include a Charge-Coupled Device (CCD) camera, a Complementary Metal-Oxide-Semiconductor (CMOS) camera, or the like.
Surround shooting means that the electronic device 100 moves around the photographed subject while shooting to obtain captured images, where the captured images may include two-dimensional images, depth images, and the like, which is not limited herein. Referring to fig. 4, in the embodiment of the present application, the subject is located on a plane, for example, the ground or a platform. The electronic device 100 performs surround shooting on the subject to acquire a two-dimensional image and a depth image. In one embodiment, the electronic device 100 may acquire the two-dimensional image and the depth image in real time as the two-dimensional image and the depth image to be processed. In another embodiment, the electronic device 100 may acquire, from another electronic device, a two-dimensional image and a depth image obtained by that device performing surround shooting on the subject, as the two-dimensional image and the depth image to be processed.
In some embodiments, the electronic device 100 may further include a first image acquisition device 50, a second image acquisition device 60, and an inertial measurement unit 70 (Inertial Measurement Unit, IMU), where the IMU includes a gyroscope and an accelerometer and is used to provide gyroscope data and accelerometer data.
The two-dimensional image may be captured by the first image acquisition device 50 and may be a gray-scale image or a color image. In some embodiments, the first image acquisition device 50 may be a visible-light camera module, such as an RGB camera module or a black-and-white camera module, which is not limited herein.
The depth image may be captured by the second image acquisition device 60. In some embodiments, the second image acquisition device 60 may be a depth camera module, such as a TOF camera module, a structured-light camera module, or a binocular camera module, which is not limited herein. For example, when the second image acquisition device 60 is a TOF camera module, the TOF camera module emits pulsed infrared light toward the photographed subject and receives the infrared light reflected back by the subject; the TOF camera module then performs photoelectric conversion on the received infrared light to obtain a depth image. The depth image contains a plurality of depth values, i.e., each pixel in the depth image has a corresponding depth value.
Referring to fig. 3 and 5, in some embodiments, the two-dimensional image is frame-synchronized with the depth image, and each frame of the two-dimensional image corresponds to the gyroscope data acquired by the gyroscope, i.e., each frame of the two-dimensional image corresponds to gyroscope data and accelerometer data. Each pixel in the two-dimensional image can find the depth value of the corresponding pixel in the depth image.
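As an illustration of this pixel correspondence, a depth-image pixel together with its depth value determines a 3D point under a pinhole camera model. The function below is a hypothetical sketch (the patent does not give this formula explicitly); fx, fy, x0, and y0 are the intrinsic parameters described in the calibration discussion that follows.

```python
def unproject(u, v, depth, fx, fy, x0, y0):
    # Back-project depth-image pixel (u, v) with depth value `depth`
    # into a 3D point in the depth camera frame (pinhole model):
    # X = (u - x0) * Z / fx, Y = (v - y0) * Z / fy, Z = depth.
    x = (u - x0) * depth / fx
    y = (v - y0) * depth / fy
    return (x, y, depth)
```

For example, the principal-point pixel maps to a point on the optical axis at the measured depth.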
In some embodiments, the first image acquisition device 50 and the second image acquisition device 60 may be connected by a hardware synchronization line to ensure frame synchronization between the two-dimensional image and the depth image. This avoids the situation in which the data acquired by the first image acquisition device 50 and the second image acquisition device 60 cannot be associated because their local clocks are not synchronized, thereby ensuring the accuracy of subsequent data processing. It can be appreciated that the first image acquisition device 50, the second image acquisition device 60, and the IMU may all be connected by hardware synchronization lines, so as to ensure the accuracy of data association and improve the effect of subsequent image correction.
In some embodiments, the electronic device 100 may need to be calibrated before shooting. Calibration methods include, but are not limited to, the traditional camera calibration method, the active-vision camera calibration method, and the camera self-calibration method. In the embodiment of the application, Zhang's calibration method, a traditional camera calibration method, may be adopted.
The calibration contents are the intrinsic parameters of the first image acquisition device 50 and the second image acquisition device 60, the extrinsic parameters from the second image acquisition device 60 to the first image acquisition device 50, and the relative posture of the IMU with respect to the first image acquisition device 50. In the embodiment of the present application, the intrinsic parameters of the first image acquisition device 50 and the second image acquisition device 60 are represented by a matrix, the extrinsic parameters from the second image acquisition device 60 to the first image acquisition device 50 are represented by a rotation matrix and a translation vector, and the relative posture of the IMU with respect to the first image acquisition device 50 is represented by a rotation matrix.
Specifically, the matrix representing the intrinsic parameters of the first image acquisition device 50 and the second image acquisition device 60 is as follows:

    K = | fx  s   x0 |
        | 0   fy  y0 |
        | 0   0   1  |

where fx and fy are the focal lengths, (x0, y0) are the principal point coordinates (relative to the imaging plane), and s is the coordinate-axis skew parameter, which may be set to 0.
The rotation matrix is as follows:

    R = | r11  r12  r13 |
        | r21  r22  r23 |
        | r31  r32  r33 |

The translation vector is as follows:

    t = [tx, ty, tz]^T
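For illustration, the calibrated quantities can be assembled to map a 3D point measured in the frame of the second image acquisition device into a pixel of the first image acquisition device. All numeric values below are hypothetical placeholders, not calibration results, and the function name is an assumption of this sketch.

```python
import numpy as np

# Intrinsic matrix built from fx, fy, principal point (x0, y0), and skew s.
fx, fy, x0, y0, s = 500.0, 500.0, 320.0, 240.0, 0.0
K = np.array([[fx,  s, x0],
              [0., fy, y0],
              [0., 0., 1.]])

# Extrinsics from the second (depth) device to the first (RGB) device:
# rotation matrix R and translation vector t (illustrative values only).
R = np.eye(3)                    # cameras assumed rotationally aligned
t = np.array([0.05, 0.0, 0.0])   # 5 cm baseline, illustrative

def depth_point_to_rgb_pixel(p_depth):
    # Transform a 3D point from the depth camera frame to the RGB camera
    # frame, then project it with the intrinsic matrix K.
    p_rgb = R @ p_depth + t
    uvw = K @ p_rgb
    return uvw[:2] / uvw[2]
```

This is the standard pinhole projection p' = K (R p + t) followed by division by depth; the patent only states that these matrices are calibrated, not how they are applied.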
According to the image processing method, the image processing apparatus 10, the electronic device 100, and the computer-readable storage medium of the embodiments of the application, when the two-dimensional image is captured by the electronic device 100 during surround shooting, the acquired depth image is processed to obtain a ground plane equation; a subject region in the depth image is obtained according to the ground plane equation, and a target region in the two-dimensional image is extracted according to the subject region. The posture difference of the electronic device 100 between capturing each frame of the two-dimensional image and capturing the previous frame is then obtained from gyroscope data, and finally anti-shake is performed on the target region of the two-dimensional image according to the normal of the ground plane equation and the posture difference to obtain an anti-shake image. The whole image processing procedure can thus correct images captured by the electronic device 100 during surround shooting, ensuring the image correction effect and improving the imaging quality of the electronic device 100.
In addition, performing anti-shake on the two-dimensional image using the normal of the ground plane equation together with the posture difference of the electronic device achieves a better anti-shake effect while reducing the amount of computation, so that real-time computation can be achieved on the electronic device when generating the anti-shake image. Moreover, compared with performing anti-shake using gyroscope data alone, jointly using the normal of the ground plane equation and the posture difference can reduce or even avoid the negative effects caused by gyroscope error, thereby ensuring imaging quality.
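The patent does not spell out the warp used when performing anti-shake. For a pure rotational posture difference R_delta, one standard approach (an assumption here, not necessarily the patent's exact method) compensates each frame with the homography H = K * R_delta^T * K^(-1):

```python
import numpy as np

def stabilizing_homography(K, R_delta):
    # Homography that undoes a pure inter-frame camera rotation R_delta
    # (e.g. the posture difference derived from gyroscope data) for a
    # pinhole camera with intrinsic matrix K: p' = K R_delta^T K^{-1} p.
    return K @ R_delta.T @ np.linalg.inv(K)
```

Each pixel of the current frame would then be warped by H (e.g. with a perspective warp) so that its content aligns with the previous frame; when the posture difference is zero, H reduces to the identity and the frame is left unchanged.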
Referring to fig. 3 and 6, in some embodiments, the image processing method may further include:
01: it is detected whether the electronic apparatus 100 performs surround shooting.
Referring to fig. 2, the image processing apparatus 10 according to the embodiment of the present application may further include a detection module 11. The detection module 11 is also used for detecting whether the electronic device 100 performs surround shooting.
Referring to fig. 3, the programs in the electronic device 100 according to the embodiment of the present application further include instructions for performing the image processing method in 01. That is, the one or more processors 30 are also configured to detect whether the electronic device 100 is performing surround shooting.
Specifically, referring to fig. 3 and 7, in some embodiments, 01: detecting whether the electronic apparatus 100 performs surround shooting includes:
011: acquiring a plurality of gyroscope data corresponding to a plurality of two-dimensional images and a plurality of gyroscope data corresponding to a plurality of two-dimensional images respectively within a preset duration;
015: calculating a first Euler distance between adjacent gyroscope data to serve as a change rate sequence of the plurality of gyroscope data;
017: in the change rate sequence, if the latest preset number of first euler distances are smaller than the preset first threshold, it is determined that the electronic device 100 performs surround shooting.
Referring to fig. 2, the detection module 11 is further configured to: acquire a plurality of two-dimensional images within a preset duration and the gyroscope data respectively corresponding to the plurality of two-dimensional images; calculate the Euclidean distances between adjacent gyroscope data to serve as a rate-of-change sequence of the plurality of gyroscope data; and, in the rate-of-change sequence, if the latest preset number of Euclidean distances are all smaller than the preset first threshold, determine that the electronic device 100 is performing surround shooting.
Referring to fig. 3, the program in the electronic device 100 according to the embodiment of the present application is further configured to execute the image processing methods in 011, 015, and 017. That is, the one or more processors 30 are configured to acquire, within a preset duration, a plurality of two-dimensional images and a plurality of gyroscope data respectively corresponding to the plurality of two-dimensional images; calculate Euler distances between adjacent gyroscope data to serve as a change rate sequence of the plurality of gyroscope data; and, in the change rate sequence, if the latest preset number of Euler distances are all smaller than the preset first threshold, determine that the electronic device 100 performs surround shooting.
The preset duration may be a duration set by the image processing apparatus 10 or the electronic device 100 before shipment, chosen so that the plurality of gyroscope data acquired during that duration satisfies the requirement of detecting whether the electronic device 100 performs surround shooting. Of course, in other embodiments, the preset duration may be set by the user, which is not limited herein. Specifically, the more frames of two-dimensional images within the preset duration, the more gyroscope data is acquired and the higher the accuracy of the judgment as to whether the electronic device 100 performs surround shooting; conversely, the fewer frames within the preset duration, the less gyroscope data is acquired and the faster the judgment.
The calculation formula of the Euler distance between two adjacent gyroscope data (rx1, ry1, rz1) and (rx2, ry2, rz2) is as follows:

delta = sqrt((rx2 - rx1)^2 + (ry2 - ry1)^2 + (rz2 - rz1)^2)
Therefore, the first Euler distance can be calculated according to the above formula from the gyroscope data corresponding to the first two-dimensional image and the gyroscope data corresponding to the second two-dimensional image. If five two-dimensional images exist within the preset duration, first Euler distances delta1, delta2, delta3, and delta4 are calculated between adjacent gyroscope data among the gyroscope data respectively corresponding to the five two-dimensional images, thereby obtaining the change rate sequence O = [delta1, delta2, delta3, delta4] of the plurality of gyroscope data.
In the case where the latest preset number of first Euler distances are each smaller than the preset first threshold, the electronic device 100 performs surround shooting. The latest preset number may be set according to the specific situation; for example, if the preset number is 3, the electronic device 100 is determined to perform surround shooting when delta2, delta3, and delta4 are all smaller than the preset first threshold.
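The detection logic above can be sketched in Python as follows (a minimal illustration; the function names and sample values are not from the patent):

```python
import math

def rate_of_change_sequence(gyro_samples):
    """Euclidean ("Euler") distances between consecutive gyroscope samples."""
    return [
        math.sqrt(sum((b - a) ** 2 for a, b in zip(g1, g2)))
        for g1, g2 in zip(gyro_samples, gyro_samples[1:])
    ]

def is_surround_shooting(gyro_samples, first_threshold, preset_count=3):
    """True if the latest `preset_count` distances are all below the threshold,
    i.e. the device is rotating slowly and steadily around a subject."""
    seq = rate_of_change_sequence(gyro_samples)
    if len(seq) < preset_count:
        return False
    return all(d < first_threshold for d in seq[-preset_count:])
```

For example, five slowly and evenly changing samples yield four small distances and the check passes, while abrupt rotation produces large distances and the check fails.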
In some embodiments, the first threshold may be a threshold set before the image processing apparatus 10 or the electronic device 100 is shipped. Specifically, the first threshold may be obtained as follows: acquire multiple frames of two-dimensional images, calculate, according to the calculation formula of the Euler distance, the Euler distances between the gyroscope data corresponding to the multiple frames of two-dimensional images to obtain the change rate sequence of the plurality of gyroscope data, and count the maximum value in the change rate sequence; the first threshold may then be set slightly larger than that maximum value.
Referring to fig. 3 and 8, in some embodiments, prior to calculating the first euler distance between adjacent gyroscope data, 01: detecting whether the electronic apparatus 100 performs surround shooting, further includes:
013: mean filtering is performed on the plurality of gyroscope data. And calculating a first Euler distance between adjacent gyroscope data as the first Euler distance between the filtered adjacent gyroscope data.
Referring to fig. 2, the detection module 11 is further configured to: mean filtering is performed on the plurality of gyroscope data. And calculating a first Euler distance between adjacent gyroscope data as the first Euler distance between the filtered adjacent gyroscope data.
Referring to fig. 3, the program in the electronic device 100 of the embodiment of the present application includes a program for executing the image processing method in 013. That is, the one or more processors 30 are configured to perform mean filtering on the plurality of gyroscope data. And calculating a first Euler distance between adjacent gyroscope data as the first Euler distance between the filtered adjacent gyroscope data.
Performing mean filtering on the plurality of gyroscope data can reduce the noise of the gyroscope data, reduce errors, and improve the accuracy of the gyroscope data. In some embodiments, the gyroscope data can be further denoised by other filtering such as box filtering, Gaussian filtering, median filtering, bilateral filtering, and the like, which is not limited herein.
Specifically, in some embodiments, if five two-dimensional images exist within the preset time period, average filtering is performed on the gyroscope data corresponding to the five two-dimensional images, and the filtered gyroscope data may be (rx 1, ry1, rz 1), (rx 2, ry2, rz 2), (rx 3, ry3, rz 3), (rx 4, ry4, rz 4), and (rx 5, ry5, rz 5), respectively, then adjacent gyroscope data in the gyroscope data corresponding to the five two-dimensional images are calculated according to the above formula to obtain the corresponding first euler distance: delta1, delta2, delta3, and delta4.
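A sliding-window mean filter over per-axis gyroscope samples can be sketched as follows (illustrative only; the patent does not specify the window size, so `window=3` is an assumption, and the window is simply clamped at the sequence boundaries):

```python
def mean_filter(samples, window=3):
    """Sliding-window mean over 3-axis gyroscope samples.
    At the edges, only the available neighbours are averaged."""
    n = len(samples)
    half = window // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        win = samples[lo:hi]
        # average each of the three axes independently
        out.append(tuple(sum(s[k] for s in win) / len(win) for k in range(3)))
    return out
```

The filtered samples would then be fed into the first-Euler-distance calculation in place of the raw samples.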
Referring to fig. 3 and 9, in some embodiments, 01: detecting whether the electronic apparatus 100 performs surround shooting, further includes:
019: in the change rate sequence, if the latest preset number of first euler distances are all greater than the preset first threshold, it is determined that the electronic device 100 does not execute surround shooting, and the gyroscope is adopted to execute anti-shake.
Referring to fig. 2, the detection module 11 is further configured to: in the change rate sequence, if the latest preset number of first euler distances are all greater than the preset first threshold, it is determined that the electronic device 100 does not execute surround shooting, and the gyroscope is adopted to execute anti-shake.
Referring to fig. 3, a program in the electronic device 100 of the embodiment of the present application includes a program for executing the image processing method in 019. That is, in the change rate sequence, if the latest preset number of first euler distances are all greater than the preset first threshold, it is determined that the electronic device 100 does not perform surround shooting, and the gyroscope is used to perform anti-shake.
Specifically, if the latest preset number of first Euler distances are all greater than the preset first threshold (for example, if the latest preset number is 3 and delta2, delta3, and delta4 are all greater than the preset first threshold), this indicates that the electronic device does not perform surround shooting. In this case the electronic device may be in a shooting mode such as straight-line shooting or pitching shooting, so the electronic device can use the gyroscope to perform anti-shake to ensure imaging quality. Of course, in other embodiments, when the electronic device does not perform surround shooting, the electronic device may also use other anti-shake modes, which is not limited herein.
Referring to fig. 10, in some embodiments, 03: identifying a ground area according to the depth image, and acquiring a ground plane equation, wherein the ground plane equation comprises the following steps:
031: dividing the depth image into a plurality of grids;
032: converting the depth value of each grid in the depth image into a three-dimensional point cloud, and converting the three-dimensional point cloud into a first coordinate system of a first image acquisition device 50 (shown in fig. 3), wherein the first image acquisition device 50 is used for acquiring a two-dimensional image, and the depth value in the depth image is the depth value in a second coordinate system;
033: fitting the three-dimensional point cloud of each grid under the first coordinate system by adopting a first plane fitting algorithm to obtain a plurality of first fitting plane equations;
034: removing grids with errors of the first fitting plane larger than a second threshold value;
035: converting the accelerometer values into a first coordinate system of the first image acquisition device 50 according to the calibrated external parameters;
036: acquiring an included angle between a normal line of a first fitting plane equation of the remaining grid and a direction of a value of the accelerometer under a first coordinate system;
037: removing grids with included angles larger than a third threshold value;
038: and fitting the three-dimensional point clouds of the rest grids by adopting a second plane fitting algorithm to obtain a second fitting plane equation, and taking the second fitting plane equation as a ground plane equation.
Referring to fig. 2, the identification module 13 is further configured to: dividing the depth image into a plurality of grids; converting the depth value of each grid in the depth image into three-dimensional point cloud, and converting the three-dimensional point cloud into a first coordinate system of the first image acquisition device 50, wherein the first image acquisition device 50 is used for acquiring a two-dimensional image, and the depth value in the depth image is the depth value in the second coordinate system; fitting the three-dimensional point cloud of each grid under the first coordinate system by adopting a first plane fitting algorithm to obtain a plurality of first fitting plane equations; removing grids with errors of the first fitting plane larger than a second threshold value; converting the accelerometer values into a first coordinate system of the first image acquisition device 50 according to the calibrated external parameters; acquiring an included angle between a normal line of a first fitting plane equation of the remaining grid and a direction of a value of the accelerometer under a first coordinate system; removing grids with included angles larger than a third threshold value; and fitting the three-dimensional point clouds of the rest grids by adopting a second plane fitting algorithm to obtain a second fitting plane equation, and taking the second fitting plane equation as a ground plane equation.
Referring to fig. 3, the program in the electronic device 100 according to the embodiment of the present application is used to execute the image processing methods 031, 032, 033, 034, 035, 036, 037 and 038. That is, the one or more processors 30 are configured to divide the depth image into a plurality of grids; converting the depth value of each grid in the depth image into three-dimensional point cloud, and converting the three-dimensional point cloud into a first coordinate system of the first image acquisition device 50, wherein the first image acquisition device 50 is used for acquiring a two-dimensional image, and the depth value in the depth image is the depth value in the second coordinate system; fitting the three-dimensional point cloud of each grid under the first coordinate system by adopting a first plane fitting algorithm to obtain a plurality of first fitting plane equations; removing grids with errors of the first fitting plane larger than a second threshold value; converting the accelerometer values into a first coordinate system of the first image acquisition device 50 according to the calibrated external parameters; acquiring an included angle between a normal line of a first fitting plane equation of the remaining grid and a direction of a value of the accelerometer under a first coordinate system; removing grids with included angles larger than a third threshold value; and fitting the three-dimensional point clouds of the rest grids by adopting a second plane fitting algorithm to obtain a second fitting plane equation, and taking the second fitting plane equation as a ground plane equation.
Referring to fig. 11, dividing the depth image into a plurality of grids means dividing the depth image into a plurality of small grid units; the degree to which the grid division matches the calculation target, together with the quality of the grids, determines the quality of the final finite element calculation. In some embodiments, the depth image may be partitioned using structured grid partitioning, unstructured grid partitioning, and the like, which is not limited herein. The larger the number of grids, the higher the calculation accuracy.
A three-dimensional point cloud refers to a set of data points in a three-dimensional coordinate system. The three-dimensional point cloud may include three-dimensional coordinates, colors, classification values, intensity values, time, and the like. For example, a point cloud obtained according to the laser measurement principle may include three-dimensional coordinates (XYZ) and laser reflection intensity (Intensity). A point cloud obtained according to the photogrammetry principle may include three-dimensional coordinates (XYZ) and color information (RGB). A point cloud obtained by combining laser measurement and photogrammetry principles may include three-dimensional coordinates (XYZ), laser reflection intensity (Intensity), and color information (RGB). Point clouds may be ordered or unordered. The depth image can be converted into a three-dimensional point cloud after coordinate conversion, wherein the formula for converting the depth values in the depth image into a three-dimensional point cloud is as follows:

xw = (u - u0) * zc / fx, yw = (v - v0) * zc / fy, zw = zc
Wherein xw, yw, zw are the 3D point cloud coordinates, u and v are the pixel coordinates of the two-dimensional image, zc is the depth value of the depth image at the pixel coordinates (u, v), fx and fy are the intrinsic focal lengths of the second image acquisition device 60, and u0, v0 is the intrinsic principal point of the second image acquisition device 60.
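The standard pinhole back-projection described above can be sketched as a short Python function (illustrative; the parameter names mirror the patent's symbols):

```python
def depth_to_point(u, v, zc, fx, fy, u0, v0):
    """Back-project pixel (u, v) with depth zc into 3D camera coordinates
    using the pinhole model: x = (u - u0) * z / fx, y = (v - v0) * z / fy."""
    xw = (u - u0) * zc / fx
    yw = (v - v0) * zc / fy
    zw = zc          # the depth value becomes the z coordinate
    return (xw, yw, zw)
```

A pixel at the principal point maps onto the optical axis, i.e. (0, 0, zc).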
The formulas for converting the three-dimensional point cloud and the accelerometer values into the first coordinate system of the first image acquisition device 50 are as follows:

[xrgb, yrgb, zrgb]^T = R1 * [xw, yw, zw]^T + T1

[axrgb, ayrgb, azrgb]^T = R2 * [ax, ay, az]^T

Where xrgb, yrgb, zrgb are the coordinates of the three-dimensional point cloud in the first coordinate system of the first image acquisition device 50, T1 is the translation vector and R1 the rotation matrix in the extrinsic parameters from the second image acquisition device 60 to the first image acquisition device 50, xw, yw, zw are the 3D point cloud coordinates, axrgb, ayrgb, azrgb are the accelerometer values in the first coordinate system, R2 is the rotation matrix in the extrinsic parameters from the accelerometer to the first image acquisition device 50, and ax, ay, az are the accelerometer values.
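Assuming R1 and T1 are the extrinsic rotation matrix and translation vector from the second (depth) camera to the first (RGB) camera as described above, the rigid transform can be sketched with NumPy (an illustration, not the patent's implementation):

```python
import numpy as np

def to_first_camera(points, R1, T1):
    """Map points from the second camera's frame into the first camera's
    frame: p' = R1 @ p + T1, applied row-wise to an (N, 3) array."""
    points = np.asarray(points, dtype=float)          # (N, 3)
    return points @ np.asarray(R1, dtype=float).T + np.asarray(T1, dtype=float)
```

The same helper applies to the accelerometer value by passing R2 and a zero translation.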
In some embodiments, the first plane fitting algorithm may be the least squares method, the RANSAC (RANdom SAmple Consensus) algorithm, or the like, which is not limited herein. The least squares method is a mathematical optimization technique: it finds the best functional match for the data by minimizing the sum of squared errors, so that the sum of squared errors between the fitted data and the actual data is minimized. The RANSAC algorithm is an iterative algorithm that correctly estimates mathematical model parameters from a set of data containing "outliers". "Outliers" generally refer to noise in the data, such as mismatches in feature matching and outlier points in an estimated curve.
The first plane fitting algorithm in the embodiment of the application can use a least square method to fit the three-dimensional point cloud of each grid under the first coordinate system so as to obtain a plurality of first fitting plane equations. Compared with the method of fitting by using a RANSAC algorithm, the least square method can quickly remove points with very large errors, and the operation speed is improved.
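A per-grid least-squares plane fit can be sketched as follows (illustrative; the plane is parameterized as z = a*x + b*y + c, which is one common choice and an assumption here, and the returned RMS error plays the role of the fitting error compared against the second threshold):

```python
import numpy as np

def fit_plane_least_squares(points):
    """Fit z = a*x + b*y + c to a grid's 3D points by least squares.
    Returns the coefficients (a, b, c) and the RMS fitting error,
    which can be used to reject grids whose error exceeds a threshold."""
    pts = np.asarray(points, dtype=float)             # (N, 3)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    rms = float(np.sqrt(np.mean((A @ coeffs - pts[:, 2]) ** 2)))
    return coeffs, rms
```

The (unnormalized) plane normal is (a, b, -1), whose angle against the accelerometer direction can then be checked against the third threshold.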
In some embodiments, the second threshold value and the third threshold value may each be a threshold value set before the image processing apparatus 10 or the electronic device 100 is shipped from the factory. In some embodiments, the second threshold may be obtained by: the first fitting plane equation of all grids is used for calculating the fitting average error, and the second threshold value is set to be slightly larger than the average error. The third threshold may be obtained by: and acquiring the included angle between the normal line of the first fitting plane equation of each grid and the accelerometer value, counting the maximum value in all the included angles, and setting the third threshold value to be slightly larger than the maximum value in all the included angles. Of course, in other embodiments, both the second threshold and the third threshold may be adjusted according to the actual effect.
Specifically, after removing grids with errors larger than a second threshold value of the first fitting plane and removing grids with included angles larger than a third threshold value, fitting the three-dimensional point clouds of the remaining grids by adopting a second plane fitting algorithm to obtain a second fitting plane equation, and taking the second fitting plane equation as a ground plane equation. It should be noted that, in some embodiments, the second plane fitting algorithm may be a RANSAC algorithm, so that the fitting accuracy when fitting the depth values of the remaining grids can be better.
Referring to fig. 12, in some embodiments, 04: according to a ground plane equation, acquiring a main body area in a depth image comprises:
041: removing the point cloud of the ground area according to the ground plane equation, removing the point cloud of the far area, and taking the rest point cloud as a main point cloud;
043: and taking the area corresponding to the main point cloud in the depth image as a main area in the depth image.
Referring to fig. 2, the second acquisition module 14 is further configured to: removing the point cloud of the ground area according to the ground plane equation, removing the point cloud of the far area, and taking the rest point cloud as a main point cloud; and taking the area corresponding to the main point cloud in the depth image as a main area in the depth image.
Referring to fig. 3, the program in the electronic device 100 according to the embodiment of the present application includes a method for executing the image processing in 041 and 043. That is, the one or more processors 30 are configured to remove the ground area point cloud according to the ground plane equation, remove the far area point cloud, and take the remaining point cloud as the subject point cloud; and taking the area corresponding to the main point cloud in the depth image as a main area in the depth image.
In some embodiments, the subject region in the depth image may also be acquired using a deep learning method.
More specifically, referring to FIG. 13, in some embodiments, 041: removing the remote area point cloud, comprising:
0411: counting the depth values of all pixels in the depth image after removing the point cloud of the ground area;
0413: dividing the depth value of the depth image with the ground area point cloud removed into a first type of depth value and a second type of depth value by adopting a preset algorithm, wherein the depth values in the first type of depth value are smaller than the depth values in the second type of depth value;
0415: and removing the point cloud corresponding to the depth value in the second class of depth values.
Referring to fig. 2, the second acquisition module 14 is further configured to: counting the depth values of all pixels in the depth image after removing the point cloud of the ground area; dividing the depth value of the depth image with the ground area point cloud removed into a first type of depth value and a second type of depth value by adopting a preset algorithm, wherein the depth values in the first type of depth value are smaller than the depth values in the second type of depth value; and removing the point cloud corresponding to the depth value in the second class of depth values.
Referring to fig. 3, the program in the electronic device 100 according to the embodiment of the present application includes a program for executing the image processing methods in 0411, 0413 and 0415. That is, the one or more processors 30 are configured to count the depth values of all pixels in the depth image after removing the ground area point cloud; divide the depth values of the depth image with the ground area point cloud removed into a first class of depth values and a second class of depth values by adopting a preset algorithm, wherein the depth values in the first class are each smaller than the depth values in the second class; and remove the point cloud corresponding to the depth values in the second class of depth values.
Specifically, referring to fig. 14, counting the depth values of all pixels in the depth image after removing the ground area point cloud includes: converting the depth values in the depth image into a three-dimensional point cloud, and then converting the three-dimensional point cloud into depth values in the first coordinate system of the first image acquisition device 50. The manner of converting the depth values in the depth image into the three-dimensional point cloud is substantially the same as described above and will not be repeated herein. The formula for converting the three-dimensional point cloud into depth values in the first coordinate system of the first image acquisition device 50 is as follows:

zrgb * [urgb, vrgb, 1]^T = Krgb * [xrgb, yrgb, zrgb]^T

Wherein xrgb, yrgb, zrgb are the coordinates of the three-dimensional point cloud in the first coordinate system of the first image acquisition device 50, Krgb is the intrinsic matrix of the first image acquisition device 50, zrgb is the depth value corresponding to the three-dimensional point cloud in the first coordinate system, and urgb and vrgb are the pixel coordinates of the three-dimensional point cloud in the first coordinate system.
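The projection of a 3D point in the first camera's frame back to pixel coordinates can be sketched as follows (illustrative; Krgb is assumed here to be the intrinsic matrix of the first image acquisition device, with the point's z component serving as the depth value):

```python
def project_to_pixels(point, K):
    """Project a 3D point (x, y, z) in the first camera's frame to pixel
    coordinates via the intrinsic matrix K (3x3 nested list):
    u = fx*x/z + u0, v = fy*y/z + v0; z itself is the depth value."""
    x, y, z = point
    u = K[0][0] * x / z + K[0][2]
    v = K[1][1] * y / z + K[1][2]
    return (u, v, z)
```

This is the inverse direction of the back-projection formula given earlier for the second camera.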
In some embodiments, the preset algorithm may be Otsu's algorithm (the Otsu method). Otsu's method calculates the optimal threshold separating the two classes (the threshold at which the between-class variance of foreground and background is largest), assuming the foreground and background show two peaks on the histogram, and the image can then be globally binarized according to the obtained optimal threshold. Specifically, the depth values of the depth image with the ground area point cloud removed are divided into a first class of depth values and a second class of depth values according to Otsu's algorithm, and the point cloud corresponding to the depth values in the second class is removed, thereby obtaining the subject region in the depth image. Wherein the depth values in the first class of depth values are each smaller than the depth values in the second class of depth values.
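A histogram-based Otsu split over scalar depth values can be sketched as follows (illustrative; the bin count is an assumption, since the patent only names the method):

```python
def otsu_threshold(values, bins=64):
    """Otsu's method on a 1-D histogram of depth values: return the
    threshold that maximizes the between-class variance. Values below the
    threshold form the near (first) class, values above it the far class."""
    lo, hi = min(values), max(values)
    if lo == hi:
        return lo
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    total_sum = sum((lo + (i + 0.5) * width) * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = lo, -1.0, 0, 0.0
    for i in range(bins - 1):
        w0 += hist[i]                       # pixels in the near class
        sum0 += (lo + (i + 0.5) * width) * hist[i]
        w1 = total - w0                     # pixels in the far class
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (total_sum - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2      # between-class variance
        if var > best_var:
            best_var, best_t = var, lo + (i + 1) * width
    return best_t
```

Point-cloud entries whose depth exceeds the returned threshold would be discarded as the far area.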
Referring to fig. 3 and 15, in some embodiments, 06: according to the gyroscope data, acquiring the attitude difference of the electronic device 100 when shooting each frame of two-dimensional image and shooting the previous frame of two-dimensional image comprises:
061: converting the rotation quantity of the gyroscope between two adjacent two-dimensional images into a plurality of rotation matrixes;
063: and sequentially multiplying the rotation matrixes to the left according to the time sequence to obtain the attitude difference.
Referring to fig. 2, the third acquisition module 16 is further configured to: converting the rotation quantity of the gyroscope between two adjacent two-dimensional images into a plurality of rotation matrixes; and sequentially multiplying the rotation matrixes to the left according to the time sequence to obtain the attitude difference.
Referring to fig. 3, the program in the electronic device 100 according to the embodiment of the present application includes a method for executing the image processing in 061 and 063. That is, the one or more processors 30 are configured to convert the rotation amount of the gyroscope between two adjacent frames of two-dimensional images into a plurality of rotation matrices; and sequentially multiplying the rotation matrixes to the left according to the time sequence to obtain the attitude difference.
Specifically, in some embodiments, it is assumed that there are four gyroscope rotation amounts between two adjacent frames of two-dimensional images, namely (rx1, ry1, rz1), (rx2, ry2, rz2), (rx3, ry3, rz3), and (rx4, ry4, rz4), and the four rotation amounts are converted to obtain a plurality of rotation matrices R1, R2, R3, and R4. The rotation matrices are sequentially left-multiplied in time order to obtain the attitude difference, which is taken as the attitude difference of the electronic device 100 between shooting each frame of two-dimensional image and shooting the previous frame of two-dimensional image.
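The left-multiplication in time order can be sketched as follows (illustrative; for matrices [R1, R2, R3, R4] the result is R4 @ R3 @ R2 @ R1, each newer rotation applied on the left):

```python
import numpy as np

def compose_pose_difference(rotation_matrices):
    """Compose per-interval rotation matrices in time order by
    left-multiplication: total = Rn @ ... @ R2 @ R1."""
    total = np.eye(3)
    for R in rotation_matrices:
        total = np.asarray(R, dtype=float) @ total   # newest on the left
    return total
```

For example, two successive 90-degree rotations about the same axis compose to a single 180-degree rotation.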
Referring to fig. 3 and 16, in some embodiments, 07: performing anti-shake on the two-dimensional image according to a normal line and a posture difference of a ground plane equation to obtain an anti-shake image, comprising:
071: converting the accelerometer values into a first coordinate system of the first image acquisition device 50 according to the calibrated external parameters;
072: converting a rotation matrix corresponding to the gesture difference into a rotation vector;
073: acquiring a zero setting item in the rotation vector;
074: converting the rotation vector after zero setting into a new rotation matrix;
075: transposing the new rotation matrix to obtain a transposed matrix;
076: converting the transpose matrix into a new rotation vector;
077: obtaining a unit normal vector of a ground plane equation of a current frame, taking the unit normal vector as a first normal vector, and multiplying the first normal vector by a transpose matrix in a first coordinate system to obtain a second normal vector;
078: calculating a second Euler distance according to the unit normal vector and the second normal vector of the ground plane equation of the previous frame;
079: if the second Euler distance is smaller than a fourth threshold value or the cycle number is larger than an adjustable parameter, re-projecting the two-dimensional image by adopting a transposed matrix to obtain an anti-shake image, wherein a unit normal vector of a ground plane equation of a previous frame is a normal vector of the current frame after anti-shake compensation of the two-dimensional image, and the normal vector is stored for the calculation of a next frame;
0710: and when the second Euler distance is larger than a fourth threshold value, or the cycle number is smaller than an adjustable parameter, optimizing and adjusting a new rotation vector, converting the adjusted rotation vector into a new rotation matrix, and executing 075 to 079 until the second Euler distance is smaller than the fourth threshold value, or the cycle number is larger than the adjustable parameter.
Referring to fig. 2, the execution module 17 is further configured to: converting the accelerometer values into a first coordinate system of the first image acquisition device 50 according to the calibrated external parameters; converting a rotation matrix corresponding to the gesture difference into a rotation vector; acquiring a zero setting item in the rotation vector; converting the rotation vector after zero setting into a new rotation matrix; transposing the new rotation matrix to obtain a transposed matrix; converting the transpose matrix into a new rotation vector; obtaining a unit normal vector of a ground plane equation of a current frame, taking the unit normal vector as a first normal vector, and multiplying the first normal vector by a transpose matrix in a first coordinate system to obtain a second normal vector; calculating a second Euler distance according to the unit normal vector and the second normal vector of the ground plane equation of the previous frame; if the second Euler distance is smaller than a fourth threshold value or the cycle number is larger than an adjustable parameter, re-projecting the two-dimensional image by adopting a transposed matrix to obtain an anti-shake image, wherein a unit normal vector of a ground plane equation of a previous frame is a normal vector of the current frame after anti-shake compensation of the two-dimensional image, and the normal vector is stored for the calculation of a next frame; and when the second Euler distance is larger than a fourth threshold value, or the cycle number is smaller than an adjustable parameter, optimizing and adjusting a new rotation vector, converting the adjusted rotation vector into a new rotation matrix, and executing 075 to 079 until the second Euler distance is smaller than the fourth threshold value, or the cycle number is larger than the adjustable parameter.
Referring to fig. 3, the program in the electronic device 100 of the embodiment of the present application includes a program for executing the image processing methods 071, 072, 073, 074, 075, 076, 077, 078, 079, and 0710. That is, the one or more processors 30 are configured to convert the accelerometer values into a first coordinate system of the first image acquisition device 50 according to the calibration external parameters; converting a rotation matrix corresponding to the gesture difference into a rotation vector; acquiring a zero setting item in the rotation vector; converting the rotation vector after zero setting into a new rotation matrix; transposing the new rotation matrix to obtain a transposed matrix; converting the transpose matrix into a new rotation vector; obtaining a unit normal vector of a ground plane equation of a current frame, taking the unit normal vector as a first normal vector, and multiplying the first normal vector by a transpose matrix in a first coordinate system to obtain a second normal vector; calculating a second Euler distance according to the unit normal vector and the second normal vector of the ground plane equation of the previous frame; if the second Euler distance is smaller than a fourth threshold value or the cycle number is larger than an adjustable parameter, re-projecting the two-dimensional image by adopting a transposed matrix to obtain an anti-shake image, wherein a unit normal vector of a ground plane equation of a previous frame is a normal vector of the current frame after anti-shake compensation of the two-dimensional image, and the normal vector is stored for the calculation of a next frame; and when the second Euler distance is larger than a fourth threshold value, or the cycle number is smaller than an adjustable parameter, optimizing and adjusting a new rotation vector, converting the adjusted rotation vector into a new rotation matrix, and executing 075 to 079 until the second Euler distance is smaller than the fourth threshold value, or the cycle number is larger than the adjustable parameter.
Specifically, the values of the accelerometer are converted into a first coordinate system of the first image acquisition device according to the calibration external parameters. The formula of the accelerometer values converted into the first coordinate system of the first image capturing device 50 is substantially the same as that described above, and will not be described herein. Converting a rotation matrix corresponding to the gesture difference when the electronic equipment shoots each frame of two-dimensional image and shoots the previous frame of two-dimensional image into a rotation vector; acquiring a zero setting item in the rotation vector according to the value of the accelerometer; converting the rotation vector after zero setting into a new rotation matrix; transposing the new rotation matrix to obtain a transposed matrix, and marking the transposed matrix as Rdelta; converting the transpose matrix Rdelta into a new rotation vector r_vec; obtaining a unit normal vector of a ground plane equation of a current frame to serve as a first normal vector V1, and multiplying the first normal vector V1 by a transpose matrix Rdelta under a first coordinate system to obtain a second normal vector V2; calculating a second Euler distance according to the unit normal vector V3 and the second normal vector V2 of the ground plane equation of the previous frame; and finally, when the second Euler distance is smaller than the fourth threshold value or the cycle number is larger than the adjustable parameter, reprojecting the two-dimensional image by adopting the transposed matrix Rdelta to obtain the anti-shake image.
The rotation matrix and the rotation vector can be converted into each other using the Rodrigues rotation formula, which is as follows:
R = cosθ·I + (1 − cosθ)·n·nᵀ + sinθ·n^
specifically, I is a unit matrix, n is a unit vector of the rotation vector, and θ is a modulo length of the rotation vector.
Since, when the electronic device 100 shoots around the subject, the rotation of the first image acquisition device 50 in the vertical direction (the same direction as gravity in the world coordinate system) does not require anti-shake compensation, and the subject itself remains in the central area, the zeroing term in the rotation vector can be acquired to zero that rotation component. Zeroing the rotation vector better fits the use scene; compared with executing anti-shake using all of the gyroscope data, it avoids the negative effects caused by gyroscope errors.
The matrix transpose is to interchange the rows and columns of the matrix, i.e., transpose the new rotation matrix (obtained from the zeroed rotation vector) to obtain a transposed matrix Rdelta, and obtain a new rotation vector r_vec from the transposed matrix Rdelta.
The calculation formula of the second Euclidean distance is the same as that of the first Euclidean distance and will not be repeated here. Specifically, the second Euclidean distance is calculated from the second normal vector V2 and the unit normal vector V3 of the ground plane equation of the previous frame; if the second Euclidean distance is smaller than the fourth threshold value, the two-dimensional image is reprojected with the transposed matrix Rdelta to obtain the anti-shake image.
In some embodiments, when the second Euclidean distance is greater than the fourth threshold value, or the cycle number is less than the adjustable parameter, a gradient descent method may be used to optimize the non-zero terms of the new rotation vector r_vec: with the second Euclidean distance as the residual, the value of r_vec is adjusted, the optimized r_vec is converted into a new rotation matrix, and 075 to 079 are executed until the second Euclidean distance is smaller than the fourth threshold value or the cycle number is larger than the adjustable parameter. Since the normal direction of the ground in the anti-shake two-dimensional images is substantially consistent, adjusting and optimizing the rotation vector r_vec with gradient descent yields a better anti-shake effect and ensures the imaging quality of the electronic device 100. It should be noted that, in other embodiments, the adjustment and optimization of the rotation vector r_vec may also use the Gauss-Newton method, the Levenberg-Marquardt (LM) algorithm, and the like, which are not limited herein.
Referring to fig. 3 and 17, in some embodiments, 073: acquiring a zeroing item in the rotation vector, comprising:
0731: under the first coordinate system, if the absolute value of the X-axis value of the accelerometer is larger than the absolute value of the Y-axis value, the X-axis component in the rotation vector is set to be zero, and if the absolute value of the X-axis value of the accelerometer is smaller than or equal to the absolute value of the Y-axis value, the Y-axis component in the rotation vector is set to be zero.
Referring to fig. 2, the execution module 17 is further configured to: under the first coordinate system, if the absolute value of the X-axis value of the accelerometer is larger than the absolute value of the Y-axis value, the X-axis component in the rotation vector is set to be zero, and if the absolute value of the X-axis value of the accelerometer is smaller than or equal to the absolute value of the Y-axis value, the Y-axis component in the rotation vector is set to be zero.
Referring to fig. 3, the program in the electronic device 100 according to the embodiment of the present application includes a program for executing the image processing method in 0731. That is, the one or more processors 30 are configured to zero the X-axis component of the rotation vector if the absolute value of the X-axis value of the accelerometer is greater than the absolute value of the Y-axis value, and to zero the Y-axis component of the rotation vector if the absolute value of the X-axis value of the accelerometer is less than or equal to the absolute value of the Y-axis value, in the first coordinate system.
Specifically, the rotation vector may be represented as (Rx, Ry, Rz). If the absolute value of the X-axis value of the accelerometer is greater than the absolute value of the Y-axis value, the first image acquisition device 50 may be arranged vertically, and Rx in the rotation vector is zeroed; if the absolute value of the X-axis value of the accelerometer is less than or equal to the absolute value of the Y-axis value, the first image acquisition device 50 may be arranged transversely, and Ry in the rotation vector is zeroed. Zeroing the rotation vector better fits the use scene; compared with the electronic device executing anti-shake using all of the gyroscope data, it avoids the negative effects caused by gyroscope errors, thereby improving the imaging effect.
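The zeroing rule of 0731 can be written as a small sketch; the function name and the (X, Y, Z) ordering of the accelerometer values are illustrative assumptions.

```python
import numpy as np

def zero_rotation_component(r_vec, accel):
    """Zero one term of the rotation vector (Rx, Ry, Rz) based on the
    accelerometer values (X, Y, Z) in the first coordinate system."""
    r = np.asarray(r_vec, dtype=float).copy()
    if abs(accel[0]) > abs(accel[1]):
        r[0] = 0.0   # |X| > |Y|: zero Rx
    else:
        r[1] = 0.0   # |X| <= |Y|: zero Ry
    return r
```

The zeroed vector is then converted back into a rotation matrix for the subsequent transpose step.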
Referring to fig. 18, in some embodiments, the image processing method may further include:
08: carrying out gray integral projection on a target area in a current two-dimensional image to obtain a histogram;
09: and optimizing the anti-shake image according to the histogram to obtain a target image.
Referring to fig. 2, the image processing apparatus 10 according to the embodiment of the present application may further include a projection module 18 and an optimization module 19. The projection module 18 is configured to perform gray-scale integral projection on a target area in the current two-dimensional image to obtain a histogram. The optimization module 19 is configured to optimize the anti-shake image according to the histogram to obtain a target image.
Referring to fig. 3, the program in the electronic device 100 according to the embodiment of the present application includes a program for executing the image processing methods in 08 and 09. That is, the one or more processors 30 are further configured to perform gray-scale integral projection on a target region in the current two-dimensional image to obtain a histogram; and optimize the anti-shake image according to the histogram to obtain a target image.
Gray integral projection is a common positioning method, and local areas of an image are rapidly positioned by analyzing the distribution characteristics of projection of a gray image in a specific direction. The gray integral projection is performed on the target area so as to finely adjust and optimize the anti-shake image, so that a more accurate anti-shake effect can be obtained, and the imaging quality of the electronic equipment is improved.
For example, when the subject is a human face, since the gray-scale integrated value of the human eye region in the projection direction is significantly lower than that of other parts of the face, the approximate position of the human eye region can be obtained by applying gray-scale integral projection to the image of the subject. In some embodiments, the gray-scale integral projection may directly integrate the gray-scale image of the two-dimensional image, or may integrate the binarized two-dimensional image.
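A gray integral projection over rows or columns can be sketched as follows; this is a hypothetical helper for illustration, not the application's code.

```python
import numpy as np

def gray_integral_projection(gray, axis=1):
    """Sum the pixel values of a grayscale region along rows (axis=1)
    or columns (axis=0) to form the integral-projection histogram."""
    return np.asarray(gray, dtype=np.int64).sum(axis=axis)
```

In the face example above, the darkest row of the projection (the smallest row sum) would indicate the approximate vertical position of the eye band.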
Referring to fig. 19, more specifically, in some embodiments, 08: gray integral projection is carried out on a target area in a current two-dimensional image to obtain a histogram, and the method comprises the following steps:
081: pixel value sums are calculated according to rows or columns of a target area in the current two-dimensional image to form a histogram H1 of a summation value and row coordinates or a histogram H1 of the summation value and column coordinates;
083: and summing pixel values of a target area of the two-dimensional image after the anti-shake of the previous frame according to rows or columns to form a histogram H2 of a summation value and row coordinates or a histogram H2 of the summation value and column coordinates.
Referring to fig. 2, the projection module 18 is further configured to: pixel value sums are calculated according to rows or columns of a target area in the current two-dimensional image to form a histogram H1 of a summation value and row coordinates or a histogram H1 of the summation value and column coordinates; and summing pixel values of a target area of the two-dimensional image after the anti-shake of the previous frame according to rows or columns to form a histogram H2 of a summation value and row coordinates or a histogram H2 of the summation value and column coordinates.
Referring to fig. 3, the program in the electronic device 100 according to the embodiment of the present application includes a program for executing the image processing methods in 081 and 083. That is, the one or more processors 30 are further configured to sum the pixel values of the target area in the current two-dimensional image by rows or columns to form a histogram H1 of sum values and row coordinates, or to form a histogram H1 of sum values and column coordinates; and summing pixel values of a target area of the two-dimensional image after the anti-shake of the previous frame according to rows or columns to form a histogram H2 of a summation value and row coordinates or a histogram H2 of the summation value and column coordinates.
Referring to fig. 20, the present application is described by taking as an example summing the pixel values of the target area in the current two-dimensional image by rows, and summing the pixel values of the target area in the two-dimensional image after the anti-shake of the previous frame by rows. In the case where the subject is a human face, gray integral projection is performed on the current two-dimensional image, and the pixel values in each row of the target area in the current two-dimensional image are accumulated to form a histogram H1 of summation values versus row coordinates; gray integral projection is performed on the two-dimensional image after the previous frame of anti-shake, and the pixel values in each row of its target area are accumulated to form a histogram H2 of summation values versus row coordinates.
Further, referring to fig. 20 and 21, in some embodiments, 09: optimizing the anti-shake image according to the histogram to obtain a target image, including:
091: performing translation optimization on the histogram H1 to minimize the difference between the histogram H2 and the histogram H1, and recording the number of first pixels of the histogram H1 translated on the X axis and the number of second pixels translated on the Y axis;
093: the current two-dimensional image is translated by a first number of pixels in the X-axis and a second number of pixels in the Y-axis to obtain a target image.
Referring to fig. 2, the optimization module 19 is further configured to: performing translation optimization on the histogram H1 to minimize the difference between the histogram H2 and the histogram H1, and recording the number of first pixels of the histogram H1 translated on the X axis and the number of second pixels translated on the Y axis; the current two-dimensional image is translated by a first number of pixels in the X-axis and a second number of pixels in the Y-axis to obtain a target image.
Referring to fig. 3, the program in the electronic device 100 according to the embodiment of the present application includes a program for executing the image processing methods 091 and 093. That is, the one or more processors 30 are further configured to perform a translation optimization on the histogram H1, minimize a difference between the histogram H2 and the histogram H1, and record a first number of pixels of the histogram H1 translated on the X-axis and a second number of pixels translated on the Y-axis; the current two-dimensional image is translated by a first number of pixels in the X-axis and a second number of pixels in the Y-axis to obtain a target image.
The translation optimization refers to translating the histogram pixel by pixel, searching in the direction that makes the statistic smaller. Translation optimization of the histogram H1 against the histogram H2 means shifting H1 so that the difference between H1 and H2, that is, the sum of absolute differences Σ|H2 − H1|, is minimized. When this sum reaches its minimum, the first number of pixels by which the histogram H1 was translated on the X axis and the second number of pixels by which it was translated on the Y axis are obtained, and the current two-dimensional image is translated by the first number of pixels on the X axis and the second number of pixels on the Y axis to obtain the target image.
Specifically, referring to fig. 22, the left graph in fig. 22 is the histogram H1 corresponding to the target region in the current two-dimensional image, and the right graph in fig. 22 is the histogram H2 corresponding to the target region in the two-dimensional image after the previous frame of anti-shake. The histogram H1 may be shifted downward (translated along the Y axis) by several pixels to minimize the difference between H1 and H2, that is, the sum Σ|H2 − H1|. When this sum is minimal, the number of pixels by which the histogram H1 was shifted along the Y axis is recorded, and the current two-dimensional image can then be shifted by that number of pixels on the Y axis to obtain the target image.
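The shift search of 091 can be sketched in one dimension as below. For clarity this sketch scans all integer shifts within a window exhaustively instead of stepping pixel by pixel as the text describes, and `np.roll` stands in for the translation, so the names and the circular boundary handling are assumptions.

```python
import numpy as np

def best_shift(h1, h2, max_shift=10):
    """Return the integer shift of histogram H1 that minimizes
    the sum of absolute differences with histogram H2."""
    best_s, best_cost = 0, None
    for s in range(-max_shift, max_shift + 1):
        cost = np.abs(h2 - np.roll(h1, s)).sum()  # circular shift for the sketch
        if best_cost is None or cost < best_cost:
            best_s, best_cost = s, cost
    return best_s
```

Running the search along rows and columns separately would yield the first and second pixel numbers by which the current two-dimensional image is then translated.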
Referring to fig. 23, the present application also provides a computer readable storage medium 200 having a computer program 210 stored thereon, which when executed by one or more processors 220 implements the image processing method of any of the embodiments described above.
For example, in the case where the program 210 is executed by the processor 220, the following image processing method is implemented:
02: in the case where the two-dimensional image is obtained by performing surrounding shooting for the electronic apparatus 100, a two-dimensional image and a depth image are acquired;
03: identifying a ground area according to the depth image, and acquiring a ground plane equation;
04: acquiring a main body region in the depth image according to a ground plane equation;
05: taking a main body region in the depth image as a mask, and extracting a target region in the two-dimensional image;
06: according to the gyroscope data, acquiring the attitude difference of the electronic equipment 100 when shooting each frame of two-dimensional image and shooting the previous frame of two-dimensional image, wherein the two-dimensional image is synchronous with the depth image frame, each frame of two-dimensional image corresponds to the gyroscope data acquired by the gyroscope, and the depth image comprises a plurality of depth values;
07: and executing anti-shake on the two-dimensional image according to the normal line and the attitude difference of the ground plane equation to obtain an anti-shake image.
As another example, in the case where the program 210 is executed by the processor 220, the following image processing method is implemented:
08: carrying out gray integral projection on a target area in a current two-dimensional image to obtain a histogram;
09: and optimizing the anti-shake image according to the histogram to obtain a target image.
As another example, the image processing methods 01, 011, 013, 015, 017, 019, 031, 032, 033, 034, 035, 036, 037, 038, 041, 043, 0411, 0413, 0415, 061, 063, 071, 072, 073, 0731, 074, 075, 076, 077, 078, 079, 0710, 081, 083, 091, and 093 can also be implemented when the program 210 is executed by the processor 220.
Note that the explanation of the image processing method and the image processing apparatus 10 in the foregoing embodiments is equally applicable to the computer-readable storage medium 200 in the embodiment of the present application, and the explanation thereof will not be repeated here.
In the non-transitory computer readable storage medium 200 of the present application, when the two-dimensional image is obtained by surrounding shooting of the electronic device 100, the acquired two-dimensional image and depth image are processed to obtain a ground plane equation; a main body region in the depth image is acquired according to the ground plane equation; a target region in the two-dimensional image is extracted according to the main body region; then the attitude difference of the electronic device 100 between shooting each frame of the two-dimensional image and shooting the previous frame of the two-dimensional image is acquired according to the gyroscope data; and finally anti-shake is performed on the target region of the two-dimensional image according to the normal of the ground plane equation and the attitude difference to obtain an anti-shake image. Thus, the whole image processing process can correct the image obtained by the electronic device 100 during surrounding shooting, ensure the image correction effect, and improve the imaging quality of the electronic device 100.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a computer-readable storage medium can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable storage medium may even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments. In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by those skilled in the art within the scope of the application, which is defined by the claims and their equivalents.
Claims (17)
1. An image processing method, comprising:
Acquiring a two-dimensional image and a depth image under the condition that the two-dimensional image is acquired by surrounding shooting of electronic equipment;
identifying a ground area according to the depth image, and acquiring a ground plane equation;
Acquiring a main body region in the depth image according to the ground plane equation;
taking a main area in the depth image as a mask, and extracting a target area in the two-dimensional image;
According to the gyroscope data, acquiring the attitude difference of the electronic equipment when shooting each frame of the two-dimensional image and shooting the previous frame of the two-dimensional image, wherein the two-dimensional image is synchronous with the depth image frame, each frame of the two-dimensional image corresponds to the gyroscope data acquired by the gyroscope, and the depth image comprises a plurality of depth values; and
And executing anti-shake on the two-dimensional image according to the normal line of the ground plane equation and the attitude difference to obtain an anti-shake image.
2. The image processing method according to claim 1, characterized by comprising:
Detecting whether the electronic device performs surround shooting.
3. The image processing method according to claim 2, wherein the detecting whether the electronic device performs the surround shooting includes:
acquiring a plurality of gyroscope data corresponding to a plurality of frames of two-dimensional images within a preset duration;
Calculating a first Euclidean distance between adjacent gyroscope data to serve as a change rate sequence of the plurality of gyroscope data; and
In the change rate sequence, if the latest preset number of first Euclidean distances are all smaller than a preset first threshold value, determining that the electronic equipment executes surrounding shooting.
4. The image processing method according to claim 3, wherein before the calculating of the first Euclidean distance between adjacent gyroscope data, the detecting whether the electronic device performs surround shooting further comprises:
Performing mean filtering on the plurality of gyroscope data; wherein the first Euclidean distance calculated between adjacent gyroscope data is the first Euclidean distance between adjacent filtered gyroscope data.
5. The image processing method according to claim 3, wherein the detecting whether the electronic device performs the surround shooting includes:
In the change rate sequence, if the latest preset number of first Euclidean distances are all larger than the preset first threshold value, determining that the electronic equipment does not execute surrounding shooting, and executing anti-shake by adopting the gyroscope.
6. The image processing method according to claim 1, wherein the identifying a ground area from the depth image and acquiring a ground plane equation includes:
Dividing the depth image into a plurality of grids;
Converting the depth value of each grid in the depth image into a three-dimensional point cloud, and converting the three-dimensional point cloud into a first coordinate system of a first image acquisition device, wherein the first image acquisition device is used for acquiring the two-dimensional image, and the depth value in the depth image is a depth value in a second coordinate system;
Fitting the three-dimensional point cloud of each grid under the first coordinate system by adopting a first plane fitting algorithm to obtain a plurality of first fitting plane equations;
removing grids with errors larger than a second threshold value from the first fitting plane;
Converting the value of the accelerometer into a first coordinate system of the first image acquisition device according to the calibration external parameters;
Acquiring an included angle between a normal line of the first fitting plane equation of the remaining grid and a direction of a value of the accelerometer under the first coordinate system;
removing grids with included angles larger than a third threshold value; and
And fitting the remaining three-dimensional point clouds of the grid by adopting a second plane fitting algorithm to obtain a second fitting plane equation, and taking the second fitting plane equation as the ground plane equation.
7. The image processing method according to claim 1, wherein the acquiring the subject region in the depth image according to the ground plane equation includes:
Removing the point cloud of the ground area according to the ground plane equation, removing the point cloud of a far area, and taking the rest point cloud as a main point cloud; and
And taking the region corresponding to the main point cloud in the depth image as a main region in the depth image.
8. The image processing method according to claim 7, wherein the removing the far-area point cloud includes:
Counting the depth values of all pixels in the depth image after removing the point cloud of the ground area;
dividing the depth value of the depth image with the ground area point cloud removed into a first type of depth value and a second type of depth value by adopting a preset algorithm, wherein the depth values in the first type of depth value are smaller than the depth values in the second type of depth value; and
And removing the point cloud corresponding to the depth value in the second class of depth values.
9. The image processing method according to claim 1, wherein the acquiring, from the gyroscope data, a difference in attitude of the electronic device when capturing the two-dimensional image of each frame and the two-dimensional image of a previous frame, includes:
Converting the rotation quantity of the gyroscope between two adjacent two-dimensional images into a plurality of rotation matrixes; and
And sequentially multiplying the rotation matrixes to the left according to the time sequence to obtain the attitude difference.
10. The image processing method according to claim 1, wherein the performing anti-shake on the two-dimensional image according to the normal line of the ground plane equation and the posture difference to obtain an anti-shake image includes:
Converting the value of the accelerometer into a first coordinate system of a first image acquisition device according to the calibration external parameters;
converting the rotation matrix corresponding to the attitude difference into a rotation vector;
Acquiring a zero setting item in the rotation vector;
converting the rotation vector after zero setting into a new rotation matrix;
transposing the new rotation matrix to obtain a transposed matrix;
converting the transpose matrix into a new rotation vector;
Obtaining a unit normal vector of the ground plane equation of the current frame as a first normal vector, and multiplying the first normal vector by the transpose matrix in the first coordinate system to obtain a second normal vector;
calculating a second Euclidean distance according to the unit normal vector of the ground plane equation of the previous frame and the second normal vector;
If the second Euclidean distance is smaller than a fourth threshold value, or the cycle number is larger than an adjustable parameter, reprojecting the two-dimensional image by adopting the transposed matrix to obtain the anti-shake image, wherein the unit normal vector of the ground plane equation of the previous frame is the normal vector of the current frame after the anti-shake compensation of the two-dimensional image, and the normal vector is stored for the calculation of the next frame; and
And when the second Euclidean distance is larger than the fourth threshold value, or the cycle number is smaller than the adjustable parameter, optimizing and adjusting the new rotation vector, converting the adjusted rotation vector into a new rotation matrix, and executing 075 to 079 until the second Euclidean distance is smaller than the fourth threshold value or the cycle number is larger than the adjustable parameter.
11. The image processing method according to claim 10, wherein the acquiring the zeroing term in the rotation vector includes:
Under the first coordinate system, if the absolute value of the X-axis value of the accelerometer is larger than the absolute value of the Y-axis value, the X-axis component in the rotation vector is set to zero, and if the absolute value of the X-axis value of the accelerometer is smaller than or equal to the absolute value of the Y-axis value, the Y-axis component in the rotation vector is set to zero.
12. The image processing method according to claim 1, characterized by further comprising:
carrying out gray integral projection on a target area in the current two-dimensional image to obtain a histogram; and
And optimizing the anti-shake image according to the histogram to obtain a target image.
13. The image processing method according to claim 12, wherein performing the gray-scale integral projection on the target area in the current two-dimensional image to obtain a histogram comprises:
summing the pixel values of the target area in the current two-dimensional image by row or by column to form a histogram H1 of sum values versus row coordinates or versus column coordinates; and
summing the pixel values of the target area of the anti-shake two-dimensional image of the previous frame by row or by column to form a histogram H2 of sum values versus row coordinates or versus column coordinates.
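Gray-scale integral projection as described in claim 13 reduces a 2-D region to a 1-D profile of per-row or per-column pixel sums. A minimal sketch (the function name and axis keywords are our assumptions):

```python
import numpy as np

def gray_integral_projection(region, axis="rows"):
    """Sum the pixel values of a grayscale region per row or per column,
    producing a 1-D profile (the claim's 'histogram' of sum values
    versus row or column coordinates).
    """
    region = np.asarray(region, dtype=np.int64)
    if axis == "rows":
        return region.sum(axis=1)   # one sum per row -> H of sums vs. row index
    return region.sum(axis=0)       # one sum per column -> H of sums vs. column index
```

Applying this to the target area of the current frame yields H1, and to the target area of the previous anti-shake frame yields H2.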
14. The image processing method according to claim 12, wherein optimizing the anti-shake image according to the histogram to obtain a target image comprises:
performing translation optimization on the histogram H1 to minimize the difference between the histogram H2 and the histogram H1, and recording the first number of pixels by which the histogram H1 is translated on the X axis and the second number of pixels by which it is translated on the Y axis; and
translating the current two-dimensional image by the first number of pixels on the X axis and the second number of pixels on the Y axis to obtain the target image.
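The translation optimization of claim 14 amounts to a 1-D search for the integer shift that best aligns the two projection profiles; run once on row profiles and once on column profiles, it yields the X and Y pixel counts. A sketch under assumed names, using sum-of-absolute-differences over the overlapping region as the difference measure (the patent does not specify the metric):

```python
import numpy as np

def best_shift(h1, h2, max_shift=5):
    """Find the integer shift of profile h1 (rightward positive) that
    minimizes its mean absolute difference from h2 over the overlap.
    A 1-D sketch of the translation search implied by claim 14.
    """
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a = h1[max(0, -s): len(h1) - max(0, s)]   # overlap of shifted h1
        b = h2[max(0, s): len(h2) - max(0, -s)]   # matching slice of h2
        if a.size == 0:
            continue                              # no overlap at this shift
        err = np.mean(np.abs(a - b))
        if err < best_err:
            best, best_err = s, err
    return best
```

The returned shift for the row-wise profiles would serve as one of the pixel counts, and the column-wise shift as the other, before translating the current two-dimensional image accordingly.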
15. An image processing apparatus, comprising:
a first acquisition module, used for acquiring the two-dimensional image and the depth image when the electronic device captures the two-dimensional image in surround shooting;
the identification module is used for identifying a ground area according to the depth image and acquiring a ground plane equation;
the second acquisition module is used for acquiring a main body area in the depth image according to the ground plane equation;
an extraction module, used for extracting a target area in the two-dimensional image by taking the main body area in the depth image as a mask;
a third acquisition module, used for acquiring, according to the gyroscope data, the attitude difference of the electronic device between shooting each frame of the two-dimensional image and shooting the two-dimensional image of the previous frame, wherein the two-dimensional image is frame-synchronized with the depth image, each frame of the two-dimensional image corresponds to gyroscope data acquired by the gyroscope, and the depth image comprises a plurality of depth values; and
And the execution module is used for executing anti-shake on the two-dimensional image according to the normal line of the ground plane equation and the attitude difference so as to obtain an anti-shake image.
16. An electronic device, comprising:
one or more processors and a memory; and
one or more computer programs, wherein the one or more computer programs are stored in the memory and, when executed by the processor, implement the image processing method of any one of claims 1 to 14.
17. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the image processing method of any one of claims 1 to 14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211552155.7A CN118158530A (en) | 2022-12-05 | 2022-12-05 | Image processing method, image processing apparatus, electronic device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118158530A true CN118158530A (en) | 2024-06-07 |
Family
ID=91287348
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211552155.7A Pending CN118158530A (en) | 2022-12-05 | 2022-12-05 | Image processing method, image processing apparatus, electronic device, and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118158530A (en) |
- 2022-12-05: Application CN202211552155.7A filed in CN; published as CN118158530A, status Pending
Similar Documents
Publication | Title
---|---
US10542208B2 (en) | Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information
US10234873B2 (en) | Flight device, flight control system and method
CN106683070B (en) | Height measuring method and device based on depth camera
US10515271B2 (en) | Flight device and flight control method
CN112106105B (en) | Method and system for generating three-dimensional image of object
KR102674646B1 (en) | Apparatus and method for obtaining distance information from a view
CN110717942B (en) | Image processing method and device, electronic equipment and computer readable storage medium
US8417059B2 (en) | Image processing device, image processing method, and program
US9832432B2 (en) | Control apparatus, image pickup apparatus, control method, and non-transitory computer-readable storage medium
CN109712192B (en) | Camera module calibration method and device, electronic equipment and computer readable storage medium
KR102206108B1 (en) | A point cloud registration method based on RGB-D camera for shooting volumetric objects
JP6577703B2 (en) | Image processing apparatus, image processing method, program, and storage medium
CN113822942B (en) | Method for measuring object size by monocular camera based on two-dimensional code
CN110243390B (en) | Pose determination method and device and odometer
US10116851B2 (en) | Optimized video denoising for heterogeneous multisensor system
CN103824303A (en) | Image perspective distortion adjusting method and device based on position and direction of photographed object
US20220329771A1 (en) | Method of pixel-by-pixel registration of an event camera to a frame camera
KR101745493B1 (en) | Apparatus and method for depth map generation
CN111882655A (en) | Method, apparatus, system, computer device and storage medium for three-dimensional reconstruction
US8340399B2 (en) | Method for determining a depth map from images, device for determining a depth map
CN105335959B (en) | Imaging device quick focusing method and its equipment
US9538161B2 (en) | System and method for stereoscopic photography
KR20220121533A (en) | Method and device for restoring image obtained from array camera
CN117058183A (en) | Image processing method and device based on double cameras, electronic equipment and storage medium
CN116704111B (en) | Image processing method and apparatus
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination