CN105160663A - Method and system for acquiring depth image - Google Patents
Method and system for acquiring a depth image
- Publication number
- CN105160663A (application CN201510523074.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- camera head
- depth
- depth image
- acquisition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The present invention relates to a method for acquiring a depth image. Based on a system comprising three or more cameras, the method comprises the following steps: calibrating the three or more cameras to obtain the positional relationship among them; for at least two pairs of cameras, acquiring from each pair a depth image corresponding to the same scene; and performing a weighted average over all acquired depth images to obtain a final depth image. The present invention further relates to a system for acquiring a depth image that applies the foregoing method.
Description
Technical field
The present invention relates to the field of stereoscopic image processing, and in particular to a method and system for acquiring a depth image.
Background art
In three-dimensional imaging applications, planar image information is used to perceive the distance of an object relative to the camera. For example, a motion-sensing game identifies the user and captures the user's movements through a monocular or binocular camera, so that the game is controlled by the user's actions. As another example, three-dimensional scanning applications require the position of an object in space to be determined accurately.
However, depth images computed from images captured by a monocular or binocular camera often contain noise and offer limited precision.
Summary of the invention
In view of the above, it is necessary to provide a method for acquiring a depth image whose depth values are more accurate than those obtained by conventional methods.
In addition, a system for acquiring a depth image is also provided.
A method for acquiring a depth image, based on a system comprising three or more cameras, the method comprising the following steps:
arranging the three or more cameras so that their relative positions are fixed;
calibrating the three or more cameras to obtain the positional relationship among them;
for at least two pairs of cameras, acquiring from each pair a depth image corresponding to the same scene; and
performing a weighted average over all acquired depth images to obtain a final depth image.
In one embodiment, the three or more cameras are triggered simultaneously to capture images of the target.
In one embodiment, the step of calibrating the three or more cameras to obtain the positional relationship among them comprises:
using the cameras to photograph a reference object with known image features from at least two different angles and distances;
comparing the captured images with the image features of the reference object; and
eliminating the vertical deviation between the imaging planes of each pair of cameras.
In one embodiment, the reference object bears a feature pattern distributed at uniform intervals.
In one embodiment, acquiring a depth image from a pair of cameras comprises:
acquiring a first image captured by one camera of the pair and a second image captured by the other camera;
taking the first image as a reference, selecting a search window of a set size and searching the second image for the block corresponding to an image feature of the first image, thereby obtaining a first pixel in the first image and a second pixel in the second image that correspond to the same target location;
calculating the depth d of the target location from the disparity X_r − X_t determined by the first and second pixels, the distance b between the imaging centers of the first and second cameras, and the focal length f of the cameras:
d = b × f / (X_r − X_t);
and, taking the first image as the reference, obtaining the depth of all target locations to form a depth image.
In one embodiment, the step of performing a weighted average over all acquired depth images to obtain a final depth image comprises:
adding the depth values of the corresponding target point in each depth image and taking the mean, the mean being the depth of that target point in the final depth image.
In one embodiment, the step of performing a weighted average over all acquired depth images to obtain a final depth image comprises:
averaging the depth values of the corresponding target point in each depth image according to their respective weights, the weighted mean being the depth of that target point in the final depth image; wherein a depth image with a lower information-loss rate is given a higher weight.
A system for acquiring a depth image comprises three or more cameras and an image processing apparatus, the relative positions of the three or more cameras being fixed, and the image processing apparatus being configured to acquire images of the same target captured by the three or more cameras and to process them to obtain a depth image.
The image processing apparatus comprises:
a calibration module, which calibrates the three or more cameras to obtain the positional relationship among them;
a depth image computing module, which, for at least two pairs of cameras, acquires from each pair a depth image corresponding to the same scene; and
a weighted average computing module, which performs a weighted average over all acquired depth images to obtain a final depth image.
In one embodiment, the depth image computing module comprises:
a search unit, which, taking the first image as a reference, selects a search window of a set size and searches the second image for the block corresponding to an image feature of the first image, thereby obtaining a first pixel in the first image and a second pixel in the second image that correspond to the same target location; and
a computing unit, which calculates the depth d of the target location from the disparity X_r − X_t determined by the first and second pixels, the distance b between the imaging centers of the first and second cameras, and the focal length f of the cameras:
d = b × f / (X_r − X_t);
and which, taking the first image as the reference, obtains the depth of all target locations to form a depth image.
In one embodiment, the weighted average computing module comprises:
a first weighted average unit, which adds the depth values of the corresponding target point in each depth image and takes the mean, the mean being the depth of that target point in the final depth image; or
a second weighted average unit, which averages the depth values of the corresponding target point in each depth image according to their respective weights, the weighted mean being the depth of that target point in the final depth image; wherein a depth image with a lower information-loss rate is given a higher weight.
With the above method and system, at least two depth images are obtained from at least three cameras, and a weighted average over these depth images yields a more accurate final depth image.
Brief description of the drawings
Fig. 1 is a flowchart of the method for acquiring a depth image according to one embodiment;
Fig. 2 is a schematic diagram of the positional arrangement of three cameras;
Fig. 3 is a schematic diagram of three cameras arranged in a row;
Fig. 4 is a flowchart of the method for acquiring a depth image according to another embodiment;
Fig. 5a shows a checkerboard pattern;
Fig. 5b is a schematic diagram of the checkerboard placed at a tilt;
Fig. 6a is a schematic diagram of the first image and the second image;
Fig. 6b is a schematic diagram of the principle of disparity formation;
Fig. 7 is a module diagram of the system for acquiring a depth image according to one embodiment.
Detailed description of the embodiments
The invention is further described below with reference to specific embodiments and the accompanying drawings.
The following embodiments address the inaccuracy of the depth information computed from images of a traditional monocular or binocular camera. Based on a system comprising three or more cameras, they provide a method for acquiring a depth image whose computed depth is more accurate. Before acquiring depth images, at least three cameras must be arranged to capture images. When arranging the cameras, their imaging planes should be placed in the same plane as far as possible, and the cameras should not be moved afterwards. From this point on, the relative positions of the at least three cameras are fixed.
As shown in Fig. 1, the method comprises the following steps:
Step S100: calibrate the three or more cameras to obtain the positional relationship among them. In a product, the three or more cameras may need to be arranged in a particular spatial configuration according to actual conditions, for example all placed equidistantly in a row, or arranged in a matrix. Once the cameras have been arranged, their mutual positional relationship is fixed and no longer changes.
Calibrating the cameras means obtaining the positional relationship between them. Usually, the imaging planes of the cameras all lie in one plane. Taking the three cameras shown in Fig. 2 as an example, cameras A, B and C are arranged in a triangle, and their imaging planes all lie in the X-Z plane.
The positional relationship between cameras can be represented mainly by the distances between their imaging centers and the angles between the lines connecting them. For brevity, "distance between imaging centers" is hereafter shortened to "distance". For example, the distance between cameras A and B is l1, the distance between cameras B and C is l2, and the distance between cameras A and C is l3; the angle between line AB and line BC is α1, the angle between line AB and line AC is α2, and the angle between line AC and line BC is α3.
It will be appreciated that the positional relationship between cameras can also be represented in other ways. For example, a spatial coordinate system may be established and the position of each camera expressed as coordinates in that system; the positional relationship between cameras can then be obtained by coordinate transformations.
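As a hedged illustration of this coordinate-based representation, the sketch below computes pairwise distances and connecting-line angles from assumed camera coordinates; the positions, units and helper names are made-up values for demonstration, not taken from the patent.

```python
import math

# Hypothetical imaging centers in a shared spatial coordinate system (mm).
cameras = {"A": (0.0, 0.0, 0.0), "B": (100.0, 0.0, 0.0), "C": (50.0, 0.0, 80.0)}

def distance(p, q):
    """Distance between two imaging centers (the 'l' values in the text)."""
    return math.dist(p, q)

def angle_at(vertex, p, q):
    """Angle in degrees at `vertex` between lines vertex-p and vertex-q."""
    v1 = tuple(a - b for a, b in zip(p, vertex))
    v2 = tuple(a - b for a, b in zip(q, vertex))
    dot = sum(a * b for a, b in zip(v1, v2))
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

l1 = distance(cameras["A"], cameras["B"])                    # baseline A-B
alpha1 = angle_at(cameras["B"], cameras["A"], cameras["C"])  # angle at B
```

Under a coordinate transformation (e.g. a rigid motion of the whole rig), these distances and angles are invariant, which is why either representation carries the same calibration information.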
The parameters of these positional relationships are used later to compute image depth.
Step S200: for at least two pairs of cameras, acquire from each pair a depth image corresponding to the same scene. A pair of cameras means two associated cameras; from a system comprising three or more cameras, at least two pairs can be selected for image capture in this embodiment. Again taking the situation of Fig. 2, one may select the two pairs (A, B) and (B, C), or the two pairs (A, C) and (B, C), or the two pairs (A, B) and (A, C). One may also select all three pairs (A, B), (A, C) and (B, C). Other numbers of cameras are possible, and the number of selectable pairs depends on that quantity: if the number of cameras is N, the number of camera pairs can take any value between 2 and N(N−1)/2. Preferably, all cameras are directly or indirectly associated with one another. For example, if cameras A and B form one pair and cameras B and C form another, camera B is shared, so cameras A and C are indirectly associated. In this way, when the cameras are calibrated, their mutual positional relationships can be obtained by cross-reference.
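With N cameras there are at most N·(N−1)/2 distinct pairs; the short enumeration below illustrates this (`candidate_pairs` is an illustrative helper, not a name used in the patent).

```python
from itertools import combinations

def candidate_pairs(camera_ids):
    """All distinct camera pairs; N cameras yield N*(N-1)//2 of them."""
    return list(combinations(camera_ids, 2))

pairs = candidate_pairs(["A", "B", "C"])
# → [('A', 'B'), ('A', 'C'), ('B', 'C')], i.e. up to 3 pairs for N = 3
```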
Each pair of cameras can obtain the depth information of an image by capturing that image, that is, the distance between the scene objects in the image and the cameras. During capture, images of the same scene must be obtained: the cameras must be triggered simultaneously to ensure that they record the state of the same scene at the same instant. Therefore, when a pair of cameras photographs the same scene, the differences between the two captured images should be limited to those caused by the different shooting angles; there must be no difference in image content, as would occur if the two cameras photographed different scenes.
Take the two pairs (A, B) and (B, C) as an example. Cameras A and B capture images IMGA and IMGB of the same scene from two different angles; combining these two images with the positional relationship between cameras A and B, the depth image of pair (A, B) can be computed. Likewise, cameras B and C capture images IMGB and IMGC of the same scene from two different angles; combining them with the positional relationship between cameras B and C, the depth image of pair (B, C) can be computed.
Step S300: perform a weighted average over all acquired depth images to obtain the final depth image.
Each selected pair of cameras outputs one depth image; a weighted average over all of these depth images yields the final depth image. Specifically, the corresponding pixels of the depth images are averaged with weights, and the weighted mean becomes the depth of that target point in the final depth image. In one case, all depth images carry equal weight, and the weighted mean is simply the mean of all depths. In another case, the depth images carry unequal weights, and each depth image contributes with its own weight; a depth image with a lower information-loss rate is given a higher weight.
Suppose the depths of the same target point in two depth images are M and N, and the weights of the two images are u and v respectively; then the depth k of that target point in the final depth image is:
k = u·M + v·N.
Applying the same procedure to all target points of the two images yields the final synthesized depth image.
The weight of a depth image is related to its information-loss rate. Suppose, for example, that image one has 3 locations with missing image information (typically locations with obvious noise) and image two has 5. When synthesizing a new depth image from images one and two, the weight of image one can be set to 5/(5+3), i.e. 5/8, and the weight of image two to 3/8. In this way, image one, which has the lower information-loss rate, receives the higher weight.
In other embodiments, where more depth images are synthesized, the weight of each depth image can be determined in other ways, for example by assigning fixed weights in order of increasing information loss: 30%, 20%, 10%, 5%, 5%, and so on. This can be set as required.
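The weighting scheme described above can be sketched as follows. The two-image weights match the worked example in the text (3 and 5 missing points give weights 5/8 and 3/8); the extension to n maps is an assumed generalization, and the depth values are toy numbers.

```python
def fusion_weights(missing_counts):
    """Weights inversely related to each depth map's missing-point count.

    Matches the two-image example in the text; the n-map normalization
    is an assumption, not something the patent specifies.
    """
    total = sum(missing_counts)
    n = len(missing_counts)
    return [(total - m) / (total * (n - 1)) for m in missing_counts]

def fuse(depth_maps, weights):
    """Per-pixel weighted average of equally sized depth maps (row lists)."""
    rows, cols = len(depth_maps[0]), len(depth_maps[0][0])
    return [[sum(w * d[r][c] for w, d in zip(weights, depth_maps))
             for c in range(cols)] for r in range(rows)]

w = fusion_weights([3, 5])                     # → [0.625, 0.375]
fused = fuse([[[2.0, 4.0]], [[4.0, 8.0]]], w)  # → [[2.75, 5.5]]
```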
A complete implementation is described in detail below using a specific embodiment. As shown in Fig. 3, the following embodiment is based on a system comprising three cameras 110, 120 and 130. The three cameras are arranged in a row, with their imaging centers on the same line and their imaging planes all lying in the X-Z plane. The distance between cameras 110 and 120 is b1, and the distance between cameras 120 and 130 is b2. b1 and b2 may be equal or unequal.
Step S201: use the cameras to photograph a reference object with known image features from at least two different angles and distances. The reference object may be a checkerboard. As shown in Fig. 5a, one form of checkerboard consists of alternating black and white square cells of equal size. During capture, the checkerboard is placed at a set position and a set angle. Because this angle and distance are known, the changes in the size of the normal pattern caused by the differing distance and angle can be calculated. As shown in Fig. 5b, because the right edge of the checkerboard is farther from the camera, the cells near the right edge of its image become correspondingly smaller; the degree of shrinkage is related to the distance and can be calculated.
When calibrating with a checkerboard, the board can be placed at multiple positions and angles to calibrate the cameras comprehensively.
For an uncalibrated camera, the captured image may not vary normally with distance but may instead be distorted, owing to the pinhole imaging principle.
Step S202: compare the captured images with the image features of the reference object. The image features of an undistorted reference object can be calculated and compared with the captured images. For example, if a checkerboard cell near the right edge should have size x under normal conditions but the corresponding cell in the actually captured image has size y, then the difference between y and x is a distortion parameter.
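The y − x comparison of step S202 can be illustrated with a few made-up cell sizes; the numbers and the helper name below are assumptions for demonstration only.

```python
def distortion_params(expected_sizes, measured_sizes):
    """Per-cell deviation y - x between the size x predicted from the
    board's known pose and the size y measured in the captured image."""
    return [y - x for x, y in zip(expected_sizes, measured_sizes)]

# Illustrative cell sizes in pixels, left to right across a tilted board:
# the expected sizes already shrink toward the far (right) edge.
expected = [20.0, 18.5, 17.2, 16.0]
measured = [20.1, 18.9, 17.0, 15.4]
deltas = distortion_params(expected, measured)  # nonzero entries flag distortion
```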
Step S203: eliminate the vertical deviation between the imaging planes of each pair of cameras. When the cameras are arranged, their imaging planes may differ in the Y direction, i.e. vertically; each deviation must be adjusted individually to complete the calibration of all cameras.
It will be appreciated that the checkerboard may be replaced by any other reference object with obvious and simple image features, for example one bearing a pattern of uniformly distributed circular blocks.
After calibration, the cameras can be put into use: they can be used to acquire depth images and, further, to obtain three-dimensional images.
The method of this embodiment then comprises the step of acquiring a depth image from each pair of cameras. Acquiring a depth image from one pair of cameras comprises the following steps:
Step S204: acquire the first image captured by one camera of the pair and the second image captured by the other camera. The description below uses cameras 110 and 120 of Fig. 3; the situation for cameras 120 and 130 is essentially the same.
In this embodiment, the image captured by camera 110 is called the first image 112, and the image captured by camera 120 is called the second image 122; see Fig. 6a. Following the positions at which the two cameras 110 and 120 are placed, the first image 112 and the second image 122 are also customarily called the left image and the right image.
Continuing with the coordinate system of Fig. 3, the first image 112 and the second image 122 are imaged in the X-Z plane. For the same target point P of the scene (see Fig. 6b), the two cameras 110 and 120 produce a pixel Pl on the first image 112 and a pixel Pr on the second image 122. (A single pixel may not suffice to express an image feature, so it would be more accurate to call Pl and Pr sets of pixels; for simplicity, each set is represented here by its central pixel.)
With reference to Figs. 5a and 5b, the position of the pixel corresponding to the same target point P differs between the first image 112 and the second image 122. This difference is expressed by the disparity (x_r − x_t).
Step S205: taking the first image as a reference, select a search window of a set size and search the second image for the block corresponding to an image feature of the first image, thereby obtaining a first pixel in the first image and a second pixel in the second image that correspond to the same target location.
The size of the search window is determined by the resolution of the images produced by the cameras: for higher-resolution images the search window may be smaller, and for lower-resolution images it may be larger.
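As a minimal sketch of the block search in step S205, the fragment below scores candidate blocks from the second image against a reference block using a sum of absolute differences (SAD). The patent does not name a matching cost, so SAD is an assumption, and the intensity values are toy data.

```python
def best_match_index(ref_block, candidate_blocks):
    """Index of the candidate block most similar to the reference block,
    using the sum of absolute differences (SAD) as the matching cost."""
    def sad(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    scores = [sad(ref_block, cand) for cand in candidate_blocks]
    return scores.index(min(scores))

ref = [10, 12, 11]                                    # block around a feature in image 1
candidates = [[9, 9, 9], [10, 12, 11], [30, 31, 29]]  # blocks inside the search window
match = best_match_index(ref, candidates)             # → 1
```

The column offset between the matched block and the reference block is then the disparity used in step S206.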
Step S206: calculate the depth d of the target location from the disparity X_r − X_t determined by the first and second pixels, the distance b between the imaging centers of the first and second cameras, and the focal length f of the cameras:
d = b × f / (X_r − X_t).
Step S207: repeat steps S204 to S206 until all pixels of the first image have been processed; that is, taking the first image as the reference, obtain the depth of all target locations to form a depth image.
Step S208: repeat steps S204 to S207 until all camera pairs have been processed; that is, obtain a depth image from each pair of cameras.
The method of this embodiment then comprises the step of performing a weighted average over all acquired depth images to obtain the final depth image. This step may follow the procedure of step S300 of the previous embodiment.
With the above method, at least two depth images are obtained from at least three cameras, and a weighted average over these depth images yields a more accurate final depth image.
As shown in Fig. 7, a system for acquiring a depth image comprises three or more cameras and an image processing apparatus. The relative positions of the three or more cameras are fixed, and the image processing apparatus acquires images of the same target captured by the three or more cameras and processes them to obtain a depth image.
The image processing apparatus comprises a calibration module, a depth image computing module and a weighted average computing module. The calibration module calibrates the three or more cameras to obtain the positional relationship among them. The depth image computing module, for at least two pairs of cameras, acquires from each pair a depth image corresponding to the same scene. The weighted average computing module performs a weighted average over all acquired depth images to obtain the final depth image.
The depth image computing module comprises a search unit and a computing unit.
The search unit, taking the first image as a reference, selects a search window of a set size and searches the second image for the block corresponding to an image feature of the first image, thereby obtaining a first pixel in the first image and a second pixel in the second image that correspond to the same target location.
The computing unit calculates the depth d of the target location from the disparity X_r − X_t determined by the first and second pixels, the distance b between the imaging centers of the first and second cameras, and the focal length f of the cameras:
d = b × f / (X_r − X_t);
and, taking the first image as the reference, obtains the depth of all target locations to form a depth image.
The weighted average computing module comprises a first weighted average unit and/or a second weighted average unit.
The first weighted average unit adds the depth values of the corresponding target point in each depth image and takes the mean, the mean being the depth of that target point in the final depth image.
The second weighted average unit averages the depth values of the corresponding target point in each depth image according to their respective weights, the weighted mean being the depth of that target point in the final depth image; a depth image with a lower information-loss rate is given a higher weight.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these features have been described; nevertheless, as long as a combination of technical features involves no contradiction, it should be considered within the scope recorded in this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the invention, all of which fall within the protection scope of the invention. Therefore, the protection scope of this patent shall be determined by the appended claims.
Claims (10)
1. A method for acquiring a depth image, based on a system comprising three or more cameras, the method comprising the following steps:
arranging the three or more cameras so that their relative positions are fixed;
calibrating the three or more cameras to obtain the positional relationship among them;
for at least two pairs of cameras, acquiring from each pair a depth image corresponding to the same scene; and
performing a weighted average over all acquired depth images to obtain a final depth image.
2. The method for acquiring a depth image according to claim 1, characterized in that the three or more cameras are triggered simultaneously to capture images of the target.
3. The method for acquiring a depth image according to claim 1, characterized in that the step of calibrating the three or more cameras to obtain the positional relationship among them comprises:
using the cameras to photograph a reference object with known image features from at least two different angles and distances;
comparing the captured images with the image features of the reference object; and
eliminating the vertical deviation between the imaging planes of each pair of cameras.
4. The method for acquiring a depth image according to claim 3, characterized in that the reference object bears a feature pattern distributed at uniform intervals.
5. The method for acquiring a depth image according to any one of claims 1 to 4, characterized in that acquiring a depth image from a pair of cameras comprises:
acquiring a first image captured by one camera of the pair and a second image captured by the other camera;
taking the first image as a reference, selecting a search window of a set size and searching the second image for the block corresponding to an image feature of the first image, thereby obtaining a first pixel in the first image and a second pixel in the second image that correspond to the same target location;
calculating the depth d of the target location from the disparity X_r − X_t determined by the first and second pixels, the distance b between the imaging centers of the first and second cameras, and the focal length f of the cameras:
d = b × f / (X_r − X_t);
and, taking the first image as the reference, obtaining the depth of all target locations to form a depth image.
6. The method for acquiring a depth image according to claim 1, characterized in that the step of performing a weighted average over all acquired depth images to obtain a final depth image comprises:
adding the depth values of the corresponding target point in each depth image and taking the mean, the mean being the depth of that target point in the final depth image.
7. The method for acquiring a depth image according to claim 1, characterized in that the step of performing a weighted average over all acquired depth images to obtain a final depth image comprises:
averaging the depth values of the corresponding target point in each depth image according to their respective weights, the weighted mean being the depth of that target point in the final depth image; wherein a depth image with a lower information-loss rate is given a higher weight.
8. A system for acquiring a depth image, comprising three or more cameras and an image processing apparatus, the relative positions of the three or more cameras being fixed, and the image processing apparatus being configured to acquire images of the same target captured by the three or more cameras and to process them to obtain a depth image;
the image processing apparatus comprising:
a calibration module, which calibrates the three or more cameras to obtain the positional relationship among them;
a depth image computing module, which, for at least two pairs of cameras, acquires from each pair a depth image corresponding to the same scene; and
a weighted average computing module, which performs a weighted average over all acquired depth images to obtain a final depth image.
9. The system for acquiring a depth image according to claim 8, wherein the depth image calculation module comprises:
a search unit, configured to, taking the first image as a reference, select a search window of a set size and search the second image for a block corresponding to an image feature in the first image, thereby obtaining a first pixel in the first image and a second pixel in the second image that correspond to a same target position of a same target image;
a calculation unit, configured to calculate the depth d of the target position of the target image from the parallax distance X_R − X_T determined by the first pixel and the second pixel, the distance b between the imaging centers of the first camera apparatus and the second camera apparatus, and the focal length f of the first and second camera apparatuses:

d = b × f / (X_R − X_T);

and, taking the first image as the reference, obtain the depths of all target positions to form the depth image.
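The search unit and calculation unit of claim 9 can be illustrated with a minimal sketch. A sum-of-absolute-differences (SAD) score stands in for the claim's "block corresponding with the image feature"; rectified images, a fixed scanline, and the disparity search range are assumptions of this sketch, not claim limitations:

```python
import numpy as np

def match_and_depth(first, second, row, x_r, win, baseline_b, focal_f, max_disp=64):
    """Naive block matching along one scanline, then depth d = b * f / (X_R - X_T).

    first/second: rectified grayscale images (NumPy arrays); (row, x_r) is a
    pixel in the first (reference) image; win: half-size of the square search
    window; returns (x_t, depth) for the best-matching second-image pixel.
    """
    ref = first[row - win:row + win + 1, x_r - win:x_r + win + 1].astype(float)
    best_x, best_cost = None, np.inf
    for disp in range(1, max_disp + 1):          # search leftward in the second image
        x_t = x_r - disp
        if x_t - win < 0:
            break
        cand = second[row - win:row + win + 1, x_t - win:x_t + win + 1].astype(float)
        cost = np.abs(ref - cand).sum()          # SAD matching cost
        if cost < best_cost:
            best_cost, best_x = cost, x_t
    disparity = x_r - best_x                     # parallax distance X_R - X_T
    return best_x, baseline_b * focal_f / disparity
```

Running this over every reference pixel, still with the first image as the benchmark, yields the per-position depths that form the depth image.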
10. The system for acquiring a depth image according to claim 8, wherein the weighted averaging module comprises:
a first averaging unit, configured to sum and average the depth values of a corresponding target point in each depth image, and take the average value as the depth of the target point in the final depth image; or
a second averaging unit, configured to average the depth values of a corresponding target point in each depth image according to corresponding weights, and take the average value as the depth of the target point in the final depth image; wherein a depth image with a lower missing-information rate is assigned a higher weight.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510523074.8A CN105160663A (en) | 2015-08-24 | 2015-08-24 | Method and system for acquiring depth image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105160663A true CN105160663A (en) | 2015-12-16 |
Family
ID=54801505
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510523074.8A Pending CN105160663A (en) | 2015-08-24 | 2015-08-24 | Method and system for acquiring depth image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105160663A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006028434A2 (en) * | 2003-07-10 | 2006-03-16 | Sarnoff Corporation | Method and apparatus for refining target position and size estimates using image and depth data |
CN1946195A (en) * | 2006-10-26 | 2007-04-11 | 上海交通大学 | Scene depth restoring and three dimension re-setting method for stereo visual system |
CN101674418A (en) * | 2008-09-10 | 2010-03-17 | 新奥特(北京)视频技术有限公司 | Method for detecting depth of emcee in virtual studio system |
CN103824318A (en) * | 2014-02-13 | 2014-05-28 | 西安交通大学 | Multi-camera-array depth perception method |
CN104519343A (en) * | 2013-09-26 | 2015-04-15 | 西克股份公司 | 3D camera in accordance with stereoscopic principle and method of detecting depth maps |
CN104766291A (en) * | 2014-01-02 | 2015-07-08 | 株式会社理光 | Method and system for calibrating multiple cameras |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106934828A (en) * | 2015-12-28 | 2017-07-07 | 纬创资通股份有限公司 | Depth image processing method and depth image processing system |
CN106934828B (en) * | 2015-12-28 | 2019-12-06 | 纬创资通股份有限公司 | Depth image processing method and depth image processing system |
CN105700551A (en) * | 2016-01-27 | 2016-06-22 | 浙江大华技术股份有限公司 | An unmanned aerial vehicle landing area determination method, an unmanned aerial vehicle landing method and related apparatuses |
CN106683133A (en) * | 2016-12-09 | 2017-05-17 | 深圳奥比中光科技有限公司 | Method for acquiring target depth image |
CN106780589A (en) * | 2016-12-09 | 2017-05-31 | 深圳奥比中光科技有限公司 | A kind of method for obtaining target depth image |
CN107274447A (en) * | 2017-07-14 | 2017-10-20 | 梅卡曼德(北京)机器人科技有限公司 | Integrated phase shift range finding and depth image acquisition method |
CN109945840A (en) * | 2017-12-20 | 2019-06-28 | 纬创资通股份有限公司 | 3-dimensional image acquisition method and system |
CN109978987A (en) * | 2017-12-28 | 2019-07-05 | 周秦娜 | A kind of control method, apparatus and system constructing panorama based on multiple depth cameras |
CN108460368A (en) * | 2018-03-30 | 2018-08-28 | 百度在线网络技术(北京)有限公司 | 3-D view synthetic method, device and computer readable storage medium |
CN108460368B (en) * | 2018-03-30 | 2021-07-09 | 百度在线网络技术(北京)有限公司 | Three-dimensional image synthesis method and device and computer-readable storage medium |
CN108805921A (en) * | 2018-04-09 | 2018-11-13 | 深圳奥比中光科技有限公司 | Image-taking system and method |
CN109064415A (en) * | 2018-07-09 | 2018-12-21 | 奇酷互联网络科技(深圳)有限公司 | Image processing method, system, readable storage medium storing program for executing and terminal |
CN110876006A (en) * | 2018-08-30 | 2020-03-10 | 美国亚德诺半导体公司 | Depth image obtained using multiple exposures in combination |
CN110876006B (en) * | 2018-08-30 | 2021-09-07 | 美国亚德诺半导体公司 | Depth image obtained using multiple exposures in combination |
WO2020048509A1 (en) * | 2018-09-06 | 2020-03-12 | 杭州海康威视数字技术股份有限公司 | Inter-frame area mapping method and apparatus, and multi-camera observing system |
CN109712192A (en) * | 2018-11-30 | 2019-05-03 | Oppo广东移动通信有限公司 | Camera module scaling method, device, electronic equipment and computer readable storage medium |
CN112866629A (en) * | 2019-11-27 | 2021-05-28 | 深圳市大富科技股份有限公司 | Binocular vision application control method and terminal |
CN112129262A (en) * | 2020-09-01 | 2020-12-25 | 珠海市一微半导体有限公司 | Visual ranging method and visual navigation chip of multi-camera group |
CN112577603A (en) * | 2020-10-09 | 2021-03-30 | 国网浙江宁波市奉化区供电有限公司 | Switch cabinet real-time monitoring method and system based on cable connector and ambient temperature thereof |
CN113869231A (en) * | 2021-09-29 | 2021-12-31 | 亮风台(上海)信息科技有限公司 | Method and equipment for acquiring real-time image information of target object |
CN113869231B (en) * | 2021-09-29 | 2023-01-31 | 亮风台(上海)信息科技有限公司 | Method and equipment for acquiring real-time image information of target object |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105160663A (en) | Method and system for acquiring depth image | |
CN106412433B (en) | Atomatic focusing method and system based on RGB-IR depth camera | |
US8928755B2 (en) | Information processing apparatus and method | |
US8111910B2 (en) | Stereoscopic image processing device, method, recording medium and stereoscopic imaging apparatus | |
CN111145342A (en) | Binocular speckle structured light three-dimensional reconstruction method and system | |
US20150195509A1 (en) | Systems and Methods for Incorporating Two Dimensional Images Captured by a Moving Studio Camera with Actively Controlled Optics into a Virtual Three Dimensional Coordinate System | |
CN110009672A (en) | Promote ToF depth image processing method, 3D rendering imaging method and electronic equipment | |
CN112150528A (en) | Depth image acquisition method, terminal and computer readable storage medium | |
JP2016524125A (en) | System and method for stereoscopic imaging using a camera array | |
JP2016019194A (en) | Image processing apparatus, image processing method, and image projection device | |
CN102724398B (en) | Image data providing method, combination method thereof, and presentation method thereof | |
CN107808398B (en) | Camera parameter calculation device, calculation method, program, and recording medium | |
CN105551020B (en) | A kind of method and device detecting object size | |
KR20110052993A (en) | Method and apparatus for compensating image | |
CN109840922B (en) | Depth acquisition method and system based on binocular light field camera | |
CN109523595A (en) | A kind of architectural engineering straight line corner angle spacing vision measuring method | |
CN106170086B (en) | Method and device thereof, the system of drawing three-dimensional image | |
CN106033614B (en) | A kind of mobile camera motion object detection method under strong parallax | |
CN108230399A (en) | A kind of projector calibrating method based on structured light technique | |
JP2011160421A (en) | Method and apparatus for creating stereoscopic image, and program | |
CN109191533A (en) | Tower crane high-altitude construction method based on assembled architecture | |
JP7378219B2 (en) | Imaging device, image processing device, control method, and program | |
KR101745493B1 (en) | Apparatus and method for depth map generation | |
JP7300895B2 (en) | Image processing device, image processing method, program, and storage medium | |
JP2019106145A (en) | Generation device, generation method and program of three-dimensional model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20151216 |