CN103561258A - Kinect depth video spatio-temporal union restoration method - Google Patents
- Publication number
- CN103561258A, CN103561258B, CN201310442055A, CN201310442055.3A
- Authority
- CN
- China
- Prior art keywords
- depth
- image
- color
- region
- kinect
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
A Kinect depth video spatio-temporal union restoration method is provided. Based on the hypothesis that neighboring pixels with similar colors should have similar depth values, a color-segmentation map of the corresponding color image guides depth filling in the first depth frame and in the motion regions extracted from every subsequent depth frame. Because dark-colored regions without valid depth values can defeat this color-guided filling, the dark regions are detected first, and their hole regions are repaired with the valid depth values found inside the same dark region. For static regions of the depth video captured by the Kinect, a hole pixel in the current depth frame is filled with the depth value at the corresponding position in the previous depth frame. When depth-image-based rendering is used to draw a virtual viewpoint, the image quality of the virtual right view obtained with this method is clearly better than that of the original virtual right view, so the method can be applied to 3D rendering.
Description
Technical field
The present invention relates to the field of image/video processing and can be applied to 3D rendering.
Technical background
Three-dimensional stereoscopic television is widely expected to bring a more natural, more lifelike viewing experience. With the development of stereoscopic display and video processing technology, 3D video has become a research hotspot in recent years. Two main schemes currently realize a stereoscopic video system. The first is the multi-view scheme: an array of cameras captures the three-dimensional scene, which is then played back on a stereoscopic display. The second is the "texture + depth" scheme: a color texture video and a depth video describe the color texture information and the depth information of the scene respectively; combining the two streams, depth-image-based rendering (DIBR) draws virtual views, and the synthesized 3D video is finally presented on the display. Compared with the first scheme, the second has advantages such as a smaller data transmission bandwidth and easier virtual-view rendering.
There are two main ways to obtain depth information. The first is passive: stereo matching algorithms, which struggle to balance algorithmic complexity against the quality of the resulting depth video. The second is active: depth cameras, which currently fall into two classes, time-of-flight (TOF) cameras and the Kinect. Both compute depth by emitting and receiving reflected light or speckle patterns, extracting depth information in real time while also capturing a corresponding real-time color video. The Kinect has attracted particular attention because it is cheap and extracts relatively high-resolution depth maps. However, the depth video the Kinect extracts is of mediocre quality: occlusion regions and smooth object surfaces show large holes where depth information is lost, and the depth values of the same static region also change over time, so the video must be repaired by hole filling.
Among existing repair algorithms, simple space-based ones dominate. Optical noise in depth video is usually reduced with filters such as median filtering, bilateral filtering, joint bilateral filtering, and linear interpolation. For example, document 1 (Massimo Camplani and Luis Salgado, "Efficient Spatio-temporal Hole Filling Strategy for Kinect Depth Maps", in Proceedings of SPIE, 82900E, 2012) iteratively applies a joint bilateral filter over the spatial and temporal neighbors of static scenes captured by a Kinect to repair depth values. These spatial repair methods are effective at eliminating small hole points, protecting object boundaries, and smoothing the image. But for dark-colored or retro-reflective regions, which easily produce large hole regions that need filling, their effect is limited. If one exploits the fact that objects within a block of similar color should have similar depth values, filling large hole regions becomes tractable. In addition, because of illumination effects, background objects in the depth video captured by a Kinect have unstable depth values across frames and sometimes even turn into hole points, so a time-based repair method must also be introduced. Some time-based repair algorithms already exist; for example, document 2 (Sung-Yeol Kim, Ji-Ho Cho, Andreas Koschan, and Mongi A. Abidi, "Spatial and temporal enhancement of depth images captured by a time-of-flight depth sensor", in Proc. of the IEEE International Conference on Pattern Recognition (ICPR), pp. 2358-2361, 2010) uses a joint bilateral filter together with motion-estimation-based temporal repair on consecutive depth images from a TOF camera. That method effectively reduces optical noise and repairs borders, but it does not consider depth repair of reflective or dark-colored regions, and its effect depends on the accuracy of the motion estimation.
Summary of the invention
Because most depth-video repair algorithms work only in the spatial domain, a Kinect depth video spatio-temporal union restoration method is proposed here. Based on the hypothesis that neighboring pixels with similar colors should have similar depth values, a color-segmentation map of the corresponding color image guides depth filling in the first depth frame and in the motion regions extracted from all subsequent depth frames. Because dark-colored regions without valid depth values may defeat this approach, the dark regions are detected first, and their hole regions are then repaired with the valid depth values inside the same dark region. For static regions of the depth video captured by the Kinect, a hole pixel in the current depth frame is filled with the depth value at the corresponding position in the previous depth frame. In brief, the technical scheme of the invention comprises the following steps:
Step S1: For the first depth frame, apply mean-shift color segmentation to the corresponding color image to obtain a label map, then split the label map with eight-connected component analysis to further isolate each spatially continuous color block. Within each color block in which the proportion of valid depth values exceeds a threshold, fill the hole pixels with the median of all valid depth values in that block.
Step S2: For the first depth frame repaired in step S1, convert the corresponding original RGB color image to YCbCr, set separate thresholds on the Y, Cb, and Cr components, and take the regions satisfying all threshold conditions as dark regions. Label spatially connected pixels as one block, fill the hole pixels in each block with the median of all valid depth values in the block, and finally apply a dilation-erosion operation to repair the small remaining hole regions in the depth image.
Step S3: For every depth image after the first, difference the gray-scale versions of the current and previous color frames to find the moving pixels, then find the minimal rectangle containing those pixels; treat this rectangle as the motion region and apply the spatial repair of step S1 to it.
Step S4: For every depth image after the first, the part remaining after the rectangular motion region is cut out is the static background region. If a hole pixel appears in the static region, repair it with the depth value at the corresponding position of the repaired depth map of the previous frame.
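The four steps above can be sketched as a small driver loop. This is an illustrative outline only, not the patented implementation: the three per-step routines (`repair_first`, `repair_motion`, `repair_static`) are hypothetical names standing in for steps S1+S2, S3, and S4 described in the text.

```python
import numpy as np

def repair_depth_video(depth_frames, color_frames,
                       repair_first, repair_motion, repair_static):
    """Spatio-temporal repair pipeline sketch (steps S1-S4).

    repair_first / repair_motion / repair_static are placeholders for the
    per-step routines described in the text (hypothetical signatures).
    """
    repaired = []
    # S1 + S2: the first depth frame is repaired spatially,
    # guided by a color segmentation of its color frame.
    prev = repair_first(depth_frames[0], color_frames[0])
    repaired.append(prev)
    for k in range(1, len(depth_frames)):
        # S3: spatial repair inside the detected motion rectangle,
        # found by differencing the two gray-scale color frames.
        cur = repair_motion(depth_frames[k],
                            color_frames[k - 1], color_frames[k])
        # S4: remaining static-region holes copy the previous repaired frame.
        cur = repair_static(cur, prev)
        repaired.append(cur)
        prev = cur
    return repaired
```

The loop makes the temporal dependency explicit: frame k's static region is filled from the already repaired frame k-1, so repairs propagate forward through the sequence.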
Unlike the prior art, the above technical scheme embodies the following key technical points:
1. Most Kinect depth-image repair algorithms do not address the large hole regions that dark-colored or retro-reflective areas easily produce; the invention proposes a hole repair method for dark regions. The dark regions are detected first by converting the original RGB color image to YCbCr, thresholding the Y, Cb, and Cr components separately, and keeping the regions that satisfy all threshold conditions. Spatially connected pixels are then labeled as one block, and the hole pixels in each block are filled with the median of all valid depth values in the block.
2. The invention combines a spatial repair algorithm with a temporal repair algorithm based on motion-region detection, solving the inaccurate repair and long repair time that a purely spatial algorithm may exhibit, and so achieving a better repair result.
3. The label map obtained after color segmentation is further split by eight-connected component analysis to separate color blocks that are not spatially contiguous, making the color segmentation result more accurate.
4. In the temporal repair algorithm, the detected moving points are enclosed in a minimal rectangle that is treated as the motion region, which makes it convenient to repair moving and static regions differently.
Compared with the prior art, the method of the invention therefore has clear benefits: the hole repair effect in dark regions is obvious, and for the same captured scene the depth images obtained with the proposed Kinect depth-video restoration method are well repaired relative to the original depth maps. When depth-image-based rendering (DIBR) is used to draw a virtual view, the image quality of the virtual right view obtained with the proposed method is also clearly higher than that of the original virtual right view.
Brief description of the drawings
Fig. 1 is the flow chart of the Kinect depth video spatio-temporal union restoration method of the present invention.
Fig. 2 shows three consecutive color images and depth images captured by the Kinect in the example of the present invention.
Fig. 3 shows (a) the color-segmentation map of the first color frame captured by the example Kinect, obtained through step S1, and (b) the depth map after repair.
Fig. 4 shows (a) the dark regions of the first frame obtained through step S2 (different colors represent different dark regions; gray areas are not dark regions) and (b) the depth map after repair.
Fig. 5 shows, for the example, (a) the gray-scale map of the first frame obtained through step S3, (b) the gray-scale map of the second frame, (c) the moving-point distribution map of the second frame, (d) the enlarged motion-region sub-image of the second color frame, (e) the depth sub-image corresponding to the motion region of the second frame, and (f) the depth sub-image of the motion region after repair.
Fig. 6 shows the final results of the example: (a) the unprocessed depth image, (b) the depth image after repair, (c) the virtual right view obtained without depth repair, (d) the virtual right view obtained with depth repair, (e) the enlarged display region of (c), (f) the enlarged display region of (d), (g) the enlarged right-chair-leg region of (c), and (h) the enlarged right-chair-leg region of (d).
Embodiment
The invention is further described below with a concrete example and with reference to the accompanying drawings:
The effect of the proposed method is tested on a depth video sequence captured by a Kinect and its corresponding color image sequence. The video sequence consists of 100 frames at a resolution of 640 × 480. All examples use MATLAB 7 as the simulation platform.
The flow of the invention is shown in Fig. 1. The first depth frame receives a spatial repair: the color-segmentation map of its corresponding color image guides the initial depth filling, and the dark regions are then hole-repaired to further improve depth-map quality. For every depth map after the first, the motion region is extracted first and repaired spatially on its own; the hole points in the remaining static region are then filled with the depth values at the corresponding positions of the previous repaired depth map. Each step of this example is described in detail below:
Step (1): For the depth image currently being repaired, here the first depth frame (Fig. 2(d)), perform mean-shift color segmentation on its corresponding color image, the first color frame (Fig. 2(a)), to obtain a label map carrying the color information and the color-segmentation map of Fig. 3(a). Let the image resolution be n × m; each pixel (i, j) has a label L_ij giving the index of its color block, and x_ij is the color value at that point. After all color data are clustered, points whose spatial distance is less than h_c and whose color-space distance is less than h_l fall into the same cluster. C_p denotes the feature set, q is the number of features, and g(x) is the negative derivative of the kernel function of the image feature space. The label map obtained after segmentation is then described in terms of these quantities.
Since some regions sharing the same color label are not spatially connected, eight-connected component analysis further isolates each continuous color block. Let D_ij^k be the depth value of hole point (i, j) after repair within the k-th color block, S_k the k-th color block of the color image, D(S_k) the set of all valid depth values in the block, n(S_k) the number of pixels in S_k, n(D(S_k)) the number of pixels in S_k with a valid depth value, and T a threshold. Holes are then filled by

D_ij^k = median(D(S_k)), if n(D(S_k)) / n(S_k) > T.
The depth map after the repair of step (1) is shown in Fig. 3(b).
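The color-block median fill of step (1) can be sketched as follows. This is an illustrative sketch, not the patented code: hole pixels are assumed to carry depth 0, `scipy.ndimage.label` stands in for the eight-connected component analysis, and the default threshold value 0.5 for T is an assumption (the text leaves T unspecified).

```python
import numpy as np
from scipy import ndimage

def fill_by_color_blocks(depth, labels, valid_ratio_thresh=0.5):
    """Fill depth holes (value 0) guided by color-segmentation blocks.

    `labels` is the per-pixel label map from color segmentation; each label
    is further split into 8-connected blocks before filling, as in step (1).
    """
    out = depth.astype(np.float64).copy()
    eight = np.ones((3, 3), dtype=bool)  # 8-connectivity structure
    for lab in np.unique(labels):
        # split this color label into spatially continuous blocks
        blocks, n_blocks = ndimage.label(labels == lab, structure=eight)
        for b in range(1, n_blocks + 1):
            block = blocks == b
            valid = block & (depth > 0)
            # fill only when enough of the block already has valid depth:
            # n(D(S_k)) / n(S_k) > T
            if valid.sum() / block.sum() > valid_ratio_thresh:
                out[block & (depth == 0)] = np.median(depth[valid])
    return out
```

Taking the median of the valid depths, rather than the mean, keeps the fill robust to the outlier depth values that Kinect holes often border on.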
Step (2): The depth image repaired in step (1) may still contain hole points belonging to large dark regions. The original RGB color image of the current frame is therefore converted to YCbCr, separate thresholds are set on the Y, Cb, and Cr components, and the regions satisfying all threshold conditions are taken as dark regions. Let luma(i, j) be the Y component value of point (i, j), blue_diff(i, j) the blue chroma difference, and red_diff(i, j) the red chroma difference; T_luma, T_blue_diff, and T_red_diff are thresholds computed by Otsu's method, and L is a parameter set to 0.8. The regions with non-zero brightness values that satisfy the threshold conditions are exactly the dark regions; Fig. 4(a) shows the dark regions extracted from the example color image Fig. 2(a).

After the dark regions are found, spatially connected pixels are labeled as one block, and each hole pixel (i, j) in a block is filled with the median of all valid depth values D_xy in the block, as in formula 4. Finally, a dilation-erosion operation with an elliptical structuring element of radius 1 and height 3 (the dilation and erosion operators, the latter written ⊙) repairs the small remaining hole regions D_ij in the depth image, giving the repaired depth values of formula 5.
The depth map after the repair of step (2) is shown in Fig. 4(b).
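Step (2) can be sketched as below. This is an illustrative sketch under stated assumptions: the BT.601 RGB-to-YCbCr conversion is a standard choice, the fixed thresholds `t_luma` and `t_chroma` stand in for the Otsu-derived thresholds and the 0.8 parameter L of the patent, and a 3 × 3 structuring element approximates the elliptical radius-1, height-3 element.

```python
import numpy as np
from scipy import ndimage

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> (Y, Cb, Cr), values in [0, 255]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def fill_dark_regions(depth, rgb, t_luma=50.0, t_chroma=20.0):
    """Detect dark (low-luma, low-chroma) regions and fill their depth holes."""
    y, cb, cr = rgb_to_ycbcr(rgb.astype(np.float64))
    dark = (y < t_luma) & (np.abs(cb - 128) < t_chroma) \
                        & (np.abs(cr - 128) < t_chroma)
    out = depth.astype(np.float64).copy()
    regions, n = ndimage.label(dark)  # spatially connected dark blocks
    for r in range(1, n + 1):
        region = regions == r
        valid = region & (depth > 0)
        if valid.any():
            # fill the block's holes with the median of its valid depths
            out[region & (depth == 0)] = np.median(depth[valid])
    # dilation then erosion (morphological closing) for remaining pinholes
    closed = ndimage.grey_erosion(ndimage.grey_dilation(out, size=(3, 3)),
                                  size=(3, 3))
    hole = out == 0
    out[hole] = closed[hole]
    return out
```

The key idea matches the text: a dark block borrows depth only from valid pixels inside the same dark block, never from its differently colored surroundings.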
Step (3): For every depth map after the first, the gray-scale maps of the previous and current color frames (Fig. 5(a) and (b)) are differenced as in formula 6. With difference(i, j) denoting the gray difference at point (i, j), the moving pixels (i, j) are those for which motion_mask(i, j) is non-zero, as shown in Fig. 5(c). The minimal rectangle containing the moving pixels is then found and treated as the motion region; Fig. 5(d) and (e) show the color sub-image and the depth sub-image of the motion region. Finally, the motion region receives the spatial repair of step (1); the repaired depth sub-image is shown in Fig. 5(f).
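The motion-rectangle extraction of step (3) can be sketched as follows; the difference threshold of 10 gray levels is an assumption, since the text does not state the value used to binarize the motion mask.

```python
import numpy as np

def motion_rectangle(gray_prev, gray_cur, diff_thresh=10):
    """Minimal rectangle containing all pixels that moved between two frames.

    Returns (r0, r1, c0, c1) slice bounds of the smallest rectangle holding
    every pixel whose absolute gray difference exceeds diff_thresh, or None
    if no pixel moved.
    """
    diff = np.abs(gray_cur.astype(np.int32) - gray_prev.astype(np.int32))
    rows, cols = np.nonzero(diff > diff_thresh)  # the moving pixels
    if rows.size == 0:
        return None
    return rows.min(), rows.max() + 1, cols.min(), cols.max() + 1
```

Returning slice bounds makes the later steps convenient: `depth[r0:r1, c0:c1]` is the motion-region depth sub-image handed to the spatial repair, and the complement is the static background region of step (4).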
Step (4): For every depth image after the first, the part remaining after the rectangular motion region is cut out is the static background region. A hole pixel (i, j) in the static region is repaired with the depth value D_ij^{k-1} at the corresponding position of the repaired depth map of the previous frame (frame k-1), as in the formula

D_ij^k = D_ij^{k-1}.
After repair, each depth frame is thus composed of a motion-region depth map and a static-region depth map.
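The temporal fill of step (4) is a direct copy from the previous repaired frame, restricted to the static region; a short sketch (hole pixels again assumed to be 0, and the rectangle in the format returned by the hypothetical `motion_rectangle` helper above):

```python
import numpy as np

def fill_static_from_previous(depth_cur, depth_prev_repaired, motion_rect=None):
    """Fill static-region holes of frame k from the repaired frame k-1.

    Pixels inside the motion rectangle (r0, r1, c0, c1), if given, are
    excluded: they are handled by the spatial repair of step (3) instead.
    """
    out = depth_cur.astype(np.float64).copy()
    static = np.ones(out.shape, dtype=bool)
    if motion_rect is not None:
        r0, r1, c0, c1 = motion_rect
        static[r0:r1, c0:c1] = False
    hole = static & (out == 0)
    out[hole] = depth_prev_repaired[hole]  # D_ij^k = D_ij^{k-1}
    return out
```

Because the previous frame has itself already been repaired, a hole that persists over many frames is still filled, with the value propagating forward from the last frame in which that position was valid.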
Fig. 6(a) and (b) show the unprocessed third depth frame and the same frame after processing with the present method. The corresponding right views drawn with the DIBR technique are shown in Fig. 6(c) and (d).
These images show that the invention repairs the depth image effectively and thus provides a cleaner depth map for 3D rendering. The virtual view drawn with the repaired depth image has better visual quality than the one drawn with the depth image as originally captured by the Kinect. In important detail areas in particular, such as the girl's hair, the display, and the chair legs, the repaired depth map yields smoother and better texture for 3D rendering.
Innovative point of the invention: a spatio-temporal joint restoration method, still rarely used at present, fills and repairs the original depth video captured by the Kinect, improving repair efficiency with a marked effect.
Claims (1)
1. A Kinect depth video spatio-temporal union restoration method, characterized by comprising the steps of:
Step S1: for the first depth frame, applying mean-shift color segmentation to the corresponding color image to obtain a label map, then splitting the label map with eight-connected component analysis to further isolate each spatially continuous color block, and, within each color block in which the proportion of valid depth values exceeds a threshold, filling the hole pixels with the median of all valid depth values in that block;
Step S2: for the first depth frame repaired in step S1, converting the corresponding original RGB color image to YCbCr, setting separate thresholds on the Y, Cb, and Cr components, taking the regions satisfying all threshold conditions as dark regions, labeling spatially connected pixels as one block, filling the hole pixels in each block with the median of all valid depth values in the block, and finally applying a dilation-erosion operation to repair the small remaining hole regions in the depth image;
Step S3: for every depth image after the first, differencing the gray-scale versions of the current and previous color frames to find the moving pixels, then finding the minimal rectangle containing those pixels, treating this rectangle as the motion region, and applying the spatial repair of step S1 to it;
Step S4: for every depth image after the first, taking the part remaining after the rectangular motion region is cut out as the static background region, and, if a hole pixel appears in the static region, repairing it with the depth value at the corresponding position of the repaired depth map of the previous frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310442055.3A CN103561258B (en) | 2013-09-25 | 2013-09-25 | Kinect depth video spatio-temporal union restoration method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103561258A true CN103561258A (en) | 2014-02-05 |
CN103561258B CN103561258B (en) | 2015-04-15 |
Family
ID=50015395
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310442055.3A Expired - Fee Related CN103561258B (en) | 2013-09-25 | 2013-09-25 | Kinect depth video spatio-temporal union restoration method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103561258B (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103996174A (en) * | 2014-05-12 | 2014-08-20 | 上海大学 | Method for performing hole repair on Kinect depth images |
CN104299220A (en) * | 2014-07-10 | 2015-01-21 | 上海大学 | Method for filling cavity in Kinect depth image in real time |
CN105096311A (en) * | 2014-07-01 | 2015-11-25 | 中国科学院科学传播研究中心 | Technology for restoring depth image and combining virtual and real scenes based on GPU (Graphic Processing Unit) |
CN105869115A (en) * | 2016-03-25 | 2016-08-17 | 浙江大学 | Depth image super-resolution method based on kinect2.0 |
CN105894503A (en) * | 2016-03-30 | 2016-08-24 | 江苏大学 | Method for restoring Kinect plant color and depth detection images |
CN106355642A (en) * | 2016-08-31 | 2017-01-25 | 上海交通大学 | Three-dimensional reconstruction method, based on depth map, of green leaf |
CN106355611A (en) * | 2016-09-13 | 2017-01-25 | 江苏奥斯汀光电科技股份有限公司 | Naked-eye 3D (three-dimensional) super-resolution filtering method on basis of temporal and spatial correlation |
CN106991370A (en) * | 2017-02-28 | 2017-07-28 | 中科唯实科技(北京)有限公司 | Pedestrian retrieval method based on color and depth |
CN108090877A (en) * | 2017-11-29 | 2018-05-29 | 深圳慎始科技有限公司 | A kind of RGB-D camera depth image repair methods based on image sequence |
CN108399632A (en) * | 2018-03-02 | 2018-08-14 | 重庆邮电大学 | A kind of RGB-D camera depth image repair methods of joint coloured image |
CN108629756A (en) * | 2018-04-28 | 2018-10-09 | 东北大学 | A kind of Kinect v2 depth images Null Spot restorative procedure |
CN109949397A (en) * | 2019-03-29 | 2019-06-28 | 哈尔滨理工大学 | A kind of depth map reconstruction method of combination laser point and average drifting |
CN110390681A (en) * | 2019-07-17 | 2019-10-29 | 海伯森技术(深圳)有限公司 | A kind of map object profile rapid extracting method and device based on depth camera |
CN112929628A (en) * | 2021-02-08 | 2021-06-08 | 咪咕视讯科技有限公司 | Virtual viewpoint synthesis method and device, electronic equipment and storage medium |
WO2021237471A1 (en) * | 2020-05-26 | 2021-12-02 | Baidu.Com Times Technology (Beijing) Co., Ltd. | Depth-guided video inpainting for autonomous driving |
WO2022188102A1 (en) * | 2021-03-11 | 2022-09-15 | Oppo广东移动通信有限公司 | Depth image inpainting method and apparatus, camera assembly, and electronic device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110081042A1 (en) * | 2009-10-07 | 2011-04-07 | Samsung Electronics Co., Ltd. | Apparatus and method for adjusting depth |
CN103220543A (en) * | 2013-04-25 | 2013-07-24 | 同济大学 | Real time three dimensional (3D) video communication system and implement method thereof based on Kinect |
CN103258078A (en) * | 2013-04-02 | 2013-08-21 | 上海交通大学 | Human-computer interaction virtual assembly system fusing Kinect equipment and Delmia environment |
- 2013-09-25: CN201310442055.3A filed; granted as CN103561258B (status: Expired - Fee Related)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110081042A1 (en) * | 2009-10-07 | 2011-04-07 | Samsung Electronics Co., Ltd. | Apparatus and method for adjusting depth |
CN103258078A (en) * | 2013-04-02 | 2013-08-21 | 上海交通大学 | Human-computer interaction virtual assembly system fusing Kinect equipment and Delmia environment |
CN103220543A (en) * | 2013-04-25 | 2013-07-24 | 同济大学 | Real time three dimensional (3D) video communication system and implement method thereof based on Kinect |
Non-Patent Citations (3)
Title |
---|
JINGJING FU,ET AL: "Kinect-like depth denoising", 《CIRCUITS AND SYSTEMS(ISCAS),2012 IEEE INTERNATIONAL SYMPOSIUM ON》, 23 May 2012 (2012-05-23), pages 512 - 515 * |
SUNG-YEOL KIM,ET AL: "Spatial and Temporal Enhancement of Depth Images Captured by a Time-of-flight Depth Sensor", 《2010 INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION》, 31 December 2010 (2010-12-31), pages 2358 - 2361 * |
YU YU,ET AL: "A Shadow Repair Approach for Kinect Depth Maps", 《COMPUTER VISION - ACCV 2012,PART Ⅳ》, 9 November 2012 (2012-11-09), pages 615 - 626, XP047027216 * |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103996174B (en) * | 2014-05-12 | 2017-05-10 | 上海大学 | Method for performing hole repair on Kinect depth images |
CN103996174A (en) * | 2014-05-12 | 2014-08-20 | 上海大学 | Method for performing hole repair on Kinect depth images |
CN105096311A (en) * | 2014-07-01 | 2015-11-25 | 中国科学院科学传播研究中心 | Technology for restoring depth image and combining virtual and real scenes based on GPU (Graphic Processing Unit) |
CN104299220A (en) * | 2014-07-10 | 2015-01-21 | 上海大学 | Method for filling cavity in Kinect depth image in real time |
CN104299220B (en) * | 2014-07-10 | 2017-05-31 | 上海大学 | A kind of method that cavity in Kinect depth image carries out real-time filling |
CN105869115B (en) * | 2016-03-25 | 2019-02-22 | 浙江大学 | A kind of depth image super-resolution method based on kinect2.0 |
CN105869115A (en) * | 2016-03-25 | 2016-08-17 | 浙江大学 | Depth image super-resolution method based on kinect2.0 |
CN105894503A (en) * | 2016-03-30 | 2016-08-24 | 江苏大学 | Method for restoring Kinect plant color and depth detection images |
CN105894503B (en) * | 2016-03-30 | 2019-10-01 | 江苏大学 | A kind of restorative procedure of pair of Kinect plant colour and depth detection image |
CN106355642A (en) * | 2016-08-31 | 2017-01-25 | 上海交通大学 | Three-dimensional reconstruction method, based on depth map, of green leaf |
CN106355642B (en) * | 2016-08-31 | 2019-04-02 | 上海交通大学 | A kind of three-dimensional rebuilding method of the green leaves based on depth map |
CN106355611B (en) * | 2016-09-13 | 2019-03-22 | 江苏奥斯汀光电科技股份有限公司 | The associated naked eye 3D supersolution in space is as filtering method when one kind is based on |
CN106355611A (en) * | 2016-09-13 | 2017-01-25 | 江苏奥斯汀光电科技股份有限公司 | Naked-eye 3D (three-dimensional) super-resolution filtering method on basis of temporal and spatial correlation |
CN106991370A (en) * | 2017-02-28 | 2017-07-28 | 中科唯实科技(北京)有限公司 | Pedestrian retrieval method based on color and depth |
CN106991370B (en) * | 2017-02-28 | 2020-07-31 | 中科唯实科技(北京)有限公司 | Pedestrian retrieval method based on color and depth |
CN108090877A (en) * | 2017-11-29 | 2018-05-29 | 深圳慎始科技有限公司 | A kind of RGB-D camera depth image repair methods based on image sequence |
CN108399632A (en) * | 2018-03-02 | 2018-08-14 | 重庆邮电大学 | A kind of RGB-D camera depth image repair methods of joint coloured image |
CN108399632B (en) * | 2018-03-02 | 2021-06-15 | 重庆邮电大学 | RGB-D camera depth image restoration method based on color image combination |
CN108629756A (en) * | 2018-04-28 | 2018-10-09 | 东北大学 | A kind of Kinect v2 depth images Null Spot restorative procedure |
CN108629756B (en) * | 2018-04-28 | 2021-06-25 | 东北大学 | Kinectv2 depth image invalid point repairing method |
CN109949397A (en) * | 2019-03-29 | 2019-06-28 | 哈尔滨理工大学 | A kind of depth map reconstruction method of combination laser point and average drifting |
CN110390681A (en) * | 2019-07-17 | 2019-10-29 | 海伯森技术(深圳)有限公司 | A kind of map object profile rapid extracting method and device based on depth camera |
CN110390681B (en) * | 2019-07-17 | 2023-04-11 | 海伯森技术(深圳)有限公司 | Depth image object contour rapid extraction method and device based on depth camera |
US11282164B2 (en) | 2020-05-26 | 2022-03-22 | Baidu Usa Llc | Depth-guided video inpainting for autonomous driving |
WO2021237471A1 (en) * | 2020-05-26 | 2021-12-02 | Baidu.Com Times Technology (Beijing) Co., Ltd. | Depth-guided video inpainting for autonomous driving |
CN112929628A (en) * | 2021-02-08 | 2021-06-08 | 咪咕视讯科技有限公司 | Virtual viewpoint synthesis method and device, electronic equipment and storage medium |
CN112929628B (en) * | 2021-02-08 | 2023-11-21 | 咪咕视讯科技有限公司 | Virtual viewpoint synthesis method, device, electronic equipment and storage medium |
WO2022188102A1 (en) * | 2021-03-11 | 2022-09-15 | Oppo广东移动通信有限公司 | Depth image inpainting method and apparatus, camera assembly, and electronic device |
Also Published As
Publication number | Publication date |
---|---|
CN103561258B (en) | 2015-04-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20150415 Termination date: 20170925 |