TWI571099B - Device and method for depth estimation - Google Patents
- Publication number
- TWI571099B
- Authority
- TW
- Taiwan
- Prior art keywords
- view image
- depth
- image
- feature
- pixel displacement
- Prior art date
Landscapes
- Measurement Of Optical Distance (AREA)
- Length Measuring Devices By Optical Means (AREA)
Description
The present disclosure relates to a depth estimation device and method.
In recent years, depth estimation techniques have been widely applied in areas such as vehicle distance sensing and stereoscopic imaging. To accurately measure the distance (depth) to objects in the environment, a LiDAR is conventionally used to perform a surround scan to obtain depth information. However, a low-resolution LiDAR often fails to detect fine details in the environment, while a LiDAR with high scanning resolution is expensive, which makes commercialization difficult.
Therefore, how to provide a depth estimation device and method that improve depth estimation efficiency while reducing deployment cost is one of the topics the industry is currently working on.
The present invention relates to a depth estimation device and method that use the feature points of dual-view images to improve depth estimation efficiency and to reduce the deployment cost of the depth sensor.
According to one aspect of the disclosure, a depth estimation device is provided, including a stereoscopic image capturing unit, an active depth sensor, and a processor. The stereoscopic image capturing unit provides a first-view image and a second-view image. The active depth sensor provides depth data. The processor performs a depth sensing process that includes: spatially transforming the depth data to generate transformed depth data; performing a feature search operation on the first-view image and the second-view image to determine a plurality of feature groups, each feature group including two feature points that correspond to the same object in the first-view image and the second-view image, respectively; performing an interpolation operation on the transformed depth data using the feature points to determine a pixel displacement search range for the first-view image and the second-view image; and searching, within the pixel displacement search range, for the pixel displacement at which the first-view image and the second-view image match, so as to generate a disparity image.
According to another aspect of the disclosure, a depth estimation method is provided, including the following steps: obtaining a first-view image and a second-view image from a stereoscopic image capturing unit; obtaining depth data from an active depth sensor; spatially transforming the depth data to generate transformed depth data; performing a feature search operation on the first-view image and the second-view image to determine a plurality of feature groups, each feature group including two feature points that correspond to the same object in the first-view image and the second-view image, respectively; performing an interpolation operation on the transformed depth data using the feature points to determine a pixel displacement search range for the first-view image and the second-view image; and searching, within the pixel displacement search range, for the pixel displacement at which the first-view image and the second-view image match, so as to generate a disparity image.
In order to better understand the above and other aspects of the present disclosure, preferred embodiments are described in detail below with reference to the accompanying drawings:
100‧‧‧depth estimation device
102‧‧‧stereoscopic image capturing unit
1022‧‧‧first camera lens
1024‧‧‧second camera lens
104‧‧‧active depth sensor
106‧‧‧processor
F1‧‧‧first-view image
F2‧‧‧second-view image
DD‧‧‧depth data
I‧‧‧disparity image
202, 204, 206, 208, 210, 212‧‧‧steps
OB‧‧‧object
OB1, OB2‧‧‧objects
P1~P3, P1'~P3'‧‧‧feature points
A1~A3‧‧‧points
TH‧‧‧threshold
P, P'‧‧‧pixels
X0, (X0+SR)-X1, X0+SR, (X0+SR)+X2‧‧‧coordinates
SSR‧‧‧pixel displacement search range
AX‧‧‧lens-center connecting line
SL‧‧‧scan line
x, y, z‧‧‧coordinate axes
FIG. 1 is a simplified block diagram of a depth estimation device according to an embodiment of the present disclosure.
FIG. 2 is a flowchart of a depth estimation method according to an embodiment of the present disclosure.
FIG. 3 is a schematic diagram of a first-view image and a second-view image according to an embodiment of the present disclosure.
FIG. 4 is a schematic diagram of determining whether to discard a feature group according to whether the pixel displacement between its two feature points in one direction exceeds a threshold.
FIG. 5 is a schematic diagram of a pixel displacement search range according to an embodiment of the present disclosure.
FIGS. 6A and 6B are layout diagrams of a stereoscopic image capturing unit and an active depth sensor according to different embodiments of the present disclosure.
Some embodiments of the present disclosure are described in detail herein with reference to the accompanying drawings, but not all embodiments are shown in the drawings. In fact, the disclosure may be embodied in many different variations and is not limited to the embodiments described herein. Rather, these embodiments are provided so that this disclosure satisfies applicable legal requirements. The same reference symbols are used throughout the drawings to denote the same or similar elements.
Please refer to FIG. 1 and FIG. 2. FIG. 1 is a simplified block diagram of a depth estimation device 100 according to an embodiment of the present disclosure. FIG. 2 is a flowchart of a depth estimation method according to an embodiment of the present disclosure. The depth estimation device 100 mainly includes a stereoscopic image capturing unit 102, an active depth sensor 104, and a processor 106. The stereoscopic image capturing unit 102 is, for example, a stereo camera with dual lenses. The active depth sensor 104 is, for example, a LiDAR, a radar, or another sensor that obtains depth data through active scanning.
In step 202, the stereoscopic image capturing unit 102 provides a first-view image F1 and a second-view image F2. The first-view image F1 and the second-view image F2 are, for example, images of the same scene captured from different viewing angles. As shown in FIG. 1, the stereoscopic image capturing unit 102 includes a first camera lens 1022 and a second camera lens 1024 for capturing the first-view image F1 and the second-view image F2 from different viewing angles, respectively. The first-view image F1 and the second-view image F2 are images that have undergone image rectification. In one embodiment, the first camera lens 1022 and the second camera lens 1024 are arranged side by side along a horizontal axis, in which case the first-view image F1 and the second-view image F2 correspond to a pair of left-eye and right-eye view images. The disclosure is not limited thereto; in one embodiment, the stereoscopic image capturing unit 102 may include multiple camera lenses (for example, more than two) to capture view images from different viewing angles.
In step 204, the active depth sensor 104 provides depth data DD. For example, the active depth sensor 104 may emit a scanning wave and estimate the distance to a scanned object from the detected echoes, thereby generating the corresponding depth data DD.
In one embodiment, the processor 106 may perform a depth sensing process based on the first-view and second-view images F1 and F2 and the depth data DD, and generate a disparity image I. The depth sensing process is shown, for example, in steps 206 to 212.
In step 206, a space transform is performed on the depth data DD to generate transformed depth data. The transformed depth data is aligned with the viewpoint of the first-view image F1 or the second-view image F2. The space transform is, for example, a coordinate system transform that converts the coordinate system of the depth data DD into the coordinate system of the first-view image F1 or the second-view image F2.
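To make step 206 concrete, below is a minimal sketch of such a space transform in Python, assuming the depth data arrives as a LiDAR point cloud and that a 4x4 extrinsic matrix T_cam_lidar and a 3x3 intrinsic matrix K are available from a prior calibration; these matrices and all names here are illustrative assumptions, not elements specified by the disclosure.

```python
import numpy as np

def transform_depth_to_camera(points_lidar, T_cam_lidar, K):
    """Project LiDAR points (N x 3, sensor frame) into the camera view.

    T_cam_lidar (4x4 extrinsics) and K (3x3 intrinsics) are assumed to
    come from calibration; the disclosure only requires that the
    transformed depth data share the viewpoint of the first-view or
    second-view image.
    """
    # Homogeneous coordinates, then the rigid-body transform.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Perspective projection onto the image plane.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, pts_cam[:, 2]  # pixel positions and their depths
```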
In step 208, a feature search operation is performed on the first-view image F1 and the second-view image F2 to determine a plurality of feature groups. Each feature group includes two feature points corresponding to the same object in the first-view image F1 and the second-view image F2, respectively. The feature search operation includes, for example, algorithms such as the Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), or FAST feature point detection.
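As a sketch of how step 208 could be realized (one possibility among the algorithms named above, not the patent's mandated implementation), the feature groups could be obtained with OpenCV's SIFT detector and a ratio-tested brute-force matcher:

```python
import cv2

def find_feature_groups(img1, img2, ratio=0.75):
    """Return matched feature-point pairs, one pair per feature group.

    img1/img2 are the grayscale first-view and second-view images; the
    ratio value is a common default, not a value fixed by the disclosure.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Lowe's ratio test discards ambiguous correspondences.
    groups = []
    for m, n in cv2.BFMatcher().knnMatch(des1, des2, k=2):
        if m.distance < ratio * n.distance:
            groups.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return groups
```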
In step 210, an interpolation operation is performed on the transformed depth data using the feature points, so as to determine a pixel displacement search range for the first-view image F1 and the second-view image F2.
The interpolation operation is, for example, nearest-neighbor interpolation, linear interpolation, cubic interpolation, polynomial interpolation, or another edge-aware data interpolation algorithm, such as a bilateral filter or a bilateral grid algorithm.
In general, the active depth sensor 104 (such as a LiDAR) performs a surround scan by emitting multiple scan lines to detect the distance to objects in the environment. However, the scan region formed by these scan lines is discrete (each scan line corresponds, for example, to a horizontal plane at a particular height), so interpolation is still required to produce depth information for the complete scene. During computation, the processor 106 of the disclosed embodiment can treat the feature-point pairs as known solutions when interpolating the transformed depth data, thereby reducing the error introduced by interpolation and, in turn, relaxing the scanning-resolution requirement on the active depth sensor 104.
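A minimal sketch of this anchored interpolation is shown below, using linear interpolation over scattered samples; the disclosure equally allows the other schemes listed earlier (nearest-neighbor, cubic, polynomial, or edge-aware methods such as a bilateral grid), and the function names here are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

def interpolate_disparity(sensor_uv, sensor_disp, feat_uv, feat_disp, shape):
    """Densify sparse sensor disparities, treating stereo feature pairs
    as additional known solutions (anchors)."""
    # Stack projected sensor samples with feature-point samples.
    uv = np.vstack([sensor_uv, feat_uv])          # (N, 2) pixel positions
    d = np.concatenate([sensor_disp, feat_disp])  # (N,) disparities

    h, w = shape
    gu, gv = np.meshgrid(np.arange(w), np.arange(h))
    # Per-pixel disparity reference values for the whole scene.
    return griddata(uv, d, (gu, gv), method='linear')
```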
In step 212, within the pixel displacement search range, the pixel displacement at which the first-view image F1 matches the second-view image F2 is searched for, so as to generate the disparity image I. Since the processor 106 only needs to search for the pixel displacement between the first-view image F1 and the second-view image F2 within the pixel displacement search range established in step 210, the amount of computation required to generate the disparity image I is effectively reduced.
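A brute-force block-matching sketch of this constrained search follows; the SAD cost, the 5x5 window, and the margins x1/x2 are illustrative assumptions, not values fixed by the disclosure.

```python
import numpy as np

def constrained_disparity(img1, img2, ref_disp, x1=4, x2=4, win=5):
    """Search only within [d_ref - x1, d_ref + x2] around each pixel's
    interpolated reference disparity, instead of the full range."""
    h, w = img1.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            if not np.isfinite(ref_disp[y, x]):
                continue
            d_ref = int(round(ref_disp[y, x]))
            patch = img1[y-half:y+half+1, x-half:x+half+1].astype(np.float32)
            best_cost, best_d = np.inf, d_ref
            for d in range(max(0, d_ref - x1), d_ref + x2 + 1):
                if x - d - half < 0:
                    continue
                cand = img2[y-half:y+half+1, x-d-half:x-d+half+1].astype(np.float32)
                cost = np.abs(patch - cand).sum()  # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```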
Note that the execution order of the above steps is not limited to the flow shown in FIG. 2. For example, steps 202 and 204 may be executed in either order or in parallel, and the same applies to steps 206 and 208.
FIG. 3 is a schematic diagram of a first-view image F1 and a second-view image F2 according to an embodiment of the present disclosure. In the example of FIG. 3, capturing an object OB from different viewing angles yields the first-view image F1 and the second-view image F2, respectively. Since the first-view image F1 and the second-view image F2 correspond to different viewing angles, the positions corresponding to the same object OB in the two images are offset from each other; this offset is the pixel displacement. In general, the farther away the object OB is, the smaller the pixel displacement between the two images; conversely, the closer the object OB is, the larger the pixel displacement between the two images.
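This inverse relation follows from standard rectified-stereo geometry: with focal length f (in pixels) and lens baseline B, disparity d and depth Z satisfy d = f*B/Z, so depth is recovered once the matching pixel displacement is found. A small numeric sketch (the f and B values are illustrative only):

```python
def disparity_to_depth(d, f=700.0, baseline=0.12):
    """Standard rectified-stereo relation Z = f * B / d.
    f (pixels) and baseline (meters) are illustrative values."""
    return f * baseline / d

print(disparity_to_depth(21.0))  # 700 * 0.12 / 21 = 4.0 meters
```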
The captured object OB corresponds to an object OB1 in the first-view image F1 and to an object OB2 in the second-view image F2. Through the feature search operation (such as SIFT or SURF), the feature points P1~P3 of the object OB1 and the feature points P1'~P3' of the object OB2 can be found. The two feature points P1 and P1', corresponding to the same point A1 on the object OB, can be regarded as one feature group; the two feature points P2 and P2', corresponding to the same point A2, can be regarded as another feature group; and the two feature points P3 and P3', corresponding to the same point A3, can be regarded as yet another feature group. Compared with flat or featureless regions (such as a white wall or blue sky), feature points are distinctive and robust, so depth calculations based on feature points are less prone to errors. As in step 210 of FIG. 2, the feature-point information can be used to correct the interpolation result, thereby relaxing the scanning-resolution requirement on the active depth sensor 104 and reducing the sensor deployment cost.
In one embodiment, the depth sensing process performed by the processor 106 further includes: determining whether the positional difference between the two feature points of each feature group along one axis exceeds a threshold; and discarding any feature group whose difference exceeds the threshold.
For example, if the first-view image F1 and the second-view image F2 correspond, after image rectification, to a left-eye view image and a right-eye view image respectively, the feature points of the left and right view images should in principle lie on the same horizontal line (the same axis). In this case, the pixel displacement between the corresponding feature points in the two view images should appear mainly in the horizontal direction, and the difference in the vertical direction should not be large. Accordingly, discarding feature groups whose vertical difference is too large, so that they are excluded from the subsequent interpolation, further reduces the chance of depth calculation errors.
As shown in FIG. 4, assume that the direction of the line connecting the lens centers of the first and second camera lenses 1022 and 1024, which provide the first-view and second-view images F1 and F2, is the x-axis direction. If the difference along the y-axis between the two feature points of a feature group (such as P1 and P1') exceeds a threshold TH, the feature group is discarded, that is, excluded from the subsequent interpolation operation, to avoid computing with erroneous feature groups.
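This check reduces to a one-line filter over the feature groups found earlier; the threshold value below is an assumed tuning parameter.

```python
def filter_feature_groups(groups, th=2.0):
    """Discard feature groups whose y-axis offset exceeds TH pixels.
    Each group is ((x1, y1), (x2, y2)); th is an assumed tuning value."""
    return [g for g in groups if abs(g[0][1] - g[1][1]) <= th]
```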
FIG. 5 is a schematic diagram of a pixel displacement search range SSR according to an embodiment of the present disclosure. In the disclosed embodiment, the processor 106 may perform a space transform on the depth data DD provided by the active depth sensor 104 and, using the feature points obtained from the first-view and second-view images F1 and F2, perform an interpolation operation on the transformed depth data (for example, disparity data) to generate a number of pixel displacement reference values; the pixel displacement search range is then established around these reference values.
As shown in FIG. 5, if the pixel displacement reference value of a pixel P located at coordinate X0 is SR, its corresponding pixel P' can be estimated to lie near coordinate X0+SR on the same y-axis row of the other view image, for example within the interval from coordinate (X0+SR)-X1 to coordinate (X0+SR)+X2. This interval from (X0+SR)-X1 to (X0+SR)+X2 defines the pixel displacement search range SSR. The processor 106 then only needs to search within the pixel displacement search range SSR to find the actual pixel displacement between the pixel P and its corresponding pixel P', instead of comparing every possible value one by one, which effectively reduces the computation required for depth estimation. In one embodiment, if the depth values of the interpolated transformed depth data vary considerably, the values of X1 and X2 may be enlarged accordingly to widen the pixel displacement search range SSR. In other words, the size of the pixel displacement search range SSR can be adjusted according to how much the depth values of the interpolated transformed depth data vary.
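One plausible way to realize this adaptive widening is to scale the margins with the local spread of the interpolated reference disparities; the window size, base margin, and scale factor below are illustrative assumptions.

```python
import numpy as np

def adaptive_margins(ref_disp, y, x, base=2, scale=1.0, win=7):
    """Widen the search margins X1/X2 where the interpolated disparity
    varies strongly around (y, x); base, scale, win are assumptions."""
    half = win // 2
    patch = ref_disp[max(0, y-half):y+half+1, max(0, x-half):x+half+1]
    spread = float(np.nanmax(patch) - np.nanmin(patch))
    margin = int(base + scale * spread)
    return margin, margin  # (X1, X2); symmetric here for simplicity
```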
FIGS. 6A and 6B are layout diagrams of the stereoscopic image capturing unit 102 and the active depth sensor 104 according to different embodiments of the present disclosure. In the example of FIG. 6A, the line AX connecting the lens centers of the first and second camera lenses 1022 and 1024 of the stereoscopic image capturing unit 102 is approximately or substantially parallel to the direction of the scan line SL of the active depth sensor 104.
In another embodiment, as shown in FIG. 6B, the line AX connecting the lens centers of the first and second camera lenses 1022 and 1024 of the stereoscopic image capturing unit 102 is not parallel to the direction of the scan line SL of the active depth sensor 104 (for example, it is approximately or substantially perpendicular). Since the first-view and second-view images F1 and F2 provided by the first and second camera lenses 1022 and 1024 determine the corresponding disparity/depth from the pixel displacement along the direction of the lens-center line AX (for example, the horizontal direction), they can complement the scanning precision of the active depth sensor 104 in the other direction (for example, the vertical direction).
In summary, the depth estimation device and method provided by the present disclosure interpolate the depth data provided by the active depth sensor based on feature-point information extracted from images of different viewing angles, which not only reduces the interpolation error but also relaxes the scanning-resolution requirement on the active depth sensor.
Although the disclosure has been described above by way of preferred embodiments, these are not intended to limit the disclosure. Those skilled in the art to which this disclosure pertains may make various changes and modifications without departing from the spirit and scope of the disclosure. Therefore, the scope of protection of this disclosure shall be defined by the appended claims.
Claims (12)
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201562242332P | 2015-10-16 | 2015-10-16 | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TWI571099B (en) | 2017-02-11 |
| TW201715882A (en) | 2017-05-01 |
Family
ID=58608191
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW104139474A (TWI571099B) | Device and method for depth estimation | 2015-10-16 | 2015-11-26 |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TWI571099B (en) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10694103B2 | 2018-04-24 | 2020-06-23 | Industrial Technology Research Institute | Building system and building method for panorama point cloud |
| US10891805B2 | 2018-04-26 | 2021-01-12 | Industrial Technology Research Institute | 3D model establishing device and calibration method applying to the same |
| CN113723380A * | 2021-11-03 | 2021-11-30 | 亿慧云智能科技(深圳)股份有限公司 | Face recognition method, device, equipment and storage medium based on radar technology |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI765339B * | 2020-09-08 | 2022-05-21 | 國立臺灣師範大學 | Stereoscopic Image Recognition and Matching System |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TW201241789A * | 2011-04-13 | 2012-10-16 | Univ Nat Taiwan | Method for generating disparity map of stereo video |
- 2015-11-26: Application TW104139474A filed in Taiwan; granted as TWI571099B (active)
Also Published As
| Publication number | Publication date |
|---|---|
| TW201715882A (en) | 2017-05-01 |