TWI433529B - Method for intensifying 3d objects identification - Google Patents
Info
- Publication number
- TWI433529B (application TW099132046A)
- Authority
- TW
- Taiwan
- Prior art keywords
- eye image
- corrected
- right eye
- left eye
- objects
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/174—Segmentation; Edge detection involving the use of two or more images
- G06T7/11—Region-based segmentation
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Length Measuring Devices By Optical Means (AREA)
- User Interface Of Digital Computer (AREA)
Description
The present invention relates to a method for identifying 3D objects, and more particularly to a method that separates at least two clustered 3D objects by means of Gaussian filtering and a watershed segmentation algorithm, thereby enhancing the identification of 3D objects.
When a stereo camera captures an image containing a 3D object, a disparity map between the 3D object and the background can be obtained. By mounting the stereo camera at an oblique angle and simulating a plan view, a user can determine the position and movement of 3D objects in space. A 3D object identification system can therefore exploit these characteristics for applications such as head counting or people-flow detection.
When identifying 3D objects such as people, however, situations in which two or more 3D objects cluster together are unavoidable. The clustered objects also merge on the plan view, so the identification system easily misjudges two or more 3D objects (for example, two people) as a single 3D object, causing errors when counting.
The present invention provides a method for enhancing the identification of 3D objects, comprising: capturing a left-eye image and a right-eye image with a left-eye camera and a right-eye camera; correcting the left-eye image and the right-eye image to generate a corrected left-eye image and a corrected right-eye image; generating a disparity map from the corrected left-eye image and the corrected right-eye image; determining from the disparity map a 3D object that differs from the background; projecting that 3D object onto a plan view; filtering noise from the plan view to produce a de-noised 3D object; determining whether the de-noised 3D object is an aggregation of at least two 3D objects; and, if so, separating the at least two 3D objects.
In the method provided by the present invention, the 3D objects in the disparity map generated from the left-eye and right-eye images are projected onto a plan view. A Gaussian filter then removes noise from the 3D objects and enhances their contour features. Finally, a watershed segmentation algorithm separates two or more 3D objects that have clustered together.
Please refer to FIG. 1, a flowchart illustrating a method for enhancing the identification of 3D objects according to an embodiment of the present invention. The steps of FIG. 1 are detailed as follows:
Step 100: Start.
Step 102: Capture a left-eye image LI and a right-eye image RI with a left-eye camera LC and a right-eye camera RC.
Step 104: Correct the left-eye image LI and the right-eye image RI to generate a corrected left-eye image CLI and a corrected right-eye image CRI.
Step 106: Generate a disparity map from the corrected left-eye image CLI and the corrected right-eye image CRI.
Step 108: Determine from the disparity map a 3D object that differs from the background.
Step 110: Project the 3D object onto a plan view.
Step 112: Filter noise from the plan view to produce a de-noised 3D object.
Step 114: Determine whether the de-noised 3D object is an aggregation of at least two 3D objects.
Step 116: If so, separate the at least two 3D objects.
Step 118: End.
In step 104, the left-eye image LI and the right-eye image RI are corrected with calibration parameters to generate a corrected left-eye image CLI and a corrected right-eye image CRI. The calibration parameters include the distance B between the left-eye camera LC and the right-eye camera RC, obtained offline, and the two cameras capture the left-eye image LI and the right-eye image RI synchronously. In step 106, a disparity map is generated from the corrected left-eye image CLI and the corrected right-eye image CRI. From the disparity map and the distance B between the two cameras, the depth D of a 3D object from the baseline on which the cameras lie can be derived. Please refer to FIG. 2 and FIG. 3. FIG. 2 illustrates superimposing the corrected left-eye image CLI and the corrected right-eye image CRI to obtain the disparity dx, and FIG. 3 illustrates deriving the depth D with the left-eye camera LC and the right-eye camera RC. As shown in FIG. 2, the imaging position PL(XL, YL) is the position of a 3D object in the corrected left-eye image CLI, and the imaging position PR(XR, YR) is its position in the corrected right-eye image CRI, where the 3D objects here include the static background contained in both corrected images. The disparity dx is therefore obtained from the imaging positions PL(XL, YL) and PR(XR, YR) according to equation (1).
dx = XR - XL (1)
As shown in FIG. 3, the depth D of the 3D object, i.e. its Z coordinate, is derived from the disparity dx, the distance B between the left-eye camera LC and the right-eye camera RC, and the focal length f of the two cameras according to equation (2). Here, the 3D object may be any 3D object in the corrected left-eye image CLI or the corrected right-eye image CRI.
D = Z = f * (B / dx) (2)
In step 108, a 3D object that differs from the background, in particular one that newly appears in the corrected left-eye image CLI and the corrected right-eye image CRI, is determined from the disparity map obtained in step 106. Because the depth of the background does not change, such an object can be picked out of the disparity map. Once the depth Z of the 3D object is known, its X and Y coordinates are generated according to equations (3) and (4). In this way the stereoscopic information of the 3D object, i.e. its three-dimensional coordinates (X, Y, Z), is obtained from the two image planes of the left-eye camera LC and the right-eye camera RC, where XL and YL in equations (3) and (4) may also be replaced by XR and YR.
X = (XL * Z) / f (3)
Y = (YL * Z) / f (4)
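Equations (1) through (4) translate directly into code. Below is a minimal sketch of the triangulation, assuming already-corrected (rectified) images, a focal length f expressed in pixels, and a matched pair of imaging positions; the function name and its inputs are illustrative and not taken from the patent.

```python
import numpy as np

def triangulate(pl, pr, f, B):
    """Recover the (X, Y, Z) coordinates of a point from its projections.

    pl, pr -- (x, y) imaging positions in the corrected left/right images
    f      -- focal length of both cameras, in pixels
    B      -- distance (baseline) between the two cameras
    """
    XL, YL = pl
    XR, _YR = pr
    dx = XR - XL                  # equation (1): disparity
    if dx == 0:
        raise ValueError("zero disparity: point lies at infinity")
    Z = f * (B / dx)              # equation (2): depth from the baseline
    X = (XL * Z) / f              # equation (3)
    Y = (YL * Z) / f              # equation (4)
    return np.array([X, Y, Z])
```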
In step 110, the plan view gives the user the position of a 3D object on the ground plane. After the stereoscopic information of the 3D object has been generated with the left-eye camera LC and the right-eye camera RC, the 3D object that differs from the background is first projected onto the plan view, and the scene is then observed from an angle perpendicular to the ground. Please refer to FIG. 4, a schematic diagram of 3D objects on the plan view.
In the disparity map, each point is given a projection weight. The projection weight function F(fx, fy, Zcam) proposed by the present invention assigns a weight to each point of the disparity map based on the notion that the farther a 3D object is from the baseline, the heavier its weight should be. After each point on the plan view has accumulated these projection weights, the accumulated amount determines whether the point is noise or part of a 3D object: the larger the accumulated weight, the more likely the point belongs to a 3D object.
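The patent gives the projection weight function only by name, F(fx, fy, Zcam), and by the principle that weight grows with distance from the baseline. The sketch below assumes a weight proportional to Z squared (a common way to compensate for distant objects covering fewer pixels) and a camera frame in which Y approximates height above ground; the grid resolution and both of these choices are illustrative assumptions.

```python
import numpy as np

def accumulate_plan_view(points_xyz, x_range, z_range, cell=0.05):
    """Project 3D points onto a ground grid, accumulating projection weights.

    Assumption: weight = Z**2, so that farther objects, which occupy fewer
    pixels, still accumulate enough mass; Y is taken as height above ground.
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    nz = int((z_range[1] - z_range[0]) / cell)
    plan = np.zeros((nz, nx))     # accumulated projection weights
    heights = np.zeros((nz, nx))  # per-cell height information
    for X, Y, Z in points_xyz:
        ix = int((X - x_range[0]) / cell)
        iz = int((Z - z_range[0]) / cell)
        if 0 <= ix < nx and 0 <= iz < nz:
            plan[iz, ix] += Z ** 2               # farther => heavier weight
            heights[iz, ix] = max(heights[iz, ix], Y)
    return plan, heights
```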
In step 112, following the cited references, the noise of the 3D object is removed using the height information, the projection weights, and a Gaussian filter. With the projection weights improved as in step 110, points with small accumulated weights can be assumed to be noise.
The height information is one of the attributes stored for each point on the plan view, representing that point's height in space. When a 3D object such as a person is projected onto the plan view, its height information typically has a shape resembling a mountain, and a Gaussian filter itself also has a mountain-like profile. Besides removing the noise of the 3D object, the Gaussian filter can therefore also be used to enhance the contour features of the 3D object, making subsequent identification easier.
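A minimal sketch of this filtering stage, assuming OpenCV and the plan-view arrays from the previous sketch; the weight threshold, kernel size, and sigma are illustrative parameters, not values from the patent.

```python
import cv2
import numpy as np

def denoise_plan_view(plan, heights, weight_thresh=50.0, ksize=7, sigma=2.0):
    """Zero out low-weight cells, then Gaussian-smooth the height map.

    Cells with a small accumulated projection weight are treated as noise;
    the Gaussian kernel, itself mountain-shaped, suppresses residual noise
    while reinforcing the mountain-like contour of each object.
    """
    mask = (plan >= weight_thresh).astype(np.float32)
    cleaned = heights.astype(np.float32) * mask
    return cv2.GaussianBlur(cleaned, (ksize, ksize), sigma)
```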
When 3D objects such as people cluster together, the corresponding objects on the plan view also merge and are often misjudged as a single 3D object. In step 112, the Gaussian filter has already enhanced the contour features of the 3D objects, in particular their "mountain" shape. Therefore, in step 114, the user can search for "mountain tops" by finding regional extrema, to check whether one apparent 3D object may be a merger of two or more (i.e. whether it has two or more tops). A single 3D object usually has exactly one top; finding two or more tops indicates that two or more 3D objects may have merged.
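One way to realize the search for regional extrema is a maximum filter; the sketch below assumes SciPy, with the neighborhood size and minimum peak height as illustrative parameters.

```python
import numpy as np
from scipy import ndimage

def find_peaks(smoothed, size=9, min_height=0.5):
    """Locate regional maxima ('mountain tops') in the smoothed height map."""
    local_max = ndimage.maximum_filter(smoothed, size=size) == smoothed
    peaks = local_max & (smoothed > min_height)   # ignore flat or noisy cells
    markers, n_peaks = ndimage.label(peaks)       # one integer label per peak
    return markers, n_peaks                       # n_peaks > 1: merged objects
```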
In step 116, when a 3D object such as a person is found to have two or more "mountain tops", at least two 3D objects may have been merged. The 3D object identification system then first separates them with a watershed segmentation algorithm, and judges each resulting segment by its area: a segment whose area is large enough is considered a 3D object, otherwise it is treated as noise.
Please refer to FIG. 5A and FIG. 5B. FIG. 5A is a schematic diagram of an image rendered from the brightness value of each pixel, and FIG. 5B illustrates separating at least two 3D objects with the watershed segmentation algorithm. In FIG. 5A, the white regions have high brightness and the most densely dotted regions have low brightness. The main idea of watershed segmentation is to treat the whole image as a topographic map in which the brightness of each pixel is the terrain height, and to find the watershed lines of this map in order to separate the 3D objects. As shown in FIG. 5B, all watershed lines, i.e. the boundaries between the at least two 3D objects, can be found from the pixel brightness values of FIG. 5A, distinguishing the 3D objects on the plan view.
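Feeding the peaks found above into a watershed transform as markers yields the separation step. The sketch assumes scikit-image's watershed; the minimum-area threshold used to tell objects from noise is an illustrative value.

```python
import numpy as np
from skimage.segmentation import watershed

def split_clustered_objects(smoothed, markers, min_area=40):
    """Separate merged objects along watershed lines, then drop tiny segments.

    The height map is negated so each mountain top becomes a basin minimum,
    which the watershed transform floods from.
    """
    labels = watershed(-smoothed, markers, mask=smoothed > 0)
    kept = [lab for lab in range(1, labels.max() + 1)
            if np.sum(labels == lab) >= min_area]  # large enough: real object
    return labels, kept
```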
Please refer to FIG. 6, a flowchart illustrating a method for enhancing the identification of 3D objects according to another embodiment of the present invention. The steps of FIG. 6 are detailed as follows:
Step 600: Start.
Step 602: Capture a left-eye image LI and a right-eye image RI with a left-eye camera LC and a right-eye camera RC.
Step 604: Correct the left-eye image LI and the right-eye image RI to generate a corrected left-eye image CLI and a corrected right-eye image CRI.
Step 606: Sharpen the corrected left-eye image CLI and the corrected right-eye image CRI to generate a sharpened left-eye image SLI and a sharpened right-eye image SRI.
Step 608: Compute the variance of the sharpened left-eye image SLI and the sharpened right-eye image SRI.
Step 610: Generate a disparity map from the sharpened left-eye image SLI and the sharpened right-eye image SRI.
Step 612: Determine from the disparity map a 3D object (for example, a person) that differs from the background.
Step 614: Project the 3D object onto a plan view.
Step 616: Filter noise from the plan view to produce a de-noised 3D object.
Step 618: Determine whether the de-noised 3D object is an aggregation of at least two 3D objects.
Step 620: If so, separate the at least two 3D objects.
Step 622: End.
The embodiment of FIG. 6 differs from that of FIG. 1 in two additional steps. In step 606, sharpening the corrected left-eye image CLI and the corrected right-eye image CRI uses a high-pass filter to extract their high-frequency features, strengthening the high-frequency parts of the two images, for example their edges and/or textured regions. In step 608, the variance of the sharpened left-eye image SLI and the sharpened right-eye image SRI is computed in order to identify and remove smooth, featureless regions, such as a blank wall in the background. The 3D object identification system is meant to identify 3D objects such as people; smooth, featureless background regions are not objects that need to be identified, so removing them reduces the load on the system. The remaining operating principles of the embodiment of FIG. 6 are the same as those of the embodiment of FIG. 1 and are not repeated here.
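A minimal sketch of the two extra steps, assuming OpenCV; the patent does not specify the high-pass filter or the variance test, so the unsharp-mask weights, window size, and variance threshold below are illustrative assumptions.

```python
import cv2
import numpy as np

def sharpen_and_mask(img, win=11, var_thresh=25.0):
    """High-pass sharpening plus a local-variance mask for flat regions."""
    img = img.astype(np.float32)
    low = cv2.GaussianBlur(img, (5, 5), 1.5)
    sharpened = cv2.addWeighted(img, 1.5, low, -0.5, 0)   # unsharp masking

    # Local variance via box filters: Var[x] = E[x^2] - (E[x])^2.
    mean = cv2.blur(img, (win, win))
    variance = cv2.blur(img * img, (win, win)) - mean * mean

    featureless = variance < var_thresh   # e.g. a blank wall: can be skipped
    return sharpened, featureless
```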
In summary, the method provided by the present invention projects the 3D objects in the disparity map generated from the left-eye and right-eye images onto a plan view, uses a Gaussian filter to remove their noise and enhance their contour features, and finally separates two or more clustered 3D objects with a watershed segmentation algorithm. The present invention can therefore be applied wherever head counting or people-flow detection is needed, such as hypermarkets, movie theaters, and department stores. It can also be applied to platform warning lines or other restricted zones, sounding an alarm as soon as a 3D object (for example, a person) is detected.
The above are merely preferred embodiments of the present invention, and all equivalent changes and modifications made according to the claims of the present invention shall fall within the scope of the present invention.
CRI ... corrected right-eye image
CLI ... corrected left-eye image
PL, PR ... imaging positions
B ... distance
D ... depth
LC ... left-eye camera
RC ... right-eye camera
100-118, 600-622 ... steps
FIG. 1 is a flowchart illustrating a method for enhancing the identification of 3D objects according to an embodiment of the present invention.
FIG. 2 is a schematic diagram illustrating superimposing the corrected left-eye image and the corrected right-eye image to obtain the disparity.
FIG. 3 is a schematic diagram illustrating deriving the depth with the left-eye camera and the right-eye camera.
FIG. 4 is a schematic diagram of 3D objects on the plan view.
FIG. 5A is a schematic diagram of an image rendered from the brightness value of each pixel.
FIG. 5B is a schematic diagram illustrating separating at least two 3D objects with the watershed segmentation algorithm.
FIG. 6 is a flowchart illustrating a method for enhancing the identification of 3D objects according to another embodiment of the present invention.
600-622 ... steps
Claims (9)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW099132046A TWI433529B (en) | 2010-09-21 | 2010-09-21 | Method for intensifying 3d objects identification |
US12/967,060 US8610760B2 (en) | 2010-09-21 | 2010-12-14 | Method for intensifying identification of three-dimensional objects |
EP11163682.5A EP2431941B1 (en) | 2010-09-21 | 2011-04-26 | Method for intensifying identification of three-dimensional objects |
PL11163682T PL2431941T3 (en) | 2010-09-21 | 2011-04-26 | Method for intensifying identification of three-dimensional objects |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW099132046A TWI433529B (en) | 2010-09-21 | 2010-09-21 | Method for intensifying 3d objects identification |
Publications (2)
Publication Number | Publication Date |
---|---|
TW201215093A (en) | 2012-04-01 |
TWI433529B true TWI433529B (en) | 2014-04-01 |
Family
ID=45555376
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW099132046A TWI433529B (en) | 2010-09-21 | 2010-09-21 | Method for intensifying 3d objects identification |
Country Status (4)
Country | Link |
---|---|
US (1) | US8610760B2 (en) |
EP (1) | EP2431941B1 (en) |
PL (1) | PL2431941T3 (en) |
TW (1) | TWI433529B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120088467A (en) * | 2011-01-31 | 2012-08-08 | 삼성전자주식회사 | Method and apparatus for displaying partial 3d image in 2d image disaply area |
US20130057655A1 (en) * | 2011-09-02 | 2013-03-07 | Wen-Yueh Su | Image processing system and automatic focusing method |
US20140225991A1 (en) * | 2011-09-02 | 2014-08-14 | Htc Corporation | Image capturing apparatus and method for obatining depth information of field thereof |
TWI508027B (en) | 2013-08-08 | 2015-11-11 | Huper Lab Co Ltd | Three dimensional detecting device and method for detecting images thereof |
WO2016068869A1 (en) * | 2014-10-28 | 2016-05-06 | Hewlett-Packard Development Company, L.P. | Three dimensional object recognition |
CN106033601B (en) * | 2015-03-09 | 2019-01-18 | 株式会社理光 | The method and apparatus for detecting abnormal case |
GB2586712B (en) * | 2018-03-28 | 2021-12-22 | Mitsubishi Electric Corp | Image processing device, image processing method, and image processing program |
CN109241858A (en) * | 2018-08-13 | 2019-01-18 | 湖南信达通信息技术有限公司 | A kind of passenger flow density detection method and device based on rail transit train |
CN110533605B (en) * | 2019-07-26 | 2023-06-02 | 遵义师范学院 | Accurate noise point calibration method |
CN113891100A (en) * | 2020-09-16 | 2022-01-04 | 深圳市博浩光电科技有限公司 | Live broadcast system for real-time three-dimensional image display |
CN113891101A (en) * | 2020-09-16 | 2022-01-04 | 深圳市博浩光电科技有限公司 | Live broadcast method for real-time three-dimensional image display |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1995006897A1 (en) | 1991-02-19 | 1995-03-09 | John Lawrence | Simulated 3-d cinematographic technique, apparatus and glasses |
US6873723B1 (en) * | 1999-06-30 | 2005-03-29 | Intel Corporation | Segmenting three-dimensional video images using stereo |
US6965379B2 (en) | 2001-05-08 | 2005-11-15 | Koninklijke Philips Electronics N.V. | N-view synthesis from monocular video of certain broadcast and stored mass media content |
US7003136B1 (en) * | 2002-04-26 | 2006-02-21 | Hewlett-Packard Development Company, L.P. | Plan-view projections of depth image data for object tracking |
US7324661B2 (en) * | 2004-04-30 | 2008-01-29 | Colgate-Palmolive Company | Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization |
US7623676B2 (en) | 2004-12-21 | 2009-11-24 | Sarnoff Corporation | Method and apparatus for tracking objects over a wide area using a network of stereo sensors |
KR100727033B1 (en) | 2005-12-07 | 2007-06-12 | 한국전자통신연구원 | Apparatus and method for vision processing on network based intelligent service robot system and the system using the same |
US8355579B2 (en) * | 2009-05-20 | 2013-01-15 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Automatic extraction of planetary image features |
2010
- 2010-09-21 TW TW099132046A patent/TWI433529B/en active
- 2010-12-14 US US12/967,060 patent/US8610760B2/en active Active

2011
- 2011-04-26 EP EP11163682.5A patent/EP2431941B1/en active Active
- 2011-04-26 PL PL11163682T patent/PL2431941T3/en unknown
Also Published As
Publication number | Publication date |
---|---|
US8610760B2 (en) | 2013-12-17 |
EP2431941A1 (en) | 2012-03-21 |
US20120069151A1 (en) | 2012-03-22 |
PL2431941T3 (en) | 2019-04-30 |
EP2431941B1 (en) | 2018-10-10 |
TW201215093A (en) | 2012-04-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI433529B (en) | Method for intensifying 3d objects identification | |
US10043278B2 (en) | Method and apparatus for reconstructing 3D face with stereo camera | |
TWI469087B (en) | Method for depth map generation | |
TWI584634B (en) | Electronic apparatus and method of generating depth map | |
KR101055411B1 (en) | Method and apparatus of generating stereoscopic image | |
US9723295B2 (en) | Image processing device, image processing method, image processing computer program, and information recording medium whereupon image processing computer program is stored | |
US11783443B2 (en) | Extraction of standardized images from a single view or multi-view capture | |
KR20180041668A (en) | 3D restoration of the human ear from the point cloud | |
WO2011115142A1 (en) | Image processing device, method, program and storage medium | |
CN107392958A (en) | A kind of method and device that object volume is determined based on binocular stereo camera | |
KR101551026B1 (en) | Method of tracking vehicle | |
WO2013038833A1 (en) | Image processing system, image processing method, and image processing program | |
JP4872769B2 (en) | Road surface discrimination device and road surface discrimination method | |
CN107481317A (en) | The facial method of adjustment and its device of face 3D models | |
US9530240B2 (en) | Method and system for rendering virtual views | |
JP6194604B2 (en) | Recognizing device, vehicle, and computer executable program | |
CN104915943B (en) | Method and apparatus for determining main parallax value in disparity map | |
CN107396037B (en) | Video monitoring method and device | |
US9217636B2 (en) | Information processing apparatus, information processing method, and a computer-readable storage medium | |
JP5501084B2 (en) | Planar area detection apparatus and stereo camera system | |
CN104778673B (en) | A kind of improved gauss hybrid models depth image enhancement method | |
WO2013080544A1 (en) | Stereoscopic image processing apparatus, stereoscopic image processing method, and stereoscopic image processing program | |
JP2020098421A (en) | Three-dimensional shape model generation device, three-dimensional shape model generation method and program | |
JP6251099B2 (en) | Distance calculation device | |
CN106303501B (en) | Stereo-picture reconstructing method and device based on image sparse characteristic matching |