CN107220632B - Road surface image segmentation method based on normal characteristic - Google Patents
- Publication number
- CN107220632B CN201710440163.5A CN201710440163A
- Authority
- CN
- China
- Prior art keywords
- image
- camera
- normal
- road surface
- pixel point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
- Image Processing (AREA)
Abstract
The invention relates to a road surface image segmentation method based on normal features. The depth image acquired by a binocular camera is combined with the intrinsic parameters of the camera and converted into a plane normal map. The road plane is determined from the normal features in the plane normal map and segmented; the road plane is marked as a safe driving area, and the remaining areas are marked as collision avoidance areas.
Description
Technical Field
The invention relates to a road surface image segmentation method based on normal features, and belongs to the technical field of computer vision.
Background
With the rapid development of artificial intelligence technology, the automobile, one of the great inventions of the industrial age, is advancing toward a new era. Companies such as Google, Tesla, and Baidu are all developing driverless vehicles, and competition over autonomous driving will become extremely intense in the future.
Google's driverless cars determine the traffic conditions around the vehicle using cameras, radar sensors, and laser rangefinders, and navigate using GPS combined with high-precision digital maps. Although a motor vehicle administration in the United States has issued a legal license plate for Google's driverless car and allowed it on the road, driverless cars remain some distance away from widespread adoption. This is not only because current autonomous driving technology is imperfect and prone to safety accidents, but also because the hardware cost of a complete driverless system is too high, which hinders adoption by the general public. The Mobileye driver-assistance system, developed on the basis of computer vision technology, does not need expensive sensors such as radars and laser rangefinders; it analyzes the information acquired by a vehicle-mounted camera to recognize surrounding people, traffic signs, and other vehicles, predicts the driving situation, and warns the driver in advance so that danger is avoided. Because a driver-assistance scheme based on computer vision does not require expensive sensor equipment, it is low in cost, easy to popularize, and plays a positive role in promoting the development of the intelligent transportation era.
Methods based on computer vision techniques still face certain problems in determining and segmenting the road plane in an RGB-D frame image. For example, in an RGB-D frame image obtained by a camera, the bottom of an object on the road surface, such as a vehicle tire, is difficult to segment effectively from the road surface using color-texture or depth-of-field features alone, because its color is similar to that of the road surface or because of shadow interference.
For example, Chinese patent publication No. CN102663748A discloses a frequency-domain-based segmentation method for low depth-of-field images, which exploits the characteristic that the in-focus target in a low depth-of-field image contains many high-frequency components while the blurred areas contain few. Such an image segmentation method may not be able to segment the road plane image effectively.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a road surface image segmentation method based on normal characteristics.
Summary of the invention:
the depth image acquired by a binocular camera is combined with the intrinsic parameters of the camera and converted into a plane normal map; the road plane is determined from the normal features in the plane normal map and segmented; the road plane is marked as a safe driving area, and the remaining areas are marked as collision avoidance areas.
The technical scheme of the invention is as follows:
a road surface image segmentation method based on normal characteristics comprises the following steps:
1) acquiring an RGB-D image through a camera, decomposing the RGB-D image into frame images, and acquiring the intrinsic parameter matrix K of the camera;
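The formula defining K is not reproduced in this text. Under a conventional pinhole camera model, and consistent with the parameter names explained in the next paragraph (with f denoting the focal length, a symbol introduced here only for illustration), the intrinsic matrix is commonly written as:

$$
K=\begin{bmatrix} f/d_x & \gamma & u_0 \\ 0 & f/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
$$

This conventional form is an assumption for the reader's orientation rather than a reproduction of the patent's own formula.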
wherein dx and dy respectively represent the number of length units occupied by a pixel point in the horizontal direction and the vertical direction, (u0, v0) is the center of the frame image plane, and γ is the skew parameter of the coordinate axes of the camera coordinate system with the camera optical center as the origin;
2) converting the pixel points in the two-dimensional depth image Id from the image coordinate system into the camera coordinate system, wherein the conversion relation is shown in formula (1):
wherein x and y are coordinates in the camera coordinate system, and u and v are coordinates in the image coordinate system; the value of the coordinate z in the camera coordinate system is obtained by multiplying the depth value Iu,v at the corresponding coordinate of the depth image Id by a depth conversion rate R, where the depth conversion rate R is determined by the camera and is a known parameter;
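As a minimal sketch of the conversion described above, assuming the conventional pinhole model sketched earlier with γ = 0 (the function and variable names below are illustrative, not taken from the patent):

```python
import numpy as np

def depth_to_camera_coords(depth_image, K, depth_rate):
    """Back-project a two-dimensional depth image I_d into camera coordinates.

    depth_image: (H, W) array of raw depth values I_{u,v}
    K:           3x3 camera intrinsic matrix
    depth_rate:  known depth conversion rate R of the camera
    Returns an (H, W, 3) array of (x, y, z) points in the camera coordinate system.
    """
    h, w = depth_image.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # image-coordinate grids
    z = depth_image.astype(np.float64) * depth_rate  # z = I_{u,v} * R
    fx, fy = K[0, 0], K[1, 1]
    u0, v0 = K[0, 2], K[1, 2]
    x = (u - u0) * z / fx                            # image -> camera, assuming gamma = 0
    y = (v - v0) * z / fy
    return np.stack([x, y, z], axis=-1)
```

With γ = 0 this is equivalent to scaling the homogeneous pixel coordinate (u, v, 1) by z and multiplying by the inverse of K.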
3) for a pixel point A in the depth image Id, selecting two pixel points B and C in its N×N neighborhood, determining the vectors AB and AC from the coordinates x, y and z in the camera coordinate system, and calculating the normal vector n of the point, wherein n = AB × AC;
4) repeating step 3) so as to traverse all the neighborhood pixel points of the pixel point A and obtain all the normal vectors of the pixel point A; taking the average of all the normal vectors and normalizing it to obtain the final normal vector of the pixel point A, and storing the x, y and z coordinate values of this final normal vector as the pixel values of the R, G and B channels of a color image; traversing every pixel point in the depth image Id to obtain the final plane normal map If; averaging all the normal directions smooths and normalizes the result, which avoids inaccurate normal vectors caused by noise interference.
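A sketch of steps 3) and 4) under the same assumptions, computing for each pixel point A the cross products over pairs of neighbors B and C in an N×N window, averaging and normalizing them, and storing the result as an RGB image; pairing consecutive neighbors and mapping [-1, 1] to [0, 255] are illustrative choices, not specified by the patent:

```python
import numpy as np

def normal_map_from_points(points, n=3):
    """points: (H, W, 3) camera-coordinate map from the previous sketch.
    Returns the plane normal map I_f as an (H, W, 3) uint8 image."""
    h, w, _ = points.shape
    r = n // 2
    normal_map = np.zeros((h, w, 3), dtype=np.float64)
    for row in range(r, h - r):
        for col in range(r, w - r):
            a = points[row, col]
            # neighborhood pixel points of A inside the N x N window
            neigh = [(row + dr, col + dc)
                     for dr in range(-r, r + 1) for dc in range(-r, r + 1)
                     if not (dr == 0 and dc == 0)]
            normals = []
            for i in range(len(neigh) - 1):
                b = points[neigh[i]]            # pixel point B
                c = points[neigh[i + 1]]        # pixel point C
                n_vec = np.cross(b - a, c - a)  # normal of A from AB x AC
                length = np.linalg.norm(n_vec)
                if length > 1e-9:
                    normals.append(n_vec / length)
            if normals:
                mean_n = np.mean(normals, axis=0)            # averaging suppresses noise
                mean_n /= (np.linalg.norm(mean_n) + 1e-9)    # final normalization
                normal_map[row, col] = mean_n
    # map x, y, z from [-1, 1] to [0, 255] so they can be stored as R, G, B values
    return ((normal_map + 1.0) * 127.5).astype(np.uint8)
```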
5) clustering and segmenting the pixel points of the plane normal map If according to the similarity of their colors to form segmented regions, calculating the average normal vector of each segmented region, determining the segmented region that has the largest area and whose average normal vector forms an acute angle with the direction opposite to gravity in the scene as the road surface, and extracting the road surface image to obtain the final road surface extraction result;
6) the road surface is set as a safe-driving region, and the remaining region is set as a collision avoidance region.
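A sketch of steps 5) and 6), assuming scikit-learn's MeanShift for the color-similarity clustering (Mean-Shift is named as the preferred algorithm below) and OpenCV for the morphological closing of the preferred embodiment; the upward direction (0, -1, 0) in camera coordinates, the bandwidth, and the kernel size are illustrative assumptions:

```python
import numpy as np
import cv2
from sklearn.cluster import MeanShift

def segment_road(normal_map_rgb, up_direction=(0.0, -1.0, 0.0)):
    """normal_map_rgb: (H, W, 3) uint8 plane normal map I_f.
    Returns a mask: 255 = safe driving area (road surface), 0 = collision avoidance area."""
    h, w, _ = normal_map_rgb.shape
    # recover unit normals from the RGB encoding ([0, 255] back to [-1, 1])
    normals = normal_map_rgb.astype(np.float64) / 127.5 - 1.0
    # cluster all pixels by color similarity; slow at full resolution, shown for clarity
    samples = normal_map_rgb.reshape(-1, 3).astype(np.float64)
    labels = MeanShift(bandwidth=20, bin_seeding=True).fit_predict(samples).reshape(h, w)

    up = np.asarray(up_direction, dtype=np.float64)
    up /= np.linalg.norm(up)                 # assumed direction opposite to gravity
    best_label, best_area = None, 0
    for label in np.unique(labels):
        region = labels == label
        mean_n = normals[region].mean(axis=0)
        mean_n /= (np.linalg.norm(mean_n) + 1e-9)
        # keep the largest region whose mean normal is at an acute angle to "up"
        if np.dot(mean_n, up) > 0 and region.sum() > best_area:
            best_label, best_area = label, int(region.sum())

    if best_label is None:
        return np.zeros((h, w), dtype=np.uint8)
    mask = np.where(labels == best_label, 255, 0).astype(np.uint8)
    # morphological closing (preferred embodiment) to remove noise interference
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```

The returned mask marks the road surface (safe driving area) with 255 and the remaining collision avoidance areas with 0.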
According to the invention, γ is preferably 0.
According to the invention, the camera in the step 1) is preferably a vehicle-mounted binocular camera. The same binocular camera can simultaneously acquire a color image and a two-dimensional depth image.
According to a preferred embodiment of the invention, the intrinsic parameter matrix K of the camera is acquired by means of camera calibration.
According to a preferred embodiment of the present invention, step 5) further includes performing a morphological closing operation on the road surface image after it is extracted. The morphological closing operation is used to remove noise interference.
According to the invention, in the step 5), clustering segmentation is performed according to the similarity of colors to form segmented regions, and the segmentation is realized by a Mean-Shift clustering segmentation algorithm.
The invention has the beneficial effects that:
1. The road surface image segmentation method based on normal features exploits the fact that the direction of the normal vector changes greatly in boundary areas of a frame image, so that the road surface and objects can be distinguished, the road plane determined, and the road surface segmentation completed; the segmentation result can serve as input to a driver-assistance system, helping the vehicle avoid collisions with objects on the road while driving and plan a safe driving path;
2. The road surface image segmentation method based on normal features is applicable not only to road scenes but also to stage scenes: by selecting the stage as the region of interest, peripheral interference is removed, and planes such as the stage floor and the stage backdrop are determined within the region of interest, so that object segmentation of the stage scene can be completed; the method therefore has a wide range of applications.
Drawings
FIG. 1 is a flowchart of a method for segmenting a road surface image based on normal features according to the present invention;
FIG. 2 is the color input image in the straight-road-with-shadows experimental environment;
FIG. 3 is the depth input image in the straight-road-with-shadows experimental environment;
FIG. 4 is the plane normal map generated in the straight-road-with-shadows experimental environment;
FIG. 5 is a schematic diagram of the segmentation result in the straight-road-with-shadows experimental environment;
FIG. 6 is the color input image in the experimental environment of a straight road with shadows and vehicles on the left and in front;
FIG. 7 is the depth input image in the experimental environment of a straight road with shadows and vehicles on the left and in front;
FIG. 8 is the plane normal map generated in the experimental environment of a straight road with shadows and vehicles on the left and in front;
FIG. 9 is a schematic diagram of the segmentation result in the experimental environment of a straight road with shadows and vehicles on the left and in front;
FIG. 10 is the color input image in the experimental environment of a straight road with shadows and vehicles on both sides;
FIG. 11 is the depth input image in the experimental environment of a straight road with shadows and vehicles on both sides;
FIG. 12 is the plane normal map generated in the experimental environment of a straight road with shadows and vehicles on both sides;
FIG. 13 is a schematic diagram of the segmentation result in the experimental environment of a straight road with shadows and vehicles on both sides;
FIG. 14 is the color input image in the curve-with-shadows experimental environment;
FIG. 15 is the depth input image in the curve-with-shadows experimental environment;
FIG. 16 is the plane normal map generated in the curve-with-shadows experimental environment;
FIG. 17 is a schematic diagram of the segmentation result in the curve-with-shadows experimental environment;
FIG. 18 is the color input image in the intersection experimental environment;
FIG. 19 is the depth input image in the intersection experimental environment;
FIG. 20 is the plane normal map generated in the intersection experimental environment;
FIG. 21 is a schematic diagram of the segmentation result in the intersection experimental environment.
Detailed Description
The invention is further described below, but not limited thereto, with reference to the following examples and the accompanying drawings.
Example 1
A road surface image segmentation method based on normal features is used to segment an image captured in the straight-road-with-shadows experimental environment, and comprises the following steps:
1) acquiring an RGB-D image through a camera, decomposing the RGB-D image into frame images, and acquiring the intrinsic parameter matrix K of the camera;
wherein dx and dy respectively represent the number of length units occupied by a pixel point in the horizontal direction and the vertical direction, (u0, v0) is the center of the frame image plane, and γ is the skew parameter of the coordinate axes of the camera coordinate system with the camera optical center as the origin;
2) converting the pixel points in the two-dimensional depth image Id from the image coordinate system into the camera coordinate system, wherein the conversion relation is shown in formula (1):
wherein x and y are coordinates in the camera coordinate system, and u and v are coordinates in the image coordinate system; the value of the coordinate z in the camera coordinate system is obtained by multiplying the depth value Iu,v at the corresponding coordinate of the depth image Id by a depth conversion rate R, where the depth conversion rate R is determined by the camera and is a known parameter;
3) for a pixel point A in the depth image Id, selecting two pixel points B and C in its N×N neighborhood, determining the vectors AB and AC from the coordinates x, y and z in the camera coordinate system, and calculating the normal vector n of the point, wherein n = AB × AC;
4) repeating step 3) so as to traverse all the neighborhood pixel points of the pixel point A and obtain all the normal vectors of the pixel point A; taking the average of all the normal vectors and normalizing it to obtain the final normal vector of the pixel point A, and storing the x, y and z coordinate values of this final normal vector as the pixel values of the R, G and B channels of a color image; traversing every pixel point in the depth image Id to obtain the final plane normal map If; averaging all the normal directions smooths and normalizes the result, which avoids inaccurate normal vectors caused by noise interference;
5) clustering and segmenting the pixel points of the plane normal map If according to the similarity of their colors to form segmented regions, calculating the average normal vector of each segmented region, determining the segmented region that has the largest area and whose average normal vector forms an acute angle with the direction opposite to gravity in the scene as the road surface, and extracting the road surface image to obtain the final road surface extraction result;
6) the road surface is set as a safe-driving region, and the remaining region is set as a collision avoidance region.
As shown in FIG. 2 to FIG. 5, the road surface image segmentation method based on normal features segments the image of the straight-road-with-shadows experimental environment and clearly and accurately separates the safe driving area from the collision avoidance area.
Example 2
The road surface image segmentation method based on normal features as described in Embodiment 1, except that the image captured in the experimental environment of a straight road with shadows and vehicles on the left and in front is segmented, and γ is 0.
As shown in FIG. 6 to FIG. 9, the road surface image segmentation method based on normal features segments the image of the experimental environment of a straight road with shadows and vehicles on the left and in front, and clearly and accurately separates the safe driving area from the collision avoidance area.
Example 3
The road surface image segmentation method based on normal features as described in Embodiment 1, except that the image captured in the experimental environment of a straight road with shadows and vehicles on both sides is segmented, and the camera in step 1) is a vehicle-mounted binocular camera; the same binocular camera simultaneously acquires the color image and the two-dimensional depth image.
As shown in FIG. 10 to FIG. 13, the road surface image segmentation method based on normal features segments the image of the experimental environment of a straight road with shadows and vehicles on both sides, and clearly and accurately separates the safe driving area from the collision avoidance area.
Example 4
The road surface image segmentation method based on normal features as described in Embodiment 1, except that the image captured in the curve-with-shadows experimental environment is segmented, and the intrinsic parameter matrix K of the camera is obtained through camera calibration.
As shown in FIG. 14 to FIG. 17, the road surface image segmentation method based on normal features segments the image of the curve-with-shadows experimental environment and clearly and accurately separates the safe driving area from the collision avoidance area.
Example 5
The road surface image segmentation method based on normal features as described in Embodiment 1, except that the image captured in the intersection experimental environment is segmented, and step 5) further includes, after the road surface image is extracted, performing a morphological closing operation on the road surface image. The morphological closing operation is used to remove noise interference.
As shown in FIG. 18 to FIG. 21, the road surface image segmentation method based on normal features segments the image of the intersection experimental environment and clearly and accurately separates the safe driving area from the collision avoidance area.
Example 6
The road surface image segmentation method based on normal features as described in Embodiment 1, except that in step 5) the clustering segmentation according to color similarity, which forms the segmented regions, is realized by the Mean-Shift clustering algorithm.
Claims (6)
1. A road surface image segmentation method based on normal features is characterized by comprising the following steps:
1) acquiring an RGB-D image through a camera, decomposing the RGB-D image into frame images, and acquiring the intrinsic parameter matrix K of the camera;
wherein dx and dy respectively represent the number of length units occupied by a pixel point in the horizontal direction and the vertical direction, the two-dimensional coordinate (u0, v0) is the center of the image coordinate system, and γ is the skew parameter of the coordinate axes of the camera coordinate system with the optical center of the camera as the origin;
2) converting the pixel points in the two-dimensional depth image Id from the image coordinate system into the camera coordinate system, wherein the conversion relation is shown in formula (1):
wherein x and y are coordinates in the camera coordinate system, and u and v are coordinates in the image coordinate system; the value of the coordinate z in the camera coordinate system is obtained by multiplying the depth value Iu,v at the corresponding coordinate of the depth image Id by a depth conversion rate R, where the depth conversion rate R is determined by the camera and is a known parameter;
3) for a pixel point A in the depth image Id, selecting two pixel points B and C in its N×N neighborhood, determining the vectors AB and AC from the coordinates x, y and z in the camera coordinate system, and calculating the normal vector n of the point, wherein n = AB × AC;
4) repeating step 3) so as to traverse all the neighborhood pixel points of the pixel point A and obtain all the normal vectors of the pixel point A; taking the average of all the normal vectors and normalizing it to obtain the final normal vector of the pixel point A, and storing the x, y and z coordinate values of this final normal vector as the pixel values of the R, G and B channels of a color image; traversing every pixel point in the depth image Id to obtain the final plane normal map If;
5) clustering and segmenting the pixel points of the plane normal map If according to the similarity of their colors to form segmented regions, calculating the average normal vector of each segmented region, determining the segmented region that has the largest area and whose average normal vector forms an acute angle with the direction opposite to gravity in the scene as the road surface, and extracting the road surface image to obtain the final road surface extraction result;
6) the road surface is set as a safe-driving region, and the remaining region is set as a collision avoidance region.
2. The method of claim 1, wherein γ is 0.
3. The method for segmenting the road surface image based on the normal features as claimed in claim 1, wherein the camera in the step 1) is a vehicle-mounted binocular camera.
4. The method of claim 1, wherein the intrinsic parameter matrix K of the camera is obtained by calibrating the camera.
5. The method for segmenting a road surface image based on normal features as claimed in claim 1, wherein the step 5) further comprises performing a morphological closing operation on the road surface image after the road surface image is extracted.
6. The method for segmenting the road surface image based on the normal characteristic as claimed in claim 1, wherein in the step 5), clustering segmentation is carried out according to the similarity of colors to form segmented regions, and the segmentation is realized through a Mean-Shift clustering segmentation algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710440163.5A CN107220632B (en) | 2017-06-12 | 2017-06-12 | Road surface image segmentation method based on normal characteristic |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710440163.5A CN107220632B (en) | 2017-06-12 | 2017-06-12 | Road surface image segmentation method based on normal characteristic |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107220632A CN107220632A (en) | 2017-09-29 |
CN107220632B true CN107220632B (en) | 2020-02-18 |
Family
ID=59947538
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710440163.5A Active CN107220632B (en) | 2017-06-12 | 2017-06-12 | Road surface image segmentation method based on normal characteristic |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107220632B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108280401B (en) * | 2017-12-27 | 2020-04-07 | 达闼科技(北京)有限公司 | Pavement detection method and device, cloud server and computer program product |
CN112950726B (en) * | 2021-03-25 | 2022-11-11 | 深圳市商汤科技有限公司 | Camera orientation calibration method and related product |
US11734850B2 (en) * | 2021-04-26 | 2023-08-22 | Ubtech North America Research And Development Center Corp | On-floor obstacle detection method and mobile machine using the same |
CN113390435B (en) * | 2021-05-13 | 2022-08-26 | 中铁二院工程集团有限责任公司 | High-speed railway multi-element auxiliary positioning system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103389042A (en) * | 2013-07-11 | 2013-11-13 | 夏东 | Ground automatic detecting and scene height calculating method based on depth image |
CN104361575A (en) * | 2014-10-20 | 2015-02-18 | 湖南戍融智能科技有限公司 | Automatic ground testing and relative camera pose estimation method in depth image |
CN104992145A (en) * | 2015-06-15 | 2015-10-21 | 山东大学 | Moment sampling lane tracking detection method |
CN105426868A (en) * | 2015-12-10 | 2016-03-23 | 山东大学 | Lane detection method based on adaptive region of interest |
CN106228134A (en) * | 2016-07-21 | 2016-12-14 | 北京奇虎科技有限公司 | Drivable region detection method based on pavement image, Apparatus and system |
CN106327433A (en) * | 2016-08-01 | 2017-01-11 | 浙江零跑科技有限公司 | Monocular downward view camera and rear axle steering-based vehicle path following method |
- 2017-06-12: CN application CN201710440163.5A granted as patent CN107220632B (status: Active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103389042A (en) * | 2013-07-11 | 2013-11-13 | 夏东 | Ground automatic detecting and scene height calculating method based on depth image |
CN104361575A (en) * | 2014-10-20 | 2015-02-18 | 湖南戍融智能科技有限公司 | Automatic ground testing and relative camera pose estimation method in depth image |
CN104992145A (en) * | 2015-06-15 | 2015-10-21 | 山东大学 | Moment sampling lane tracking detection method |
CN105426868A (en) * | 2015-12-10 | 2016-03-23 | 山东大学 | Lane detection method based on adaptive region of interest |
CN106228134A (en) * | 2016-07-21 | 2016-12-14 | 北京奇虎科技有限公司 | Drivable region detection method based on pavement image, Apparatus and system |
CN106327433A (en) * | 2016-08-01 | 2017-01-11 | 浙江零跑科技有限公司 | Monocular downward view camera and rear axle steering-based vehicle path following method |
Non-Patent Citations (4)
Title |
---|
"Road detection using segmentation by weighted aggregation based on visual information and a posteriori probability of road regions"; T.T. Son et al.; 2008 IEEE International Conference on Systems, Man and Cybernetics; 20081231; pp. 3018-3025 *
"Robust Urban Road Image Segmentation"; Junyang Li et al.; Proceedings of the 11th World Congress on Intelligent Control and Automation; 20140704; pp. 2923-2928 *
"Research on Image Segmentation Algorithms Combining Depth Information"; Pi Zhiming; China Doctoral Dissertations Full-text Database; 20131015 (No. 10); pp. I138-36 *
"A Survey of Automatic Road Extraction Methods for Remote Sensing Images"; Wu Liang et al.; Acta Automatica Sinica; 20100731; Vol. 36, No. 7; pp. 912-922 *
Also Published As
Publication number | Publication date |
---|---|
CN107220632A (en) | 2017-09-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Bilal et al. | Real-time lane detection and tracking for advanced driver assistance systems | |
CN110443225B (en) | Virtual and real lane line identification method and device based on feature pixel statistics | |
US9846812B2 (en) | Image recognition system for a vehicle and corresponding method | |
CN105206109B (en) | A kind of vehicle greasy weather identification early warning system and method based on infrared CCD | |
CN107220632B (en) | Road surface image segmentation method based on normal characteristic | |
CN111209770A (en) | Lane line identification method and device | |
Fernández et al. | Curvature-based curb detection method in urban environments using stereo and laser | |
Fernández et al. | Road curb and lanes detection for autonomous driving on urban scenarios | |
EP3392830B1 (en) | Image processing device, object recognition device, apparatus control system, image processing method and program | |
Youjin et al. | A robust lane detection method based on vanishing point estimation | |
CN111178122A (en) | Detection and planar representation of three-dimensional lanes in a road scene | |
CN109917359B (en) | Robust vehicle distance estimation method based on vehicle-mounted monocular vision | |
JP6753134B2 (en) | Image processing device, imaging device, mobile device control system, image processing method, and image processing program | |
US20190180121A1 (en) | Detection of Objects from Images of a Camera | |
EP3389009A1 (en) | Image processing device, object recognition device, apparatus control system, image processing method and program | |
Sun | Vision based lane detection for self-driving car | |
Raguraman et al. | Intelligent drivable area detection system using camera and LiDAR sensor for autonomous vehicle | |
Kühnl et al. | Visual ego-vehicle lane assignment using spatial ray features | |
KR20220135186A (en) | Electronic device and control method | |
Hernández et al. | Lane marking detection using image features and line fitting model | |
Wang et al. | An improved hough transform method for detecting forward vehicle and lane in road | |
Giosan et al. | Superpixel-based obstacle segmentation from dense stereo urban traffic scenarios using intensity, depth and optical flow information | |
Zhao et al. | Omni-Directional Obstacle Detection for Vehicles Based on Depth Camera | |
CN112733678A (en) | Ranging method, ranging device, computer equipment and storage medium | |
Oniga et al. | A fast ransac based approach for computing the orientation of obstacles in traffic scenes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |