CN103198320A - Self-adaptive vision-aided driving device - Google Patents
Self-adaptive vision-aided driving device
- Publication number
- CN103198320A, CN2013101457823A, CN201310145782A
- Authority
- CN
- China
- Prior art keywords
- image
- lane line
- interface
- fpga core
- fpga
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a self-adaptive vision-aided driving device and relates to driver assistance for vehicles. The device is provided with a vehicle-mounted power supply, a step-down voltage-regulating circuit, a vehicle-mounted display panel, an FPGA (Field Programmable Gate Array) core board, a digital camera and a steering control interface. The FPGA core board is provided with a power module, a crystal oscillator, an FPGA chip and input/output interfaces. The input of the step-down voltage-regulating circuit is connected to the output of the vehicle-mounted power supply, and its output is connected to the power-module input of the FPGA core board and to the input of the digital camera. An output interface of the FPGA core board is connected to the input of the vehicle-mounted display panel, the output of the digital camera is connected to an input interface of the FPGA core board, and the output of the steering control interface is connected to an input interface of the FPGA core board.
Description
Technical field
The present invention relates to driver assistance for automobiles, and in particular to a self-adaptive vision-aided driving device.
Technical background
With the rapid development of information and control technology, driver-assistance technology is gradually being accepted by car makers and users. Driver assistance helps the driver reduce fatigue and improves safety, and therefore has clear practical value. Vision-based lane-tracking driver-assistance devices have gradually become standard equipment on high-end car brands. By analysing the video image, such a device determines the relative position and orientation of the vehicle with respect to the lane lines and uses this to control and compensate the steering in real time, achieving limited automatic driving under the driver's supervision. The practical value of a driver-assistance device is most evident during long-distance driving on high-grade highways.
A vision-aided driving device generally comprises three main modules: (1) a video acquisition module; (2) an image analysis module; (3) a steering adjustment and feedback module. The image analysis module carries the task of recognizing the lane lines and is the core technology of the device. Four lane-line recognition methods are currently in common use: (1) applying edge detection operators such as the Sobel or Canny operator to the original image, which respond strongly at lane-line boundaries, so that the boundary positions, and from them the lane-line positions, can be determined (see Chinese patent CN102303609A and US patent US4970653); (2) applying the Hough transform, either directly or after edge detection, to find the straight lines present in the image and thus the lane-line positions (see Chinese patents CN102303609A and CN201712600U, and US patent US5790403); (3) template matching, searching for boundaries along different angles to locate the lane lines (see Chinese patent CN101804813A and US patents US5398292 and US4970653); (4) binarizing the gray-scale image with a suitably chosen threshold so that the lane lines are separated from the road surface, and then searching the binary image for the lane-line positions (see Chinese patent CN101016052A).
Edge detection is a basic problem of image recognition with a long research history (Zhang Yujin, "Image Engineering (Vol. II): Image Analysis", 2nd edition). Stable and accurate image recognition in complex environments is the ultimate goal of image recognition technology. Practice has shown that simple image binarization and edge detection are difficult to adapt to varied, complex environments, whereas model-based image segmentation methods, which constrain binarization and edge detection with features of the target to be recognized, have achieved good results in many applications (Robert Hanek, Model-Based Image Segmentation Using Local Self-Adapting Separation Criteria, Lecture Notes in Computer Science, Volume 2191, 2001: 1-8). Although model-based image segmentation works well, its computational cost is very high, making it unsuitable for real-time video analysis, especially on vehicle platforms with severely limited computing resources.
Zheng Xinqian et al. (Zheng Xinqian et al., Design and Implementation of an FPGA-based Vision-Guided Vehicle, Journal of Xiamen University (Natural Science), 2012, No. 2) proposed a novel adaptive image binarization and track recognition method that couples image binarization with the recognized track width, feeding the recognized width back to the binarization threshold. A single-track vision-guided model vehicle based on this adaptive recognition method obtained good navigation results under a variety of lighting and road-surface conditions.
The key of the algorithm proposed by Zheng Xinqian et al. is to convolve the image with a square wave to form ramp-shaped guide-line borders, thereby relating the binarized guide-line width to the binarization threshold. However, that algorithm can only handle a single guide line wider than 30 pixels, whereas a vision-aided driving device must recognize two very thin lane lines (typically only 3-5 pixels wide). The adaptive track recognition algorithm of Zheng Xinqian et al. therefore cannot be used directly in a vision-aided driving device.
Besides coping with complex lighting and road-surface conditions, planning a reasonable driving route from the course of the lane lines is another important technical link of a vision-aided driving device. The classical approach to lane-line guidance is the curvature-based vehicle guidance algorithm of E. D. Dickmanns et al. (A Curvature-based Scheme for Improving Road Vehicle Guidance by Computer Vision, SPIE Vol. 727, Mobile Robots, 1986: 161-168). This method handles lane lines of arbitrary shape and the corresponding perspective effects exactly, but it requires solving systems of differential equations, so many practical vision-aided driving devices adopt self-designed approximations to reduce the amount of computation (US patents US5163002, US5301115, US5390118). Many vision-aided driving devices recognize lane lines with the Hough line transform and can therefore handle only straight roads, or the straight portion of a curve, planning a straight driving route in the far or near field (US patent US5790403).
The image processing of existing vision-aided driving devices basically relies on single-chip microcomputers, digital signal processors, computers, or computers supplemented by dedicated image-analysis circuits (US patent US5430810). What these image-analysis platforms have in common is serial processing, so their computing speed is limited when large amounts of video data must be handled.
Summary of the invention
The purpose of the present invention is to provide a self-adaptive vision-aided driving device.
The invention is provided with a vehicle-mounted power supply, a step-down voltage-regulating circuit, a vehicle-mounted display panel, a Field Programmable Gate Array (FPGA) core board (hereinafter the FPGA core board), a digital camera and a steering control interface. The FPGA core board is provided with a power module, a crystal oscillator, an FPGA chip and input/output interfaces. The input of the step-down voltage-regulating circuit is connected to the output of the vehicle-mounted power supply; the output of the step-down voltage-regulating circuit is connected to the power-module input of the FPGA core board and to the input of the digital camera; an output interface of the FPGA core board is connected to the input of the vehicle-mounted display panel; the output of the digital camera is connected to an input interface of the FPGA core board; and the output of the steering control interface is connected to an input interface of the FPGA core board.
The FPGA core board, as the central processing unit, analyses the lane-line image, determines the relative position and orientation of the vehicle with respect to the lane lines, and issues an early warning signal when the vehicle is about to depart from the lane line.
The vehicle-mounted power supply delivers a constant 5 V, through the step-down voltage-regulating circuit, to the FPGA core board and the digital camera. The digital camera captures images and transfers the data to the FPGA core board; after acquiring and analysing the image, the FPGA core board computes a steering angle and outputs a control signal through the steering control interface to control the steering. At the same time it reports the running state of the vehicle to the vehicle-mounted display panel through the user interface, and sends to the user interface the signals that the driver-assistance function is ready, has started, or has been terminated.
The FPGA core board, as the central processing unit, analyses the lane-line image, determines the relative position and orientation of the vehicle with respect to the lane lines, plans the driving route and issues the corresponding steering commands.
Compared with the prevailing serial computing schemes, in which a computer, possibly combined with dedicated circuits, performs the image processing, the present invention applies image processing algorithms such as gray-scale dilation and square-wave convolution to widen the lane lines and give them ramp-shaped borders, thereby establishing a relation between the binarized lane-line width and the binarization threshold. The measured lane width is fed back to the image binarization threshold, which is adjusted accordingly, so that the threshold adapts automatically to a wide range of complex lighting and road-surface conditions.
The present invention adopts a simple circular-arc route-planning method: the curvature of the driving arc is computed directly from the position of the lane center at the far end of the field of view. The method is easy to implement and responds to the lane lines accurately and quickly.
The present invention uses an FPGA (Field Programmable Gate Array) chip to process the image in parallel, row by row, which markedly increases the image processing and control response speed.
The present invention has the following outstanding advantages:
1) It improves the lane-line recognition capability of the vision-aided driving device under complex lighting and road-surface conditions.
2) It improves the route-planning capability of the vision-aided driving device on curves.
3) It increases the image processing and control response speed of the vision-aided driving device.
Description of drawings
Fig. 1 is a structural schematic of the embodiment of the invention.
Fig. 2 is a schematic of the step-down voltage-regulating circuit of the embodiment of the invention.
Fig. 3 is a schematic of the adaptive lane-line recognition algorithm of the embodiment of the invention.
Fig. 4 shows the effect of gray-scale dilation followed by square-wave convolution filtering in the embodiment of the invention.
Fig. 5 is a schematic of the driving-route planning of the embodiment of the invention.
Embodiment
The following embodiment further illustrates the present invention with reference to the accompanying drawings.
Referring to Figs. 1-5, the embodiment of the invention is provided with a vehicle-mounted power supply 1, a step-down voltage-regulating circuit 2, a vehicle-mounted display panel 3, an FPGA core board 4, a digital camera 5 and a steering control interface 6. The FPGA core board 4 is provided with a power module, a crystal oscillator, an FPGA chip and input/output interfaces. The input of the step-down voltage-regulating circuit 2 is connected to the output of the vehicle-mounted power supply 1; the output of the step-down voltage-regulating circuit 2 is connected to the power-module input of the FPGA core board 4 and to the input of the digital camera 5; an output interface of the FPGA core board 4 is connected to the input of the vehicle-mounted display panel 3; the output of the digital camera 5 is connected to an input interface of the FPGA core board 4; and the output of the steering control interface 6 is connected to an input interface of the FPGA core board 4.
The FPGA core board, as the central processing unit, analyses the lane-line image, determines the relative position and orientation of the vehicle with respect to the lane lines, and issues an early warning signal when the vehicle is about to depart from the lane line.
The device operates according to the following flow:
(1) After the device is powered on, each module is initialized and the device begins to learn dynamic lane parameters such as the lane width W_0 and the lane-line width W_1.
(2) Once the reliability of lane-line recognition is determined to reach the required standard, the device sends, via the user interface, the signal that the driver-assistance function is ready.
(3) The device waits for the user to confirm starting the driver-assistance function.
(4) Lane-line recognition and automatic steering control begin.
(5) When unusual lighting or road-surface conditions are encountered and the FPGA core board determines that the reliability of lane-line recognition has fallen below the standard, it sends a signal to the user interface that the driver-assistance function has been terminated.
(6) When the unusual lighting or road-surface conditions have passed and the FPGA core board determines that the reliability of lane-line recognition again reaches the standard, it begins to relearn the dynamic lane parameters such as the lane width and the lane-line width.
(7) It then signals to the user interface that the driver-assistance function can be restarted.
The core functions and key techniques of the present invention are lane-line recognition and the generation of the steering control signal. The concrete steps that realize these core functions are as follows:
1) After powering on, the digital camera 5 captures road images at a fixed frame rate and transmits them, row by row, to the FPGA core board 4.
2) The FPGA core board 4 receives the single row of image data described in step 1) and analyses it according to the flow shown in Fig. 3.
3) First, gray-scale dilation is applied to the image row of step 2) according to equation (1), which increases the width of the lane lines:
p_i = max{p_k}, i-7 ≤ k ≤ i+8   (1)
where p_i and p_k are the gray values of pixels i and k, and max is the maximizing function.
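As an illustration only, a minimal software sketch of the dilation of equation (1) might look as follows (Python is used purely for exposition; in the device this step runs in FPGA logic, and clamping the window at the row ends is an implementation assumption):

```python
def gray_dilate_row(row, left=7, right=8):
    """Gray-scale dilation of one image row per equation (1):
    p_i = max{p_k}, i-7 <= k <= i+8, which widens thin bright lane lines."""
    n = len(row)
    # For each pixel, take the maximum gray value over the asymmetric window,
    # clamping the window at the ends of the row (an assumed border policy).
    return [max(row[max(0, i - left):min(n, i + right + 1)]) for i in range(n)]
```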
4) The image row produced in step 3) is then convolved with a square wave according to equation (2), which suppresses noise and produces the profile shown in Fig. 4.
Here p_i and p_k are the gray values of pixels i and k, and ⊕ is the logical exclusive-or operation. W_0, W_1 and W_2 in Fig. 4 are the lane widths obtained with the ideal binarization threshold, a threshold that is too high, and a threshold that is too low, respectively. The square-wave convolution makes the computed lane width depend directly on the binarization threshold, so the threshold can be adjusted adaptively. The lane width W_0 obtained under the ideal binarization threshold is called the standard lane width. The standard lane width W_0 is learned from the road surface after the driving-assistance device starts and is used in the adaptive adjustment of the binarization threshold; see step 12) for details.
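The body of equation (2) is not reproduced in the text, so the following is only a hedged sketch of the square-wave convolution of step 4: convolving the dilated row with a rectangular (box) kernel turns the widened lane line into a trapezoid with ramp-shaped flanks, so that the width of the region above the binarization threshold varies with the threshold. The 16-pixel kernel width is an assumption.

```python
def square_wave_filter_row(row, width=16):
    """Box (square-wave) filtering of one dilated image row: a sketch of
    step 4 under the assumptions stated above."""
    n = len(row)
    half = width // 2
    out = []
    for i in range(n):
        window = row[max(0, i - half):min(n, i + half)]
        out.append(sum(window) // max(1, len(window)))  # local box average
    return out
```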
5) The gray-scale row produced in step 4) is binarized using the binarization threshold handed down from the previous row.
6) In the binarized row of step 5), starting from the lane center M_0 handed down from the previous row, the lane-line boundaries are searched for towards the left and towards the right. The search convolves the image with leftward and rightward step functions as shown in equations (3) and (4), where E_L(i) and E_R(i) are the left and right boundary values of pixel i, p_k is the image value of pixel k, and ⊕ is the logical exclusive-or operation. Searching towards the left and towards the right respectively, the pixels at which E_L(i) and E_R(i) exceed 12 are marked as the left lane-line boundary LB and the right lane-line boundary RB.
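Since equations (3) and (4) are not reproduced in the text, the sketch below only approximates E_L(i) and E_R(i) as step-edge correlations: the number of lane-line (1) pixels on the outer side of position i minus the number on the inner side, over an assumed 16-pixel window. The threshold of 12 is the value given in the text.

```python
def find_boundaries(row_bits, m0, win=16, thresh=12):
    """Search left and right from the inherited lane centre M0 for the first
    pixels whose step-edge score exceeds the threshold; a sketch of step 6
    under the stated assumptions."""
    n = len(row_bits)

    def edge_score(i, direction):
        # Lane-line (1) pixels on the outer side of i minus those on the
        # inner side, over a window of `win` pixels on each side.
        outer = inner = 0
        for k in range(1, win + 1):
            jo = i + direction * k   # away from the lane centre
            ji = i - direction * k   # towards the lane centre
            if 0 <= jo < n:
                outer += row_bits[jo]
            if 0 <= ji < n:
                inner += row_bits[ji]
        return outer - inner

    lb = rb = None
    for i in range(m0, -1, -1):      # search left for the left boundary LB
        if edge_score(i, -1) > thresh:
            lb = i
            break
    for i in range(m0, n):           # search right for the right boundary RB
        if edge_score(i, +1) > thresh:
            rb = i
            break
    return lb, rb
```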
7) Once the boundary of the left or the right lane line has been found, template matching is applied to the lane line to confirm the reliability of the recognition. The matching is carried out according to equations (5) and (6), where V_L and V_R are the template-matching values of the left and right lane lines, LB and RB are the left and right lane-line boundaries found in step 6), p_k is the gray value of pixel k, ⊕ is the logical exclusive-or operation, and WD is the width of the lane line, obtained in the road-surface learning phase (see step 13) for details). If the resulting V_L and V_R are greater than a preset threshold, the left lane-line boundary LB and the right lane-line boundary RB are accepted as usable; otherwise the value of LB or RB is discarded.
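Equations (5) and (6) are likewise not reproduced, so the following sketch scores a candidate boundary simply by counting agreements with an ideal template: WD lane-line (1) pixels starting at the boundary, flanked by road (0) pixels. The flank width is an assumed parameter, and only one orientation is scored; the other boundary mirrors it.

```python
def template_match(row_bits, boundary, wd, margin=4):
    """Count how well the binarized row agrees with an ideal lane-line
    template of width `wd` at `boundary`; a sketch of step 7's V_L / V_R."""
    n = len(row_bits)
    score = 0
    for k in range(wd):                  # pixels that should belong to the lane line
        j = boundary + k
        if 0 <= j < n and row_bits[j] == 1:
            score += 1
    for k in range(1, margin + 1):       # pixels just inside that should be road
        j = boundary - k
        if 0 <= j < n and row_bits[j] == 0:
            score += 1
    return score                         # accept the boundary if score > preset threshold
```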
8) The width and the center M of the lane are computed from the left lane-line boundary LB and the right lane-line boundary RB found and confirmed in steps 6) and 7). When the left or the right boundary cannot be found temporarily, because the lane line is interrupted or because of lighting or road-surface conditions, the lane width is computed with equation (7), where RB and LB are the right and left lane-line boundaries found in step 6), M_0 is the lane center found in the previous row, and W_0 is the standard lane width.
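The exact fall-back expression of equation (7) is not reproduced in the text; the sketch below gives one plausible reading, in which a missing boundary is replaced using the standard lane width W_0 and the previous-row centre M_0.

```python
def lane_width_and_center(lb, rb, m0, w0):
    """Lane width and centre from the confirmed boundaries, with fall-backs
    when one or both boundaries are missing; a sketch of step 8 under the
    stated assumption about equation (7)."""
    if lb is not None and rb is not None:
        return rb - lb, (lb + rb) // 2
    if lb is not None:            # right boundary missing: assume standard width
        return w0, lb + w0 // 2
    if rb is not None:            # left boundary missing: assume standard width
        return w0, rb - w0 // 2
    return w0, m0                 # neither found: keep the previous-row centre
```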
9) The computed width is compared with the standard width W_0. If it exceeds the standard width by more than a set range, the threshold used when binarizing the next row is reduced by 1-2 gray levels; otherwise the threshold is increased by 1-2 gray levels. The standard lane width W_0 is obtained in the learning phase after the self-adaptive driving-assistance device starts; see step 12) for details.
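A sketch of the threshold feedback of step 9 follows; the "otherwise" branch is read here as the symmetric case in which the width falls short of the standard width, and the tolerance of 3 pixels and step of 1 gray level stand in for the text's "certain range" and "1-2 gray levels".

```python
def adjust_threshold(threshold, measured_width, w0, tolerance=3, step=1):
    """Feed the measured lane width back into the binarization threshold used
    for the next row; a sketch of step 9 under the stated assumptions."""
    if measured_width > w0 + tolerance:
        return threshold - step   # width exceeds W0 by more than the range: lower the threshold
    if measured_width < w0 - tolerance:
        return threshold + step   # width below W0 by more than the range: raise the threshold
    return threshold              # within range: keep the threshold unchanged
```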
10) When processing reaches the last row, the resulting lane-center position M is passed to the steering control interface. From Fig. 5, the radius R (13) of the planned driving arc is computed with equation (8), where a (14) is the distance by which the lane center is offset from the vehicle centerline and b (15) is the distance from the far end of the field of view to the front wheels; b is measured in advance. The offset a can be computed from the lane center M with equation (9), where D is the width of the view-field image, M is the lane center, and λ is the proportionality constant of the perspective distortion, which can be measured in advance.
Fig. 5 shows the field of view after the perspective distortion has been eliminated.
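Equations (8) and (9) are not reproduced in the text, so the sketch below rests on two assumptions: equation (9) is taken as a = λ(M - D/2), the lane-centre offset scaled by the perspective constant, and equation (8) is taken as the standard chord construction R = (a² + b²)/(2a) for a circular arc that starts at the front wheels and passes through the lane centre at look-ahead distance b.

```python
import math

def plan_arc_radius(m, d, lam, b):
    """Radius of the planned driving arc from the lane-centre column M; a
    sketch of step 10 under the assumed forms of equations (8) and (9).
    The sign of the result indicates the turning direction."""
    a = lam * (m - d / 2.0)              # lateral offset (assumed equation (9))
    if a == 0.0:
        return math.inf                  # lane centre straight ahead: drive straight
    return (a * a + b * b) / (2.0 * a)   # planned arc radius (assumed equation (8))
```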
11) From the planned arc radius R (13) computed in step 10) and the front-to-rear wheel spacing S of the vehicle, measured in advance, the reasonable steering angle α for the planned arc is computed approximately with equation (10). The steering control interface then sends the steering command to the vehicle control system according to the steering-angle value α.
When the steering angle α is computed from equations (8), (9) and (10), the only quantity that varies is the parameter a (14). Since the range of a is smaller than the width of the image, the corresponding steering angles α can be computed in advance and stored in a look-up table. During operation the device only needs to look up the value, with no real-time computation, which saves computing time and logic resources.
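The look-up table of step 11 can be illustrated as follows, using the relation α = S/R stated later in the description and the same assumed forms of equations (8) and (9) as in the sketch for step 10; the table is indexed by the lane-centre column M, which ranges over the image width D.

```python
import math

def build_steering_lut(d, lam, b, s):
    """Precompute the steering angle alpha = S / R for every possible
    lane-centre column M; at run time the device only indexes the table."""
    lut = []
    for m in range(d):
        a = lam * (m - d / 2.0)                                    # assumed equation (9)
        r = math.inf if a == 0.0 else (a * a + b * b) / (2.0 * a)  # assumed equation (8)
        lut.append(0.0 if math.isinf(r) else s / r)                # equation (10): alpha = S / R
    return lut

# Run-time use: steering_angle = lut[M]
```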
12) After the driving-assistance device is started, and before the user confirms starting the driver-assistance function, the device begins to learn the standard lane-width parameter W_0. The operations of steps 3)-7) are applied to the first row or the first few rows of a number of image frames, and the average lane width is taken as the standard lane-width parameter W_0, where N is the number of image frames collected and LB_i and RB_i are the left and right lane-line boundaries of the chosen row in the i-th frame.
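The averaging formula itself is not reproduced in the text; the plain mean of RB_i - LB_i over the N frames is assumed in the sketch below.

```python
def learn_standard_lane_width(boundaries):
    """Standard lane width W0 as the average of the widths RB_i - LB_i
    measured in the chosen row of N frames; a sketch of step 12.
    `boundaries` is a list of (LB_i, RB_i) pairs, one per frame."""
    widths = [rb - lb for lb, rb in boundaries]
    return sum(widths) / len(widths)
```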
13) Building on step 12), the standard lane-line width parameter W_1 is learned. For each acquired frame i, the optimal lane-line width parameter WD_i is searched for, and the standard lane-line width parameter W_1 is obtained from these values:
WD_i = argmax{V_L(WD_i) + V_R(WD_i)}
where V_L and V_R are the template-matching values defined in equations (5) and (6), argmax returns the value of WD_i at which V_L(WD_i) + V_R(WD_i) is maximal, and N is the number of frames used in the computation.
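The formula combining the per-frame widths WD_i into W_1 is not reproduced; their average over the N frames is assumed in the sketch below, and `match_score(frame, wd)` stands in for the template-matching sum V_L(WD) + V_R(WD) of equations (5) and (6).

```python
def learn_standard_line_width(frames, candidate_widths, match_score):
    """Standard lane-line width W1: per frame, pick the candidate width with
    the best template-matching score (the argmax of step 13), then average
    over the N frames (an assumed combining rule)."""
    wds = [max(candidate_widths, key=lambda wd: match_score(frame, wd))
           for frame in frames]
    return sum(wds) / len(wds)
```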
14) If unusual lighting or road-surface conditions are encountered, so that the lane lines cannot be tracked by steps 3)-8) over several (1-3) consecutive frames, the device sends an alarm signal to the user interface that the driver-assistance function has been terminated, and may at the same time reduce the driving speed to remind the driver to take over the steering wheel manually. The lane lines are judged untrackable when any one of the following conditions holds (a sketch of this check follows the list):
(A) In one image frame, more than a certain number of rows contain no left lane line or no right lane line.
(B) The lane width found by the search is larger or smaller than the standard lane width by more than a certain range.
(C) The lane center found by the search deviates from the lane center of the previous-row image by more than a certain range.
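The numeric limits in the sketch below are assumptions; the text only speaks of "a certain number" of rows and "a certain range".

```python
def tracking_lost(rows_missing, total_rows, width, w0, center, prev_center,
                  miss_limit=0.3, width_limit=10, center_limit=20):
    """Reliability test of step 14: any single condition marks the lane lines
    as untrackable for this frame."""
    if rows_missing > miss_limit * total_rows:       # condition (A)
        return True
    if abs(width - w0) > width_limit:                # condition (B)
        return True
    if abs(center - prev_center) > center_limit:     # condition (C)
        return True
    return False
```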
The present invention discloses a self-adaptive visual lane-tracking and driving-assistance device. Where clear lane-line markings are present, it works together with the steering control and the display equipment to capture and recognize the lane lines and to steer the vehicle automatically. The adaptive lane-line binarization method allows the device to work normally under fairly complex lighting and road-surface conditions. A simple circular-arc route-planning method computes the curvature of the planned circular route directly from the lane center at the far end of the field of view and uses it to control the steering in real time. The invention is simple to implement, stable in effect, and responds to the lane lines accurately and quickly. An FPGA (Field Programmable Gate Array) chip is used as the main processor and processes the image in parallel, so the device has faster image processing and control response than ordinary driving-assistance devices based on serial processors such as computers or microprocessors.
The present invention plans a reasonable driving route from the relative position and orientation between the vehicle and the lane lines, automatically adjusts the steering angle of the vehicle, and makes the vehicle follow the lane center automatically.
In the lane-line recognition process, image processing algorithms such as gray-scale dilation and square-wave convolution widen the lane lines and give them ramp-shaped borders, establishing a relation between the binarized lane-line width and the binarization threshold. The measured lane width is fed back to the binarization threshold, which is adjusted accordingly, so that the threshold adapts automatically to complex lighting and road-surface conditions.
The time needed for the image pre-processing, binarization, lane-line recognition and threshold feedback of one row is less than the time the digital camera needs to capture and transmit one row, so the image is processed row by row in real time.
A circular-arc driving route is planned from the relative position and orientation between the lane lines and the vehicle, and the curvature of the arc is computed from the position of the lane center at the far end of the field of view.
The steering angle of the vehicle is computed directly as α = S/R from the curvature 1/R of the planned driving arc and the front-to-rear wheel spacing S of the vehicle, and the vehicle is controlled in real time.
When computing the steering angle α, a look-up table is used that gives the steering angle α for each lane-center position directly, saving computing time and logic resources.
The present invention automatically learns the dynamic lane parameters used by the adaptive lane-line recognition.
The method of automatically learning the dynamic lane parameters is: search for and recognize the lane lines in the first row or first few rows of a number of image frames (more than 100 frames), compute the lane width and search for the best-matching lane-line width, and take the averages of the lane width and the lane-line width over these frames as the standard lane width and standard lane-line width used in the adaptive recognition.
The present invention can judge that the reliability of lane-line recognition is below standard and, through the vehicle-mounted display panel, notify the user interface to remind the driver that the device is switching to the manual driving mode immediately.
The present invention judges that the reliability of lane-line recognition is below standard when any one of the following conditions holds:
(A) In one image frame, more than a certain number of rows contain no left lane line or no right lane line.
(B) The difference between the lane width found by the search and the standard lane width exceeds a certain range.
(C) The difference between the lane center found by the search and the lane center of the previous-row image exceeds a certain range.
The present invention can operate according to the following flow (referring to Fig. 2):
(1) After the device is powered on, each module is initialized and the device begins to learn road-surface parameters such as the lane width and the lane-line width.
(2) Once it is determined that the lane lines can be recognized accurately, the device signals through the user interface that the driver-assistance function is ready, and waits for the user to confirm starting the driver-assistance function.
(3) Lane-line recognition and automatic steering control begin.
(4) When unusual lighting or road-surface conditions are encountered and the FPGA core board determines that the reliability of lane-line recognition has fallen below the standard, it sends a signal to the user interface that the driver-assistance function has been terminated.
(5) When the unusual lighting or road-surface conditions have passed and the FPGA core board determines that the reliability of lane-line recognition again reaches the standard, it begins to relearn the road-surface parameters such as the lane width and the lane-line width.
(6) It then signals to the user interface that the driver-assistance function can be restarted.
The lane-line recognition function of the present invention is carried out in the following six steps:
(1) Apply gray-scale dilation (or another image processing method) to the image to widen the lane lines.
(2) Apply square-wave convolution (or another image processing method) to the image so that the lane lines acquire ramp-shaped borders.
(3) Binarize the image.
(4) Convolve the image with leftward and rightward step functions and search for the lane-line boundaries.
(5) Apply template matching to the lane lines that are found, to confirm the reliability of the recognition.
(6) Compare the lane width obtained with the standard lane width obtained in the learning process, and adjust the binarization threshold according to the comparison.
Claims (1)
1. A self-adaptive vision-aided driving device, characterized in that it is provided with a vehicle-mounted power supply, a step-down voltage-regulating circuit, a vehicle-mounted display panel, an FPGA core board, a digital camera and a steering control interface; the FPGA core board is provided with a power module, a crystal oscillator, an FPGA chip and input/output interfaces; the input of the step-down voltage-regulating circuit is connected to the output of the vehicle-mounted power supply; the output of the step-down voltage-regulating circuit is connected to the power-module input of the FPGA core board and to the input of the digital camera; an output interface of the FPGA core board is connected to the input of the vehicle-mounted display panel; the output of the digital camera is connected to an input interface of the FPGA core board; and the output of the steering control interface is connected to an input interface of the FPGA core board.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2013101457823A CN103198320A (en) | 2013-04-24 | 2013-04-24 | Self-adaptive vision-aided driving device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103198320A true CN103198320A (en) | 2013-07-10 |
Family
ID=48720852
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2013101457823A Pending CN103198320A (en) | 2013-04-24 | 2013-04-24 | Self-adaptive vision-aided driving device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103198320A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101477629A (en) * | 2008-12-29 | 2009-07-08 | 东软集团股份有限公司 | Interested region extraction process and apparatus for traffic lane |
TW201028311A (en) * | 2009-01-19 | 2010-08-01 | Univ Nat Taiwan Science Tech | Lane departure warning method and system thereof |
WO2011047508A1 (en) * | 2009-10-22 | 2011-04-28 | Tianjin University Of Technology | Embedded vision tracker and mobile guiding method for tracking sequential double color beacons array with extremely wide-angle lens |
CN102320298A (en) * | 2011-06-09 | 2012-01-18 | 中国人民解放军国防科学技术大学 | Lane departure warning device based on single chip |
CN102592114A (en) * | 2011-12-26 | 2012-07-18 | 河南工业大学 | Method for extracting and recognizing lane line features of complex road conditions |
Non-Patent Citations (1)
Title |
---|
Zheng Xinqian et al.: "Design and Implementation of an FPGA-based Vision-Guided Vehicle", Journal of Xiamen University (Natural Science), vol. 51, no. 3, 31 May 2012, pages 331-335 *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105253062A (en) * | 2015-08-17 | 2016-01-20 | 深圳市美好幸福生活安全系统有限公司 | Automobile advanced driver assistance system-based image display system and implementation method thereof |
CN105253062B (en) * | 2015-08-17 | 2018-04-27 | 深圳市美好幸福生活安全系统有限公司 | Image display system and its implementation based on the advanced drive assist system of automobile |
CN108496178A (en) * | 2016-01-05 | 2018-09-04 | 御眼视觉技术有限公司 | System and method for estimating Future Path |
CN108496178B (en) * | 2016-01-05 | 2023-08-08 | 御眼视觉技术有限公司 | System and method for estimating future path |
CN106054886A (en) * | 2016-06-27 | 2016-10-26 | 常熟理工学院 | Automatic guiding transport vehicle route identification and control method based on visible light image |
CN106054886B (en) * | 2016-06-27 | 2019-03-26 | 常熟理工学院 | The identification of automated guided vehicle route and control method based on visible images |
CN109886122A (en) * | 2019-01-23 | 2019-06-14 | 珠海市杰理科技股份有限公司 | Method for detecting lane lines, device, computer equipment and storage medium |
CN110516550A (en) * | 2019-07-26 | 2019-11-29 | 电子科技大学 | A kind of lane line real-time detection method based on FPGA |
CN110516550B (en) * | 2019-07-26 | 2022-07-05 | 电子科技大学 | FPGA-based lane line real-time detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20130710 |