WO2013149149A1 - Method to identify driven lane on map and improve vehicle position estimate - Google Patents
Method to identify driven lane on map and improve vehicle position estimate
- Publication number
- WO2013149149A1 (PCT/US2013/034617)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- lane
- image data
- data
- location
- Prior art date: 2012-03-29
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3602—Input other than that of destination using image analysis, e.g. detection of road signs, lanes, buildings, real preceding vehicles using a camera
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/48—Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
- G01S19/49—Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system whereby the further system is an inertial position system, e.g. loosely-coupled
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/167—Driving aids for lane monitoring, lane changing, e.g. blind spot detection
Definitions
- In step 108, the vehicle's current lane of travel is determined. Step 108 may occur before, after, or during the execution of steps 102, 104, and 106.
- the determination of the current lane of travel may be made using one or more optical sensors configured to generate an image of the road upon which the vehicle is currently traveling.
- the optical imaging device (e.g., a lidar or camera) can identify a number of lane markings present in front of the vehicle. Those lane markings can then be analyzed to determine the current lane of travel.
- a position of the vehicle on the road may be determined and that position upon the road may be utilized to determine a location of the vehicle.
- the analysis of the lane markings may involve making a determination as to the slope of the identified lane markings to ensure that the slopes are within a predefined acceptable boundary. Similarly, the width of the lane markings can be analyzed to ensure that they meet expected values. Additionally, because the lane markings should not vary from frame to frame as captured by the imaging devices, detected markings that move too much between frames may be ignored as noise.
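As an illustration of these gating checks, the sketch below tests a candidate segment against the one accepted in the previous frame; the disclosure compares against the previous frame's 2nd order polynomial model, which the previous segment stands in for here, and all threshold values are assumed.

```python
def plausible_marking(line, prev_line, slope_tol=0.2, max_midpoint_shift=25.0):
    """Gate a candidate lane-marking segment (x1, y1, x2, y2) using the two
    criteria above: similar slope and limited frame-to-frame motion."""
    (x1, y1, x2, y2), (px1, py1, px2, py2) = line, prev_line
    if x2 == x1 or px2 == px1:
        return False  # reject vertical segments to avoid division by zero
    slope = (y2 - y1) / (x2 - x1)
    prev_slope = (py2 - py1) / (px2 - px1)
    if abs(slope - prev_slope) > slope_tol:
        return False  # criterion 1: slope must agree with the last frame
    mid = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    prev_mid = ((px1 + px2) / 2.0, (py1 + py2) / 2.0)
    shift = ((mid[0] - prev_mid[0]) ** 2 + (mid[1] - prev_mid[1]) ** 2) ** 0.5
    return shift <= max_midpoint_shift  # criterion 2: limited image motion
```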
- Having calculated an estimated position of the vehicle (step 102), the closest mapped road (step 104), the direction of travel (step 106), and the lane position (or, alternatively, a road position) (step 108), these data points can then be combined in step 110 to calculate a more accurate location for the vehicle than was calculated in step 102.
- Fig. 8 depicts one algorithm for combining the values generated by the method of Fig. 7 to identify a more accurate location for a vehicle.
- the initial vehicle estimate is shown at point 802. This initial estimate may correspond to that generated in step 102 of Fig. 7 and may be generated by a conventional vehicle location system, such as a global navigation satellite system, for example.
- road 804 is determined to be the closest mapped road to the vehicle's estimated location.
- road 804 is a depiction of a portion of a roadway including four lanes, with two lanes being allocated for travel in each direction. In each direction, there is an outside or passing lane (furthest away from the edge of the road 804), and an inside lane (closest to the edge of the road 804).
- a direction of travel of the vehicle is determined (e.g. in accordance with step 106 of Fig. 7). In the example depicted in Fig. 8, the determined direction of travel of the vehicle is indicated by arrow 806.
- a point 810 is identified that is the shortest distance from the vehicle's estimated location 802 and located on the centerline of the current lane of travel 808. That identified point 810 can then be used as the vehicle's updated current location. Because point 810 incorporates both information captured from a conventional location system and location data based upon a current lane of travel of the vehicle, point 810 represents a more accurate location for the vehicle than point 802. In other words, point 802 becomes a first estimate for the vehicle location that is further constrained by the lane of travel data associated with the vehicle.
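A minimal sketch of this snapping step is given below, assuming the map provides the centerline of the current lane of travel as a polyline of (north, east) points; the function and variable names are illustrative.

```python
import math

def snap_to_lane_centerline(p, centerline):
    """Return the point on a lane-centerline polyline closest to the
    estimated position p = (north, east), i.e. point 810 in Fig. 8,
    together with its distance from p."""
    best, best_d2 = None, float("inf")
    for m1, m2 in zip(centerline, centerline[1:]):
        dn, de = m2[0] - m1[0], m2[1] - m1[1]
        seg2 = dn * dn + de * de
        if seg2 == 0.0:
            continue  # skip degenerate (duplicate) map points
        # Normal projection onto the segment, clamped to its endpoints.
        mu = ((p[0] - m1[0]) * dn + (p[1] - m1[1]) * de) / seg2
        mu = max(0.0, min(1.0, mu))
        q = (m1[0] + mu * dn, m1[1] + mu * de)
        d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        if d2 < best_d2:
            best, best_d2 = q, d2
    return best, math.sqrt(best_d2)
```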
- once a current lane of travel for the vehicle has been determined, that data point can be used to provide the driver with additional useful information, such as warnings about upcoming hazards located in the vehicle's current lane of travel.
- the vehicle's current lane of travel can be used to determine whether the vehicle is located in the correct turning lane. If not, the driver can be notified that the vehicle must change lanes in anticipation of an upcoming turn.
- although Fig. 8 depicts one example implementation for updating a vehicle's estimated location using lane-of-travel data, other algorithms may be used for combining that data.
- Fig. 9 is a block diagram illustrating functional elements of a system for determining a vehicle's location in accordance with the present disclosure.
- System 900 includes GPS receiver 902 configured to receive satellite signals and convert those signals into coarse position data for the vehicle.
- although Fig. 9 depicts a GPS receiver, other types of satellite-based navigation systems exist and may be used in conjunction with the present system and method.
- GPS receiver 902 is configured to transmit the coarse position data to a central processing unit 904.
- System 900 may also include an inertial measurement unit (IMU) 906.
- IMU 906 is also configured to transmit data to central processing unit 904.
- IMU 906 comprises a six degree-of-freedom inertial measurement unit configured to send acceleration and attitude data in response to determining motion and an orientation of the vehicle.
- Camera 908 is configured to capture visual data and transmit at least a portion of that visual data to central processor 904. That visual data can be utilized, for example, to determine a current lane of travel of a particular vehicle as described herein.
- camera 908 has a field of view matched to at least a portion of an outdoor environment of a vehicle. Camera 908 is configured to send camera data in response to observing that outdoor environment.
- system 900 includes light detecting and ranging (LIDAR) device 910 having a field of view matched to at least a portion of the outdoor environment and configured to send LIDAR data in response to observing the outdoor environment.
- in some implementations, either camera 908 or LIDAR device 910 is present, though in other implementations, both devices may be present and work in collaboration with one another to observe the space about a perimeter of the vehicle.
- Processor 904 is configured to receive data from one or more of LIDAR 910, Camera 908, IMU 906, and GPS receiver 902 to determine a location of the vehicle. As described herein, this may involve updating or refining an initial estimate of the vehicle location with lane data captured from one or more of LIDAR 910 and camera 908.
- Map database 912 stores location information for a number of roads and other points of interest in a given geographical region. Map database 912 may store the location information for the roads and points of interest using absolute coordinates, though other approaches for storing the data in the map database may be utilized.
- Fig. 10 is a flowchart depicting an example method by which a vehicle location system can refine an estimated position of a vehicle.
- a number of threshold values are used to limit and/or control the use of data captured from a number of different sources.
- the various thresholds may be determined based upon a history of data received from the designated source, expected value ranges for certain behaviors, time in each range, and other operations.
- in step 1014, a position of the vehicle within a current lane of travel is calculated. This calculation may be performed based upon data captured from a number of sensor systems, including LIDAR 1000 and/or an optical or infrared camera 1002.
- in step 1016, the coordinates of the position of the vehicle are estimated. This estimation may be performed using data captured from a GPS system 1004, an inertial measurement unit 1006 that may include a number of accelerometers and/or gyroscopes, wheel speed sensor(s) 1008, and/or map data describing lane information (such as lane locations and geometry).
- in step 1030, the vehicle's lane changes are detected.
- a lane change may be detected using data generated from step 1016 (after being filtered through threshold 1022), IMU 1006 (after being filtered through threshold 1024), wheel speed sensor(s) 1008 (after being filtered through threshold 1026), steering angle sensor 1010 (after being filtered through threshold 1028), and/or step 1014 (after being filtered through threshold 1020).
- in step 1032, an improved lane estimate for the vehicle is calculated using the data from step 1030 and step 1018.
- in step 1034, an improved position estimate for the vehicle is determined using data from step 1014 and step 1032.
- the current driven lane of the vehicle can be stored in step 1036.
- the current driven lane can then be used in step 1018, in combination with data from step 1016 and map 1012 to generate another estimate of the current driven lane of the vehicle.
- a number of different logic routines may be employed to monitor various sensor conditions to determine whether a vehicle has made a lane change (for example, using the method illustrated in Fig. 10).
- the following logic pseudocode, for example, describes a process by which a number of conditions may be analyzed to determine whether a vehicle has made a lane change to the left. Similar logic may be used to determine whether the vehicle has made a lane change to the right.
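By way of a hypothetical illustration (all signal names and threshold values below are assumed, not taken from the disclosure), such condition logic for a left lane change might take the following form; mirroring the comparisons yields the right-lane-change test.

```python
def left_lane_change_detected(lateral_offsets, yaw_rate, steering_angle,
                              lane_width, yaw_rate_thresh=0.05,
                              steer_thresh=2.0, offset_fraction=0.5):
    """Hypothetical left-lane-change test combining thresholded sensor cues.

    lateral_offsets: recent lateral positions in the lane (m), positive
        toward the left lane marking.
    yaw_rate: filtered gyro yaw rate (rad/s), positive for a left turn.
    steering_angle: filtered steering angle (deg), positive to the left.
    """
    drift = lateral_offsets[-1] - lateral_offsets[0]
    # Condition 1: sustained leftward drift across a significant lane fraction.
    drifting_left = drift > offset_fraction * lane_width
    # Condition 2: yaw rate consistent with a leftward maneuver.
    yawing_left = yaw_rate > yaw_rate_thresh
    # Condition 3: steering input consistent with a leftward maneuver.
    steering_left = steering_angle > steer_thresh
    # Declare a lane change only when independent cues agree.
    return drifting_left and (yawing_left or steering_left)
```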
- the test route was designed with two main aims: 1) to cover a variety of environment types that have characteristic effects on positioning performance, and 2) to be representative of typical U.S. driving in the proportioning of environment type selections.
- the environmental features considered important here for their influence on GNSS-based positioning accuracy and for their commonality are trees, tree canopies, mountains, overpasses, buildings, urban canyons, and tunnels. These features, along with other helpful attributes, are used to define seven sufficiently distinct environments in Table 1.
- the test route covered Open and Urban environment types.
- the test route was selected to include roads that closely match (within ~5%) the urban road-use class proportioning in the U.S. found by the highway administration.
- the resulting 46-mile test route is shown in Fig. 2.
- Test timing is a critical component of the test plan. Since the satellite configuration, as seen by the receiver on the ground, repeats every 24 hours, it is desirable for repeatable results to have the testing span 24 hours. For procedural convenience, the 24-hour period is divided into 3 shifts. Each shift is on a different day (thus allowing a large break from test driving) and spans 10 hours, with 2-hour overlaps with respect to time of day, for a total desired span of 24 hours.
- the vehicle used for these tests was a four door sedan.
- the GPS antenna for the NovAtel receiver and the reference receiver was mounted on the center of the roof, as seen in Fig. 3, while the u-blox GPS antenna was mounted on the trunk lid. Additionally, an external wheel odometer was mounted on the passenger rear wheel to aid the reference system. Both the camera and lidar used for lane detection were attached to a roof rack placed on the vehicle and centered laterally, also seen in Fig. 3. All data was logged using a full-sized PC located in the trunk.
- the production or near-production grade sensors included a Micro-Electro-Mechanical Systems (MEMS) grade Crossbow IMU 440, which is a full six degree of freedom IMU.
- production vehicles are not equipped with full six degree of freedom IMUs. Accordingly, a reduced set of IMU measurements was used in testing.
- the Crossbow measurements were recorded at 100 Hz.
- a u-blox LEA 6T GPS receiver (L1 frequency only) provided GPS pseudorange and pseudorange rate measurements at 1 Hz.
- a Logitech Quickcam 9000 recorded images at 10 Hz for lane marking detection. Longitudinal speed was obtained from wheel speed sensors reported by the on-board CAN. Measurements from the vehicle CAN were available at 50 Hz.
- the results are presented in the form of a plot of satellite availability, tables showing error statistics, and plots of navigation solutions on road maps.
- In Fig. 4, the number of visible satellites and the resulting horizontal dilution of precision (HDOP) are shown for the test route over three test runs. As seen, the number of visible satellites varies from twelve to one. Accordingly, the HDOP ranges from typical values (less than 2) to extreme values when fewer than four satellites are visible. The GPS positioning accuracy is compromised due to the reduced GPS satellite visibility and increased multipath reflections in portions of the test route.
- Fig. 5 is a zoomed view of one test run from Table 2.
- the GPS/INS solution (dashed lines) can be seen bridging several outages of the standalone GPS solution (solid lines).
- the GPS/INS performance improvements were seen in this scenario and in heavy foliage environments.
- the full vision integration system provided improvement in the urban areas of the test route, in particular as seen in Fig. 6.
- the full vision integrated system shows improvement over the GPS solution (solid lines).
- the present invention is a method for refining a determined location of a vehicle, comprising estimating a location of the vehicle using a global navigation satellite system, determining a closest mapped road to the estimated location of the vehicle, the closest mapped road having a plurality of lanes, determining a first lane in the plurality of lanes of the closest mapped road in which the vehicle is located, and refining the estimated location of the vehicle based upon a location of the first lane in the plurality of lanes of the closest mapped road.
- the present invention is a system comprising a vehicle location system configured to estimate a location of a vehicle using a global navigation satellite system, and a processor configured to determine a closest mapped road to the estimated location of the vehicle, determine a position on the closest mapped road in which the vehicle is located, and refine the estimated location of the vehicle based upon the position on the closest mapped road.
- the present invention is a system for determining a position of a vehicle in an outdoor environment.
- the system includes a global positioning system (GPS) receiver configured to send coarse position data in response to receiving a GPS signal from at least one GPS transmitter, a six degree-of-freedom inertial measurement unit configured to send acceleration and attitude data in response to determining motion and an orientation of the vehicle, and a camera having a field of view matched to at least a portion of the outdoor environment and configured to send camera data in response to observing the outdoor environment.
- the system includes a light detecting and ranging (LIDAR) device having a field of view matched to at least a portion of the outdoor environment and configured to send LIDAR data in response to observing the outdoor environment, and a processor configured to receive and analyze the coarse position data, the acceleration and attitude data, the camera data, and the LIDAR data to determine the position of the vehicle.
Abstract
A method for refining a determined location of a vehicle includes estimating a location of the vehicle using a global navigation satellite system, determining a closest mapped road to the estimated location of the vehicle, the closest mapped road having a plurality of lanes, determining a first lane in the plurality of lanes of the closest mapped road in which the vehicle is located, and refining the estimated location of the vehicle based upon a location of the first lane in the plurality of lanes of the closest mapped road.
Description
METHOD TO IDENTIFY DRIVEN LANE ON MAP
AND IMPROVE VEHICLE POSITION ESTIMATE
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to U.S. Provisional Patent Application 61/617,346 filed on March 29, 2012 and entitled "Method to Identify Driven Lane on Map and Improve Vehicle Position Estimate."
STATEMENT CONCERNING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
[0002] Not applicable.
BACKGROUND OF THE INVENTION
[0003] Estimating the global coordinates of ground vehicle position is important for valuable applications such as vehicle navigation and cooperative collision avoidance. Most commonly, the vehicle coordinates are obtained from a device that uses signals broadcast from a global navigation satellite system, such as GPS (that is, a Global Position System, which is operated by the United States government). Such a device, a GPS receiver, performs well in most outdoor environments, but there are environments, such as urban canyons and forests, that obstruct GPS signals and degrade a GPS receiver's positioning performance.
[0004] GPS receivers can be augmented with other sensors to improve their positioning performance in challenging environments. Some common aiding sensors or sensor systems are inertial-based, such as IMUs (inertial measurement units), INS (inertial navigation systems), accelerometers, and gyroscopes; and distance-measurement-based, such as vehicle odometers.
[0005] The assistance of GPS-based positioning systems using inertial sensors is well-established, but the resulting accuracy can be dependent on sensor quality and thus also costly. In some applications, such as cooperative vehicle avoidance, where relatively high accuracy is required but high cost is to be avoided, it can be difficult or infeasible to achieve the target accuracy using inertial aiding alone. Recently, vision-based aiding has shown potential, but its effectiveness in most approaches is still limited by the accumulation of position error over time. The only vision-based approach that does not suffer from this drawback, known as drift, is the one that uses pre-existing databases of environment imagery. The downside of that approach is that it requires collecting imagery for all locations that a vehicle might reach in all manner of conditions, which is quite infeasible. Furthermore, some vision technologies, such as lidars of sufficient resolution for accurate positioning, are still prohibitively expensive for automotive applications.
[0006] Maps can be used to constrain GPS errors by matching the GPS-derived position to the closest mapped road in the direction of travel. The deficiency of commonly available maps, such as those found in current vehicle or personal navigation systems, is that they only contain road-level geometry and thus can at best only improve the position to the level of the currently travelled road and not to the level of the currently travelled lane within that road. The road-level accuracy is sufficient for navigation applications, but lane-level accuracy is required for some cooperative collision avoidance applications, such as those warning of stopped or suddenly slowing vehicles ahead in the same lane.
BRIEF DESCRIPTION OF THE FIGURES
[0007] Fig. 1 illustrates a lidar reflectivity scan in combination with an image showing a roadway.
[0008] Fig. 2 illustrates a route through a geographical region.
[0009] Fig. 3 is an image showing an example mounting arrangement for a navigation system.
[0010] Fig. 4 is a plot depicting a number of visible satellites and the resulting horizontal dilution of precision (HDOP).
[0011] Fig. 5 is an overhead view of a portion of a test route.
[0012] Fig. 6 is an image depicting testing results for the present navigation system.
[0013] Fig. 7 is a flowchart depicting an example method by which a vehicle location system can determine a location of a vehicle.
[0014] Fig. 8 depicts an example algorithm for combining the values generated by the method of Fig. 7 to identify a more accurate location for a vehicle.
[0015] Fig. 9 is a block diagram illustrating functional elements of a system for determining a vehicle's location in accordance with the present disclosure.
[0016] Fig. 10 is a flowchart depicting an example method by which a vehicle location system can refine an estimated position of a vehicle.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0017] Intelligent vehicles of the future require precise and robust localization for improved safety and performance. Providing this position information typically falls to systems relying on some combination of GPS and inertial navigation systems (INS). The features and quality of these systems vary across the automotive industry and their performance differs across various environments. GPS is especially dependent on line of sight to satellites, and heavy presence of obstructions such as buildings or foliage can affect the accuracy of GPS. New production vehicles are starting to use additional sensors for safety systems, such as cameras and laser scanners. With these new sensors come new capabilities to be leveraged in advanced positioning solutions to combat the errors in GPS that arise due to obstructed views of the sky.
[0018] The integrated GPS/INS navigation system is a well-established localization architecture. In such an architecture, the GPS updates provide a means for bounding error growth in the INS solution. Recently, reduced inertial sensor systems (RISS) have been combined with vehicle speed measurements to provide pose estimates with limited acceleration and angular rate information. Moving forward, these systems will be augmented with information from vision based systems to aid GPS in error mitigation.
[0019] Light detection and ranging (lidar) and camera systems are used in detection of, or ranging to, distinct features in the environment. For vehicle positioning, the sensors are often used to detect street signs, buildings, or lane markings in the immediate vicinity of the vehicle. Lidar-based lane-marking detection has been performed using a histogram and histogram gradient feature extraction algorithms in various implementations. Lane departure warning (LDW) systems using cameras are
also currently available. These systems generally operate using a linear-parabolic lane model. Hough transform methods can be used to obtain road edges, and clothoid model parameters can be used for horizontal and vertical road parameter recognition.
[0020] The integration of global navigation solutions, such as those attained by GPS/INS systems, with local information from camera or lidar often requires additional knowledge of the environment in the form of maps. Using the available map data, vision-based measurements can be utilized to enhance accuracy and robustness of a GPS/INS system by fusing information in a particle filter.
[0021] The variety of sensors available on current and future production vehicles, and the expansive breadth of approaches to integrate the sensors, present a unique opportunity for a scalable navigation solution. The term "scalable" is used to highlight the fact that the present system would integrate a number of possible inputs (depending on the available sensor information) to provide estimates of the vehicle pose. The accuracy and reliability of such a system will be dependent on the sensor suite available on the vehicle. Accordingly, the present disclosure provides a navigation system that uses various sensors that are becoming common on production vehicles such as GPS, inertial sensors (accelerometers and gyros), and lane departure sensors (cameras or lidars).
[0022] In the present disclosure, a number of potential sensor configurations are presented, ranging from as little as two accelerometers and one gyroscope combined with L1-frequency-only GPS to a full six degree of freedom inertial measurement unit (IMU) with L1 and L2 frequency GPS accompanied by lateral positions from lidar and camera. The present navigation system may incorporate a map database of the lane markings that may be used for vision-based measurement integration. Vehicle wheel speed measurements provided by the on-board controller area network (CAN) of the test vehicle can also be incorporated. The present systems can be validated by a number of techniques, including qualitative analysis, by inspection of trajectories plotted on satellite imagery, and quantitative analysis, by comparison of trajectories produced by the present system to those provided by a reference system to derive position error statistics.
[0023] The present navigation algorithm may comprise an extended Kalman filter implementation. The following description contemplates three different system implementations, though other implementations may be utilized as would be readily apparent to a person of ordinary skill in the art. The first example system implementation utilizes a combined reduced inertial sensor system, L1 frequency GPS pseudorange and pseudorange rate measurements, and wheel speed measurements retrieved from an on-board CAN. The second example system implementation adds L2 frequency GPS measurements. The final example system implementation uses a six degree of freedom IMU instead of a reduced IMU and also adds lateral lane position updates from a camera and lidar. Each system implementation is discussed below.
[0024] In the first example system implementation, a relatively low-cost u-blox LEA 6T GPS receiver was integrated with two accelerometers, a single gyroscope, and vehicle speed measurements to form a baseline navigation system. The sensitive axes of the accelerometers were aligned with the longitudinal and lateral axes of the vehicle, and the sensitive axis of the gyroscope was aligned with the vertical axis of the vehicle.
[0025] The states of the present filter include the position of the vehicle in geodetic coordinates (latitude ($\lambda$), longitude ($\phi$), and altitude ($h$)), the velocity in a local tangent plane (north ($V_n$), east ($V_e$), and down ($V_d$)), the heading ($\psi$) of the vehicle relative to north in the local tangent plane, two accelerometer bias states ($b_{ax}$, $b_{ay}$), one gyroscope bias state ($b_{gz}$), GPS receiver clock bias ($\Delta t$) and drift ($\Delta \dot{t}$), and a tire slip state ($s$). Kinematic relationships can be used to formulate the dynamic equations for the propagation of the vehicle position states, where $R$ is the Earth's radius of curvature:

$$\dot{\lambda} = \frac{V_n}{R + h} \quad (1)$$

$$\dot{\phi} = \frac{V_e}{(R + h)\cos(\lambda)} \quad (2)$$

$$\dot{h} = -V_d \quad (3)$$

[0026] The dynamics of the velocities can be approximated by the accelerometer measurements, the expected measurement error, and the assumption that the vertical velocity is driven by zero mean white noise:

$$\dot{V}_n = (f_x - b_{ax})\cos(\psi) - (f_y - b_{ay})\sin(\psi) \quad (4)$$

$$\dot{V}_e = (f_x - b_{ax})\sin(\psi) + (f_y - b_{ay})\cos(\psi) \quad (5)$$

$$\dot{V}_d = 0 \quad (6)$$
[0027] The heading dynamics can then be defined relative to the angular velocity measured by the gyroscope, the measurement error, and the dynamics introduced by the rotation of the Earth:

$$\dot{\psi} = \omega_{gz} - b_{gz} - \omega_e \sin(\lambda) \quad (7)$$
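As a concrete illustration of the propagation in Eqs. (1)-(7), the sketch below performs one Euler integration step. It assumes a single effective Earth radius, holds the bias states constant over the step, and uses illustrative names throughout; it is not the disclosed filter implementation.

```python
import numpy as np

def propagate_riss_states(x, f_x, f_y, omega_z, dt,
                          R=6.378137e6, omega_e=7.292115e-5):
    """One Euler step for the reduced-inertial filter states of Eqs. (1)-(7).

    x = np.array([lat, lon, h, Vn, Ve, Vd, psi, b_ax, b_ay, b_gz]),
    with angles in radians. A single effective Earth radius R is used for
    brevity; a full implementation would use the meridian and transverse
    radii of curvature separately.
    """
    lat, lon, h, Vn, Ve, Vd, psi, b_ax, b_ay, b_gz = x
    ax, ay = f_x - b_ax, f_y - b_ay          # bias-corrected accelerations
    lat_dot = Vn / (R + h)                                    # Eq. (1)
    lon_dot = Ve / ((R + h) * np.cos(lat))                    # Eq. (2)
    h_dot = -Vd                                               # Eq. (3)
    Vn_dot = ax * np.cos(psi) - ay * np.sin(psi)              # Eq. (4)
    Ve_dot = ax * np.sin(psi) + ay * np.cos(psi)              # Eq. (5)
    Vd_dot = 0.0            # Eq. (6): vertical velocity driven only by noise
    psi_dot = omega_z - b_gz - omega_e * np.sin(lat)          # Eq. (7)
    rates = np.array([lat_dot, lon_dot, h_dot, Vn_dot, Ve_dot, Vd_dot,
                      psi_dot, 0.0, 0.0, 0.0])   # biases held constant here
    return x + rates * dt
```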
[0028] IMU biases can be assumed to be first-order Gauss-Markov processes, as is the tire slip state. The clock bias dynamics can be driven by the drift term, and the clock drift can be assumed to be driven by zero mean white noise.
[0029] The Kalman filter covariance matrix can be propagated using the linearized error dynamics derived from the dynamics described above. Note that in this formulation the vehicle roll and pitch can be assumed to be negligible as is the lever arm between the IMU and GPS antenna.
[0030] The INS navigation solution can be updated using both GPS and wheel speed measurements. In some cases, GPS pseudorange and pseudorange rate measurements can be used in this formulation. The INS solution can then be used to predict the pseudorange and pseudorange rate for each visible satellite at each GPS measurement epoch. The pseudorange prediction model can be as follows:

$$\hat{\rho} = \sqrt{(x_s - x)^2 + (y_s - y)^2 + (z_s - z)^2} + c\,\Delta t \quad (8)$$

where $x_s$, $y_s$, and $z_s$ are the Earth centered Earth fixed (ECEF) coordinates of the satellite and $x$, $y$, and $z$ are the estimates of the vehicle position resolved in the ECEF frame. The pseudorange rate prediction is given by:

$$\dot{\hat{\rho}} = \frac{(x_s - x)(V_{xs} - V_x) + (y_s - y)(V_{ys} - V_y) + (z_s - z)(V_{zs} - V_z)}{\sqrt{(x_s - x)^2 + (y_s - y)^2 + (z_s - z)^2}} + c\,\Delta \dot{t} \quad (9)$$

where $V_{xs}$, $V_{ys}$, and $V_{zs}$ are the velocities of the satellite in the ECEF frame, and $V_x$, $V_y$, and $V_z$ are the estimated vehicle velocities resolved in the ECEF frame.
[0031] The pseudorange and pseudorange rate predictions can be compared to the measured pseudorange and pseudorange rates to calculate the correction to the navigation solution. Using the linearized geometry matrix, derived by computing the partial derivative of the pseudorange and pseudorange rate models with respect to the state vector, the correction can be mapped into the state domain and applied to the navigation solution and the error covariance matrix can be updated.
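A minimal sketch of this innovation computation for a single satellite follows (Eqs. (8)-(9)); identifiers are illustrative, and the full filter would also assemble the linearized geometry matrix from the unit line-of-sight vectors.

```python
import numpy as np

C_LIGHT = 299792458.0  # speed of light (m/s)

def pseudorange_innovations(sat_pos, sat_vel, p_ecef, v_ecef,
                            clk_bias, clk_drift, rho_meas, rho_rate_meas):
    """Measured-minus-predicted pseudorange and pseudorange-rate residuals
    for one satellite, per Eqs. (8)-(9). Positions and velocities are ECEF
    (m, m/s); clk_bias is in seconds and clk_drift in s/s."""
    los = sat_pos - p_ecef              # line of sight, vehicle to satellite
    r = np.linalg.norm(los)
    u = los / r                         # unit line-of-sight vector
    rho_pred = r + C_LIGHT * clk_bias                             # Eq. (8)
    rho_rate_pred = u @ (sat_vel - v_ecef) + C_LIGHT * clk_drift  # Eq. (9)
    return rho_meas - rho_pred, rho_rate_meas - rho_rate_pred
```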
[0032] An additional state update can be performed using the wheel speed measurement provided by the vehicle CAN. The CAN data provides a change in wheel position in the form of an encoder count differential. This measurement can be converted to a measurement of wheel speed using the wheel radius and the number of counts per revolution. The two rear wheel speeds can then be averaged to form one observable. This value can then be compared to the wheel speed predicted by the INS solution. The model for the wheel speed prediction contains the estimated north and east velocities and the estimated tire slip.
$$\hat{V}_{ws} = \left(V_n^2 + V_e^2\right)^{1/2}(1 + s) \quad (10)$$
[0033] The correction to the position solution is computed by calculating the difference between the predicted and measured wheel speed and mapping that error into the state domain using the geometry matrix derived from the partial derivative of the prediction with respect to the state vector. Note also that the stationary condition (i.e., the measured wheel speed is zero) allows for improved estimates of the inertial biases. Called a zero velocity update, additional corrections can be made to the bias states using the knowledge that the vehicle is static.
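The wheel-speed observable and its prediction (Eq. (10)) might be computed as in this sketch; the wheel radius and encoder resolution are vehicle-specific parameters assumed here.

```python
import math

def wheel_speed_observable(count_diff_rl, count_diff_rr, dt,
                           wheel_radius, counts_per_rev):
    """Convert rear-wheel encoder count differentials from the CAN into the
    averaged wheel-speed observable described above (m/s)."""
    meters_per_count = 2.0 * math.pi * wheel_radius / counts_per_rev
    v_rl = count_diff_rl * meters_per_count / dt
    v_rr = count_diff_rr * meters_per_count / dt
    return 0.5 * (v_rl + v_rr)

def predicted_wheel_speed(v_north, v_east, slip):
    """Eq. (10): wheel speed predicted from the estimated horizontal
    velocity and the tire slip state."""
    return math.hypot(v_north, v_east) * (1.0 + slip)
```

The innovation for the update is the difference between the observable and the prediction; a zero-valued observable additionally triggers the zero velocity update described above.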
[0034] Unlike with the reduced inertial navigation algorithm, when the full six degree of freedom IMU is used, the total attitude (i.e., roll, pitch, and heading) can be estimated. Also, rather than navigating in the local tangent plane (used to minimize complexity of mapping the IMU to the navigation frame), the ECEF frame can be selected as the navigation frame (minimizing the complexity of mapping the GPS to the navigation frame).
[0035] The change of navigation frames impacts the dynamic equations used in the propagation step of the Kalman filter. A rotation matrix can be maintained to map accelerometer measurements from the vehicle body frame to the navigation frame. The dynamics of this rotation matrix, $C_b^e$, can be given by:

$$\dot{C}_b^e = C_b^e\,\Omega_{ib} - \Omega_{ie}\,C_b^e \quad (11)$$

where $\Omega_{ib}$ is the measured angular velocity of the body frame relative to an inertial frame and $\Omega_{ie}$ is the rotation rate of the Earth, both in skew-symmetric matrix form. Note that the gyroscope biases must be removed.
[0036] The velocity dynamics can be written in vector form as a function of the three-axis accelerometer measurements, accounting for the effects of gravity on the measured accelerations:

$$\dot{V}^e = C_b^e\left(f^b - b_a^b\right) + g^e \quad (12)$$

Here, $f^b$ are the three-axis accelerometer measurements, $b_a^b$ are the accelerometer biases, and $g^e$ is the local acceleration due to gravity.
[0037] Since the velocity and position are now expressed in a common navigation frame, the position dynamic equations are relatively straightforward, with $P^e$ equal to the x, y, and z components of the position of the vehicle expressed in the ECEF frame.
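One propagation step of this ECEF formulation can be sketched as follows (Eqs. (11)-(12) plus the position rate). It uses simple Euler integration, omits re-orthonormalization of the rotation matrix, and assumes the local gravity vector is supplied, so it is illustrative rather than a full mechanization.

```python
import numpy as np

OMEGA_E = 7.292115e-5  # Earth rotation rate (rad/s)

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def mechanize_ecef(C_be, v_e, p_e, f_b, w_ib, b_a, b_g, g_e, dt):
    """One Euler step of the ECEF strapdown equations: attitude (Eq. (11)),
    velocity (Eq. (12)), then position. Gyro and accelerometer biases are
    removed before the measurements are used."""
    w_ie = np.array([0.0, 0.0, OMEGA_E])
    C_dot = C_be @ skew(w_ib - b_g) - skew(w_ie) @ C_be    # Eq. (11)
    C_be = C_be + C_dot * dt
    v_e = v_e + (C_be @ (f_b - b_a) + g_e) * dt            # Eq. (12)
    p_e = p_e + v_e * dt       # position rate equals ECEF velocity
    return C_be, v_e, p_e
```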
[0038] Processing the GPS pseudorange and pseudorange rate measurements follows the same procedure previously described for the RISS implementation. In one implementation, the system utilizes a GPS receiver that provides measurements from both the L1 and L2 frequency carrier. This typically results in two times the number of observables at each GPS measurement epoch. The wheel speed measurement update can also be performed according to the previous formulation.
[0039] The full vision integration system uses six degree of freedom inertial measurements, dual frequency NovAtel GPS measurements, two estimates of lateral position within the lane - one provided by the lidar and another by the camera, and a map of the lane to estimate the pose of the vehicle. The estimation procedure followed the form described for the GPS/INS algorithm with the addition of a new update step when lateral position measurements are provided by either vision-based sensor. The lane detection and positioning methods for each vision system are described below. A description of the procedure for integrating the measurements in the Kalman filter follows.
[0040] The specific lidar-based lane detection algorithm is one that is based on fitting an ideal lane model to actual road data, where the ideal lane model is updated with each lidar scan to reflect the current road conditions. Ideally a lane takes on a profile
similar to the 100 averaged lidar reflectivity scans seen in Fig. 1 (see plot 10) where the corresponding road is seen in the lower image 12. The plot 10 is a mirror image of the photo due to the fact that the right portion of the image corresponds to negative horizontal angles. Note that this profile has a relatively constant area between the peaks, where the peaks represent the lane markings, which are typically bright and thus have higher reflectivity, and the constant area is the unmarked pavement of the road's surface, which is typically dark and thus has lower reflectivity. An ideal lane model can be generated with each lidar scan to mimic this averaged data, where the constant portion is generated by averaging the reflectivity directly in front of the vehicle and the lane markings can be generated by increasing the average road surface reflectivity by 75%. This model can then be stretched over a range of some minimum expected lane width to some maximum expected lane width, and the minimum RMSE between the ideal lane and the lidar data can be assumed to be the area where the lane resides.
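A simplified sketch of this template fit follows; it works in scan-index units rather than metric widths, and the road-surface level is estimated by averaging the central third of the scan as a stand-in for the region directly in front of the vehicle. All parameter values are illustrative.

```python
import numpy as np

def fit_ideal_lane(reflectivity, min_width, max_width, marking_gain=1.75):
    """Slide an ideal lane template over a lidar reflectivity scan and return
    the (start, width, rmse) of the best-fitting window. The template is flat
    road-surface reflectivity with 75%-brighter lane markings at both ends."""
    n = len(reflectivity)
    road_level = np.mean(reflectivity[n // 3: 2 * n // 3])
    best = (None, None, np.inf)
    for width in range(min_width, max_width + 1):
        template = np.full(width, road_level)
        template[0] = template[-1] = road_level * marking_gain  # markings
        for start in range(n - width + 1):
            window = reflectivity[start:start + width]
            rmse = np.sqrt(np.mean((window - template) ** 2))
            if rmse < best[2]:
                best = (start, width, rmse)
    return best  # the minimum-RMSE window is taken as the lane location
```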
[0041] The camera-based method for the present system can use line extraction techniques applied to a captured image to detect lane markings and calculate a lateral distance from a 2nd order polynomial model for the lane marking. A threshold can be chosen from the histogram of the image to compensate for differences in lighting, weather, or other non-ideal scenarios for extracting the lane markings. The thresholding operation transforms the image into a binary image, which is followed by Canny edge detection. A Hough transform can then be used to extract the lines from the image, fill in holes in the lane marking edges, and exclude erroneous edges. Using the slope of the lines, the lines are divided into left or right lane markings. Two criteria can be used to further exclude non-lane markings in the image based on the assumption that the lane markings do not move significantly within the image from frame to frame. The first test checks that the slope of the line is within a threshold of the slope of the near region of the last frame's 2nd order polynomial model. The second test uses boundary lines from the last frame's 2nd order polynomial to exclude lines that are not near the current estimate of the polynomial. The 2nd order polynomial interpolation is used on the selected lines' midpoint and endpoints to determine the coefficients of the polynomial model, and a Kalman filter is used to filter the model to decrease the effect of erroneous polynomial coefficient estimates. Finally, the lateral distance can be
calculated using the polynomial model on the lowest measurable row of the image (for greater resolution) and a real-distance-to-pixel factor.
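The front end of this camera pipeline can be sketched with standard OpenCV primitives as below; Otsu's method stands in for the histogram-derived threshold, and the numeric thresholds and slope convention are assumptions rather than disclosed values.

```python
import cv2
import numpy as np

def detect_lane_lines(gray):
    """Threshold -> Canny -> Hough pipeline returning candidate left/right
    lane-marking segments from a grayscale road image."""
    # Histogram-based threshold (Otsu) produces the binary image.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=20)
    left, right = [], []
    for x1, y1, x2, y2 in (l[0] for l in (lines if lines is not None else [])):
        if x2 == x1:
            continue  # ignore vertical segments to avoid division by zero
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < 0.3:
            continue  # near-horizontal segments are rejected as noise
        # Split candidates by slope sign (image y axis points downward).
        (left if slope < 0 else right).append((x1, y1, x2, y2))
    return left, right
```

The selected segments would then feed the 2nd order polynomial fit and Kalman smoothing described above.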
[0042] The camera and lidar algorithms provide measurements of the lateral position of the vehicle within the lane. With this information and knowledge of the global position of the center of the lane provided by a map of the route, the estimate of the pose of the vehicle can be updated. A prediction of the lateral position of the vehicle in the lane can be calculated using the current estimate of the vehicle position and the two closest map points in the direction of travel. The vehicle position and map coordinates can be expressed in a local tangent plane in north, east, and down components. The distance of the vehicle normal to the line defined by the two closest points can then be calculated from the north and east components using the following equations.
$$\mu = \frac{(p_n - m_{1,n})(m_{2,n} - m_{1,n}) + (p_e - m_{1,e})(m_{2,e} - m_{1,e})}{(m_{2,n} - m_{1,n})^2 + (m_{2,e} - m_{1,e})^2} \tag{14}$$

$$n_n = m_{1,n} + \mu\,(m_{2,n} - m_{1,n}) \tag{15}$$

$$n_e = m_{1,e} + \mu\,(m_{2,e} - m_{1,e}) \tag{16}$$

where $p$ is the estimated vehicle position, $m_1$ and $m_2$ are the two closest map points, the subscripts $n$ and $e$ denote north and east components, $\mu$ is the normalized projection of the vehicle position onto the segment, and $(n_n, n_e)$ is the resulting point on the lane centerline.
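A direct implementation of Eqs. (14)-(16) could look like the following sketch; the function name and the sign convention for the lateral distance are assumptions.

```python
import numpy as np

def predicted_lateral_distance(p, m1, m2):
    """Project the estimated vehicle position p onto the line through the
    two closest map points m1, m2 (Eqs. 14-16) and return the signed
    lateral distance plus the projected point. All inputs are (north, east)
    pairs in the local tangent plane.
    """
    p, m1, m2 = (np.asarray(v, dtype=float) for v in (p, m1, m2))
    seg = m2 - m1
    mu = np.dot(p - m1, seg) / np.dot(seg, seg)   # Eq. (14)
    n = m1 + mu * seg                             # Eqs. (15) and (16)
    # Sign from the 2-D cross product: positive when p lies right of m1->m2
    sign = np.sign(seg[0] * (p[1] - m1[1]) - seg[1] * (p[0] - m1[0]))
    return sign * np.linalg.norm(p - n), n
```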
[0043] This predicted lateral distance in the lane can then be compared to the measurement derived from either the camera or lidar data to calculate the error. The camera and lidar measurements can be applied independently whenever either algorithm reports a lateral position. The lateral error can then be mapped into the state domain using the linearized geometry matrix, after which the state estimates and error covariance can be updated.
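A minimal sketch of that update step, assuming a scalar lateral measurement and NumPy array shapes, is:

```python
import numpy as np

def lateral_update(x, P, z_meas, z_pred, H, R):
    """One Kalman measurement update for a scalar lateral-position
    measurement from the camera or lidar. H is the 1xN linearized geometry
    matrix mapping state error to lateral error, R the scalar measurement
    noise variance; names and shapes are assumptions.
    """
    y = np.atleast_1d(z_meas - z_pred)       # lateral innovation (error)
    S = H @ P @ H.T + R                      # innovation covariance (1x1)
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain (Nx1)
    x = x + K @ y                            # updated state estimate
    P = (np.eye(len(x)) - K @ H) @ P         # updated error covariance
    return x, P
```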
[0044] Accordingly, the present system may be configured to utilize lane marking observations to improve a vehicle's location estimate that would otherwise be generated using other location systems and sensors alone. For example, Fig. 7 is a flowchart depicting an example method by which a vehicle location system can determine a location of a vehicle. In step 102, a conventional vehicle location system (such as a global navigation satellite system) determines a first estimated location of the vehicle. In various implementations, a number of additional sensors, such as inertial sensor systems, can be utilized to supplement the location data derived from the global navigation satellite system and otherwise refine the estimate of the vehicle's current location. Generally, this first estimate of the vehicle's current location will be relatively coarse.
[0045] Having established a first estimate of the vehicle's current location, in step 104 a closest mapped road to the vehicle's estimated location is determined. This step may involve consulting a map database accessible to the system, where the map database stores location information for a number of roads and other points of interest in a given geographical region. The map database may store the location information for the roads and points of interest using absolute coordinates, though other approaches for storing the data in the map database may be utilized.
[0046] Having determined the closest mapped road to the vehicle's estimated location, in step 106 the direction of travel of the vehicle is determined. This determination may be made by consulting historical location data for the vehicle, assuming that the vehicle is traveling forwards. In some system implementations, the system can identify whether the vehicle is in a reverse gear or a forward gear. In the event that the vehicle is in a reverse gear, the determination of vehicle direction based upon the historical position data in step 106 is reversed. Other means for determining the vehicle's direction of travel include consulting a magnetic field sensor configured to operate as a compass to determine a direction of travel of the vehicle. A minimal sketch of the historical-position heuristic appears below.
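The sketch assumes (north, east) coordinates and a bearing measured clockwise from north; both conventions are assumptions for illustration.

```python
import math

def direction_of_travel(prev_ne, curr_ne, in_reverse=False):
    """Bearing of travel (radians, clockwise from north) from two
    consecutive position fixes, flipped by 180 degrees when a reverse
    gear is engaged.
    """
    d_n = curr_ne[0] - prev_ne[0]
    d_e = curr_ne[1] - prev_ne[1]
    bearing = math.atan2(d_e, d_n)                 # forward-travel assumption
    if in_reverse:
        bearing = (bearing + math.pi) % (2 * math.pi)
    return bearing
```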
[0047] In step 108, the vehicle's current lane of travel is determined. Step 108 may occur before, after, or during the execution of steps 102, 104, and 106. As provided in the present disclosure (see, for example, the description pertaining to Fig. 1), the determination of the current lane of travel may be made using one or more optical sensors configured to generate an image of the road upon which the vehicle is currently traveling. By analyzing data captured by the optical imaging device (e.g., a lidar or camera), it is possible to identify a number of lane markings present in front of the vehicle. Those lane markings can then be analyzed to determine the current lane of travel. In other implementations, though, rather than determine a lane of travel, a position of the vehicle on the road (without consideration of lane markings) may be determined, and that position upon the road may be utilized to determine a location of the vehicle.
[0048] The analysis of the lane markings may involve determining the slope of each identified lane marking to ensure that the slope is within a predefined acceptable boundary. Similarly, the widths of the lane markings can be analyzed to ensure that they meet expected values. Additionally, because the lane markings should not vary from frame to frame as captured by the imaging devices, detected markings that move too much between frames may be ignored as noise. These checks might be combined as in the sketch below.
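Every bound in the following sketch is an illustrative assumption, not a value taken from this disclosure.

```python
def plausible_marking(slope, width_m, offset_px, prev_offset_px,
                      slope_bounds=(0.5, 2.0), width_bounds=(0.08, 0.30),
                      max_frame_motion_px=15):
    """Accept a detected marking only if its slope and width fall inside
    predefined bounds and it has not jumped too far since the last frame.
    """
    if not slope_bounds[0] <= abs(slope) <= slope_bounds[1]:
        return False      # implausible orientation
    if not width_bounds[0] <= width_m <= width_bounds[1]:
        return False      # too thin or too wide to be a lane marking
    if abs(offset_px - prev_offset_px) > max_frame_motion_px:
        return False      # moved too much between frames: treat as noise
    return True
```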
[0049] Based upon the images captured by the imaging devices, therefore, it is possible to identify a lane of travel of the vehicle.
[0050] Having calculated an estimated position of the vehicle (step 102), the closest mapped road (step 104), the direction of travel (step 106), and the lane position (or, alternatively, a road position) (step 108), these data points can then be combined in step 110 to calculate a more accurate location for the vehicle than was calculated in step 102.
[0051] Fig. 8, for example, depicts one algorithm for combining the values generated by the method of Fig. 7 to identify a more accurate location for a vehicle. In Fig. 8 the initial vehicle estimate is shown at point 802. This initial estimate may correspond to that generated in step 102 of Fig. 7 and may be generated by a conventional vehicle location system, such as a global navigation satellite system, for example.
[0052] Given the initial estimate of vehicle location, road 804 is determined to be the closest mapped road to the vehicle's estimated location. In Fig. 8, road 804 is a depiction of a portion of a roadway including four lanes, with two lanes being allocated for travel in each direction. In each direction, there is an outside or passing lane (furthest away from the edge of the road 804), and an inside lane (closest to the edge of the road 804).
[0053] After identifying the closest mapped road 804 (e.g., in accordance with step 104 of Fig. 7), a direction of travel of the vehicle is determined (e.g., in accordance with step 106 of Fig. 7). In the example depicted in Fig. 8, the determined direction of travel of the vehicle is indicated by arrow 806.
[0054] Additionally, for the example shown in Fig. 8, it is determined that the vehicle is traveling in the outside lane (e.g., in accordance with step 108 of Fig. 7). This determination is made by analyzing the image data as described in the present disclosure. Because this analysis is based on imaging data, the knowledge that the vehicle is in the passing lane does not provide any information about the direction of travel. Accordingly, the vehicle's location and direction of travel must be combined with the lane of travel information to accurately locate the vehicle in a particular lane of the roadway.
[0055] Consequently, knowing the vehicle's direction of travel (indicated by arrow 806), it is possible to determine that the vehicle, since it is traveling in the passing lane, must be located within lane 808 of road 804. Having identified the lane of travel, the system may then identify a point 810 that is the shortest distance from the vehicle's estimated location 802 and lies on the centerline of the current lane of travel 808. That identified point 810 can then be used as the vehicle's updated current location. Because point 810 incorporates both information captured from a conventional location system and location data based upon the current lane of travel of the vehicle, point 810 represents a more accurate location for the vehicle than point 802. In other words, point 802 becomes a first estimate of the vehicle location that is further constrained by the lane-of-travel data associated with the vehicle.
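Assuming the lane centerline is stored as an ordered polyline of map points, point 810 could be found by reusing the predicted_lateral_distance sketch above; treating each segment as a full line is a simplification of this sketch.

```python
def snap_to_lane_centerline(p_est, centerline_pts):
    """Return the point on the lane centerline closest to the initial
    estimate (point 810 in Fig. 8). centerline_pts is an ordered list of
    (north, east) map points for the driven lane.
    """
    best_pt, best_d = None, float("inf")
    for m1, m2 in zip(centerline_pts[:-1], centerline_pts[1:]):
        d, n = predicted_lateral_distance(p_est, m1, m2)
        if abs(d) < best_d:
            best_d, best_pt = abs(d), n
    return best_pt        # updated location constrained to the driven lane
```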
[0056] Having determined a current lane of travel for the vehicle, that data point can be used to provide the driver with additional useful information, such as warnings about upcoming hazards located in the vehicle's current lane of travel. Similarly, when the driver is approaching a turn, the vehicle's current lane of travel can be used to determine whether the vehicle is located in the correct turning lane. If not, the driver can be notified that the vehicle must change lanes in anticipation of the upcoming turn.
[0057] Although Fig. 8 depicts one example implementation for updating a vehicle's estimated location using lane-of-travel data, other algorithms may be used for combining that data.
[0058] Fig. 9 is a block diagram illustrating functional elements of a system for determining a vehicle's location in accordance with the present disclosure. System 900 includes GPS receiver 902 configured to receive satellite signals and convert those signals into coarse position data for the vehicle. Although Fig. 9 depicts a GPS receiver, other types of satellite-based navigation systems exist and may be used in conjunction with the present system and method. GPS receiver 902 is configured to transmit the coarse position data to central processing unit 904.
[0059] System 900 may also include an inertial measurement unit (IMU) 906. IMU 906 is also configured to transmit data to central processing unit 904. In one implementation, IMU 906 comprises a six degree-of-freedom inertial measurement unit configured to send acceleration and attitude data in response to determining motion and an orientation of the vehicle.
[0060] Camera 908 is configured to capture visual data and transmit at least a portion of that visual data to central processor 904. That visual data can be utilized, for example, to determine a current lane of travel of a particular vehicle as described herein. In one implementation, camera 908 has a field of view matched to at least a portion of an outdoor environment of a vehicle. Camera 908 is configured to send camera data in response to observing that outdoor environment.
[0061] In some implementations, system 900 includes light detecting and ranging (LIDAR) device 910 having a field of view matched to at least a portion of the outdoor environment and configured to send LIDAR data in response to observing the outdoor environment. In some implementations, only one of camera 908 and LIDAR device 910 is present, though in other implementations, both devices may be present and work in collaboration with one another to observe the space about a perimeter of the vehicle.
[0062] Processor 904 is configured to receive data from one or more of LIDAR 910, Camera 908, IMU 906, and GPS receiver 902 to determine a location of the vehicle. As described herein, this may involve updating or refining an initial estimate of the vehicle location with lane data captured from one or more of LIDAR 910 and camera 908.
[0063] Map database 912 stores location information for a number of roads and other points of interest in a given geographical region. Map database 912 may store the location information for the roads and points of interest using absolute coordinates, though other approaches for storing the data in the map database may be utilized.
[0064] Fig. 10 is a flowchart depicting an example method by which a vehicle location system can refine an estimated position of a vehicle. In the depicted flow, a number of threshold values are used to limit and/or control the use of data captured from a number of different sources. The various thresholds may be determined based upon a history of data received from the designated source, expected value ranges for certain behaviors, time in each range, and other considerations. In step 1014, a position of the vehicle within a current lane of travel is calculated. This calculation may be performed based upon data captured from a number of sensor systems, including LIDAR 1000 and/or an optical or infrared camera 1002.
[0065] In step 1016, the coordinates of the position of the vehicle are estimated. This estimation may be performed using data captured from a GPS system 1004, an inertial measurement unit 1006 that may include a number of accelerometers and/or gyroscopes, wheel speed sensor(s) 1008, and/or map data describing lane information (such as lane locations and geometry).
[0066] In step 1030, the vehicle's lane changes are detected. A lane change may be detected using data generated from step 1016 (after being filtered through threshold 1022), IMU 1006 (after being filtered through threshold 1024), wheel speed sensor(s) 1008 (after being filtered through threshold 1026), steering angle sensor 1010 (after being filtered through threshold 1028), and/or step 1014 (after being filtered through threshold 1020).
[0067] In step 1032, an improved lane estimate for the vehicle is calculated using the data from step 1030 and step 1018. In step 1034, an improved position estimate for the vehicle is determined using data from step 1014 and step 1032. Once determined, the current driven lane of the vehicle can be stored in step 1036. The current driven lane can then be used in step 1018, in combination with data from step 1016 and map 1012 to generate another estimate of the current driven lane of the vehicle.
[0068] In various implementations, a number of different logic routines may be employed to monitor various sensor conditions to determine whether a vehicle has made a lane change (for example, using the method illustrated in Fig. 10). The following logic pseudocode, for example, describes a process by which a number of conditions may be analyzed to determine whether a vehicle has made a lane change to the left. Similar logic may be used to determine whether the vehicle has made a lane change to the right.
[0069] ---- "DETECT LANE CHANGE" PROCESS ----
[0070] IF ((LateralAcceleration > LatAccLowLimitLeft) AND (LateralAcceleration < LatAccHighLimitLeft)) * LatAccFactor +
[0071] ((GyroRate > GyroRateLowLimitLeft) AND (GyroRate < GyroRateHighLimitLeft)) * GyroRateFactor +
[0072] ((WheelSpeedDifferential > WheelDiffLowLimitLeft) AND (WheelSpeedDifferential < WheelDiffHighLimitLeft)) * WheelDiffFactor +
[0073] ((SteeringAngle > SteerAngleLowLimitLeft) AND (SteeringAngle < SteerAngleHighLimitLeft)) * SteerAngleDiffFactor +
[0074] ((SteeringAngleRate > SteerAngleRateLowLimitLeft) AND (SteeringAngleRate < SteerAngleRateHighLimitLeft)) * SteerAngleRateDiffFactor +
[0075] (if lidar distance from left lane edge decreases and then lidar distance from right edge increases) * WithinLaneFactor +
[0076] (if camera distance from left lane edge decreases and then camera distance from right edge increases) * WithinLaneFactor +
[0077] (if there is a left lane on the map) * LanePresentFactor +
[0078] (if GPS position is changing toward the left within certain rate bounds) * GPSRateFactor +
[0079] (if turning radius is greater than the radius of the turn but less than a certain threshold) * TurningRadiusFactor
[0080] > LogicSumThresholdLeft
[0081] THEN start TurningLeftTimer
[0082] ELSE reset TurningLeftTimer to zero
[0083]
[0084] IF TurningLeftTimer > TurningLeftTimerThreshold THEN
[0085] set current lane id to the lane id of the next lane to the left
[0088] In the pseudocode above, "( )" indicates a logical condition that evaluates to 0 or 1. Variables ending in "Factor" may take any positive value and serve to assign a weight (importance or strength) to each of the logical conditions; in some cases, using values between 0 and 1 can make it easier to determine the overall threshold, LogicSumThresholdLeft. The factors should be selected so that LogicSumThresholdLeft is exceeded when there is a lane change and not otherwise.
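A runnable sketch of this weighted-sum test follows; the dictionary keys and the packaging of the vision, map, GPS-drift, and turning-radius tests as precomputed 0/1 flags are assumptions for illustration.

```python
def lane_change_left_score(sensors, limits, factors):
    """Weighted sum of gated conditions for a leftward lane change,
    mirroring the pseudocode above.
    """
    def gate(name):
        lo, hi = limits[name]            # (LowLimitLeft, HighLimitLeft)
        return 1 if lo < sensors[name] < hi else 0

    score = 0.0
    for name in ("lateral_acceleration", "gyro_rate",
                 "wheel_speed_differential", "steering_angle",
                 "steering_angle_rate"):
        score += gate(name) * factors[name]
    # Vision, map, GPS-drift, and turning-radius tests arrive as 0/1 flags
    score += sensors["lidar_drift_left"] * factors["within_lane"]
    score += sensors["camera_drift_left"] * factors["within_lane"]
    score += sensors["left_lane_on_map"] * factors["lane_present"]
    score += sensors["gps_moving_left"] * factors["gps_rate"]
    score += sensors["radius_in_bounds"] * factors["turning_radius"]
    return score
```

As in the pseudocode, a left lane change would be declared only after the score stays above LogicSumThresholdLeft for longer than TurningLeftTimerThreshold.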
[0089] In one example test of the present system, testing occurred in Detroit, Michigan, and data was captured along a 46-mile route through the city center and surrounding suburbs to mimic the driving experienced by a typical driver. The test route was designed with two main aims: 1) to cover a variety of environment types that have characteristic effects on positioning performance, and 2) to be representative of typical U.S. driving in the proportioning of environment type selections. The environmental features considered important here, for their influence on GNSS-based positioning accuracy and for their commonality, are trees, tree canopies, mountains, overpasses, buildings, urban canyons, and tunnels. These features, along with other helpful attributes, are used to define seven sufficiently distinct environments in Table 1. The test route covered the Open and Urban environment types.
Environment | Terrain | Vegetation | Buildings | Overpasses | Tunnels
---|---|---|---|---|---
Open | Flat or mildly undulating | Almost none | Almost none | None | None
Rural, Sparse | Flat or mildly undulating, mask < 5° | Scattered trees | Rare, low, far | None | None
Rural, Moderate | Mountains sometimes masking 5-20° | Some tree canopies | Some, low | Maybe, but rare | Maybe, but rare
Rural, Dense | Mountains, mask 20-60° | Dominant tree canopies | Negligible compared to natural obstructions, although there could be tunnels | |
Urban, Sparse | Flat or mildly undulating | Scattered | Some, low or far | None | None
Urban, Moderate | Flat or mildly undulating, mask < 5°, or other short obstructions | Moderate | Multi-story, rare in number, some high-rises | Some | Rare
Urban, Dense | Flat or mildly undulating, mask < 5°, or other short obstructions | Some canopies | Dominant high-rise canyons | Frequent | Long

Table 1: Environment Definitions
[0090] The test route was selected to include roads that closely match (within ±5%) the urban road-use class proportioning in the U.S. found by the highway administration. The resulting 46-mile test route is shown in Fig. 2.
[0091] Due to satellite geometry variability over time, the test timing is a critical component of the test plan. Since the satellite configuration, as seen by a receiver on the ground, repeats every 24 hours, it is desirable for repeatable results that the testing span 24 hours. For procedural convenience, the 24-hour period is divided into 3 shifts. Each shift is on a different day (thus allowing a large break from test driving) and spans 10 hours, with 2-hour overlaps with respect to time of day, for a total desired span of 24 hours.
[0092] The vehicle used for these tests was a four door sedan. The GPS antenna for the NovAtel receiver and the reference receiver was mounted on the center of the roof, as seen in Fig. 3, while the u-blox GPS antenna was mounted on the trunk lid.
Additionally, an external wheel odometer was mounted on the passenger rear wheel to aid the reference system. Both the camera and lidar used for lane detection were attached to a roof rack placed on the vehicle and centered laterally, also seen in Fig. 3. All data was logged using a full-sized PC located in the trunk.
[0093] Some sensors used on this project attempt to mimic those common on modern-day passenger vehicles. Additional higher-end sensors are included in this assessment to acknowledge the expected improvement in the sensor suites of future production vehicles.
[0094] The production or near-production grade sensors included a Micro-Electro-Mechanical Systems (MEMS) grade Crossbow IMU440, which is a full six degree of freedom IMU. Typically, production vehicles are not equipped with full six degree of freedom IMUs; accordingly, a reduced set of IMU measurements was used in testing. The Crossbow measurements were recorded at 100 Hz. A u-blox LEA 6T GPS receiver (L1 frequency only) provided GPS pseudorange and pseudorange rate measurements at 1 Hz. Additionally, a Logitech Quickcam 9000 recorded images at 10 Hz for lane marking detection. Longitudinal speed was obtained from wheel speed sensors reported over the on-board CAN. Measurements from the vehicle CAN were available at 50 Hz.
[0095] Higher-end sensor data was provided by a NovAtel Propak V3 GPS receiver (providing both L1 and L2 frequency pseudorange and pseudorange rate measurements). The NovAtel receiver also provided position, velocity, and range data, with all measurements recorded at 5 Hz. An Ibeo Alasca XT (marketed as an automotive-grade lidar but not typically available on production vehicles) was used to record range and reflectivity measurements of road features. Lidar outputs were recorded at 10 Hz. A reference solution was provided by a NovAtel SPAN system, which fuses a tactical-grade IMU with a NovAtel Propak receiver.
[0096] The results are presented in the form of a plot of satellite availability, tables showing error statistics, and plots of navigation solutions on road maps. In Fig. 4, the number of visible satellites and the resulting horizontal dilution of precision (HDOP) are shown for the test route over three test runs. As seen, the number of visible satellites varies from twelve to one. Accordingly, the HDOP ranges from typical values (less than 2) to extreme values when less than four satellites are visible. The GPS positioning
accuracy is compromised due to the reduced GPS satellite visibility and increased multipath reflections in portions of the test route.
[0097] The results of the NovAtel GPS-only solution and the GPS/INS solution combining the NovAtel receiver data with the full six degree of freedom IMU440 are shown in Table 2. Table 2 includes the mean horizontal errors and the percentage of errors less than 1.5 and 3 meters for both the standalone NovAtel solution and the combined solution. The combined navigation solution exhibits improvement in all three categories, with nearly 10 percentage points of improvement in the 1.5 and 3 meter error threshold analyses.
Run Number | Mean Absolute Horizontal Error (m) | % < 1.5 m | % < 3 m
---|---|---|---
Propak_R1 | 2.01 | 64.7 | 88.8
Propak_R2 | 2.6 | 61.4 | 85.6
Propak_R3 | 2.67 | 37.2 | 74.5
Propak_R4 | 2.48 | 42.1 | 73.8
Propak_R5 | 1.87 | 74.3 | 86.6
Propak Overall | 2.33 | 55.9 | 81.9
GPS_INS_R1 | 1.91 | 63.7 | 88.9
GPS_INS_R2 | 1.74 | 77.4 | 94
GPS_INS_R3 | 2.25 | 43.2 | 84.5
GPS_INS_R4 | 2.04 | 50.1 | 83.8
GPS_INS_R5 | 1.56 | 84.9 | 94.4
GPS_INS Overall | 1.9 | 63.8 | 89.2

Table 2: Error Statistics for NovAtel GPS with 6DOF IMU
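The statistics reported in Table 2 could be reproduced from a series of per-epoch horizontal errors with a computation like the following sketch (function and key names assumed).

```python
import numpy as np

def horizontal_error_stats(errors_m):
    """Summarize per-epoch horizontal errors the way Table 2 does:
    mean absolute error and the percentage of epochs under 1.5 m and 3 m."""
    e = np.abs(np.asarray(errors_m, dtype=float))
    return {"mean_abs_m": e.mean(),
            "pct_lt_1p5m": 100.0 * (e < 1.5).mean(),
            "pct_lt_3m": 100.0 * (e < 3.0).mean()}
```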
[0098] Fig. 5 is a zoomed view of one test run from Table 2. In Fig. 5, the GPS/INS solution (dashed lines) can be seen bridging several outages of the standalone GPS solution (solid lines). The GPS/INS performance improvements were seen in this scenario and in heavy foliage environments.
[0099] The full vision integration system provided improvement in the urban areas of the test route, in particular as seen in Fig. 6. Clearly, the full vision integrated system (dashed lines) shows improvement over the GPS solution (solid lines).
[00100] In conclusion, a scalable navigation solution using typical or near typical vehicle sensors was presented. Analysis of the positioning capabilities of the system operating in various environments was shown. The system provided improved results over standalone GPS.
[00101] In one implementation, the present invention is a method for refining a determined location of a vehicle, comprising estimating a location of the vehicle using a global navigation satellite system, determining a closest mapped road to the estimated location of the vehicle, the closest mapped road having a plurality of lanes, determining a first lane in the plurality of lanes of the closest mapped road in which the vehicle is located, and refining the estimated location of the vehicle based upon a location of the first lane in the plurality of lanes of the closest mapped road.
[00102] In another implementation, the present invention is a system comprising a vehicle location system configured to estimate a location of a vehicle using a global navigation satellite system, and a processor configured to determine a closest mapped road to the estimated location of the vehicle, determine a position on the closest mapped road in which the vehicle is located, and refine the estimated location of the vehicle based upon the position on the closest mapped road.
In another implementation, the present invention is a system for determining a position of a vehicle in an outdoor environment. The system includes a global positioning system (GPS) receiver configured to send coarse position data in response to receiving a GPS signal from at least one GPS transmitter, a six degree-of-freedom inertial measurement unit configured to send acceleration and attitude data in response to determining motion and an orientation of the vehicle, and a camera having a field of view matched to at least a portion of the outdoor environment and configured to send camera data in response to observing the outdoor environment. The system includes a light detecting and ranging (LIDAR) device having a field of view matched to at least a portion of the outdoor environment and configured to send LIDAR data in response to observing the outdoor environment, and a processor configured to receive and analyze
the coarse position data, the acceleration and attitude data, the camera data, and the LIDAR data to determine the position of the vehicle.
Claims
1. A method for refining a determined location of a vehicle, comprising:
estimating a location of the vehicle using a global navigation satellite system; determining a closest mapped road to the estimated location of the vehicle, the closest mapped road having a plurality of lanes;
determining a first lane in the plurality of lanes of the closest mapped road in which the vehicle is located; and
refining the estimated location of the vehicle based upon a location of the first lane in the plurality of lanes of the closest mapped road.
2. The method of claim 1, wherein determining the first lane in the plurality of lanes of the closest mapped road in which the vehicle is located includes capturing image data using at least one of a camera and a light detection and ranging device connected to the vehicle.
3. The method of claim 2, including filtering the image data.
4. The method of claim 3, wherein filtering the image data includes:
identifying a lane marking in the image data; and
determining a slope of the lane marking.
5. The method of claim 3, wherein filtering the image data includes:
identifying a lane marking in the image data; and
determining a rate of movement of the lane marking in the image data.
6. The method of claim 3, wherein filtering the image data includes:
identifying first and second lane markings in the image data; and characterizing a distance between the first and second lane markings.
7. The method of claim 1, wherein a location of each of the plurality of lanes of the closest mapped road is described by absolute coordinates.
8. The method of claim 1, including determining a direction of travel of the vehicle.
9. The method of claim 8, including determining the first lane in the plurality of lanes using the direction of travel of the vehicle.
10. The method of claim 1, wherein the first lane in the plurality of lanes corresponds to a currently driven road lane.
11. A system, comprising:
a vehicle location system configured to estimate a location of a vehicle using a global navigation satellite system; and
a processor configured to:
determine a closest mapped road to the estimated location of the vehicle; determine a position on the closest mapped road in which the vehicle is located; and
refine the estimated location of the vehicle based upon the position on the closest mapped road.
12. The system of claim 11, including an optical sensor configured to capture image data, and wherein the processor is configured to use the image data to determine the position on the closest mapped road.
13. The system of claim 12, wherein the processor is configured to filter the image data.
14. The system of claim 13, wherein the processor is configured to: identify a lane marking in the image data; and
determine a slope of the lane marking.
15. The system of claim 13, wherein the processor is configured to:
identify a lane marking in the image data; and
determine a rate of movement of the lane marking in the image data.
16. The system of claim 13, wherein the processor is configured to:
identify first and second lane markings in the image data; and
characterize a distance between the first and second lane markings.
17. The system of claim 11, wherein a location of the closest mapped road is described by absolute coordinates.
18. A system for determining a position of a vehicle in an outdoor environment, the system comprising:
a global positioning system (GPS) receiver configured to send coarse position data in response to receiving a GPS signal from at least one GPS transmitter;
a six degree-of-freedom inertial measurement unit configured to send
acceleration and attitude data in response to determining motion and an orientation of the vehicle;
a camera having a field of view matched to at least a portion of the outdoor environment and configured to send camera data in response to observing the outdoor environment;
a light detecting and ranging (LIDAR) device having a field of view matched to at least a portion of the outdoor environment and configured to send LIDAR data in response to observing the outdoor environment; and
a processor configured to receive and analyze the coarse position data, the acceleration and attitude data, the camera data, and the LIDAR data to determine the position of the vehicle.
19. The system of claim 18, wherein the GPS receiver is configured to receive L1 frequency GPS data.
20. The system of claim 18, wherein the GPS receiver is configured to receive L2 frequency GPS data.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261617346P | 2012-03-29 | 2012-03-29 | |
US61/617,346 | 2012-03-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013149149A1 true WO2013149149A1 (en) | 2013-10-03 |
Family
ID=48083686
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2013/034617 WO2013149149A1 (en) | 2012-03-29 | 2013-03-29 | Method to identify driven lane on map and improve vehicle position estimate |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2013149149A1 (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002318130A (en) * | 2001-02-13 | 2002-10-31 | Alpine Electronics Inc | Traveling lane detecting device and navigation apparatus |
DE102006040334A1 (en) * | 2006-08-29 | 2008-03-06 | Robert Bosch Gmbh | Lane recognizing method for use with driver assistance system of vehicle i.e. motor vehicle, involves reconstructing characteristics of lane markings and/or lanes from position of supporting points in coordinate system |
WO2008146899A1 (en) * | 2007-05-25 | 2008-12-04 | Aisin Aw Co., Ltd. | Lane determining device, lane determining method and navigation apparatus using the same |
US20100253597A1 (en) * | 2009-04-02 | 2010-10-07 | Gm Global Technology Operations, Inc. | Rear view mirror on full-windshield head-up display |
DE102010033729A1 (en) * | 2010-08-07 | 2012-02-09 | Audi Ag | Method and device for determining the position of a vehicle on a roadway and motor vehicles with such a device |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9250080B2 (en) | 2014-01-16 | 2016-02-02 | Qualcomm Incorporated | Sensor assisted validation and usage of map information as navigation measurements |
US9644975B2 (en) | 2014-03-11 | 2017-05-09 | Volvo Car Corporation | Method and system for determining a position of a vehicle |
EP2918974A1 (en) | 2014-03-11 | 2015-09-16 | Volvo Car Corporation | Method and system for determining a position of a vehicle |
US11247608B2 (en) | 2014-03-20 | 2022-02-15 | Magna Electronics Inc. | Vehicular system and method for controlling vehicle |
US20150266422A1 (en) * | 2014-03-20 | 2015-09-24 | Magna Electronics Inc. | Vehicle vision system with curvature estimation |
US10406981B2 (en) * | 2014-03-20 | 2019-09-10 | Magna Electronics Inc. | Vehicle vision system with curvature estimation |
US11745659B2 (en) | 2014-03-20 | 2023-09-05 | Magna Electronics Inc. | Vehicular system for controlling vehicle |
US9460624B2 (en) | 2014-05-06 | 2016-10-04 | Toyota Motor Engineering & Manufacturing North America, Inc. | Method and apparatus for determining lane identification in a roadway |
US10074281B2 (en) | 2014-05-06 | 2018-09-11 | Toyota Motor Engineering & Manufacturing North America, Inc. | Method and apparatus for determining lane identification in a roadway |
US10627816B1 (en) | 2014-08-29 | 2020-04-21 | Waymo Llc | Change detection using curve alignment |
US9836052B1 (en) * | 2014-08-29 | 2017-12-05 | Waymo Llc | Change detection using curve alignment |
WO2016051228A1 (en) * | 2014-09-30 | 2016-04-07 | Umm-Al-Qura University | A method and system for an accurate and energy efficient vehicle lane detection |
US9965699B2 (en) | 2014-11-04 | 2018-05-08 | Volvo Car Corporation | Methods and systems for enabling improved positioning of a vehicle |
EP3018448A1 (en) | 2014-11-04 | 2016-05-11 | Volvo Car Corporation | Methods and systems for enabling improved positioning of a vehicle |
US9878711B2 (en) | 2015-12-14 | 2018-01-30 | Honda Motor Co., Ltd. | Method and system for lane detection and validation |
US10502840B2 (en) | 2016-02-03 | 2019-12-10 | Qualcomm Incorporated | Outlier detection for satellite positioning system using visual inertial odometry |
EP3497685A4 (en) * | 2016-08-10 | 2020-07-29 | Xevo Inc. | Method and apparatus for providing goal oriented navigational directions |
CN108388240A (en) * | 2016-12-15 | 2018-08-10 | 德韧营运有限责任公司 | The method and system of advanced driving assistance system function is executed using over the horizon context aware |
EP3337197A1 (en) * | 2016-12-15 | 2018-06-20 | Dura Operating, LLC | Method and system for performing advanced driver assistance system functions using beyond line-of-sight situational awareness |
CN108627175A (en) * | 2017-03-20 | 2018-10-09 | 现代自动车株式会社 | The system and method for vehicle location for identification |
CN109556615A (en) * | 2018-10-10 | 2019-04-02 | 吉林大学 | The driving map generation method of Multi-sensor Fusion cognition based on automatic Pilot |
CN111380538A (en) * | 2018-12-28 | 2020-07-07 | 沈阳美行科技有限公司 | Vehicle positioning method, navigation method and related device |
CN111380538B (en) * | 2018-12-28 | 2023-01-24 | 沈阳美行科技股份有限公司 | Vehicle positioning method, navigation method and related device |
CN110081880A (en) * | 2019-04-12 | 2019-08-02 | 同济大学 | A kind of sweeper local positioning system and method merging vision, wheel speed and inertial navigation |
CN112749584A (en) * | 2019-10-29 | 2021-05-04 | 北京初速度科技有限公司 | Vehicle positioning method based on image detection and vehicle-mounted terminal |
CN112749584B (en) * | 2019-10-29 | 2024-03-15 | 北京魔门塔科技有限公司 | Vehicle positioning method based on image detection and vehicle-mounted terminal |
CN110910453B (en) * | 2019-11-28 | 2023-03-24 | 魔视智能科技(上海)有限公司 | Vehicle pose estimation method and system based on non-overlapping view field multi-camera system |
CN110910453A (en) * | 2019-11-28 | 2020-03-24 | 魔视智能科技(上海)有限公司 | Vehicle pose estimation method and system based on non-overlapping view field multi-camera system |
CN113532448A (en) * | 2020-04-13 | 2021-10-22 | 广州汽车集团股份有限公司 | Navigation method and system for automatically driving vehicle and driving control equipment |
CN112650772B (en) * | 2021-01-08 | 2022-02-25 | 腾讯科技(深圳)有限公司 | Data processing method, data processing device, storage medium and computer equipment |
CN112650772A (en) * | 2021-01-08 | 2021-04-13 | 腾讯科技(深圳)有限公司 | Data processing method, data processing device, storage medium and computer equipment |
CN115267838A (en) * | 2022-06-22 | 2022-11-01 | 湖北星纪时代科技有限公司 | Method, device, equipment and medium for testing positioning performance |
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13715582; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 13715582; Country of ref document: EP; Kind code of ref document: A1