WO2021036592A1 - Rearview mirror adaptive adjustment method and device - Google Patents

Rearview mirror adaptive adjustment method and device

Info

Publication number
WO2021036592A1
Authority
WO
WIPO (PCT)
Prior art keywords
angle
vehicle
target
auxiliary
image
Prior art date
Application number
PCT/CN2020/103362
Other languages
English (en)
French (fr)
Inventor
何彦杉
黄为
徐文康
张峻豪
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to MX2021011749A
Priority to EP20857477.2A (EP3909811B1)
Publication of WO2021036592A1
Priority to US17/489,112 (US12049170B2)


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/02Rear-view mirror arrangements
    • B60R1/06Rear-view mirror arrangements mounted on vehicle exterior
    • B60R1/062Rear-view mirror arrangements mounted on vehicle exterior with remote control for adjusting position
    • B60R1/07Rear-view mirror arrangements mounted on vehicle exterior with remote control for adjusting position by electrically powered actuators
    • B60R1/072Rear-view mirror arrangements mounted on vehicle exterior with remote control for adjusting position by electrically powered actuators for adjusting the mirror relative to its housing
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/02Rear-view mirror arrangements
    • B60R1/025Rear-view mirror arrangements comprising special mechanical means for correcting the field of view in relation to particular driving conditions, e.g. change of lane; scanning mirrors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/18Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30268Vehicle interior

Definitions

  • The present invention relates to the field of smart cars, and in particular to a rearview mirror adaptive adjustment method and device.
  • Artificial Intelligence is a theory, method, technology and application system that uses digital computers or machines controlled by digital computers to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
  • artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a similar way to human intelligence.
  • Artificial intelligence is to study the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
  • Research in the field of artificial intelligence includes robotics, natural language processing, computer vision, decision-making and reasoning, human-computer interaction, recommendation and search, and basic AI theories.
  • Autonomous driving is a mainstream application in the field of artificial intelligence.
  • Autonomous driving technology relies on the collaboration of computer vision, radar, monitoring devices, and global positioning systems to allow motor vehicles to achieve autonomous driving without the need for human active operations.
  • Self-driving vehicles use various computing systems to help transport passengers from one location to another. Some autonomous vehicles may require some initial input or continuous input from an operator (such as a navigator, driver, or passenger). The self-driving vehicle allows the operator to switch from the manual mode to the self-driving mode or a mode in between. Since autonomous driving technology does not require humans to drive motor vehicles, it can theoretically effectively avoid human driving errors, reduce the occurrence of traffic accidents, and improve highway transportation efficiency. Therefore, autonomous driving technology is getting more and more attention.
  • During driving, in order to ensure safety and reduce blind spots, the driver needs to manually adjust the rearview mirror based on the driver's field of view, driving state, and the scene outside the vehicle. However, manually adjusting the rearview mirror in real time while driving is both distracting and inefficient, and affects driving safety.
  • The embodiments of the present application provide an adaptive adjustment method and device for the rearview mirror. Using the embodiments of the present application avoids the situation in which the driver manually adjusts the rearview mirror during driving and thereby compromises driving safety, and improves driving safety performance.
  • In a first aspect, an embodiment of the present application provides an adaptive adjustment method for a rearview mirror, including:
  • acquiring the spatial position of the eyes of the driver of the own vehicle and the spatial position of the target rearview mirror, and obtaining the driver's horizontal field of view angle in the target rearview mirror according to the eye spatial position and the target rearview mirror spatial position; acquiring an image of the following vehicle, and obtaining the first auxiliary angle of the target following vehicle according to the image of the following vehicle and a first reference point, where the first reference point is a point on the target rearview mirror, the image of the following vehicle is obtained by a rear-view camera, and the image of the following vehicle includes the target following vehicle; calculating the second auxiliary angle according to the horizontal field of view angle, the eye spatial position, and the target rearview mirror position; obtaining the horizontal adjustment angle of the target rearview mirror according to the first auxiliary angle and the second auxiliary angle; and adjusting the horizontal angle of the target rearview mirror according to the horizontal adjustment angle.
  • the first reference point may be the center point on the target rearview mirror, and the second reference point may also be the center point on the target rearview mirror.
  • the horizontal field of view angle is the angle formed by the straight line passing through the virtual mirror point and the point on the left boundary of the target rearview mirror and the straight line passing through the virtual mirror point and the point on the right boundary of the target rearview mirror.
  • the virtual mirror point is the symmetry point of the human eye space position with the target rearview mirror as the symmetry axis.
  • In this embodiment, the horizontal angle of the rearview mirror is adjusted based on the driver's field of view and the scene outside the vehicle.
  • This embodiment adopts an adaptive adjustment method that does not require manual adjustment, which prevents the driver from being distracted by manually adjusting the rearview mirror and thereby affecting driving safety.
  • the first auxiliary angle is the angle formed by a first straight line and a second straight line, where the first straight line is the straight line passing through the target following vehicle and the first reference point, and the second straight line is the straight line passing through the first reference point and perpendicular to the target rearview mirror;
  • the second auxiliary angle is the angle formed by a third straight line and the center line of the horizontal field of view, where the third straight line is the straight line passing through the driver's eye position and the second reference point, the second reference point is the intersection point of the center line of the horizontal field of view and the mirror surface of the target rearview mirror, and the center line of the horizontal field of view is the angle bisector of the horizontal field of view angle (an illustrative geometric sketch is given below).
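  • The following is a minimal top-view (2D) Python sketch of the geometry described above. It assumes a simplified planar layout in which the target rearview mirror is a segment with known left and right endpoints and the eye position is a point in the same plane; the function names and the coordinates in the example are this sketch's own assumptions, not values from the embodiment.

```python
import numpy as np

def reflect_point(p, a, b):
    """Reflect point p across the infinite line through a and b (2D)."""
    d = (b - a) / np.linalg.norm(b - a)
    ap = p - a
    # foot of the perpendicular is a + (ap @ d) * d; the reflection is twice
    # that displacement minus the original offset
    return a + 2 * (ap @ d) * d - ap

def angle_between(u, v):
    """Unsigned angle in radians between two 2D vectors."""
    cosang = np.clip(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)), -1.0, 1.0)
    return np.arccos(cosang)

def second_auxiliary_angle(eye, mirror_left, mirror_right):
    """Return (horizontal field of view angle, second auxiliary angle)."""
    eye = np.asarray(eye, float)
    ml = np.asarray(mirror_left, float)
    mr = np.asarray(mirror_right, float)

    # virtual mirror point: the eye position mirrored about the mirror line
    v = reflect_point(eye, ml, mr)

    # horizontal field of view angle subtended at the virtual mirror point
    fov = angle_between(ml - v, mr - v)

    # centre line of the field of view: bisector direction from the virtual point
    d1 = (ml - v) / np.linalg.norm(ml - v)
    d2 = (mr - v) / np.linalg.norm(mr - v)
    bisector = (d1 + d2) / np.linalg.norm(d1 + d2)

    # second reference point: intersection of the centre line with the mirror line
    # (solve v + t * bisector = ml + s * (mr - ml) for t and s)
    A = np.column_stack((bisector, ml - mr))
    t, _ = np.linalg.solve(A, ml - v)
    ref2 = v + t * bisector

    # second auxiliary angle: between the eye-to-reference-point line and the centre line
    return fov, angle_between(ref2 - eye, bisector)

# hypothetical positions in metres (top view): driver's eye and mirror endpoints
fov, aux2 = second_auxiliary_angle(eye=[0.6, 0.0],
                                   mirror_left=[-0.9, 1.1],
                                   mirror_right=[-0.7, 1.25])
print(np.degrees(fov), np.degrees(aux2))
```

  • In this sketch the virtual mirror point is the reflection of the eye about the mirror line, the horizontal field of view angle is subtended at that point by the two mirror boundary points, and the second auxiliary angle is measured between the eye-to-reference-point line and the bisector, matching the definitions above.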
  • the following vehicle image includes M vehicles, where M is an integer greater than or equal to 1, and determining the first auxiliary angle of the target following vehicle based on the following vehicle image includes:
  • obtaining the offset and the body frame of a following vehicle A according to the image of the following vehicle, where the offset is the distance, in the following vehicle image, between the center position of the front face of following vehicle A and the longitudinal center line of the image, and the body frame is the number of pixels occupied by the contour of following vehicle A in the image; obtaining the vehicle distance d according to the offset and the body frame of following vehicle A, where the distance d is the distance between the front face of following vehicle A and the rear of the own vehicle; obtaining the third auxiliary angle of following vehicle A according to the distance d and the offset of following vehicle A, where the third auxiliary angle is the angle formed by a fourth straight line and the lateral center line of the own vehicle, and the fourth straight line is the straight line passing through the position of the rear-view camera and the center position of the front face of following vehicle A;
  • when M is equal to 1, following vehicle A is the target following vehicle, and the first auxiliary angle of the target following vehicle is obtained according to the third auxiliary angle of the target following vehicle and the distance d;
  • when M is greater than 1, the important probability of the i-th following vehicle is obtained according to the third auxiliary angle and the body frame of the i-th following vehicle, the following vehicle with the largest important probability is determined as the target following vehicle, and the first auxiliary angle of the target following vehicle is obtained according to the third auxiliary angle of the target following vehicle and the distance d.
  • The method of this embodiment enables the driver of the own vehicle to determine, among multiple following vehicles, the one that needs to be focused on, and to adjust the angle of the own vehicle's rearview mirror accordingly, so that the driver can observe the vehicle worth paying attention to through the rearview mirror at any time, ensuring driving safety (a selection sketch is given below).
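  • The selection step can be sketched in Python as follows. The Detection fields and the three callables (distance_fn, third_angle_fn, importance_fn) are placeholders for the relationship-table lookups described further below; they are this sketch's own names, not identifiers from the embodiment.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    offset_px: float    # distance from the image's longitudinal centre line, in pixels
    body_frame_px: int  # number of pixels covered by the vehicle contour

def select_target_vehicle(detections, distance_fn, third_angle_fn, importance_fn):
    """Pick the following vehicle the driver should focus on.

    distance_fn(offset, frame) -> d, third_angle_fn(d, offset) -> angle and
    importance_fn(angle, frame) -> probability stand in for the relationship
    tables of the embodiment (hypothetical callables)."""
    scored = []
    for det in detections:
        d = distance_fn(det.offset_px, det.body_frame_px)   # vehicle distance d
        third = third_angle_fn(d, det.offset_px)            # third auxiliary angle
        imp = importance_fn(third, det.body_frame_px)       # important probability
        scored.append((imp, d, third, det))

    # with a single detection it is the target; otherwise take the largest probability
    imp, d, third, det = max(scored, key=lambda s: s[0])
    return det, d, third
```

  • The returned third auxiliary angle and distance d would then be used to look up the first auxiliary angle of the target following vehicle, as described next.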
  • obtaining the vehicle distance d according to the offset and the body frame of following vehicle A includes: looking up a first relationship table according to the offset and the body frame of following vehicle A to obtain the vehicle distance d corresponding to the offset and the body frame; the first relationship table is the correspondence table between the offset, the body frame, and the vehicle distance;
  • obtaining the third auxiliary angle of following vehicle A according to the distance d and the offset of following vehicle A includes: looking up a second relationship table according to the distance d and the offset to obtain the corresponding third auxiliary angle; the second relationship table is the correspondence table between the vehicle distance, the offset, and the third auxiliary angle.
  • In this embodiment, the vehicle distance d and the third auxiliary angle can be quickly obtained by table lookup, so that the horizontal adjustment angle of the target rearview mirror can be determined quickly.
  • obtaining the first auxiliary angle of the target following vehicle according to the third auxiliary angle and the distance d of the target following vehicle includes: looking up a third relationship table according to the third auxiliary angle and the vehicle distance d of the target following vehicle to obtain the first auxiliary angle corresponding to the third auxiliary angle and the vehicle distance d, which is the first auxiliary angle of the target following vehicle; the third relationship table is the correspondence table between the third auxiliary angle, the vehicle distance d, and the first auxiliary angle.
  • acquiring the body frame of the following vehicle A according to the image of the following vehicle includes:
  • obtaining the important probability of the i-th following vehicle according to the third auxiliary angle and the body frame of the i-th following vehicle includes: looking up a fourth relationship table according to the third auxiliary angle and the body frame of the i-th following vehicle to obtain the important probability corresponding to the third auxiliary angle and the body frame, which is the important probability of the i-th following vehicle; the fourth relationship table is the correspondence table between the third auxiliary angle, the body frame, and the important probability.
  • acquiring the horizontal adjustment angle of the target rearview mirror according to the first auxiliary angle and the second auxiliary angle includes: looking up a fifth relationship table according to the first auxiliary angle and the second auxiliary angle to obtain the horizontal adjustment angle corresponding to the first auxiliary angle and the second auxiliary angle, which is the horizontal adjustment angle of the target rearview mirror; the fifth relationship table is the correspondence table between the first auxiliary angle, the second auxiliary angle, and the horizontal adjustment angle (a table-lookup sketch is given below).
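  • A minimal Python sketch of such a two-key correspondence table is shown below. The grid values are invented for illustration; an actual table would be calibrated for the specific camera and vehicle, and the same structure could serve any of the first to fifth relationship tables (or the auxiliary table used for the vertical adjustment).

```python
import numpy as np

class RelationTable2D:
    """Nearest-entry lookup in a two-key correspondence table."""

    def __init__(self, keys_a, keys_b, values):
        self.keys_a = np.asarray(keys_a, float)   # first key axis (e.g. offset)
        self.keys_b = np.asarray(keys_b, float)   # second key axis (e.g. body frame)
        self.values = np.asarray(values, float)   # shape (len(keys_a), len(keys_b))

    def lookup(self, a, b):
        # pick the closest calibrated key along each axis and return the stored value
        i = int(np.abs(self.keys_a - a).argmin())
        j = int(np.abs(self.keys_b - b).argmin())
        return self.values[i, j]

# hypothetical "first relationship table": (offset in px, body frame in px) -> distance d in metres
first_table = RelationTable2D(
    keys_a=[-200, 0, 200],
    keys_b=[2000, 8000, 20000],
    values=[[25.0, 12.0, 6.0],
            [24.0, 11.0, 5.0],
            [25.0, 12.0, 6.0]],
)
print(first_table.lookup(a=-50, b=9000))   # -> 11.0 with this made-up calibration
```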
  • In a second aspect, an embodiment of the present application provides an adaptive adjustment method for a rearview mirror, including: acquiring the spatial position of the driver's eyes and the spatial position of the target rearview mirror of the own vehicle, and obtaining the vertical field of view angle according to the eye spatial position and the target rearview mirror spatial position; calculating the auxiliary adjustment angle of the own vehicle according to the spatial position of the driver's eyes and the vertical field of view angle; acquiring the image of the following vehicle through the rear-view camera of the own vehicle, and obtaining the sky-to-ground ratio R according to the image of the following vehicle; obtaining the vertical adjustment angle of the target rearview mirror according to the auxiliary adjustment angle of the own vehicle and the sky-to-ground ratio R; and adjusting the target rearview mirror to the target vertical angle according to the vertical adjustment angle.
  • the vertical field of view angle is the angle formed by the straight line passing through the virtual mirror point and the point on the upper boundary of the target rearview mirror and the straight line passing through the virtual mirror point and the point on the lower boundary of the target rearview mirror.
  • the virtual mirror point is the symmetry point of the human eye space position with the target rearview mirror as the symmetry axis.
  • In this embodiment, the vertical angle of the rearview mirror is adjusted based on the driver's field of view and the scene outside the vehicle, so that the driver can observe the driving state of the vehicles behind through the rearview mirror at any time.
  • This embodiment adopts an adaptive adjustment method that does not require manual adjustment, which prevents the driver from being distracted by manually adjusting the rearview mirror and thereby affecting driving safety.
  • the auxiliary adjustment angle of the own vehicle is the angle formed by the center line of the vertical field of view and a fifth straight line, where the fifth straight line is the straight line passing through the spatial position of the driver's eyes and a third reference point, the third reference point is a point on the target rearview mirror, and the center line of the vertical field of view is the angle bisector of the vertical field of view angle.
  • obtaining the sky-to-ground ratio R according to the image of the following vehicle includes:
  • dividing the image of the following vehicle longitudinally into multiple image bands; obtaining a target image band from the multiple image bands, where the target image band is the image band, among the multiple image bands, in which the sky and the ground transition continuously; counting the number of pixels occupied by the sky and the number of pixels occupied by the ground in the target image band; and calculating the sky-to-ground ratio R according to the number of pixels occupied by the sky and the number of pixels occupied by the ground, where the sky-to-ground ratio R is the ratio of the number of pixels occupied by the sky to the number of pixels occupied by the ground (a counting sketch is given below).
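  • The band-based counting can be sketched in Python as below. The number of bands and the brightness thresholds used to label pixels as sky or ground are assumptions made for this sketch; the embodiment only specifies splitting the image into bands, locating the band in which sky and ground transition, and taking the ratio of sky pixels to ground pixels within it.

```python
import numpy as np

def sky_ground_ratio(gray, n_bands=8, sky_thresh=170, ground_thresh=90):
    """Compute the sky-to-ground ratio R from a grayscale rear-camera image."""
    h = gray.shape[0]
    band_h = h // n_bands
    bands = [gray[i * band_h:(i + 1) * band_h] for i in range(n_bands)]

    def counts(band):
        sky = int((band >= sky_thresh).sum())        # bright pixels treated as sky
        ground = int((band <= ground_thresh).sum())  # dark pixels treated as ground
        return sky, ground

    # target band: the band with the most balanced mix of sky and ground pixels,
    # i.e. the band in which the sky-to-ground transition occurs
    target = max(bands, key=lambda b: min(counts(b)))
    sky_px, ground_px = counts(target)
    if ground_px == 0:
        return float("inf")
    return sky_px / ground_px   # R = sky pixels / ground pixels
```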
  • obtaining the vertical adjustment angle of the target rearview mirror according to the auxiliary adjustment angle of the own vehicle and the sky-to-ground ratio R includes: looking up an auxiliary relationship table according to the auxiliary adjustment angle of the own vehicle and the sky-to-ground ratio R to obtain the corresponding vertical adjustment angle; the auxiliary relationship table is the correspondence table between the auxiliary adjustment angle, the sky-to-ground ratio R, and the vertical adjustment angle.
  • In a third aspect, an embodiment of the present application provides an adaptive adjustment method for a rearview mirror, including:
  • acquiring the image of the following vehicle collected by the rear-view camera of the own vehicle; converting the image of the following vehicle into a grayscale image and calculating the average value of the pixels in the grayscale image; if the average value is not less than a preset value, acquiring the spatial position of the driver's eyes and the spatial position of the target rearview mirror of the own vehicle, and obtaining the vertical field of view angle according to the eye spatial position and the target rearview mirror spatial position; calculating the auxiliary adjustment angle of the own vehicle according to the spatial position of the driver's eyes and the vertical field of view angle; obtaining the sky-to-ground ratio R according to the image of the following vehicle; obtaining the vertical adjustment angle of the target rearview mirror according to the auxiliary adjustment angle of the own vehicle and the sky-to-ground ratio R; and adjusting the target rearview mirror to the target vertical angle according to the vertical adjustment angle.
  • the vertical field of view angle is the angle formed by the straight line passing through the virtual mirror point and the point on the upper boundary of the target rearview mirror and the straight line passing through the virtual mirror point and the point on the lower boundary of the target rearview mirror.
  • the virtual mirror point is the symmetry point of the human eye space position with the target rearview mirror as the symmetry axis.
  • In this embodiment, the vertical angle of the rearview mirror is adjusted based on the driver's field of view and the scene outside the vehicle, or according to the slope of the road currently being driven on, so that the driver can observe the condition of the vehicles behind the own vehicle through the rearview mirror at any time.
  • This embodiment adopts an adaptive adjustment method and does not require manual adjustment, which prevents the driver from being distracted and thereby affecting driving safety.
  • the auxiliary adjustment angle of the own vehicle is the angle formed by the center line of the vertical field of view and a fifth straight line, where the fifth straight line is the straight line passing through the spatial position of the driver's eyes and a third reference point, the third reference point is a point on the target rearview mirror, and the center line of the vertical field of view is the angle bisector of the vertical field of view angle.
  • obtaining the sky-to-ground ratio R according to the image of the following vehicle includes:
  • dividing the following vehicle image longitudinally into multiple image bands; obtaining the target image band from the multiple image bands, where the target image band is the image band, among the multiple image bands, in which the sky and the ground transition continuously; counting the number of pixels occupied by the sky and the number of pixels occupied by the ground in the target image band; and calculating the sky-to-ground ratio R according to the number of pixels occupied by the sky and the number of pixels occupied by the ground, where the sky-to-ground ratio R is the ratio of the number of pixels occupied by the sky to the number of pixels occupied by the ground.
  • obtaining the vertical adjustment angle of the target rearview mirror according to the auxiliary adjustment angle of the own vehicle and the sky-to-ground ratio R includes: looking up the auxiliary relationship table according to the auxiliary adjustment angle of the own vehicle and the sky-to-ground ratio R to obtain the corresponding vertical adjustment angle; the auxiliary relationship table is the correspondence table between the auxiliary adjustment angle, the sky-to-ground ratio, and the vertical adjustment angle.
  • the rearview mirror adaptive adjustment method further includes: adjusting the vertical angle of the target rearview mirror to the target vertical angle according to the driving state of the own vehicle and the gradient γ, where the gradient γ is the absolute value of the slope of the road on which the own vehicle is currently located.
  • determining the target vertical angle according to the driving state of the own vehicle and the gradient γ includes:
  • when the driving state of the own vehicle is the long uphill driving state, the target vertical angle is adjusted to the preset angle θ; when the driving state of the own vehicle is the long downhill driving state, the target vertical angle is adjusted to the preset angle θ;
  • when the driving state of the own vehicle is out of the uphill driving state, the target vertical angle is adjusted to θ-γ/2; when the driving state of the own vehicle is out of the downhill driving state, the target vertical angle is adjusted to θ+γ/2;
  • when the own vehicle is driving on a flat road, the target vertical angle is adjusted to the preset angle θ; when the driving state of the own vehicle is entering the uphill driving state, the target vertical angle is adjusted to θ-γ/2;
  • when the driving state of the own vehicle is entering the downhill driving state, the target vertical angle is adjusted to θ+γ/2 (a decision sketch is given below).
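  • A minimal Python sketch of this decision rule follows. The driving-state labels are this sketch's own names, and the flat-road case reflects the reconstruction above; theta denotes the preset angle and gamma the absolute road gradient.

```python
def target_vertical_angle(driving_state, theta, gamma):
    """Map the driving state and gradient to the target vertical angle."""
    if driving_state in ("long_uphill", "long_downhill", "flat"):
        return theta                    # steady slope or flat road: keep the preset angle
    if driving_state in ("out_of_uphill", "entering_uphill"):
        return theta - gamma / 2.0      # road behind drops away: tilt the mirror down
    if driving_state in ("out_of_downhill", "entering_downhill"):
        return theta + gamma / 2.0      # road behind rises: tilt the mirror up
    raise ValueError(f"unknown driving state: {driving_state}")

# example: entering an uphill with an 8-degree gradient and a 10-degree preset angle
print(target_vertical_angle("entering_uphill", theta=10.0, gamma=8.0))   # -> 6.0
```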
  • In a fourth aspect, an embodiment of the present application provides an adaptive adjustment device for a rearview mirror, including:
  • the acquisition module is used to acquire the eye space position of the driver of the vehicle and the space position of the target rearview mirror, and obtain the horizontal field of view angle of the driver in the target rearview mirror according to the space position of the human eye and the space position of the target rearview mirror;
  • the acquisition module is further used to acquire the image of the following vehicle and obtain the first auxiliary angle of the target following vehicle according to the image of the following vehicle and the first reference point, where the first reference point is a point on the target rearview mirror, the image of the following vehicle is acquired by the rear-view camera, and the image of the following vehicle includes the target following vehicle;
  • the calculation module is used to calculate the second auxiliary angle according to the horizontal field of view angle, the spatial position of the human eye, and the position of the target rearview mirror;
  • the obtaining module is further configured to obtain the horizontal adjustment angle of the target rearview mirror according to the first auxiliary angle and the second auxiliary angle;
  • the adjustment module is used to adjust the horizontal angle of the target rearview mirror according to the horizontal adjustment angle.
  • the first auxiliary angle is the angle formed by a first straight line and a second straight line, where the first straight line is the straight line passing through the target following vehicle and the first reference point, and the second straight line is the straight line passing through the first reference point and perpendicular to the target rearview mirror;
  • the second auxiliary angle is the angle formed by a third straight line and the center line of the horizontal field of view, where the third straight line is the straight line passing through the driver's eye position and the second reference point, the second reference point is the intersection point of the center line of the horizontal field of view and the mirror surface of the target rearview mirror, and the center line of the horizontal field of view is the angle bisector of the horizontal field of view angle.
  • the following vehicle image includes M vehicles, and M is an integer greater than or equal to 1.
  • the acquisition module is specifically used for:
  • obtaining the offset and the body frame of a following vehicle A according to the image of the following vehicle, where the offset is the distance, in the following vehicle image, between the center position of the front face of following vehicle A and the longitudinal center line of the image, and the body frame is the number of pixels occupied by the contour of following vehicle A in the image; obtaining the vehicle distance d according to the offset and the body frame of following vehicle A, where the distance d is the distance between the front face of following vehicle A and the rear of the own vehicle; obtaining the third auxiliary angle of following vehicle A according to the distance d and the offset of following vehicle A, where the third auxiliary angle is the angle formed by a fourth straight line and the lateral center line of the own vehicle, and the fourth straight line is the straight line passing through the position of the rear-view camera and the center position of the front face of following vehicle A;
  • when M is equal to 1, following vehicle A is the target following vehicle, and the first auxiliary angle of the target following vehicle is obtained according to the third auxiliary angle of the target following vehicle and the distance d;
  • when M is greater than 1, the important probability of the i-th following vehicle is obtained according to the third auxiliary angle and the body frame of the i-th following vehicle, the following vehicle with the largest important probability is determined as the target following vehicle, and the first auxiliary angle of the target following vehicle is obtained according to the third auxiliary angle of the target following vehicle and the distance d.
  • the obtaining module is specifically used for: looking up the first relationship table according to the offset and the body frame of following vehicle A to obtain the vehicle distance d corresponding to the offset and the body frame; the first relationship table is the correspondence table between the offset, the body frame, and the vehicle distance;
  • the obtaining module is specifically used for: looking up the second relationship table according to the vehicle distance d and the offset of following vehicle A to obtain the corresponding third auxiliary angle; the second relationship table is the correspondence table between the vehicle distance, the offset, and the third auxiliary angle.
  • the obtaining module is specifically used for: looking up the third relationship table according to the third auxiliary angle and the vehicle distance d of the target following vehicle to obtain the first auxiliary angle corresponding to the third auxiliary angle and the vehicle distance d, which is the first auxiliary angle of the target following vehicle; the third relationship table is the correspondence table between the third auxiliary angle, the vehicle distance d, and the first auxiliary angle.
  • the acquiring module is specifically used for:
  • the acquisition module is specifically used for:
  • looking up the fourth relationship table according to the third auxiliary angle and the body frame of the i-th following vehicle to obtain the important probability corresponding to the third auxiliary angle and the body frame, which is the important probability of the i-th following vehicle; the fourth relationship table is the correspondence table between the third auxiliary angle, the body frame, and the important probability.
  • the obtaining module is specifically configured to: look up the fifth relationship table according to the first auxiliary angle and the second auxiliary angle to obtain the horizontal adjustment angle corresponding to the first auxiliary angle and the second auxiliary angle, which is the horizontal adjustment angle of the target rearview mirror; the fifth relationship table is the correspondence table between the first auxiliary angle, the second auxiliary angle, and the horizontal adjustment angle.
  • In a fifth aspect, an embodiment of the present application provides an adaptive adjustment device for a rearview mirror, including:
  • the acquisition module is used to acquire the spatial position of the driver's eyes and the spatial position of the target rearview mirror of the vehicle, and obtain the vertical field of view angle according to the spatial position of the human eye and the spatial position of the target rearview mirror;
  • the calculation module is used to calculate the auxiliary adjustment angle of the self-vehicle according to the spatial position of the driver's eyes and the vertical view angle;
  • the acquisition module is also used to acquire the image of the vehicle behind by the rear view camera of the vehicle, and obtain the sky-ground ratio R according to the image of the vehicle behind; obtain the vertical adjustment angle of the target rearview mirror according to the auxiliary adjustment angle of the vehicle and the sky-ground ratio R ;
  • the adjustment module is used to adjust the target rearview mirror to the target vertical angle according to the vertical adjustment angle.
  • the auxiliary adjustment angle of the own vehicle is the angle formed by the center line of the vertical field of view and a fifth straight line, where the fifth straight line is the straight line passing through the spatial position of the driver's eyes and a third reference point, the third reference point is a point on the target rearview mirror, and the center line of the vertical field of view is the angle bisector of the vertical field of view angle.
  • the obtaining module is specifically used for:
  • dividing the following vehicle image longitudinally into multiple image bands; obtaining the target image band from the multiple image bands, where the target image band is the image band, among the multiple image bands, in which the sky and the ground transition continuously; counting the number of pixels occupied by the sky and the number of pixels occupied by the ground in the target image band; and calculating the sky-to-ground ratio R according to the number of pixels occupied by the sky and the number of pixels occupied by the ground, where the sky-to-ground ratio R is the ratio of the number of pixels occupied by the sky to the number of pixels occupied by the ground.
  • the obtaining module is specifically used for: looking up the auxiliary relationship table according to the auxiliary adjustment angle of the own vehicle and the sky-to-ground ratio R to obtain the corresponding vertical adjustment angle; the auxiliary relationship table is the correspondence table between the auxiliary adjustment angle, the sky-to-ground ratio, and the vertical adjustment angle.
  • In a sixth aspect, an embodiment of the present application provides an adaptive adjustment device for a rearview mirror, including:
  • the acquisition module is used to acquire the image of the following vehicle collected by the rear view camera of the vehicle;
  • the calculation module is used to convert the image of the following car into a grayscale image and calculate the average value of the pixels in the grayscale image;
  • the obtaining module is further used to: if the average value is not less than the preset value, obtain the spatial position of the driver's eyes and the spatial position of the target rearview mirror of the own vehicle, and obtain the vertical field of view angle according to the eye spatial position and the target rearview mirror spatial position;
  • the calculation module is also used to calculate the auxiliary adjustment angle of the self-vehicle according to the spatial position of the driver's eyes and the vertical view angle;
  • the acquisition module is also used to obtain the sky-to-ground ratio R according to the image of the vehicle behind; to obtain the vertical adjustment angle of the target rearview mirror according to the auxiliary adjustment angle of the self-car and the sky-to-ground ratio R;
  • the adjustment module is used to adjust the target rearview mirror to the target vertical angle according to the vertical adjustment angle.
  • the auxiliary adjustment angle of the own vehicle is the angle formed by the center line of the vertical field of view and a fifth straight line, where the fifth straight line is the straight line passing through the spatial position of the driver's eyes and a third reference point, the third reference point is a point on the target rearview mirror, and the center line of the vertical field of view is the angle bisector of the vertical field of view angle.
  • the obtaining module is specifically used for:
  • dividing the image of the following vehicle longitudinally into multiple image bands; obtaining the target image band from the multiple image bands, where the target image band is the image band, among the multiple image bands, in which the sky and the ground transition continuously; counting the number of pixels occupied by the sky and the number of pixels occupied by the ground in the target image band; and calculating the sky-to-ground ratio R according to the number of pixels occupied by the sky and the number of pixels occupied by the ground, where the sky-to-ground ratio R is the ratio of the number of pixels occupied by the sky to the number of pixels occupied by the ground.
  • the obtaining module is specifically used for: looking up the auxiliary relationship table according to the auxiliary adjustment angle of the own vehicle and the sky-to-ground ratio R to obtain the corresponding vertical adjustment angle; the auxiliary relationship table is the correspondence table between the auxiliary adjustment angle, the sky-to-ground ratio R, and the vertical adjustment angle.
  • the acquiring module is further used to acquire the driving state of the own vehicle and the gradient γ, where the gradient γ is the absolute value of the slope of the road on which the own vehicle is currently located;
  • the adjustment module is further used to adjust the vertical angle of the target rearview mirror to the target vertical angle according to the driving state of the own vehicle and the gradient γ.
  • the adjustment module is specifically used for:
  • when the driving state of the own vehicle is the long uphill driving state, adjusting the target vertical angle to the preset angle θ; when the driving state of the own vehicle is the long downhill driving state, adjusting the target vertical angle to the preset angle θ;
  • when the driving state of the own vehicle is out of the uphill driving state, adjusting the target vertical angle to θ-γ/2; when the driving state of the own vehicle is out of the downhill driving state, adjusting the target vertical angle to θ+γ/2;
  • when the own vehicle is driving on a flat road, adjusting the target vertical angle to the preset angle θ; when the driving state of the own vehicle is entering the uphill driving state, adjusting the target vertical angle to θ-γ/2;
  • when the driving state of the own vehicle is entering the downhill driving state, adjusting the target vertical angle to θ+γ/2.
  • In a seventh aspect, an embodiment of the present application provides an adaptive adjustment device for a rearview mirror, including: a memory for storing a program; and a processor for executing the program stored in the memory, where, when the program stored in the memory is executed, the processor is configured to perform the method of at least one of the first aspect, the second aspect, and the third aspect.
  • In an eighth aspect, an embodiment of the present application provides a computer-readable medium that stores program code for execution by a device, where the program code includes instructions for performing the method of at least one of the first aspect, the second aspect, or the third aspect.
  • In a ninth aspect, an embodiment of the present application provides a computer program product containing instructions which, when the computer program product runs on a computer, cause the computer to perform the method of at least one of the first aspect, the second aspect, or the third aspect.
  • In a tenth aspect, an embodiment of the present application provides a chip, which includes a processor and a data interface; the processor reads instructions stored in a memory through the data interface and performs the method of at least one of the first aspect, the second aspect, or the third aspect.
  • the chip may further include a memory in which instructions are stored, and the processor is configured to execute instructions stored on the memory.
  • the processor is configured to execute at least one method in the first aspect, the second aspect, or the third aspect.
  • In an eleventh aspect, an embodiment of the present application provides an electronic device, which includes the device of at least one of the foregoing fourth to sixth aspects.
  • FIG. 1 is a schematic structural diagram of an autonomous vehicle provided by an embodiment of the application.
  • FIG. 2 is a schematic flowchart of a method for self-adjusting rearview mirror provided by an embodiment of the application
  • Figure 3 is a schematic diagram of the principle of obtaining the spatial position of the human eye
  • FIG. 4 is a schematic diagram of the driver's horizontal field of view in the rearview mirror provided by an embodiment of the application;
  • FIG. 5 is a schematic diagram of an auxiliary angle provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of the auxiliary angle and the horizontal adjustment angle provided by an embodiment of the application.
  • FIG. 7 is a schematic flowchart of another method for self-adjusting rearview mirror provided by an embodiment of the application.
  • FIG. 8 is a schematic diagram of auxiliary adjustment angle provided by an embodiment of the application.
  • FIG. 9 is a schematic diagram of the principle of calculating the sky-to-ground ratio provided by an embodiment of the application.
  • FIG. 10 is a schematic flowchart of another method for self-adjusting rearview mirror provided by an embodiment of this application.
  • FIG. 11 is a schematic diagram of a principle of adjusting the vertical angle of a rearview mirror based on a slope provided by an embodiment of the application;
  • FIG. 12 is a schematic structural diagram of a rear-view mirror adaptive adjustment device provided by an embodiment of the application.
  • FIG. 13 is a schematic structural diagram of another rearview mirror adaptive adjustment device provided by an embodiment of the application.
  • FIG. 14 is a schematic structural diagram of another rearview mirror adaptive adjustment device provided by an embodiment of the application.
  • FIG. 15 is a schematic structural diagram of another rearview mirror adaptive adjustment device provided by an embodiment of the application.
  • FIG. 16 is a schematic structural diagram of another rearview mirror adaptive adjustment device provided by an embodiment of the application.
  • FIG. 17 is a schematic structural diagram of a computer program product provided by an embodiment of the application.
  • Fig. 1 is a functional block diagram of a vehicle 100 provided by an embodiment of the present invention.
  • the vehicle 100 is configured in a fully or partially autonomous driving mode.
  • the vehicle 100 can control itself while in the automatic driving mode, determine the current state of the vehicle and its surrounding environment through human operation, determine the possible behavior of at least one other vehicle in the surrounding environment, determine a confidence level corresponding to the possibility that the other vehicle performs the possible behavior, and control the vehicle 100 based on the determined information.
  • When the vehicle 100 is in the automatic driving mode, the vehicle 100 can be set to operate without human interaction.
  • the vehicle 100 may include various subsystems, such as a travel system 102, a sensor system 104, a control system 106, one or more peripheral devices 108 and a power supply 110, a computer system 112, and a user interface 116.
  • the vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple elements.
  • each subsystem and element of the vehicle 100 may be interconnected by wire or wirelessly.
  • the travel system 102 may include components that provide power movement for the vehicle 100.
  • the propulsion system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121.
  • the engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of other types of engines, such as a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine.
  • the engine 118 converts the energy source 119 into mechanical energy.
  • Examples of energy sources 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity.
  • the energy source 119 may also provide energy for other systems of the vehicle 100.
  • the transmission device 120 can transmit mechanical power from the engine 118 to the wheels 121.
  • the transmission device 120 may include a gearbox, a differential, and a drive shaft.
  • the transmission device 120 may also include other devices, such as a clutch.
  • the drive shaft may include one or more shafts that can be coupled to one or more wheels 121.
  • the sensor system 104 may include several sensors that sense information about the environment around the vehicle 100.
  • the sensor system 104 may include a positioning system 122 (the positioning system may be a GPS system, a Beidou system or other positioning systems), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128, and Camera 130.
  • the sensor system 104 may also include sensors of the internal system of the monitored vehicle 100 (for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, direction, speed, etc.). Such detection and identification are key functions for the safe operation of the autonomous vehicle 100.
  • the positioning system 122 can be used to estimate the geographic location of the vehicle 100.
  • the IMU 124 is used to sense changes in the position and orientation of the vehicle 100 based on inertial acceleration.
  • the IMU 124 may be a combination of an accelerometer and a gyroscope.
  • the radar 126 may use radio signals to sense objects in the surrounding environment of the vehicle 100. In some embodiments, in addition to sensing the object, the radar 126 may also be used to sense the speed and/or direction of the object.
  • the laser rangefinder 128 can use laser light to sense objects in the environment where the vehicle 100 is located.
  • the laser rangefinder 128 may include one or more laser sources, laser scanners, and one or more detectors, as well as other system components.
  • the camera 130 may be used to capture multiple images of the surrounding environment of the vehicle 100.
  • the camera 130 may be a still camera or a video camera.
  • the control system 106 controls the operation of the vehicle 100 and its components.
  • the control system 106 may include various components, including a steering system 132, a throttle 134, a braking unit 136, a sensor fusion algorithm 138, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
  • the steering system 132 is operable to adjust the forward direction of the vehicle 100.
  • it may be a steering wheel system in one embodiment.
  • the throttle 134 is used to control the operating speed of the engine 118 and thereby control the speed of the vehicle 100.
  • the braking unit 136 is used to control the vehicle 100 to decelerate.
  • the braking unit 136 may use friction to slow down the wheels 121.
  • the braking unit 136 may convert the kinetic energy of the wheels 121 into electric current.
  • the braking unit 136 may also take other forms to slow down the rotation speed of the wheels 121 to control the speed of the vehicle 100.
  • the computer vision system 140 may be operable to process and analyze the images captured by the camera 130 in order to identify objects and/or features in the surrounding environment of the vehicle 100.
  • the objects and/or features may include traffic signals, road boundaries, and obstacles.
  • the computer vision system 140 may use object recognition algorithms, Structure from Motion (SFM) algorithms, video tracking, and other computer vision technologies.
  • SFM Structure from Motion
  • the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and so on.
  • the route control system 142 is used to determine the travel route of the vehicle 100.
  • the route control system 142 may combine data from the sensor 138, the GPS 122, and one or more predetermined maps to determine the driving route for the vehicle 100.
  • the obstacle avoidance system 144 is used to identify, evaluate and avoid or otherwise cross over potential obstacles in the environment of the vehicle 100.
  • the control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
  • the vehicle 100 interacts with external sensors, other vehicles, other computer systems, or users through peripheral devices 108.
  • the peripheral device 108 may include a wireless communication system 146, an onboard computer 148, a microphone 150, and/or a speaker 152.
  • the peripheral device 108 provides a means for the user of the vehicle 100 to interact with the user interface 116.
  • the onboard computer 148 may provide information to the user of the vehicle 100.
  • the user interface 116 can also operate the onboard computer 148 to receive user input.
  • the on-board computer 148 can be operated through a touch screen.
  • the peripheral device 108 may provide a means for the vehicle 100 to communicate with other devices located in the vehicle.
  • the microphone 150 may receive audio (eg, voice commands or other audio input) from a user of the vehicle 100.
  • the speaker 152 may output audio to the user of the vehicle 100.
  • the wireless communication system 146 may wirelessly communicate with one or more devices directly or via a communication network.
  • the wireless communication system 146 may use 3G cellular communication, such as CDMA, EVDO, GSM/GPRS, 4G cellular communication, such as LTE, or 5G cellular communication.
  • the wireless communication system 146 may use WiFi to communicate with a wireless local area network (WLAN).
  • WLAN wireless local area network
  • the wireless communication system 146 may directly communicate with the device using an infrared link, Bluetooth, ZigBee, or other wireless protocols, such as various vehicle communication systems.
  • the wireless communication system 146 may include one or more dedicated short-range communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.
  • DSRC dedicated short-range communications
  • the power supply 110 may provide power to various components of the vehicle 100.
  • the power source 110 may be a rechargeable lithium ion or lead-acid battery.
  • One or more battery packs of such batteries may be configured as a power source to provide power to various components of the vehicle 100.
  • the power source 110 and the energy source 119 may be implemented together, such as in some all-electric vehicles.
  • the computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer readable medium such as a data storage device 114.
  • the computer system 112 may also be multiple computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner.
  • the processor 113 may be any conventional processor, such as a commercially available CPU. Alternatively, the processor may be a dedicated device such as an ASIC or other hardware-based processor.
  • Although FIG. 1 functionally illustrates the processor, memory, and other elements of the computer 110 in the same block, those of ordinary skill in the art should understand that the processor, computer, or memory may actually include multiple processors, computers, or memories stored in the same physical enclosure.
  • For example, the memory may be a hard disk drive or another storage medium located in a housing different from that of the computer 110. Therefore, a reference to a processor or computer will be understood to include a reference to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as steering components and deceleration components, may each have their own processor that only performs calculations related to component-specific functions.
  • the processor may be located away from the vehicle and wirelessly communicate with the vehicle.
  • some of the processes described herein are executed on a processor arranged in the vehicle and others are executed by a remote processor, including taking the necessary steps to perform a single manipulation.
  • the data storage device 114 may include instructions 115 (eg, program logic), which may be executed by the processor 113 to perform various functions of the vehicle 100, including those functions described above.
  • the data storage device 114 may also contain additional instructions, including sending data to, receiving data from, interacting with, and/or performing data on one or more of the propulsion system 102, the sensor system 104, the control system 106, and the peripheral device 108. Control instructions.
  • the data storage device 114 may also store data, such as road maps, route information, the location, direction, and speed of the vehicle, and other such vehicle data, as well as other information. Such information may be used by the vehicle 100 and the computer system 112 during the operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes.
  • the camera 130 may include a driver monitoring system (DMS) camera, a cockpit monitoring system (CMS) camera, and a rear-view camera located to obtain images of the rear vehicle.
  • DMS driver monitoring system
  • CMS cockpit monitoring system
  • the DMS camera is used to obtain an image of the driver's head
  • the CMS camera is used to obtain an image of the interior of the vehicle driven by the driver, and the image shows the driver's head.
  • the processor 113 obtains the spatial position of the driver's eyes based on the image obtained by the DMS camera and the image obtained by the CMS camera.
  • the processor 113 obtains the driver's horizontal field of view angle in the rearview mirror based on the spatial position of the driver's eyes and the spatial position of the rearview mirror, obtains the first auxiliary angle based on the image of the rear vehicle, and The horizontal viewing angle, the spatial position of the human eye, and the spatial position of the rearview mirror obtain the second auxiliary angle, and finally the horizontal adjustment angle is obtained based on the first auxiliary angle and the second auxiliary angle, and the horizontal angle of the rearview mirror is adjusted based on the horizontal adjustment angle.
  • the processor 113 obtains the vertical viewing angle of the driver in the rearview mirror based on the spatial position of the driver's eyes and the spatial position of the rearview mirror, based on the vertical viewing angle, the spatial position of the human eye, and the rearview mirror space.
  • the position obtains the auxiliary adjustment angle;
  • the processor 113 obtains the sky-to-ground ratio R based on the image of the following vehicle, and then obtains the vertical adjustment angle based on the sky-to-ground ratio R and the auxiliary adjustment angle.
  • the processor 113 adjusts the vertical angle of the rearview mirror to the target vertical angle based on the vertical adjustment angle.
  • the gyroscope obtains the slope of the road on which the vehicle travels within a preset time period, the processor 113 determines the driving state of the vehicle according to the slope within the preset time period, and then adjusts the vertical angle of the rearview mirror based on the driving state of the vehicle and the slope of the road on which the vehicle is currently traveling (a classification sketch is given below).
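  • One plausible way to derive the driving state from the gyroscope's slope history is sketched below in Python. The window split, the flat_eps threshold, and the state labels are assumptions made for this sketch; the embodiment only states that the driving state is determined from the slope over a preset time period.

```python
def classify_driving_state(slopes, flat_eps=1.0):
    """Classify the driving state from road slopes sampled over a preset window.

    slopes: recent gradient readings in degrees (positive = uphill), oldest first."""
    half = max(len(slopes) // 2, 1)
    past = sum(slopes[:half]) / half     # average slope earlier in the window
    now = sum(slopes[-half:]) / half     # average slope at the end of the window

    def level(s):
        if s > flat_eps:
            return "up"
        if s < -flat_eps:
            return "down"
        return "flat"

    before, after = level(past), level(now)
    if before == after:
        return {"up": "long_uphill", "down": "long_downhill", "flat": "flat"}[after]
    if after == "up":
        return "entering_uphill"
    if after == "down":
        return "entering_downhill"
    # after == "flat": the vehicle has just left a slope
    return "out_of_uphill" if before == "up" else "out_of_downhill"

print(classify_driving_state([0.2, 0.1, 4.5, 5.0]))   # -> "entering_uphill"
```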
  • the user interface 116 is used to provide information to or receive information from a user of the vehicle 100.
  • the user interface 116 may include one or more input/output devices in the set of peripheral devices 108, such as a wireless communication system 146, a car computer 148, a microphone 150, and a speaker 152.
  • the computer system 112 may control the functions of the vehicle 100 based on inputs received from various subsystems (for example, the travel system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may utilize input from the control system 106 in order to control the steering unit 132 to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control of many aspects of the vehicle 100 and its subsystems.
  • one or more of these components described above may be installed or associated with the vehicle 100 separately.
  • the data storage device 114 may exist partially or completely separately from the vehicle 100.
  • the aforementioned components may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 1 should not be construed as a limitation to the embodiment of the present invention.
  • An autonomous vehicle traveling on a road can recognize objects in its surrounding environment to determine the adjustment to the current speed.
  • the object may be other vehicles, traffic control equipment, or other types of objects.
  • each recognized object can be considered independently, and its respective characteristics, such as its current speed, acceleration, and distance from the vehicle, can be used to determine the speed to which the self-driving car is to be adjusted.
  • Optionally, the self-driving vehicle 100 or the computing device associated with the self-driving vehicle 100 may predict the behavior of the identified object based on the characteristics of the identified object and the state of the surrounding environment (for example, traffic, rain, ice on the road, and so on).
  • each recognized object depends on each other's behavior, so all recognized objects can also be considered together to predict the behavior of a single recognized object.
  • the vehicle 100 can adjust its speed based on the predicted behavior of the identified object.
• an autonomous vehicle can determine the stable state to which the vehicle needs to be adjusted (for example, accelerating, decelerating, or stopping) based on the predicted behavior of the object.
  • other factors may also be considered to determine the speed of the vehicle 100, such as the lateral position of the vehicle 100 on the road on which it is traveling, the curvature of the road, the proximity of static and dynamic objects, and so on.
• the computing device can also provide instructions to modify the steering angle of the vehicle 100 so that the self-driving car follows a given trajectory and/or maintains safe horizontal and vertical distances from objects near the self-driving car (for example, cars in adjacent lanes on the road).
  • the above-mentioned vehicle 100 may be a car, truck, motorcycle, bus, boat, airplane, helicopter, lawn mower, recreational vehicle, playground vehicle, construction equipment, tram, golf cart, train, and trolley, etc.
  • the embodiments of the invention are not particularly limited.
  • the adjustment of the rear-view mirror of the vehicle includes left-right adjustment and vertical adjustment of the rear-view mirror. The following describes how to adjust the rearview mirror in the left and right directions.
  • FIG. 2 is a schematic flowchart of a method for adaptive adjustment of a rearview mirror according to an embodiment of the present invention. As shown in Figure 2, the method includes:
• S201 Obtain the spatial position of the driver's eyes and the position of the target rearview mirror of the vehicle, and obtain the driver's horizontal field of view according to the spatial position of the driver's eyes and the position of the target rearview mirror.
  • the vehicle driven by the driver is equipped with a driver monitoring system (DMS) camera and a cockpit monitoring system (CMS) camera.
  • the DMS camera is used to obtain an image of the driver's head
  • the CMS camera is used to obtain an image of the interior of the vehicle driven by the driver, and the image shows the driver's head.
• the spatial position of the driver's eyes may be obtained from the image captured by the DMS camera alone, or from the image captured by the DMS camera together with the image captured by the CMS camera.
• the following specifically introduces how the spatial position of the driver's eyes is obtained from the image captured by the DMS camera and the image captured by the CMS camera.
• the internal parameter matrix and the external parameter matrix of the DMS camera and the CMS camera are obtained; the eye positions P 1 and P 2 are detected in the image captured by the DMS camera and the image captured by the CMS camera, respectively; combining the external parameter matrices and internal parameter matrices of the DMS camera and the CMS camera, the spatial lines O 1 P 1 and O 2 P 2 on which the eye lies are calculated; finally, the eye spatial point P is obtained from the spatial lines O 1 P 1 and O 2 P 2 , where P is the intersection point of the spatial lines O 1 P 1 and O 2 P 2 .
  • the space position of the left and right eyes of the driver is obtained, and then the space position of the left and right eyes of the driver is averaged to obtain the space position of the driver's eyes.
• obtaining the internal parameter and external parameter matrices of the two cameras establishes, based on those intrinsic and extrinsic parameters, the position and orientation relationships between the two cameras and between the cameras and other objects in space.
• the camera calibration method is used to obtain the factory-calibrated internal parameter matrix and external parameter matrix of the DMS camera and the CMS camera; the 2D eye position coordinates p 1 and p 2 are then detected in the image captured by the DMS camera and the image captured by the CMS camera by deep learning or other algorithms; combining the calibrated internal parameter matrices and external parameter matrices of the DMS camera and the CMS camera, the spatial lines O 1 P 1 and O 2 P 2 on which the eye lies are calculated. As shown in Figure 3, the 2D eye positions p 1 and p 2 are converted into spatial coordinates P 1 and P 2 based on the cameras' internal/external parameters;
• O 1 and O 2 are the optical origins of the DMS camera and the CMS camera in space, which are recorded in the external parameter matrices of the two cameras; the spatial lines O 1 P 1 and O 2 P 2 on which the eye lies are then calculated from the spatial coordinates of the points P 1 , P 2 and O 1 , O 2 ; finally, the eye spatial point P is obtained from the spatial lines O 1 P 1 and O 2 P 2 .
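• As an illustrative sketch only (not part of the patent text), the intersection computation above can be written as follows; the helper names, the extrinsic convention x_cam = R·X + t, and the use of the midpoint of the common perpendicular between the two rays are assumptions for illustration.

```python
import numpy as np

def pixel_to_ray(K, R, t, uv):
    """Back-project a 2-D eye detection (u, v) into a world-space ray.

    K is the camera's intrinsic matrix; (R, t) are extrinsics assumed to map a
    world point X to camera coordinates as R @ X + t.  Returns the optical
    origin O in world coordinates and a unit ray direction toward the eye.
    """
    origin = -R.T @ t                                  # optical origin O1 / O2
    d_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    d_world = R.T @ d_cam
    return origin, d_world / np.linalg.norm(d_world)

def eye_point_from_two_rays(o1, d1, o2, d2):
    """Eye spatial point P from the spatial lines O1P1 and O2P2.

    Real rays rarely intersect exactly, so the midpoint of the shortest
    segment between the two lines is returned as P.
    """
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:                              # near-parallel rays
        s, t_ = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t_ = (a * e - b * d) / denom
    return (o1 + s * d1 + o2 + t_ * d2) / 2.0
```

• Applying the same computation to the left-eye and right-eye detections and averaging the two results gives the driver's eye spatial position used in the following steps.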
  • the position of the target rearview mirror of the vehicle is obtained, and then the horizontal viewing angle of the driver is determined based on the spatial position of the driver's eyes and the position of the target rearview mirror.
  • E is the spatial position of the driver’s eyes
  • R and L are the left and right boundary points of the right rearview mirror of the vehicle, respectively
• A' is the point symmetric to point E with the straight line RL as the axis of symmetry.
• ∠LA'R is the horizontal field of view angle of the above driver.
• A is the driver's horizontal field of view.
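• A minimal sketch of this horizontal field of view computation in the horizontal plane, assuming 2-D coordinates for E, L and R; the function names below are illustrative only.

```python
import numpy as np

def reflect_point_across_line(p, a, b):
    """Reflect 2-D point p across the straight line through a and b (line RL)."""
    p, a, b = (np.asarray(x, dtype=float) for x in (p, a, b))
    ab = (b - a) / np.linalg.norm(b - a)
    foot = a + np.dot(p - a, ab) * ab        # foot of the perpendicular from p
    return 2.0 * foot - p

def horizontal_field_of_view_deg(eye_E, mirror_L, mirror_R):
    """Angle LA'R in degrees, where A' is eye point E reflected across line RL."""
    a_prime = reflect_point_across_line(eye_E, mirror_R, mirror_L)
    v1 = np.asarray(mirror_L, dtype=float) - a_prime
    v2 = np.asarray(mirror_R, dtype=float) - a_prime
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
```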
  • S202 Acquire an image of the following vehicle, and obtain a first assist angle of the target following vehicle according to the image of the following vehicle.
  • the image of the following vehicle is obtained by the rear-view camera of the own vehicle, and the rear-view camera is used to obtain the image of the vehicle driving behind the own vehicle.
  • the rear-view camera can be located at any position behind the vehicle, such as the center position of the upper or lower border of the license plate, or the upper left corner, lower left corner, upper right corner, or lower right corner of the rear of the vehicle.
  • the image of the following vehicle may include vehicles traveling in multiple lanes behind the vehicle.
  • the following vehicle image includes M following vehicles
  • determining the first assist angle of the target following vehicle based on the following vehicle image includes:
• the offset of the following car A is the distance, in the following car image, between the center position of the front face of the following car A and the longitudinal center line of the following car image;
• the car body frame of the following car A is the number of pixels occupied by the contour of the following car A in the image of the following car; the distance d is obtained according to the offset and the car body frame of the following car A, where the distance d is the distance between the front face of the following car A and the rear of the own vehicle; the third auxiliary angle of the following car A is obtained according to the distance d and the offset of the following car A;
• the third auxiliary angle is the angle formed by the fourth straight line and the longitudinal centerline of the own vehicle, where the fourth straight line is the straight line passing through the position of the rear-view camera and the center position of the front face of the following car A;
  • the following vehicle A is the target following vehicle, and the first auxiliary angle of the target following vehicle is obtained according to the third auxiliary angle of the target following vehicle and the distance d;
• the important probability of the i-th following car is obtained according to the third auxiliary angle and the car body frame of the i-th following car;
• the following car with the largest important probability is determined as the target following car;
• the first auxiliary angle of the target following car is obtained according to the third auxiliary angle of the target following car and the distance d.
• the important probability is used to characterize the importance of a following car, or the degree to which the following car needs the driver's attention. The larger the car body frame of the following car and the smaller the third auxiliary angle, the greater the important probability of the following car. The greater the important probability of a following car, the higher its importance, or the higher the degree to which it needs the driver's attention.
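• A hedged sketch of the target-vehicle selection rule described above; the scoring function stands in for the fourth relationship table (Table 3 below), whose calibrated values are not reproduced here.

```python
def importance_probability(body_frame_px, third_aux_angle_deg, table_lookup=None):
    """Important probability of one following vehicle.

    In the text this comes from the fourth relationship table; here a
    monotonic placeholder is used: a larger body frame and a smaller third
    auxiliary angle give a larger probability.
    """
    if table_lookup is not None:
        return table_lookup(body_frame_px, third_aux_angle_deg)
    return body_frame_px / (body_frame_px + 1000.0 * (1.0 + abs(third_aux_angle_deg) / 10.0))

def select_target_following_vehicle(vehicles):
    """vehicles: list of dicts with 'body_frame' (pixels) and 'third_aux_angle'
    (degrees); returns the vehicle with the largest important probability."""
    return max(vehicles, key=lambda v: importance_probability(v["body_frame"],
                                                              v["third_aux_angle"]))
```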
• this embodiment is used to determine, from the multiple vehicles behind, the vehicle that needs to be focused on, and then adjust the horizontal angle of the target rearview mirror so that this vehicle is in the field of vision of the driver of the own vehicle,
• so that the driver of the own vehicle can pay attention to the driving status of this vehicle in real time, thereby ensuring the driving safety of the own vehicle.
• since the vehicles that are not focused on are not in the field of vision of the driver of the own vehicle, interference by those vehicles with the judgment of the driver is avoided.
• the first auxiliary angle of the target following vehicle is the angle formed by the first straight line and the second straight line;
• the first straight line is the straight line passing through the target following vehicle and the first reference point;
• the second straight line is the straight line passing through the first reference point and perpendicular to the target rearview mirror;
• the first reference point is a point on the target rearview mirror.
  • the first reference point is the center position point of the target rearview mirror.
• the third auxiliary angle is ∠COY, where point O is the center position of the rear of the vehicle, and is also the installation position of the rear-view camera.
  • Straight line OY is the longitudinal centerline of the vehicle
  • OX is the vertical line passing through point O
  • C is the center position of the front face of the following vehicle A
  • point O' is the center position of the right rearview mirror of the vehicle
• straight line O'L' is a straight line passing through O' and perpendicular to the mirror surface of the right rearview mirror of the vehicle.
  • the first reference point is the center position point of the target rearview mirror
• the first auxiliary angle is ∠CO'L'
  • the first straight line is the straight line CO'
  • the second straight line is the straight line O'L'.
• that is, the third auxiliary angle ∠COY is the angle between the straight line passing through the center of the front face of the following vehicle A and the position of the rear-view camera, and the straight line passing through the position of the rear-view camera and parallel to the longitudinal centerline of the vehicle.
  • acquiring the body frame of the following vehicle A according to the image of the following vehicle includes:
  • the car body frame of the following car A is the number of pixels in the contour of the following car A.
  • the body frame of each of the multiple following vehicles can be acquired simultaneously according to the above-mentioned method.
• the rear-view camera is used to obtain the image of the rear of the vehicle (i.e., the image of the following vehicle); the image is median filtered to eliminate interference information in the image of the following vehicle; Canny edge detection is then applied to the filtered image to segment the objects in the image; finally, the Haar operator is applied to the edge detection result to find the car bodies among the segmented objects, the minimum bounding rectangle of each car is drawn, and the number of pixels occupied by each car's rectangular frame is calculated; this number of pixels is the car body frame. So far, the body frame of each of the multiple following vehicles is obtained.
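• A rough OpenCV sketch of the pipeline just described, offered only as an illustration: the Haar-operator vehicle verification step is replaced by a simple area filter, and the thresholds are assumptions rather than values from the patent.

```python
import cv2

def body_frames_from_rear_image(img_bgr, min_area=2000):
    """Median filtering -> Canny edge detection -> contour extraction ->
    bounding rectangle per candidate -> pixel count of each rectangle
    (the 'car body frame').  min_area is an assumed stand-in for the
    Haar-based vehicle verification used in the text."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    filtered = cv2.medianBlur(gray, 5)                 # suppress interference
    edges = cv2.Canny(filtered, 50, 150)               # segment object boundaries
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    frames = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)               # axis-aligned bounding box
        if w * h >= min_area:
            frames.append({"bbox": (x, y, w, h), "body_frame": int(w * h)})
    return frames
```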
  • obtaining the vehicle distance d according to the offset of the following vehicle A and the vehicle body frame includes:
• the first relationship table is the correspondence relationship table between the offset, the car body frame, and the distance.
• the model of the following car A, such as a truck, a car, or a sport utility vehicle (SUV), can be determined according to the body frame of the following car A. Different vehicle models have different vehicle widths, while vehicles of the same model have similar front widths, so the actual width of the front of the following car A can be determined according to the model of the following car A.
• the car body frame will include the front of the car and a varying proportion of the side of the car. The width of the front part within the car body frame is obtained according to the offset, so that the distance d between the following car A and the own car can be determined from the actual width of the front of the following car A and the width of the front of the following car A in the image of the following car.
  • the first relationship table is acquired before using the first relationship table.
  • Table 1 below is a table of the corresponding relationship between the offset and the vehicle body frame and the distance, that is, the above-mentioned first relationship table.
  • This look-up table relationship is related to the field of view, pixels, and installation position of the selected rear-view camera.
• as an example, the rear-view camera is installed above the rear license plate, its resolution is 1280*960, and its field of view is 120° in the horizontal direction and 90° in the vertical direction.
  • Table 1 is only an example, which is used to indicate the relationship between the offset, the vehicle body frame and the distance, and the change trend of each variable is not a limitation on the protection scope of the present application.
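• The following sketch shows how such a two-input relationship table might be queried; the miniature grid values are placeholders for illustration, not the calibrated data of Table 1.

```python
import numpy as np

def lookup_2d(table, row_keys, col_keys, row_value, col_value):
    """Nearest-cell lookup in a two-input relationship table.

    table[i][j] holds the output recorded for (row_keys[i], col_keys[j]);
    linear interpolation could replace the nearest-cell rule if the table
    is dense enough.
    """
    i = int(np.argmin(np.abs(np.asarray(row_keys, dtype=float) - row_value)))
    j = int(np.argmin(np.abs(np.asarray(col_keys, dtype=float) - col_value)))
    return table[i][j]

# Hypothetical miniature version of the first relationship table:
# rows = car body frame (pixels), columns = offset (cm), entries = distance d (m).
body_frames = [40000, 20000, 10000]
offsets_cm = [0, 50, 100]
distance_m = [[5, 6, 8],
              [10, 12, 15],
              [20, 24, 30]]
d = lookup_2d(distance_m, body_frames, offsets_cm, 18000, 40)   # -> 12 with these placeholder values
```

• The second, third, fourth, fifth and sixth relationship tables described below can be queried in the same way, with their respective inputs and outputs.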
• the second relationship table is queried according to the distance d and the offset of the following vehicle A to obtain the third auxiliary angle of the following vehicle A, where the third auxiliary angle of the following vehicle A is the third auxiliary angle corresponding to the distance d and the offset of the following vehicle A, and the second relationship table is the correspondence relationship table between the distance, the offset, and the third auxiliary angle.
  • the second relationship table is acquired before using the second relationship table.
• Table 2 below is the correspondence relationship table between the distance, the offset, and the third auxiliary angle, that is, the above-mentioned second relationship table.
• the second relationship table is acquired in advance; it can be obtained from a third-party device or created by the vehicle itself.
• the second relationship table is established from different offsets, vehicle distances d, and the corresponding third auxiliary angles.
  • This relationship table is related to the field of view, pixels, and installation position of the selected rear-view camera.
• when establishing this relationship table, first select the rear-view camera and fix its installation position; then place other vehicles used for testing at different positions, record the distance d, read the image of the following vehicle, and record the third auxiliary angle ∠COY and the offset; the second relationship table can thus be obtained, in which the horizontal axis is the distance d in meters and the vertical axis is the offset in centimeters.
• as an example, the rear-view camera is installed above the rear license plate, its resolution is 1280*960, and its field of view is 120° in the horizontal direction and 90° in the vertical direction.
  • the important probability of obtaining the i-th rear car according to the third auxiliary angle and the body frame of the i-th rear car includes:
• the important probability corresponding to the third auxiliary angle and the car body frame of the i-th following car is the important probability of the i-th following car;
• the fourth relationship table is the correspondence relationship table between the third auxiliary angle, the car body frame, and the important probability.
  • the important probability of the following car is 1.
  • the vehicle closest to the own vehicle is selected to determine its important probability.
  • the fourth relationship table is established before using the fourth relationship table. See Table 3 below.
  • Table 3 below is the third auxiliary angle and the corresponding relationship table between the car body frame and the important probability, that is, the above-mentioned fourth relationship table.
• the applicable range of Table 3 above is within 5 lanes behind the vehicle, that is, two lanes to the left, two lanes to the right, and the own lane.
  • This look-up table relationship is related to the field of view, pixels, and installation position of the selected rear-view camera.
• as an example, the rear-view camera is installed above the rear license plate, its resolution is 1280*960, and its field of view is 120° in the horizontal direction and 90° in the vertical direction.
  • obtaining the first auxiliary angle of the target following vehicle according to the third auxiliary angle of the target following vehicle and the distance d includes:
• the third relationship table is the correspondence relationship table between the third auxiliary angle, the vehicle distance d, and the first auxiliary angle.
• Table 4 below is the correspondence relationship table between the third auxiliary angle, the vehicle distance d, and the first auxiliary angle, that is, the above-mentioned third relationship table.
  • the third relationship table is related to the field of view, pixels, and installation position of the selected rear-view camera.
• the third relationship table can thus be obtained, in which the horizontal axis is the vehicle distance d in meters and the vertical axis is the value of ∠COY.
• as an example, the rear-view camera is installed above the rear license plate, its resolution is 1280*960, and its field of view is 120° in the horizontal direction and 90° in the vertical direction.
• the first auxiliary angle is related to the position of the first reference point on the target rearview mirror; therefore, the first auxiliary angle of the target following vehicle acquired according to the image of the following vehicle needs to be consistent with the first auxiliary angle in the third relationship table, that is, both are angles obtained based on the same first reference point.
  • acquiring the first assist angle of the target following vehicle according to the offset and the body frame of the target following vehicle includes:
• the sixth relationship table is queried according to the offset and the car body frame of the target following vehicle, and the first auxiliary angle corresponding to that offset and car body frame is the first auxiliary angle of the target following vehicle;
• the sixth relationship table is the correspondence relationship table between the offset, the car body frame, and the first auxiliary angle.
• the manner of obtaining the sixth relationship table can refer to the manner of obtaining the above-mentioned first relationship table, second relationship table, and third relationship table, and will not be described here.
  • S203 Calculate the second assist angle according to the driver's field of view in the target rearview mirror, the spatial position of the human eye, and the target rearview mirror position.
• the second auxiliary angle is the angle formed by the third straight line and the horizontal field of view centerline;
• the third straight line is a straight line passing through the driver's eye position and the second reference point;
• the second reference point is the intersection point of the horizontal field of view centerline with the mirror surface of the target rearview mirror, and the horizontal field of view centerline is the angular bisector of the horizontal field of view angle.
  • E is the spatial position of the driver's eyes
• point O' is the center position of the right rearview mirror, that is, the second reference point is the center position of the right rearview mirror;
• L' is a point on the straight line that passes through the center position of the right rearview mirror and is perpendicular to its mirror surface; according to the principle of mirror reflection, an incident ray onto the mirror and the corresponding outgoing ray form equal angles with this perpendicular line;
• C is the center of the front face of the following vehicle; the driver's current horizontal field of view in the right rearview mirror is A 1 , and the driver's horizontal field of view in the right rearview mirror after adjustment is A 2 ;
• the angle between the centerlines of A 1 and A 2 is the horizontal adjustment angle of the rearview mirror.
• the angle ∠EO'V 1 at which the human eye looks to the center of the horizontal field of view A 1 through the midpoint of the rearview mirror is obtained by calculation; this is the second auxiliary angle.
• the straight line O'V 1 is the centerline of the horizontal field of view A 1 , that is, the horizontal field of view centerline; the straight line O'V 2 is the centerline of the horizontal field of view A 2 , that is, the centerline of the driver's horizontal field of view after the horizontal angle of the right rearview mirror is adjusted.
  • S204 Acquire the horizontal adjustment angle of the target rearview mirror according to the first auxiliary angle and the second auxiliary angle, and adjust the horizontal angle of the target rearview mirror according to the horizontal adjustment angle.
  • acquiring the horizontal adjustment angle of the target rearview mirror according to the first auxiliary angle and the second auxiliary angle includes:
• the fifth relationship table is queried according to the first auxiliary angle and the second auxiliary angle; the horizontal adjustment angle corresponding to the first auxiliary angle and the second auxiliary angle is the horizontal adjustment angle of the target rearview mirror, and the fifth relationship table is the correspondence relationship table between the first auxiliary angle, the second auxiliary angle, and the horizontal adjustment angle.
• the fifth relationship table is acquired in advance; it can be obtained from a third-party device or created by the vehicle itself. The fifth relationship table is established from different values of the first auxiliary angle ∠CO'L', the second auxiliary angle ∠EO'V 1 , and the corresponding horizontal adjustment angle.
  • This relationship table is related to the field of view, pixels, and installation position of the selected rear-view camera.
• when establishing this table, first select the rear-view camera and fix its installation position, and select the style and installation position of the rearview mirror; then place other vehicles used for the test at different positions, record ∠CO'L' and ∠EO'V 1 , adjust the rearview mirror so that the rear vehicle appears in the center of the rearview mirror, and record the horizontal adjustment angle at that time.
• as an example, the rear-view camera is installed above the rear license plate, its resolution is 1280*960, and its field of view is 120° in the horizontal direction and 90° in the vertical direction.
  • Table 5 is a table of the correspondence between the first auxiliary angle and the second auxiliary angle and the horizontal adjustment angle, that is, the above-mentioned fifth relation table.
  • the above-mentioned target rearview mirror may be a left rearview mirror or a right rearview mirror.
  • the horizontal angles of the left rearview mirror and the right rearview mirror of the vehicle can be adjusted according to the above-mentioned method.
• the second auxiliary angle is related to the position of the second reference point on the target rearview mirror; therefore, the second auxiliary angle calculated according to the horizontal field of view angle, the spatial position of the human eye, and the target rearview mirror position needs to be consistent with the second auxiliary angle in the fifth relationship table, that is, both are angles obtained based on the same second reference point.
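• As an illustration of the geometry behind Table 5 (not the patent's lookup itself), the horizontal adjustment can also be sketched directly from the law of reflection: the mirror normal that centers the target vehicle in the driver's view must bisect the directions from the mirror center to the eye and to the target; the 2-D coordinates and function names below are assumptions.

```python
import numpy as np

def signed_angle_deg(v_from, v_to):
    """Signed 2-D angle (degrees) rotating v_from onto v_to, CCW positive."""
    ang = np.arctan2(v_to[1], v_to[0]) - np.arctan2(v_from[1], v_from[0])
    return float(np.degrees((ang + np.pi) % (2 * np.pi) - np.pi))

def horizontal_adjustment_by_reflection(eye_E, mirror_center, target_C, current_normal):
    """Angle to rotate the mirror so that the driver at E sees target C
    reflected at the mirror center: by the law of reflection the required
    normal bisects the directions mirror->eye and mirror->target."""
    to_eye = np.asarray(eye_E, dtype=float) - np.asarray(mirror_center, dtype=float)
    to_target = np.asarray(target_C, dtype=float) - np.asarray(mirror_center, dtype=float)
    to_eye /= np.linalg.norm(to_eye)
    to_target /= np.linalg.norm(to_target)
    required_normal = to_eye + to_target               # angle bisector
    required_normal /= np.linalg.norm(required_normal)
    return signed_angle_deg(np.asarray(current_normal, dtype=float), required_normal)
```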
  • the horizontal angle of the rearview mirror is adjusted based on the driver's field of view and the scene outside the vehicle by adopting this embodiment, so that the driver can observe the car of interest through the rearview mirror at any time.
  • this embodiment adopts an adaptive adjustment method and does not require manual adjustment, which prevents the driver from being distracted from affecting driving safety.
  • the adaptive adjustment method adopted in this implementation ensures that the driver has no blind spots in the process of overtaking by the following vehicle, thereby ensuring driving safety.
  • the solution of this embodiment can use cameras of different modalities, such as a CMS camera and a DMS camera. And this solution does not rely on the front camera, which embodies the advantages of low hardware requirements.
  • FIG. 7 is a schematic flowchart of another method for adaptively adjusting a rearview mirror according to an embodiment of the present invention. As shown in Figure 7, the method includes:
  • S701 Obtain the spatial position of the driver's eyes and the spatial position of the target rearview mirror of the vehicle, and obtain the vertical field of view angle according to the spatial position of the human eyes and the spatial position of the target rearview mirror.
  • the image of the following vehicle may include vehicles traveling in multiple lanes behind the vehicle.
  • the rear-view camera may be placed at the center of the rear of the vehicle, or at a location where images of the following vehicle can be obtained, such as the upper left corner, upper right corner, lower left corner, lower right corner of the vehicle, or other positions.
  • S702 Calculate the auxiliary adjustment angle of the self-vehicle according to the spatial position of the driver's eyes and the vertical field of view angle.
• the auxiliary adjustment angle of the own vehicle is the angle formed by the vertical field of view centerline and the fifth straight line;
• the fifth straight line is the straight line passing through the spatial position of the driver's eyes and the third reference point, and the third reference point is an arbitrary position point on the target rearview mirror;
• the vertical field of view centerline is the angular bisector of the vertical field of view angle.
  • the third reference point is the center position point of the target rearview mirror.
  • FIG. 8 shows a schematic diagram of the vertical field of view of the vehicle's rearview mirror.
  • point E is the spatial position of the driver's eyes
  • point O' is the center position of the target rearview mirror, that is, the third reference point.
• point M is the point symmetric to point E with the target rearview mirror as the axis of symmetry.
• the angle corresponding to the vertical field of view H is the vertical field of view angle ∠UMD.
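• A small 3-D sketch of this vertical field of view computation, assuming the mirror plane is given by its center and unit normal and that U and D are taken as the mirror's upper and lower boundary points; all names below are illustrative.

```python
import numpy as np

def reflect_point_across_plane(p, plane_point, plane_normal):
    """Reflect 3-D point p across the mirror plane (point M is eye point E
    reflected across the target rearview mirror)."""
    p = np.asarray(p, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    return p - 2.0 * np.dot(p - np.asarray(plane_point, dtype=float), n) * n

def vertical_field_of_view_deg(eye_E, mirror_center, mirror_normal, upper_U, lower_D):
    """Angle UMD in degrees, measured at M between the mirror's upper and
    lower boundary points U and D."""
    m = reflect_point_across_plane(eye_E, mirror_center, mirror_normal)
    v1 = np.asarray(upper_U, dtype=float) - m
    v2 = np.asarray(lower_D, dtype=float) - m
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
```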
  • S703 Obtain an image of the following vehicle collected by the rear view camera of the vehicle, and obtain the sky-to-ground ratio R according to the image of the following vehicle.
  • obtaining the sky-to-ground ratio R according to the image of the following vehicle includes:
• the image of the following vehicle is divided longitudinally into multiple image bands; the target image band is obtained from the multiple image bands, where the target image band is an image band in which the sky and the ground transition continuously; the number of pixels occupied by the sky and the number of pixels occupied by the ground in the target image band are counted; and the sky-to-ground ratio R is calculated from the number of pixels occupied by the sky and the number of pixels occupied by the ground, where R is the ratio of the number of pixels occupied by the sky to the number of pixels occupied by the ground.
  • the curve in the figure is the boundary
  • the upper half of the image is the sky
  • the lower half is the ground
  • the gray frame in the middle is the following car.
  • the image of the following vehicle is divided into 9 image bands longitudinally, which are image bands 1, 2, 3, 4, 5, 6, 7, 8, and 9 respectively.
  • the transition between the sky and the ground in image bands 4, 5, and 6 is discontinuous due to the vehicle behind, so the target image band only includes image bands 1, 2, 3, 7, 8, and 9.
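• A minimal sketch of the sky-to-ground ratio computation on a grayscale rear-view image; the band count follows the nine-band example above, while the brightness threshold used to label sky pixels is an assumption.

```python
import numpy as np

def column_is_continuous(col_is_sky):
    """True when a column is one run of sky pixels followed by one run of
    ground pixels, i.e. no following vehicle interrupts the transition."""
    changes = int(np.count_nonzero(np.diff(col_is_sky.astype(np.int8))))
    if changes == 0:
        return True                                    # all sky or all ground
    return changes == 1 and bool(col_is_sky[0]) and not bool(col_is_sky[-1])

def sky_to_ground_ratio(gray, num_bands=9, sky_thresh=140):
    """Split the image longitudinally into bands, keep only bands whose
    sky-to-ground transition is continuous, and return
    R = (sky pixels) / (ground pixels) over those target bands."""
    _, w = gray.shape
    band_w = w // num_bands
    sky_px = ground_px = 0
    for b in range(num_bands):
        band = gray[:, b * band_w:(b + 1) * band_w] >= sky_thresh
        if all(column_is_continuous(band[:, c]) for c in range(band.shape[1])):
            sky_px += int(band.sum())
            ground_px += int((~band).sum())
    return sky_px / ground_px if ground_px else float("inf")
```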
  • S704 Obtain the vertical adjustment angle of the target rearview mirror according to the auxiliary adjustment angle of the self-vehicle and the sky-to-ground ratio R, and adjust the target rearview mirror to the target vertical angle according to the vertical adjustment angle.
• obtaining the vertical adjustment angle of the target rearview mirror according to the auxiliary adjustment angle and the sky-to-ground ratio R includes: querying the auxiliary relationship table, where the auxiliary relationship table is the correspondence relationship table between the auxiliary adjustment angle, the sky-to-ground ratio R, and the vertical adjustment angle.
• the auxiliary adjustment angle is related to the position of the third reference point on the target rearview mirror; therefore, the auxiliary adjustment angle of the own vehicle calculated according to the spatial position of the driver's eyes and the vertical field of view needs to be consistent with the auxiliary adjustment angle in the auxiliary relationship table, that is, both are angles obtained based on the same third reference point.
• this auxiliary relationship table is related to the field of view, resolution, and installation position of the selected rear-view camera. When establishing this relationship table, first select the rear-view camera and fix its installation position, and select the style and installation position of the rearview mirror. In Table 6 below, as an example, the rear-view camera is installed above the rear license plate, its resolution is 1280*960, and its field of view is 120° in the horizontal direction and 90° in the vertical direction.
• Table 6 is the correspondence relationship table between the auxiliary adjustment angle, the sky-to-ground ratio R, and the vertical adjustment angle, that is, the above-mentioned auxiliary relationship table.
  • the vertical angle of the target rearview mirror is adjusted to the target vertical angle according to the vertical adjustment angle.
• by adopting this embodiment, the vertical angle of the rearview mirror is adjusted based on the driver's field of view and the scene outside the vehicle, or adjusted according to the slope of the road currently being driven on, so that the driver can observe the condition of the vehicles behind through the rearview mirror at any time.
  • this embodiment adopts an adaptive adjustment method and does not require manual adjustment, which prevents the driver from being distracted by manually adjusting the angle of the rear-view mirror and affecting driving safety.
  • the solution of this embodiment can use cameras of different modalities, such as a CMS camera and a DMS camera. And this solution does not rely on the front camera, which embodies the advantages of low hardware requirements.
• FIG. 10 is a schematic flowchart of another rearview mirror adaptive adjustment method according to an embodiment of the present invention. As shown in FIG. 10, the method includes:
  • S1001 Obtain an image of the following vehicle collected by the rear view camera of the vehicle, convert the image of the following vehicle into a grayscale image, and calculate an average value of pixels in the grayscale image.
  • the image of the following vehicle may include vehicles traveling in multiple lanes behind the vehicle.
  • the rear-view camera may be placed at the center of the rear of the vehicle, or at a location where images of the following vehicle can be obtained, such as the upper left corner, upper right corner, lower left corner, lower right corner of the vehicle, or other positions.
• whether the sky-to-ground ratio R can be obtained is determined by converting the image of the following vehicle into a grayscale image, calculating the average value of the pixels in the grayscale image, and then determining whether the average value is not less than a preset value.
• if the average value of the grayscale image pixels is not less than the preset value, steps S1003 and S1004 are executed; if the average value of the grayscale image pixels is less than the preset value, steps S1005 and S1006 are executed.
• if the average value of the grayscale pixels corresponding to the image of the following car is less than the preset value, it means that the image of the following car is too dark, for example an image obtained at night or on a foggy day, and the sky-to-ground ratio R cannot be calculated from such an image.
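• A short sketch of this branching check; the brightness threshold is a placeholder, since the text only refers to "a preset value".

```python
import cv2

def choose_vertical_adjustment_branch(rear_bgr, preset_value=60):
    """Convert the rear-view image to grayscale, compare the mean pixel value
    with the preset value, and pick the branch: a bright enough image allows
    the sky-to-ground ratio R to be used (S1003/S1004), otherwise the
    gyroscope-based slope method is used (S1005/S1006)."""
    gray = cv2.cvtColor(rear_bgr, cv2.COLOR_BGR2GRAY)
    return "sky_ground_ratio" if gray.mean() >= preset_value else "road_slope"
```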
• the auxiliary adjustment angle of the own vehicle is the angle formed by the vertical field of view centerline and the fifth straight line;
• the fifth straight line is the straight line passing through the spatial position of the driver's eyes and the third reference point, and the third reference point is a point on the target rearview mirror.
  • the third reference point is the center position point of the target rearview mirror.
• for the process of obtaining the spatial position of the driver's eyes and the position of the target rearview mirror of the own vehicle, reference can be made to the relevant description of step S201 above, which will not be repeated here.
  • obtaining the sky-to-ground ratio R according to the image of the following vehicle includes:
• the image of the following vehicle is divided longitudinally into multiple image bands; the target image band is obtained from the multiple image bands, where the target image band is an image band in which the sky and the ground transition continuously; the number of pixels occupied by the sky and the number of pixels occupied by the ground in the target image band are counted; and the sky-to-ground ratio R is calculated from the number of pixels occupied by the sky and the number of pixels occupied by the ground, where R is the ratio of the number of pixels occupied by the sky to the number of pixels occupied by the ground.
• obtaining the vertical adjustment angle of the target rearview mirror according to the auxiliary adjustment angle and the sky-to-ground ratio R includes:
• querying the auxiliary relationship table, where the auxiliary relationship table is the correspondence relationship table between the auxiliary adjustment angle, the sky-to-ground ratio R, and the vertical adjustment angle.
  • steps S1001-S1004 can refer to the related description of steps S701-S704, which will not be described here.
  • the gradient of the road on which the vehicle is traveling can be obtained through the gyroscope of the vehicle, and then the driving state of the vehicle can be obtained.
• the slope of the road on which the own vehicle is traveling within the preset time period is obtained, and the driving state of the own vehicle can be determined according to the slope of the road on which the own vehicle is traveling within the preset time period.
• the driving state of the own vehicle includes the flat driving state, the long downhill state, the entering-downhill state, the exiting-downhill state, the long uphill state, the entering-uphill state, and the exiting-uphill state.
  • the box in the figure represents the own vehicle.
  • the driving state of the self-vehicle is a flat driving state, as shown in figure a in Figure 11;
  • the driving state of the vehicle is determined to be a long uphill state, as shown in the e diagram in Figure 11;
  • a represents the absolute value of the slope of the road where the vehicle is currently located
  • b is the magnitude of change, which is a small value. For example, 0.01, 0.02, 0.05, 0.1, 0.5, 0.6, 0.8, 1 or other smaller values.
• S1006 Adjust the vertical angle of the target rearview mirror to the target vertical angle according to the driving state of the vehicle and the gradient θ, where the gradient θ is the absolute value of the gradient of the road where the vehicle is currently located.
• when the driving state of the own vehicle is the flat driving state, the target vertical angle is the corresponding preset angle; when the driving state of the own vehicle is the long downhill driving state, the target vertical angle is the corresponding preset angle; when the driving state of the own vehicle is the entering-downhill driving state, the target vertical angle is the preset angle minus θ/2; when the driving state of the own vehicle is the exiting-downhill driving state, the target vertical angle is the preset angle plus θ/2;
• when the driving state of the own vehicle is the long uphill driving state, the target vertical angle is the corresponding preset angle; when the driving state of the own vehicle is the entering-uphill driving state, the target vertical angle is the preset angle minus θ/2; when the driving state of the own vehicle is the exiting-uphill driving state, the target vertical angle is the preset angle plus θ/2.
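• The following sketch illustrates one possible way to turn gyroscope slope samples into a driving state and a target vertical angle under the rules above; the classification thresholds and the single preset angle are assumptions, not values from the patent.

```python
def classify_driving_state(slopes, change_eps=0.5, flat_eps=1.0):
    """Rough driving-state classifier from signed slope samples (degrees,
    downhill negative) collected over the preset time window."""
    start, end = slopes[0], slopes[-1]
    if abs(end) <= flat_eps:                           # currently (near) flat
        if abs(start) <= flat_eps:
            return "flat"
        return "exit_downhill" if start < 0 else "exit_uphill"
    if end < -flat_eps:                                # currently downhill
        if abs(end - start) <= change_eps:
            return "long_downhill"
        return "enter_downhill" if end < start else "exit_downhill"
    if abs(end - start) <= change_eps:                 # currently uphill
        return "long_uphill"
    return "enter_uphill" if end > start else "exit_uphill"

def target_vertical_angle(state, preset_angle, grade_theta):
    """Map the driving state to the target vertical angle following the rules
    above; grade_theta is the absolute slope of the current road."""
    if state in ("flat", "long_downhill", "long_uphill"):
        return preset_angle
    if state in ("enter_downhill", "enter_uphill"):
        return preset_angle - grade_theta / 2.0
    return preset_angle + grade_theta / 2.0            # exit_downhill / exit_uphill
```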
• after the vertical angle of the target rearview mirror is adjusted using the method of the embodiment shown in FIG. 7, it is determined whether the adjusted vertical angle of the target rearview mirror meets the user's requirement. If it does not meet the user's requirement, the action of adjusting the vertical angle of the target rearview mirror based on the driving state of the vehicle can be executed according to the user's instruction, so as to better meet the needs of the user.
• alternatively, the method of the embodiment shown in FIG. 7 and the driving state of the vehicle are used to determine the vertical adjustment angle of the target rearview mirror respectively, and the two obtained vertical adjustment angles are then processed, for example by averaging or weighted summation, to obtain a processed vertical adjustment angle; finally, the vertical angle of the target rearview mirror is adjusted to the target vertical angle based on the processed vertical adjustment angle.
  • this method can effectively avoid the inaccuracy of the vertical adjustment angle obtained by using one of the above two methods.
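• A trivial sketch of the fusion step mentioned above; the weighting is an assumed tuning parameter.

```python
def fuse_vertical_adjustments(angle_from_sky_ratio, angle_from_slope, w_sky=0.5):
    """Blend the vertical adjustment angle from the sky-to-ground-ratio method
    (FIG. 7) with the one from the driving-state / slope method."""
    return w_sky * angle_from_sky_ratio + (1.0 - w_sky) * angle_from_slope
```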
• by adopting this embodiment, the vertical angle of the rearview mirror is adjusted based on the driver's field of view and the scene outside the vehicle, or adjusted according to the slope of the road currently being driven on, so that the driver can observe the condition of the vehicles behind through the rearview mirror at any time.
  • this embodiment adopts an adaptive adjustment method and does not require manual adjustment, which prevents the driver from being distracted by manually adjusting the angle of the rear-view mirror and affecting driving safety.
  • the solution of this embodiment can use cameras of different modalities, such as a CMS camera and a DMS camera. And this solution does not rely on the front camera, which embodies the advantages of low hardware requirements.
  • the rearview mirror adaptive adjustment device 1200 includes:
  • the obtaining module 1201 is used to obtain the spatial position of the driver's eyes and the spatial position of the target rearview mirror, and obtain the horizontal field of view angle of the driver in the target rearview mirror according to the spatial position of the human eye and the spatial position of the target rearview mirror; Obtain the image of the following vehicle, and obtain the first auxiliary angle of the target following vehicle based on the image of the following vehicle.
• the first auxiliary angle is the angle formed by the first straight line and the second straight line;
• the first straight line is the straight line passing through the target following vehicle and the first reference point, the second straight line is a straight line that passes through the first reference point and is perpendicular to the target rearview mirror, the first reference point is a point on the target rearview mirror, the image of the following vehicle is acquired by the rear-view camera, and the image of the following vehicle includes the target following vehicle;
  • the calculation module 1202 is configured to calculate a second auxiliary angle according to the horizontal field of view angle, the spatial position of the human eye, and the position of the target rearview mirror.
• the second auxiliary angle is the angle formed by the third straight line and the centerline of the horizontal field of view;
• the third straight line is a straight line passing through the driver's eye position and the second reference point, the second reference point is the intersection of the horizontal field of view centerline and the mirror surface of the target rearview mirror, and the horizontal field of view centerline is the angular bisector of the horizontal field of view angle;
  • the obtaining module 1201 is further configured to obtain the horizontal adjustment angle of the target rearview mirror according to the first auxiliary angle and the second auxiliary angle;
  • the adjustment module 1203 is used to adjust the horizontal angle of the target rearview mirror according to the horizontal adjustment angle.
  • the following vehicle image includes M vehicles, and M is an integer greater than or equal to 1.
  • the acquiring module 1201 is specifically configured to:
• the offset of the following car A is the distance, in the following car image, between the center position of the front face of the following car A and the longitudinal center line of the following car image;
• the car body frame of the following car A is the number of pixels occupied by the contour of the following car A in the image of the following car; the distance d is obtained according to the offset and the car body frame of the following car A, where the distance d is the distance between the front face of the following car A and the rear of the own vehicle; the third auxiliary angle of the following car A is obtained according to the distance d and the offset of the following car A;
• the third auxiliary angle is the angle formed by the fourth straight line and the longitudinal centerline of the own vehicle, where the fourth straight line is the straight line passing through the position of the rear-view camera and the center position of the front face of the following car A;
  • the following vehicle A is the target following vehicle, and the first auxiliary angle of the target following vehicle is obtained according to the third auxiliary angle of the target following vehicle and the distance d;
• the important probability of the i-th following car is obtained according to the third auxiliary angle of the i-th following car and the distance d; the following car with the largest important probability is determined as the target following car, and the first auxiliary angle of the target following car is obtained according to the third auxiliary angle and the car body frame of the target following car.
  • the obtaining module 1201 is specifically configured to:
• the first relationship table is the correspondence relationship table between the offset, the car body frame, and the distance;
  • the obtaining module 1201 is specifically used to:
  • the second relationship table is the correspondence relationship table between the distance and the offset amount and the third auxiliary angle.
  • the obtaining module 1201 is specifically configured to:
• the third relationship table is queried according to the third auxiliary angle and the distance d of the target following vehicle to obtain the first auxiliary angle corresponding to the third auxiliary angle of the target following vehicle and the distance d; the first auxiliary angle corresponding to the third auxiliary angle of the target following vehicle and the distance d is the first auxiliary angle of the target following vehicle;
• the third relationship table is the correspondence relationship table between the third auxiliary angle, the vehicle distance d, and the first auxiliary angle.
  • the acquiring module 1201 is specifically configured to:
  • the obtaining module 1201 is specifically used for:
• the fourth relationship table is queried according to the third auxiliary angle and the car body frame of the i-th following car to obtain the important probability corresponding to that third auxiliary angle and car body frame; the important probability corresponding to the third auxiliary angle and the car body frame of the i-th following car is the important probability of the i-th following car;
• the fourth relationship table is the correspondence relationship table between the third auxiliary angle, the car body frame, and the important probability.
  • the obtaining module 1201 is specifically configured to:
  • the fifth relation table is the correspondence relation table between the first auxiliary angle and the second auxiliary angle and the horizontal adjustment angle.
  • the foregoing modules are used to perform the relevant steps of the foregoing method.
  • the obtaining module 1201 is used to execute the related content of steps S201 and S202
  • the calculation module 1202 is used to execute the related content of step S203
  • the adjustment module 1203 is used to execute the related content of step S204.
  • the rearview mirror adaptive adjustment device 1200 is presented in the form of a module.
  • the "module” here can refer to application-specific integrated circuits (ASICs), processors and memories that execute one or more software or firmware programs, integrated logic circuits, and/or other devices that can provide the above functions .
  • the above acquisition module 1201, calculation module 1202, and adjustment module 1203 can be implemented by the processor 1501 of the rearview mirror adaptive adjustment device shown in FIG. 15.
  • the rearview mirror adaptive adjustment device 1300 includes:
  • the acquiring module 1301 is used to acquire the spatial position of the driver's eyes and the spatial position of the target rearview mirror of the vehicle, and obtain the vertical field of view angle according to the spatial position of the human eye and the spatial position of the target rearview mirror;
• the calculation module 1302 is used to calculate the auxiliary adjustment angle of the own vehicle according to the spatial position of the driver's eyes and the vertical field of view angle, where the auxiliary adjustment angle of the own vehicle is the angle formed by the centerline of the vertical field of view and the fifth straight line;
• the fifth straight line is the straight line passing through the driver's eyes and the third reference point, the third reference point is a point on the target rearview mirror, and the vertical field of view centerline is the angular bisector of the vertical field of view angle;
• the acquisition module 1301 is also used to acquire the image of the following vehicle collected by the rear-view camera of the vehicle, obtain the sky-to-ground ratio R according to the image of the vehicle behind, and obtain the vertical adjustment angle of the target rearview mirror according to the auxiliary adjustment angle of the vehicle and the sky-to-ground ratio R;
  • the adjustment module 1303 is used to adjust the target rearview mirror to the target vertical angle according to the vertical adjustment angle.
  • the obtaining module 1301 is specifically configured to:
• the following vehicle image is divided longitudinally into multiple image bands; the target image band is obtained from the multiple image bands, where the target image band is an image band in which the sky and the ground transition continuously; the number of pixels occupied by the sky and the number of pixels occupied by the ground in the target image band are counted; and the sky-to-ground ratio R is calculated from the number of pixels occupied by the sky and the number of pixels occupied by the ground, where R is the ratio of the number of pixels occupied by the sky to the number of pixels occupied by the ground.
  • the obtaining module 1301 is specifically configured to:
  • the auxiliary relationship table is the correspondence relationship table between the auxiliary adjustment angle and the sky-to-ground ratio and the vertical adjustment angle.
  • the above modules are used to execute the relevant steps of the above method.
  • the obtaining module 1301 is used to execute the related content of steps S701 and S703
  • the calculation module 1302 is used to execute the related content of step S702
  • the adjustment module 1303 is used to execute the related content of step S704.
  • the rearview mirror adaptive adjustment device 1300 is presented in the form of a module.
  • the "module” here can refer to application-specific integrated circuits (ASICs), processors and memories that execute one or more software or firmware programs, integrated logic circuits, and/or other devices that can provide the above functions .
  • the above acquisition module 1301, calculation module 1302, and adjustment module 1303 can be implemented by the processor 1601 of the rearview mirror adaptive adjustment device shown in FIG. 16.
  • FIG. 14 is a schematic structural diagram of a rearview mirror adaptive adjustment device provided by an embodiment of the present application.
  • the self-adjusting device 1400 for the rear view mirror includes:
  • the obtaining module 1401 is used to obtain images of the following vehicle collected by the rear view camera of the vehicle;
  • the calculation module 1402 is used to convert the image of the following car into a grayscale image, and calculate the average value of the pixels in the grayscale image;
  • the obtaining module 1401 is also used for obtaining the spatial position of the driver’s eyes and the spatial position of the target rearview mirror of the own vehicle if the average value is not less than the preset value, and according to the spatial position of the eyes and the target rearview mirror Obtain the vertical field of view angle from the spatial position;
• the calculation module 1402 is also used to calculate the auxiliary adjustment angle of the own vehicle according to the spatial position of the driver's eyes and the vertical field of view angle, where the auxiliary adjustment angle of the own vehicle is the angle formed by the vertical field of view centerline and the fifth straight line;
  • the obtaining module 1401 is also used to obtain the sky-to-ground ratio R according to the image of the vehicle behind; to obtain the vertical adjustment angle of the target rearview mirror according to the auxiliary adjustment angle of the self-vehicle and the sky-to-ground ratio R;
  • the adjustment module 1403 is configured to adjust the target rearview mirror to the target vertical angle according to the vertical adjustment angle.
• the obtaining module 1401 is specifically configured to:
• divide the image of the following vehicle longitudinally into multiple image bands; obtain the target image band from the multiple image bands, where the target image band is an image band in which the sky and the ground transition continuously; count the number of pixels occupied by the sky and the number of pixels occupied by the ground in the target image band; and calculate the sky-to-ground ratio R from the number of pixels occupied by the sky and the number of pixels occupied by the ground, where R is the ratio of the number of pixels occupied by the sky to the number of pixels occupied by the ground.
  • the obtaining module 1401 is specifically configured to:
  • the auxiliary relation table is the correspondence relation table between the auxiliary adjustment angle and the sky-to-ground ratio R, and the vertical adjustment angle.
  • the obtaining module 1401 is further configured to:
• the adjustment module is also used to adjust the vertical angle of the target rearview mirror to the target vertical angle according to the driving state of the vehicle and the gradient θ, where the gradient θ is the absolute value of the slope of the road where the vehicle is currently located.
  • the adjustment module 1403 is specifically configured to:
• when the driving state of the own vehicle is the flat driving state, the target vertical angle is adjusted to the corresponding preset angle; when the driving state of the own vehicle is the long downhill driving state, the target vertical angle is adjusted to the corresponding preset angle;
• when the driving state of the own vehicle is the entering-downhill driving state, the target vertical angle is adjusted to the preset angle minus θ/2; when the driving state of the own vehicle is the exiting-downhill driving state, the target vertical angle is adjusted to the preset angle plus θ/2;
• when the driving state of the own vehicle is the long uphill driving state, the target vertical angle is adjusted to the corresponding preset angle; when the driving state of the own vehicle is the entering-uphill driving state, the target vertical angle is adjusted to the preset angle minus θ/2;
• when the driving state of the own vehicle is the exiting-uphill driving state, the target vertical angle is adjusted to the preset angle plus θ/2.
  • the foregoing modules are used to execute the relevant steps of the foregoing method.
  • the obtaining module 1401 is used to execute the related content of steps S1001-S1005
• the calculation module 1402 is used to execute the related content of step S1001
  • the adjustment module 1403 is used to execute the related content of step S1006.
  • the rearview mirror adaptive adjustment device 1400 is presented in the form of a module.
  • the "module” here can refer to application-specific integrated circuits (ASICs), processors and memories that execute one or more software or firmware programs, integrated logic circuits, and/or other devices that can provide the above functions .
  • the above acquisition module 1401, calculation module 1402, and adjustment module 1403 can be implemented by the processor 1601 of the rearview mirror adaptive adjustment device shown in FIG. 16.
  • FIG. 12 and FIG. 13 or FIG. 14 may be the same device or different devices.
  • Figures 13 and 14 can be the same device or different devices.
  • the adjusting device 1500 can be implemented with the structure in FIG. 15.
  • the adjusting device 1500 includes at least one processor 1501, at least one memory 1502 and at least one communication interface 1503.
  • the processor 1501, the memory 1502, and the communication interface 1503 are connected through the communication bus and complete mutual communication.
• the processor 1501 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the above solutions.
  • the communication interface 1503 is used to communicate with other devices or communication networks, such as Ethernet, radio access network (RAN), wireless local area network (Wireless Local Area Networks, WLAN), etc.
• the memory 1502 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory can exist independently and is connected to the processor through a bus.
  • the memory can also be integrated with the processor.
  • the memory 1502 is used to store application program codes for executing the above solutions, and the processor 1501 controls the execution.
  • the processor 1501 is configured to execute application program codes stored in the memory 1502.
  • the code stored in the memory 1502 can execute the method for self-adjusting rearview mirror provided in FIG. 2, for example:
• obtain the spatial position of the driver's eyes and the spatial position of the target rearview mirror, and obtain the driver's horizontal field of view in the target rearview mirror according to the spatial position of the human eye and the spatial position of the target rearview mirror; obtain the image of the following vehicle, and acquire the first auxiliary angle of the target following vehicle according to the image of the following vehicle, where the first auxiliary angle is the angle formed by the first straight line and the second straight line, the first straight line is the straight line passing through the target following vehicle and the first reference point, the second straight line is a straight line that passes through the first reference point and is perpendicular to the target rearview mirror, the first reference point is a point on the target rearview mirror, the image of the following vehicle is obtained by the rear-view camera, and the image of the following vehicle includes the target following vehicle; calculate the second auxiliary angle according to the horizontal field of view angle, the spatial position of the human eye, and the target rearview mirror position, where the second auxiliary angle is the angle formed by the third straight line and the horizontal field of view centerline, the third straight line is the straight line passing through the driver's eye position and the second reference point, and the second reference point is the intersection of the centerline of the horizontal field of view and the mirror surface of the target rearview mirror; obtain the horizontal adjustment angle of the target rearview mirror according to the first auxiliary angle and the second auxiliary angle; and adjust the horizontal angle of the target rearview mirror according to the horizontal adjustment angle.
  • when the processor adjusts the horizontal angle of the target rearview mirror according to the horizontal adjustment angle, the processor may directly control the target rearview mirror according to the horizontal adjustment angle, or the processor may send a control instruction to the control device of the target rearview mirror to instruct that control device to adjust the horizontal angle of the target rearview mirror according to the horizontal adjustment angle.
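The horizontal-adjustment flow above reduces to a chain of relation-table lookups (offset and body box → distance d, d and offset → third auxiliary angle, third auxiliary angle and d → first auxiliary angle, and finally first and second auxiliary angles → horizontal adjustment angle). The following Python sketch illustrates only the last lookup with a nearest-neighbour table; the grid values and input angles are placeholders for illustration and are not calibration data from this application.

```python
import bisect

class RelationTable2D:
    """Nearest-neighbour lookup over a calibrated 2D relation table."""

    def __init__(self, row_keys, col_keys, values):
        self.row_keys = row_keys  # e.g. first auxiliary angle samples (degrees)
        self.col_keys = col_keys  # e.g. second auxiliary angle samples (degrees)
        self.values = values      # values[i][j] -> horizontal adjustment angle

    @staticmethod
    def _nearest(keys, x):
        i = bisect.bisect_left(keys, x)
        if i == 0:
            return 0
        if i == len(keys):
            return len(keys) - 1
        return i if keys[i] - x < x - keys[i - 1] else i - 1

    def lookup(self, row_key, col_key):
        i = self._nearest(self.row_keys, row_key)
        j = self._nearest(self.col_keys, col_key)
        return self.values[i][j]

# Placeholder "fifth relation table": (first auxiliary angle, second auxiliary angle)
# -> horizontal adjustment angle alpha. Values are illustrative only.
fifth_table = RelationTable2D(
    row_keys=[0, 10, 20, 30],
    col_keys=[0, 10, 20, 30],
    values=[[0, -2, -4, -6],
            [2,  0, -2, -4],
            [4,  2,  0, -2],
            [6,  4,  2,  0]],
)

first_aux_angle = 12.0   # degrees, from the rear-view image (hypothetical value)
second_aux_angle = 23.0  # degrees, from the eye/mirror geometry (hypothetical value)
alpha = fifth_table.lookup(first_aux_angle, second_aux_angle)
print(f"horizontal adjustment angle: {alpha} degrees")
```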
  • the adjusting device 1600 can be implemented with the structure in FIG. 16.
  • the adjusting device 1600 includes at least one processor 1601, at least one memory 1602 and at least one communication interface 1603.
  • the processor 1601, the memory 1602, and the communication interface 1603 are connected through a communication bus and communicate with each other.
  • the processor 1601 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the above solutions.
  • the communication interface 1603 is used to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
  • the memory 1602 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto.
  • the memory may exist independently and be connected to the processor through a bus, or it may be integrated with the processor.
  • the memory 1602 is used to store application program codes for executing the above solutions, and the processor 1601 controls the execution.
  • the processor 1601 is configured to execute application program codes stored in the memory 1602.
  • the code stored in the memory 1602 can execute the other rearview mirror adaptive adjustment methods provided in FIG. 7 or FIG. 10, for example:
  • obtain the spatial position of the driver's eyes and the spatial position of the target rearview mirror of the own vehicle, and obtain the vertical field of view angle according to the eye position and the mirror position; calculate the auxiliary adjustment angle of the own vehicle according to the driver's eye position and the vertical field of view angle, where the auxiliary adjustment angle is the angle formed by the vertical field of view center line and a fifth straight line, the fifth straight line passes through the driver's eye position and a third reference point, the third reference point is a point on the target rearview mirror, and the vertical field of view center line is the angular bisector of the vertical field of view angle; obtain the image of the following vehicle collected by the rear-view camera of the own vehicle, and obtain the sky-ground ratio R according to that image; obtain the vertical adjustment angle of the target rearview mirror according to the auxiliary adjustment angle and the sky-ground ratio R, and adjust the target rearview mirror to the target vertical angle according to the vertical adjustment angle.
  • when the processor adjusts the target rearview mirror to the target vertical angle according to the vertical adjustment angle, the processor may directly control the target rearview mirror according to the vertical adjustment angle, or the processor may send a control instruction to the control device of the target rearview mirror to instruct that control device to adjust the vertical angle of the target rearview mirror to the target vertical angle.
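As a geometric complement, the vertical field of view angle used above can be obtained by mirroring the eye position about the mirror plane and measuring the angle subtended at that virtual point by the upper and lower mirror edges (the ∠UMD construction described later in this application). The sketch below assumes a flat mirror and uses placeholder coordinates; none of the numeric values come from this application.

```python
import numpy as np

def reflect_point(p, mirror_point, mirror_normal):
    """Mirror-image of point p about the plane through mirror_point with the given normal."""
    n = mirror_normal / np.linalg.norm(mirror_normal)
    return p - 2.0 * np.dot(p - mirror_point, n) * n

def angle_deg(u, v):
    """Angle between two vectors, in degrees."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Placeholder geometry in vehicle coordinates (metres); illustrative values only.
eye = np.array([0.40, -0.30, 1.20])            # driver's eye position E
mirror_center = np.array([0.95, 0.60, 1.00])   # reference point O' on the mirror
mirror_normal = np.array([-0.80, -0.55, 0.0])  # current mirror surface normal
mirror_top = mirror_center + np.array([0.0, 0.0, 0.05])     # upper mirror edge
mirror_bottom = mirror_center - np.array([0.0, 0.0, 0.05])  # lower mirror edge

# Virtual mirror-image point of the eye; the vertical field of view angle is the
# angle subtended at this point by the upper and lower mirror edges.
virtual_eye = reflect_point(eye, mirror_center, mirror_normal)
vertical_fov = angle_deg(mirror_top - virtual_eye, mirror_bottom - virtual_eye)
print(f"vertical field of view angle: {vertical_fov:.1f} degrees")
```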
  • An embodiment of the present invention also provides a computer storage medium, where the computer storage medium may store a program, and when the program is executed it performs part or all of the steps of any rearview mirror adaptive adjustment method recorded in the above method embodiments.
  • the disclosed methods may be implemented as computer program instructions encoded on a computer-readable storage medium in a machine-readable format or encoded on other non-transitory media or articles.
  • Figure 17 schematically illustrates a conceptual partial view of an example computer program product arranged in accordance with at least some of the embodiments presented herein, the example computer program product comprising a computer program for executing a computer process on a computing device.
  • the example computer program product 1700 is provided using a signal bearing medium 1701.
  • the signal bearing medium 1701 may include one or more program instructions 1702, which, when executed by one or more processors, can provide the functions or part of the functions described above with respect to FIG. 2, FIG. 7 or FIG. 10.
  • the program instructions 1702 in FIG. 17 also describe example instructions.
  • the signal-bearing medium 1701 may include a computer-readable medium 1703, such as, but not limited to, a hard disk drive, a compact disc (CD), a digital video disc (DVD), a digital tape, a memory, a read-only memory (ROM), or a random access memory (RAM).
  • the signal bearing medium 1701 may include a computer recordable medium 1704, such as, but not limited to, memory, read/write (R/W) CD, R/W DVD, and so on.
  • the signal-bearing medium 1701 may include a communication medium 1705, such as, but not limited to, digital and/or analog communication media (e.g., fiber optic cables, waveguides, wired communication links, wireless communication links, etc.). Thus, for example, the signal-bearing medium 1701 may be conveyed by the communication medium 1705 in wireless form (for example, a wireless communication medium that complies with the IEEE 802.11 standard or another transmission protocol).
  • the one or more program instructions 1702 may be, for example, computer-executable instructions or logic-implemented instructions.
  • a computing device such as that described with respect to FIG. 2, FIG. 7, or FIG. 10 may be configured to provide various operations, functions, or actions in response to program instructions 1702 conveyed to the computing device through one or more of the computer-readable medium 1703, the computer-recordable medium 1704, and/or the communication medium 1705. It should be understood that the arrangement described here is for illustrative purposes only. Thus, those skilled in the art will understand that other arrangements and other elements (for example, machines, interfaces, functions, sequences, and groups of functions, etc.) can be used instead, and some elements can be omitted altogether depending on the desired result. In addition, many of the described elements are functional entities that can be implemented as discrete or distributed components, or in combination with other components, in any suitable combination and location.
  • the disclosed device may be implemented in other ways.
  • the device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable memory.
  • Based on such an understanding, the technical solution of the present invention essentially, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a memory and includes a number of instructions to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • the aforementioned memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • the program can be stored in a computer-readable memory, and the memory can include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, etc.


Abstract

A rearview mirror adaptive adjustment method, comprising: obtaining the spatial position of the eyes of the driver of the own vehicle and the spatial position of a target rearview mirror, and obtaining the driver's horizontal field of view angle in the target rearview mirror according to the eye position and the mirror position; obtaining an image of the following vehicle, and obtaining a first auxiliary angle of a target following vehicle according to the image; calculating a second auxiliary angle according to the horizontal field of view angle, the eye position and the mirror position; obtaining a horizontal adjustment angle of the target rearview mirror according to the first auxiliary angle and the second auxiliary angle; and adjusting the horizontal angle of the target rearview mirror according to the horizontal adjustment angle. A rearview mirror adaptive adjustment apparatus is also provided. The method and apparatus prevent driving safety from being compromised by the driver being distracted while manually adjusting the rearview mirror.

Description

后视镜自适应调节方法及装置
本申请要求于2019年8月31日递交中国知识产权局、申请号为201910830441.7,发明名称为“后视镜自适应调节方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明涉及智能汽车领域,尤其涉及一种后视镜自适应调节方法及装置。
背景技术
人工智能(Artificial Intelligence,AI)是利用数字计算机或者数字计算机控制的机器模拟、延伸和扩展人的智能,感知环境、获取知识并使用知识获得最佳结果的理论、方法、技术及应用系统。换句话说,人工智能是计算机科学的一个分支,它企图了解智能的实质,并生产出一种新的能以人类智能相似的方式作出反应的智能机器。人工智能也就是研究各种智能机器的设计原理与实现方法,使机器具有感知、推理与决策的功能。人工智能领域的研究包括机器人,自然语言处理,计算机视觉,决策与推理,人机交互,推荐与搜索,AI基础理论等。
自动驾驶是人工智能领域的一种主流应用,自动驾驶技术依靠计算机视觉、雷达、监控装置和全球定位系统等协同合作,让机动车辆可以在不需要人类主动操作下,实现自动驾驶。自动驾驶的车辆使用各种计算系统来帮助将乘客从一个位置运输到另一位置。一些自动驾驶车辆可能要求来自操作者(诸如,领航员、驾驶员、或者乘客)的一些初始输入或者连续输入。自动驾驶车辆准许操作者从手动模操作式切换到自东驾驶模式或者介于两者之间的模式。由于自动驾驶技术无需人类来驾驶机动车辆,所以理论上能够有效避免人类的驾驶失误,减少交通事故的发生,且能够提高公路的运输效率。因此,自动驾驶技术越来越受到重视。
在驾驶过程中,为了确保安全,减小盲区,驾驶员需要基于驾驶员视野、驾驶状态、车外场景等对后视镜进行手动调节,但是在驾驶过程中驾驶员对后视镜进行手动实时调节既分散精力也效率低下,影响驾驶安全。
发明内容
本申请实施例提供一种后视镜自适应调节及装置,采用本申请实施例避免了驾驶过程中驾驶员因对后视镜进行手动调节而影响驾驶安全,提高了驾驶安全性能。
第一方面,本申请实施例提供一种后视镜自适应调节方法,包括:
获取自车驾驶员人眼空间位置和目标后视镜空间位置,并根据人眼空间位置及目标后视镜空间位置获取驾驶员在目标后视镜中的水平视野角度;获取后车图像,并根据后车图像获取目标后车的第一辅助角度,该第一辅助角度是根据后车图像和第一参考点得到的,其中,第一参考点为目标后视镜上的点,后车图像为后视摄像头获取的,且后车图像中包括目标后车;根据水平视野角度、人眼空间位置和目标后视镜位置计算得到第二辅助角度;根据第一辅助角度和第二辅助角度获取目标后视镜的水平调节角度;根据水平调节角度调 节目标后视镜的水平角度。
可选地,第一参考点可以为目标后视镜上的中心点,第二参考点也可以为目标后视镜上的中心点。
在此需要说明的是,水平视野角度为经过虚拟镜像点和目标后视镜的左边界上的点的直线与经过虚拟镜像点和目标后视镜的左边界上的点的直线形成夹角的角度,虚拟镜像点为以目标后视镜为对称轴,人眼空间位置的对称点。
采用本实施例使得后视镜的水平角度基于驾驶员视野及车外场景进行调节,相比于传统的后视镜调节,本实施例采用自适应调节的方式,不需要手动调节,避免了驾驶员因分心手动调节后视镜而影响驾驶安全。
在一个可行的实施例中,第一辅助角度为第一直线与第二直线形成的夹角的角度,第一直线为经过目标后车和第一参考点的直线,第二直线为经过第一参考点且垂直于目标后视镜的直线;
第二辅助角度为第三直线与水平视野中心线形成的夹角的角度,第三直线为经过驾驶员人眼位置与第二参考点的直线,第二参考点为水平视野中心线与目标后视镜的镜面的交点,水平视野中心线为水平视野角度的角平分线。
在一个可行的实施例中,后车图像中包括M辆车,M为大于或者等于1的整数,根据后车图像确定目标后车的第一辅助角度,包括:
根据后车图像获取后车A的车体框和偏移量,该偏移量为在后车图像中后车A的前脸中心位置与后车图像的纵向中心线之间的距离,车体框为后车A的轮廓在后车图像中所占据的像素点的个数;根据后车A的偏移量和车体框获取车距d,该车距d为后车A的前脸与自车车尾之间的距离;根据后车A的车距d及偏移量获取后车A的第三辅助角度,第三辅助角度为第四直线与自车的横向中心线形成夹角的角度,第四直线为经过后视摄像头的位置和后车A前脸中心位置的直线;
当M=1时,后车A为目标后车,根据目标后车的第三辅助角度及车距d获取目标后车的第一辅助角度;
当M大于1时,后车A为M辆后车中的第i辆,i=1,2,…,M,根据第i辆后车的第三辅助角度及车体框获取第i辆后车的重要概率,并将重要概率最大的后车确定为目标车辆,根据目标后车的第三辅助角度及车距d获取目标后车的第一辅助角度。
当存在多辆后车时,采用本实施例方法可使得自车驾驶员在多辆后车中确定需要重点关注的后车,从而调节自车后视镜的角度,以使驾驶员随时可以通过后视镜观察值得关注车辆,进而保证车辆行驶安全。
在一个可行的实施例中,根据后车A的偏移量和车体框获取车距d,包括:
根据后车A的偏移量和车体框查询第一关系表,以得到后车A的车距d;其中,后车A的车距d为后车A的偏移量和车体框对应的距离,第一关系表为偏移量及车体框与距离的对应关系表;
根据所述后车A的车距d及偏移量获取后车A的第三辅助角度,包括:
根据后车A的车距d及偏移量查询第二关系表,以得到后车A的第三辅助角度,其中,后车A的第三辅助角度为后车A的车距d和偏移量对应的第三辅助角度,第二关系表为距 离和偏移量与第三辅助角度的对应关系表。
通过查表的方式可以快速获取车距d和第三辅助角度,从而快速确定目标后视镜的水平调节角度。
在一个可行的实施例中,根据目标后车的第三辅助角度车距d获取目标后车的第一辅助角度,包括:
根据目标后车的第三辅助角度和车距d查询第三关系表,以得到目标后车的第三辅助角度和车距d对应的第一辅助角度,
其中,目标后车的第三辅助角度和车距d对应的第一辅助角度为后车目标的第一辅助角度,第三关系表为第三辅助角度及车距d与第一辅助角度的对应关系表。通过查表的方式,可以快速确定目标后车的第一辅助角度,从而可快速确定目标后视镜的调节角度。
在一个可行的实施例中,根据后车图像获取后车A的车体框,包括:
对后车图像进行中值滤波,以得到滤波后的图像;根据canny边缘检测算法对滤波后的图像进行边缘检测,得到边缘检测结果;根据haar算子从边缘检测结果中获取后车A的轮廓,并计算得到后车A的轮廓中像素点的个数。
在一个可行的实施例中,根据第i辆后车的第三辅助角度及车体框获取第i辆后车的重要概率,包括:
根据第i辆后车的第三辅助角度及其车体框查询第四关系表,以得到第i辆后车的第三辅助角度及其车体框对应的重要概率;
其中,第i辆后车的第三辅助角度及其车体框对应的重要概率为第i辆后车的重要概率,第四关系表为第三辅助角度及车体框与重要概率的对应关系表。
在一个可行的实施例中,根据第一辅助角度和第二辅助角度获取目标后视镜的水平调节角度,包括:
根据第一辅助角度和第二辅助角度查询第五关系表,以得到第一辅助角度和第二辅助角度对应的水平调节角度;
其中,第一辅助角度和第二辅助角度对应的水平调节角度为目标后视镜的水平调节角度,第五关系表为第一辅助角度和第二辅助角度与水平调节角度的对应关系表。
第二方面,本申请实施例提供一种后视镜自适应调节方法,包括:
获取驾驶员人眼的空间位置及自车目标后视镜的空间位置,并根据人眼的空间位置和目标后视镜的空间位置获取垂直视野角度;根据驾驶员人眼的空间位置及垂直视野角度计算得到自车的辅助调节角度;获取自车后视摄像头采集的后车图像,并根据后车图像获取天空地面比R;根据自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度,并根据垂直调节角度将目标后视镜调节至目标垂直角度。
在此需要说明的是,水平视野角度为经过虚拟镜像点和目标后视镜的上边界上的点的直线与经过虚拟镜像点和目标后视镜的下边界上的点的直线形成夹角的角度,虚拟镜像点为以目标后视镜为对称轴,人眼空间位置的对称点。
采用本实施例使得后视镜的垂直角度基于驾驶员视野和车外场景进行调节,进而使得驾驶员随时可以通过后视镜观察后车行驶状态。相比于传统的后视镜调节,本实施例采用自适应调节的方式,不需要手动调节,避免了驾驶员因分心手动调节后视镜影响驾驶安全。
在一个可行的实施例中,自车的辅助调节角度为垂直视野中心线与第五直线形成的夹角的角度,第五直线为经过驾驶员人眼的空间位置及第三参考点的直线与垂直视野中心线形成的夹角的角度,第三参考点为目标后视镜上的点,垂直视野中心线为垂直视野角度的角平分线。
在一个可行的实施例中,根据后车图像获取天空地面比R,包括:
将后车图像纵向划分为多个图像带;从多个图像带中获取目标图像带,该目标图像带为多个图像带中天空和地面过渡连续的图像带;统计目标图像带中天空占据的像素个数和地面占据的像素个数;根据天空占据的像素个数和地面占据的像素个数计算得到天空地面比R,该天空地面比R为天空占据的像素个数和地面占据的像素个数的比值。
在一个可行的实施例中,根据自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度,包括:
根据辅助调节角度和天空地面比R查询辅助关系表,获取辅助调节角度和天空地面比R对应的垂直调节角度;其中,辅助调节角度和天空地面比R对应的垂直调节角为目标后视镜的垂直调节角度,辅助关系表为辅助调节角度及天空地面比R,与垂直调节角度之间的对应关系表。通过查表的方式,可以快速确定目标后视镜的垂直调节角度。
第三方面,本申请实施例提供一种后视镜自适应调节方法,包括:
获取自车后视摄像头采集的后车图像;将该后车图像转换为灰度图,并计算灰度图中像素的平均值;若该平均值不小于预设值,则获取自车驾驶员人眼的空间位置及自车目标后视镜的空间位置,并根据人眼的空间位置和目标后视镜的空间位置获取垂直视野角度;根据驾驶员人眼的空间位置及垂直视野角度计算得到自车的辅助调节角度;根据后车图像获取天空地面比R;根据自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度,并根据垂直调节角度将目标后视镜调节至目标垂直角度。
在此需要说明的是,水平视野角度为经过虚拟镜像点和目标后视镜的上边界上的点的直线与经过虚拟镜像点和目标后视镜的下边界上的点的直线形成夹角的角度,虚拟镜像点为以目标后视镜为对称轴,人眼空间位置的对称点。
在此需要说明的是,对于黑夜或极端天气情况(比如大雾、大雨、大雪等能见度较低的天气),是无法基于后车图像获取天空地面比R,因此在使用后车图像时,需要对后车图像进行检测确定能否获取天空地面比R;若无法通过后车图像获取天空地面比R,则根据自车当前行驶的坡度来调节目标后视镜。
采用本实施例使得后视镜的垂直角度基于驾驶员视野和车外场景,或者根据当前行驶道路的坡度进行调节,进而使得驾驶员随时可以通过后视镜观察自车后面的车况。相比于传统的后视镜调节,本实施例采用自适应调节的方式,不需要手动调节,避免了驾驶员因分心影响驾驶安全。
在一个可行的实施例中,自车的辅助调节角度为垂直视野中心线与第五直线形成的夹角的角度,第五直线为经过驾驶员人眼的空间位置及第三参考点的直线与垂直视野中心线形成的夹角的角度,第三参考点为目标后视镜上的点,垂直视野中心线为垂直视野角度的角平分线。
在一个可行的实施例中,根据后车图像获取天空地面比R,包括:
将后车图像纵向划分为多个图像带;从该多个图像带中获取目标图像带,该目标图像带为多个图像带中天空和地面过渡连续的图像带;统计目标图像带中天空占据的像素个数和地面占据的像素个数;根据天空占据的像素个数和地面占据的像素个数计算得到天空地面比R,该天空地面比R为天空占据的像素个数和地面占据的像素个数的比值。
在一个可行的实施例中,根据自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度,包括:
根据辅助调节角度和天空地面比R查询辅助关系表,获取辅助调节角度和天空地面比R对应的垂直调节角度;其中,辅助调节角度和天空地面比R对应的垂直调节角为目标后视镜的垂直调节角度,辅助关系表为辅助调节角度及天空地面比,与垂直调节角度之间的对应关系表。通过查表的方式,可以快速确定目标后视镜的垂直调节角度。
在一个可行的实施例中,若平均值小于预设值,后视镜自适应调节方法还包括:
获取预设时长内自车所行驶的道路的坡度;根据预设时长内自车所行驶的道路的坡度确定自车的行驶状态;根据自车的行驶状态和坡度β将目标后视镜的垂直角度调节至目标垂直角度,该坡度β为自车当前所在道路的坡度的绝对值。
在一个可行的实施例中,根据自车的行驶状态和坡度β确定垂直调节角度,包括:
当自车的行驶状态为平地行驶状态时,将目标垂直角度调节为预设角度θ;当自车的行驶状态为长下坡行驶状态时,将目标垂直角度调节为预设角度θ;当自车的行驶状态为进入下坡行驶状态时,将目标垂直角度调节为θ-β/2;当自车的行驶状态为脱离下坡行驶状态时,将目标垂直角度调节为θ+β/2;当自车的行驶状态为长上坡行驶状态时,将目标垂直角度调节为预设角度θ;当自车的行驶状态为进入上坡行驶状态时,将目标垂直角度调节为θ-β/2;当自车的行驶状态为脱离上坡行驶状态时,将目标垂直角度调节为θ+β/2。
第四方面,本申请实施例提供一种后视镜自适应调节装置,包括:
获取模块,用于获取自车驾驶员人眼空间位置和目标后视镜空间位置,并根据人眼空间位置及目标后视镜空间位置获取驾驶员在目标后视镜中的水平视野角度;获取后车图像,根据后车图像获取目标后车的第一辅助角度,该第一辅助角度是根据后车图像和第一参考点得到的,其中,第一参考点为目标后视镜上的点,后车图像为后视摄像头获取的,且后车图像中包括目标后车;
计算模块,用于根据水平视野角度、人眼空间位置和目标后视镜位置计算得到第二辅助角度;
获取模块,还用于根据第一辅助角度和第二辅助角度获取目标后视镜的水平调节角度;
调节模块,用于根据水平调节角度调节目标后视镜的水平角度。
在一个可行的实施例中,第一辅助角度为第一直线与第二直线形成的夹角的角度,第一直线为经过目标后车和第一参考点的直线,第二直线为经过第一参考点且垂直于目标后视镜的直线;
第二辅助角度为第三直线与水平视野中心线形成的夹角的角度,第三直线为经过驾驶员人眼位置与第二参考点的直线,第二参考点为水平视野中心线与目标后视镜的镜面的交点,水平视野中心线为水平视野角度的角平分线。
在一个可行的实施例中,后车图像中包括M辆车,M为大于或者等于1的整数,在根 据后车图像获取目标后车的第一辅助角度的方面,获取模块具体用于:
根据后车图像获取后车A的车体框和偏移量,该偏移量为在后车图像中后车A的前脸中心位置与后车图像的纵向中心线之间的距离,车体框为后车A的轮廓在后车图像中所占据的像素点的个数;根据后车A的偏移量和车体框获取车距d,该车距d为后车A的前脸与自车车尾之间的距离;根据后车A的车距d及偏移量获取后车A的第三辅助角度,第三辅助角度为第四直线与自车的横向中心线形成夹角的角度,第四直线为经过后视摄像头的位置和后车A前脸中心位置的直线;
当M=1时,后车A为目标后车,根据目标后车的第三辅助角度及车距d获取目标后车的第一辅助角度;
当M大于1时,后车A为M辆后车中的第i辆,i=1,2,…,M,根据第i辆后车的第三辅助角度及车体框获取第i辆后车的重要概率,并将重要概率最大的后车确定为目标车辆,根据目标后车的第三辅助角度及车距d获取目标后车的第一辅助角度。
在一个可行的实施例中,在根据后车A的偏移量和车体框获取车距d的方面,获取模块具体用于:
根据后车A的偏移量和车体框查询第一关系表,以得到后车A的车距d;其中,后车A的车距d为后车A的偏移量和车体框对应的距离,第一关系表为偏移量及车体框与距离的对应关系表;
在根据后车A的车距d及偏移量获取后车A的第三辅助角度的方面,获取模块具体用于:
根据后车A的车距d及偏移量查询第二关系表,以得到后车A的第三辅助角度,其中,后车A的第三辅助角度为后车A的车距d和偏移量对应的第三辅助角度,第二关系表为距离和偏移量与第三辅助角度的对应关系表。
在一个可行的实施例中,在根据目标后车的第三辅助角度和车距d获取目标后车的第一辅助角度的方面,获取模块具体用于:
根据目标后车的第三辅助角度和车距d查询第三关系表,以得到目标后车的第三辅助角度和车距d对应的第一辅助角度,目标后车的第三辅助角度和车距d对应的第一辅助角度为后车目标的第一辅助角度,第三关系表为第三辅助角度及车距d与第一辅助角度的对应关系表。
在一个可行的实施例中,在根据后车图像获取后车A的车体框的方面,获取模块具体用于:
对后车图像进行中值滤波,以得到滤波后的图像;根据canny边缘检测算法对滤波后的图像进行边缘检测,得到边缘检测结果;根据haar算子从边缘检测结果中获取后车A的轮廓,并计算得到后车A的轮廓中像素点的个数。
在一个可行的实施例中,在根据第i辆后车的第三辅助角度及车体框获取第i辆后车的重要概率的方面,获取模块具体用于:
根据第i辆后车的第三辅助角度及其车体框查询第四关系表,以得到第i辆后车的第三辅助角度及其车体框对应的重要概率;其中,第i辆后车的第三辅助角度及其车体框对应的重要概率为第i辆后车的重要概率,第四关系表为第三辅助角度及车体框与重要概率的 对应关系表。
在一个可行的实施例中,在根据第一辅助角度和第二辅助角度获取目标后视镜的水平调节角度的方面,获取模块具体用于:
根据第一辅助角度和第二辅助角度查询第五关系表,以得到第一辅助角度和第二辅助角度对应的水平调节角度;其中,第一辅助角度和第二辅助角度对应的水平调节角度为目标后视镜的水平调节角度,第五关系表为第一辅助角度和第二辅助角度与水平调节角度的对应关系表。
第五方面,本申请实施例提供一种后视镜自适应调节装置,包括:
获取模块,用于获取驾驶员人眼的空间位置及自车目标后视镜的空间位置,并根据人眼的空间位置和目标后视镜的空间位置获取垂直视野角度;
计算模块,用于根据驾驶员人眼的空间位置及垂直视野角度计算得到自车的辅助调节角度;
获取模块,还用于获取自车后视摄像头采集的后车图像,并根据后车图像获取天空地面比R;根据自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度;
调节模块,用于根据垂直调节角度将目标后视镜调节至目标垂直角度。
在一个可行的实施例中,自车的辅助调节角度为垂直视野中心线与第五直线形成的夹角的角度,第五直线为经过驾驶员人眼的空间位置及第三参考点的直线与垂直视野中心线形成的夹角的角度,第三参考点为目标后视镜上的点,垂直视野中心线为垂直视野角度的角平分线。
在一个可行的实施例中,在根据后车图像获取天空地面比R的方面,获取模块具体用于:
将后车图像纵向划分为多个图像带;从该多个图像带中获取目标图像带,该目标图像带为多个图像带中天空和地面过渡连续的图像带;统计目标图像带中天空占据的像素个数和地面占据的像素个数;根据天空占据的像素个数和地面占据的像素个数计算得到天空地面比R,该天空地面比R为天空占据的像素个数和地面占据的像素个数的比值。
在一个可行的实施例中,在根据自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度的方面,获取模块具体用于:
根据辅助调节角度和天空地面比R查询辅助关系表,获取辅助调节角度和天空地面比R对应的垂直调节角度;其中,辅助调节角度和天空地面比R对应的垂直调节角为目标后视镜的垂直调节角度,辅助关系表为辅助调节角度及天空地面比,与垂直调节角度之间的对应关系表。
第六方面,本申请实施例提供一种后视镜自适应调节装置,包括:
获取模块,用于获取自车后视摄像头采集的后车图像;
计算模块,用于将后车图像转换为灰度图,并计算灰度图中像素的平均值;
获取模块,还用于若平均值不小于预设值,则获取自车驾驶员人眼的空间位置及自车目标后视镜的空间位置,并根据人眼的空间位置和目标后视镜的空间位置获取垂直视野角度;
计算模块,还用于根据驾驶员人眼的空间位置及垂直视野角度计算得到自车的辅助调 节角度;
获取模块,还用于根据后车图像获取天空地面比R;根据自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度;
调节模块,用于根据垂直调节角度将目标后视镜调节至目标垂直角度。
在一个可行的实施例中,自车的辅助调节角度为垂直视野中心线与第五直线形成的夹角的角度,第五直线为经过驾驶员人眼的空间位置及第三参考点的直线与垂直视野中心线形成的夹角的角度,第三参考点为目标后视镜上的点,垂直视野中心线为垂直视野角度的角平分线。
在一个可行的实施例中,在根据后车图像获取天空地面比R的方面,获取模块具体用于:
将后车图像纵向划分为多个图像带;从多个图像带中获取目标图像带,该目标图像带为多个图像带中天空和地面过渡连续的图像带;统计目标图像带中天空占据的像素个数和地面占据的像素个数;根据天空占据的像素个数和地面占据的像素个数计算得到天空地面比R,该天空地面比R为天空占据的像素个数和地面占据的像素个数的比值。
在一个可行的实施例中,在根据自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度的方面,获取模块具体用于:
根据辅助调节角度和天空地面比R查询辅助关系表,获取辅助调节角度和天空地面比R对应的垂直调节角度;其中,辅助调节角度和天空地面比R对应的垂直调节角为目标后视镜的垂直调节角度,辅助关系表为辅助调节角度及天空地面比R,与垂直调节角度之间的对应关系表。
在一个可行的实施例中,若平均值小于预设值,获取模块还用于:
获取预设时长内自车所行驶的道路的坡度;根据预设时长内自车所行驶的道路的坡度确定自车的行驶状态;
调节模块,还用于根据自车的行驶状态和坡度β将目标后视镜的垂直角度调节至目标垂直角度,该坡度β为自车当前所在道路的坡度的绝对值。
在一个可行的实施例中,调节模块具体用于:
当自车的行驶状态为平地行驶状态时,将目标垂直角度调节为预设角度θ;当自车的行驶状态为长下坡行驶状态时,将目标垂直角度调节为预设角度θ;当自车的行驶状态为进入下坡行驶状态时,将目标垂直角度调节为θ-β/2;当自车的行驶状态为脱离下坡行驶状态时,将目标垂直角度调节为θ+β/2;当自车的行驶状态为长上坡行驶状态时,将目标垂直角度调节为预设角度θ;当自车的行驶状态为进入上坡行驶状态时,将目标垂直角度调节为θ-β/2;当自车的行驶状态为脱离上坡行驶状态时,将目标垂直角度调节为θ+β/2。
第七方面,本申请实施例提供一种后视镜自适应调节装置,包括:存储器,用于存储程序;处理器,用于执行存储器存储的程序,当存储器存储的程序被执行时,处理器用于执行第一方面、第二方面和第三方面中的至少一种方法。
第八方面,提供一种计算机可读介质,该计算机可读介质存储用于设备执行的程序代码,该程序代码包括用于执行第一方面、第二方面或第三方面中的至少一种方法。
第九方面,提供一种包含指令的计算机程序产品,当该计算机程序产品在计算机上运 行时,使得计算机执行上述第一方面、第二方面或第三方面中的至少一种方法。
第十方面,提供一种芯片,所述芯片包括处理器与数据接口,所述处理器通过所述数据接口读取存储器上存储的指令,执行第一方面、第二方面或第三方面中的至少一种方法。
可选地,作为一种实现方式,所述芯片还可以包括存储器,所述存储器中存储有指令,所述处理器用于执行所述存储器上存储的指令,当所述指令被执行时,所述处理器用于执行第一方面、第二方面或第三方面中的至少一种方法。
第十一方面,提供一种电子设备,该电子设备包括上述第四方面至第六方面中的至少一个方面中的装置。
本发明的这些方面或其他方面在以下实施例的描述中会更加简明易懂。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例提供的一种自动驾驶汽车的结构示意图;
图2为本申请实施例提供的一种后视镜自适应调节的方法流程示意图;
图3为获取人眼空间位置的原理示意图;
图4为本申请实施例提供的驾驶员在后视镜中水平视野示意图;
图5为本申请实施例提供的辅助角度示意图;
图6为本申请实施例提供的辅助角度及水平调节角度示意图;
图7为本申请实施例提供的另一种后视镜自适应调节的方法流程示意图;
图8为本申请实施例提供的辅助调节角度的示意图;
图9为本申请实施例提供的计算天空地面比的原理示意图;
图10为本申请实施例提供的另一种后视镜自适应调节的方法流程示意图;
图11为本申请实施例提供的一种基于坡度调节后视镜垂直角度的原理示意图;
图12为本申请实施例提供的一种后视镜自适应调节装置的结构示意图;
图13为本申请实施例提供的另一种后视镜自适应调节装置的结构示意图;
图14为本申请实施例提供的另一种后视镜自适应调节装置的结构示意图;
图15为本申请实施例提供的另一种后视镜自适应调节装置的结构示意图;
图16为本申请实施例提供的另一种后视镜自适应调节装置的结构示意图;
图17为本申请实施例提供的一种计算机程序产品的结构示意图。
具体实施方式
下面结合附图对本申请的实施例进行描述。
图1是本发明实施例提供的车辆100的功能框图。在一个实施例中,将车辆100配置为完全或部分地自动驾驶模式。例如,车辆100可以在处于自动驾驶模式中的同时控制自 身,并且可通过人为操作来确定车辆及其周边环境的当前状态,确定周边环境中的至少一个其他车辆的可能行为,并确定该其他车辆执行可能行为的可能性相对应的置信水平,基于所确定的信息来控制车辆100。在车辆100处于自动驾驶模式中时,可以将车辆100置为在没有和人交互的情况下操作。
车辆100可包括各种子系统,例如行进系统102、传感器系统104、控制系统106、一个或多个外围设备108以及电源110、计算机系统112和用户接口116。可选地,车辆100可包括更多或更少的子系统,并且每个子系统可包括多个元件。另外,车辆100的每个子系统和元件可以通过有线或者无线互连。
行进系统102可包括为车辆100提供动力运动的组件。在一个实施例中,推进系统102可包括引擎118、能量源119、传动装置120和车轮/轮胎121。引擎118可以是内燃引擎、电动机、空气压缩引擎或其他类型的引擎组合,例如汽油发动机和电动机组成的混动引擎,内燃引擎和空气压缩引擎组成的混动引擎。引擎118将能量源119转换成机械能量。
能量源119的示例包括汽油、柴油、其他基于石油的燃料、丙烷、其他基于压缩气体的燃料、乙醇、太阳能电池板、电池和其他电力来源。能量源119也可以为车辆100的其他系统提供能量。
传动装置120可以将来自引擎118的机械动力传送到车轮121。传动装置120可包括变速箱、差速器和驱动轴。在一个实施例中,传动装置120还可以包括其他器件,比如离合器。其中,驱动轴可包括可耦合到一个或多个车轮121的一个或多个轴。
传感器系统104可包括感测关于车辆100周边的环境的信息的若干个传感器。例如,传感器系统104可包括定位系统122(定位系统可以是GPS系统,也可以是北斗系统或者其他定位系统)、惯性测量单元(inertial measurement unit,IMU)124、雷达126、激光测距仪128以及相机130。传感器系统104还可包括被监视车辆100的内部系统的传感器(例如,车内空气质量监测器、燃油量表、机油温度表等)。来自这些传感器中的一个或多个的传感器数据可用于检测对象及其相应特性(位置、形状、方向、速度等)。这种检测和识别是自主车辆100的安全操作的关键功能。
定位系统122可用于估计车辆100的地理位置。IMU 124用于基于惯性加速度来感测车辆100的位置和朝向变化。在一个实施例中,IMU 124可以是加速度计和陀螺仪的组合。
雷达126可利用无线电信号来感测车辆100的周边环境内的物体。在一些实施例中,除了感测物体以外,雷达126还可用于感测物体的速度和/或前进方向。
激光测距仪128可利用激光来感测车辆100所位于的环境中的物体。在一些实施例中,激光测距仪128可包括一个或多个激光源、激光扫描器以及一个或多个检测器,以及其他系统组件。
相机130可用于捕捉车辆100的周边环境的多个图像。相机130可以是静态相机或视频相机。
控制系统106为控制车辆100及其组件的操作。控制系统106可包括各种元件,其中包括转向系统132、油门134、制动单元136、传感器融合算法138、计算机视觉系统140、路线控制系统142以及障碍物避免系统144。
转向系统132可操作来调整车辆100的前进方向。例如在一个实施例中可以为方向盘 系统。
油门134用于控制引擎118的操作速度并进而控制车辆100的速度。
制动单元136用于控制车辆100减速。制动单元136可使用摩擦力来减慢车轮121。在其他实施例中,制动单元136可将车轮121的动能转换为电流。制动单元136也可采取其他形式来减慢车轮121转速从而控制车辆100的速度。
计算机视觉系统140可以操作来处理和分析由相机130捕捉的图像以便识别车辆100周边环境中的物体和/或特征。所述物体和/或特征可包括交通信号、道路边界和障碍物。计算机视觉系统140可使用物体识别算法、运动中恢复结构(Structure from Motion,SFM)算法、视频跟踪和其他计算机视觉技术。在一些实施例中,计算机视觉系统140可以用于为环境绘制地图、跟踪物体、估计物体的速度等等。
路线控制系统142用于确定车辆100的行驶路线。在一些实施例中,路线控制系统142可结合来自传感器138、GPS 122和一个或多个预定地图的数据以为车辆100确定行驶路线。
障碍物避免系统144用于识别、评估和避免或者以其他方式越过车辆100的环境中的潜在障碍物。
当然,在一个实例中,控制系统106可以增加或替换地包括除了所示出和描述的那些以外的组件。或者也可以减少一部分上述示出的组件。
车辆100通过外围设备108与外部传感器、其他车辆、其他计算机系统或用户之间进行交互。外围设备108可包括无线通信系统146、车载电脑148、麦克风150和/或扬声器152。
在一些实施例中,外围设备108提供车辆100的用户与用户接口116交互的手段。例如,车载电脑148可向车辆100的用户提供信息。用户接口116还可操作车载电脑148来接收用户的输入。车载电脑148可以通过触摸屏进行操作。在其他情况中,外围设备108可提供用于车辆100与位于车内的其它设备通信的手段。例如,麦克风150可从车辆100的用户接收音频(例如,语音命令或其他音频输入)。类似地,扬声器152可向车辆100的用户输出音频。
无线通信系统146可以直接地或者经由通信网络来与一个或多个设备无线通信。例如,无线通信系统146可使用3G蜂窝通信,例如CDMA、EVD0、GSM/GPRS,或者4G蜂窝通信,例如LTE。或者5G蜂窝通信。无线通信系统146可利用WiFi与无线局域网(wireless local area network,WLAN)通信。在一些实施例中,无线通信系统146可利用红外链路、蓝牙或ZigBee与设备直接通信。其他无线协议,例如各种车辆通信系统,例如,无线通信系统146可包括一个或多个专用短程通信(dedicated short range communications,DSRC)设备,这些设备可包括车辆和/或路边站台之间的公共和/或私有数据通信。
电源110可向车辆100的各种组件提供电力。在一个实施例中,电源110可以为可再充电锂离子或铅酸电池。这种电池的一个或多个电池组可被配置为电源为车辆100的各种组件提供电力。在一些实施例中,电源110和能量源119可一起实现,例如一些全电动车中那样。
车辆100的部分或所有功能受计算机系统112控制。计算机系统112可包括至少一个 处理器113,处理器113执行存储在例如数据存储装置114这样的非暂态计算机可读介质中的指令115。计算机系统112还可以是采用分布式方式控制车辆100的个体组件或子系统的多个计算设备。
处理器113可以是任何常规的处理器,诸如商业可获得的CPU。替选地,该处理器可以是诸如ASIC或其它基于硬件的处理器的专用设备。尽管图1功能性地图示了处理器、存储器、和在相同块中的计算机110的其它元件,但是本领域的普通技术人员应该理解该处理器、计算机、或存储器实际上可以包括可以或者可以不存储在相同的物理外壳内的多个处理器、计算机、或存储器。例如,存储器可以是硬盘驱动器或位于不同于计算机110的外壳内的其它存储介质。因此,对处理器或计算机的引用将被理解为包括对可以或者可以不并行操作的处理器或计算机或存储器的集合的引用。不同于使用单一的处理器来执行此处所描述的步骤,诸如转向组件和减速组件的一些组件每个都可以具有其自己的处理器,所述处理器只执行与特定于组件的功能相关的计算。
在此处所描述的各个方面中,处理器可以位于远离该车辆并且与该车辆进行无线通信。在其它方面中,此处所描述的过程中的一些在布置于车辆内的处理器上执行而其它则由远程处理器执行,包括采取执行单一操纵的必要步骤。
在一些实施例中,数据存储装置114可包含指令115(例如,程序逻辑),指令115可被处理器113执行来执行车辆100的各种功能,包括以上描述的那些功能。数据存储装置114也可包含额外的指令,包括向推进系统102、传感器系统104、控制系统106和外围设备108中的一个或多个发送数据、从其接收数据、与其交互和/或对其进行控制的指令。
除了指令115以外,数据存储装置114还可存储数据,例如道路地图、路线信息,车辆的位置、方向、速度以及其它这样的车辆数据,以及其他信息。这种信息可在车辆100在自主、半自主和/或手动模式中操作期间被车辆100和计算机系统112使用。
相机130可以包括驾驶员监控系统(driver monitoring system,DMS)摄像头、座舱监控系统(cockpit monitoring system,CMS)摄像头及位于获取后车图像的后视摄像头。DMS摄像头用于获取驾驶员的头部图像,CMS摄像头用于获取该驾驶员所驾驶车辆内部的图像,该图像显示有驾驶员的头部。处理器113基于DMS摄像头获取的图像和CMS摄像头获取的图像得到驾驶员人眼的空间位置。
对于后视镜的水平调节,处理器113基于驾驶员人眼的空间位置和后视镜的空间位置获取驾驶员在后视镜中的水平视野角度,基于后车图像获取第一辅助角度,基于水平视野角度、人眼空间位置和后视镜空间位置获取第二辅助角度,最后基于第一辅助角度和第二辅助角度获取水平调节角度,基于水平调节角度调节后视镜的水平角度。
对于垂直调节角度,处理器113基于驾驶员人眼的空间位置和后视镜的空间位置获取驾驶员在后视镜中的垂直视野角度,基于垂直视野角度、人眼空间位置和后视镜空间位置获取辅助调节角度;处理器113基于后车图像获取天空地面比R,再基于天空地面比R和辅助调节角度获取垂直调节角度。最后处理器113基于垂直调节角度将后视镜的垂直角度调节至目标垂直角度。当无法获取天空地面比R时,陀螺仪获取预设时长内自车所行驶道路的坡度,处理器113根据预设时长内的坡度自车的行驶状态,然后基于自车的行驶状态和当前时刻自车所行驶道路的坡度来调节后视镜的垂直调节角度。
用户接口116,用于向车辆100的用户提供信息或从其接收信息。可选地,用户接口116可包括在外围设备108的集合内的一个或多个输入/输出设备,例如无线通信系统146、车载电脑148、麦克风150和扬声器152。
计算机系统112可基于从各种子系统(例如,行进系统102、传感器系统104和控制系统106)以及从用户接口116接收的输入来控制车辆100的功能。例如,计算机系统112可利用来自控制系统106的输入以便控制转向单元132来避免由传感器系统104和障碍物避免系统144检测到的障碍物。在一些实施例中,计算机系统112可操作来对车辆100及其子系统的许多方面提供控制。
可选地,上述这些组件中的一个或多个可与车辆100分开安装或关联。例如,数据存储装置114可以部分或完全地与车辆1100分开存在。上述组件可以按有线和/或无线方式来通信地耦合在一起。
可选地,上述组件只是一个示例,实际应用中,上述各个模块中的组件有可能根据实际需要增添或者删除,图1不应理解为对本发明实施例的限制。
在道路行进的自动驾驶汽车,如上面的车辆100,可以识别其周围环境内的物体以确定对当前速度的调整。所述物体可以是其它车辆、交通控制设备、或者其它类型的物体。在一些示例中,可以独立地考虑每个识别的物体,并且基于物体的各自的特性,诸如它的当前速度、加速度、与车辆的间距等,可以用来确定自动驾驶汽车所要调整的速度。
可选地,自动驾驶汽车车辆100或者与自动驾驶车辆100相关联的计算设备(如图1的计算机系统112、计算机视觉系统140、数据存储装置114)可以基于所识别的物体的特性和周围环境的状态(例如,交通、雨、道路上的冰、等等)来预测所述识别的物体的行为。可选地,每一个所识别的物体都依赖于彼此的行为,因此还可以将所识别的所有物体全部一起考虑来预测单个识别的物体的行为。车辆100能够基于预测的所述识别的物体的行为来调整它的速度。换句话说,自动驾驶汽车能够基于所预测的物体的行为来确定车辆将需要调整到(例如,加速、减速、或者停止)什么稳定状态。在这个过程中,也可以考虑其它因素来确定车辆100的速度,诸如,车辆100在行驶的道路中的横向位置、道路的曲率、静态和动态物体的接近度等等。
除了提供调整自动驾驶汽车的速度的指令之外,计算设备还可以提供修改车辆100的转向角的指令,以使得自动驾驶汽车遵循给定的轨迹和/或维持与自动驾驶汽车附近的物体(例如,道路上的相邻车道中的轿车)的安全横向和纵向距离。
上述车辆100可以为轿车、卡车、摩托车、公共汽车、船、飞机、直升飞机、割草机、娱乐车、游乐场车辆、施工设备、电车、高尔夫球车、火车、和手推车等,本发明实施例不做特别的限定。
在此需要说明的是,对车辆后视镜的调节包括后视镜左右方向调节和垂直方向调节。下面介绍后视镜左右方向上的调节方式。
参见图2,图2为本发明实施例提供的一种后视镜自适应调节方法的流程示意图。如图2所示,该方法包括:
S201、获取驾驶员人眼的空间位置及后视镜的位置,并根据驾驶员人眼的空间位置和 自车目标后视镜的位置获取驾驶员的水平视野。
其中,在驾驶员所驾驶的车辆上配置有驾驶员监控系统(driver monitoring system,DMS)摄像头和座舱监控系统(cockpit monitoring system,CMS)摄像头。DMS摄像头用于获取驾驶员的头部图像,CMS摄像头用于获取该驾驶员所驾驶车辆内部的图像,该图像显示有驾驶员的头部。
可选地,可通过DMS摄像头获取的图像得到驾驶员人眼的空间位置或者通过DMS摄像头获取的图像和CMS摄像头获取图像得到驾驶员人眼的空间位置。
以下具体介绍通过DMS摄像头获取的图像和CMS摄像头获取图像得到驾驶员人眼的空间位置。
具体地,获取DMS摄像头和CMS摄像头的内参矩阵和外参矩阵,分别在DMS摄像头获取的图像和CMS摄像头获取的图像中检测出眼睛位置P 1,P 2、结合DMS摄像头和CMS摄像头的外参矩阵及内参矩阵,计算眼睛所在空间直线O 1P 1,O 2P 2,最后基于空间直线O 1P 1,O 2P 2获取眼睛空间点P,眼睛空间点P为空间直线O 1P 1,O 2P 2的交点。按照此方法获取驾驶员左右眼空间位置,然后对驾驶员左右眼的空间位置取平均值,得到驾驶员人眼的空间位置。
其中,获取两个摄像头的内参与外参矩阵,是为了基于内参与外参建立摄像头与摄像头、摄像头与空间中其他物体的位置、方向关系。
在一个具体的示例中,通过相机标定方法获取出厂前对DMS摄像头和CMS摄像头的标定的内参矩阵和外参矩阵,然后通过深度学习或其他算法检测出DMS摄像头获取的图像和CMS摄像头获取的图像中眼睛2D位置坐标p 1,p 2,再结合DMS摄像头和CMS摄像头的标定的内参矩阵和外参矩阵,计算眼睛所在空间直线O 1P 1,O 2P 2,如图3所示,基于相机的内/外参将眼睛2D位置p 1,p 2转为空间坐标P 1,P 2;O 1,O 2是DMS摄像头和CMS摄像头在空间中的光学原点,均记录在两个摄像头的外参矩阵中,进而可根据空间坐标点P 1,P 2及O 1,O 2计算出眼睛所在空间直线O 1P 1,O 2P 2,最后将眼睛所在空间直线O 1P 1,O 2P 2的交点确定为眼睛空间点。若眼睛空间直线O 1P 1,O 2P 2没有交点,则选择距离两条直线距离最近的点为眼睛空间点P。
按照上述方法获取驾驶员左右眼空间位置,然后对驾驶员左右眼的空间位置取平均值,得到驾驶员人眼的空间位置。
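Below is a hedged Python sketch of the ray step just described: finding the eye space point P as the intersection of the two back-projected rays O1P1 and O2P2, or, when the rays do not meet exactly, the point closest to both. Back-projection of the 2D detections p1, p2 through the calibrated intrinsic and extrinsic matrices is assumed to have been done already; the optical centres and ray directions shown are placeholders, not calibration values from this application.

```python
import numpy as np

def eye_space_point(o1, d1, o2, d2):
    """Closest point to the two camera rays o1 + t*d1 and o2 + s*d2 (midpoint method)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = np.dot(d1, d1), np.dot(d1, d2), np.dot(d2, d2)
    d, e = np.dot(d1, w0), np.dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # nearly parallel rays: project o1 onto the second ray
        t, s = 0.0, e / c
    else:
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
    p1 = o1 + t * d1               # closest point on the DMS ray
    p2 = o2 + s * d2               # closest point on the CMS ray
    return (p1 + p2) / 2.0         # eye space point P (exact intersection if the rays meet)

# Placeholder optical centres and back-projected ray directions (illustrative only).
O1 = np.array([0.0, 0.0, 1.30])      # DMS camera optical centre
O2 = np.array([0.50, 0.80, 1.10])    # CMS camera optical centre
r1 = np.array([0.30, -0.60, -0.10])  # ray through the eye pixel p1
r2 = np.array([-0.20, -0.90, 0.00])  # ray through the eye pixel p2
print(eye_space_point(O1, r1, O2, r2))
```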
在获取驾驶员人眼的空间位置后,获取自车目标后视镜的位置,然后基于驾驶员人眼空间位置和目标后视镜的位置确定驾驶员的水平视野角度。
如图4所示,E为驾驶员人眼的空间位置,R和L分别为自车右后视镜的左边界点和右边界点,A’为以直线RL为对称轴,点E的对称点。∠LA′R为上述驾驶的水平视野角度,A为驾驶员的视野。
S202、获取后车图像,并根据后车图像获取目标后车的第一辅助角度。
其中,后车图像由自车的后视摄像头获取的,该后视摄像头用于获取自车后方行驶车辆的图像。
可选地,该后视摄像头可位于自车后方的任何位置,比如车牌上边界或下边界的中心位置,或者车后方的左上角、左下角、右上角或者右下角等位置。
在此需要说明的是,后车图像中可包括在自车后方多车道上行驶的车辆。
在一个可行的实施例中,后车图像包括M辆后车,根据后车图像确定目标后车的第一辅助角度,包括:
根据后车图像获取后车A的车体框和偏移量,该偏移量为在后车图像中后车A的前脸中心位置与后车图像的纵向中心线之间的距离,车体框为后车A的轮廓在后车图像中所占据的像素点的个数;根据后车A的偏移量和车体框获取车距d,该车距d为后车A的前脸与自车车尾之间的距离;根据后车A的车距d及偏移量获取后车A的第三辅助角度,第三辅助角度为第四直线与自车的横向中心线形成夹角的角度,第四直线为经过后视摄像头的位置和后车A前脸中心位置的直线;
当M=1时,上述后车A为目标后车,根据目标后车的第三辅助角度及车距d获取目标后车的第一辅助角度;
当M大于1时,后车A为M辆后车中的第i辆,i=1,2,…,M,根据第i辆后车的第三辅助角度及车体框获取第i辆后车的重要概率,将重要概率最大的后车确定为目标车辆,根据目标后车的第三辅助角度及车距d获取目标后车的第一辅助角度。
其中,重要概率用于表征后车的重要程度,或者后车需要被驾驶员关注程度。若后车的车体框越大,且第三辅助角度越小,则后车的重要概率越大。后车的重要概率越大,后车的重要程度越高,或后车需要被驾驶员关注的程度越高。
当存在多辆后车时,通过本实施例从多辆后车中确定需要重点关注的后车,然后调节自车的目标后视镜的水平角度,使得该车位于自车驾驶员的视野内,自车驾驶员可实时关注该车的行驶状态,进而保证自车的行驶安全。同时由于非重点关注车辆不在自车驾驶员的视野内,避免了非重点关注车辆干扰自车驾驶员的判断。
其中,目标后车的第一辅助角度为第一直线与第二直线形成的夹角的角度,第一直线为经过目标后车和第一参考点的直线,第二直线为经过第一参考点且垂直于目标后视镜的直线,第一参考点为目标后视镜上的点。
进一步地,第一参考点为目标后视镜的中心位置点。
在一个具体的示例中,如图5所示,第三辅助角度为∠COY,其中,点O为自车的车尾中心位置点,也是后视摄像头的安装位置。直线OY为自车的纵向中心线,OX为经过点O的垂直线,C为后车A前脸中心位置,点O’为自车右后视镜的中心位置点,直线O’L’为经过O’且垂直于自车右后视镜镜面的直线。其中,第一参考点为目标后视镜的中心位置点,第一辅助角度为∠CO′L′,第一直线为直线CO′,第二直线为直线O′L′。如图5所示,第三辅助角度为∠COY经过后车A前脸中心位置和后视摄像头位置的直线,与经过后视摄像头位置且与自车的纵向中心线的平行的直线的夹角。
在一个可行的实施例中,根据后车图像获取后车A的车体框,包括:
对后车图像进行中值滤波,得到滤波后的图像,再根据canny检测算法对滤波后的图像进行边缘检测,得到边缘检测结果;最后根据haar算子从边缘检测结果中获取后车A的轮廓,后车A的车体框为后车A的轮廓内像素点的个数。
在此需要说明的是,若后车图像中包括多辆后车,可按照上述方法同时获取该多辆后车中每辆后车的车体框。
具体地,通过后视摄像头获取自车后方的图像(即后车图像),对该图像进行中值滤波、以消除后车图像中的干扰信息,再对滤波后的图像进行canny边缘,以对该图像中的物体进行分割,最后对边缘检测结果进行Haar算子检测,以用于在边缘检测结果中找出被分割车辆中的车体,并画出每辆车的最小外接矩形框,计算出每辆车的矩形框内所占像素个数,该像素个数即为车体框。至此得到多辆后车中每辆后车的车体框。
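A hedged OpenCV sketch of this body-box step is given below: median filtering, Canny edge detection, then a Haar cascade applied to the edge map to locate vehicles and count the pixels of each minimum bounding rectangle. The cascade file name is a placeholder (this application does not name a specific pretrained model), and the filter size and thresholds are illustrative.

```python
import cv2

def car_body_boxes(rear_image_path, cascade_path="vehicle_cascade.xml"):
    """Return candidate body boxes (rectangle + pixel count) from a rear-view image."""
    img = cv2.imread(rear_image_path, cv2.IMREAD_GRAYSCALE)
    filtered = cv2.medianBlur(img, 5)        # median filter to suppress interference
    edges = cv2.Canny(filtered, 50, 150)     # Canny edge detection to segment objects
    cascade = cv2.CascadeClassifier(cascade_path)
    detections = cascade.detectMultiScale(edges, scaleFactor=1.1, minNeighbors=3)
    boxes = []
    for (x, y, w, h) in detections:
        # The body box is taken here as the pixel count of the minimum bounding rectangle.
        boxes.append({"rect": (int(x), int(y), int(w), int(h)), "pixels": int(w) * int(h)})
    return boxes
```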
在一个可行的实施例中,根据后车A的偏移量和车体框获取车距d,包括:
根据后车A的偏移量和车体框查询第一关系表,以得到后车A的车距d;其中,后车A的车距d为后车A的偏移量和车体框对应的距离,第一关系表为偏移量及车体框与距离的对应关系表。
在此需要说明的是,在获取后车A的车体框后,可根据后车A的车体框确定后车A的车型,比如,卡车、轿车和越野车(sport utility vehicle,SUV),由于不同车型车宽大小不一样,同一车型车头宽度相差不大,因此可根据后车A的车型确定后车A的车头实际宽度。不同偏移量时,车体框会包括车头和不同比例的车侧面,根据偏移量,获取车体框中车头部分的宽度,从而根据后车A的车头实际宽度和在后车图像中后车A的车头宽度可确定后车A与自车之间的车距d。
在一个示例中,在使用第一关系表之前,获取该第一关系表。参见下表1,下表1为偏移量及车体框与距离的对应关系表,即上述第一关系表。
Figure PCTCN2020103362-appb-000001
表1
在一个示例中,在使用第一关系表之前,获取该第一关系表。可以从第三设备中获取的,也可以是自身创建的。其中通过不同的车体框(CarBox)及偏移量与对应的车距d建立第一关系表,该关系表也可表示为偏移量与车体框及距离之间的函数d=f(CarBox,偏移量)。上述第一关系表的使用范围是本车后方5车道内,即向左两车道,向右两车道与本车道。此查表关系与选用的后视摄像头的视野范围、像素、安装位置有关。以后视摄像头安装在后车牌上方,像素为1280*960,摄像头视野为横向120°纵向90°为例。
在此需要说明的是,表1只是一个示例,用于表示偏移量、车体框及距离之间的关系,各变量的变化趋势不是对本申请保护范围的限制。
在一个可行的实施例中,根据后车A的车距d及偏移量查询第二关系表,以得到后车 A的第三辅助角度,其中,后车A的第三辅助角度为后车A的车距d和偏移量对应的第三辅助角度,第二关系表为距离和偏移量与第三辅助角度的对应关系表。
在一个示例中,在使用第二关系表之前,获取该第二关系表。参见下表2,下表2为偏移量及车体框与距离的对应关系表,即上述第二关系表。
Figure PCTCN2020103362-appb-000002
表2
在一个示例中,在使用第二关系表之前,获取该第二关系表。可以从第三设备中获取的,也可以是自身创建的。其中通过不同的偏移量及车距d与对应的第三辅助角度建立第一关系表,该关系表也可表示为d与车体框及第三辅助角度∠COY之间的函数∠COY=f(d,偏移量),此关系表使用范围是本车后方5车道内,即向左两车道,向右两车道与本车道。此关系表与选用的后视摄像头的视野范围、像素、安装位置有关。建立此关系表时,先选定后视摄像头、固定其安装位置,然后将用于测试的其他车辆放置在不同的位置,记录其车距d,读取后车图像,记录第三辅助角度∠COY与偏移量,即可得到第二关系表,横轴为距离d,单位为m,纵轴为偏移量,单位为cm。以后视摄像头安装在后车牌上方,像素为1280*960,摄像头视野为横向120°纵向90°为例。
在一个可行的实施例中,根据第i辆后车的第三辅助角度及车体框获取第i辆后车的重要概率,包括:
根据第i辆后车的第三辅助角度及车体框查询第四关系表,以得到第i辆后车的第三辅助角度及车体框对应的重要概率;
其中,第i辆后车的第三辅助角度及车体框对应的重要概率为第i辆后车的重要概率,第四关系表为第三辅助角度及车体框与重要概率的对应关系表。
在此需要说明的是,当后车图像中只有一辆后车时,该后车的重要概率为1。当自车后面有多条车道时,在每条车道中仅选择离自车最近的后车来确定其重要概率。
在一个示例中,在使用第四关系表之前,建立该第四关系表。参见下表3,下表3为第三辅助角度及车体框与重要概率的对应关系表,即上述第四关系表。
Figure PCTCN2020103362-appb-000003
Figure PCTCN2020103362-appb-000004
表3
在此需要说明的是,上述第三关系表的使用范围是本车后方5车道内,即向左两车道,向右两车道与本车道。此查表关系与选用的后视摄像头的视野范围、像素、安装位置有关。以后视摄像头安装在后车牌上方,像素为1280*960,摄像头视野为横向120°纵向90°为例。
在一个可行的实施例中,根据目标后车的第三辅助角度及车距d获取目标后车的第一辅助角度,包括:
根据第三辅助角度和车距d查询第三关系表,以得到第三辅助角度和车距d对应的第一辅助角度,第三辅助角度和车距d对应的第一辅助角度为后车目标的第一辅助角度,第三关系表为第三辅助角度及车距d与第一辅助角度的对应关系表。
在另一个示例中,在使用第三关系表之前,获取该第三关系表。可以从第三设备中获取的,也可以是自身创建的。其中通过不同的车距d及第三辅助角度∠COY与对应的第一辅助角度(∠CO′L′)建立第三关系表,该关系表也可表示为第一辅助角度与车距d及第三辅助角度之间的函数∠CO′L′=f(d,∠COY),此关系表使用范围是本车后方5车道内,即向左两车道,向右两车道与本车道。
参见表4,表4为第三辅助角度及车距d与第一辅助角度的对应关系表,即上述第三关系表。
Figure PCTCN2020103362-appb-000005
表4
第三关系表与选用的后视摄像头的视野范围、像素、安装位置有关。建立此关系表时,先选定后视摄像头、固定其安装位置,然后将用于测试的其他车辆放置在不同位置,记录其车距d,读取后车图像,记录后车在其中的角度∠COY及后车在后视镜中的角度∠CO′L′,即可得到第二关系表,横轴为车距d,单位为m,纵轴为∠COY值。以后视摄像头安装在后车牌上方,像素为1280*960,摄像头视野为横向120°纵向90°为例。
在此需要说明的是,由于第一辅助角度与第一参考点在目标后视镜上的位置相关,因此根据后车图像获取目标后车的第一辅助角度,与第三关系表中的第一辅助角度需要一致, 即两者都是基于同一第一参考点得到的角度。
在另一个可行的实施例中,根据所述目标后车的偏移量及车体框获取所述目标后车的第一辅助角度,包括:
根据第三辅助角度和车体框查询第六关系表,获取第三辅助角度和车体框对应的第一辅助角度,第三辅助角度和车体框对应的第一辅助角度为目标后车的第一辅助角度;
第六关系表为偏移量、车体框及第一辅助角度之间的对应关系表。
在此需要说明的是,第五关系表的获取方式可参见上述第一关系表、第二关系表和第三关系表的获取方式,在此不在叙述。
S203、根据驾驶员在目标后视镜中的视野、人眼空间位置和目标后视镜位置计算得到第二辅助角度。
其中,第二辅助角度为第三直线与水平视野中心线形成的夹角的角度,第三直线为经过驾驶员人眼位置与第二参考点的直线,该第二参考点为水平视野中心线与目标后视镜的镜面的交点,水平视野中心线为水平视野角度的角平分线。
如图6所示,E为驾驶员人眼的空间位置;O’点为右后视镜中心位置,即第二参考点为右后视镜的中心位置;L’点为经过右后视镜中心位置的镜面垂直线,根据镜面反射原理,经镜面的入射光线、其对应的出射光线与此垂直线夹角相同;C为后车的前脸中心,当前驾驶员在右后视镜中的水平视野是A 1;调节后的驾驶员在右后视镜中的水平视野是A 2;α为后视镜的水平调节角度。通过计算得到人眼通过后视镜中点看向水平视野A 1中心时的角度∠EO′V 1,即第二辅助角度。直线O′V 1为水平视野A 1的中心线,即水平视野的中心线,直线O′V 2为水平视野A 2的中心线,即调节右后视镜的水平角度后驾驶员水平视野的中心线。
S204、根据第一辅助角度和第二辅助角度获取目标后视镜的水平调节角度,根据该水平调节角度调节目标后视镜的水平角度。
在一个具体的实施例中,根据第一辅助角度和第二辅助角度获取目标后视镜的水平调节角度,包括:
根据第一辅助角度和第二辅助角度查询第五关系表,以得到第一辅助角度和第二辅助角度对应的水平调节角度,
第一辅助角度和第二辅助角度对应的水平调节角度为目标后视镜的水平调节角度,第五关系表为第一辅助角度和第二辅助角度与水平调节角度的对应关系表。
在另一个示例中,在使用第四关系表之前,获取该第四关系表。可以从第三设备中获取的,也可以是自身创建的。其中通过不同的第一辅助角度∠CO′L′、第二辅助角度∠EO′V 1与对应的水平调节角度α建立第四关系表。该关系表也可看成为第一辅助角度∠CO′L′及第二辅助角度∠EO′V 1与水平调节角度α之间的函数α=f(∠CO′L′,∠EO′V 1),此关系表使用范围是本车后方5车道内,即向左两车道,向右两车道与本车道。此关系表与选用的后视摄像头的视野范围、像素、安装位置有关。建立此表时,先选定后视摄像头、固定其安装位置;选定后视镜的款式、安装位置。然后将用于测试的其他车辆放置在不同位置,记录∠CO′L′、∠EO′V 1,调整后视镜,使得后车出现在后视镜中心,记录此时的水平调节角度α。下表5以后视摄像头安装在后车牌上方,像素为1280*960,摄像头视野为横向120°纵向90°为例。
参见表5,表5为第一辅助角度及第二辅助角度与水平调节角度的对应关系表,即上述第五关系表。
Figure PCTCN2020103362-appb-000006
表5
在此需要说明的是,上述目标后视镜可以为左后视镜或右后视镜。换言之,自车的左后视镜和右后视镜的水平角度均可按照上述方法进行调节。
在此需要说明的是,由于第二辅助角度与第二参考点在目标后视镜上的位置相关,因此根据水平视野角度、人眼空间位置和目标后视镜位置计算得到第二辅助角度,与第五关系表中的第二辅助角度需要一致,即两者都是基于同一第一参考点得到的角度。
可以看出,在本申请实施例的方案中,采用本实施例使得后视镜的水平角度基于驾驶员视野及车外场景进行调节,进而使得驾驶员随时可以通过后视镜观察值得关注车辆。相比于传统的后视镜调节,本实施例采用自适应调节的方式,不需要手动调节,避免了驾驶员因分心影响驾驶安全。并且采用本实施的自适应调节的方式,使得在后车超车过程中驾驶员无盲区,进而保证了驾驶安全。本实施的方案可采用不同模态的摄像头,比如CMS摄像头、DMS摄像头。并且本方案不依赖前置摄像头,体现了对硬件要求低的优点。
下面介绍后视镜垂直方向上的调节方式。
参见图7,图7为本发明实施例提供的另一种后视镜自适应调节方法的流程示意图。如图7所示,该方法包括:
S701、获取驾驶员人眼的空间位置及自车目标后视镜的空间位置,并根据人眼的空间位置和目标后视镜的空间位置获取垂直视野角度。
在此需要说明的是,后车图像中可包括在自车后方多车道上行驶的车辆。
可选地,后视摄像头可置于自车的车尾中心位置,还可位于可以获取后车图像的位置,比如自车的左上角、右上角、左下角、右下角或者其他位置。
S702、根据驾驶员人眼的空间位置及垂直视野角度计算得到自车的辅助调节角度。
其中,该自车的辅助调节角度为垂直视野中心线与第五直线形成的夹角的角度,该第五直线为经过驾驶员人眼的空间位置及第三参考点的直线与垂直视野中心线形成的夹角的角度,该第三参考点为该目标后视镜上的任意位置点,垂直视野中心线为垂直视野角度的角平分线。
进一步地,第三参考点为目标后视镜的中心位置点。
基于图8来说明辅助调节角度,第三参考点为目标后视镜的中心位置点。图8给出了自车后视镜在垂直方向上的视野示意图。如图8所示,点E为驾驶员人眼的空间位置,点O′为目标后视镜的中心位置,即第三参考点。点M为以目标后视镜为对称轴点E的对称点。从电子系统获得当前后视镜垂直方向角度,即可通过镜面反射原理,计算驾驶员垂直视野H,人眼通过后视镜中点O′看向视野H中心时的角度∠EO′V,即辅助调节角度,第三直线为直线EO’,直线O’V为垂直视野中心线。垂直视野H对应的角度为垂直视野角度∠UMD。
S703、获取自车后视摄像头采集的后车图像,并根据所述后车图像获取天空地面比R。
在一个可行的实施例中,根据所述后车图像获取天空地面比R,包括:
将后车图像纵向划分为多个图像带;从多个图像带中获取目标图像带,目标图像带为多个图像带中天空和地面过渡连续的图像带;统计目标图像带中天空占据的像素个数和地面占据的像素个数;根据天空占据的像素个数和地面占据的像素个数计算得到天空地面比R,天空地面比R为天空占据的像素个数和地面占据的像素个数的比值。
举例说明,如图9所示的后车图像,以图中的曲线为边界,图像的上半部分为天空,下半部分为地面,中间的灰色框为后车。如图8所示,将后车图像纵向划分为9个图像带,分别为图像带1、2、3、4、5、6、7、8和9。其中,由于后车使得图像带4、5和6中天空和地面过渡不连续,因此目标图像带只包括图像带1、2、3、7、8和9。然后统计图像带1、2、3、7、8和9中天空和地面分别占据的像素个数,最后根据天空和地面分别占据的像素个数计算得到的天空地面比R。
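The band-splitting computation in this example can be sketched as follows. The brightness-threshold sky classifier and the "at most one change per column" continuity test are simplifying assumptions of the sketch; this application itself only requires selecting bands in which the sky-to-ground transition is continuous.

```python
import numpy as np

def sky_ground_ratio(gray, n_bands=9, sky_threshold=170):
    """Sky-ground ratio R over the bands with a continuous sky/ground transition."""
    sky_mask = gray >= sky_threshold           # naive brightness-based sky classifier (assumption)
    h, w = gray.shape
    band_edges = np.linspace(0, w, n_bands + 1, dtype=int)
    sky_px, ground_px = 0, 0
    for b in range(n_bands):
        band = sky_mask[:, band_edges[b]:band_edges[b + 1]]
        # Count, per column, how many times the sky mask changes from top to bottom;
        # keep the band only if every column changes at most once (no vehicle blocking it).
        changes = np.count_nonzero(np.diff(band.astype(np.int8), axis=0), axis=0)
        if band.size and np.all(changes <= 1):
            sky = int(band.sum())
            sky_px += sky
            ground_px += band.size - sky
    return sky_px / ground_px if ground_px else float("inf")
```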
S704、根据所述自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度,并根据所述垂直调节角度将所述目标后视镜调节至目标垂直角度。
在一个可行的实施例中,根据辅助调节角度和天空地面比R获取目标后车的垂直调节角度,包括:
根据辅助调节调度和天空地面比R查询辅助关系表,以得到辅助调节角度和天空地面比R对应的垂直调节角度,其中,目标后车的垂直调节角度为辅助调节角度和天空地面比R对应的垂直调节角度,辅助关系表为辅助调节调度和天空地面比R与垂直调节角度对应的关系表。
在此需要说明的是,由于辅助调节角度与第三参考点在目标后视镜上的位置相关,因此根据驾驶员人眼的空间位置及垂直视野角度计算得到自车的辅助调节角度,与辅助关系表中的辅助调节角度需要一致,即两者都是基于同一第三参考点得到的角度。
在一个示例中,在使用辅助关系表之前,获取该辅助关系表。可以从第三设备中获取的,也可以是自身创建的。其中通过不同的辅助调节角度∠EO′V和天空地面比R与对应的垂直调节角度θ建立第五关系表。该关系表也可看成为辅助调节角度∠EO′V和天空地面比R与垂直调节角度θ之间的函数θ=f(R,∠EO′V 1)。此第五关系表与选用的后视摄像头的视野范围、像素、安装位置有关。建立此关系表时,先选定后视摄像头、固定其安装位置;选定后视镜的款式、安装位置。下表6以后视摄像头安装在后车牌上方,像素为1280*960,摄像头视野为横向120°纵向90°为例。
参见表6,表6为辅助调节角度及天空地面比R与垂直调节角度的对应关系表,即上 述第辅助关系表。
Figure PCTCN2020103362-appb-000007
表6
在获取垂直调节角度后,根据该垂直调节角度将目标后视镜的垂直角度调节至目标垂直角度。
可以看出,在本申请实施例的方案中,采用本实施例使得后视镜的垂直角度基于驾驶员视野和车外场景,或者根据当前行驶道路的坡度进行调节,进而使得驾驶员随时可以通过后视镜观察自车后面的车况。相比于传统的后视镜调节,本实施例采用自适应调节的方式,不需要手动调节,避免了驾驶员因分心手动调节后视镜的角度而影响驾驶安全。本实施的方案可采用不同模态的摄像头,比如CMS摄像头、DMS摄像头。并且本方案不依赖前置摄像头,体现了对硬件要求低的优点。
参见图10,图10为本发明实施例提供的另一种后视镜自适应调节方法的流程示意图。如10所示,该方法包括:
S1001、获取自车后视摄像头采集的后车图像,将该后车图像转换为灰度图,并计算灰度图中像素的平均值。
在此需要说明的是,后车图像中可包括在自车后方多车道上行驶的车辆。
可选地,后视摄像头可置于自车的车尾中心位置,还可位于可以获取后车图像的位置,比如自车的左上角、右上角、左下角、右下角或者其他位置。
在此需要说明的是,对于黑夜或极端天气情况(比如大雾、大雨、大雪等能见度较低的天气),是无法基于后车图像获取天空地面比R,因此在使用后车图像时,需要对后车图像进行检测确定能否获取天空地面比R;若无法通过后车图像获取天空地面比R,则根据自车当前行驶的坡度来调节目标后视镜。具体判断能否获取天空地面比R是通过将后车图像转换为灰度图,并计算灰度图中像素的平均值,再判断该平均值是否不小于预设值。
S1002、判断灰度图像素的平均值是否不小于预设值。
其中,若灰度图像素的平均值不小于预设值,则执行步骤S1003和S1004;若灰度图像素的平均值小于预设值,则执行步骤1005和S1006。
在此需要说明的是,若后车图像对应的灰度图像素的平均值小于预设值,则表示该后车图像的过暗,比如在黑夜或大雾天获取的图像,此时无法计算出天空地面比R。
S1003、获取驾驶员人眼的空间位置及自车目标后视镜的空间位置,并根据人眼的空间 位置和目标后视镜的空间位置获取垂直视野角度。
其中,该自车的辅助调节角度为垂直视野中心线与第五直线形成的夹角的角度,该第五直线为经过驾驶员人眼的空间位置及第三参考点的直线与垂直视野中心线形成的夹角的角度,该第三参考点为该目标后视镜上的点。
可选地,第三参考点为目标后视镜的中心位置点。
在此需要说明的是,获取自车驾驶员人眼的空间位置及自车目标后视镜的位置的过程具体可参见上述步骤S201的相关描述,在此不再叙述。
S1004、根据后车图像获取天空地面比R;根据辅助调节角度和天空地面比R获取目标后车的垂直调节角度,并根据垂直调节角度将目标后视镜调节至目标垂直角度。
在一个可行的实施例中,根据所述后车图像获取天空地面比R,包括:
将后车图像纵向划分为多个图像带;从多个图像带中获取目标图像带,目标图像带为多个图像带中天空和地面过渡连续的图像带;统计目标图像带中天空占据的像素个数和地面占据的像素个数;根据天空占据的像素个数和地面占据的像素个数计算得到天空地面比R,天空地面比R为天空占据的像素个数和地面占据的像素个数的比值。
在一个可行的实施例中,根据辅助调节角度和天空地面比R获取目标后车的垂直调节角度,包括:
根据辅助调节调度和天空地面比R查询第五关系表,以得到辅助调节角度和天空地面比R对应的垂直调节角度,其中,目标后车的垂直调节角度为辅助调节角度和天空地面比R对应的垂直调节角度,第五关系表为辅助调节调度和天空地面比R与垂直调节角度对应的关系表。
在此需要说明的是,步骤S1001-S1004的具体描述可参见步骤S701-S704的相关描述,在此不再叙述。
S1005、获取预设时长内自车所行驶道路的坡度,并根据预设时长内自车所行驶道路的坡度确定自车的行驶状态。
在无法计算天空地面比的情况下,可通过自车的陀螺仪获取自车所行驶道路的坡度,进而获取自车的行驶状态。
具体地,获取预设内自车所行驶道路的坡度,进而可根据预设时长内自车所行驶道路的坡度确定自车的行驶状态。其中,自车的行驶状态包括平地驾驶状态、长下坡状态、进入下坡状态、脱离下坡状态、长上坡状态、进入上坡状态和脱离上坡状态。
如图11所示,图中的方框表示自车。当在预设时长内检测到所行驶道路的坡度的绝对值均趋近于零时,确定自车的行驶状态为平地驾驶状态,如图11中的a图所示;
当在预设时长内检测到的所行驶道路的坡度均小于0,且该坡度保持不变时,确定自车的行驶状态为长下坡状态,如图11中的b图所示;
当在预设时长内检测到的所行驶道路的坡度均小于0,且该坡度的绝对值逐渐增大时,确定自车的行驶状态为进入下坡状态,如图11中的c图所示;
当在预设时长内检测到的所行驶道路的坡度均小于0,且该坡度的绝对值逐渐减小甚至趋近于零时,确定自车的行驶状态为脱离下坡状态,如图11中的d图所示;
当在预设时长内检测到的所行驶道路的坡度均大于0,且该坡度保持不变时,确定自 车的行驶状态为长上坡状态,如图11中的e图所示;
当在预设时长内检测到的所行驶道路的坡度均大于0,且该坡度的绝对值逐渐增大至坡度保持不变时,确定自车的行驶状态为进入上坡状态,如图11中的f图所示;
当在预设时长内检测到的所行驶道路的坡度均小于0,且该坡度的绝对值逐渐减小至趋近于零时,确定自车的行驶状态为脱离上坡状态,如图11中的g图所示。
在此需要说明的是,由于道路存在坑洼的路面,导致车辆在行驶过程中会存在颠簸,因此上述“坡度保持不变”具体是指坡度在区间[a-b,a+b]之间变化,其中,a表示自车当前所在道路的坡度的绝对值,b为变化幅度,为一个较小值。比如0.01,0.02,0.05,0.1,0.5,0.6,0.8,1或者其他较小值。
S1006、根据自车的行驶状态和坡度β将目标后视镜的垂直角度调节至目标垂直角度,该坡度β为自车当前所在道路的坡度的绝对值。
具体地,对于不同的驾驶状态,为了保证后视镜能够实时观察到后方的车况,需要基于自车的行驶状态调整目标后视镜的垂直角度调节至目标垂直角度。
当自车的行驶状态为平地行驶状态时,目标垂直角度为预设角度θ;当自车的行驶状态为长下坡行驶状态时,目标垂直角度为预设角度θ;当自车的行驶状态为进入下坡行驶状态时,目标垂直角度为θ-β/2;当自车的行驶状态为脱离下坡行驶状态时,目标垂直角度为θ+β/2;当自车的行驶状态为长上坡行驶状态时,目标垂直角度为预设角度θ;当自车的行驶状态为进入上坡行驶状态时,目标垂直角度为θ-β/2;当自车的行驶状态为脱离上坡行驶状态时,目标垂直角度为θ+β/2。
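Restating the mapping just listed as a compact Python sketch (the state labels are illustrative identifiers, not names used in this application):

```python
def target_vertical_angle(driving_state, theta, beta):
    """Target vertical angle from the driving state, preset angle theta and slope magnitude beta."""
    if driving_state in ("flat", "long_downhill", "long_uphill"):
        return theta
    if driving_state in ("entering_downhill", "entering_uphill"):
        return theta - beta / 2.0
    if driving_state in ("leaving_downhill", "leaving_uphill"):
        return theta + beta / 2.0
    raise ValueError(f"unknown driving state: {driving_state}")
```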
在一个可行的实施例中,在采用图7所示实施例的方法对目标后视镜的垂直角度调节完成后,判断调节后目标后视镜的垂直角度是否用户需求,若不满足用户需求,可根据用户的指令来执行基于自车的行驶状态来调节目标后视镜的垂直角度的动作,从而能够更好地满足用户的需求。
在另一个可行的实施例中,先分别采用图7所示实施例的方法和基于自车的行驶状态分别确定目标后视镜的垂直调节角度,然后对得到的垂直调节角进行处理,比如求平均值,加权求和等,得到处理后的垂直调节角度,最后基于处理后的垂直调节角度将目标后视镜的垂直角度调节至目标垂直角度。采用此方式可以有效避免采用上述两种方式中一种得到的垂直调节角度的不准确性。
可以看出,在本申请实施例的方案中,采用本实施例使得后视镜的垂直角度基于驾驶员视野和车外场景,或者根据当前行驶道路的坡度进行调节,进而使得驾驶员随时可以通过后视镜观察自车后面的车况。相比于传统的后视镜调节,本实施例采用自适应调节的方式,不需要手动调节,避免了驾驶员因分心手动调节后视镜的角度而影响驾驶安全。本实施的方案可采用不同模态的摄像头,比如CMS摄像头、DMS摄像头。并且本方案不依赖前置摄像头,体现了对硬件要求低的优点。
参见图12,本申请实施例提供的一种后视镜自适应调节装置的结构图。如图12所示,该后视镜自适应调节装置1200,包括:
获取模块1201,用于获取自车驾驶员人眼空间位置和目标后视镜空间位置,并根据人 眼空间位置及目标后视镜空间位置获取驾驶员在目标后视镜中的水平视野角度;获取后车图像,根据后车图像获取目标后车的第一辅助角度,该第一辅助角度为第一直线与第二直线形成的夹角的角度,其中,第一直线为经过目标后车和第一参考点的直线,第二直线为经过第一参考点且垂直于目标后视镜的直线,第一参考点为目标后视镜上的点,后车图像为后视摄像头获取的,且后车图像中包括目标后车;
计算模块1202,用于根据水平视野角度、人眼空间位置和目标后视镜位置计算得到第二辅助角度,该第二辅助角度为第三直线与水平视野中心线形成的夹角的角度,第三直线为经过驾驶员人眼位置与第二参考点的直线,该第二参考点为水平视野中心线与目标后视镜的镜面的交点,水平视野中心线为水平视野角度的角平分线;
获取模块1201,还用于根据第一辅助角度和第二辅助角度获取目标后视镜的水平调节角度;
调节模块1203,用于根据水平调节角度调节目标后视镜的水平角度。
在一个可行的实施例中,后车图像中包括M辆车,M为大于或者等于1的整数,在根据后车图像获取目标后车的第一辅助角度的方面,获取模块1201具体用于:
根据后车图像获取后车A的车体框和偏移量,该偏移量为在后车图像中后车A的前脸中心位置与后车图像的纵向中心线之间的距离,车体框为后车A的轮廓在后车图像中所占据的像素点的个数;根据后车A的偏移量和车体框获取车距d,该车距d为后车A的前脸与自车车尾之间的距离;根据后车A的车距d及偏移量获取后车A的第三辅助角度,第三辅助角度为第四直线与自车的横向中心线形成夹角的角度,第四直线为经过后视摄像头的位置和后车A前脸中心位置的直线;
M=1时,后车A为目标后车,根据目标后车的第三辅助角度及车距d获取目标后车的第一辅助角度;
当M大于1时,后车A为M辆后车中的第i辆,i=1,2,…,M,根据第i辆后车的第三辅助角度及车距d获取第i辆后车的重要概率,并将重要概率最大的后车确定为目标车辆,根据目标后车的第三辅助角度及车体框获取目标后车的第一辅助角度。
在一个可行的实施例中,在根据后车A的偏移量和车体框获取车距d的方面,获取模块1201具体用于:
根据后车A的偏移量和车体框查询第一关系表,以得到后车A的车距d;其中,后车A的车距d为后车A的偏移量和车体框对应的距离,第一关系表为偏移量及车体框与距离的对应关系表;
在根据后车A的车距d及偏移量获取后车A的第三辅助角度的方面,获取模块1201具体用于:
根据后车A的车距d及偏移量查询第二关系表,以得到后车A的第三辅助角度,其中,后车A的第三辅助角度为后车A的车距d和偏移量对应的第三辅助角度,第二关系表为距离和偏移量与第三辅助角度的对应关系表。
在一个可行的实施例中,在根据目标后车的第三辅助角度和车距d获取目标后车的第一辅助角度的方面,获取模块1201具体用于:
根据目标后车的第三辅助角度和车距d查询第三关系表,以得到目标后车的第三辅助 角度和车距d对应的第一辅助角度,目标后车的第三辅助角度和车距d对应的第一辅助角度为后车目标的第一辅助角度,第三关系表为第三辅助角度及车距d与第一辅助角度的对应关系表。
在一个可行的实施例中,在根据后车图像获取后车A的车体框的方面,获取模块1201具体用于:
对后车图像进行中值滤波,以得到滤波后的图像;根据canny边缘检测算法对滤波后的图像进行边缘检测,得到边缘检测结果;根据haar算子从边缘检测结果中获取后车A的轮廓,并计算得到后车A的轮廓中像素点的个数。
在一个可行的实施例中,在根据第i辆后车的第三辅助角度及车体框获取第i辆后车的重要概率的方面,获取模块1201具体用于:
根据第i辆后车的第三辅助角度及其车体框查询第四关系表,以得到第i辆后车的第三辅助角度及其车体框对应的重要概率;其中,第i辆后车的第三辅助角度及其车体框对应的重要概率为第i辆后车的重要概率,第四关系表为第三辅助角度及车体框与重要概率的对应关系表。
在一个可行的实施例中,在根据第一辅助角度和第二辅助角度获取目标后视镜的水平调节角度的方面,获取模块1201具体用于:
根据第一辅助角度和第二辅助角度查询第五关系表,以得到第一辅助角度和第二辅助角度对应的水平调节角度;其中,第一辅助角度和第二辅助角度对应的水平调节角度为目标后视镜的水平调节角度,第五关系表为第一辅助角度和第二辅助角度与水平调节角度的对应关系表。
需要说明的是,上述各模块(获取模块1201、计算模块1202和调节模块1203)用于执行上述方法的相关步骤。比如获取模块1201用于执行步骤S201和S202的相关内容,计算模块1202用于执行步骤S203的相关内容,调节模块1203用于执行步骤S204的相关内容。
在本实施例中,后视镜自适应调节装置1200是以模块的形式来呈现。这里的“模块”可以指特定应用集成电路(application-specific integrated circuit,ASIC),执行一个或多个软件或固件程序的处理器和存储器,集成逻辑电路,和/或其他可以提供上述功能的器件。此外,以上获取模块1201、计算模块1202和调节模块1203可通过图15所示的后视镜自适应调节装置的处理器1501来实现。
参见图13,本申请实施例提供的另一种后视镜自适应调节装置的结构图。如图13所示,该后视镜自适应调节装置1300,包括:
获取模块1301,用于获取驾驶员人眼的空间位置及自车目标后视镜的空间位置,并根据人眼的空间位置和目标后视镜的空间位置获取垂直视野角度;
计算模块1302,用于根据驾驶员人眼的空间位置及垂直视野角度计算得到自车的辅助调节角度,自车的辅助调节角度为垂直视野中心线与第五直线形成的夹角的角度;其中,第五直线为经过驾驶员人眼的空间位置及第三参考点的直线与垂直视野中心线形成的夹角的角度,第三参考点为目标后视镜上的点,垂直视野中心线为垂直视野角度的角平分线;
获取模块1301,还用于获取自车后视摄像头采集的后车图像,并根据后车图像获取天空地面比R;根据自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度;
调节模块1303,用于根据垂直调节角度将目标后视镜调节至目标垂直角度。
在一个可行的实施例中,在根据后车图像获取天空地面比R的方面,获取模块1301具体用于:
将后车图像纵向划分为多个图像带;从该多个图像带中获取目标图像带,该目标图像带为多个图像带中天空和地面过渡连续的图像带;统计目标图像带中天空占据的像素个数和地面占据的像素个数;根据天空占据的像素个数和地面占据的像素个数计算得到天空地面比R,该天空地面比R为天空占据的像素个数和地面占据的像素个数的比值。
在一个可行的实施例中,在根据自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度的方面,获取模块1301具体用于:
根据辅助调节角度和天空地面比R查询辅助关系表,获取辅助调节角度和天空地面比R对应的垂直调节角度;其中,辅助调节角度和天空地面比R对应的垂直调节角为目标后视镜的垂直调节角度,辅助关系表为辅助调节角度及天空地面比,与垂直调节角度之间的对应关系表。
需要说明的是,上述各模块(获取模块1301、计算模块1302和调节模块1303)用于执行上述方法的相关步骤。比如获取模块1301用于执行步骤S701和S703的相关内容,计算模块1302用于执行步骤S702的相关内容,调节模块1303用于执行步骤S704的相关内容。
在本实施例中,后视镜自适应调节装置1300是以模块的形式来呈现。这里的“模块”可以指特定应用集成电路(application-specific integrated circuit,ASIC),执行一个或多个软件或固件程序的处理器和存储器,集成逻辑电路,和/或其他可以提供上述功能的器件。此外,以上获取模块1301、计算模块1302和调节模块1303可通过图16所示的后视镜自适应调节装置的处理器1601来实现。
参见图14,图14为本申请实施例提供一种后视镜自适应调节装置的结构示意图。如图14所示,该后视镜自适应调节装置1400,包括:
获取模块1401,用于获取自车后视摄像头采集的后车图像;
计算模块1402,用于将后车图像转换为灰度图,并计算灰度图中像素的平均值;
获取模块1401,还用于若平均值不小于预设值,则获取自车驾驶员人眼的空间位置及自车目标后视镜的空间位置,并根据人眼的空间位置和目标后视镜的空间位置获取垂直视野角度;
计算模块1402,还用于根据驾驶员人眼的空间位置及垂直视野角度计算得到自车的辅助调节角度,该自车的辅助调节角度为垂直视野中心线与第五直线形成的夹角的角度;
获取模块1401,还用于根据后车图像获取天空地面比R;根据自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度;
调节模块1403,用于根据垂直调节角度将目标后视镜调节至目标垂直角度。
在一个可行的实施例中,在根据后车图像获取天空地面比R的方面,获取模块1601 具体用于:
将后车图像纵向划分为多个图像带;从多个图像带中获取目标图像带,该目标图像带为多个图像带中天空和地面过渡连续的图像带;统计目标图像带中天空占据的像素个数和地面占据的像素个数;根据天空占据的像素个数和地面占据的像素个数计算得到天空地面比R,该天空地面比R为天空占据的像素个数和地面占据的像素个数的比值。
在一个可行的实施例中,在根据自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度的方面,获取模块1401具体用于:
根据辅助调节角度和天空地面比R查询辅助关系表,获取辅助调节角度和天空地面比R对应的垂直调节角度;其中,辅助调节角度和天空地面比R对应的垂直调节角为目标后视镜的垂直调节角度,辅助关系表为辅助调节角度及天空地面比R,与垂直调节角度之间的对应关系表。
在一个可行的实施例中,若平均值小于预设值,获取模块1401还用于:
获取预设时长内自车所行驶的道路的坡度;根据预设时长内自车所行驶的道路的坡度确定自车的行驶状态;
调节模块,还用于根据自车的行驶状态和坡度β将目标后视镜的垂直角度调节至目标垂直角度,该坡度β为自车当前所在道路的坡度的绝对值。
在一个可行的实施例中,调节模块1403具体用于:
当自车的行驶状态为平地行驶状态时,将目标垂直角度调节为预设角度θ;当自车的行驶状态为长下坡行驶状态时,将目标垂直角度调节为预设角度θ;当自车的行驶状态为进入下坡行驶状态时,将目标垂直角度调节为θ-β/2;当自车的行驶状态为脱离下坡行驶状态时,将目标垂直角度调节为θ+β/2;当自车的行驶状态为长上坡行驶状态时,将目标垂直角度调节为预设角度θ;当自车的行驶状态为进入上坡行驶状态时,将目标垂直角度调节为θ-β/2;当自车的行驶状态为脱离上坡行驶状态时,将目标垂直角度调节为θ+β/2。
需要说明的是,上述各模块(获取模块1401、计算模块1402和调节模块1403)用于执行上述方法的相关步骤。比如获取模块1401用于执行步骤S1001-S1005的相关内容,计算模块1302用于执行步骤S1001的相关内容,调节模块1403用于执行步骤S1006的相关内容。
在本实施例中,后视镜自适应调节装置1400是以模块的形式来呈现。这里的“模块”可以指特定应用集成电路(application-specific integrated circuit,ASIC),执行一个或多个软件或固件程序的处理器和存储器,集成逻辑电路,和/或其他可以提供上述功能的器件。此外,以上获取模块1401、计算模块1402和调节模块1403可通过图16所示的后视镜自适应调节装置的处理器1601来实现。
在此需要说明的是,图12与图13或图14所示的装置可以为同一个装置,或者不同的装置。图13和图14可以为同一个装置或者不同的装置。
如图15所示调节装置1500可以以图15中的结构来实现,该调节装置1500包括至少一个处理器1501,至少一个存储器1502以及至少一个通信接口1503。所述处理器1501、所述存储器1502和所述通信接口1503通过所述通信总线连接并完成相互间的通信。
处理器1501可以是通用中央处理器(CPU),微处理器,特定应用集成电路(application-specific integrated circuit,ASIC),或一个或多个用于控制以上方案程序执行的集成电路。
通信接口1503,用于与其他设备或通信网络通信,如以太网,无线接入网(RAN),无线局域网(Wireless Local Area Networks,WLAN)等。
存储器1502可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其他类型的动态存储设备,也可以是电可擦可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。存储器可以是独立存在,通过总线与处理器相连接。存储器也可以和处理器集成在一起。
其中,所述存储器1502用于存储执行以上方案的应用程序代码,并由处理器1501来控制执行。所述处理器1501用于执行所述存储器1502中存储的应用程序代码。
存储器1502存储的代码可执行以上图2所示提供的一种后视镜自适应调节的方法,比如:
获取自车驾驶员人眼空间位置和目标后视镜空间位置,并根据人眼空间位置及目标后视镜空间位置获取驾驶员在目标后视镜中的水平视野角度;获取后车图像,并根据后车图像获取目标后车的第一辅助角度,其中,第一辅助角度为第一直线与第二直线形成的夹角的角度,第一直线为经过目标后车和第一参考点的直线,第二直线为经过第一参考点且垂直于目标后视镜的直线,第一参考点为目标后视镜上的点,后车图像为后视摄像头获取的,且后车图像中包括目标后车;根据水平视野角度、人眼空间位置和目标后视镜位置计算得到第二辅助角度,第二辅助角度为第三直线与水平视野中心线形成的夹角的角度,第三直线为经过驾驶员人眼位置与第二参考点的直线,第二参考点为水平视野中心线与目标后视镜的镜面的交点;根据第一辅助角度和第二辅助角度获取目标后视镜的水平调节角度;根据水平调节角度调节目标后视镜的水平角度。
在此需要说明的是,处理器根据水平调节角度调节目标后视镜的水平角度,可以是处理器根据水平调节角度直接控制目标后视镜,以调节目标后视镜的水平角度,或者是处理器向目标后视镜的控制装置发送控制指令,以指示该控制装置根据水平调节角度调节目标后视镜的水平角度。
如图16所示调节装置1600可以以图16中的结构来实现,该调节装置1600包括至少一个处理器1601,至少一个存储器1602以及至少一个通信接口1603。所述处理器1601、所述存储器1602和所述通信接口1603通过所述通信总线连接并完成相互间的通信。
处理器1601可以是通用中央处理器(CPU),微处理器,特定应用集成电路(application-specific integrated circuit,ASIC),或一个或多个用于控制以上方案程序执行的集成电路。
通信接口1603,用于与其他设备或通信网络通信,如以太网,无线接入网(RAN), 无线局域网(Wireless Local Area Networks,WLAN)等。
存储器1602可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其他类型的动态存储设备,也可以是电可擦可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)或其他光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。存储器可以是独立存在,通过总线与处理器相连接。存储器也可以和处理器集成在一起。
其中,所述存储器1602用于存储执行以上方案的应用程序代码,并由处理器1601来控制执行。所述处理器1601用于执行所述存储器1602中存储的应用程序代码。
存储器1602存储的代码可执行以上图7或图10提供的另一种后视镜自适应调节的方法,比如:
获取驾驶员人眼的空间位置及自车目标后视镜的空间位置,并根据人眼的空间位置和目标后视镜的空间位置获取垂直视野角度;根据驾驶员人眼的空间位置及垂直视野角度计算得到自车的辅助调节角度,自车的辅助调节角度为垂直视野中心线与第五直线形成的夹角的角度;获取自车后视摄像头采集的后车图像,并根据后车图像获取天空地面比R;根据自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度,并根据垂直调节角度将目标后视镜调节至目标垂直角度。其中,第五直线为经过驾驶员人眼的空间位置及第三参考点的直线与垂直视野中心线形成的夹角的角度,第三参考点为目标后视镜上的点,垂直视野中心线为垂直视野角度的角平分线,
或者;
获取自车后视摄像头采集的后车图像;将该后车图像转换为灰度图,并计算灰度图中像素的平均值;若该平均值不小于预设值,则获取自车驾驶员人眼的空间位置及自车目标后视镜的空间位置,并根据人眼的空间位置和目标后视镜的空间位置获取垂直视野角度;根据驾驶员人眼的空间位置及垂直视野角度计算得到自车的辅助调节角度,该自车的辅助调节角度为垂直视野中心线与第五直线形成的夹角的角度;根据后车图像获取天空地面比R;根据自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度,并根据垂直调节角度将目标后视镜调节至目标垂直角度。其中,第五直线为经过驾驶员人眼的空间位置及第三参考点的直线与垂直视野中心线形成的夹角的角度,第三参考点为目标后视镜上的点,垂直视野中心线为垂直视野角度的角平分线。
在此需要说明的是,处理器根据垂直调节角度将目标后视镜调节至目标垂直角度,可以是处理器根据垂直调节角度直接控制目标后视镜,以调节目标后视镜的垂直角度至目标垂直角度,或者是处理器向目标后视镜的控制装置发送控制指令,以指示该控制装置根据垂直调节角度将目标后视镜的垂直角度调节至目标垂直角度。
本发明实施例还提供一种计算机存储介质,其中,该计算机存储介质可存储有程序,该程序执行时包括上述方法实施例中记载的任何一种后视镜自适应调节方法的部分或全部步骤。
程序产品实施例:
在一些实施例中,所公开的方法可以实施为以机器可读格式被编码在计算机可读存储介质上的或者被编码在其它非瞬时性介质或者制品上的计算机程序指令。图17示意性地示出根据这里展示的至少一些实施例而布置的示例计算机程序产品的概念性局部视图,所述示例计算机程序产品包括用于在计算设备上执行计算机进程的计算机程序。在一个实施例中,示例计算机程序产品1700是使用信号承载介质1701来提供的。所述信号承载介质1701可以包括一个或多个程序指令1702,其当被一个或多个处理器运行时可以提供以上针对图2、图7或图10描述的功能或者部分功能。此外,图17中的程序指令1702也描述示例指令。
在一些示例中,信号承载介质1701可以包含计算机可读介质1703,诸如但不限于,硬盘驱动器、紧密盘(CD)、数字视频光盘(DVD)、数字磁带、存储器、只读存储记忆体(Read-Only Memory,ROM)或随机存储记忆体(Random Access Memory,RAM)等等。在一些实施方式中,信号承载介质1701可以包含计算机可记录介质1704,诸如但不限于,存储器、读/写(R/W)CD、R/W DVD、等等。在一些实施方式中,信号承载介质1701可以包含通信介质1705,诸如但不限于,数字和/或模拟通信介质(例如,光纤电缆、波导、有线通信链路、无线通信链路、等等)。因此,例如,信号承载介质1701可以由无线形式的通信介质1705(例如,遵守IEEE 802.11标准或者其它传输协议的无线通信介质)来传达。一个或多个程序指令1702可以是,例如,计算机可执行指令或者逻辑实施指令。在一些示例中,诸如针对图2、图7或图10描述的计算设备可以被配置为,响应于通过计算机可读介质1703、计算机可记录介质1704、和/或通信介质1705中的一个或多个传达到计算设,的程序指令1702,提供各种操作、功能、或者动作。应该理解,这里描述的布置仅仅是用于示例的目的。因而,本领域技术人员将理解,其它布置和其它元素(例如,机器、接口、功能、顺序、和功能组等等)能够被取而代之地使用,并且一些元素可以根据所期望的结果而一并省略。另外,所描述的元素中的许多是可以被实现为离散的或者分布式的组件的、或者以任何适当的组合和位置来结合其它组件实施的功能实体。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本发明并不受所描述的动作顺序的限制,因为依据本发明,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本发明所必须的。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置,可通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储器中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储器中,包括若干指令用以使得一台计算机设备(可为个人计算机、服务器或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储器包括:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储器中,存储器可以包括:闪存盘、只读存储器(英文:Read-Only Memory,简称:ROM)、随机存取器(英文:Random Access Memory,简称:RAM)、磁盘或光盘等。
以上对本发明实施例进行了详细介绍,本文中应用了具体个例对本发明的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本发明的方法及其核心思想;同时,对于本领域的一般技术人员,依据本发明的思想,在具体实施方式及应用范围上均会有改变之处,综上上述,本说明书内容不应理解为对本发明的限制。

Claims (36)

  1. 一种后视镜自适应调节方法,其特征在于,包括:
    获取自车驾驶员人眼空间位置和目标后视镜空间位置,并根据所述人眼空间位置及目标后视镜空间位置获取所述驾驶员在所述目标后视镜中的水平视野角度;
    获取后车图像,并根据所述后车图像获取目标后车的第一辅助角度,所述第一辅助角度是根据所述后车图像和第一参考点得到的,其中,所述第一参考点为所述目标后视镜上的点,所述后车图像为后视摄像头获取的,且所述后车图像中包括所述目标后车;
    根据所述水平视野角度、人眼空间位置和所述目标后视镜位置计算得到第二辅助角度,
    根据所述第一辅助角度和所述第二辅助角度获取所述目标后视镜的水平调节角度;
    根据所述水平调节角度调节所述目标后视镜的水平角度。
  2. 根据权利要求1所述的方法,其特征在于,所述第一辅助角度为第一直线与第二直线形成的夹角的角度,所述第一直线为经过所述目标后车和第一参考点的直线,所述第二直线为经过所述第一参考点且垂直于所述目标后视镜的直线;
    所述第二辅助角度为第三直线与水平视野中心线形成的夹角的角度,所述第三直线为经过所述驾驶员人眼位置与第二参考点的直线,所述第二参考点为所述水平视野中心线与所述目标后视镜的镜面的交点,所述水平视野中心线为所述水平视野角度的角平分线。
  3. 根据权利要求1或2所述的方法,其特征在于,所述后车图像中包括M辆车,M为大于或者等于1的整数,所述根据后车图像确定目标后车的第一辅助角度,包括:
    根据所述后车图像获取所述后车A的车体框和偏移量,所述偏移量为在所述后车图像中所述后车A的前脸中心位置与后车图像的纵向中心线之间的距离,所述车体框为所述后车A的轮廓在所述后车图像中所占据的像素点的个数;
    根据所述后车A的偏移量和车体框获取车距d,所述车距d为所述后车A的前脸与自车车尾之间的距离;
    根据所述后车A的车距d及偏移量获取所述后车A的第三辅助角度,所述第三辅助角度为第四直线与所述自车的横向中心线形成夹角的角度,所述第四直线为经过所述后视摄像头的位置和所述后车A前脸中心位置的直线;
    当所述M=1时,所述后车A为所述目标后车,根据所述目标后车的第三辅助角度及车距d获取所述目标后车的第一辅助角度;
    当所述M大于1时,所述后车A为所述M辆后车中的第i辆,i=1,2,…,M,根据所述第i辆后车的第三辅助角度及车体框获取所述第i辆后车的重要概率,并将所述重要概率最大的后车确定为所述目标车辆,根据所述目标后车的第三辅助角度及车距d获取所述目标后车的第一辅助角度。
  4. 根据权利要求3所述的方法,其特征在于,
    所述根据所述后车A的偏移量和车体框获取车距d,包括:
    根据所述后车A的偏移量和车体框查询第一关系表,以得到所述后车A的车距d;
    其中,所述后车A的车距d为所述后车A的偏移量和车体框对应的距离,所述第一关系表为偏移量及车体框与距离的对应关系表;
    所述根据所述后车A的车距d及偏移量获取所述后车A的第三辅助角度,包括:
    根据所述后车A的车距d及偏移量查询第二关系表,以得到所述后车A的第三辅助角度,其中,所述后车A的第三辅助角度为所述后车A的车距d和偏移量对应的第三辅助角度,所述第二关系表为距离和偏移量与第三辅助角度的对应关系表。
  5. 根据权利要求3或4所述的方法,其特征在于,所述根据所述目标后车的第三辅助角度和车距d获取所述目标后车的第一辅助角度,包括:
    根据所述目标后车的第三辅助角度和车距d查询所述第三关系表,以得到所述目标后车的第三辅助角度和车距d对应的第一辅助角度,
    其中,所述目标后车的第三辅助角度和车距d对应的第一辅助角度为所述后车目标的第一辅助角度,所述第三关系表为第三辅助角度及车距d与第一辅助角度的对应关系表。
  6. 根据权利要求3-5任一项所述的方法,其特征在于,所述根据后车图像获取所述后车A的车体框,包括:
    对所述后车图像进行中值滤波,以得到滤波后的图像;
    根据canny边缘检测算法对所述滤波后的图像进行边缘检测,得到边缘检测结果;
    根据haar算子从所述边缘检测结果中获取所述后车A的轮廓,并计算得到所述后车A的轮廓中像素点的个数。
  7. 根据权利要求3-6任一项所述的方法,其特征在于,所述根据所述第i辆后车的第三辅助角度及车体框获取所述第i辆后车的重要概率,包括:
    根据所述第i辆后车的第三辅助角度及其车体框查询第四关系表,以得到所述第i辆后车的第三辅助角度及其车体框对应的重要概率;
    其中,所述第i辆后车的第三辅助角度及其车体框对应的重要概率为所述第i辆后车的重要概率,所述第四关系表为第三辅助角度及车体框与重要概率的对应关系表。
  8. 根据权利要求1-7任一项所述的方法,其特征在于,所述根据所述第一辅助角度和所述第二辅助角度获取所述目标后视镜的水平调节角度,包括:
    根据所述第一辅助角度和第二辅助角度查询第五关系表,以得到所述第一辅助角度和第二辅助角度对应的水平调节角度;
    其中,所述第一辅助角度和第二辅助角度对应的水平调节角度为所述目标后视镜的水平调节角度,所述第五关系表为所述第一辅助角度和第二辅助角度与水平调节角度的对应关系表。
  9. 一种后视镜自适应调节方法,其特征在于,包括:
    获取驾驶员人眼的空间位置及自车目标后视镜的空间位置,并根据所述人眼的空间位置和目标后视镜的空间位置获取垂直视野角度;
    根据所述驾驶员人眼的空间位置及垂直视野角度计算得到所述自车的辅助调节角度,
    获取自车后视摄像头采集的后车图像,并根据所述后车图像获取天空地面比R;
    根据所述自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度,并根据所述垂直调节角度将所述目标后视镜调节至目标垂直角度。
  10. 根据权利要求9所述的方法,其特征在于,所述自车的辅助调节角度为垂直视野中心线与第五直线形成的夹角的角度;其中,所述第五直线为经过所述驾驶员人眼的空间位置及第三参考点的直线与垂直视野中心线形成的夹角的角度,所述第三参考点为所述目标后视镜上的点,所述垂直视野中心线为所述垂直视野角度的角平分线。
  11. 根据权利要求9或10所述的方法,其特征在于,所述根据所述后车图像获取天空地面比R,包括:
    将所述后车图像纵向划分为多个图像带;
    从所述多个图像带中获取目标图像带,所述目标图像带为所述多个图像带中天空和地面过渡连续的图像带;
    统计所述目标图像带中天空占据的像素个数和地面占据的像素个数;
    根据所述天空占据的像素个数和地面占据的像素个数计算得到所述天空地面比R,所述天空地面比R为所述天空占据的像素个数和地面占据的像素个数的比值。
  12. 根据权利要求10或11所述的方法,其特征在于,所述根据所述自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度,包括:
    根据所述辅助调节角度和天空地面比R查询辅助关系表,获取所述辅助调节角度和天空地面比R对应的垂直调节角度;
    其中,所述辅助调节角度和天空地面比R对应的垂直调节角为所述目标后视镜的垂直调节角度,所述辅助关系表为辅助调节角度及天空地面比R,与垂直调节角度之间的对应关系表。
  13. 一种后视镜自适应调节方法,其特征在于,包括:
    获取自车后视摄像头采集的后车图像;
    将所述后车图像转换为灰度图,并计算所述灰度图中像素的平均值;
    若所述平均值不小于预设值,则获取自车驾驶员人眼的空间位置及自车目标后视镜的空间位置,并根据所述人眼的空间位置和目标后视镜的空间位置获取垂直视野角度;根据所述驾驶员人眼的空间位置及垂直视野角度计算得到所述自车的辅助调节角度,根据所述后车图像获取天空地面比R;根据所述自车的辅助调节角度和所述天空地面比R获取目标后视镜的垂直调节角度,并根据所述垂直调节角度将所述目标后视镜调节至目标垂直角度。
  14. 根据权利要求13所述的方法,其特征在于,所述自车的辅助调节角度为垂直视野中心线与第五直线形成的夹角的角度,所述第五直线为经过所述驾驶员人眼的空间位置及第三参考点的直线与垂直视野中心线形成的夹角的角度,所述第三参考点为所述目标后视镜上的点,所述垂直视野中心线为所述垂直视野角度的角平分线。
  15. 根据权利要求13或14所述的方法,其特征在于,所述根据所述后车图像获取天空地面比R,包括:
    将所述后车图像纵向划分为多个图像带;
    从所述多个图像带中获取目标图像带,所述目标图像带为所述多个图像带中天空和地面过渡连续的图像带;
    统计所述目标图像带中天空占据的像素个数和地面占据的像素个数;
    根据所述天空占据的像素个数和地面占据的像素个数计算得到所述天空地面比R,所述天空地面比R为所述天空占据的像素个数和地面占据的像素个数的比值。
  16. 根据权利要求13-15任一项所述的方法,其特征在于,所述根据所述自车的辅助调节角度和天空地面比R获取目标后视镜的垂直调节角度,包括:
    根据所述辅助调节角度和天空地面比R查询辅助关系表,获取所述辅助调节角度和天空地面比R对应的垂直调节角度;
    其中,所述辅助调节角度和天空地面比R对应的垂直调节角为所述目标后视镜的垂直调节角度,所述辅助关系表为辅助调节角度及天空地面比,与垂直调节角度之间的对应关系表。
  17. 根据权利要求13-16任一项所述的方法,其特征在于,若所述平均值小于预设值,所述方法还包括:
    获取预设时长内自车所行驶的道路的坡度;
    根据所述预设时长内自车所行驶的道路的坡度确定所述自车的行驶状态;
    根据所述自车的行驶状态和坡度β将所述目标后视镜的垂直角度调节至目标垂直角度,所述坡度β为所述自车当前所在道路的坡度的绝对值。
  18. 根据权利要求17所述的方法,其特征在于,所述根据所述自车的行驶状态和坡度β确定垂直调节角度,包括:
    当所述自车的行驶状态为平地行驶状态时,将所述目标垂直角度调节为预设角度θ;
    当所述自车的行驶状态为长下坡行驶状态时,将所述目标垂直角度调节为所述预设角度θ;
    当所述自车的行驶状态为进入下坡行驶状态时,将所述目标垂直角度调节为θ-β/2;
    当所述自车的行驶状态为脱离下坡行驶状态时,将所述目标垂直角度调节为θ+β/2;
    当所述自车的行驶状态为长上坡行驶状态时,将所述目标垂直角度调节为所述预设角度θ;
    当所述自车的行驶状态为进入上坡行驶状态时,将所述目标垂直角度调节为θ-β/2;
    当所述自车的行驶状态为脱离上坡行驶状态时,将所述目标垂直角度调节为θ+β/2。
  19. An adaptive rearview mirror adjustment apparatus, comprising:
    an obtaining module, configured to obtain the spatial position of the eyes of the driver of an ego vehicle and the spatial position of a target rearview mirror, obtain the horizontal field-of-view angle of the driver in the target rearview mirror according to the spatial position of the eyes and the spatial position of the target rearview mirror, and obtain a rear vehicle image;
    the obtaining module being further configured to obtain a first auxiliary angle of a target rear vehicle according to the rear vehicle image, wherein the first auxiliary angle is obtained according to the rear vehicle image and a first reference point, the first reference point is a point on the target rearview mirror, the rear vehicle image is captured by a rear-view camera, and the rear vehicle image includes the target rear vehicle;
    a calculation module, configured to calculate a second auxiliary angle according to the horizontal field-of-view angle, the spatial position of the eyes and the position of the target rearview mirror;
    the obtaining module being further configured to obtain a horizontal adjustment angle of the target rearview mirror according to the first auxiliary angle and the second auxiliary angle; and
    an adjustment module, configured to adjust the horizontal angle of the target rearview mirror according to the horizontal adjustment angle.
  20. The apparatus according to claim 19, wherein the first auxiliary angle is the angle of the included angle formed by a first straight line and a second straight line, wherein the first straight line is the straight line passing through the target rear vehicle and the first reference point, and the second straight line is the straight line passing through the first reference point and perpendicular to the target rearview mirror; and
    the second auxiliary angle is the angle of the included angle formed by a third straight line and a horizontal field-of-view center line, wherein the third straight line is the straight line passing through the position of the driver's eyes and a second reference point, the second reference point is the intersection of the horizontal field-of-view center line and the mirror surface of the target rearview mirror, and the horizontal field-of-view center line is the angle bisector of the horizontal field-of-view angle.
  21. The apparatus according to claim 19 or 20, wherein the rear vehicle image includes M vehicles, M being an integer greater than or equal to 1, and, in terms of obtaining the first auxiliary angle of the target rear vehicle according to the rear vehicle image, the obtaining module is specifically configured to:
    obtain the vehicle body frame and the offset of a rear vehicle A according to the rear vehicle image, the offset being the distance, in the rear vehicle image, between the center position of the front face of the rear vehicle A and the longitudinal center line of the rear vehicle image, and the vehicle body frame being the number of pixels occupied by the contour of the rear vehicle A in the rear vehicle image;
    obtain a vehicle distance d according to the offset and the vehicle body frame of the rear vehicle A, the vehicle distance d being the distance between the front face of the rear vehicle A and the rear of the ego vehicle;
    obtain a third auxiliary angle of the rear vehicle A according to the vehicle distance d and the offset of the rear vehicle A, the third auxiliary angle being the angle of the included angle formed by a fourth straight line and the lateral center line of the ego vehicle, and the fourth straight line being the straight line passing through the position of the rear-view camera and the center position of the front face of the rear vehicle A;
    when M = 1, the rear vehicle A being the target rear vehicle, obtain the first auxiliary angle of the target rear vehicle according to the third auxiliary angle and the vehicle distance d of the target rear vehicle; and
    when M is greater than 1, the rear vehicle A being the i-th of the M rear vehicles, i = 1, 2, ..., M, obtain the importance probability of the i-th rear vehicle according to the third auxiliary angle and the vehicle body frame of the i-th rear vehicle, determine the rear vehicle with the largest importance probability as the target rear vehicle, and obtain the first auxiliary angle of the target rear vehicle according to the third auxiliary angle and the vehicle distance d of the target rear vehicle.
  22. The apparatus according to claim 21, wherein, in terms of obtaining the vehicle distance d according to the offset and the vehicle body frame of the rear vehicle A, the obtaining module is specifically configured to:
    query a first relationship table according to the offset and the vehicle body frame of the rear vehicle A, to obtain the vehicle distance d of the rear vehicle A;
    wherein the vehicle distance d of the rear vehicle A is the distance corresponding to the offset and the vehicle body frame of the rear vehicle A, and the first relationship table is a table of correspondences between offsets and vehicle body frames, and distances; and
    in terms of obtaining the third auxiliary angle of the rear vehicle A according to the vehicle distance d and the offset of the rear vehicle A, the obtaining module is specifically configured to:
    query a second relationship table according to the vehicle distance d and the offset of the rear vehicle A, to obtain the third auxiliary angle of the rear vehicle A, wherein the third auxiliary angle of the rear vehicle A is the third auxiliary angle corresponding to the vehicle distance d and the offset of the rear vehicle A, and the second relationship table is a table of correspondences between distances and offsets, and third auxiliary angles.
  23. The apparatus according to claim 21 or 22, wherein, in terms of obtaining the first auxiliary angle of the target rear vehicle according to the third auxiliary angle and the vehicle distance d of the target rear vehicle, the obtaining module is specifically configured to:
    query the third relationship table according to the third auxiliary angle and the vehicle distance d of the target rear vehicle, to obtain the first auxiliary angle corresponding to the third auxiliary angle and the vehicle distance d of the target rear vehicle, wherein the first auxiliary angle corresponding to the third auxiliary angle and the vehicle distance d of the target rear vehicle is the first auxiliary angle of the target rear vehicle, and the third relationship table is a table of correspondences between third auxiliary angles and vehicle distances d, and first auxiliary angles.
  24. The apparatus according to any one of claims 21 to 23, wherein, in terms of obtaining the vehicle body frame of the rear vehicle A according to the rear vehicle image, the obtaining module is specifically configured to:
    perform median filtering on the rear vehicle image to obtain a filtered image;
    perform edge detection on the filtered image by using the Canny edge detection algorithm to obtain an edge detection result; and
    obtain the contour of the rear vehicle A from the edge detection result by using a Haar operator, and calculate the number of pixels in the contour of the rear vehicle A.
  25. The apparatus according to any one of claims 21 to 24, wherein, in terms of obtaining the importance probability of the i-th rear vehicle according to the third auxiliary angle and the vehicle body frame of the i-th rear vehicle, the obtaining module is specifically configured to:
    query a fourth relationship table according to the third auxiliary angle and the vehicle body frame of the i-th rear vehicle, to obtain the importance probability corresponding to the third auxiliary angle and the vehicle body frame of the i-th rear vehicle;
    wherein the importance probability corresponding to the third auxiliary angle and the vehicle body frame of the i-th rear vehicle is the importance probability of the i-th rear vehicle, and the fourth relationship table is a table of correspondences between third auxiliary angles and vehicle body frames, and importance probabilities.
  26. The apparatus according to any one of claims 19 to 25, wherein, in terms of obtaining the horizontal adjustment angle of the target rearview mirror according to the first auxiliary angle and the second auxiliary angle, the obtaining module is specifically configured to:
    query a fifth relationship table according to the first auxiliary angle and the second auxiliary angle, to obtain the horizontal adjustment angle corresponding to the first auxiliary angle and the second auxiliary angle;
    wherein the horizontal adjustment angle corresponding to the first auxiliary angle and the second auxiliary angle is the horizontal adjustment angle of the target rearview mirror, and the fifth relationship table is a table of correspondences between first auxiliary angles and second auxiliary angles, and horizontal adjustment angles.
  27. An adaptive rearview mirror adjustment apparatus, comprising:
    an obtaining module, configured to obtain the spatial position of the driver's eyes and the spatial position of a target rearview mirror of an ego vehicle, and obtain a vertical field-of-view angle according to the spatial position of the eyes and the spatial position of the target rearview mirror;
    a calculation module, configured to calculate an auxiliary adjustment angle of the ego vehicle according to the spatial position of the driver's eyes and the vertical field-of-view angle;
    the obtaining module being further configured to obtain a rear vehicle image captured by a rear-view camera of the ego vehicle, obtain a sky-to-ground ratio R according to the rear vehicle image, and obtain a vertical adjustment angle of the target rearview mirror according to the auxiliary adjustment angle of the ego vehicle and the sky-to-ground ratio R; and
    an adjustment module, configured to adjust the target rearview mirror to a target vertical angle according to the vertical adjustment angle.
  28. The apparatus according to claim 27, wherein the auxiliary adjustment angle of the ego vehicle is the angle of the included angle formed by a vertical field-of-view center line and a fifth straight line, wherein the fifth straight line is the straight line passing through the spatial position of the driver's eyes and a third reference point, the third reference point is a point on the target rearview mirror, and the vertical field-of-view center line is the angle bisector of the vertical field-of-view angle.
  29. The apparatus according to claim 27 or 28, wherein, in terms of obtaining the sky-to-ground ratio R according to the rear vehicle image, the obtaining module is specifically configured to:
    divide the rear vehicle image longitudinally into a plurality of image strips;
    obtain a target image strip from the plurality of image strips, the target image strip being the image strip, among the plurality of image strips, in which the transition between sky and ground is continuous;
    count the number of pixels occupied by the sky and the number of pixels occupied by the ground in the target image strip; and
    calculate the sky-to-ground ratio R according to the number of pixels occupied by the sky and the number of pixels occupied by the ground, the sky-to-ground ratio R being the ratio of the number of pixels occupied by the sky to the number of pixels occupied by the ground.
  30. The apparatus according to claim 28 or 29, wherein, in terms of obtaining the vertical adjustment angle of the target rearview mirror according to the auxiliary adjustment angle of the ego vehicle and the sky-to-ground ratio R, the obtaining module is specifically configured to:
    query an auxiliary relationship table according to the auxiliary adjustment angle and the sky-to-ground ratio R, to obtain the vertical adjustment angle corresponding to the auxiliary adjustment angle and the sky-to-ground ratio R;
    wherein the vertical adjustment angle corresponding to the auxiliary adjustment angle and the sky-to-ground ratio R is the vertical adjustment angle of the target rearview mirror, and the auxiliary relationship table is a table of correspondences between auxiliary adjustment angles and sky-to-ground ratios, and vertical adjustment angles.
  31. An adaptive rearview mirror adjustment apparatus, comprising:
    an obtaining module, configured to obtain a rear vehicle image captured by a rear-view camera of an ego vehicle;
    a calculation module, configured to convert the rear vehicle image into a grayscale image and calculate the average value of the pixels in the grayscale image;
    the obtaining module being further configured to, if the average value is not less than a preset value, obtain the spatial position of the eyes of the driver of the ego vehicle and the spatial position of a target rearview mirror of the ego vehicle, and obtain a vertical field-of-view angle according to the spatial position of the eyes and the spatial position of the target rearview mirror;
    the calculation module being further configured to calculate an auxiliary adjustment angle of the ego vehicle according to the spatial position of the driver's eyes and the vertical field-of-view angle;
    the obtaining module being further configured to obtain a sky-to-ground ratio R according to the rear vehicle image, and obtain a vertical adjustment angle of the target rearview mirror according to the auxiliary adjustment angle of the ego vehicle and the sky-to-ground ratio R; and
    an adjustment module, configured to adjust the target rearview mirror to a target vertical angle according to the vertical adjustment angle.
  32. The apparatus according to claim 31, wherein the auxiliary adjustment angle of the ego vehicle is the angle of the included angle formed by a vertical field-of-view center line and a fifth straight line, wherein the fifth straight line is the straight line passing through the spatial position of the driver's eyes and a third reference point, the third reference point is a point on the target rearview mirror, and the vertical field-of-view center line is the angle bisector of the vertical field-of-view angle.
  33. The apparatus according to claim 31 or 32, wherein, in terms of obtaining the sky-to-ground ratio R according to the rear vehicle image, the obtaining module is specifically configured to:
    divide the rear vehicle image longitudinally into a plurality of image strips;
    obtain a target image strip from the plurality of image strips, the target image strip being the image strip, among the plurality of image strips, in which the transition between sky and ground is continuous;
    count the number of pixels occupied by the sky and the number of pixels occupied by the ground in the target image strip; and
    calculate the sky-to-ground ratio R according to the number of pixels occupied by the sky and the number of pixels occupied by the ground, the sky-to-ground ratio R being the ratio of the number of pixels occupied by the sky to the number of pixels occupied by the ground.
  34. The apparatus according to any one of claims 31 to 33, wherein, in terms of obtaining the vertical adjustment angle of the target rearview mirror according to the auxiliary adjustment angle of the ego vehicle and the sky-to-ground ratio R, the obtaining module is specifically configured to:
    query an auxiliary relationship table according to the auxiliary adjustment angle and the sky-to-ground ratio R, to obtain the vertical adjustment angle corresponding to the auxiliary adjustment angle and the sky-to-ground ratio R;
    wherein the vertical adjustment angle corresponding to the auxiliary adjustment angle and the sky-to-ground ratio R is the vertical adjustment angle of the target rearview mirror, and the auxiliary relationship table is a table of correspondences between auxiliary adjustment angles and sky-to-ground ratios R, and vertical adjustment angles.
  35. The apparatus according to any one of claims 31 to 34, wherein, if the average value is less than the preset value, the obtaining module is further configured to:
    obtain the slope of the road on which the ego vehicle has been traveling within a preset duration, and determine the driving state of the ego vehicle according to the slope of the road on which the ego vehicle has been traveling within the preset duration; and
    the adjustment module is further configured to:
    adjust the vertical angle of the target rearview mirror to a target vertical angle according to the driving state of the ego vehicle and a slope β, the slope β being the absolute value of the slope of the road on which the ego vehicle is currently located.
  36. The apparatus according to claim 35, wherein the adjustment module is specifically configured to:
    when the driving state of the ego vehicle is a flat-road driving state, adjust the target vertical angle to a preset angle θ;
    when the driving state of the ego vehicle is a long-downhill driving state, adjust the target vertical angle to the preset angle θ;
    when the driving state of the ego vehicle is an entering-downhill driving state, adjust the target vertical angle to θ-β/2;
    when the driving state of the ego vehicle is an exiting-downhill driving state, adjust the target vertical angle to θ+β/2;
    when the driving state of the ego vehicle is a long-uphill driving state, adjust the target vertical angle to the preset angle θ;
    when the driving state of the ego vehicle is an entering-uphill driving state, adjust the target vertical angle to θ-β/2; and
    when the driving state of the ego vehicle is an exiting-uphill driving state, adjust the target vertical angle to θ+β/2.
PCT/CN2020/103362 2019-08-31 2020-07-21 Adaptive rearview mirror adjustment method and apparatus WO2021036592A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
MX2021011749A MX2021011749A (es) 2019-08-31 2020-07-21 Metodo y aparato de ajuste de espejo retrovisor adaptativo.
EP20857477.2A EP3909811B1 (en) 2019-08-31 2020-07-21 Adaptive adjustment method and device for rear-view mirror
US17/489,112 US12049170B2 (en) 2019-08-31 2021-09-29 Adaptive rearview mirror adjustment method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910830441.7A CN112440881B (zh) 2019-08-31 2019-08-31 Adaptive rearview mirror adjustment method and apparatus
CN201910830441.7 2019-08-31

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/489,112 Continuation US12049170B2 (en) 2019-08-31 2021-09-29 Adaptive rearview mirror adjustment method and apparatus

Publications (1)

Publication Number Publication Date
WO2021036592A1 true WO2021036592A1 (zh) 2021-03-04

Family

ID=74684878

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/103362 WO2021036592A1 (zh) 2019-08-31 2020-07-21 Adaptive rearview mirror adjustment method and apparatus

Country Status (5)

Country Link
US (1) US12049170B2 (zh)
EP (1) EP3909811B1 (zh)
CN (1) CN112440881B (zh)
MX (1) MX2021011749A (zh)
WO (1) WO2021036592A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113602197B (zh) * 2021-08-30 2023-04-14 海信集团控股股份有限公司 车辆及其后视镜调整方法
CN113911033B (zh) * 2021-10-12 2023-04-18 岚图汽车科技有限公司 车辆视镜的控制方法、存储介质、控制装置和车辆
CN114212026B (zh) * 2021-12-27 2024-01-23 东软睿驰汽车技术(沈阳)有限公司 车辆后视镜调节方法、装置、车辆及存储介质
CN114506274B (zh) * 2022-02-28 2023-10-20 同致电子科技(厦门)有限公司 一种汽车后视镜片自动调节方法、装置、可读存储介质
CN115147797B (zh) * 2022-07-18 2024-06-21 东风汽车集团股份有限公司 一种电子外后视镜视野智能调整方法、系统和介质
CN115240415B (zh) * 2022-07-19 2024-03-26 中车南京浦镇车辆有限公司 通过输入数据可快速检验轨道交通车辆司机可见度的方法
US20240100947A1 (en) * 2022-09-28 2024-03-28 Ford Global Technologies, Llc Automatically adjusting displayed camera angle and distance to objects

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7354166B2 (en) * 2003-11-25 2008-04-08 Temic Automotive Of North America, Inc. Automatic viewing of vehicle blind spot
JP4412365B2 (ja) * 2007-03-26 2010-02-10 アイシン・エィ・ダブリュ株式会社 運転支援方法及び運転支援装置
US20090310237A1 (en) * 2008-06-16 2009-12-17 Gm Global Technology Operations, Inc. Lane change aid side-mirror system
KR101154018B1 (ko) 2009-09-14 2012-06-08 홍상욱 차량의 미러 각도 자동 조정 시스템 및 방법
CN103507718B (zh) 2012-06-26 2016-06-08 北汽福田汽车股份有限公司 车辆后视镜自动调节方法、后视镜自动调节系统及车辆
CN104228688B (zh) 2013-06-19 2016-09-07 聚晶半导体股份有限公司 调整后视镜的方法及使用该方法的电子装置
US9676336B2 (en) * 2013-06-25 2017-06-13 Magna Mirrors Of America, Inc. Exterior rearview mirror assembly for vehicle
KR101480914B1 (ko) * 2013-08-12 2015-01-09 현대오트론 주식회사 차량용 실외 후사경의 제어방법 및 제어장치
JP6384053B2 (ja) 2014-01-28 2018-09-05 アイシン・エィ・ダブリュ株式会社 後写鏡角度設定システム、後写鏡角度設定方法および後写鏡角度設定プログラム
JP2015140070A (ja) * 2014-01-28 2015-08-03 アイシン・エィ・ダブリュ株式会社 後写鏡角度設定システム、後写鏡角度設定方法および後写鏡角度設定プログラム
CN104590130A (zh) 2015-01-06 2015-05-06 上海交通大学 基于图像识别的后视镜自适应调节方法
JP6327161B2 (ja) 2015-01-23 2018-05-23 株式会社デンソー 後写鏡制御装置
CN106004663B (zh) * 2016-07-07 2018-06-29 辽宁工业大学 一种汽车驾驶员视野拓展系统及方法
CN206217761U (zh) * 2016-09-28 2017-06-06 北京汽车股份有限公司 坡路辅助后视野装置和车辆
CN106394408B (zh) * 2016-10-20 2018-07-27 郑州云海信息技术有限公司 一种后视镜的调节方法及装置
CN106379242B (zh) 2016-10-27 2019-01-15 武汉工程大学 一种扩展车辆外后视镜视角的方法及装置
CN107139927B (zh) 2017-04-07 2019-03-19 清华大学 一种自适应驾驶人体态的智能驾驶室控制方法和装置
CN107323347A (zh) * 2017-06-28 2017-11-07 北京小米移动软件有限公司 后视镜调整方法、装置及终端
CN107415832B (zh) * 2017-09-06 2023-10-27 滕文龙 辅助视镜
CN110228418B (zh) * 2018-03-05 2020-12-15 宝沃汽车(中国)有限公司 用于车辆内后视镜的控制方法、装置、系统和车辆

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4896954A (en) * 1989-05-25 1990-01-30 Swanson Arthur P Rearview mirror
CN101628559A (zh) * 2008-07-16 2010-01-20 通用汽车环球科技运作公司 用于车辆的自动后视镜调整系统
CN203713705U (zh) * 2013-12-30 2014-07-16 北京汽车股份有限公司 一种后视镜自动调节系统及车辆
CN203793213U (zh) * 2014-03-10 2014-08-27 北京汽车研究总院有限公司 一种汽车用后视镜总成
US9073493B1 (en) * 2014-04-10 2015-07-07 Qualcomm Incorporated Method and apparatus for adjusting view mirror in vehicle
CN106427788A (zh) * 2016-11-23 2017-02-22 佛山海悦智达科技有限公司 一种智能型汽车内后视镜
CN109421599A (zh) * 2017-08-24 2019-03-05 长城汽车股份有限公司 一种车辆的内后视镜调节方法和装置
CN108045308A (zh) * 2017-11-02 2018-05-18 宝沃汽车(中国)有限公司 车辆的内后视镜调节方法、系统及车辆

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3909811A4

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113859127A (zh) * 2021-09-28 2021-12-31 惠州华阳通用智慧车载系统开发有限公司 一种电子后视镜模式切换方法
CN113859127B (zh) * 2021-09-28 2023-07-07 惠州华阳通用智慧车载系统开发有限公司 一种电子后视镜模式切换方法
CN114537290A (zh) * 2022-01-28 2022-05-27 岚图汽车科技有限公司 一种后视镜成像的控制方法及装置
CN114537290B (zh) * 2022-01-28 2023-10-20 岚图汽车科技有限公司 一种后视镜成像的控制方法及装置

Also Published As

Publication number Publication date
CN112440881B (zh) 2022-07-12
EP3909811A4 (en) 2022-08-17
EP3909811B1 (en) 2024-09-25
CN112440881A (zh) 2021-03-05
MX2021011749A (es) 2021-10-22
US12049170B2 (en) 2024-07-30
EP3909811A1 (en) 2021-11-17
US20220017014A1 (en) 2022-01-20

Similar Documents

Publication Publication Date Title
WO2021036592A1 (zh) Adaptive rearview mirror adjustment method and apparatus
US11568652B2 (en) Use of relationship between activities of different traffic signals in a network to improve traffic signal state estimation
JP7255782B2 (ja) 障害物回避方法、障害物回避装置、自動運転装置、コンピュータ可読記憶媒体及びプログラム
US11443525B1 (en) Image and video compression for remote vehicle assistance
US9690297B1 (en) Classifier hierarchies for traffic light and traffic indicator detection
CN110789533B (zh) 一种数据呈现的方法及终端设备
CN112512887B (zh) 一种行驶决策选择方法以及装置
WO2022205243A1 (zh) 一种变道区域获取方法以及装置
WO2021217575A1 (zh) 用户感兴趣对象的识别方法以及识别装置
CN114056347A (zh) 车辆运动状态识别方法及装置
CN113343738A (zh) 检测方法、装置及存储介质
CN115042821B (zh) 车辆控制方法、装置、车辆及存储介质
CN115100377A (zh) 地图构建方法、装置、车辆、可读存储介质及芯片
WO2023050058A1 (zh) 控制车载摄像头的视角的方法、装置以及车辆
CN115116161A (zh) 车辆数据采集方法、装置、存储介质以及车辆
WO2021057662A1 (zh) 地图级别指示方法、地图级别获取方法及相关产品
CN115082886A (zh) 目标检测的方法、装置、存储介质、芯片及车辆

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20857477

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020857477

Country of ref document: EP

Effective date: 20210813

NENP Non-entry into the national phase

Ref country code: DE