CN108932587B - Overlooking pedestrian risk quantification system of two-dimensional world coordinate system - Google Patents

Overlooking pedestrian risk quantification system of two-dimensional world coordinate system

Info

Publication number
CN108932587B
CN108932587B (granted publication of application CN201810697704.7A)
Authority
CN
China
Prior art keywords
pedestrian
world coordinate
coordinate system
risk
dimensional world
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810697704.7A
Other languages
Chinese (zh)
Other versions
CN108932587A (en)
Inventor
杨大伟
毛琳
黄俊达
陈思宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Minzu University
Original Assignee
Dalian Minzu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Minzu University filed Critical Dalian Minzu University
Priority to CN201810697704.7A priority Critical patent/CN108932587B/en
Publication of CN108932587A publication Critical patent/CN108932587A/en
Application granted granted Critical
Publication of CN108932587B publication Critical patent/CN108932587B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063: Operations research, analysis or management
    • G06Q10/0635: Risk analysis of enterprise or organisation activities
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services
    • G06Q50/26: Government or public services
    • G06Q50/265: Personal security, identity or safety
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Tourism & Hospitality (AREA)
  • Multimedia (AREA)
  • Educational Administration (AREA)
  • Marketing (AREA)
  • Human Computer Interaction (AREA)
  • Development Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

A pedestrian risk quantification system using a top-down two-dimensional world coordinate system belongs to the field of driving risk analysis and aims to solve the problem of determining pedestrian target risk directly from the image viewing angle. The system comprises a vehicle-mounted camera and a quantification device. The vehicle-mounted camera is mounted on the vehicle roof, faces the driving direction, and captures head-up images of pedestrians. The quantification device stores a plurality of instructions adapted to be loaded and executed by a processor; it quantifies the pedestrian target risk during driving into a normalized risk index, thereby providing an important driving-decision data basis for the pedestrian obstacle-avoidance functions of advanced driver assistance and autonomous cruising in intelligent vehicles.

Description

Overlooking pedestrian risk quantification system of two-dimensional world coordinate system
Technical Field
The invention belongs to the field of driving risk analysis, and relates to a pedestrian risk quantification system.
Background
Road traffic in many regions of China has long suffered from the dangerous condition of mixed pedestrian and vehicle traffic. Pedestrians, as a vulnerable group in road traffic, account for a large proportion of accident fatalities year after year and should be reasonably protected by vehicle obstacle avoidance; the importance of improving the ability of automobiles to safely avoid pedestrians is therefore self-evident.
Pedestrian risk analysis methods based on vehicle onboard systems mainly use sensors to perceive the vehicle environment and, combined with the motion state of the pedestrian target, judge the danger posed by the pedestrian target and adjust the driving decision accordingly, achieving early protection against dangerous pedestrian targets. Pedestrian risk analysis based on vehicle-mounted images is currently the mainstream research direction, and many researchers analyze pedestrian movement trends by recognizing pedestrian target postures in order to classify dangerous pedestrians. Joko Hariyono et al. use an optical flow method to segment pedestrian contours, identify the lateral movement trend of pedestrians with a pedestrian posture ratio method, and judge pedestrians moving toward the traffic area to be dangerous. In addition, Keller and Gavrila et al. use Gaussian dynamic process models and hierarchical trajectory-probability matching to identify the standing or lateral motion states of pedestrian targets in images.
Most pedestrian risk analysis methods based on vehicle-mounted images analyze pedestrian risk directly from the image viewing angle, but because of the imaging distortion of vehicle-mounted images, researchers can only recognize the motion posture of a pedestrian rather than grasp the pedestrian's exact motion state. As a result, existing pedestrian risk analysis methods can only give a qualitative two-class judgment of whether a pedestrian is dangerous, so they are mainly used to provide real-time warnings to the driver and cannot provide fine-grained data support for vehicle decision making.
In order to realize accurate driver assistance and improve intelligent vehicle autonomous cruising performance, Chinese patent application CN107240167A discloses a pedestrian monitoring system with a driving recorder and provides a quantitative pedestrian risk analysis method. The system uses sensing equipment comprising a body-sensing controller, an infrared sensor and a sounding meter to obtain pedestrian information in the driving environment, and calculates a pedestrian collision coefficient and issues pedestrian danger warnings by matching a pedestrian depth-image stream against a pedestrian target model. Although a quantitative risk analysis result is given, the risk quantification factors come from the pedestrians' postures and in fact judge the pedestrians' intention to deliberately collide with the vehicle, so the quantified coefficients lack kinematic objectivity and are insufficient to reflect the pedestrians' real degree of motion risk.
Chinese patent application CN104239741A, an automobile driving safety assistance method based on an automobile risk field, analyzes the kinetic energy field, potential energy field and behavior field of the vehicle environment from the three comprehensive perspectives of people, vehicle and road, fuses them into a vehicle risk field model of the risk posed by obstacles to the driving vehicle, and quantifies the driving risk of the vehicle with respect to road obstacles so as to evaluate different degrees of risk. By introducing potential field theory, that invention gives the driving risk field a reasonable kinematic basis, so that the risk quantification result can be used objectively and effectively for driving decisions.
Disclosure of Invention
In order to solve the problem of determining pedestrian target risk from the image viewing angle, the invention provides the following technical scheme:
a pedestrian risk quantification system using a top-down two-dimensional world coordinate system, characterized by comprising a vehicle-mounted camera and a quantification device; the vehicle-mounted camera is mounted on the vehicle roof, faces the vehicle driving direction, and is used for capturing head-up images of pedestrians; the quantification device stores a plurality of instructions adapted to be loaded and executed by a processor to:
calculating pedestrian trajectory points of all pedestrian targets from the pedestrian images captured by the vehicle-mounted camera, where the number of pedestrian targets is N, and acquiring and updating the head-up pedestrian trajectory point vectors of all pedestrian targets in real time;
mapping all the head-up pedestrian trajectory point vectors to a two-dimensional world coordinate system to obtain the N corresponding top-view two-dimensional world coordinate system pedestrian trajectory point vectors;
setting a pedestrian motion observation range and constructing a top-view two-dimensional world coordinate system pedestrian trajectory matrix M_P corresponding to the N top-view two-dimensional world coordinate system pedestrian trajectory point vectors, each pedestrian target independently corresponding to its own top-view two-dimensional world coordinate system pedestrian trajectory matrix M_P^i;
constructing a top-view two-dimensional world coordinate system front-of-vehicle risk matrix M_V matched to the pedestrian trajectory matrix, and copying it into an identical copy matrix M_V^i for each pedestrian target;
for each pedestrian target i ∈ [1, N], calculating the neighboring pedestrian risk coefficient R_i (the formula appears as an image in the original), where k_i is the number of pedestrian trajectory points.
Further, the adaptive range of the mounting height H of the vehicle-mounted camera above the ground is 1.2 m to 1.6 m; the ideal assembly angle of the yaw angle γ is 0°, with an acceptable assembly error of ±1°; the ideal assembly angle of the pitch angle θ is 0°, with an acceptable assembly error of ±3°.
Further, the quantification device maps a head-up pedestrian trajectory point vector to a top-view pedestrian trajectory point vector (both vector expressions appear as images in the original) in the following way:
firstly, the mapping factors rFactor and cFactor are calculated according to equation (2) (shown as an image in the original), where u and v are input values representing the inverse perspective mapping point in the image, M and N are constants representing the width and height of the image, AlphaU is the horizontal aperture angle, and AlphaV is the vertical aperture angle;
secondly, the initial two-dimensional world coordinate mapping point (x', y') is calculated according to equation (3) (shown as an image in the original), where C_x, C_y and C_z are fixed values representing the camera coordinates in the world coordinate system, set to C_x = C_y = 0 and C_z = H, with H the height above the ground, and θ is the pitch angle between the camera and the ground;
thirdly, the initial mapping point is corrected according to equation (4) (shown as an image in the original) to obtain the mapped coordinate point (x, y) in the two-dimensional world coordinate system, where γ is a fixed value representing the camera deflection (yaw) angle.
Further, the quantification device calculates the horizontal aperture angle AlphaU and the vertical aperture angle AlphaV according to equation (1) (shown as an image in the original), where f is the focal length, d_x is the length of the photosensitive element, and d_y is its width.
Further, the quantification device generates the pedestrian trajectory matrix M_P using the matrix mapping function
(n, m) = f_wm(x, y)    (5)
where (x, y) represents a coordinate point in the two-dimensional world coordinate system and (n, m) represents the row and column position of the element in the operation matrix.
Further, the pedestrian motion observation range is set to ±10 m horizontally and 0–20 m vertically from the origin O_W.
Further, the quantification device constructs the top-view two-dimensional world coordinate system front-of-vehicle risk matrix M_V matched to the pedestrian trajectory matrix in the following way:
making each coordinate point in the two-dimensional world coordinate system equipotential with respect to the Y_W axis;
calculating the risk weight of the corresponding coordinate point;
mapping to the front-of-vehicle risk matrix using the same matrix mapping function as the pedestrian trajectory matrix.
Further, the risk equipotential lines of the two-dimensional world coordinate system are composed of 6 second-order curves with respect to the Y_W axis and satisfy
y = γ(x) = α_1x² + α_2x + α_3    (6)
where α_1, α_2 and α_3 form a second-order polynomial coefficient vector satisfying equation (7) (shown as an image in the original).
further, the quantifying means calculates the pre-vehicle risk weight based on:
Figure GDA0001753304250000042
wrto normalize the intensity of risk, a certain area wrThe closer the value is to 1, the more dangerous the area is, whereas the more toward 0, the safer it is;
the quantization device generates an in-vehicle risk matrix M using a matrix mapping functionV
(n,m)=fwm(x,y,wr)
(x,y,wr) And (2) representing coordinate points of a two-dimensional world coordinate system and corresponding risk intensity, and (n, m) representing the row and column positions of elements in the operation matrix.
Advantageous effects: the invention is a system for quantifying the risk degree of pedestrian targets in vehicle-mounted video images; it quantifies the pedestrian target risk during driving into a normalized risk index, thereby providing an important vehicle-decision data basis for the pedestrian obstacle-avoidance functions of advanced driver assistance and autonomous cruising in intelligent vehicles. The beneficial effects of the algorithm are: (1) the pedestrian risk analysis uses a two-dimensional world coordinate system with an intuitive viewing angle, making it easier for a driver to observe the movement trend of each pedestrian target from a more accurate perspective; (2) the static risk distribution of the area in front of the vehicle is described by the top-view two-dimensional world coordinate system front-of-vehicle risk matrix, whose risk distribution is related to the urban speed limit and is not affected by the road surface environment or the vehicle speed, reducing the complexity of practical application; (3) the motions of different pedestrian targets and of the vehicle are considered independently in the two-dimensional world coordinate system, the pedestrians' movements do not interfere with each other, and specific pedestrian targets can be given corresponding attention according to the needs of the driver or the autonomous driving system; (4) the normalized neighboring pedestrian risk coefficient of each pedestrian target, obtained through quantification, reflects different risk degrees from 0 to 1 and can be used for dangerous pedestrian classification and for determining the vehicle's avoidance priority.
Drawings
FIG. 1 is a schematic diagram of the invention;
FIG. 2 is the image coordinate system;
FIG. 3 is the world coordinate system;
FIG. 4 is a top view of the two-dimensional world coordinate system;
FIG. 5 is parameter diagram 1;
FIG. 6 is parameter diagram 2;
FIG. 7 is a head-up trajectory diagram;
FIG. 8 is a top-view two-dimensional world coordinate system pedestrian trajectory matrix diagram;
FIG. 9 is a top-view two-dimensional world coordinate system front-of-vehicle risk matrix diagram;
FIG. 10 illustrates the method of calculating the neighboring pedestrian risk coefficient;
FIG. 11 shows the neighboring pedestrian risk coefficient calculation results of embodiment 1;
FIG. 12 shows the neighboring pedestrian risk coefficient calculation results of embodiment 2;
FIG. 13 shows the neighboring pedestrian risk coefficient calculation results of embodiment 3.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments:
as shown in fig. 1, the invention discloses a top-view pedestrian risk quantification method based on a two-dimensional world coordinate system, which can be implemented in software and which, by transforming the video of the vehicle-mounted camera, solves for the quantified risk degree of pedestrian targets in front of the vehicle under a top-down view.
The method mainly comprises the following implementation steps:
step 1: for an image (unit: pixel) with the size of 1920 multiplied by 1080, calculating pedestrian track points of all N pedestrian targets frame by frame, and acquiring and updating all pedestrian target head-up pedestrian track point vectors at real time
Figure GDA0001753304250000061
Step 2: map all head-up pedestrian trajectory points to the two-dimensional world coordinate system and, taking ±10 m horizontally and 0–20 m vertically from the origin O_W of the two-dimensional world coordinate system as the analysis range of pedestrian motion, obtain the N corresponding two-dimensional world coordinate system pedestrian trajectory matrices M_P^i.
Step 3: copy the two-dimensional world coordinate system front-of-vehicle risk matrix into N copies M_V^i.
Step 4: for each pedestrian target i ∈ [1, N], calculate the neighboring pedestrian risk coefficient R_i using equation (9).
The above method is described in detail below. It addresses the difficulty of accurately determining pedestrian target risk directly from the image viewing angle. The principle of the method, shown in fig. 1, is to map the pedestrian motion trajectory points into a top-view two-dimensional world coordinate system and to calculate the front-of-vehicle risk weights in that coordinate system. A pedestrian trajectory matrix and a front-of-vehicle risk matrix are then generated through quantitative mapping; each pedestrian target has its own independent pedestrian trajectory matrix while sharing the same front-of-vehicle risk matrix, so that quantitative risk calculation is achieved and normalized neighboring pedestrian risk coefficients are obtained for the different pedestrian targets. The neighboring pedestrian risk coefficient is the output of this top-view pedestrian risk quantification method based on a two-dimensional world coordinate system and can be used to support the driving-decision modules of driver assistance systems and autonomous vehicles.
The technical scheme of the invention involves the definitions of the image coordinate system, the world coordinate system and the camera parameters; see the schematic diagrams in figures 1, 2, 3 and 4.
Image coordinate system definition (see fig. 2): the image coordinate system is defined with the upper left corner of the image as the origin O, the u axis pointing horizontally to the right, and the v axis pointing vertically downward.
World coordinate system definition (see fig. 3): the projection of the vehicle-mounted camera's optical center onto the ground is taken as the origin O_W; the vehicle driving direction is the positive Y_W direction; the direction coplanar with the driving plane, perpendicular to Y_W and pointing to the right is the positive X_W direction; and the direction pointing toward the camera is the positive Z_W direction. This defines the world coordinate system.
Two-dimensional world coordinate system definition (see fig. 4): the two-dimensional world coordinate system is the world coordinate system with the Z_W axis (height axis) scale ignored.
The invention requires the vehicle-mounted camera to be mounted on the vehicle roof facing the driving direction, as shown in fig. 3. Because the vehicle-mounted camera performs dynamic shooting, the intrinsic and assembly parameters of the camera are kept relatively fixed. The intrinsic parameters comprise the focal length f, the photosensitive element length d_x, the photosensitive element width d_y, the image length M and the image width N; the assembly parameters comprise the ground height H, the yaw angle γ, the pitch angle θ, the horizontal aperture angle AlphaU and the vertical aperture angle AlphaV.
The adaptive values of the intrinsic parameters of the invention are as follows: the focal length f is 16 mm–23 mm; there is no special requirement on the size of the photosensitive element; the image length M is conventionally 1920 pixels and should not be smaller than 1080 pixels; the image width N is conventionally 1080 pixels and should not be smaller than 640 pixels. The adaptive values of the assembly parameters are: the height H ranges from 1.2 m to 1.6 m; the ideal yaw angle is 0°, with an acceptable assembly error of ±1°; the ideal pitch angle is 0°, with an acceptable assembly error of ±3°. The horizontal aperture angle AlphaU and the vertical aperture angle AlphaV are calculated as follows:
[Equation (1), shown as an image in the original: calculation of AlphaU and AlphaV from the focal length f and the photosensitive element dimensions d_x and d_y.]
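For illustration only, the sketch below computes the two aperture angles from the intrinsic parameters under the common pinhole assumption that the full aperture angle equals 2·arctan(d / (2f)); since the patent's own equation (1) is reproduced only as an image, its exact form (for instance full versus half angle) may differ, and the function name and the example sensor size are hypothetical.

```python
import math

def aperture_angles(f_mm: float, dx_mm: float, dy_mm: float) -> tuple[float, float]:
    """Horizontal and vertical aperture angles (radians) of a pinhole camera.

    f_mm  : focal length f
    dx_mm : photosensitive element length d_x
    dy_mm : photosensitive element width  d_y
    Assumes the usual pinhole relation alpha = 2 * atan(d / (2 * f)); the
    patent's equation (1) is an image and may use a different convention.
    """
    alpha_u = 2.0 * math.atan(dx_mm / (2.0 * f_mm))
    alpha_v = 2.0 * math.atan(dy_mm / (2.0 * f_mm))
    return alpha_u, alpha_v

if __name__ == "__main__":
    # Focal length inside the stated 16-23 mm range; sensor size is a hypothetical example.
    au, av = aperture_angles(f_mm=16.0, dx_mm=7.2, dy_mm=5.4)
    print(f"AlphaU = {math.degrees(au):.1f} deg, AlphaV = {math.degrees(av):.1f} deg")
```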
firstly, the pedestrian trajectory points in the input image are transformed into the world coordinate system through inverse perspective mapping, and the two-dimensional world coordinate system pedestrian trajectory matrix M_P is constructed.
Let p_t(u_t, v_t) denote the pedestrian trajectory point in the t-th frame of the input video, where u_t and v_t are the column and row coordinates in the image, and let p_t'(x_t, y_t) denote the mapped coordinates of that trajectory point in the two-dimensional world coordinate system, where x_t and y_t are the horizontal and vertical coordinates. Accordingly, the head-up pedestrian trajectory point vector of the input video and the corresponding top-view pedestrian trajectory point vector in the two-dimensional world coordinate system are obtained (both vector expressions appear as images in the original).
The mapping transformation from the head-up pedestrian trajectory point vector to the top-view pedestrian trajectory point vector proceeds in the following steps:
firstly, the mapping factors rFactor and cFactor are calculated (see equation (2), shown as an image in the original), where u and v are input values representing the inverse perspective mapping point in the image, and M and N are constants representing the width and height of the image;
secondly, the initial two-dimensional world coordinate mapping point (x', y') is calculated (see equation (3), shown as an image in the original), where C_x, C_y and C_z are fixed values representing the camera coordinates in the world coordinate system, usually set to C_x = C_y = 0 and C_z = H, and θ is the camera-to-ground pitch angle;
And thirdly, correcting the initial mapping point to obtain a mapping coordinate point (x, y) of a two-dimensional world coordinate system (see formula (4)), wherein gamma is a fixed value and represents the deflection angle of the camera.
Figure GDA0001753304250000083
Fourthly, the matrix mapping function shown in equation (5) is used to generate the pedestrian trajectory matrix M_P:
(n, m) = f_wm(x, y)    (5)
In equation (5), (x, y) represents a coordinate point in the two-dimensional world coordinate system and (n, m) represents the row and column position of the element in the operation matrix. The pedestrian trajectory matrix M_P is constructed to represent, by a matrix method, the pedestrian trajectory point information within a defined distance in front of the vehicle in the two-dimensional world coordinate system. Considering the effect of inverse perspective mapping, the mapping range from the two-dimensional world coordinate system to the operation matrix is set to ±10 m horizontally and 0–20 m vertically from O_W. From this, the two-dimensional world coordinate system pedestrian trajectory matrix M_P shown in FIG. 8 can be constructed.
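To make the mapping steps and the matrix mapping function concrete, the following sketch (the helper names ipm_point, f_wm and trajectory_matrix are hypothetical) implements a textbook flat-ground inverse perspective mapping under the stated assumptions C_x = C_y = 0, C_z = H, pitch θ and yaw γ, followed by rasterization of the mapped points into the ±10 m × 0–20 m window. Because equations (2)–(4) appear only as images, the trigonometric form used here is the standard pinhole ray-to-ground-plane derivation and is not guaranteed to match the patent's exact expressions; the grid resolution and row/column orientation are likewise illustrative choices, as the patent fixes only the observation window.

```python
import numpy as np

# Observation window in the two-dimensional world coordinate system (metres):
# +/-10 m horizontally (X_W) and 0-20 m ahead of the vehicle (Y_W).
X_RANGE = (-10.0, 10.0)
Y_RANGE = (0.0, 20.0)
GRID = (200, 200)          # rows x cols of the operation matrices (hypothetical resolution)

def ipm_point(u, v, img_w, img_h, alpha_u, alpha_v, H, theta, gamma):
    """Map an image pixel (u, v) to ground coordinates (x, y) in the 2-D world frame.

    Textbook flat-ground inverse perspective mapping for a pinhole camera at
    height H with pitch theta (down-positive) and yaw gamma; it stands in for
    the patent's equations (2)-(4), which are only reproduced as images.
    """
    fu = (img_w / 2.0) / np.tan(alpha_u / 2.0)   # focal length in pixels (horizontal)
    fv = (img_h / 2.0) / np.tan(alpha_v / 2.0)   # focal length in pixels (vertical)
    # Camera axes expressed in the world frame (X_W right, Y_W forward, Z_W up).
    fwd = np.array([np.sin(gamma) * np.cos(theta), np.cos(gamma) * np.cos(theta), -np.sin(theta)])
    right = np.array([np.cos(gamma), -np.sin(gamma), 0.0])
    down = np.cross(fwd, right)
    # Viewing ray of the pixel, then intersection with the ground plane Z_W = 0.
    ray = ((u - img_w / 2.0) / fu) * right + ((v - img_h / 2.0) / fv) * down + fwd
    if ray[2] >= 0:                       # pixel at or above the horizon: no ground point
        return None
    t = H / -ray[2]
    return float(t * ray[0]), float(t * ray[1])   # (x, y) in metres

def f_wm(x, y, grid=GRID):
    """Matrix mapping function: world point -> (row, col) index, or None if outside the window."""
    if not (X_RANGE[0] <= x <= X_RANGE[1] and Y_RANGE[0] <= y <= Y_RANGE[1]):
        return None
    rows, cols = grid
    m = int(round((x - X_RANGE[0]) / (X_RANGE[1] - X_RANGE[0]) * (cols - 1)))
    n = int(round((Y_RANGE[1] - y) / (Y_RANGE[1] - Y_RANGE[0]) * (rows - 1)))  # row 0 = far end
    return n, m

def trajectory_matrix(track_points_px, **cam):
    """Build a binary top-view pedestrian trajectory matrix M_P for one pedestrian target.

    track_points_px : iterable of (u, v) pixel trajectory points
    cam             : img_w, img_h, alpha_u, alpha_v, H, theta, gamma for ipm_point
    """
    M_P = np.zeros(GRID, dtype=np.uint8)
    for (u, v) in track_points_px:
        ground = ipm_point(u, v, **cam)
        if ground is None:
            continue
        idx = f_wm(*ground)
        if idx is not None:
            M_P[idx] = 1
    return M_P
```

A 200 × 200 grid gives 10 cm cells over the 20 m × 20 m window; any resolution works as long as the trajectory matrix and the front-of-vehicle risk matrix share it.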
Then, the two-dimensional world coordinate system front-of-vehicle risk matrix M_V corresponding to the two-dimensional world coordinate system pedestrian trajectory matrix is constructed. The risk equipotential lines of the two-dimensional world coordinate system are composed of 6 second-order curves with respect to the Y_W axis and satisfy
y = γ(x) = α_1x² + α_2x + α_3    (6)
In equation (6), α_1, α_2 and α_3 form a second-order polynomial coefficient vector satisfying equation (7) (shown as an image in the original).
the function for calculating the risk weight of the vehicle front influenced by the distance between the vehicle front is given as shown in the formula (8), and the function for calculating the risk weight of the vehicle front is itself prototype to be a Gaussian distribution function. Wherein, C1And C2For normalizing the parameters, the values are set to C10.05 and C247.7; μ and σ are function expectations and variances, the physical meaning of which is a risk distribution parameter affected by vehicle braking capability, with values set to μ -0 and σ -8. W in formula (8)rTo normalize the intensity of risk, a certain area wrThe closer the value is1 the more dangerous the area is, whereas the more toward 0 the safer it is.
Each coordinate in the two-dimensional world coordinate system is made equipotential with respect to the Y_W axis through equation (6), and the corresponding risk weight is then calculated through equation (8). The front-of-vehicle risk matrix is mainly used together with the pedestrian trajectory matrix to quantify the pedestrian risk coefficient, so the same matrix mapping function is chosen to construct it. Accordingly, from the driving risk weight at each coordinate of the two-dimensional world coordinate system, equation (5) can further be used to generate the front-of-vehicle risk matrix M_V, as shown in FIG. 9. The front-of-vehicle risk matrix M_V is generated using the matrix mapping function
(n, m) = f_wm(x, y, w_r)
where (x, y, w_r) represents a coordinate point of the two-dimensional world coordinate system together with its corresponding risk intensity, and (n, m) represents the row and column position of the element in the operation matrix.
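The following sketch builds a front-of-vehicle risk matrix on the same window and grid as the trajectory-matrix sketch above. Since equations (7) and (8) are reproduced only as images, the equipotential value is taken as the forward distance y itself (the degenerate straight-line case of equation (6) with α_1 = α_2 = 0) and the risk weight is a Gaussian with the stated μ = 0 and σ = 8, min-max rescaled to [0, 1]; the constants C_1 = 0.05 and C_2 = 47.7 belong to the patent's own normalization, which is not reproduced here, and the function names are hypothetical.

```python
import numpy as np

# Same observation window and grid as in the trajectory-matrix sketch above.
Y_RANGE = (0.0, 20.0)
GRID = (200, 200)
MU, SIGMA = 0.0, 8.0        # risk-distribution parameters stated in the description

def risk_weight(y: np.ndarray) -> np.ndarray:
    """Normalized risk intensity w_r in [0, 1] as a function of the forward distance y.

    Stand-in for equation (8): a Gaussian in y with the stated mu and sigma,
    rescaled over the window so that w_r is 1 directly in front of the vehicle
    and 0 at the far end.  The patent's own normalization with C_1 = 0.05 and
    C_2 = 47.7 is not reproduced here.
    """
    g = np.exp(-((y - MU) ** 2) / (2.0 * SIGMA ** 2))
    return (g - g.min()) / (g.max() - g.min() + 1e-12)

def front_risk_matrix(grid=GRID):
    """Front-of-vehicle risk matrix M_V on the same grid as the trajectory matrix.

    Uses straight equipotential lines y = const; the patent's six second-order
    equipotential curves would only bend these level sets sideways.
    """
    rows, cols = grid
    y = np.linspace(Y_RANGE[1], Y_RANGE[0], rows).reshape(rows, 1)  # row 0 = far end
    return np.repeat(risk_weight(y), cols, axis=1)
```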
Finally, the two-dimensional world coordinate system pedestrian trajectory matrix M_P and the two-dimensional world coordinate system front-of-vehicle risk matrix M_V are combined to calculate the neighboring pedestrian risk coefficient R.
Suppose there are N different pedestrian targets in the continuous images. Every pedestrian target i ∈ [1, N] has a unique head-up pedestrian trajectory point vector corresponding to it. Through the second step, the corresponding top-view pedestrian trajectory point vector is obtained, and each pedestrian target then independently corresponds to its own top-view two-dimensional world coordinate system pedestrian trajectory matrix M_P^i, as shown in FIG. 8. The top-view two-dimensional world coordinate system front-of-vehicle risk matrix is copied into an identical copy M_V^i for each pedestrian target, and together with the top-view two-dimensional world coordinate system pedestrian trajectory matrix M_P^i the neighboring pedestrian risk coefficient R_i is quantified by the following formula:
[Equation (9), shown as an image in the original: the neighboring pedestrian risk coefficient R_i, obtained by screening the front-of-vehicle risk matrix M_V^i with the pedestrian trajectory matrix M_P^i over the k_i trajectory points.]
equation (9) is a formula for quantifying the risk factor of the neighboring pedestrian according to the present invention, wherein k isiThe output result R is the number of the trace points of the pedestrianiI.e. the adjacent pedestrian risk factor, R, of the pedestrian object iiCloser to 1 indicates a more dangerous pedestrian target, whereas closer to 0 is safer. The physical significance of the calculation method in the formula (9) is that the pedestrian track matrix is utilized to screen the vehicle front risk matrix, so that the vehicle front risk degree corresponding to the pedestrian track point position is obtained.
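As a minimal sketch of this screening step: the binary trajectory matrix of pedestrian i selects cells of the front-of-vehicle risk matrix, and the selected risk values are averaged over the k_i trajectory points, which keeps R_i in [0, 1]. The averaging form is inferred from the description of equation (9) rather than copied from it, since the formula itself appears only as an image; the function name is hypothetical. In the patent each pedestrian target keeps its own copy M_V^i of the risk matrix; since screening does not modify the matrix, the sketch reuses a single array.

```python
import numpy as np

def neighboring_pedestrian_risk(M_P: np.ndarray, M_V: np.ndarray) -> float:
    """Neighboring pedestrian risk coefficient R_i for one pedestrian target.

    M_P : binary top-view trajectory matrix of pedestrian i (1 at trajectory cells)
    M_V : front-of-vehicle risk matrix (values in [0, 1]) on the same grid
    Screens M_V with M_P and averages over the k_i occupied cells; this
    averaging form is an inference from the description of equation (9).
    """
    occupied = M_P > 0
    k_i = int(occupied.sum())
    if k_i == 0:
        return 0.0                       # no trajectory points inside the observation window
    return float(M_V[occupied].sum() / k_i)

# Hypothetical usage with the sketches above:
#   M_V = front_risk_matrix()
#   risks = [neighboring_pedestrian_risk(trajectory_matrix(pts, **cam), M_V) for pts in all_tracks]
```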
Example 1:
in this embodiment, the neighboring pedestrian risk coefficients of 2 pedestrian targets in the image are quantified for a measured road-scene vehicle-mounted video with a pixel size of 1920 × 1080. The calculation results of the neighboring pedestrian risk coefficients are shown in fig. 11 (a), (b), (c) and (d); it can be seen that reasonable pedestrian risk quantification results are given for the two pedestrian targets crossing the area in front of the vehicle in the image.
Example 2:
in this embodiment, the calculation results of the neighboring pedestrian risk coefficients of 2 pedestrian targets in a measured road-scene vehicle-mounted video of size 1920 × 1080 are shown in fig. 12 (a), (b), (c) and (d). It can be seen that the invention gives accurate pedestrian risk quantification results for pedestrian targets that move independently of the vehicle in the opposite direction.
Example 3:
this embodiment quantifies 2 pedestrian targets in the continuous images of a measured road-scene vehicle-mounted video with a pixel size of 1920 × 1080; the calculation results of the neighboring pedestrian risk coefficients are shown in fig. 13 (a), (b), (c) and (d). It can be seen that, for pedestrians crossing the area in front of the vehicle in the video images, the invention gives accurate pedestrian risk quantification results.
The above is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto; any substitution or modification of the technical solution and the inventive concept of the present invention made by a person skilled in the art within the technical scope disclosed by the present invention falls within the scope of protection of the present invention.

Claims (5)

1. A pedestrian risk quantification system using a top-down two-dimensional world coordinate system, characterized by comprising a vehicle-mounted camera and a quantification device; the vehicle-mounted camera is mounted on the vehicle roof, faces the vehicle driving direction, and is used for capturing head-up images of pedestrians; the quantification device stores a plurality of instructions adapted to be loaded and executed by a processor to:
calculating pedestrian trajectory points of all pedestrian targets from the pedestrian images captured by the vehicle-mounted camera, where the number of pedestrian targets is N, and acquiring and updating the head-up pedestrian trajectory point vectors of all pedestrian targets in real time;
mapping all the head-up pedestrian trajectory point vectors to a two-dimensional world coordinate system to obtain the N corresponding top-view two-dimensional world coordinate system pedestrian trajectory point vectors;
the quantization device looks up the pedestrian track point vector in the following way
Figure FDA0003205734730000011
Mapping to overlook pedestrian track point vector
Figure FDA0003205734730000012
firstly, the mapping factors rFactor and cFactor are calculated according to equation (2) (shown as an image in the original), where u and v are input values representing the inverse perspective mapping point in the image, M and N are constants representing the width and height of the image, AlphaU is the horizontal aperture angle, and AlphaV is the vertical aperture angle;
secondly, the initial two-dimensional world coordinate mapping point (x', y') is calculated according to equation (3) (shown as an image in the original), where C_x, C_y and C_z are fixed values representing the camera coordinates in the world coordinate system, set to C_x = C_y = 0 and C_z = H, with H the height above the ground, and θ is the pitch angle between the camera and the ground;
thirdly, the initial mapping point is corrected according to equation (4) (shown as an image in the original) to obtain the mapped coordinate point (x, y) in the two-dimensional world coordinate system, where γ is a fixed value representing the camera deflection angle;
setting a pedestrian motion observation range and constructing a top-view two-dimensional world coordinate system pedestrian trajectory matrix M_P corresponding to the N top-view two-dimensional world coordinate system pedestrian trajectory point vectors, each pedestrian target independently corresponding to its own top-view two-dimensional world coordinate system pedestrian trajectory matrix M_P^i;
constructing a top-view two-dimensional world coordinate system front-of-vehicle risk matrix M_V matched to the pedestrian trajectory matrix, and copying it into an identical copy matrix M_V^i for each pedestrian target;
the quantification device constructs the top-view two-dimensional world coordinate system front-of-vehicle risk matrix M_V matched to the pedestrian trajectory matrix in the following way:
making each coordinate point in the two-dimensional world coordinate system equipotential with respect to the Y_W axis, where the Y_W axis is one coordinate axis of the two-dimensional world coordinate system;
calculating the risk weight of the corresponding coordinate point;
mapping to the front-of-vehicle risk matrix using the same matrix mapping function as the pedestrian trajectory matrix;
the risk equipotential lines of the two-dimensional world coordinate system are composed of 6 lines with respect to YWThe second-order curve of the axis is formed and satisfies:
y=γ(x)=α1x22x+α3 (6)
wherein: alpha is alpha1、α2And alpha3Is a second-order polynomial coefficient vector and satisfies:
Figure FDA0003205734730000023
the quantifying means calculates the pre-vehicle risk weight based on:
Figure FDA0003205734730000024
wrto normalize the intensity of risk, a certain area wrThe closer the value is to 1, the more dangerous the area is, whereas the more toward 0, the safer it is; c1And C2For normalizing the parameters, mu and sigma are expressed as function expectation and variance, respectively, the quantizing device generates an plantago risk matrix M using a matrix mapping functionV
(n,m)=fwm(x,y,wr)
(x,y,wr) Representing coordinate points of a two-dimensional world coordinate system and corresponding risk intensity, and (n, m) representing the row and column positions of elements in an operation matrix;
for pedestrian target i e [1, N]Calculating a neighboring pedestrian risk coefficient Ri
Figure FDA0003205734730000031
Wherein: k is a radical ofiThe number of the trace points is the number of the pedestrian.
2. The top-view two-dimensional world coordinate system pedestrian risk quantification system according to claim 1, wherein the adaptive range of the ground height H of the vehicle-mounted camera is 1.2 m to 1.6 m, the ideal assembly angle of the yaw angle γ is 0° with an acceptable assembly error of ±1°, and the ideal assembly angle of the pitch angle θ is 0° with an acceptable assembly error of ±3°.
3. The top-view two-dimensional world coordinate system pedestrian risk quantification system according to claim 1, wherein the quantification device calculates the horizontal aperture angle AlphaU and the vertical aperture angle AlphaV according to equation (1) (shown as an image in the original), where f is the focal length, d_x is the length of the photosensitive element, and d_y is its width.
4. The top-view two-dimensional world coordinate system pedestrian risk quantification system according to claim 1, wherein the quantification device generates the pedestrian trajectory matrix M_P using the matrix mapping function
(n, m) = f_wm(x, y)    (5)
where (x, y) represents a coordinate point in the two-dimensional world coordinate system and (n, m) represents the row and column position of the element in the operation matrix.
5. The top-view two-dimensional world coordinate system pedestrian risk quantification system according to claim 4, wherein the pedestrian motion observation range is set to ±10 m horizontally and 0–20 m vertically from O_W, where O_W represents the coordinate origin of the two-dimensional world coordinate system.
CN201810697704.7A 2018-06-29 2018-06-29 Overlooking pedestrian risk quantification system of two-dimensional world coordinate system Expired - Fee Related CN108932587B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810697704.7A CN108932587B (en) 2018-06-29 2018-06-29 Overlooking pedestrian risk quantification system of two-dimensional world coordinate system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810697704.7A CN108932587B (en) 2018-06-29 2018-06-29 Overlooking pedestrian risk quantification system of two-dimensional world coordinate system

Publications (2)

Publication Number Publication Date
CN108932587A CN108932587A (en) 2018-12-04
CN108932587B true CN108932587B (en) 2021-09-21

Family

ID=64447015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810697704.7A Expired - Fee Related CN108932587B (en) 2018-06-29 2018-06-29 Overlooking pedestrian risk quantification system of two-dimensional world coordinate system

Country Status (1)

Country Link
CN (1) CN108932587B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033441B (en) * 2021-03-31 2024-05-10 广州敏视数码科技有限公司 Pedestrian collision early warning method based on wide-angle imaging

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1540286A (en) * 2003-04-25 2004-10-27 王舜清 Multifunctional equipment and method for traffic safety management in real time
CN102096803A (en) * 2010-11-29 2011-06-15 吉林大学 Safe state recognition system for people on basis of machine vision
CN103593641A (en) * 2012-08-16 2014-02-19 株式会社理光 Object detecting method and device based on stereoscopic camera
CN103971096A (en) * 2014-05-09 2014-08-06 哈尔滨工程大学 Multi-pose face recognition method based on MB-LBP features and face energy diagram
CN103996199A (en) * 2014-03-26 2014-08-20 北京大学深圳研究生院 Movement detection method based on depth information
CN105095908A (en) * 2014-05-16 2015-11-25 华为技术有限公司 Video image group behavior characteristic processing method and apparatus
CN108961313A (en) * 2018-06-29 2018-12-07 大连民族大学 Vertical view pedestrian's risk quantification method of two-dimensional world coordinate system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102137252A (en) * 2011-02-28 2011-07-27 兰州大学 Vehicle-mounted virtual panoramic display device
US9558667B2 (en) * 2012-07-09 2017-01-31 Elwha Llc Systems and methods for cooperative collision detection
WO2015011527A1 (en) * 2013-07-25 2015-01-29 Desarrollo E Investigación Tecnológica Axon Intelligence Co S.A. System and procedure to recognize, notice and identify a deficiency in the eye-motor function coordination of a motor vehicle driver
CN104616277B (en) * 2013-11-01 2019-02-22 深圳力维智联技术有限公司 Pedestrian's localization method and its device in video structural description
US9384666B1 (en) * 2015-02-01 2016-07-05 Thomas Danaher Harvey Methods to operate autonomous vehicles to pilot vehicles in groups or convoys
GB201818058D0 (en) * 2015-05-18 2018-12-19 Mobileye Vision Technologies Ltd Safety system for a vehicle to detect and warn of a potential collision
CN105469405B (en) * 2015-11-26 2018-08-03 清华大学 Positioning and map constructing method while view-based access control model ranging

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1540286A (en) * 2003-04-25 2004-10-27 王舜清 Multifunctional equipment and method for traffic safety management in real time
CN102096803A (en) * 2010-11-29 2011-06-15 吉林大学 Safe state recognition system for people on basis of machine vision
CN103593641A (en) * 2012-08-16 2014-02-19 株式会社理光 Object detecting method and device based on stereoscopic camera
CN103996199A (en) * 2014-03-26 2014-08-20 北京大学深圳研究生院 Movement detection method based on depth information
CN103971096A (en) * 2014-05-09 2014-08-06 哈尔滨工程大学 Multi-pose face recognition method based on MB-LBP features and face energy diagram
CN105095908A (en) * 2014-05-16 2015-11-25 华为技术有限公司 Video image group behavior characteristic processing method and apparatus
CN108961313A (en) * 2018-06-29 2018-12-07 大连民族大学 Vertical view pedestrian's risk quantification method of two-dimensional world coordinate system

Also Published As

Publication number Publication date
CN108932587A (en) 2018-12-04

Similar Documents

Publication Publication Date Title
CN108961313B (en) Overlooking pedestrian risk quantification method of two-dimensional world coordinate system
CN110764108B (en) Obstacle detection method and device for port automatic driving scene
CN110745140B (en) Vehicle lane change early warning method based on continuous image constraint pose estimation
CN108596058A (en) Running disorder object distance measuring method based on computer vision
CN108960183A (en) A kind of bend target identification system and method based on Multi-sensor Fusion
CN107972662A (en) To anti-collision warning method before a kind of vehicle based on deep learning
JP2023510734A (en) Lane detection and tracking method for imaging system
CN107985189B (en) Early warning method for lane changing depth of driver in high-speed driving environment
CN105678287B (en) A kind of method for detecting lane lines based on ridge measurement
CN107796373A (en) A kind of distance-finding method of the front vehicles monocular vision based on track plane geometry model-driven
US10984264B2 (en) Detection and validation of objects from sequential images of a camera
DE102010005290A1 (en) Vehicle controlling method for vehicle operator i.e. driver, involves associating tracked objects based on dissimilarity measure, and utilizing associated objects in collision preparation system to control operation of vehicle
US20190180121A1 (en) Detection of Objects from Images of a Camera
CN106909929A (en) Pedestrian's distance detection method and device
CN107290738A (en) A kind of method and apparatus for measuring front vehicles distance
CN109059863B (en) Method for mapping track point vector of head-up pedestrian to two-dimensional world coordinate system
CN106570487A (en) Method and device for predicting collision between objects
Adamshuk et al. On the applicability of inverse perspective mapping for the forward distance estimation based on the HSV colormap
CN117111055A (en) Vehicle state sensing method based on thunder fusion
CN112562061A (en) Driving vision enhancement system and method based on laser radar image
CN108932587B (en) Overlooking pedestrian risk quantification system of two-dimensional world coordinate system
TWI680898B (en) Light reaching detection device and method for close obstacles
CN108805105B (en) Method for constructing risk matrix before looking down two-dimensional world coordinate system
CN116710809A (en) System and method for monitoring LiDAR sensor health
CN113888463A (en) Wheel rotation angle detection method and device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210921