CN112666550B - Moving object detection method and device, fusion processing unit and medium


Info

Publication number
CN112666550B
Authority
CN
China
Prior art keywords
moving object
information
plane
event data
state
Prior art date
Legal status
Active
Application number
CN202011560817.6A
Other languages
Chinese (zh)
Other versions
CN112666550A (en)
Inventor
吴臻志
杨哲宇
Current Assignee
Beijing Lynxi Technology Co Ltd
Original Assignee
Beijing Lynxi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Lynxi Technology Co Ltd filed Critical Beijing Lynxi Technology Co Ltd
Priority to CN202011560817.6A priority Critical patent/CN112666550B/en
Publication of CN112666550A publication Critical patent/CN112666550A/en
Priority to PCT/CN2021/141370 priority patent/WO2022135594A1/en
Application granted granted Critical
Publication of CN112666550B publication Critical patent/CN112666550B/en


Landscapes

  • Radar Systems Or Details Thereof (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a method for detecting a moving object, including: acquiring event data, wherein the event data represents light intensity change information in a first plane; acquiring first state component information of a moving object in a first direction, wherein the first direction is perpendicular to the first plane; and carrying out fusion processing on the event data and the first state component information to generate three-dimensional state information of the moving object. The disclosure also provides a detection device for a moving object, a fusion processing unit and a computer readable medium.

Description

Moving object detection method and device, fusion processing unit and medium
Technical Field
The present disclosure relates to the technical field of target detection, and in particular to a method for detecting a moving object, an apparatus for detecting a moving object, a fusion processing unit, and a computer readable medium.
Background
Moving object detection is an important technology for video analysis and understanding, and is an important preprocessing step for computer vision tasks such as object recognition and moving object tracking. The inter-frame difference method and the optical flow method are common methods for detecting moving objects in some related arts. The inter-frame difference method obtains the outline of a moving object by calculating the difference between two adjacent image frames: the two frames are subtracted to obtain the absolute value of their brightness difference, and the motion characteristics of a video or image sequence are analyzed by judging whether the calculated absolute value is larger than a threshold. The optical flow method uses optical flow to describe the apparent motion of an observed object, surface, or edge caused by motion relative to an observer.
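For illustration only, the frame-difference computation described above can be sketched as follows (a minimal Python example assuming grayscale frames stored as NumPy arrays; the function name and threshold value are illustrative and not part of the disclosure):

    import numpy as np

    def frame_difference_mask(prev_frame, curr_frame, threshold=25):
        # Absolute brightness difference of two adjacent grayscale frames.
        diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
        # Pixels whose difference exceeds the threshold are taken as belonging to a moving object.
        return diff > threshold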
In some application scenarios, however, the detection performance of these moving object detection algorithms is unsatisfactory.
Disclosure of Invention
The present disclosure provides a method for detecting a moving object, an apparatus for detecting a moving object, a fusion processing unit, and a computer readable medium.
In a first aspect, an embodiment of the present disclosure provides a method for detecting a moving object, including:
acquiring event data, wherein the event data represents light intensity change information in a first plane;
acquiring first state component information of a moving object in a first direction, wherein the first direction is perpendicular to the first plane;
and carrying out fusion processing on the event data and the first state component information to generate three-dimensional state information of the moving object.
In a second aspect, an embodiment of the present disclosure provides a detection apparatus for a moving object, including:
the sensor is used for detecting light intensity change information in the first plane and generating event data;
a radar for detecting a motion state of a moving object in a first direction, and determining first state component information of the moving object in the first direction, wherein the first direction is perpendicular to the first plane;
and the fusion processing unit is used for carrying out fusion processing on the event data and the first state component information to generate three-dimensional state information of the moving object.
In a third aspect, an embodiment of the present disclosure provides a fusion processing unit applied to a detection device of a moving object, the fusion processing unit including:
one or more processors;
and a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the method for detecting a moving object according to the first aspect of the embodiments of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure provides a computer readable medium having stored thereon a computer program, which when executed by a processor, implements the method for detecting a moving object according to the first aspect of the embodiment of the present disclosure.
In the embodiments of the present disclosure, the sensor of the detection apparatus is motion-sensitive and can respond dynamically to scene changes in real time; the sensor detects light intensity change information in the first plane and generates event data. The fusion processing unit can fuse the event data, which represents the light intensity change information in the first plane, with the first state component information of the moving object in the first direction, so that three-dimensional state information of the moving object is obtained. At the same time, redundant data is eliminated, real-time dynamic response in moving object detection is achieved, and the detection efficiency for moving objects is improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure, without limitation to the disclosure. The above and other features and advantages will become more readily apparent to those skilled in the art by describing in detail exemplary embodiments with reference to the attached drawings, in which:
FIG. 1 is a flow chart of a detection method in an embodiment of the present disclosure;
FIG. 2 is a flow chart of some steps of another detection method in an embodiment of the present disclosure;
FIG. 3 is a flow chart of some steps of yet another detection method in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of one implementation of determining an offset in an embodiment of the present disclosure;
FIG. 5 is a flow chart of some steps of yet another detection method in an embodiment of the present disclosure;
FIG. 6 is a block diagram of a detection device in an embodiment of the present disclosure;
fig. 7 is a block diagram of a fusion processing unit in an embodiment of the disclosure.
Detailed Description
For a better understanding of the technical solutions of the present disclosure, exemplary embodiments of the present disclosure will be described below with reference to the accompanying drawings, in which various details of the embodiments of the present disclosure are included to facilitate understanding, and they should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Embodiments of the disclosure and features of embodiments may be combined with each other without conflict.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The terms "connected," "coupled," and the like are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In a first aspect, referring to fig. 1, an embodiment of the present disclosure provides a method for detecting a moving object, including:
in step S100, event data is acquired, wherein the event data characterizes light intensity variation information in a first plane;
in step S200, first state component information of a moving object in a first direction is acquired, wherein the first direction is perpendicular to the first plane;
in step S300, the event data and the first state component information are fused to generate three-dimensional state information of the moving object.
In an embodiment of the present disclosure, a detection apparatus for a moving object is provided for detecting the motion state of the moving object. The detection apparatus includes a sensor and a radar. The sensor in the detection apparatus is motion-sensitive and can respond dynamically to scene changes in real time. The event data in step S100 is generated by the sensor collecting light intensity variation information in the first plane; when generating the event data, the sensor retains only dynamic information.
In an embodiment of the present disclosure, the first state component information is generated by the radar detecting the motion of the moving object. The first state component information is not particularly limited in the embodiments of the present disclosure. For example, the first state component information may include a velocity component of the moving object in the first direction and/or a distance from the moving object to the detection apparatus.
In the embodiment of the present disclosure, the first direction is a direction parallel to a radial direction of the radar, and the first plane is a plane perpendicular to the radial direction of the radar.
In the embodiments of the present disclosure, step S100 and step S200 may be performed simultaneously to obtain event data and first state component information corresponding to the same time point; step S100 and step S200 may also each be performed repeatedly to obtain event data corresponding to a plurality of time points and first state component information corresponding to a plurality of time points. The embodiments of the present disclosure are not particularly limited thereto. In step S300, the event data and the first state component information corresponding to the same time point are aligned and calibrated during the fusion processing to generate the three-dimensional state information of the moving object.
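As a minimal sketch of such alignment (the disclosure does not prescribe a concrete alignment strategy; the function name, data layout, and tolerance below are assumptions), event frames and radar measurements carrying timestamps could be paired by nearest timestamp before fusion:

    def align_by_timestamp(event_frames, radar_samples, max_dt=0.005):
        # event_frames: list of (timestamp, event_frame);
        # radar_samples: list of (timestamp, first_state_component_info).
        pairs = []
        if not radar_samples:
            return pairs
        for t_e, frame in event_frames:
            # Pick the radar sample closest in time to this event frame.
            t_r, state = min(radar_samples, key=lambda sample: abs(sample[0] - t_e))
            if abs(t_r - t_e) <= max_dt:  # treat the two as corresponding to the same time point
                pairs.append((t_e, frame, state))
        return pairs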
The three-dimensional state information of the moving object generated through step S300 is not particularly limited in the embodiments of the present disclosure. For example, the three-dimensional state information may include the three-dimensional velocity and three-dimensional position coordinates of the moving object at respective time points, velocity components of the moving object at respective time points in a plurality of predetermined directions including the first direction, and a motion trajectory of the moving object generated from the three-dimensional velocities and three-dimensional positions at a plurality of time points.
In the embodiments of the present disclosure, the event data representing light intensity change information in the first plane and the first state component information of the moving object in the first direction are fused, so that three-dimensional state information of the moving object is obtained. Because the event data is generated by a motion-sensitive sensor responding dynamically to scene changes in real time, redundant data is eliminated, real-time dynamic response in moving object detection is achieved, and the detection efficiency for moving objects is improved.
In the embodiment of the disclosure, the state component information of the moving object on the first plane can be determined according to the event data, and the three-dimensional state information of the moving object is determined according to the state component information of the moving object on the first plane and the state component information in the first direction.
Accordingly, in some embodiments, referring to fig. 2, step S300 includes:
in step S310, determining second state component information of the moving object on the first plane according to the event data;
in step S320, three-dimensional state information of the moving object is generated according to the second state component information and the first state component information.
It should be noted that, in the embodiments of the present disclosure, the first state component information may be generated directly by the radar detecting the moving object; alternatively, the radar may detect the moving object to generate an initial detection signal, for example a Doppler shift corresponding to the moving object measured by a pulse Doppler radar, and the first state component information is then determined based on the initial detection signal. The embodiments of the present disclosure are not particularly limited thereto.
In the case where the radar detects the moving object to generate an initial detection signal, the detection method may further include the step of determining the first state component information from the initial detection signal.
The sensor in the detection apparatus is not particularly limited in the embodiments of the present disclosure. As an alternative embodiment, the sensor in the detection apparatus is a dynamic vision sensor (DVS, Dynamic Vision Sensor). A DVS is a sensor that imitates the working mechanism of biological vision: it detects changes in light and outputs the addresses of, and information about, the pixels whose light intensity has changed, thereby eliminating redundant data and responding dynamically to scene changes in real time.
In the embodiments of the present disclosure, the event data collected by the DVS is two-dimensional data in the first plane. From the addresses of the pixels whose light intensity has changed and the corresponding light intensity variation information provided by the DVS at each time point, a moving object in the first plane can be identified and its state component in the first plane can be determined.
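By way of illustration only (the disclosure does not fix an exact event format), a single DVS event could be represented by a timestamp, the pixel coordinates in the first plane, and a polarity indicating whether the light intensity increased or decreased; the Python class below is an assumed representation:

    from typing import NamedTuple

    class Event(NamedTuple):
        t: float       # timestamp of the light intensity change
        x: int         # pixel column in the first plane
        y: int         # pixel row in the first plane
        polarity: int  # +1 if the light intensity increased, -1 if it decreased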
Accordingly, in some embodiments, the event data includes coordinates of pixels where the first plane light intensity changes and light intensity change information; the second state component information includes position coordinates of the moving object in the first plane; referring to fig. 3, step S310 includes:
in step S311, the event data in the same sampling period is framed to generate an event frame;
in step S312, position coordinates of the moving object in the first plane are determined from the event frame.
In the embodiment of the present disclosure, there is no particular limitation on how to implement step S312. As an alternative embodiment, a target detection algorithm is used to determine the position coordinates of the moving object in the first plane from the event frame. In the embodiment of the present disclosure, the coordinates of any point on the moving object in the first plane may be taken as the position coordinates of the moving object in the first plane, for example, the coordinates of the center point of the moving object may be taken as the position coordinates of the moving object in the first plane. The embodiments of the present disclosure are not particularly limited thereto.
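A minimal sketch of steps S311 and S312, assuming events in the form shown above and taking the centroid of the active pixels as the position coordinates of the moving object (the disclosure leaves the concrete target detection algorithm open; the function names are illustrative):

    import numpy as np

    def make_event_frame(events, height, width):
        # Accumulate all events within one sampling period into a two-dimensional event frame.
        frame = np.zeros((height, width), dtype=np.int32)
        for e in events:
            frame[e.y, e.x] += 1
        return frame

    def object_position(frame):
        # Use the centroid of pixels with events as the (x, y) position of the moving object.
        ys, xs = np.nonzero(frame)
        if xs.size == 0:
            return None  # no light intensity change, hence no moving object detected
        return float(xs.mean()), float(ys.mean())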
In the embodiments of the present disclosure, the position coordinates of the moving object in the first plane may be expressed as (x, y), where x corresponds to one of the second direction and the third direction, and y corresponds to the other of the second direction and the third direction.
Accordingly, in some embodiments, the second state component information further includes a second velocity component of the moving object in a second direction and a third velocity component in a third direction, the second direction, the third direction being parallel to the first plane and the second direction being perpendicular to the third direction; referring to fig. 3, step S310 further includes:
in step S313, a second offset amount of the moving object in the second direction and a third offset amount in the third direction are respectively determined according to the position coordinates of the moving object in the first plane;
in step S314, determining the second velocity component according to the second offset;
in step S315, the third velocity component is determined according to the third offset.
In the embodiments of the present disclosure, the second direction and the third direction in the first plane are not particularly limited. As an alternative embodiment, the first direction, the second direction, and the third direction form a rectangular coordinate system, with the first direction being the direction of the vertical (z) axis of the rectangular coordinate system, the second direction being the direction of the horizontal (x) axis, and the third direction being the direction of the longitudinal (y) axis.
The embodiments of the present disclosure are not particularly limited as to how step S313 is performed to determine the second offset and the third offset. Fig. 4 shows an alternative embodiment for determining the third offset in the third direction from the coordinates of the pixels whose light intensity in the first plane changes at adjacent time points. As shown in fig. 4, the absolute value of the offset in the third direction can be obtained; similarly, the absolute value of the offset in the second direction can be obtained.
In steps S314 and S315, the time difference between adjacent time points is also used when determining the second velocity component and the third velocity component.
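A minimal sketch of steps S313 to S315, assuming the positions of the moving object determined from event frames at two adjacent time points; units and any pixel-to-physical calibration are left open by the disclosure, and the function name is illustrative:

    def planar_velocity(pos_prev, pos_curr, t_prev, t_curr):
        # pos_prev, pos_curr: (x, y) position coordinates of the moving object in the first plane.
        dt = t_curr - t_prev
        dx = pos_curr[0] - pos_prev[0]  # second offset, along the second direction
        dy = pos_curr[1] - pos_prev[1]  # third offset, along the third direction
        return dx / dt, dy / dt         # second and third velocity components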
In some embodiments, the first state component information includes a first velocity component and a distance of the moving object in the first direction; the three-dimensional state information comprises the three-dimensional speed and the three-dimensional coordinates of the moving object; referring to fig. 3, step S320 includes:
in step S321, the position coordinates of the moving object in the first plane, the first velocity component, the second velocity component, the third velocity component, and the distance of the moving object in the first direction that correspond to the same time point are determined;
in step S322, the three-dimensional speed and the three-dimensional coordinates are determined from the position coordinates of the moving object in the first plane, the first speed component, the second speed component, the third speed component, and the distance of the moving object in the first direction, which correspond to the same point in time.
In the embodiments of the present disclosure, the three-dimensional coordinates may be represented by (x, y, z), where z corresponds to the first direction, x corresponds to one of the second direction and the third direction, and y corresponds to the other of the second direction and the third direction.
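As a simplified illustrative sketch only (the mapping from pixel coordinates to physical units is not specified by the disclosure, so the direct combination below is an assumption), the three-dimensional state at one time point could be assembled from the planar position and velocity components together with the radar distance and the first velocity component:

    def assemble_3d_state(planar_pos, planar_vel, distance, first_velocity):
        # planar_pos: (x, y) in the first plane; planar_vel: (vx, vy) second and third velocity components.
        # distance: distance of the moving object in the first direction; first_velocity: first velocity component.
        x, y = planar_pos
        vx, vy = planar_vel
        coords_3d = (x, y, distance)            # z taken from the radar distance
        velocity_3d = (vx, vy, first_velocity)  # components in the second, third, and first directions
        return coords_3d, velocity_3d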
As an alternative embodiment, the event data is obtained by a DVS.
Accordingly, in some embodiments, referring to fig. 5, step S100 includes:
in step S110, in response to a change in the light intensity of pixels in the first plane, the event data is acquired, where the event data includes the coordinates of the pixels in the first plane whose light intensity changes and the light intensity change information.
As an alternative embodiment, the first state component information is obtained by a pulsed Doppler radar, which is capable of detecting information such as the velocity and distance of a moving object.
Accordingly, in some embodiments, referring to fig. 5, step S200 includes:
in step S210, a first velocity component and a distance of the moving object in the first direction are acquired as the first state component information.
In some embodiments, referring to fig. 5, the detection method further comprises:
in step S400, the three-dimensional state information is output.
The three-dimensional state information is not particularly limited in the embodiments of the present disclosure. For example, the three-dimensional state information may include the three-dimensional velocity and three-dimensional position coordinates of the moving object at respective time points, may further include velocity components of the moving object at respective time points in the first direction, the second direction, and the third direction, and may further include a motion trajectory of the moving object generated from the three-dimensional velocities and three-dimensional positions at a plurality of time points.
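For illustration, a motion trajectory of the kind mentioned above could be accumulated as a time-ordered list of three-dimensional coordinates; this sketch assumes per-time-point states of the form produced in the earlier examples:

    def build_trajectory(states):
        # states: iterable of (timestamp, coords_3d, velocity_3d) for successive time points.
        return sorted(((t, coords) for t, coords, _ in states), key=lambda item: item[0])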
The embodiment of the present disclosure is not particularly limited as to how to output the three-dimensional state information. For example, three-dimensional state information of a moving object may be displayed on a display screen.
In a second aspect, an embodiment of the present disclosure provides a detection apparatus for a moving object, referring to fig. 6, including:
a sensor 101 for detecting light intensity variation information in a first plane to generate event data;
a radar 102, configured to detect a motion state of a moving object in a first direction, and determine first state component information of the moving object in the first direction, where the first direction is perpendicular to the first plane;
and a fusion processing unit 103, configured to perform fusion processing on the event data and the first state component information, and generate three-dimensional state information of the moving object.
In the disclosed embodiment, the sensor 101 has motion sensitivity and is capable of real-time dynamic response to scene changes.
The first state component information is not particularly limited in the embodiments of the present disclosure. For example, the first state component information may include a velocity component of the moving object in the first direction and/or a distance from the moving object to the detection apparatus.
In an embodiment of the present disclosure, the fusion processing unit 103 may perform the method for detecting a moving object according to the first aspect of the embodiment of the present disclosure, and fuse the event data with the first state component information to generate three-dimensional state information of the moving object.
It should be noted that, in the embodiment of the present disclosure, the first direction is a direction parallel to the radial direction of the radar 102, and the first plane is a plane perpendicular to the radial direction of the radar 102.
The three-dimensional state information of the moving object is not particularly limited in the embodiments of the present disclosure. For example, the three-dimensional state information may include the three-dimensional velocity and three-dimensional position coordinates of the moving object at respective time points, velocity components of the moving object at respective time points in a plurality of predetermined directions including the first direction, and a motion trajectory of the moving object generated from the three-dimensional velocities and three-dimensional positions at a plurality of time points.
In the embodiments of the present disclosure, the sensor of the detection apparatus is motion-sensitive and can respond dynamically to scene changes in real time; it detects light intensity change information in the first plane and generates event data. The fusion processing unit can fuse the event data, which represents the dynamic information of the first plane, with the first state component information of the moving object in the first direction, so that three-dimensional state information of the moving object is obtained while redundant data is eliminated, real-time dynamic response in moving object detection is achieved, and the detection efficiency for moving objects is improved.
The sensor 101 in the detection apparatus is not particularly limited in the embodiments of the present disclosure. As an alternative embodiment, the sensor in the detection apparatus is a dynamic vision sensor (DVS, Dynamic Vision Sensor). A DVS is a sensor that imitates the working mechanism of biological vision: it detects changes in light and outputs the addresses of, and information about, the pixels whose light intensity has changed, thereby eliminating redundant data and responding dynamically to scene changes in real time.
Accordingly, in some embodiments, the sensor 101 is a dynamic vision sensor;
the dynamic vision sensor is used for detecting the change of the light intensity of the first plane and generating the event data; the event data includes coordinates of pixels where the light intensity of the first plane changes and light intensity change information.
As an alternative embodiment, the radar 102 in the detection apparatus is a pulse Doppler radar, and the fusion processing unit 103 acquires the first state component information from the pulse Doppler radar.
Accordingly, in some embodiments, the radar is a pulsed Doppler radar, and the first state component information includes a velocity component and a distance of the moving object in the first direction;
the pulse Doppler radar is used for sending and receiving pulse signals and determining the speed component and the distance of the moving object in the first direction.
A brief description of pulsed Doppler radar is provided below.
A Doppler radar is a radar that uses the Doppler effect to measure the radial velocity component of a target relative to the radar, or to extract targets having a specific radial velocity. A pulsed Doppler radar is a Doppler radar that transmits pulsed signals.
When a pulse Doppler radar scans the surveyed space at a fixed transmit frequency and encounters a moving target, a frequency difference arises between the frequency of the reflected echo and the frequency of the transmitted wave, namely the Doppler shift. The Doppler shift is proportional to the relative radial velocity between the moving target and the radar, so the radial velocity of the moving object can be determined from the magnitude of the Doppler shift. The Doppler shift is derived from the phase of the received signal; therefore, in the embodiments of the present disclosure, the radar 102 is a coherent radar, so that phase information is preserved.
It should be noted that, for a moving object, the Doppler shift is positive when the moving object moves toward the radar, and negative when the moving object moves away from the radar.
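For reference, for a monostatic radar the Doppler shift f_d and the radial velocity v_r are related by f_d = 2 * v_r / lambda, i.e. v_r = f_d * lambda / 2, where lambda is the carrier wavelength. A minimal sketch (not the disclosure's own implementation):

    def radial_velocity(doppler_shift_hz, wavelength_m):
        # Monostatic radar relation: f_d = 2 * v_r / wavelength, hence v_r = f_d * wavelength / 2.
        # A positive result indicates the moving object approaches the radar; negative, it moves away.
        return doppler_shift_hz * wavelength_m / 2.0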
In some embodiments, referring to fig. 6, the detection device further comprises:
an output unit 104 for outputting three-dimensional state information of the moving object.
The output unit is not particularly limited in the embodiments of the present disclosure. For example, the output unit may be a display screen on which the three-dimensional state information of the moving object is displayed.
In a third aspect, an embodiment of the present disclosure provides a fusion processing unit applied to a detection device of a moving object, with reference to fig. 7, the fusion processing unit including:
one or more processors 201;
a memory 202 having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method for detecting a moving object according to the first aspect of the embodiments of the present disclosure;
one or more I/O interfaces 203, coupled between the processor and the memory, are configured to enable information interaction of the processor with the memory.
The processor 201 is a device having data processing capability, including but not limited to a central processing unit (CPU); the memory 202 is a device having data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH); the I/O interface (read/write interface) 203 is connected between the processor 201 and the memory 202 and enables information interaction between the processor 201 and the memory 202 through, including but not limited to, a data bus (Bus).
In some embodiments, processor 201, memory 202, and I/O interface 203 are connected to each other and, in turn, to other components of the computing device via bus 204.
In a fourth aspect, an embodiment of the present disclosure provides a computer readable medium having stored thereon a computer program, which when executed by a processor, implements the method for detecting a moving object according to the first aspect of the embodiment of the present disclosure.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purpose of limitation. In some instances, it will be apparent to one skilled in the art that features, characteristics, and/or elements described in connection with a particular embodiment may be used alone or in combination with other embodiments unless explicitly stated otherwise. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the disclosure as set forth in the appended claims.

Claims (15)

1. A method of detecting a moving object, comprising:
acquiring event data, wherein the event data represents light intensity change information in a first plane, and the first plane is a plane perpendicular to the radial direction of the radar;
acquiring first state component information of a moving object in a first direction, wherein the first direction is perpendicular to the first plane, and the first state component information is information for representing the speed and/or the distance of the moving object in the first direction;
and carrying out fusion processing on the event data and the first state component information to generate three-dimensional state information of the moving object.
2. The detection method according to claim 1, wherein the step of fusing the event data with the first state component information to generate three-dimensional state information of the moving object includes:
determining second state component information of the moving object on the first plane according to the event data, wherein the second state component information comprises position coordinates of the moving object on the first plane;
and generating three-dimensional state information of the moving object according to the second state component information and the first state component information.
3. The detection method according to claim 2, wherein the event data includes coordinates of pixels where the first plane light intensity changes and light intensity change information; the second state component information includes position coordinates of the moving object in the first plane; the step of determining second state component information of the moving object on the first plane according to the event data comprises the following steps:
framing the event data in the same sampling period to generate an event frame;
and determining the position coordinates of the moving object in the first plane according to the event frame.
4. The detection method according to claim 3, wherein the second state component information further includes a second velocity component of the moving object in a second direction and a third velocity component in a third direction, the second direction, the third direction being parallel to the first plane and the second direction being perpendicular to the third direction; the step of determining second state component information of the moving object in the first plane according to the event data further comprises:
determining a second offset of the moving object in the second direction and a third offset of the moving object in the third direction according to the position coordinates of the moving object in the first plane;
determining the second velocity component from the second offset;
and determining the third speed component according to the third offset.
5. The detection method according to claim 4, wherein the first state component information includes a first velocity component and a distance of the moving object in the first direction; the three-dimensional state information comprises the three-dimensional speed and the three-dimensional coordinates of the moving object; the step of generating three-dimensional state information of the moving object from the second state component information and the first state component information includes:
determining a position coordinate of the moving object in the first plane, the first velocity component, the second velocity component, the third velocity component, and a distance of the moving object in the first direction corresponding to the same point in time;
the three-dimensional velocity and the three-dimensional coordinates are determined according to the position coordinates of the moving object in the first plane, the first velocity component, the second velocity component, the third velocity component, and the distance of the moving object in the first direction, which correspond to the same point in time.
6. The detection method according to any one of claims 1 to 5, wherein the step of acquiring event data includes:
and acquiring the event data in response to the light intensity change of the pixels in the first plane, wherein the event data comprises coordinates of the pixels with changed light intensity of the first plane and light intensity change information.
7. The detection method according to any one of claims 1 to 5, wherein the step of acquiring first state component information of the moving object in the first direction includes:
and acquiring a first speed component and a distance of the moving object in the first direction as the first state component information.
8. The detection method according to any one of claims 1 to 5, wherein the detection method further comprises:
and outputting the three-dimensional state information.
9. The detection method according to any one of claims 1 to 5, wherein the step of acquiring event data includes:
the event data is acquired by a dynamic vision sensor.
10. A moving object detection apparatus comprising:
the sensor is used for detecting light intensity change information in a first plane and generating event data, and the first plane is a plane perpendicular to the radial direction of the radar;
a radar for detecting a motion state of a moving object in a first direction, determining first state component information of the moving object in the first direction, wherein the first direction is perpendicular to the first plane, and the first state component information is information for representing a speed and/or a distance of the moving object in the first direction;
and the fusion processing unit is used for carrying out fusion processing on the event data and the first state component information to generate three-dimensional state information of the moving object.
11. The detection apparatus according to claim 10, wherein the sensor is a dynamic vision sensor;
the dynamic vision sensor is used for detecting the change of the light intensity of the pixels in the first plane and generating the event data; the event data includes coordinates of pixels where the light intensity of the first plane changes and light intensity change information.
12. The detection apparatus according to claim 10, wherein the radar is a pulsed Doppler radar, the first state component information including a velocity component and a distance of the moving object in the first direction;
the pulse Doppler radar is used for sending and receiving pulse signals and determining the speed component and the distance of the moving object in the first direction.
13. The detection apparatus according to any one of claims 10 to 12, wherein the detection apparatus further comprises:
and the output unit is used for outputting the three-dimensional state information of the moving object.
14. A fusion processing unit applied to a detection device of a moving object, the fusion processing unit comprising:
one or more processors;
storage means having stored thereon one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of detecting a moving object according to any one of claims 1 to 9.
15. A computer-readable medium, on which a computer program is stored, which program, when being executed by a processor, implements the method of detecting a moving object according to any one of claims 1 to 9.
CN202011560817.6A 2020-12-25 2020-12-25 Moving object detection method and device, fusion processing unit and medium Active CN112666550B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011560817.6A CN112666550B (en) 2020-12-25 2020-12-25 Moving object detection method and device, fusion processing unit and medium
PCT/CN2021/141370 WO2022135594A1 (en) 2020-12-25 2021-12-24 Method and apparatus for detecting target object, fusion processing unit, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011560817.6A CN112666550B (en) 2020-12-25 2020-12-25 Moving object detection method and device, fusion processing unit and medium

Publications (2)

Publication Number Publication Date
CN112666550A CN112666550A (en) 2021-04-16
CN112666550B (en) 2024-01-16

Family

ID=75408920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011560817.6A Active CN112666550B (en) 2020-12-25 2020-12-25 Moving object detection method and device, fusion processing unit and medium

Country Status (1)

Country Link
CN (1) CN112666550B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022135594A1 (en) * 2020-12-25 2022-06-30 北京灵汐科技有限公司 Method and apparatus for detecting target object, fusion processing unit, and medium
CN113096158A (en) * 2021-05-08 2021-07-09 北京灵汐科技有限公司 Moving object identification method and device, electronic equipment and readable storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106908783A (en) * 2017-02-23 2017-06-30 苏州大学 Obstacle detection method based on multi-sensor information fusion
CN107036534A (en) * 2016-02-03 2017-08-11 北京振兴计量测试研究所 Method and system based on laser speckle measurement Vibration Targets displacement
CN109146929A (en) * 2018-07-05 2019-01-04 中山大学 A kind of object identification and method for registering based under event triggering camera and three-dimensional laser radar emerging system
CN109752719A (en) * 2019-01-27 2019-05-14 南昌航空大学 A kind of intelligent automobile environment perception method based on multisensor
US10345447B1 (en) * 2018-06-27 2019-07-09 Luminar Technologies, Inc. Dynamic vision sensor to direct lidar scanning
CN110427823A (en) * 2019-06-28 2019-11-08 北京大学 Joint objective detection method and device based on video frame and pulse array signals
KR102104351B1 (en) * 2019-11-20 2020-05-29 주식회사 플랜비 Cctv camera for detecting moving object using radar module and cctv system including thereof
JP2020170370A (en) * 2019-04-04 2020-10-15 株式会社デンソー Falling object discrimination device, driving support system and falling object discrimination method
CN112106121A (en) * 2018-05-11 2020-12-18 三星电子株式会社 Electronic device and control method thereof

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10510160B2 (en) * 2016-12-20 2019-12-17 Samsung Electronics Co., Ltd. Multiscale weighted matching and sensor fusion for dynamic vision sensor tracking
US11886968B2 (en) * 2020-03-27 2024-01-30 Intel Corporation Methods and devices for detecting objects and calculating a time to contact in autonomous driving systems

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107036534A (en) * 2016-02-03 2017-08-11 北京振兴计量测试研究所 Method and system based on laser speckle measurement Vibration Targets displacement
CN106908783A (en) * 2017-02-23 2017-06-30 苏州大学 Obstacle detection method based on multi-sensor information fusion
CN112106121A (en) * 2018-05-11 2020-12-18 三星电子株式会社 Electronic device and control method thereof
US10345447B1 (en) * 2018-06-27 2019-07-09 Luminar Technologies, Inc. Dynamic vision sensor to direct lidar scanning
CN109146929A (en) * 2018-07-05 2019-01-04 中山大学 A kind of object identification and method for registering based under event triggering camera and three-dimensional laser radar emerging system
CN109752719A (en) * 2019-01-27 2019-05-14 南昌航空大学 A kind of intelligent automobile environment perception method based on multisensor
JP2020170370A (en) * 2019-04-04 2020-10-15 株式会社デンソー Falling object discrimination device, driving support system and falling object discrimination method
CN110427823A (en) * 2019-06-28 2019-11-08 北京大学 Joint objective detection method and device based on video frame and pulse array signals
KR102104351B1 (en) * 2019-11-20 2020-05-29 주식회사 플랜비 Cctv camera for detecting moving object using radar module and cctv system including thereof

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Guang Chen, "Event-Based Neuromorphic Vision for Autonomous Driving: A Paradigm Shift for Bio-Inspired Visual Sensing and Perception," IEEE Signal Processing Magazine, full text *
谢榛, "Research on Scene Perception Methods Based on UAV Vision" (基于无人机视觉的场景感知方法研究), China Doctoral Dissertations Full-text Database, Engineering Science and Technology II, full text *
桑永胜, 李仁昊, 李耀仟, 王蔷薇, 毛耀, "Neuromorphic Vision Sensors and Their Applications" (神经形态视觉传感器及其应用研究), Chinese Journal on Internet of Things, No. 4, full text *
王程, 陈峰, 吴金建, 赵勇, 雷浩, 刘纪元, 汶德胜, "Advances in Visual Sensing Mechanisms and Data Processing" (视觉传感机理与数据处理进展), Journal of Image and Graphics, No. 1, full text *

Also Published As

Publication number Publication date
CN112666550A (en) 2021-04-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant