CN100373394C - Moving target detection device based on bionic compound eye and method thereof - Google Patents


Info

Publication number
CN100373394C
CN100373394C CNB2005100950861A CN200510095086A
Authority
CN
China
Prior art keywords
image
moving target
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2005100950861A
Other languages
Chinese (zh)
Other versions
CN1932841A (en)
Inventor
徐贵力
黄祝新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CNB2005100950861A
Publication of CN1932841A
Application granted
Publication of CN100373394C

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A moving target detection device and method based on a bionic compound eye, belonging to the field of compound-eye moving target detection systems. The device comprises cameras, a hexagonal frame housing, a vertical pole, a pan-tilt head and DSP processors. The moving target detection method uses a moving object segmentation algorithm to extract and identify moving targets from video images that contain them. The dynamic and static parameter measurement method images the moving target with several cameras, computes its coordinates in a specified world coordinate system, and derives its trajectory and speed. The moving target tracking method judges the movement trend of the target from its dynamic and static parameters and drives the pan-tilt head over a serial port to follow it. The panorama generation method uses gray-level correlation to stitch multiple overlapping images into one large seamless picture. The system offers a wide field of view, high resolution and high sensitivity.

Description

Moving target detection method based on bionic compound eyes
Technical field
The present invention relates to a moving target detection method, specifically one used for the detection and tracking of moving targets.
Background technology
The measurement and tracking of moving targets is one of the important topics in machine vision research and a frontier direction that has attracted much attention from researchers in recent years, with scientific and practical value in fields such as pattern recognition, intelligent monitoring and navigation guidance. Its main content is the detection, identification and tracking of moving targets in image sequences that contain them.
The insect compound eye has two distinguishing features. First, although each ommatidium is small and has a narrow field of view, a compound eye composed of a large number of ommatidia has a wider field of view than the human eye: the horizontal field of view of some insects reaches 240° and the vertical field of view 360°, whereas the human field of view is only about 180°. Second, the insect compound eye is very sensitive to moving objects: a bee reacts to a suddenly appearing object in only 0.01 s, while the human eye needs 0.05 s. The insect compound eye therefore has special advantages for measuring and tracking moving objects.
According to the retrieved literature, research at home and abroad has mainly addressed the biological structure and imaging mechanism of the compound eye, and has simulated its visual functions with optical devices such as photoelectric sensors and cameras together with signal acquisition and processing. The various bionic compound-eye imaging systems have each been designed to realize specific functions. No in-depth study has yet been reported that applies the physiological characteristics of the compound eye, namely its large field of view, high resolution and sensitivity to moving targets, to moving target detection and tracking.
Summary of the invention
Because the compound eye has a large field of view, offers higher resolution than a single eye viewing the same field, and is particularly sensitive to moving objects, the purpose of the present invention, inspired by these physiological characteristics of the insect compound eye, is to develop a bionic compound-eye moving target detection system based on the compound-eye principle of target tracking that achieves high detection precision, strong accuracy, a wide field of view and high sensitivity to motion. To realize fast measurement and tracking of the three-dimensional motion of a target (motion at arbitrary angles to the plane of the imaging system), a bionic compound-eye system capable of fast measurement and tracking of moving targets is developed and designed according to the physiological characteristics of the insect compound eye.
The present invention is achieved as follows:
Based on the fact that the minimal structural unit of the dragonfly compound eye is a regular hexagon, the invention designs a moving target detection device in which seven cameras are arranged according to the compound-eye minimal unit and several DSPs process the video signals synchronously. The device stitches the multi-camera video images into a high-resolution panorama with an ultra-large field of view, can quickly detect and segment moving targets, measures their three-dimensional dynamic and static parameters, and controls a pan-tilt head that rotates the vision system to track the moving target. Several CCD cameras are used instead of an optical compound eye because they achieve high resolution while guaranteeing a large field of view, and each camera 'ommatidium' can process video in parallel, so the system combines a large field of view, high resolution, parallel detection, fast measurement of the target's dynamic and static parameters, and target tracking.
The device is composed as follows. The housing consists of seven regular hexagonal frames; with the central hexagonal frame as the center, one side of each of the remaining six hexagonal frames coincides with a side of the central frame. The front of the housing forms an arc surface, and one of the seven CCD cameras is mounted in each hexagonal frame. The whole housing is connected to a vertical pole, which is fixed by a support on a pan-tilt head that can rotate horizontally and tilt vertically. Each camera is connected to a detection and processing unit built around a DSP processor, and the cameras act as insect ommatidia to acquire video images. Because the device as a whole is an arc surface, the images of the seven cameras are stitched into one oversized panorama, which enlarges the field of view of the whole device while maintaining very high image resolution. The large field of view formed by multiple cameras makes it easier to find a target: spatial measurement of the target can be performed, or the video channel with the best imaging can be selected for processing, which makes the measurement more accurate; multiple DSP processors running in parallel increase processing efficiency; and by controlling the horizontal and vertical rotation of the pan-tilt head, the vision part of the whole housing can be rotated accordingly to track the moving target.
The moving target detection method based on bionic compound eyes of the present invention is characterized in that it comprises a moving target detection method, a dynamic and static parameter measurement method, a moving target tracking method and a panorama generation method, specifically:
(1) Moving target detection method: a moving object segmentation algorithm is used to extract and identify moving targets from video images that contain them. First, the video image is pre-processed with Wiener filtering, and the higher-order statistics of the frame-difference image are computed and thresholded to obtain frame-difference information; then background information accumulated from multiple frames is combined with the frame-difference information to extract the moving target; finally, a morphological filtering operation is applied to the extracted target to remove noise, holes and shadows and obtain a more complete moving target;
(2) Dynamic and static parameter measurement method: the moving target is imaged with several cameras, its coordinates in a specified world coordinate system are computed, its trajectory in that coordinate system is plotted, and its velocity is calculated. First the cameras are calibrated to obtain their intrinsic parameters and the transformation between image coordinates and the world coordinate system; then, from the target's image coordinates and the calibration results, its coordinates in the specified world coordinate system are obtained; finally the centroid of the moving target is extracted, its trajectory is drawn and its speed is computed;
(3) Moving target tracking method: the moving target detection device is rotated under serial-port control to track the moving target;
(4) Panorama generation method: gray-level correlation is used to stitch the seven overlapping video channels into one large seamless high-resolution image. First the feature regions in the overlap of adjacent images are determined; then the images are positioned and aligned according to these feature regions; finally the gray values of the pixels at the seam are determined so as to eliminate the artificial gap in the stitched image and achieve seamless stitching.
The present invention has a large field of view, high resolution and high sensitivity; it can measure and track moving targets quickly and can automatically generate seamless panoramas of very good quality.
Description of drawings
Fig. 1 is a front view of the structure of the moving target detection system based on bionic compound eyes; reference numerals: 1. camera, 2. housing, 3. vertical pole, 4. pan-tilt head;
Fig. 2 is the right view of Fig. 1;
Fig. 3 is a working principle diagram of the present invention;
Fig. 4 is the flow chart of the moving object segmentation algorithm.
Embodiment
Fig. 1 is a front view of the structure of the moving target detection system based on bionic compound eyes, and Fig. 2 is the right view of Fig. 1. The housing is composed of seven regular hexagonal frames that together form an arc-shaped measuring surface; with the central hexagonal frame as the center, one side of each of the remaining six frames coincides with a side of the central frame, and a camera 1 is installed in each hexagonal frame. The housing 2 is connected to a vertical pole 3, and the pole 3 is fixed on a pan-tilt head 4 that can rotate horizontally and vertically. In the detection and processing system of the invention, each of seven DSP processors is connected to its own camera 1, forming a synchronous, parallel and coordinated processing system.
The detection method of the present invention comprises moving target detection, measurement of dynamic and static parameters, moving target tracking and panorama generation. The working principle is shown in Fig. 3.
1. Moving target detection uses a segmentation algorithm to extract and identify moving targets from video images that contain them. First, the video image is pre-processed with Wiener filtering, and the higher-order statistics (HOS) of the frame-difference image are computed and thresholded to obtain frame-difference information; background information is accumulated from multiple frames; the frame-difference and background information are then used to extract the moving target; finally a morphological filtering operation removes noise, holes and shadows to obtain a more complete moving target.
The moving object segmentation algorithm has five main steps; the flow chart is shown in Fig. 4.
Step 1: frame-difference template. Threshold segmentation is used to obtain the frame-difference template of two consecutive frames. A non-zero inter-frame gray difference is attributed either to noise or to a moving object; noise statistics are generally Gaussian, whereas changes caused by moving objects are strongly structured, so an HOS hypothesis test is applied to the frame difference. The two consecutive frames are differenced, the local fourth-order moments of the difference image are computed, and a threshold is applied to obtain the frame-difference template.
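The patent itself contains no source code. Purely as an illustration, a minimal Python/OpenCV sketch of this step might look as follows; the helper name frame_diff_template, the window size, the threshold and the median normalisation of the fourth-order moment are our own assumptions, not values from the patent.

```python
import cv2
import numpy as np

def frame_diff_template(prev_gray, curr_gray, win=5, thresh=4.0):
    """Frame-difference template via local fourth-order moments (HOS test).

    Sketch only: window size, threshold and normalisation are illustrative.
    """
    d = cv2.absdiff(curr_gray, prev_gray).astype(np.float32)
    # Local mean of the frame difference over a win x win neighbourhood
    mu = cv2.boxFilter(d, ddepth=-1, ksize=(win, win))
    # Local fourth-order central moment of the frame difference
    m4 = cv2.boxFilter((d - mu) ** 4, ddepth=-1, ksize=(win, win))
    # Normalise by a robust noise scale and threshold to get the change template
    m4_norm = m4 / (np.median(m4) + 1e-6)
    return (m4_norm > thresh).astype(np.uint8)  # 1 = candidate motion, 0 = static
```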
Step 2: background registration. Based on the frame-difference templates of past frames, a pixel that has remained unchanged for a long time is considered reliable background. This step maintains a continuously refreshed background buffer and, through background registration, decides whether each pixel is reliable as background.
Step 3: background-subtraction template. By comparing the current frame with the background frame in the buffer, the background-subtraction template is obtained. This template is the initial prototype of the moving target contour.
Step 4: object template. The initial object template is built from the background-subtraction template and the frame-difference template: if background registration shows that the background information of a pixel is reliable, its background-subtraction value is used as the object template; otherwise its frame-difference value is used.
Step 5: post-processing. Because of random noise and camera noise, and because the boundary of the moving object is not smooth, post-processing such as morphological filtering is applied to remove noise, smooth the boundary and eliminate holes and shadows, so as to obtain a more complete moving target.
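Continuing the illustration, steps 2 to 5 can be sketched as below; the class name, the stability count of 30 frames, the background-difference threshold of 25 grey levels and the 5x5 structuring element are assumed values, and frame_diff_template is the hypothetical helper from the previous sketch.

```python
import cv2
import numpy as np

class MovingTargetSegmenter:
    """Steps 2-5 of the segmentation pipeline (sketch with assumed parameters)."""

    def __init__(self, shape, stable_frames=30, bg_thresh=25):
        self.stable_count = np.zeros(shape, np.int32)  # frames each pixel has stayed still
        self.background = None                         # continuously refreshed buffer
        self.reliable = np.zeros(shape, bool)          # background-registration flag
        self.stable_frames = stable_frames
        self.bg_thresh = bg_thresh

    def update(self, gray, fd_template):
        # Step 2: background registration - a pixel still for long enough is reliable
        self.stable_count = np.where(fd_template == 0, self.stable_count + 1, 0)
        if self.background is None:
            self.background = gray.astype(np.float32)
        stable = self.stable_count >= self.stable_frames
        self.background[stable] = gray[stable]         # refresh the background buffer
        self.reliable |= stable

        # Step 3: background-subtraction template
        bg_diff = cv2.absdiff(gray, self.background.astype(np.uint8))
        bd_template = (bg_diff > self.bg_thresh).astype(np.uint8)

        # Step 4: object template - background result where reliable, frame difference otherwise
        target = np.where(self.reliable, bd_template, fd_template)

        # Step 5: post-processing - morphology removes noise, fills holes, smooths the boundary
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        target = cv2.morphologyEx(target, cv2.MORPH_OPEN, kernel)
        return cv2.morphologyEx(target, cv2.MORPH_CLOSE, kernel)

# Usage sketch: seg = MovingTargetSegmenter(gray.shape)
#               mask = seg.update(gray, frame_diff_template(prev_gray, gray))
```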
2. Dynamic and static parameter measurement images the moving target with several cameras, computes its coordinates (static parameters) in a specified world coordinate system, plots its trajectory in that coordinate system and computes its velocity (dynamic parameters). First, the cameras are calibrated to obtain their intrinsic parameters and the transformation between image coordinates and world coordinates; then, from the target's image coordinates and the calibration results, its coordinates in the specified world coordinate system (with the lens of the central camera as the origin, the lens plane as the X-Y plane, and the outward normal as the positive Z axis) are obtained; finally, the centroid of the moving target is extracted, its trajectory is drawn, its speed is computed, and its movement trend is judged, providing the parameters for controlling the rotation of the pan-tilt head.
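The patent gives no explicit formulas for this step. Under the assumption of two calibrated views with known 3x4 projection matrices (composed from the calibration results as K[R|t] in the chosen world frame), a minimal sketch of recovering the target's world coordinates, centroid and speed could look as follows; the function names are ours, while cv2.triangulatePoints and cv2.moments are standard OpenCV calls.

```python
import cv2
import numpy as np

def world_coords_from_two_views(P1, P2, pt1, pt2):
    """Triangulate the target centroid seen by two calibrated cameras (sketch).

    P1, P2: 3x4 projection matrices K[R|t] expressed in the chosen world frame;
    pt1, pt2: (u, v) image coordinates of the target centroid in the two views.
    """
    a = np.asarray(pt1, np.float32).reshape(2, 1)
    b = np.asarray(pt2, np.float32).reshape(2, 1)
    X = cv2.triangulatePoints(P1, P2, a, b)     # homogeneous 4x1 result
    return (X[:3] / X[3]).ravel()               # 3-D world coordinates

def centroid(mask):
    """Centroid of a binary moving-target mask (image coordinates)."""
    m = cv2.moments(mask, binaryImage=True)
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def speed(p_prev, p_curr, dt):
    """Speed estimate from two successive world positions sampled dt seconds apart."""
    return float(np.linalg.norm(np.asarray(p_curr) - np.asarray(p_prev)) / dt)
```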
3. Moving target tracking judges the movement trend of the segmented target from its dynamic and static parameters and controls the pan-tilt head through the serial port to rotate and follow it.
Measurement of a moving target requires that the target remains within the camera's field of view at all times, preferably near the center of the image. When the target leaves the field of view or its image position is poor, the camera must therefore be adjusted to satisfy the measurement requirement. Two methods meet this requirement: enlarging the field of view, and rotating the camera to track the target; the present invention uses both. The system of seven cameras greatly enlarges the field of view, and the motion parameters computed above are used to make the pan-tilt head drive the cameras to track the moving target.
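As a hedged sketch of the tracking loop: assuming the pan-tilt head accepts text commands over a serial port (driven here with the pyserial package), a simple proportional controller could keep the target centroid near the image centre. The command string, port name, baud rate, gain and dead-band below are purely hypothetical, since the patent does not specify the pan-tilt protocol.

```python
import serial  # pyserial, an assumed dependency

def track_target(ser, cx, cy, frame_w, frame_h, gain=0.05, deadband=20):
    """Steer the pan-tilt head so the target centroid stays near the image centre.

    Sketch only: the command format, gain and dead-band are illustrative,
    not the protocol of the pan-tilt head used in the patent.
    """
    err_x = cx - frame_w / 2.0
    err_y = cy - frame_h / 2.0
    if abs(err_x) < deadband and abs(err_y) < deadband:
        return                                  # target already near the centre
    pan_step = int(gain * err_x)                # proportional correction, horizontal axis
    tilt_step = int(-gain * err_y)              # image y grows downward, tilt up is positive
    ser.write(b"PAN:%+d TILT:%+d\r\n" % (pan_step, tilt_step))

# Usage sketch (port name and baud rate are assumptions):
# ser = serial.Serial("/dev/ttyS0", 9600, timeout=0.1)
# track_target(ser, cx, cy, 768, 576)
```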
4. Panorama generation uses a gray-level correlation algorithm to stitch the seven video channels into a panoramic image with an ultra-large field of view. A feature region is selected in the overlap between any image and its neighbor, gray-level correlation is used to find the matching feature region in the other image, the two images are then aligned and stitched seamlessly, and the same procedure is applied to stitch all seven video channels.
Automatically building large, high-resolution images by stitching has long been an active research field in photogrammetry, computer vision, image processing and computer graphics. Stitching a panorama has two main parts: 1. local alignment, i.e. aligning two images; 2. image integration, i.e. aligning the images to a specified reference frame to form one large image (global alignment) together with image fusion.
Depending on how the source images are obtained, panorama stitching algorithms fall roughly into several classes: 1. cylindrical/spherical panoramas, which require the camera to rotate horizontally about a vertical axis; 2. panoramas based on affine transformations, which handle camera translation, lens zoom and rotation about the optical axis; 3. panoramas based on perspective transformations, which place no strict restriction on camera motion but require the photographed scene to be approximately planar so as to avoid parallax. In practical shooting, when the subject is far enough from the camera, the scene can be treated as a plane.
The present invention uses the correlation method to stitch the seven overlapping video channels into one large seamless high-resolution image, in three steps: (1) determine the feature regions in the overlap of adjacent images; (2) align the images by positioning them according to the feature regions; (3) determine the gray levels of the pixels at the seam, so as to eliminate the artificial gap in the stitched image and achieve seamless stitching.
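For illustration only, the gray-level correlation alignment and seam blending can be sketched with OpenCV's normalised cross-correlation template matching; the overlap width and patch size are assumed values, the images are taken to be single-channel, and the vertical offset is ignored for brevity.

```python
import cv2
import numpy as np

def align_pair(left, right, overlap=120, patch=(64, 64)):
    """Offset of `right` relative to `left` by gray-level correlation (sketch).

    A patch from the assumed overlap strip of `left` is matched against `right`
    with normalised cross-correlation; sizes are illustrative.
    """
    h, w = left.shape[:2]
    ph, pw = patch
    y0, x0 = h // 2 - ph // 2, w - overlap       # feature region inside the overlap strip
    template = left[y0:y0 + ph, x0:x0 + pw]
    score = cv2.matchTemplate(right, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (mx, my) = cv2.minMaxLoc(score)     # best correlation peak in `right`
    return x0 - mx, y0 - my                      # shift that maps `right` into `left`

def stitch_pair(left, right, dx):
    """Paste `right` at column dx of `left` and feather grey levels across the overlap.

    Assumes a purely horizontal shift with dx < left width, for brevity.
    """
    h, w = left.shape[:2]
    canvas = np.zeros((h, dx + right.shape[1]), np.float32)
    canvas[:, :w] = left
    canvas[:, dx:] = right                       # right initially overwrites the overlap
    ov = w - dx                                  # width of the overlap region
    ramp = np.linspace(0.0, 1.0, ov)             # 0 -> keep left, 1 -> keep right
    canvas[:, dx:w] = (1 - ramp) * left[:, dx:w] + ramp * canvas[:, dx:w]
    return canvas.astype(left.dtype)
```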
Moving target detection and tracking are achieved with the detection device and detection method described above.

Claims (1)

1. A moving target detection method based on bionic compound eyes, characterized in that it comprises a moving target detection method, a dynamic and static parameter measurement method, a moving target tracking method and a panorama generation method, specifically:
(1) moving target detection method: a moving object segmentation algorithm is used to extract and identify moving targets from video images that contain them; first, the video image is pre-processed with Wiener filtering, and the higher-order statistics of the frame-difference image are computed and thresholded to obtain frame-difference information; then background information accumulated from multiple frames is combined with the frame-difference information to extract the moving target; finally, a morphological filtering operation is applied to the extracted target to remove noise, holes and shadows and obtain a more complete moving target;
(2) dynamic and static parameter measurement method: the moving target is imaged with several cameras, its coordinates in a specified world coordinate system are computed, its trajectory in that coordinate system is plotted, and its velocity is calculated; first the cameras are calibrated to obtain their intrinsic parameters and the transformation between image coordinates and the world coordinate system; then, from the target's image coordinates and the calibration results, its coordinates in the specified world coordinate system are obtained; finally the centroid of the moving target is extracted, its trajectory is drawn and its speed is computed;
(3) moving target tracking method: the moving target detection device is rotated under serial-port control to track the moving target;
(4) panorama generation method: gray-level correlation is used to stitch the seven overlapping video channels into one large seamless high-resolution image; first the feature regions in the overlap of adjacent images are determined; then the images are positioned and aligned according to these feature regions; finally the gray values of the pixels at the seam are determined so as to eliminate the artificial gap in the stitched image and achieve seamless stitching.
CNB2005100950861A 2005-10-28 2005-10-28 Moving target detection device based on bionic compound eye and method thereof Expired - Fee Related CN100373394C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005100950861A CN100373394C (en) 2005-10-28 2005-10-28 Moving target detection device based on bionic compound eye and method thereof

Publications (2)

Publication Number Publication Date
CN1932841A CN1932841A (en) 2007-03-21
CN100373394C true CN100373394C (en) 2008-03-05

Family

ID=37878670

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005100950861A Expired - Fee Related CN100373394C (en) 2005-10-28 2005-10-28 Moving target detection device based on bionic compound eye and method thereof

Country Status (1)

Country Link
CN (1) CN100373394C (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4818987B2 (en) * 2007-05-21 2011-11-16 オリンパスイメージング株式会社 Imaging apparatus, display method, and program
CN101968890B (en) * 2009-07-27 2013-07-10 西安费斯达自动化工程有限公司 360-degree full-view simulation system based on spherical display
CN102063724A (en) * 2010-11-25 2011-05-18 四川省绵阳西南自动化研究所 Panoramic virtual alert target relay tracking device
CN102510436B (en) * 2011-10-17 2014-06-25 河海大学常州校区 Device and method for detecting high-speed tiny target online in real time by simulating fly vision
CN103123690B (en) * 2011-11-21 2017-02-22 中兴通讯股份有限公司 Information acquisition device, information acquisition method, identification system and identification method
CN102572220A (en) * 2012-02-28 2012-07-11 北京大学 Bionic compound eye moving object detection method adopting new 3-2-2 spatial information conversion model
CN105302160A (en) * 2013-09-10 2016-02-03 蒋春花 Processor controlled bionic compound eye perception imaging information acquisition system
CN103888751A (en) * 2014-03-12 2014-06-25 天津理工大学 Embedded type panoramic three-dimensional spherical visual image acquisition system based on DSP
CN103903279B (en) * 2014-03-21 2017-07-25 上海大学 Parallel Tracking System and method for based on bionic binocular vision airborne platform
CN103996181B (en) * 2014-05-12 2017-06-23 上海大学 A kind of big view field image splicing system and method based on bionical eyes
CN104165626B (en) * 2014-06-18 2019-08-13 长春理工大学 Bionic compound eyes imageable target positioning system
CN104270576B (en) * 2014-10-23 2017-07-04 吉林大学 A kind of bionic telescopic formula sector compound eye
EP3268929A1 (en) * 2015-03-13 2018-01-17 Aqueti Incorporated Multi-array camera imaging system and method therefor
CN105352482B (en) * 2015-11-02 2017-12-26 北京大学 332 dimension object detection methods and system based on bionic compound eyes micro lens technology
CN105954292B (en) * 2016-04-29 2018-09-14 河海大学常州校区 Underwater works surface crack detection device based on the bionical vision of compound eye and method
CN106485736B (en) * 2016-10-27 2022-04-12 深圳市道通智能航空技术股份有限公司 Panoramic visual tracking method for unmanned aerial vehicle, unmanned aerial vehicle and control terminal
CN106791294A (en) * 2016-11-25 2017-05-31 益海芯电子技术江苏有限公司 Motion target tracking method
CN106598046B (en) * 2016-11-29 2020-07-10 北京儒博科技有限公司 Robot avoidance control method and device
CN108881702B (en) * 2017-05-09 2020-12-11 浙江凡后科技有限公司 System and method for capturing object motion track by multiple cameras
CN108317958A (en) * 2017-12-29 2018-07-24 广州超音速自动化科技股份有限公司 A kind of image measuring method and measuring instrument
CN110197097B (en) * 2018-02-24 2024-04-19 北京图森智途科技有限公司 Harbor district monitoring method and system and central control system
EP3606032B1 (en) * 2018-07-30 2020-10-21 Axis AB Method and camera system combining views from plurality of cameras
CN110794575A (en) * 2019-10-23 2020-02-14 天津大学 Bionic compound eye space detection and positioning system based on light energy information
CN113610898A (en) * 2021-08-25 2021-11-05 浙江大华技术股份有限公司 Holder control method and device, storage medium and electronic device
CN116994075B (en) * 2023-09-27 2023-12-15 安徽大学 Small target rapid early warning and identifying method based on compound eye event imaging

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5517019A (en) * 1995-03-07 1996-05-14 Lopez; Luis R. Optical compound eye sensor with ommatidium sensor and related methods
CN2368099Y (en) * 1999-01-06 2000-03-08 胡志立 Compound eye type infrared scanning vehicle-kind detector
CN2483797Y (en) * 2001-03-16 2002-03-27 王广生 Controllable visual field type display apparatus
JP2005057328A (en) * 2003-08-04 2005-03-03 Fuji Photo Film Co Ltd Compound eye image pickup apparatus and control method thereof
CN1655013A (en) * 2005-02-28 2005-08-17 北京理工大学 Compound eye stereoscopic vision device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on visual motion detection based on the structure of biological compound eyes. 李东光, 殷俊, 房慧敏. Optical Technique (光学技术), Vol. 31 (Supplement), 2005 *
A multi-channel imaging system for moving target detection. 田维坚, 姚胜利, 陈荣利, 张薇, 李小俊. Acta Photonica Sinica (光子学报), Vol. 31, No. 1, 2002 *

Also Published As

Publication number Publication date
CN1932841A (en) 2007-03-21

Similar Documents

Publication Publication Date Title
CN100373394C (en) Moving target detection device based on bionic compound eye and method thereof
CN108846867A (en) A kind of SLAM system based on more mesh panorama inertial navigations
US8848035B2 (en) Device for generating three dimensional surface models of moving objects
CN103868460B (en) Binocular stereo vision method for automatic measurement based on parallax optimized algorithm
CN110458897B (en) Multi-camera automatic calibration method and system and monitoring method and system
Klein Visual tracking for augmented reality
CN1701595B (en) Image pickup processing method and image pickup apparatus
US20110085704A1 (en) Markerless motion capturing apparatus and method
CN114119739A (en) Binocular vision-based hand key point space coordinate acquisition method
CN103006332A (en) Scalpel tracking method and device and digital stereoscopic microscope system
US20200159339A1 (en) Desktop spatial stereoscopic interaction system
Pang et al. Generation of high speed CMOS multiplier-accumulators
Saner et al. High-Speed Object Tracking Using an Asynchronous Temporal Contrast Sensor.
Gu et al. Real-time image mosaicing system using a high-frame-rate video sequence
McMurrough et al. Low-cost head position tracking for gaze point estimation
CN106846379A (en) Multi-vision visual system and its application method
Wu et al. FlyTracker: Motion tracking and obstacle detection for drones using event cameras
CN111784749A (en) Space positioning and motion analysis system based on binocular vision
Ohmura et al. Method of detecting face direction using image processing for human interface
Rougeaux et al. Tracking a moving object with a stereo camera head
CN112052827B (en) Screen hiding method based on artificial intelligence technology
Zhenglei et al. Laser Scanning Measurement based on Event Cameras
CN115223023B (en) Human body contour estimation method and device based on stereoscopic vision and deep neural network
Hallerbach Development of a toolset and benchmark framework for monocular event-based depth extraction
Du et al. Location Estimation from an Indoor Selfie

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C19 Lapse of patent right due to non-payment of the annual fee
CF01 Termination of patent right due to non-payment of annual fee