CN112949409A - Eye movement data analysis method and device based on object of interest and computer equipment - Google Patents
Info
- Publication number: CN112949409A
- Application number: CN202110144505.5A
- Authority: CN (China)
- Prior art keywords: OOI, eye movement, video frames, movement data, video
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/41 — Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06F18/2413 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06T5/30 — Erosion or dilatation, e.g. thinning
- G06T5/70 — Denoising; Smoothing
Abstract
Embodiments of the invention disclose an eye movement data analysis method and device based on an object of interest, a computer device, and a storage medium. The eye movement data analysis method based on an object of interest comprises the following steps: acquiring a target video stream comprising a plurality of consecutive video frames; sequentially detecting the plurality of consecutive video frames by using an object of interest (OOI) detection algorithm, and determining whether OOIs exist in the plurality of consecutive video frames; when OOIs exist in the plurality of video frames, determining coordinate information of the OOI area corresponding to each OOI; acquiring eye movement data collected while a subject watches a video stream of interest, wherein the starting video frame of the video stream of interest is the first video frame among the plurality of consecutive video frames in which an OOI is detected, and the end video frame is the last video frame in which an OOI is detected; and determining an eye movement indicator based on the coordinate information of each OOI area and the eye movement data. Embodiments of the invention improve the accuracy and efficiency of eye movement data analysis results.
Description
Technical Field
Embodiments of the invention relate to the field of data analysis, and in particular to an eye movement data analysis method and device based on an object of interest, and a computer device.
Background
In eye movement research that uses video materials, automatic analysis of the collected eye movement data helps improve the accuracy and efficiency of eye movement data analysis and accelerates research progress in related fields such as human-computer interaction, developmental psychology, advertising psychology, engineering psychology, and traffic psychology.
In the related art, eye movement data of a subject watching a video stream is generally acquired, an analyst manually delineates the Object of Interest (OOI) in each video frame of the video stream, and the subject's gaze track on each OOI, the subject's degree of attention to each OOI, and the like are obtained based on the OOIs and the eye movement data.
However, manual OOI delineation by an analyst is time-consuming and labor-intensive, inefficient, and error-prone, so the accuracy of the eye movement data analysis results is low.
Disclosure of Invention
The invention provides an eye movement data analysis method and device based on an object of interest, a computer device, and a storage medium, so as to improve the accuracy and efficiency of eye movement data analysis results.
In a first aspect, an embodiment of the present invention provides an eye movement data analysis method based on an object of interest, including:
acquiring a target video stream comprising a plurality of consecutive video frames;
sequentially detecting the plurality of consecutive video frames by using an object of interest (OOI) detection algorithm, and determining whether OOIs exist in the plurality of consecutive video frames;
when the OOIs exist in the plurality of video frames, determining coordinate information of OOI areas corresponding to the OOIs;
acquiring eye movement data collected while a subject watches a video stream of interest, wherein the starting video frame of the video stream of interest is the first video frame among the plurality of consecutive video frames in which the OOI is detected, and the end video frame of the video stream of interest is the last video frame among the plurality of consecutive video frames in which the OOI is detected;
and determining an eye movement indicator based on the coordinate information of each OOI area and the eye movement data.
Optionally, a target video frame among the plurality of consecutive video frames includes a target object determined according to an operation of an analyst, and before the plurality of consecutive video frames are sequentially detected by using the object of interest (OOI) detection algorithm, the method further includes:
determining a target type of the target object;
the sequentially detecting the plurality of continuous video frames by using an object of interest (OOI) detection algorithm comprises:
and sequentially detecting, by using the OOI detection algorithm corresponding to the target type, the consecutive video frames located after the target video frame among the plurality of consecutive video frames.
Optionally, the sequentially detecting the plurality of consecutive video frames by using the object of interest OOI detection algorithm includes:
and sequentially detecting the plurality of consecutive video frames by using at least one preset OOI detection algorithm in one-to-one correspondence with at least one object type.
Optionally, before acquiring the target video stream including a plurality of consecutive video frames, the method further includes:
acquiring an initial video stream;
clipping the target video stream from the initial video stream based on a start time and/or an end time, wherein the start time and/or the end time are determined according to an operation of an analyst.
Optionally, before the sequentially detecting the plurality of consecutive video frames by using the object of interest OOI detection algorithm, the method further includes:
performing image processing on the plurality of consecutive video frames, the image processing including at least one of smoothing, dilation and erosion processing, and filtering.
Optionally, the OOI detection algorithm corresponding to the OOI of the face type includes at least one of: a Gaussian model algorithm based on skin detection, a face detection algorithm based on an elastic model and a specific face subspace algorithm;
the OOI detection algorithm corresponding to the OOI of the moving object type comprises at least one of the following: background subtraction, feature point tracking algorithms, and active contour-based tracking algorithms.
Optionally, the OOI detection algorithm includes a Gaussian model algorithm based on skin detection, and the sequentially detecting the plurality of consecutive video frames by using the object of interest (OOI) detection algorithm includes:
subjecting any one of the plurality of consecutive video frames to YCbCr color space conversion to separate luminance from chrominance and obtain CbCr color space values;
establishing a Gaussian skin color model based on the CbCr color space values;
and detecting whether an OOI of the face type exists in the video frame by using the Gaussian skin color model.
In a second aspect, an embodiment of the present invention provides an apparatus for analyzing eye movement data based on an object of interest, including:
a first obtaining module, configured to obtain a target video stream including a plurality of consecutive video frames;
a judging module, configured to sequentially detect the plurality of consecutive video frames by using an object of interest (OOI) detection algorithm and determine whether OOIs exist in the plurality of consecutive video frames;
a first determining module, configured to determine, when the OOIs exist in the multiple video frames, coordinate information of an OOI area corresponding to each OOI;
a second obtaining module, configured to obtain eye movement data collected while a subject watches a video stream of interest, where the starting video frame of the video stream of interest is the first video frame among the plurality of consecutive video frames in which the OOI is detected; the end video frame of the video stream of interest is the last video frame among the plurality of consecutive video frames in which the OOI is detected;
and a second determining module, configured to determine an eye movement indicator based on the coordinate information of each OOI area and the eye movement data.
Optionally, a target video frame in the plurality of consecutive video frames includes a target object determined according to an operation of an analyst, and the apparatus further includes:
the third determining module is used for determining the target type of the target object;
the judging module is used for:
and sequentially detecting, by using the OOI detection algorithm corresponding to the target type, the consecutive video frames located after the target video frame among the plurality of consecutive video frames.
Optionally, the determining module is configured to:
and sequentially detecting the plurality of consecutive video frames by using at least one preset OOI detection algorithm in one-to-one correspondence with at least one object type.
Optionally, the apparatus further comprises:
a third obtaining module, configured to obtain an initial video stream;
and a clipping module, configured to clip the target video stream from the initial video stream based on a start time and/or an end time, where the start time and/or the end time are determined according to an operation of an analyst.
Optionally, the apparatus further comprises:
a processing module, configured to perform image processing on the plurality of consecutive video frames, the image processing including at least one of smoothing, dilation and erosion processing, and filtering.
Optionally, the OOI detection algorithm corresponding to the OOI of the face type includes at least one of: a Gaussian model algorithm based on skin detection, a face detection algorithm based on an elastic model and a specific face subspace algorithm;
the OOI detection algorithm corresponding to the OOI of the moving object type comprises at least one of the following: background subtraction, feature point tracking algorithms, and active contour-based tracking algorithms.
Optionally, the OOI detection algorithm includes a gaussian model algorithm based on skin detection, and the determining module is configured to:
subject any one of the plurality of consecutive video frames to YCbCr color space conversion to separate luminance from chrominance and obtain CbCr color space values;
establish a Gaussian skin color model based on the CbCr color space values;
and detect whether an OOI of the face type exists in the video frame by using the Gaussian skin color model.
In a third aspect, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the method for analyzing eye movement data based on an object of interest according to any one of the first aspect when executing the program.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the method for analyzing eye movement data based on an object of interest according to any one of the first aspect.
The method sequentially detects a plurality of consecutive video frames in a target video stream by using an OOI detection algorithm, determines the coordinate information of the OOI area corresponding to each detected OOI when OOIs exist in the plurality of consecutive video frames, then obtains eye movement data collected while a subject watches the video stream of interest within the target video stream, and determines eye movement indicators from the coordinate information of each OOI area and the eye movement data. Whether OOIs exist in the consecutive video frames can be detected automatically by the OOI detection algorithm, without an analyst subjectively and manually delineating the OOIs, which solves the problems of low accuracy and low efficiency of eye movement data analysis results in the related art.
Drawings
FIG. 1 is a flowchart of an eye movement data analysis method based on an object of interest according to an embodiment of the present invention;
FIG. 2 is a flowchart of another eye movement data analysis method based on an object of interest according to an embodiment of the present invention;
FIG. 3 is a block diagram of an eye movement data analysis apparatus based on an object of interest according to an embodiment of the present invention;
FIG. 4 is a block diagram of another eye movement data analysis apparatus based on an object of interest according to an embodiment of the present invention;
FIG. 5 is a block diagram of yet another eye movement data analysis apparatus based on an object of interest according to an embodiment of the present invention;
FIG. 6 is a block diagram of still another eye movement data analysis apparatus based on an object of interest according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
In eye movement research that uses video materials, eye movement data is obtained by tracking the eyes of a subject while the subject watches the video material, and the collected eye movement data is then analyzed automatically, which improves the accuracy and efficiency of eye movement data analysis and accelerates research progress in related fields such as human-computer interaction, developmental psychology, advertising psychology, engineering psychology, and traffic psychology. Eye tracking refers to measuring the position of the gaze point of a subject's eyes, the movement of the eyeballs relative to the head, and the like.
In the related art, eye movement data of a subject watching a partial video stream within a video stream is generally obtained, an analyst then manually delineates the OOI in each video frame of that partial video stream, and eye movement indicators such as the subject's gaze time and number of fixations in each OOI are obtained based on the OOIs and the subject's eyeball motion, where the OOIs may be determined by the analyst.
However, since the position of an OOI may differ from one video frame to the next, the analyst must delineate the OOI in every video frame, and subjective manual delineation is inefficient and error-prone, so the accuracy of the eye movement data analysis results is low. In addition, an analyst currently clips a partial video stream by subjectively and manually dragging time points in the video stream, which is prone to human error and affects the accuracy of the eye movement data analysis results.
Referring to fig. 1, fig. 1 is a flowchart of an eye movement data analysis method based on an object of interest according to an embodiment of the present invention, where the method may be executed by an eye movement data analysis apparatus based on an object of interest, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in a computer device, and the method may be applied to an application scenario for processing and analyzing a video stream and analyzing eye movement data. The method can comprise the following steps:
Step 110, acquiring a target video stream comprising a plurality of consecutive video frames.
Optionally, the target video stream may be obtained first and then parsed to obtain the plurality of consecutive video frames.
While the subject watches the initial video stream, an eye tracker collects eye movement data, which is then analyzed automatically. The target video stream may be the initial video stream itself or a portion of the initial video stream.
The initial video stream may be stored in a computer device, for example, a local disk of the computer, and the computer device may read the initial video stream locally and then obtain the target video stream based on the initial video stream.
Step 120, sequentially detecting the plurality of consecutive video frames by using an OOI detection algorithm, and determining whether OOIs exist in the plurality of consecutive video frames.
The OOI may include at least one of: a human-type OOI, an animal-type OOI, a moving-object-type OOI, and a plant-type OOI. Correspondingly, the OOI detection algorithm may include at least one of: an OOI detection algorithm corresponding to the human-type OOI, an OOI detection algorithm corresponding to the animal-type OOI, an OOI detection algorithm corresponding to the moving-object-type OOI, and an OOI detection algorithm corresponding to the plant-type OOI. For example, the human-type OOI may include a face-type OOI, and the OOI detection algorithm then includes an OOI detection algorithm corresponding to the face-type OOI.
A plurality of consecutive video frames may be detected in sequence using at least one OOI detection algorithm determined based on an analyst's operation. Optionally, an analyst may directly set at least one object type, and the computer device sequentially detects a plurality of consecutive video frames according to at least one OOI detection algorithm corresponding to the at least one object type; or the analyst may determine at least one OOI, and the computer device first determines a type of the at least one OOI, and then sequentially detects a plurality of consecutive video frames using an OOI detection algorithm corresponding to the determined type.
Step 130, when OOIs exist in the plurality of video frames, determining the coordinate information of the OOI area corresponding to each OOI.
An OOI area may be the area enclosed by the outer contour edge of the corresponding OOI, and the coordinate information of an OOI area includes the coordinate set of all pixel points within that area. Optionally, the computer device may determine the coordinates of all pixel points in each OOI area using Matlab.
Step 140, acquiring eye movement data collected while the subject watches the video stream of interest.
The video stream of interest is a portion of the target video stream. When the plurality of consecutive video frames are detected in sequence, the computer device determines the video stream composed of the first video frame in which an OOI is detected, the last video frame in which an OOI is detected, and the consecutive video frames in between as the video stream of interest.
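As a rough sketch of this boundary-finding logic (in Python; `detect_ooi` is a hypothetical stand-in for whichever OOI detection algorithm is configured):

```python
def find_interest_bounds(frames, detect_ooi):
    """Return (first, last) indices of the frames in which an OOI is
    detected; the frames between them form the video stream of interest.
    Returns None if no frame contains an OOI."""
    hits = [i for i, frame in enumerate(frames) if detect_ooi(frame)]
    if not hits:
        return None
    return hits[0], hits[-1]
```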
Step 150, determining eye movement indicators based on the coordinate information of each OOI area and the eye movement data.
The eye movement data includes the subject's gaze points in each OOI area, the gaze duration of each gaze point, and the like. The eye movement indicators include at least one of: the number of fixations, the gaze time, and the number of look-backs to an OOI area. The eye movement indicators may be determined based on the coordinate information of each OOI area and the eye movement data (the valid gaze points in each OOI area and their gaze durations).
The number of fixations is the total number of valid gaze points in an OOI area. The gaze time is the sum of the gaze durations of all valid gaze points in an OOI area. The number of look-backs of an OOI area is the number of times the valid gaze point returns to that area after having left it. For example, when the valid gaze point switches back to an OOI area from a different OOI area, the look-back count of the first area is increased by 1.
Further, visualization results may be determined based on the eye movement indicators. Optionally, the visualization results include a gaze trajectory and/or a heat map. The gaze trajectory shows the order and positions of the valid gaze points in an OOI area and may consist of dots with numbers inside them, where each dot represents a valid gaze point, the size of the dot represents the gaze duration of that gaze point, and the number indicates its position in the sequence. The heat map uses different colors to represent the degree of attention to each area of a video frame, where red generally marks the areas with the most valid gaze points and the longest gaze time.
In summary, in the eye movement data analysis method based on an object of interest provided by the embodiments of the present invention, the computer device sequentially detects a plurality of consecutive video frames in the target video stream by using an OOI detection algorithm, determines the coordinate information of the OOI area corresponding to each detected OOI when OOIs are detected in the plurality of consecutive video frames, then obtains the eye movement data collected while the subject watches the video stream of interest within the target video stream, and determines the eye movement indicators from the coordinate information of each OOI area and the eye movement data. Whether OOIs exist in the consecutive video frames can be detected automatically by the OOI detection algorithm, without an analyst subjectively and manually delineating the OOIs, which improves both the efficiency and the accuracy of OOI detection, avoids human error, and improves the accuracy of the eye movement data analysis results.
On the basis of the foregoing technical solution, an embodiment of the present invention provides another method for analyzing eye movement data based on an object of interest, please refer to fig. 2, where fig. 2 is a flowchart of another method for analyzing eye movement data based on an object of interest according to an embodiment of the present invention, where the method may be performed by an apparatus for analyzing eye movement data based on an object of interest, and the method may include:
Step 210, acquiring an initial video stream.
The initial video stream may be stored in the computer device, for example on a local disk from which the computer device retrieves it.
Step 220, clipping a target video stream from the initial video stream based on a start time and/or an end time, where the target video stream comprises a plurality of consecutive video frames and the start time and/or the end time are determined according to an operation of an analyst.
Optionally, the initial video stream may be presented to the analyst together with a timeline in which the analyst selects a start point and/or an end point, and the target video stream is then obtained according to the selected start point and/or end point.
Illustratively, when the analyst selects only a start point, the computer device determines the video stream segment between that start point and the end of the initial video stream as the target video stream; when the analyst selects only an end point, the computer device determines the segment between the start of the initial video stream and that end point as the target video stream; and when the analyst selects both a start point and an end point, the computer device determines the segment between the two as the target video stream.
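A minimal sketch of this clipping step (Python with OpenCV; the function name and the assumption that times are given in seconds are illustrative, not from the patent):

```python
import cv2

def clip_target_stream(path, start_s=None, end_s=None):
    """Read the frames between start_s and end_s (seconds); either bound
    may be None, in which case the stream's own start/end is used."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    first = int(start_s * fps) if start_s is not None else 0
    last = int(end_s * fps) if end_s is not None else total - 1
    cap.set(cv2.CAP_PROP_POS_FRAMES, first)
    frames = []
    for _ in range(first, last + 1):
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames
```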
Determining the target video stream within the initial video stream according to the start point and/or end point set by the analyst allows the eye movement data analysis results to meet the analyst's requirements and improves the flexibility of the analysis process. Video frames of the initial video stream that do not include an OOI can also be removed directly according to the analyst's operation, so that only the target video stream needs to be detected subsequently, avoiding unnecessary detection of OOI-free video frames and improving the efficiency of eye movement data analysis.
Step 230, performing image processing on the plurality of consecutive video frames, where the image processing includes at least one of smoothing, dilation and erosion processing, and filtering, and a target video frame among the plurality of consecutive video frames includes a target object determined according to an operation of an analyst.
Performing image processing on each video frame removes noise blocks and the like, thereby improving the quality of the video frames.
The smoothing may include denoising, image segmentation, and binarization. Illustratively, the more significant noise blocks in each video frame may be removed by Gaussian filtering; each video frame is then segmented; and each segmented video frame is binarized. Specifically, each segmented video frame may be converted from the red-green-blue (RGB) color space to the hue/saturation/value (HSV) color space. Optionally, any remaining obvious noise blocks in the binarized frames can be removed by another pass of Gaussian filtering.
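A sketch of this smoothing pipeline (Python/OpenCV; kernel sizes and the binarization threshold are assumptions, since the patent gives no concrete values):

```python
import cv2

def smooth_frame(frame_bgr):
    # Remove the more significant noise blocks with Gaussian filtering.
    denoised = cv2.GaussianBlur(frame_bgr, (5, 5), 0)
    # Convert from RGB (BGR in OpenCV) to HSV before binarizing.
    hsv = cv2.cvtColor(denoised, cv2.COLOR_BGR2HSV)
    # Binarize on the value channel; 128 is an illustrative threshold.
    _, binary = cv2.threshold(hsv[:, :, 2], 128, 255, cv2.THRESH_BINARY)
    # Optional second Gaussian pass to remove remaining obvious noise.
    return cv2.GaussianBlur(binary, (3, 3), 0)
```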
The dilation and erosion processing may include performing an erosion operation followed by a dilation operation on each video frame to eliminate scattered noise points; erosion followed by dilation is also known as an opening operation.
The filtering may include band-pass filtering each video frame. Illustratively, the band-pass filtering can be performed with a Laplacian operator, which effectively removes irrelevant noise and mottled pixel blocks caused by equipment, weather, and the like.
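The opening operation and the Laplacian filtering might be sketched as follows (kernel sizes are assumptions):

```python
import cv2
import numpy as np

def open_and_filter(binary):
    # Opening: erosion followed by dilation, removing scattered noise points.
    kernel = np.ones((3, 3), np.uint8)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Laplacian-based filtering to suppress mottled pixel blocks.
    lap = cv2.Laplacian(opened, cv2.CV_16S, ksize=3)
    return cv2.convertScaleAbs(lap)
```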
Optionally, the plurality of consecutive video frames in the target video stream may include a target video frame containing a target object determined according to an operation of an analyst. The analyst may delineate the target object in the target video frame, for example by enclosing it with a box. The target video frame may be the first video frame among the plurality of consecutive video frames that includes the target object.
The target object is determined by the analyst and may include objects of at least one type. For example, if the target object includes a face-type object and a moving-object-type object, the analyst may delineate a face and a moving object in the target video frame.
Step 240, determining the target type of the target object.
The number of target objects is at least one, and each target object corresponds to one target type. When there are multiple target objects, the multiple target types corresponding one-to-one to those objects can be determined. The computer device may classify any target object based on a stored classification database to determine its target type. Illustratively, the classification database contains a plurality of first feature vectors in one-to-one correspondence with a plurality of object types. A classification neural network can extract features from the target object to obtain a second feature vector, and the target type is then determined based on a K-Nearest Neighbor (KNN) classification algorithm, the first feature vectors, the second feature vector, and the classification database.
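A sketch of the KNN step (Python/NumPy; `extract_features` stands in for the classification neural network, and the database layout is an assumption):

```python
import numpy as np

def classify_target(target_patch, database, extract_features, k=5):
    """database: list of (first_feature_vector, object_type) pairs.
    Returns the majority object type among the k nearest neighbors."""
    second = extract_features(target_patch)  # second feature vector
    dists = sorted((np.linalg.norm(second - vec), typ) for vec, typ in database)
    types = [typ for _, typ in dists[:k]]
    return max(set(types), key=types.count)
```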
Step 250, sequentially detecting, by using the OOI detection algorithm corresponding to the target type, the consecutive video frames located after the target video frame among the plurality of consecutive video frames, and determining whether OOIs exist in the plurality of consecutive video frames.
The number of target objects is at least one, and accordingly the number of target types is at least one. Each object type corresponds to one OOI detection algorithm. When there are multiple target types, multiple OOI detection algorithms corresponding one-to-one to those types can be used to sequentially detect the consecutive video frames located after the target video frame.
For example, when the target type includes the face type, the corresponding OOI detection algorithm may be a face OOI detection algorithm; when the target type includes the moving object type, the corresponding OOI detection algorithm may be a moving object OOI detection algorithm; and so on. Illustratively, the face OOI detection algorithm may include at least one of: a Gaussian model algorithm based on skin detection, a face detection algorithm based on an elastic model, a specific face subspace algorithm, and the like. The moving object OOI detection algorithm may include at least one of: a background subtraction method, a feature point tracking algorithm, an active-contour-based tracking algorithm, and the like; embodiments of the present invention are not limited in this respect.
Step 250 is illustrated here with an example in which the target type includes the face type and the OOI detection algorithm is the Gaussian model algorithm based on skin detection. Any one of the plurality of consecutive video frames may first be subjected to YCbCr color space conversion to separate luminance from chrominance and obtain CbCr color space values. A Gaussian skin color model is then established based on the CbCr color space values, and the model is used to detect whether a face-type OOI exists in the video frame.
For example, the CbCr color space values, which are approximately normally distributed over the set of skin colors, can be obtained from the RGB-to-YCbCr conversion (the patent does not give the constants; the standard ITU-R BT.601 form is shown here):
Y = 0.299R + 0.587G + 0.114B
Cb = 0.564(B − Y)
Cr = 0.713(R − Y)
where Y denotes the luminance component, Cb the blue chrominance component, and Cr the red chrominance component.
For a sample pixel x = (Cb, Cr) in any video frame, its skin color similarity is calculated by the following formulas (the explicit similarity expression P(x) is the standard form of this model, reconstructed here from the definitions of M and C):
M = E(x)
C = E[(x − M)(x − M)^T]
P(x) = exp[−0.5 (x − M)^T C^(−1) (x − M)]
where M is the skin color mean and C is the skin color similarity covariance matrix. Based on the similarity, it is then judged whether the sample pixel x follows the two-dimensional Gaussian normal distribution with skin color mean M and covariance matrix C; if it does, the sample pixel x belongs to a face.
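A sketch of this skin color model in Python/OpenCV (the source of the labelled skin samples and all thresholds are assumptions; note that OpenCV orders the channels Y, Cr, Cb):

```python
import cv2
import numpy as np

def fit_skin_model(skin_pixels_bgr):
    """Estimate the skin color mean M and covariance C over (Cb, Cr)
    from an (N, 3) uint8 array of labelled skin pixels."""
    ycrcb = cv2.cvtColor(skin_pixels_bgr.reshape(-1, 1, 3), cv2.COLOR_BGR2YCrCb)
    cbcr = ycrcb.reshape(-1, 3)[:, [2, 1]].astype(np.float64)  # (Cb, Cr)
    return cbcr.mean(axis=0), np.cov(cbcr, rowvar=False)

def skin_similarity(frame_bgr, M, C):
    """Per-pixel similarity exp(-0.5 (x - M)^T C^-1 (x - M))."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    x = ycrcb[:, :, [2, 1]] - M          # centred (Cb, Cr) per pixel
    maha = np.einsum('...i,ij,...j->...', x, np.linalg.inv(C), x)
    return np.exp(-0.5 * maha)
```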
Optionally, when no OOI exists in the plurality of consecutive video frames, the procedure may simply end here.
Step 260, when OOIs exist in the plurality of video frames, determining the coordinate information of the OOI area corresponding to each OOI.
The computer device may first determine the OOI area corresponding to each OOI and then obtain the coordinate positions, within the video frame, of all pixels in each OOI area.
Taking a face OOI as an example, after the computer device detects a face, it can determine the height, width, and height-to-width ratio threshold of the bounding rectangle of the face contour from the similarity of each sample pixel, thereby determining the face region, and then obtain the coordinate positions of all pixels of the face region within the video frame; here the bounding rectangle is the rectangle that circumscribes the face contour.
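A sketch of turning the similarity map into face region coordinates (the similarity and aspect-ratio thresholds are illustrative):

```python
import cv2
import numpy as np

def face_region_coords(similarity, sim_thresh=0.5, min_ratio=0.8, max_ratio=2.0):
    """Threshold the similarity map, keep contours whose bounding rectangle
    has a plausible height/width ratio, and return the (x, y) coordinates
    of all pixels inside each kept face region."""
    mask = (similarity > sim_thresh).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        if w > 0 and min_ratio <= h / w <= max_ratio:
            filled = np.zeros_like(mask)
            cv2.drawContours(filled, [cnt], -1, 255, thickness=-1)
            ys, xs = np.nonzero(filled)
            regions.append(list(zip(xs.tolist(), ys.tolist())))
    return regions
```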
Step 270, acquiring the eye movement data collected while the subject watches the video stream of interest.
Illustratively, suppose the plurality of consecutive video frames comprises video frames A to F. If, during the detection in step 250, an OOI is first detected in video frame B and last detected in video frame F, the computer device determines video frames B to F as the video stream of interest. The video stream of interest within the target video stream can thus be determined automatically, objectively, and accurately, without the analyst manually clipping a time period, which avoids human error, improves the efficiency and accuracy of determining the video stream of interest, further ensures the validity and accuracy of the obtained eye movement data, and improves the accuracy of the eye movement data analysis results.
The eye movement data may be stored in the computer device, for example on a local disk from which the computer device reads it.
Step 280, determining eye movement indicators based on the coordinate information of each OOI area and the eye movement data.
For example, the valid gaze points in the eye movement data and their gaze durations may first be determined using an I-VT (velocity-threshold identification) algorithm. The coordinate information of each OOI area is then combined with the eye movement data to determine the valid gaze points in each OOI area and their gaze durations.
Optionally, when the speed from any data point in the eye movement data to its adjacent data point is below a speed threshold, the two data points are grouped into the same fixation, yielding a set of candidate gaze points; gaze points whose gaze duration exceeds a duration threshold are then kept as valid gaze points. Alternatively, the data points whose gaze duration exceeds the duration threshold may first be kept as valid gaze points, and any valid gaze point whose speed to an adjacent valid gaze point is below the speed threshold is then merged with that adjacent point into a single valid gaze point.
In an exemplary process of determining valid gaze points with the I-VT algorithm, a set of gaze points is determined first and the valid ones are then selected. Specifically: the distance between every two adjacent data points in the eye movement data is measured, and the speed of movement from each data point to its neighbor is computed from that distance and the elapsed time. If the speed from a data point to its neighbor is below a reference value (the speed threshold), the two points are grouped into the same fixation; if the speed is greater than or equal to the reference value, the pair is classified as a saccade. The gaze duration of each resulting gaze point is then determined, and gaze points whose duration exceeds the duration threshold are kept as valid gaze points. A saccade is the eye movement behavior in which the gaze moves rapidly from one data point to another: it starts with a fast acceleration until the angular velocity of the eye peaks, then decelerates until the eye reaches the target position.
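A sketch of this velocity-threshold step (Python/NumPy; representing a fixation as a centroid plus duration is an assumption):

```python
import numpy as np

def ivt_fixations(points, timestamps, v_thresh, t_thresh):
    """points: (N, 2) gaze coordinates; timestamps: (N,) seconds.
    Consecutive samples whose point-to-point speed stays below v_thresh
    are grouped into one fixation; fixations shorter than t_thresh are
    discarded. Returns a list of (centroid, duration) valid gaze points."""
    pts = np.asarray(points, dtype=float)
    ts = np.asarray(timestamps, dtype=float)
    speeds = np.linalg.norm(np.diff(pts, axis=0), axis=1) / np.diff(ts)
    fixations, start = [], 0
    for i, v in enumerate(speeds):
        if v >= v_thresh:                      # saccade: close current group
            if ts[i] - ts[start] > t_thresh:
                fixations.append((pts[start:i + 1].mean(axis=0), ts[i] - ts[start]))
            start = i + 1
    if start < len(pts) and ts[-1] - ts[start] > t_thresh:
        fixations.append((pts[start:].mean(axis=0), ts[-1] - ts[start]))
    return fixations
```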
Further, the determined valid gaze points may be screened again according to a duration threshold set by the analyst, removing valid gaze points whose gaze duration is below that threshold. The valid gaze points thus meet the analyst's requirements, improving the flexibility of the eye movement data analysis process.
The eye movement indicators include at least one of: the number of fixations, the gaze time, and the number of look-backs to an OOI area. The number of fixations of an OOI area is obtained by counting the total number of valid gaze points in that area; the gaze time is obtained by summing the gaze durations of all valid gaze points in the area; and each time the valid gaze point switches back to the area from a different OOI area, the area's visit count is increased by 1, yielding its look-back count.
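A sketch of computing these three indicators (the input encoding — one region id or None per valid gaze point, in temporal order — is an assumption):

```python
def eye_movement_indicators(regions, durations):
    """regions: OOI region id (or None) hit by each valid gaze point, in
    temporal order; durations: the matching gaze durations in seconds."""
    counts, gaze_time, lookbacks = {}, {}, {}
    prev = None
    for region, dur in zip(regions, durations):
        if region is not None:
            # A look-back: the gaze returns to a region it visited before,
            # arriving from a different region.
            if region in counts and prev is not None and prev != region:
                lookbacks[region] = lookbacks.get(region, 0) + 1
            counts[region] = counts.get(region, 0) + 1
            gaze_time[region] = gaze_time.get(region, 0.0) + dur
            prev = region
    return counts, gaze_time, lookbacks
```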
Further, the computer device may determine visualization results based on the eye movement indicators. Taking a gaze trajectory and a heat map as examples: for the gaze trajectory, the computer device may represent all valid gaze points in an OOI area with dots whose size encodes the gaze duration of the corresponding gaze point, with a number inside each dot giving its position in the sequence; for the heat map, the computer device may mark the valid gaze points in each OOI area, determine the color corresponding to each gaze point from its gaze duration, and then determine the colors of the surrounding areas by Gaussian curve approximation (cspline).
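A sketch of both visualizations (Python/OpenCV; the dot scaling, the Gaussian sigma, and the colormap are assumptions — the patent names a Gaussian curve approximation but gives no parameters):

```python
import cv2
import numpy as np

def draw_visualizations(frame, fixations):
    """fixations: ordered list of ((x, y), duration) valid gaze points.
    Returns (gaze trajectory image, heat map image)."""
    scan = frame.copy()
    heat = np.zeros(frame.shape[:2], np.float32)
    h, w = heat.shape
    for order, ((x, y), dur) in enumerate(fixations, start=1):
        x, y = int(min(max(x, 0), w - 1)), int(min(max(y, 0), h - 1))
        r = max(4, int(10 * dur))              # dot size encodes gaze time
        cv2.circle(scan, (x, y), r, (0, 0, 255), 2)
        cv2.putText(scan, str(order), (x - 4, y + 4),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0, 0, 255), 1)
        heat[y, x] += dur
    # Spread each point's weight with a Gaussian to approximate the
    # colour fall-off around every valid gaze point.
    heat = cv2.GaussianBlur(heat, (0, 0), sigmaX=25)
    heat = cv2.normalize(heat, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return scan, cv2.applyColorMap(heat, cv2.COLORMAP_JET)
```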
It should be noted that the above embodiment is described taking as an example the case where the target type is determined from the target object delineated by the analyst, and the OOI detection algorithm corresponding to each target type is then used to detect whether OOIs exist in the video frames after the target video frame. Optionally, the analyst may instead directly set at least one object type without delineating a target object, and the computer device may sequentially detect the plurality of consecutive video frames using at least one preset OOI detection algorithm in one-to-one correspondence with the at least one object type. Embodiments of the present invention are not limited in this respect.
In summary, in the eye movement data analysis method based on an object of interest provided by the embodiments of the present invention, the computer device sequentially detects a plurality of consecutive video frames in the target video stream by using an OOI detection algorithm, determines the coordinate information of the OOI area corresponding to each detected OOI when OOIs are detected in the plurality of consecutive video frames, then obtains the eye movement data collected while the subject watches the video stream of interest within the target video stream, and determines the eye movement indicators from the coordinate information of each OOI area and the eye movement data. Whether OOIs exist in the consecutive video frames can be detected automatically by the OOI detection algorithm, without an analyst subjectively and manually delineating the OOIs, which improves both the efficiency and the accuracy of OOI detection, avoids human error, and improves the accuracy of the eye movement data analysis results.
In addition, the computer device can automatically, objectively, and accurately determine the first and last video frames in which an OOI is detected, and thereby the video stream of interest within the target video stream, without the analyst manually clipping a time period. This avoids human error, improves the efficiency and accuracy of determining the video stream of interest, ensures the validity and accuracy of the obtained eye movement data, and improves the accuracy of the eye movement data analysis results.
Further, the computer device can determine the target video stream within the initial video stream according to the start point and/or end point set by the analyst, so that the eye movement data analysis results meet the analyst's requirements and the analysis process gains flexibility. Video frames of the initial video stream that do not include an OOI can also be removed according to the analyst's operation, so that only the target video stream needs to be detected subsequently, avoiding unnecessary detection of OOI-free video frames and improving the efficiency of eye movement data analysis.
The above embodiment is described taking as an example the case where the eye movement data analysis method based on an object of interest is performed by the eye movement data analysis apparatus based on an object of interest. In one example, different steps of the method may be performed by different modules, which may be located in one device or in different devices. The embodiments of the present invention do not limit the apparatus that performs the method. Optionally, the eye movement data analysis apparatus based on an object of interest may be integrated in a computer device, a server, or the like, which is not limited by the embodiments of the present invention.
Referring to fig. 3, fig. 3 is a block diagram of an apparatus for analyzing eye movement data based on an object of interest according to an embodiment of the present invention, where the apparatus 30 includes:
a first obtaining module 301, configured to obtain a target video stream including a plurality of consecutive video frames.
The determining module 302 is configured to sequentially detect multiple consecutive video frames by using an object of interest OOI detection algorithm, and determine whether an OOI exists in the multiple consecutive video frames.
The first determining module 303 is configured to determine, when OOIs exist in a plurality of video frames, coordinate information of an OOI area corresponding to each OOI.
A second obtaining module 304, configured to obtain the eye movement data collected while the subject watches the video stream of interest, where the starting video frame of the video stream of interest is the first video frame among the plurality of consecutive video frames in which an OOI is detected, and the end video frame is the last video frame among the plurality of consecutive video frames in which an OOI is detected.
A second determining module 305, configured to determine an eye movement indicator based on the coordinate information of each OOI area and the eye movement data.
In summary, in the eye movement data analysis apparatus based on an object of interest provided by the embodiments of the present invention, the judging module sequentially detects, by using an OOI detection algorithm, the plurality of consecutive video frames in the target video stream acquired by the first obtaining module; when OOIs are detected, the first determining module determines the coordinate information of the OOI area corresponding to each detected OOI; the second obtaining module then acquires the eye movement data collected while the subject watches the video stream of interest within the target video stream; and the second determining module determines the eye movement indicators from the coordinate information of each OOI area and the eye movement data. Whether OOIs exist in the consecutive video frames can be detected automatically by the OOI detection algorithm, without an analyst subjectively and manually delineating the OOIs, which improves both the efficiency and accuracy of OOI detection, avoids human error, and improves the accuracy of the eye movement data analysis results.
Optionally, if a target video frame in the plurality of consecutive video frames includes a target object determined according to an operation of an analyst, referring to fig. 4, fig. 4 is a block diagram of another eye movement data analysis apparatus based on an object of interest according to an embodiment of the present invention, and on the basis of fig. 3, the apparatus 30 further includes:
a third determining module 306, configured to determine a target type of the target object.
Here, the determining module 302 is configured to:
and sequentially detect, by using the OOI detection algorithm corresponding to the target type, the consecutive video frames located after the target video frame among the plurality of consecutive video frames.
Optionally, the determining module 302 is configured to:
and sequentially detect the plurality of consecutive video frames by using at least one preset OOI detection algorithm in one-to-one correspondence with at least one object type.
Optionally, referring to fig. 5, fig. 5 is a block diagram of another eye movement data analysis apparatus based on an object of interest according to an embodiment of the present invention, and on the basis of fig. 3, the apparatus 30 further includes:
a third obtaining module 307, configured to obtain the initial video stream.
And a clipping module 308, configured to clip the target video stream from the initial video stream based on a start time and/or an end time, where the start time and/or the end time are determined according to an operation of the analyst.
Optionally, referring to fig. 6, fig. 6 is a block diagram of another eye movement data analysis apparatus based on an object of interest according to an embodiment of the present invention, and on the basis of fig. 3, the apparatus 30 further includes:
a processing module 309, configured to perform image processing on the plurality of consecutive video frames, where the image processing includes at least one of smoothing, dilation and erosion processing, and filtering.
Optionally, the OOI detection algorithm corresponding to the OOI of the face type includes at least one of: a Gaussian model algorithm based on skin detection, a face detection algorithm based on an elastic model, a specific face subspace algorithm and the like;
the OOI detection algorithm corresponding to the OOI of the moving object type comprises at least one of the following: background subtraction, feature point tracking algorithms, and active contour-based tracking algorithms.
Optionally, the OOI detection algorithm includes a gaussian model algorithm based on skin detection, and the determining module 302 is configured to:
subject any one of the plurality of consecutive video frames to YCbCr color space conversion to separate luminance from chrominance and obtain CbCr color space values;
establish a Gaussian skin color model based on the CbCr color space values;
and detect whether a face-type OOI exists in the video frame by using the Gaussian skin color model.
In summary, in the eye movement data analysis apparatus based on an object of interest provided by the embodiments of the present invention, the judging module sequentially detects, by using an OOI detection algorithm, the plurality of consecutive video frames in the target video stream acquired by the first obtaining module; when OOIs are detected, the first determining module determines the coordinate information of the OOI area corresponding to each detected OOI; the second obtaining module then acquires the eye movement data collected while the subject watches the video stream of interest within the target video stream; and the second determining module determines the eye movement indicators from the coordinate information of each OOI area and the eye movement data. Whether OOIs exist in the consecutive video frames can be detected automatically by the OOI detection algorithm, without an analyst subjectively and manually delineating the OOIs, which improves both the efficiency and accuracy of OOI detection, avoids human error, and improves the accuracy of the eye movement data analysis results.
In addition, the eye movement data analysis apparatus based on an object of interest can automatically, objectively, and accurately determine the first and last video frames in which an OOI is detected, and thereby the video stream of interest within the target video stream, without the analyst manually clipping a time period, which avoids human error, improves the efficiency and accuracy of determining the video stream of interest, ensures the validity and accuracy of the obtained eye movement data, and improves the accuracy of the eye movement data analysis results.
Furthermore, the clipping module can determine the target video stream within the initial video stream according to the start point and/or end point set by the analyst, so that the eye movement data analysis results meet the analyst's requirements and the analysis process gains flexibility. Video frames of the initial video stream that do not include an OOI can also be removed according to the analyst's operation, so that the apparatus subsequently only needs to detect the target video stream, avoiding unnecessary detection of OOI-free video frames and improving the efficiency of eye movement data analysis.
The eye movement data analysis apparatus based on an object of interest provided by the embodiments of the present invention can perform the eye movement data analysis method based on an object of interest provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects.
Fig. 7 is a schematic structural diagram of a computer apparatus according to an embodiment of the present invention, as shown in fig. 7, the computer apparatus includes a processor 70, a memory 71, an input device 72, and an output device 73; the number of the processors 70 in the computer device may be one or more, and one processor 70 is taken as an example in fig. 7; the processor 70, the memory 71, the input device 72 and the output device 73 in the computer apparatus may be connected by a bus or other means, and the connection by the bus is exemplified in fig. 7.
The memory 71 serves as a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the method for analyzing eye movement data based on an object of interest in the embodiment of the present invention (for example, the first obtaining module 301, the judging module 302, the first determining module 303, the second obtaining module 304, and the second determining module 305 in the device for analyzing eye movement data based on an object of interest). The processor 70 executes various functional applications of the computer device and eye movement data analysis, i.e., implements the above-described eye movement data analysis method based on the object of interest, by executing software programs, instructions, and modules stored in the memory 71.
The memory 71 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 71 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 71 may further include memory located remotely from the processor 70, which may be connected to a computer device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 72 may be used to receive entered numerical or character information (e.g., eye movement data) and to generate key signal inputs (e.g., target data ranges) related to analyst settings and function control of the computer device. The output device 73 may include a display device such as a display screen.
Embodiments of the present invention further provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements any eye movement data analysis method based on an object of interest provided by the embodiments of the present invention.
From the above description of the embodiments, those skilled in the art will clearly understand that the present invention can be implemented by software together with necessary general-purpose hardware, or by hardware alone, although the former is the preferred implementation in many cases. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods of the embodiments of the present invention.
It should be noted that, in the embodiment of the above eye movement data analysis device, the included units and modules are divided only according to functional logic, and the division is not limited thereto as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present invention.
In the present invention, unless explicitly defined otherwise, "at least one" means one or more, and "a plurality" means two or more; "and/or" describes an association between associated objects and indicates that three relations are possible, for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone.
It should be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will understand that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from its spirit, and its scope is determined by the scope of the appended claims.
Claims (10)
1. A method for eye movement data analysis based on an object of interest, comprising:
acquiring a target video stream comprising a plurality of consecutive video frames;
sequentially detecting the plurality of consecutive video frames using an object of interest (OOI) detection algorithm, and judging whether an OOI exists in the plurality of consecutive video frames;
when OOIs exist in the plurality of consecutive video frames, determining coordinate information of an OOI area corresponding to each OOI;
acquiring eye movement data collected while a subject watches a video stream of interest, wherein a start video frame of the video stream of interest is the first video frame, among the plurality of consecutive video frames, in which the OOI is detected, and an end video frame of the video stream of interest is the last video frame, among the plurality of consecutive video frames, in which the OOI is detected;
and determining an eye movement index based on the coordinate information of each OOI area and the eye movement data.
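For illustration only (this sketch is not part of the claims), the final step of claim 1 could compute a simple eye movement index such as the fraction of gaze samples falling inside the OOI area; the per-frame dictionaries used below are assumed data structures, not ones defined by the disclosure:

```python
def dwell_ratio(gaze_points, ooi_boxes):
    """gaze_points: {frame_index: (x, y)} gaze sample per frame.
    ooi_boxes: {frame_index: (x0, y0, x1, y1)} OOI bounding box per frame.
    Returns the share of gaze samples landing inside the OOI area."""
    inside = total = 0
    for idx, (x, y) in gaze_points.items():
        box = ooi_boxes.get(idx)
        if box is None:
            continue  # no OOI detected in this frame
        x0, y0, x1, y1 = box
        total += 1
        if x0 <= x <= x1 and y0 <= y <= y1:
            inside += 1
    return inside / total if total else 0.0
```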
2. The method of claim 1, wherein a target video frame of the plurality of consecutive video frames comprises a target object determined according to a human operation, and wherein, before the sequentially detecting the plurality of consecutive video frames using the object of interest (OOI) detection algorithm, the method further comprises:
determining a target type of the target object;
the sequentially detecting the plurality of consecutive video frames using the object of interest (OOI) detection algorithm comprises:
sequentially detecting, using an OOI detection algorithm corresponding to the target type, the consecutive video frames located after the target video frame among the plurality of consecutive video frames.
3. The method of claim 1, wherein the detecting the plurality of consecutive video frames in sequence using an object of interest (OOI) detection algorithm comprises:
sequentially detecting the plurality of consecutive video frames using at least one preset OOI detection algorithm in one-to-one correspondence with at least one object type.
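As a non-authoritative sketch of the one-to-one correspondence described in claims 2 and 3, a detection algorithm can be selected from a table keyed by object type; the detector functions named here are hypothetical placeholders:

```python
# Hypothetical per-type detectors; each takes a frame and returns
# a list of OOI bounding boxes (possibly empty).
def detect_face(frame): ...
def detect_moving_object(frame): ...

OOI_DETECTORS = {
    "face": detect_face,
    "moving_object": detect_moving_object,
}

def detect_by_type(frame, target_type):
    # Dispatch to the detection algorithm matching the target type.
    return OOI_DETECTORS[target_type](frame)
```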
4. The method of claim 1, further comprising, prior to obtaining the target video stream comprising a plurality of consecutive video frames:
acquiring an initial video stream;
intercepting the target video stream from the initial video stream based on a start time and/or an end time, wherein the start time and/or the end time is determined according to an operation of an analyst.
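A minimal sketch of the interception of claim 4, assuming a constant frame rate and the stream held as a simple list of frames (both assumptions for illustration):

```python
def clip_stream(frames, fps, start_s=None, end_s=None):
    """Cut the target video stream out of the initial stream using
    analyst-chosen start/end times given in seconds."""
    first = int(start_s * fps) if start_s is not None else 0
    last = int(end_s * fps) if end_s is not None else len(frames)
    return frames[first:last]
```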
5. The method of claim 1, further comprising, before sequentially detecting the plurality of consecutive video frames using an object of interest (OOI) detection algorithm:
performing image processing on the plurality of consecutive video frames, the image processing including at least one of smoothing processing, dilation and erosion processing, and filtering processing.
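One possible realization of the image processing of claim 5 using OpenCV; the kernel sizes and the order of the operations are assumptions rather than anything the claim prescribes:

```python
import cv2
import numpy as np

def preprocess(frame):
    """Apply smoothing, dilation, erosion, and filtering to one frame."""
    kernel = np.ones((3, 3), np.uint8)
    out = cv2.GaussianBlur(frame, (5, 5), 0)     # smoothing
    out = cv2.dilate(out, kernel, iterations=1)  # dilation
    out = cv2.erode(out, kernel, iterations=1)   # erosion
    return cv2.medianBlur(out, 5)                # filtering
```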
6. The method of any one of claims 1 to 5, wherein the OOI detection algorithm corresponding to an OOI of the face type comprises at least one of: a Gaussian model algorithm based on skin detection, a face detection algorithm based on an elastic model, and a specific face subspace algorithm;
the OOI detection algorithm corresponding to the OOI of the moving object type comprises at least one of the following: background subtraction, feature point tracking algorithms, and active contour-based tracking algorithms.
7. The method of claim 6, wherein the OOI detection algorithm comprises the Gaussian model algorithm based on skin detection, and wherein the sequentially detecting the plurality of consecutive video frames using the object of interest (OOI) detection algorithm comprises:
converting any one of the plurality of consecutive video frames to the YCbCr color space to separate luminance from chrominance, obtaining CbCr color space values;
establishing a Gaussian skin color model based on the CbCr color space values;
and detecting, by using the Gaussian skin color model, whether an OOI of the face type exists in the video frame.
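A sketch of the Gaussian skin color model of claim 7; the CbCr mean and covariance below are values commonly quoted in the skin-detection literature, and the likelihood threshold is an assumption, so all of the numbers should be treated as placeholders:

```python
import cv2
import numpy as np

# Commonly quoted single-Gaussian CbCr skin statistics (assumed values).
MEAN = np.array([117.43, 156.56])      # (Cb, Cr) mean
COV = np.array([[160.13, 12.14],
                [12.14, 299.46]])      # (Cb, Cr) covariance
COV_INV = np.linalg.inv(COV)

def face_skin_mask(frame_bgr, thresh=0.4):
    """Per-pixel skin likelihood under the Gaussian model, thresholded."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    cb = ycrcb[..., 2].astype(np.float64)  # OpenCV channel order: Y, Cr, Cb
    cr = ycrcb[..., 1].astype(np.float64)
    d = np.stack([cb - MEAN[0], cr - MEAN[1]], axis=-1)
    m2 = np.einsum('...i,ij,...j->...', d, COV_INV, d)  # squared Mahalanobis
    return (np.exp(-0.5 * m2) > thresh).astype(np.uint8) * 255
```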
8. An eye movement data analysis apparatus based on an object of interest, comprising:
a first obtaining module, configured to obtain a target video stream including a plurality of consecutive video frames;
a judging module, configured to sequentially detect the plurality of consecutive video frames using an object of interest (OOI) detection algorithm and judge whether an OOI exists in the plurality of consecutive video frames;
a first determining module, configured to determine, when OOIs exist in the plurality of consecutive video frames, coordinate information of an OOI area corresponding to each OOI;
a second obtaining module, configured to acquire eye movement data collected while a subject watches a video stream of interest, wherein a start video frame of the video stream of interest is the first video frame, among the plurality of consecutive video frames, in which the OOI is detected, and an end video frame of the video stream of interest is the last video frame, among the plurality of consecutive video frames, in which the OOI is detected;
and a second determining module, configured to determine an eye movement index based on the coordinate information of each OOI area and the eye movement data.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements an object of interest-based eye movement data analysis method according to any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method for analyzing eye movement data based on an object of interest according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110144505.5A | 2021-02-02 | 2021-02-02 | Eye movement data analysis method and device based on interested object and computer equipment
Publications (1)
Publication Number | Publication Date |
---|---|
CN112949409A (en) | 2021-06-11
Family
ID=76241690
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110144505.5A (pending) | Eye movement data analysis method and device based on interested object and computer equipment | 2021-02-02 | 2021-02-02
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112949409A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115661913A (en) * | 2022-08-19 | 2023-01-31 | 北京津发科技股份有限公司 | Eye movement analysis method and system |
CN115866288A (en) * | 2021-09-03 | 2023-03-28 | 中移(成都)信息通信科技有限公司 | Video stream processing method and device, electronic equipment and storage medium |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1705454A (en) * | 2002-10-15 | 2005-12-07 | 沃尔沃技术公司 | Method and arrangement for interpreting a subjects head and eye activity |
US20080130948A1 (en) * | 2005-09-13 | 2008-06-05 | Ibrahim Burak Ozer | System and method for object tracking and activity analysis |
US20160232708A1 (en) * | 2015-02-05 | 2016-08-11 | Electronics And Telecommunications Research Institute | Intuitive interaction apparatus and method |
US20170091591A1 (en) * | 2015-09-29 | 2017-03-30 | Medstar Health | Human-assisted learning in eye tracking applications |
US20190164313A1 (en) * | 2017-11-30 | 2019-05-30 | Kofax, Inc. | Object detection and image cropping using a multi-detector approach |
CN110464365A (en) * | 2018-05-10 | 2019-11-19 | 深圳先进技术研究院 | A kind of attention rate determines method, apparatus, equipment and storage medium |
Non-Patent Citations (2)
Title |
---|
Hyungil Kim et al., "Toward Real-Time Estimation of Driver Situation Awareness: An Eye-tracking Approach based on Moving Objects of Interest," 2020 IEEE Intelligent Vehicles Symposium (IV), pages 1-4 *
Karen Panetta et al., "ISeeColor: Method for Advanced Visual Analytics of Eye Tracking Data," IEEE Access, pages 1-3 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11341669B2 (en) | People flow analysis apparatus, people flow analysis system, people flow analysis method, and non-transitory computer readable medium | |
Fuhl et al. | Eyes wide open? eyelid location and eye aperture estimation for pervasive eye tracking in real-world scenarios | |
CN110443210B (en) | Pedestrian tracking method and device and terminal | |
EP3678056B1 (en) | Skin color detection method and device and storage medium | |
EP3767520B1 (en) | Method, device, equipment and medium for locating center of target object region | |
US9672610B2 (en) | Image processing apparatus, image processing method, and computer-readable recording medium | |
CN110443148B (en) | Action recognition method, system and storage medium | |
Abate et al. | BIRD: Watershed based iris detection for mobile devices | |
CN105844242A (en) | Method for detecting skin color in image | |
CN112949409A (en) | Eye movement data analysis method and device based on interested object and computer equipment | |
CN108537787B (en) | Quality judgment method for face image | |
US10922535B2 (en) | Method and device for identifying wrist, method for identifying gesture, electronic equipment and computer-readable storage medium | |
CN112115803B (en) | Mask state reminding method and device and mobile terminal | |
TWI776176B (en) | Device and method for scoring hand work motion and storage medium | |
CN106934338B (en) | Long-term pedestrian tracking method based on correlation filter | |
CN105426816A (en) | Method and device of processing face images | |
CN111582032A (en) | Pedestrian detection method and device, terminal equipment and storage medium | |
KR20210157194A (en) | Crop growth measurement device using image processing and method thereof | |
Xia et al. | Visual crowding in driving | |
US11132778B2 (en) | Image analysis apparatus, image analysis method, and recording medium | |
Min et al. | Influence of compression artifacts on visual attention | |
CN112907206A (en) | Service auditing method, device and equipment based on video object identification | |
CN112733650A (en) | Target face detection method and device, terminal equipment and storage medium | |
CN111860079B (en) | Living body image detection method and device and electronic equipment | |
US10916016B2 (en) | Image processing apparatus and method and monitoring system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210611 |