CN115249377A - Method and device for identifying micro-expression - Google Patents
Method and device for identifying micro-expression
- Publication number
- CN115249377A (application CN202210941662.3A)
- Authority
- CN
- China
- Prior art keywords
- frame
- apex
- micro
- neural network
- expression
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V40/174—Facial expression recognition
- G06N3/08—Learning methods (G06N3/02—Neural networks)
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06V40/168—Feature extraction; Face representation
- G06V40/172—Classification, e.g. identification
Abstract
The invention discloses a method and a device for recognizing micro-expressions. The method comprises: acquiring a micro-expression dataset video; preprocessing the video to obtain the facial feature points of each frame; calculating the change intensity of each frame in the micro-expression sequence and determining the Apex frame with the maximum change intensity; determining, from the facial change intensity sequence, the frames with the minimum change intensity before and after the Apex frame; determining the facial feature point sequence of those frames; and inputting that sequence into a trained deep learning neural network model, which recognizes the micro-expression. The method makes its judgment with a model that combines an SVM classifier and a recurrent neural network model; it has a small computational load and high accuracy, can be deployed on various operating systems, and is easy to port.
Description
Technical Field
The invention relates to the field of image recognition, and in particular to a method and a device for recognizing micro-expressions.
Background
A micro-expression is a short-duration, low-intensity facial movement that can reveal the emotion a person is trying to hide, and it has important value in fields such as clinical medicine, criminal investigation, public safety, and business negotiation. A micro-expression is a process of facial feature change over time, and expression recognition is performed by analyzing those facial feature changes.
Existing algorithms generally perform micro-expression recognition with convolutional neural networks. These models are large: a machine must pull and decode the video stream and then run the recognition model, so powerful hardware is needed for the computation and the cost is high.
Disclosure of Invention
In order to solve the above problems, the present invention provides a method and an apparatus for recognizing micro-expressions that are low in cost, small in computational load, and easy to migrate to other platforms.
In order to achieve the above object, one aspect of the present invention provides a method for recognizing a micro-expression, including:
acquiring a micro-expression dataset video;
preprocessing the micro-expression dataset video to obtain the facial feature points of each frame;
calculating the change intensity of each frame in the micro-expression sequence, and determining the Apex frame with the maximum change intensity;
determining the frames with the minimum change intensity before and after the Apex frame according to the facial change intensity sequence;
determining the facial feature point sequence of the frames with the minimum change intensity before and after the Apex frame;
and inputting the facial feature point sequence of the frames with the minimum change intensity before and after the Apex frame into a trained deep learning neural network model, and recognizing the micro-expression with the model.
As a preferred technical solution, preprocessing the micro-expression dataset video to obtain the facial feature points of each frame further includes:
importing the micro-expression dataset video into a facelandmark model;
obtaining the facial feature points of each frame with the facelandmark model.
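As an illustration of this preprocessing step, the sketch below extracts a per-frame facial feature point array from a video. It substitutes the open-source MediaPipe Face Mesh for the facelandmark model named above, so the library, the landmark count, and the pixel-coordinate conversion are assumptions for illustration only.

```python
import cv2
import mediapipe as mp
import numpy as np

def extract_landmark_sequence(video_path):
    """Return one (num_landmarks, 2) array of facial feature points per frame."""
    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False,
                                                max_num_faces=1)
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, bgr = cap.read()
        if not ok:
            break
        result = face_mesh.process(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            lm = result.multi_face_landmarks[0].landmark
            h, w = bgr.shape[:2]
            # Convert normalized landmark coordinates to pixel coordinates.
            frames.append(np.array([(p.x * w, p.y * h) for p in lm]))
    cap.release()
    return frames
```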
As a preferred technical solution, calculating the change intensity of each frame in the micro-expression sequence further includes:
calculating the change intensity of each frame's face and constructing a change intensity sequence;
determining the Apex frame according to the change intensity sequence;
determining the coordinates of the other feature points by taking the nose tip in the Apex frame as the coordinate origin;
and standardizing the Apex frame.
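The change intensity metric is not fixed by the text above; the sketch below uses one plausible choice, the mean landmark displacement between consecutive frames, and takes the frame with the maximum value as the Apex frame. Treat the metric itself as an assumption.

```python
import numpy as np

def change_intensity_sequence(landmarks):
    """landmarks: list of (num_points, 2) arrays, one per frame.
    Returns one intensity value per frame transition."""
    return [np.linalg.norm(landmarks[i] - landmarks[i - 1], axis=1).mean()
            for i in range(1, len(landmarks))]

def find_apex(landmarks):
    """Index of the frame with the maximum change intensity (the Apex frame)."""
    intensities = change_intensity_sequence(landmarks)
    return int(np.argmax(intensities)) + 1  # +1: intensities[0] belongs to frame 1
```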
As a preferred technical solution, normalizing the Apex frame further includes: converting the coordinates of all facial feature points acquired by the facelandmark model with the nose tip as the origin, so that every frame shares the nose tip as a unified coordinate origin.
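A minimal sketch of this standardization, assuming a NumPy landmark array; the nose-tip index depends on the landmark model used and is a placeholder.

```python
import numpy as np

NOSE_TIP_INDEX = 1  # placeholder: the actual index depends on the landmark model

def normalize_to_nose_tip(points):
    """points: (num_points, 2) array of one frame's facial feature points.
    Returns the same points expressed with the nose tip as the coordinate origin."""
    return points - points[NOSE_TIP_INDEX]
```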
Preferably, the deep learning neural network model includes an SVM model and a recurrent neural network model.
As a preferred technical solution, the training step of the deep learning neural network model includes: taking the Apex frame as the input of a support vector machine (SVM) model; taking the frame with the minimum change intensity before the Apex frame, the Apex frame, and the frame with the minimum change intensity after the Apex frame as the input of the recurrent neural network; weighting the output of the SVM model and the output of the recurrent neural network to obtain a predicted output; computing a loss against the ground-truth data; and updating each weight by back-propagation with gradient descent to train the model.
In another aspect, the present invention further provides a device for recognizing a micro-expression, including:
an acquiring unit, used for acquiring the micro-expression dataset video;
a preprocessing unit, used for preprocessing the micro-expression dataset video to obtain the facial feature points of each frame;
a computing unit, used for calculating the change intensity of each frame in the micro-expression sequence and determining the Apex frame with the maximum change intensity;
a first determining unit, used for determining the frames with the minimum change intensity before and after the Apex frame according to the facial change intensity sequence;
a second determining unit, used for determining the facial feature point sequence of the frames with the minimum change intensity before and after the Apex frame;
and a recognition unit, used for inputting the facial feature point sequence of the frames with the minimum change intensity before and after the Apex frame into the trained deep learning neural network model and recognizing the micro-expression with the model.
As a preferred technical solution, the deep learning neural network model includes an SVM model and a recurrent neural network model.
As a preferred technical solution, the training step of the deep learning neural network model includes: taking the Apex frame as the input of the SVM model; taking the frame with the minimum change intensity before the Apex frame, the Apex frame, and the frame with the minimum change intensity after the Apex frame as the input of the recurrent neural network; weighting the output of the SVM model and the output of the recurrent neural network to obtain a predicted output; computing a loss against the ground-truth data; and updating each weight by back-propagation with gradient descent to train the model.
Compared with the prior art, the invention has the following beneficial effects: the recognition method trains the deep learning neural network model on the Apex frame and the two frames with the minimum change intensity before and after it, and uses the trained model to recognize the micro-expression; the computational load is small, the accuracy is high, no powerful hardware is required, the method can be deployed on various operating systems, and the program is easy to port.
Drawings
FIG. 1 is a flow chart of a method for identifying micro-expressions according to the present invention;
FIG. 2 is a flow chart of the deep learning network model training provided by the present invention;
fig. 3 is a structural diagram of a micro-expression recognition apparatus according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
As used in this application and the appended claims, the terms "a", "an", "the", and/or "said" do not refer specifically to the singular and may also include the plural unless the context clearly indicates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps or elements are included; these steps or elements do not constitute an exclusive list, and the method or apparatus may also include other steps or elements.
The relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application unless specifically stated otherwise. Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate. In all examples shown and discussed herein, any particular value should be construed as exemplary only and not as limiting. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
It should be noted that the terms "first", "second", and the like are used only to distinguish the corresponding components; unless otherwise stated they carry no special meaning and should not be construed as limiting the scope of protection of the present application. Further, although the terms used in the present application are selected from publicly known and used terms, some of them may have been selected by the applicant at his or her discretion, and their detailed meanings are described in the relevant parts of the description; the present application should therefore be understood not only through the actual terms used but also through the meaning each term carries.
Referring to fig. 1, the present invention provides a method for recognizing a micro-expression, including the steps of:
S10: acquiring a micro-expression dataset video;
S20: preprocessing the micro-expression dataset video to obtain the facial feature points of each frame;
in the present embodiment, the RK3399Pro development board is illustrated, but it should be noted that, although the RK3399Pro development board is illustrated in the present embodiment, the scope of the present invention is not limited thereto, and other platforms are also within the scope of the present invention.
Specifically, the RK3399Pro performs hardware decoding of the video stream and runs a facelandmark model: the face landmark function of RockX is called to obtain the facial feature points of each frame.
S30: calculating the facial change intensity of each frame of the micro-expression, and determining the Apex frame with the maximum change intensity;
Specifically, the change intensity of each frame's face is calculated first, and a change intensity sequence is constructed from it; the Apex frame is then determined from the change intensity sequence; finally, the feature point coordinates are obtained with the nose tip of the face in the Apex frame as the coordinate origin, realizing standardization.
It should be noted that the intensity of the facial changes in this embodiment is measured based on the regions where the micro-expression muscle movements occur frequently.
It should be understood that the Apex frame described in this embodiment refers to the frame with the greatest change intensity in the sequence.
S40: determining the frames with the minimum change intensity before and after the Apex frame according to the facial change intensity sequence;
Specifically, the frame with the minimum change intensity before the Apex frame is determined from the facial change intensity sequence, and this frame, together with the number N of frames between it and the Apex frame, is stored as the N_Before_Apex frame;
the frame with the minimum change intensity after the Apex frame is then determined, and this frame, with the number M of frames between it and the Apex frame, is recorded as the M_After_Apex frame.
It should be noted that a micro-expression is a short process: frames farther from the Apex frame have less influence on the micro-expression judgment, and closer frames have more. Corresponding weights are therefore computed from the intervals N and M between the N_Before_Apex and M_After_Apex frames and the Apex frame: weight_Before = M/(N+M) for the N_Before_Apex frame, and weight_After = N/(N+M) for the M_After_Apex frame.
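A small worked example of these interval-based weights; the formulas are those given above, and the sample values of N and M are illustrative only.

```python
def frame_weights(n, m):
    """Weights for the N_Before_Apex and M_After_Apex frames: the farther a
    frame lies from the Apex frame, the smaller its weight."""
    return m / (n + m), n / (n + m)  # (weight_Before, weight_After)

# e.g. N = 6 frames before the Apex frame, M = 2 frames after it:
print(frame_weights(6, 2))  # (0.25, 0.75): the nearer After frame weighs more
```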
S50: determining the facial feature point sequence of the frames with the minimum change intensity before and after the Apex frame;
It should be understood that after the Apex frame, the N_Before_Apex frame, and the M_After_Apex frame are determined, the facial feature point sequences of the three are determined.
S60: inputting the facial feature point sequence of the frames with the minimum change intensity before and after the Apex frame into the trained deep learning neural network model, and recognizing the micro-expression with the model.
In this embodiment, the deep learning network model includes an SVM model and a recurrent neural network model. Referring to fig. 2, the model takes the N_Before_Apex frame, the Apex frame, and the M_After_Apex frame as inputs and outputs a micro-expression classification result. The training step includes: the Apex frame is used as the input of the SVM model; the feature sequences of the N_Before_Apex frame, the Apex frame, and the M_After_Apex frame, weighted as above, are used as the input of the recurrent neural network; the micro-expression class label is used as the output; and the deep learning network is trained.
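The sketch below illustrates this fusion training step under stated assumptions: a scikit-learn SVM is fit beforehand on Apex-frame features, a PyTorch LSTM scores the three-frame feature sequence, and their weighted combination is trained against the true labels. The framework choice, fusion weight w_rnn, layer sizes, class count, and feature dimension are all illustrative, not fixed by the text above.

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

NUM_CLASSES, FEAT_DIM = 5, 136  # assumed: e.g. 68 landmarks * (x, y)

class MicroExprLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(FEAT_DIM, 64, batch_first=True)
        self.head = nn.Linear(64, NUM_CLASSES)

    def forward(self, seq):            # seq: (batch, 3, FEAT_DIM)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])   # class scores from the last time step

# Fit the SVM on Apex-frame features first, e.g.:
#   svm = SVC(probability=True).fit(apex_train_features, train_labels)
def train_step(model, optimizer, svm, apex_feats, seqs, labels, w_rnn=0.7):
    """One gradient-descent step on the weighted SVM + LSTM prediction."""
    svm_probs = torch.tensor(svm.predict_proba(apex_feats), dtype=torch.float32)
    fused = (1 - w_rnn) * svm_probs + w_rnn * model(seqs).softmax(dim=1)
    loss = nn.functional.nll_loss(torch.log(fused + 1e-8), labels)
    optimizer.zero_grad()
    loss.backward()                    # reverse iteration: back-propagation
    optimizer.step()                   # gradient descent update of the weights
    return loss.item()
```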
In this embodiment, in actual use, the facial feature point sequences of the Apex frame, the N_Before_Apex frame, and the M_After_Apex frame are input into the trained neural network model, and the recurrent neural network model is used to recognize the micro-expression.
It should be noted that the recurrent neural network model used in the present embodiment is an LSTM model, but the scope of the present invention is not limited thereto, and other recurrent neural network models are also possible.
It should be noted that the recognition method provided by the invention can be deployed on various platforms, and if the number of devices grows, an edge computing framework can be built to redistribute the computation. If the RK3399Pro is not used, a development board is needed only to parse the video stream: the facial feature point extraction alone can be deployed on it, the result sent to a better-performing edge server for processing, and the steps of the whole recognition method split across suitable machines.
The recognition method provided by the invention uses the SVM classifier for preliminary screening and the recurrent neural network model for accurate judgment; the computational load is small, the accuracy is high, the method can be deployed on various operating systems without powerful hardware, and the program is easy to port.
In another embodiment, the present invention further provides a device for recognizing a micro-expression; as shown in fig. 3, the device comprises:
an obtaining unit 100, configured to obtain a micro-expression dataset video; it should be noted that, since the specific obtaining manner and process are already described in detail in step S10 of the method for identifying a micro expression, they are not described herein again.
The preprocessing unit 200 is configured to preprocess the micro-expression data set video to obtain each frame of facial feature points; it should be noted that, since the specific preprocessing method and process have been described in detail in step S20 of the method for identifying a micro expression, they are not described herein again.
A calculating unit 300, configured to calculate a face change strength of each frame of the micro expression, and determine an Apex frame with the largest change strength; it should be noted that, since the specific calculation method and process are already described in detail in step S30 of the method for identifying a micro expression, they are not described herein again.
A first determining unit 400, configured to determine, according to the face change strength sequence, a frame with the minimum change strength before and after the Apex frame; it should be noted that, since the specific determination method and process are already described in detail in step S40 of the method for identifying a micro expression, they are not described herein again.
A second determining unit 500, configured to determine a sequence of facial feature points of frames with minimum variation intensity before and after the Apex frame; it should be noted that, since the specific determination method and process are already described in detail in step S50 of the method for identifying a micro-expression, they are not described herein again.
A recognition unit 600, configured to input the facial feature point sequence of the frame with the minimum change strength before and after the Apex frame into a trained deep learning neural network model, and recognize a micro-expression by using the deep learning neural network model; it should be noted that, since the specific identification method and process are already described in detail in step S60 of the method for identifying a micro expression, detailed description thereof is omitted here.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium may store a program, and when the program is executed, the program includes some or all of the steps of any one of the methods for identifying a micro-expression described in the above method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned memory comprises: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash memory disks, Read-Only Memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
An exemplary flowchart of a method for recognizing a micro-expression according to an embodiment of the present invention is described above with reference to the drawings. It should be noted that the numerous details included in the above description are merely exemplary of the invention and are not limiting of the invention. In other embodiments of the invention, the method may have more, fewer, or different steps, and the order, inclusion, function, etc. of the steps may be different from that described and illustrated.
Claims (10)
1. A method for recognizing a micro-expression, comprising:
acquiring a micro-expression dataset video;
preprocessing the micro-expression dataset video to obtain the facial feature points of each frame;
calculating the change intensity of each frame in the micro-expression sequence, and determining the Apex frame with the maximum change intensity;
determining the frames with the minimum change intensity before and after the Apex frame according to the facial change intensity sequence;
determining the facial feature point sequence of the frames with the minimum change intensity before and after the Apex frame;
and inputting the facial feature point sequence of the frames with the minimum change intensity before and after the Apex frame into a trained deep learning neural network model, and recognizing the micro-expression with the model.
2. The recognition method of claim 1, wherein preprocessing the micro-expression dataset video to obtain the facial feature points of each frame further comprises:
importing the micro-expression dataset video into a facelandmark model;
obtaining the facial feature points of each frame with the facelandmark model.
the recognition method of claim 2, wherein calculating the micro-expression sequence frame change strength further comprises:
calculating the change intensity of each frame of face and constructing a change intensity sequence;
determining an Apex frame according to the change strength sequence;
determining coordinates of other feature points by taking the nose tip in the Apex frame as an origin of coordinates;
and standardizing the Apex frame.
4. The method according to claim 3, wherein normalizing the Apex frame further comprises: converting the coordinates of all facial feature points acquired by the facelandmark model with the nose tip as the origin, so that the nose tip is unified as the coordinate origin.
5. The recognition method according to claim 1, wherein the deep learning neural network model comprises an SVM model and a recurrent neural network model.
6. The recognition method of claim 5, wherein the training step of the deep learning neural network model comprises: taking the Apex frame as the input of the SVM model; taking the frame with the minimum change intensity before the Apex frame, the Apex frame, and the frame with the minimum change intensity after the Apex frame as the input of the recurrent neural network; weighting the output of the SVM model and the output of the recurrent neural network to obtain a predicted output; computing a loss against the ground-truth data; and updating each weight by back-propagation with gradient descent to train the model.
7. An apparatus for recognizing a micro-expression, comprising:
an acquiring unit, used for acquiring the micro-expression dataset video;
a preprocessing unit, used for preprocessing the micro-expression dataset video to obtain the facial feature points of each frame;
a computing unit, used for calculating the change intensity of each frame in the micro-expression sequence and determining the Apex frame with the maximum change intensity;
a first determining unit, used for determining the frames with the minimum change intensity before and after the Apex frame according to the facial change intensity sequence;
a second determining unit, used for determining the facial feature point sequence of the frames with the minimum change intensity before and after the Apex frame;
and a recognition unit, used for inputting the facial feature point sequence of the frames with the minimum change intensity before and after the Apex frame into the trained deep learning neural network model and recognizing the micro-expression with the model.
8. The recognition device of claim 7, wherein the deep learning neural network model comprises an SVM model and a recurrent neural network model.
9. The recognition apparatus of claim 8, wherein the training step of the deep learning neural network model comprises: taking the Apex frame as the input of the SVM model; taking the frame with the minimum change intensity before the Apex frame, the Apex frame, and the frame with the minimum change intensity after the Apex frame as the input of the recurrent neural network; weighting the output of the SVM model and the output of the recurrent neural network to obtain a predicted output; computing a loss against the ground-truth data; and updating each weight by back-propagation with gradient descent to train the model.
10. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, carries out the steps of the method for recognizing a micro-expression according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210941662.3A | 2022-08-08 | 2022-08-08 | Micro-expression recognition method and device
Publications (2)
Publication Number | Publication Date
---|---
CN115249377A | 2022-10-28
CN115249377B | 2024-10-29
Family
ID=83699983
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202210941662.3A | Micro-expression recognition method and device | 2022-08-08 | 2022-08-08
Country Status (1)
Country | Link
---|---
CN | CN115249377B
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN118490231A * | 2024-07-17 | 2024-08-16 | Nanchang Hangkong University | Electroencephalogram emotion recognition method, equipment, medium and product under dynamic situation
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN111582212A * | 2020-05-15 | 2020-08-25 | Shandong University | Multi-domain fusion micro-expression detection method based on motion unit
CN113537008A * | 2021-07-02 | 2021-10-22 | Jiangnan University | Micro-expression identification method based on adaptive motion amplification and convolutional neural network
CN113743275A * | 2021-08-30 | 2021-12-03 | Southwest University | Micro-expression type determination method and device, electronic equipment and storage medium
Also Published As
Publication number | Publication date
---|---
CN115249377B | 2024-10-29
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
2023-04-06 | TA01 | Transfer of patent application right | Applicant after: Jiangsu Zhengfang Transportation Technology Co.,Ltd., 3286, Floor 3, Chuangye Building, No. 1009 Tianyuan East Road, Jiangning District, Nanjing, Jiangsu Province (Jiangning High-tech Zone), 211100. Applicant before: Nanjing Wuxiang Cloud Intelligent Technology Co.,Ltd., Room 921, Building 6, No. 6 Sanhong Road, Yuhuatai District, Nanjing, Jiangsu Province, 210012.
| GR01 | Patent grant |