CN102193620B - Input method based on facial expression recognition - Google Patents

Input method based on facial expression recognition

Info

Publication number
CN102193620B
CN102193620B CN201010118463A
Authority
CN
China
Prior art keywords
input
format
classification
expression
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN 201010118463
Other languages
Chinese (zh)
Other versions
CN102193620A (en)
Inventor
杜乐
谢林
朱昊亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Electronics China R&D Center
Priority to CN 201010118463 priority Critical patent/CN102193620B/en
Publication of CN102193620A publication Critical patent/CN102193620A/en
Application granted granted Critical
Publication of CN102193620B publication Critical patent/CN102193620B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Telephonic Communication Services (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses an input method based on facial expression recognition, which comprises the following steps: acquiring an image with a camera device; recognizing a face and locating its contour using a reference template method, a face rule method, a sample learning method, a skin color model method, or an eigenface method; extracting facial expression features from the face using principal component analysis or a Gabor wavelet method; obtaining the classification of the facial expression from the facial expression features using a template-based matching method, a neural-network-based method, or a support-vector-machine-based method; matching the classification with a corresponding input result; and inputting the input result.

Description

Input method based on facial expression recognition
Technical field
The present invention relates to input methods, and in particular to an input method based on facial expression recognition.
Background technology
At present, the following methods are mainly used to input expression information on electronic devices such as mobile phones, personal digital assistants, televisions, and personal computers.
The most basic method is to describe a particular expression in text, for example the user types the word "smile" or "anger" to describe an expression of that type. Another method uses combinations of punctuation marks: the user inputs a series of punctuation marks that together form an emoticon with a pictographic meaning, for example "^_^".
However, directly inputting text that describes an expression suffers from problems such as a single input mode and a stiff form of expression. The punctuation-combination method is cumbersome to input, and the range of expressions it can convey is limited and imprecise.
Compared with the above two methods, using expression pictures performs better in every respect. Specifically, instant-messaging devices such as mobile phones and computers, and messaging applications such as QQ and MSN (Microsoft Service Network), provide an expression picture selection box; the user clicks a suitable expression picture in the box, and the picture is then output to a display terminal or another output device. However, this method still requires cumbersome operation.
In addition, there are methods that use facial expression recognition to input expression information: according to the recognized facial expression of the user, corresponding information is output to a display terminal or another output device. Such methods are mostly used in fields such as device control, information input, and security authentication. However, the limitation of current methods of this kind is that expression matching and recognition require sampling the user's face in advance, so their versatility is low and they cannot be applied to expression recognition and expression-information input for unspecified persons.
Summary of the invention
In view of the problems of the above input methods, the object of the present invention is to provide an input method for expression information that is easy to operate, highly versatile, and applicable to unspecified persons.
To achieve this object, the input method based on expression recognition according to the present invention comprises: an acquisition step of acquiring an image with a camera device; a recognition and classification step of recognizing a face in the image, extracting expression features, and obtaining the classification of the expression; a matching step of matching the classification with a corresponding input result; and an input step of performing input using the input result.
Further, in the above input method based on expression recognition, the recognition and classification step comprises: recognizing the face and locating its contour using a reference template method, a face rule method, a sample learning method, a skin color model method, or an eigenface method; extracting expression features from the face using principal component analysis (PCA) or the Gabor wavelet method; and obtaining the classification of the expression from the expression features using a template-based matching method, a neural-network-based method, or a support-vector-machine-based method.
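Read as a data flow, the steps above chain detection, feature extraction, classification, and lookup. The following is a minimal sketch of that chain; every function name, return value, and the symbol-combination table are invented for illustration and are not taken from the patent:

```python
# Hypothetical end-to-end sketch: recognize -> extract -> classify -> match.

def recognize_face(image):
    """Stand-in for a reference-template / skin-color / eigenface detector."""
    return {"contour": [(0, 0), (10, 0), (10, 12), (0, 12)]}

def extract_features(image, face):
    """Stand-in for PCA or Gabor-wavelet feature extraction."""
    return [0.9, 0.1, 0.0]

def classify(features):
    """Stand-in for template matching, a neural network, or an SVM."""
    return "happy" if features[0] > 0.5 else "unrecognized"

# Predefined input results in a symbol-combination format (placeholder values).
INPUT_RESULTS = {"happy": ":-)", "angry": ">:("}

def expression_input(image):
    face = recognize_face(image)
    label = classify(extract_features(image, face))
    return INPUT_RESULTS.get(label)  # matched input result, or None

print(expression_input(object()))  # -> :-)
```

With real detectors and classifiers substituted for the stubs, the same four-call structure implements the claimed method.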
Further, in the above input method based on expression recognition, the acquisition step, the recognition and classification step, and the matching step are performed repeatedly in sequence at a predetermined time interval; before the matching step, the method further comprises a judgment step of judging whether the classification is identical to the classification of the expression in the previous time interval; if identical, the subsequent matching step is not performed, and the method returns directly to the acquisition step in the next time interval.
Further, in the above input method based on expression recognition, in the matching step the classification is matched, among a plurality of input results predefined in a predetermined input format, with the input result corresponding to the classification.
Further, in the above input method based on expression recognition, the predetermined input format is one of a text format, a picture format, a symbol combination format, a video format, and an audio format.
Further, in the above input method based on expression recognition, in the matching step one input format is selected from one or more input formats comprising a text format, a picture format, a symbol combination format, a video format, and an audio format, and the classification is matched, among a plurality of input results predefined in the selected input format, with the input result corresponding to the classification.
Further, the above input method based on expression recognition further comprises: editing, including deleting, modifying, or adding, one or more of the input formats comprising the text format, picture format, symbol combination format, video format, and audio format.
Further, the above input method based on expression recognition further comprises: editing, including deleting, modifying, or adding, one or more of the input results comprising happiness, anger, surprise, and fear.
The input method based on expression recognition according to the present invention thus provides an input method for expression information that is easy to operate, highly versatile, and applicable to unspecified persons.
Description of drawings
The above and other objects and features of the present invention will become apparent from the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart showing the steps of the input method based on expression recognition according to an embodiment of the present invention;
Fig. 2 is a table illustrating expression types, input formats, and input results;
Fig. 3 is a diagram showing an example input result.
Main reference signs: S1010-S1080 denote steps.
Embodiment
Hereinafter, an embodiment of the present invention is described in detail with reference to the accompanying drawings.
(embodiment)
Fig. 1 is a flowchart showing the steps of the input method based on expression recognition according to the present embodiment.
As shown in Fig. 1, the input method based on expression recognition according to the present embodiment can be roughly divided into four modules comprising eight steps: step S1010 constitutes the image acquisition module 101, steps S1020-S1040 constitute the expression recognition and classification module 102, steps S1050-S1070 constitute the input result matching module 103, and step S1080 constitutes the input module. The specific steps are as follows.
At step S1010, after the input method is started, the camera device captures video of the user's face and acquires a video signal at a fixed time interval Δt, for example every 0.1 second. The camera device may be installed on the device the user is operating, or may be an independent device.
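The fixed-interval acquisition of step S1010 can be sketched as a generator that stamps each frame with its acquisition time. The 0.1 s interval follows the example in the text; the camera-as-callable API shape is an assumption made for illustration:

```python
import itertools

DELTA_T = 0.1  # seconds between acquisitions, as in the example in the text

def acquire_frames(camera, interval=DELTA_T):
    """Yield (timestamp, frame) pairs at a fixed interval.

    `camera` is any zero-argument callable returning the current frame
    (a hypothetical stand-in for the real camera device).
    """
    t = 0.0
    while True:
        yield t, camera()
        t += interval

# Drive the loop with a dummy camera; islice stops it after three frames.
frames = list(itertools.islice(acquire_frames(lambda: "frame"), 3))
print(frames)  # -> [(0.0, 'frame'), (0.1, 'frame'), (0.2, 'frame')]
```

A real implementation would sleep between iterations or hook into the camera's own frame callback; the generator shape keeps acquisition decoupled from the downstream recognition steps.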
Then, at step S1020, the face in the image is recognized from the video signal and its contour is located, obtaining information such as the number of faces in the image, their contours, and their primary-secondary relations. Existing techniques for face recognition and localization include the reference template method, the face rule method, the sample learning method, the skin color model method, and the eigenface method.
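Of the detection techniques named, the skin color model method is the simplest to illustrate. The per-pixel RGB rule below is one widely cited formulation for uniform daylight; it is a stand-in for whatever model the inventors used, which the patent does not specify:

```python
def is_skin_rgb(r, g, b):
    """Per-pixel RGB skin test (a common uniform-daylight rule).

    A face detector built on this would threshold pixels, then group
    skin regions into candidate face contours for step S1020.
    """
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15   # enough color spread
            and abs(r - g) > 15                    # red-green separation
            and r > g and r > b)                   # red dominates

print(is_skin_rgb(200, 120, 90))  # -> True  (skin-like tone)
print(is_skin_rgb(90, 200, 120))  # -> False (green-dominant pixel)
```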
Then, at step S1030, facial expression features are extracted using the face and contour-localization results obtained at step S1020. Existing techniques for facial expression feature extraction include principal component analysis (PCA) and the Gabor wavelet method.
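Of the two feature-extraction techniques named, PCA is the easier to sketch. The toy version below finds the first principal component of 2-D points by power iteration on the covariance matrix; real expression features would be projections of face-region pixels onto several such components, but the mechanics are the same:

```python
def principal_component(points, iters=100):
    """First principal component of 2-D points via power iteration."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # Entries of the 2x2 covariance matrix.
    cxx = sum(x * x for x, _ in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    vx, vy = 1.0, 0.0
    for _ in range(iters):
        # Repeatedly apply the covariance matrix and renormalize.
        wx, wy = cxx * vx + cxy * vy, cxy * vx + cyy * vy
        norm = (wx * wx + wy * wy) ** 0.5
        vx, vy = wx / norm, wy / norm
    return vx, vy

# Points lying near the diagonal: the dominant direction is near (1, 1)/sqrt(2).
vx, vy = principal_component([(0, 0), (1, 1), (2, 2), (3, 3.1)])
print(abs(vx), abs(vy))
```

Projecting each sample onto the leading components yields the low-dimensional feature vector that step S1040 classifies.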
Then, at step S1040, the expression is classified, using the expression features of the face obtained at step S1030, into a class such as happy, angry, surprised, or unrecognized. Existing facial expression classification techniques mainly include the template-based matching method, the neural-network-based method, and the support-vector-machine-based method.
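The template-based matching option is the easiest to sketch: pick the class whose stored feature template is nearest to the extracted features, falling back to "unrecognized" when nothing is close enough (which is exactly the case step S1050 handles). The templates and threshold below are invented placeholders:

```python
TEMPLATES = {  # hypothetical per-class feature templates
    "happy":     [0.9, 0.1, 0.1],
    "angry":     [0.1, 0.9, 0.1],
    "surprised": [0.1, 0.1, 0.9],
}

def classify_expression(features, threshold=0.5):
    """Nearest-template classification with a rejection threshold."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    label, d = min(((k, dist(features, t)) for k, t in TEMPLATES.items()),
                   key=lambda kv: kv[1])
    return label if d <= threshold else "unrecognized"

print(classify_expression([0.85, 0.15, 0.1]))  # -> happy
print(classify_expression([0.5, 0.5, 0.5]))    # -> unrecognized
```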
Then, at step S1050, it is judged whether the classification result obtained at step S1040 is "unrecognized". If so (step S1050: Yes), the image information collected this time is discarded and the method returns to step S1010.
If the classification result is not "unrecognized" (step S1050: No), then at step S1060 it is further judged whether this classification result is consistent with the classification result of the expression collected in the previous time interval. If consistent (step S1060: Yes), the user's expression has not changed during the time interval Δt (Δt = t2 - t1, where t2 is the current image acquisition time and t1 is the previous image acquisition time), so there is no need to re-input, and the method returns to step S1010.
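Taken together, steps S1050 and S1060 act as a de-duplication filter on the stream of per-interval classifications: unrecognized frames are dropped, and only a changed expression triggers input. A minimal sketch:

```python
def filter_repeats(labels):
    """Drop 'unrecognized' frames and consecutive repeats of one class.

    Mirrors steps S1050/S1060: each yielded label is an expression change
    that should proceed to the matching step S1070.
    """
    prev = None
    for label in labels:
        if label == "unrecognized":
            continue        # step S1050: discard the frame, reacquire
        if label == prev:
            continue        # step S1060: expression unchanged, reacquire
        prev = label
        yield label

stream = ["happy", "happy", "unrecognized", "happy", "angry", "angry"]
print(list(filter_repeats(stream)))  # -> ['happy', 'angry']
```

Note one design choice made here as an assumption: an "unrecognized" frame does not reset the remembered class, so a briefly lost face does not cause the same expression to be re-input.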
If inconsistent (step S1060: No), then at step S1070 an input format is selected, and the classification result is matched with the corresponding input result in the selected input format. The input formats are several predefined formats, including text, picture, and symbol combination. Fig. 2 is a table illustrating the expression types, input formats, and matched input results. For example, suppose the expression type is happy and the input format is picture; the matched input result is then the picture shown in the fourth row of the third column of the table.
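The Fig. 2 lookup of step S1070 amounts to a two-key table indexed by expression type and input format. The cell values below are placeholders, since the actual table entries appear only in the drawing:

```python
# Hypothetical version of the Fig. 2 table: expression type x input format.
RESULT_TABLE = {
    "happy": {"text": "happy", "symbol": "^_^", "picture": "happy.png"},
    "angry": {"text": "angry", "symbol": ">:(", "picture": "angry.png"},
}

def match_input_result(classification, input_format="picture"):
    """Step S1070: look up the input result for the classification in the
    selected input format; the default mirrors the preset-format variation
    described later in the embodiment."""
    return RESULT_TABLE[classification][input_format]

print(match_input_result("happy", "symbol"))  # -> ^_^
print(match_input_result("angry"))            # -> angry.png
```

The later editing variations (adding a video or audio format, refining "happy" into "smile" and "laugh") reduce to inserting columns or rows into this table.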
Then, at step S1080, the input result obtained at step S1070 is input to the system using the input method of the present embodiment for display; the system includes the display screen of a mobile phone, personal digital assistant, television, or personal computer. The input result can even be input directly to a network interface and transmitted over the network, or applied to fields such as device control, information input, and security authentication.
As described above, according to the input method based on expression recognition of the present embodiment, the user's natural expression is collected, recognized, and analyzed to obtain the classification of the expression, without sampling a specific user's expressions in advance and comparing against the sampled expressions. The method can therefore be applied to expression recognition and expression-information input for unspecified users, improving the versatility of the input method based on expression recognition.
Furthermore, as described above, according to the input method based on expression recognition of the present embodiment, the natural expression of the face collected by the camera device is recognized, the classification of the expression is obtained by extracting and analyzing the expression features, and the classification is matched with an input result for input. This input process requires no manual operation by the user, so the efficiency and convenience of expression-information input can be improved; moreover, inputting the user's expression via images supplements traditional input modes, enriching the forms of the input method and making operation more engaging.
Furthermore, as described above, in the input method based on expression recognition according to the present embodiment, the camera device collects images at a predetermined time interval, and when the classification result of an expression is judged identical to the classification result of the previous time interval, the subsequent steps are not performed and the camera device proceeds directly to image acquisition in the next time interval. Expression recognition and input can therefore be performed continuously in real time, increasing the efficiency and convenience of the input operation and enlarging the range of application of the input method, making it especially suitable for real-time network transmission such as Internet chat.
In addition, various changes in form and detail may be made to the method of the present embodiment without departing from the spirit and scope of the present invention as defined by the claims.
For example, in the input method based on expression recognition of the present embodiment, the input format is selected by the user at step S1070, but the present invention is not limited to this: the user may also preset a default input format. This saves the operation of selecting an input format and can further improve the efficiency and convenience of the input process.
As another example, the user may edit the input formats, for example adding, deleting, or modifying them, such as adding further input formats like a video format or an audio format.
As another example, the user may dynamically edit (for example add, delete, or modify) the input results according to the results of expression recognition and classification. For example, when the classification result "happy" obtained by expression recognition can be further refined into "smile" and "laugh", corresponding new input results can be added, for example the input result corresponding to "laugh" shown in Fig. 3.
Industrial applicability
The input method based on expression recognition of the present invention is applicable to the input of expression information in electronic devices such as mobile phones, personal digital assistants, televisions, and personal computers, and in network transmission.

Claims (7)

1. An input method based on expression recognition, comprising:
an acquisition step of acquiring an image with a camera device;
a recognition and classification step of recognizing a face in the image, extracting expression features, and obtaining the classification of the expression;
a matching step of matching the classification with a corresponding input result; and
an input step of performing input using the input result,
wherein the acquisition step, the recognition and classification step, and the matching step are performed repeatedly in sequence at a predetermined time interval, and before the matching step the method further comprises:
a judgment step of judging whether the classification is identical to the classification of the expression in the previous time interval; if identical, the subsequent matching step is not performed, and the method returns directly to the acquisition step in the next time interval.
2. The input method based on expression recognition of claim 1, wherein the recognition and classification step comprises:
recognizing the face and locating its contour using a reference template method, a face rule method, a sample learning method, a skin color model method, or an eigenface method;
extracting expression features from the face using principal component analysis (PCA) or the Gabor wavelet method; and
obtaining the classification of the expression from the expression features using a template-based matching method, a neural-network-based method, or a support-vector-machine-based method.
3. The input method based on expression recognition of claim 1, wherein in the matching step the classification is matched, among a plurality of input results predefined in a predetermined input format, with the input result corresponding to the classification.
4. The input method based on expression recognition of claim 3, wherein the predetermined input format is one of a text format, a picture format, a symbol combination format, a video format, and an audio format.
5. The input method based on expression recognition of claim 1, wherein in the matching step one input format is selected from one or more input formats comprising a text format, a picture format, a symbol combination format, a video format, and an audio format, and the classification is matched, among a plurality of input results predefined in the selected input format, with the input result corresponding to the classification.
6. The input method based on expression recognition of claim 4 or 5, further comprising:
editing, including deleting, modifying, or adding, one or more of the input formats comprising the text format, picture format, symbol combination format, video format, and audio format.
7. The input method based on expression recognition of claim 1, further comprising:
editing, including deleting, modifying, or adding, one or more of the input results comprising happiness, anger, surprise, and fear.
CN 201010118463 2010-03-02 2010-03-02 Input method based on facial expression recognition Active CN102193620B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010118463 CN102193620B (en) 2010-03-02 2010-03-02 Input method based on facial expression recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201010118463 CN102193620B (en) 2010-03-02 2010-03-02 Input method based on facial expression recognition

Publications (2)

Publication Number Publication Date
CN102193620A CN102193620A (en) 2011-09-21
CN102193620B true CN102193620B (en) 2013-01-23

Family

ID=44601804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010118463 Active CN102193620B (en) 2010-03-02 2010-03-02 Input method based on facial expression recognition

Country Status (1)

Country Link
CN (1) CN102193620B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI470564B (en) * 2012-02-21 2015-01-21 Wistron Corp User emtion detection method and handwriting input electronic device
TWI484475B (en) * 2012-06-05 2015-05-11 Quanta Comp Inc Method for displaying words, voice-to-text device and computer program product
CN103514389A (en) * 2012-06-28 2014-01-15 华为技术有限公司 Equipment authentication method and device
CN103677226B (en) * 2012-09-04 2016-08-03 北方工业大学 expression recognition input method
CN102880388A (en) * 2012-09-06 2013-01-16 北京天宇朗通通信设备股份有限公司 Music processing method, music processing device and mobile terminal
CN104244101A (en) * 2013-06-21 2014-12-24 三星电子(中国)研发中心 Method and device for commenting multimedia content
CN103399630A (en) * 2013-07-05 2013-11-20 北京百纳威尔科技有限公司 Method and device for recording facial expressions
CN108762480A (en) * 2013-08-28 2018-11-06 联想(北京)有限公司 A kind of input method and electronic equipment
CN104333688B (en) * 2013-12-03 2018-07-10 广州三星通信技术研究有限公司 The device and method of image formation sheet feelings symbol based on shooting
CN103809759A (en) * 2014-03-05 2014-05-21 李志英 Face input method
CN104063427A (en) * 2014-06-06 2014-09-24 北京搜狗科技发展有限公司 Expression input method and device based on semantic understanding
CN104063683B (en) * 2014-06-06 2017-05-17 北京搜狗科技发展有限公司 Expression input method and device based on face identification
CN104284131A (en) * 2014-10-29 2015-01-14 四川智诚天逸科技有限公司 Video communication device adjusting image
CN108216254B (en) * 2018-01-10 2020-03-10 山东大学 Road anger emotion recognition method based on fusion of facial image and pulse information

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1606347A (en) * 2004-11-15 2005-04-13 北京中星微电子有限公司 A video communication method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070071288A1 (en) * 2005-09-29 2007-03-29 Quen-Zong Wu Facial features based human face recognition method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1606347A (en) * 2004-11-15 2005-04-13 北京中星微电子有限公司 A video communication method

Also Published As

Publication number Publication date
CN102193620A (en) 2011-09-21

Similar Documents

Publication Publication Date Title
CN102193620B (en) Input method based on facial expression recognition
US10685186B2 (en) Semantic understanding based emoji input method and device
CN106681633B (en) System and method for auxiliary information input control function of sliding operation of portable terminal equipment
CN104063683A (en) Expression input method and device based on face identification
CN104076944A (en) Chat emoticon input method and device
CN102890776A (en) Method for searching emoticons through facial expression
CN106294774A (en) User individual data processing method based on dialogue service and device
CN111179935B (en) Voice quality inspection method and device
CN107733782A (en) The method, apparatus and system of group is generated according to task
CN102855317A (en) Multimode indexing method and system based on demonstration video
US10062384B1 (en) Analysis of content written on a board
CN113094512A (en) Fault analysis system and method in industrial production and manufacturing
CN111882625A (en) Method and device for generating dynamic graph, electronic equipment and storage medium
CN112925905A (en) Method, apparatus, electronic device and storage medium for extracting video subtitles
CN102890777A (en) Computer system capable of identifying facial expressions
CN111061838A (en) Text feature keyword determination method and device and storage medium
WO2024193538A1 (en) Video data processing method and apparatus, device, and readable storage medium
CN112417095A (en) Voice message processing method and device
CN116484872A (en) Multi-modal aspect emotion judging method and system based on pre-training and attention
CN116403199A (en) Screen icon semantic recognition method and system based on deep learning
Srividhya et al. Deep Learning based Telugu Video Text Detection using Video Coding Over Digital Transmission
CN114429647A (en) Progressive character interaction identification method and system
CN107918606A (en) Tool is as name word recognition method and device
US11210335B2 (en) System and method for judging situation of object
KR20220079432A (en) Mxethod for providing user with tag information extracted from screenshot image and system thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 5-12 / F, building 6, 57 Andemen street, Yuhuatai District, Nanjing City, Jiangsu Province

Patentee after: Samsung Electronics (China) R&D Center

Patentee after: Samsung Electronics Co.,Ltd.

Address before: No. 268 Nanjing Huijie square Zhongshan Road city in Jiangsu province 210008 8 floor

Patentee before: Samsung Electronics (China) R&D Center

Patentee before: Samsung Electronics Co.,Ltd.