WO2019085585A1 - Device control processing method and apparatus - Google Patents

Device control processing method and apparatus

Info

Publication number
WO2019085585A1
WO2019085585A1 · PCT/CN2018/100489 · CN2018100489W
Authority
WO
WIPO (PCT)
Prior art keywords
user
photo
level
sound
emotion
Prior art date
Application number
PCT/CN2018/100489
Other languages
English (en)
Chinese (zh)
Inventor
刘质斌
王九飚
周文斌
石秋成
王红霞
王琳
Original Assignee
格力电器(武汉)有限公司
珠海格力电器股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 格力电器(武汉)有限公司 and 珠海格力电器股份有限公司
Publication of WO2019085585A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803 Home automation networks
    • H04L 12/2805 Home Audio Video Interoperability [HAVI] networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L 25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803 Home automation networks
    • H04L 12/2816 Controlling appliance services of a home automation network by calling their functionalities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803 Home automation networks
    • H04L 2012/2847 Home automation networks characterised by the type of home appliance used
    • H04L 2012/2849 Audio/video appliances

Definitions

  • the present application relates to the field of smart homes, and in particular to a device control processing method and apparatus.
  • The embodiments of the present application provide a device control processing method and apparatus, so as to at least solve the technical problem that home systems in the related art cannot meet users' demands for the degree of intelligence of the home system.
  • A device control processing method is provided, including: acquiring information of a user, where the information includes at least one of: a photo of the user captured by an imaging device, and a voice of the user received through an audio device; using a model to evaluate an emotion level of the user corresponding to the information, where the model is trained on multiple sets of data, each set including: a photo and/or a sound of the user, and a label identifying the emotion level represented by the photo and/or sound; and transmitting a control command according to the emotion level, where the control command is used to instruct the device to perform a predetermined operation.
  • Optionally, the method further includes: transmitting a photo and/or a sound of the user to another user; and acquiring the label added by the other user for the user's photo and/or sound.
  • Optionally, acquiring the label added by the other user for the user's photo and/or sound includes at least one of: sending the user's photo and/or sound, together with a plurality of selectable emotion levels, to the other user, and receiving, as the label, the emotion level the other user selects from the plurality of emotion levels; or acquiring the other user's evaluation of the user's photo and/or sound and extracting the emotion level from the evaluation as the label, where the evaluation includes at least one of: a natural-language evaluation and a voice evaluation.
  • Optionally, the method further includes: after the user's photo and/or sound is obtained, asking the user a question according to the photo and/or voice; and extracting the emotion level corresponding to the user's photo and/or sound according to the user's answer to the question asked.
  • Optionally, sending the control command according to the emotion level comprises: sending the control command if the emotion level matches a predetermined level, where the control command is used to control the device to perform at least one of the following operations: playing music corresponding to the emotion level, and playing a video corresponding to the emotion level.
  • A device control processing apparatus is also provided, including: a first acquiring unit, configured to acquire information of a user, where the information includes at least one of: a photo of the user captured by the imaging device, and a voice of the user received through the audio device; an evaluation unit, configured to use a model to evaluate the emotion level of the user corresponding to the information, where the model is trained on multiple sets of data, each set including: a photo and/or a sound of the user, and a label identifying the emotion level represented by the photo and/or sound; and a first sending unit, configured to send a control command according to the emotion level, where the control command is used to instruct the device to perform a predetermined operation.
  • Optionally, the apparatus further includes: a second sending unit, configured to send the user's photo and/or sound to other users before the model is used to evaluate the emotion level of the user corresponding to the information; and a second obtaining unit, configured to acquire the labels added by the other users for the user's photo and/or sound.
  • Optionally, the second obtaining unit includes at least one of: a first sending module, configured to send the user's photo and/or sound, together with a plurality of selectable emotion levels, to the other user, and to receive, as the label, the emotion level the other user selects from the plurality of emotion levels; and an extracting module, configured to obtain the other user's evaluation of the user's photo and/or sound and extract the emotion level from the evaluation as the label, where the evaluation includes at least one of: a natural-language evaluation and a voice evaluation.
  • Optionally, the apparatus further includes: a questioning unit, configured to ask the user a question according to the user's photo and/or voice after the photo and/or sound is obtained and before the model is used to evaluate the emotion level of the user corresponding to the information; and an extracting unit, configured to extract the emotion level corresponding to the user's photo and/or sound according to the user's answer to the question asked.
  • Optionally, the first sending unit includes: a second sending module, configured to send the control command if the emotion level matches a predetermined level, where the control command is used to control the device to perform at least one of the following operations: playing music corresponding to the emotion level, and playing a video corresponding to the emotion level.
  • A storage medium is also provided, comprising a stored program, where the program, when run, performs the device control processing method according to any one of the above.
  • A processor is also provided, configured to run a program, where the program, when executed, performs the device control processing method according to any one of the above.
  • In the embodiments of the present application, the information of the user may be acquired, where the information includes at least one of the following: a photo of the user captured by the imaging device, and a voice of the user received by the audio device; the model is used to evaluate the emotion level of the user corresponding to the information, where the model is trained on multiple sets of data, each set including a photo and/or a sound of the user and a label identifying the emotion level it represents; and a control command is transmitted according to the emotion level, where the control command is used to instruct the device to perform a predetermined operation.
  • The device control processing method provided by the embodiments of the present application achieves the purpose of controlling the smart home system according to the user's acquired emotion, and achieves the technical effect of letting the user experience the enjoyment that modern technology brings to quality of life, thereby solving the technical problem that home systems in the related art cannot satisfy the user's demand for the degree of intelligence of the home system, and improving the user experience.
  • FIG. 1 is a flowchart of a device control processing method according to an embodiment of the present application.
  • FIG. 2 is a flowchart of an adjustment mechanism of a smart home system according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a device control apparatus according to an embodiment of the present application.
  • Pixel: the smallest unit that can be displayed on a computer screen, used as the unit of an image; a displayed image is an array of horizontal and vertical pixels. The more pixels an image contains, the higher its resolution and the more delicate and realistic the image appears.
  • Pixel value: the numerical value of a pixel.
  • Binarization: most pictures taken by a camera are color images, and a color image contains a huge amount of information. Before analysis, the color picture is first processed so that it contains only foreground and background information; the foreground information can simply be defined as black and the background information as white, which yields the binarized image.
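  • As an illustration of the binarization just described, the following is a minimal Python sketch using OpenCV; the file names and the fixed threshold of 127 are assumptions for the example, not values specified by the present application.

```python
import cv2

# Load a color image captured by the camera (file name is hypothetical).
image = cv2.imread("capture.jpg")

# Collapse the color image to a single grayscale channel.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Binarize: pixels above the threshold become white and the rest become
# black, leaving only foreground/background information. The threshold
# of 127 is an arbitrary midpoint chosen for the example; a real system
# might instead derive it automatically (e.g., Otsu's method).
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

cv2.imwrite("binary.png", binary)
```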
  • Neural network algorithm: refers to a process of reasoning according to logic rules. Information is first converted into concepts and represented by symbols; logical reasoning is then performed in serial mode according to symbolic operations, and this process can be written as serial instructions for a computer to execute.
  • Voiceprint: a spectrum of sound waves, displayed by electro-acoustic instruments, that carries speech information.
  • Voiceprint recognition: a kind of biometric technology, also known as speaker recognition, of which there are two types: speaker identification and speaker verification. Different tasks and applications use different voiceprint recognition techniques; for example, identification technology may be needed when narrowing the scope of a criminal investigation, whereas verification technology is needed to confirm a claimed identity.
  • The following embodiments may be applied to various electrical devices; the types of electrical devices are not specifically limited and include, but are not limited to: washing machines, air conditioners, refrigerators, and so on. Together, these electrical devices constitute the smart home system in the embodiments of the present application.
  • the embodiments of the present application are described in detail below.
  • A method embodiment of a device control processing method is provided. It should be noted that the steps shown in the flowchart of the accompanying drawings may be executed in a computer system, such as one executing a set of computer-executable instructions, and that although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that described herein.
  • FIG. 1 is a flowchart of a device control processing method according to an embodiment of the present application. As shown in FIG. 1 , the device control processing method includes the following steps:
  • Step S102: Acquire information of the user, where the information includes at least one of the following: a photo of the user captured by the imaging device, and a voice of the user received by the audio device.
  • one or more cameras may be installed in the user's home for taking a photo of the user.
  • The mounting position of the camera is not specifically limited and may include, but is not limited to: the doorway, ceiling, and the like of each room in the user's home, so that photos of the user can be collected through cameras installed in different locations.
  • When the user's photo is taken using the one or more cameras described above, the user may be photographed at predetermined intervals, and the user's emotion in each image may then be analyzed.
  • The category of the captured image is not specifically limited in the embodiments of the present application and may include, but is not limited to: black-and-white (grayscale) images and color (RGB) images.
  • The information in the image can be analyzed by means of binarized image processing. Specifically, during analysis, multiple pixels in the image can be compared with the pixels at the corresponding positions in a historical image to determine which pixels differ; the differing pixels are then separated out, so that the user's information can be extracted from the image captured by the imaging device.
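  • A minimal sketch of this pixel comparison, assuming the current and historical frames are grayscale images of the same size; the file names and the difference threshold of 30 are assumptions for the example.

```python
import cv2

# Current frame and a stored historical frame (paths are hypothetical).
current = cv2.imread("current.png", cv2.IMREAD_GRAYSCALE)
history = cv2.imread("history.png", cv2.IMREAD_GRAYSCALE)

# Per-pixel absolute difference between the two frames.
diff = cv2.absdiff(current, history)

# Keep only pixels that changed noticeably; the mask marks where the
# differing pixels (the user's information) appear.
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

# Separate the differing pixels out of the current frame.
foreground = cv2.bitwise_and(current, current, mask=mask)
cv2.imwrite("foreground.png", foreground)
```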
  • one or more audio devices may be installed in the user's home for receiving the voice of the user.
  • The installation location of the audio device is not specifically limited and may include, but is not limited to: positions at the doorway, ceiling, and the like of each room in the user's home.
  • The audio device may be installed at a position where the user is frequently active, at approximately the height of the human body.
  • The audio device includes a voice model library, where the voice model library stores the voiceprint of each member of the family.
  • Each member of the family can speak to the audio device, and the audio device can perform feature extraction to store the voiceprints of the different members in the voice model library.
  • When a family member speaks, the audio device can extract features from the member's voice to obtain the member's voiceprint, match that voiceprint against the voiceprints stored in the voice model library, identify the family member to whom the voiceprint corresponds, and then obtain the information corresponding to that member.
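  • The application does not specify how voiceprints are represented or matched. As one hedged possibility, the sketch below uses averaged MFCC vectors as the voiceprint and cosine similarity for matching; the member names, file paths, and the 0.9 acceptance threshold are assumptions.

```python
import numpy as np
import librosa

def voiceprint(path: str) -> np.ndarray:
    """Extract a crude voiceprint: the mean MFCC vector of a recording."""
    signal, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voiceprints."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Voice model library: one enrolled voiceprint per family member
# (names and file paths are hypothetical).
library = {name: voiceprint(f"{name}.wav") for name in ("alice", "bob")}

# Match an incoming utterance against the library.
incoming = voiceprint("incoming.wav")
best = max(library, key=lambda name: similarity(library[name], incoming))
if similarity(library[best], incoming) > 0.9:  # assumed threshold
    print(f"Recognized family member: {best}")
```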
  • Step S104: Use the model to evaluate the emotion level of the user corresponding to the information, where the model is trained on multiple sets of data, each of which includes: a photo and/or a sound of the user, and a label identifying the emotion level represented by the photo and/or sound.
  • Optionally, the above model may be learned by training on images captured by the camera during a predetermined period, on the user's voice received by the audio device within a predetermined period, and on the labels identifying the emotion level represented by each photo and/or sound.
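  • The application does not fix a model family; as one possibility, the sketch below trains a small scikit-learn classifier on feature vectors extracted from photos and/or sounds, with integer emotion-level labels obtained through the tagging mechanisms described below. The data here is random stand-in data, and the five-level scale is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Each row stands in for a feature vector extracted from a user's photo
# and/or sound; each label is the emotion level tagged for that sample.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 32))   # 200 samples, 32 features
labels = rng.integers(0, 5, size=200)   # emotion levels 0..4 (assumed scale)

x_train, x_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)

# Train the model on the labelled sets of data.
model = LogisticRegression(max_iter=1000).fit(x_train, y_train)
print("held-out accuracy:", model.score(x_test, y_test))

# At run time, the same feature extraction is applied to a new photo or
# utterance, and the trained model evaluates the user's emotion level.
print("predicted level:", model.predict(features[:1])[0])
```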
  • Step S106: Send a control command according to the emotion level, where the control command is used to instruct the device to perform a predetermined operation.
  • When the smart home system is running, the model can be used to evaluate the user's information to obtain the user's emotion level, and a control command can then be sent to the smart home system according to the evaluated emotion level.
  • The device control processing method provided by the embodiments of the present application thus achieves the purpose of controlling the smart home system according to the user's acquired emotion, and achieves the technical effect of letting the user experience the enjoyment that modern technology brings to quality of life, thereby solving the technical problem that home systems in the related art cannot satisfy the user's demand for the degree of intelligence of the home system, and improving the user experience.
  • In an optional implementation, the camera captures the user's current facial expressions and body movements and, combined with image recognition technology, records them, and the neural network algorithm compares, judges, gives feedback, and learns; at the same time, a sound sensor receives the user's voice and records changes in the user's voice, and the neural network algorithm likewise compares, judges, gives feedback, and learns.
  • The relationship can be expressed as: emotion level = f(facial expression, limb movement, sound, ..., external stimuli). The independent variables are the facial expressions, limb movements, sounds, and other external stimuli; the dependent variable is the current emotion level of the user. Different degrees of facial expression, body movement, sound, and other external stimuli correspond to different current emotion levels of the user.
  • The label used to identify the emotion level represented by the photo and/or sound can be obtained in several ways.
  • In an optional embodiment, the device control processing method may further include: sending the user's photo and/or voice to other users, and acquiring the tags added by those other users to the user's photos and/or sounds.
  • Acquiring a tag added by other users for the user's photo and/or sound may include at least one of the following: transmitting the user's photo and/or sound, together with a plurality of selectable emotion levels, to other users, and receiving, as the label, the emotion level the other users select from the plurality of emotion levels; or obtaining other users' evaluations of the user's photos and/or sounds and extracting the emotion level from the evaluation as the label, where the evaluation includes at least one of: a natural-language evaluation and a voice evaluation.
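  • A minimal sketch of the two labelling routes just described; the level names are illustrative assumptions, since the application does not enumerate the selectable emotion levels.

```python
# Selectable emotion levels sent along with the user's photo/sound
# (the names are hypothetical).
EMOTION_LEVELS = ["happy", "calm", "anxious", "irritable", "depressed"]

def label_from_selection(choice_index: int) -> str:
    """Route 1: the other user picks one of the offered levels."""
    return EMOTION_LEVELS[choice_index]

def label_from_evaluation(text: str) -> str | None:
    """Route 2: scan a free-text (or transcribed voice) evaluation for a
    known level name and use it as the label."""
    lowered = text.lower()
    for level in EMOTION_LEVELS:
        if level in lowered:
            return level
    return None  # no recognizable emotion level in the evaluation

print(label_from_selection(3))                         # -> "irritable"
print(label_from_evaluation("She sounds depressed."))  # -> "depressed"
```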
  • In practice, the smart home system sends the user's photo or voice to the user's relatives and friends (that is, the "other users" above). On receiving the user's photo or sound, the relatives and friends compare it with the user's emotions over a historical period and then tag the user's photos and/or sounds, noting, for example, that the user is depressed because of work pressure or is troubled by something unpleasant that happened outside.
  • The user's friends and relatives can also directly evaluate the received photo and/or voice of the user, and the smart home system extracts the emotion level from the evaluations of the user's relatives and friends as the label.
  • The evaluation may include, but is not limited to: evaluative text sent by the user's friends and relatives (i.e., the natural-language evaluation above) and evaluative speech (i.e., the voice evaluation above).
  • In an optional embodiment, the device control processing method may further include: after the user's photo and/or sound is obtained, asking the user a question according to the photo and/or sound.
  • The emotion level corresponding to the user's photo and/or voice is then extracted according to the user's answer to the question asked.
  • In other words, the smart home system can obtain the user's emotion level in a conversational manner.
  • For example, after receiving the user's photo and/or voice, the smart home system asks the user a question such as "How is your mood today?"; the user replies, "Work pressure is too high, and I'm rather irritable"; the smart home system then extracts the user's corresponding emotion level from the user's answer.
  • In addition, to better understand and serve the user, the smart home system can also perform self-correction. For example, if the emotion level obtained from the user's friends and relatives differs from the emotion level of "happy" obtained in the conversational manner, the smart home system combines the emotion level obtained from other people with the emotion level obtained from the user through dialogue, judges the deviation, corrects it, and continuously learns and improves.
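  • A sketch of this self-correction step under stated assumptions: the reconciliation policy below (trust the user's own answer and log the disagreement for later retraining) is illustrative, since the application does not define how deviations are judged and corrected.

```python
def reconcile(level_from_others: str, level_from_dialogue: str,
              deviations: list[tuple[str, str]]) -> str:
    """Combine the level tagged by relatives/friends with the level
    extracted from dialogue, recording any deviation for learning."""
    if level_from_others == level_from_dialogue:
        return level_from_dialogue
    deviations.append((level_from_others, level_from_dialogue))
    # Assumed policy: prefer the user's own answer, but keep the
    # disagreement so the model can be corrected over time.
    return level_from_dialogue

log: list[tuple[str, str]] = []
print(reconcile("depressed", "happy", log))  # -> "happy"
print(log)                                   # [('depressed', 'happy')]
```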
  • In an optional embodiment, sending the control command according to the emotion level may include: sending the control command when the emotion level matches a predetermined level, where the control command is used to control the device to perform at least one of the following operations: playing music corresponding to the emotion level, and playing a video corresponding to the emotion level.
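  • A minimal sketch of this dispatch step; the mapping from emotion levels to media and the command format are assumptions, since the application does not specify the device protocol.

```python
# Predetermined levels and the operation each one triggers
# (level names and media files are hypothetical).
PLAYLIST = {
    "irritable": ("music", "calm_piano.mp3"),
    "depressed": ("video", "comedy_clip.mp4"),
}

def send_control_command(emotion_level: str) -> None:
    """Send a control command only when the evaluated emotion level
    matches a predetermined level; otherwise nothing is triggered."""
    if emotion_level in PLAYLIST:
        kind, media = PLAYLIST[emotion_level]
        # Stand-in for the real device command channel.
        print(f"COMMAND -> play {kind}: {media}")

send_control_command("irritable")  # COMMAND -> play music: calm_piano.mp3
send_control_command("happy")      # no matching level; no command sent
```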
  • In an optional implementation, the user turns on the smart home system, which observes and records the user's daily life, communicates with the user, and compares, judges, gives feedback on, and learns the user's emotions; the modes of communication may include, but are not limited to: voice dialogue with the system (such as inner monologues and transcripts), facial expressions, and body movements.
  • The smart home system then adjusts the user's emotions according to its own decisions, through external means (for example, playing music or videos).
  • The above smart home system can also find the cause of the user's negative emotion by asking questions, and can then take targeted measures to alleviate the user's emotion. For example, when a historical emotion similar to the user's current emotion is stored in the smart home system, the historical solution corresponding to that historical emotion can be looked up in the smart home system and then consulted, or used directly, to alleviate the user's emotion. When no historical emotion similar to the user's current emotion is found, the smart home system holds no reference solution for resolving the user's current emotion.
  • In that case, the smart home system can search the network for emotions whose similarity to the user's current emotion reaches a certain threshold, retrieve the solutions for those similar emotions from the network, and then take mitigation measures for the user's current emotion by referring to the solutions found online. FIG. 2 is a flowchart of an adjustment mechanism of a smart home system according to an embodiment of the present application. Specifically, as shown in FIG. 2, the facial expressions, body movements, sounds, and the like that the user may present can be stored in the smart home system during use or before the system leaves the factory.
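  • A sketch of the threshold-based similarity search, assuming emotions are represented as feature vectors and compared by cosine similarity; the catalogue entries and the 0.8 threshold are illustrative assumptions.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Emotion records retrieved from the network, each pairing a feature
# vector with a mitigation plan (values are illustrative).
CATALOG = [
    (np.array([0.9, 0.1, 0.0]), "play upbeat music"),
    (np.array([0.1, 0.8, 0.3]), "suggest a short walk"),
]

def find_similar_solution(current: np.ndarray, threshold: float = 0.8):
    """Return the plan of the most similar catalogued emotion, provided
    its similarity reaches the threshold."""
    score, plan = max(
        ((cosine(current, vec), plan) for vec, plan in CATALOG),
        key=lambda pair: pair[0])
    return plan if score >= threshold else None

print(find_similar_solution(np.array([0.85, 0.15, 0.05])))
# -> 'play upbeat music'
```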
  • During use, the above information may be recorded and then compared with the facial expressions, body movements, and sounds previously stored in the smart home system; through judgment and learning, the system arrives at a solution for easing the user's emotions, for example playing the corresponding music or video, thereby alleviating the user's emotions.
  • FIG. 3 is a schematic diagram of a device control apparatus according to an embodiment of the present application. As shown in FIG. 3, the device control apparatus includes: a first acquiring unit 31, an evaluation unit 33, and a first sending unit 35. The device control apparatus is described in detail below.
  • The first acquiring unit 31 is configured to acquire information of the user, where the information includes at least one of the following: a photo of the user captured by the imaging device, and a voice of the user received by the audio device.
  • The evaluation unit 33 is connected to the first acquiring unit 31 and is configured to use the model to evaluate the emotion level of the user corresponding to the information, where the model is trained on multiple sets of data, each of which includes: the user's photo and/or sound, and a label identifying the emotion level that the photo and/or sound represents.
  • The first sending unit 35 is connected to the evaluation unit 33 and is configured to send a control command according to the emotion level, where the control command is used to instruct the device to perform a predetermined operation.
  • The device control apparatus provided by the embodiments of the present application achieves the purpose of controlling the smart home system according to the user's acquired emotion, and achieves the technical effect of letting the user experience the enjoyment that modern technology brings to quality of life, thereby solving the technical problem that home systems in the related art cannot meet the user's demand for the degree of intelligence of the home system, and improving the user experience.
  • In an optional embodiment, the device control processing apparatus further includes: a second sending unit, configured to send the user's photo and/or sound to other users before the model is used to evaluate the emotion level of the user corresponding to the information; and a second obtaining unit, configured to acquire the tags added by the other users for the user's photos and/or sounds.
  • In an optional embodiment, the second obtaining unit includes at least one of the following: a first sending module, configured to send the user's photo and/or sound, together with a plurality of selectable emotion levels, to other users, and to receive, as the label, the emotion level the other users select from the plurality of emotion levels; and an extraction module, configured to obtain the other users' evaluations of the user's photo and/or sound and to extract the emotion level from the evaluation as the label, where the evaluation includes at least one of: a natural-language evaluation and a voice evaluation.
  • In an optional embodiment, the device control processing apparatus further includes: a questioning unit, configured to ask the user a question according to the user's photo and/or sound after the photo and/or sound is acquired and before the model is used to evaluate the emotion level of the user corresponding to the information; and an extracting unit, configured to extract the emotion level corresponding to the user's photo and/or sound according to the user's answer to the question asked.
  • In an optional embodiment, the first sending unit includes: a second sending module, configured to send the control command when the emotion level matches a predetermined level, where the control command is used to control the device to perform at least one of the following operations: playing music corresponding to the emotion level, and playing a video corresponding to the emotion level.
  • A storage medium is also provided, comprising a stored program, where the program, when run, executes the device control processing method of any one of the above.
  • A processor is also provided, configured to run a program, where the program, when executed, performs the device control processing method of any one of the above.
  • the disclosed technical contents may be implemented in other manners.
  • the device embodiments described above are only schematic.
  • For example, the division of the units may be a division by logical function, and in actual implementation there may be another manner of division; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, unit or module, and may be electrical or otherwise.
  • The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • The computer-readable storage medium includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • The foregoing storage medium includes: a USB flash drive (U disk), a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Child & Adolescent Psychology (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention relates to a device control processing method and apparatus. The device control method comprises the steps of: acquiring information of a user, the user information comprising at least one of: a photo of the user captured by an imaging device and a voice of the user received by an audio device; using a model to evaluate an emotion level of the user corresponding to the user information, the model being obtained by training on a plurality of sets of data, each of the plurality of sets of data comprising: a photo and/or a voice of the user and a label for identifying the emotion level represented by the photo and/or voice; and sending a control command according to the emotion level, the control command being used to instruct a device to perform a predetermined operation. The present invention addresses the technical problem that home systems in the related art cannot meet users' requirements for the degree of intelligence of home systems.
PCT/CN2018/100489 2017-10-31 2018-08-14 Device control processing method and apparatus WO2019085585A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711062745.0A CN108039988B (zh) 2017-10-31 2017-10-31 Device control processing method and apparatus
CN201711062745.0 2017-10-31

Publications (1)

Publication Number Publication Date
WO2019085585A1 (fr)

Family

ID=62093587

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/100489 WO2019085585A1 (fr) 2017-10-31 2018-08-14 Device control processing method and apparatus

Country Status (2)

Country Link
CN (1) CN108039988B (fr)
WO (1) WO2019085585A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114446325A (zh) * 2022-03-11 2022-05-06 平安普惠企业管理有限公司 Emotion-recognition-based information push method and apparatus, computer device, and medium
CN115209048A (zh) * 2022-05-19 2022-10-18 广东逸动科技有限公司 Image data processing method and apparatus, electronic device, and storage medium

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108039988B (zh) * 2017-10-31 2021-04-30 珠海格力电器股份有限公司 Device control processing method and apparatus
CN109118626B (zh) * 2018-08-08 2022-09-13 腾讯科技(深圳)有限公司 Lock control method and apparatus, storage medium, and electronic apparatus
KR20200035887A (ko) * 2018-09-27 2020-04-06 삼성전자주식회사 Method and system for providing an interactive interface
CN109634129B (zh) * 2018-11-02 2022-07-01 深圳慧安康科技有限公司 Method, system, and apparatus for implementing proactive care
CN109766776A (zh) * 2018-12-18 2019-05-17 深圳壹账通智能科技有限公司 Operation execution method and apparatus, computer device, and storage medium
CN109948780A (zh) * 2019-03-14 2019-06-28 江苏集萃有机光电技术研究所有限公司 Artificial-intelligence-based decision assistance method, apparatus, and device
CN110096707B (zh) * 2019-04-29 2020-09-29 北京三快在线科技有限公司 Method, apparatus, and device for generating natural language, and readable storage medium
CN110197677A (zh) * 2019-05-16 2019-09-03 北京小米移动软件有限公司 Playback control method and apparatus, and playback device
CN110262413A (zh) * 2019-05-29 2019-09-20 深圳市轱辘汽车维修技术有限公司 Smart home control method, control apparatus, vehicle-mounted terminal, and readable storage medium
CN110491425A (zh) * 2019-07-29 2019-11-22 恒大智慧科技有限公司 Intelligent music playback apparatus
CN110412885A (zh) * 2019-08-30 2019-11-05 北京青岳科技有限公司 Computer-vision-based smart home control system
JP7248615B2 (ja) * 2020-03-19 2023-03-29 ヤフー株式会社 Output device, output method, and output program
CN112631137A (zh) * 2020-04-02 2021-04-09 张瑞华 Smart home control method and intelligent control device applying biometric recognition
CN113589697A (zh) * 2020-04-30 2021-11-02 青岛海尔多媒体有限公司 Control method and apparatus for household appliances, and smart household appliance
CN112180747A (zh) * 2020-09-28 2021-01-05 上海连尚网络科技有限公司 Method and device for adjusting smart home devices
CN112464018A (zh) * 2020-12-10 2021-03-09 山西慧虎健康科技有限公司 Intelligent emotion recognition and adjustment method and system
CN115047824A (zh) * 2022-05-30 2022-09-13 青岛海尔科技有限公司 Digital twin multimodal device control method, storage medium, and electronic apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103024521A (zh) * 2012-12-27 2013-04-03 深圳Tcl新技术有限公司 Program screening method and system, and television having the system
US20140192229A1 (en) * 2013-01-04 2014-07-10 Samsung Electronics Co., Ltd. Apparatus and method for providing user's emotional information in electronic device
CN106919821A (zh) * 2015-12-25 2017-07-04 阿里巴巴集团控股有限公司 User verification method and apparatus
CN107272607A (zh) * 2017-05-11 2017-10-20 上海斐讯数据通信技术有限公司 Smart home control system and method
CN108039988A (zh) * 2017-10-31 2018-05-15 珠海格力电器股份有限公司 Device control processing method and apparatus


Also Published As

Publication number Publication date
CN108039988B (zh) 2021-04-30
CN108039988A (zh) 2018-05-15

Similar Documents

Publication Publication Date Title
WO2019085585A1 (fr) Device control processing method and apparatus
KR101803081B1 (ko) Store management robot
Chen et al. Hierarchical cross-modal talking face generation with dynamic pixel-wise loss
CN110291489B (zh) Computationally efficient human-identifying intelligent assistant computer
CN106295313B (zh) Object identity management method and apparatus, and electronic device
TWI661363B (zh) Intelligent robot and human-computer interaction method
WO2021077382A1 (fr) Method and apparatus for determining a learning state, and intelligent robot
KR20100001928A (ko) Service apparatus and method based on emotion recognition
TW201220216A (en) System and method for detecting human emotion and appeasing human emotion
US9661208B1 (en) Enhancing video conferences
CN109986553B (zh) Proactively interactive robot, system, method, and storage device
US11852357B2 (en) Method for controlling air conditioner, air conditioner
CN109241336A (zh) Music recommendation method and apparatus
WO2021200503A1 (fr) Learning system and data collection device
KR20200012355A (ko) Online lecture monitoring method with face authentication using CLM and Gabor wavelets
JP2010224715A (ja) Image display system, digital photo frame, information processing system, program, and information storage medium
Błażek et al. An unorthodox view on the problem of tracking facial expressions
CN115867948A (zh) Method for identifying the hygiene condition of an object, and related electronic device
TW202303444A (zh) Image-based emotion recognition system and method
CN115988164A (zh) Conference room multimedia control method, system, and computer device
CN113591550B (zh) Method, apparatus, device, and medium for constructing an automatic personal preference detection model
JP2005199373A (ja) Communication device and communication method
JP2021033359A (ja) Emotion estimation device, emotion estimation method, program, information presentation device, information presentation method, and emotion estimation system
Zhang et al. Quantification of advanced dementia patients’ engagement in therapeutic sessions: An automatic video based approach using computer vision and machine learning
Miao et al. Study of detecting behavioral signatures within DeepFake videos

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18872426

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18872426

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02.10.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18872426

Country of ref document: EP

Kind code of ref document: A1