CN111314187A - Storage medium, smart home device and awakening method thereof - Google Patents
- Publication number
- CN111314187A (application CN202010071543.8A)
- Authority
- CN
- China
- Prior art keywords
- voice
- voice segment
- segment
- opening degree
- splicing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L12/2816—Controlling appliance services of a home automation network by calling their functionalities
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Abstract
The application discloses a storage medium, a smart home device, and a wake-up method for the device. The wake-up method comprises the following steps: acquiring an image of the environment around the smart home device; judging whether a human face is present in the image; when a face is present, recognizing the opening degree of the mouth in the face, turning on a sound pickup to collect voice around the device, and storing a first voice segment containing the most recently collected preset duration of audio; judging whether the mouth opening degree is greater than a preset opening-degree threshold; when it is, keeping the sound pickup on, continuously collecting and storing a second voice segment around the device, and splicing the first voice segment and the second voice segment to form a third voice segment; analyzing the semantic content of the third voice segment and responding to it according to that content. The method and device make voice interaction between the user and the smart home device more convenient.
Description
Technical Field
The invention relates to the technical field of smart homes, and in particular to a storage medium, a smart home device, and a wake-up method for the smart home device.
Background
The smart home is an embodiment of the Internet of Things under the influence of the Internet. A smart home connects the various devices in a home (such as audio and video equipment, lighting systems, curtain control, air-conditioning control, security systems, digital cinema systems, audio and video servers, video cabinet systems, and networked home appliances) through Internet of Things technology, and provides functions and means such as home appliance control, lighting control, telephone remote control, indoor and outdoor remote control, anti-theft alarm, environment monitoring, heating and ventilation control, infrared forwarding, and programmable timing control. Compared with an ordinary home, a smart home not only retains the traditional living functions but also integrates building, network communication, information appliance, and equipment automation functions, provides all-around information interaction, and can even reduce spending on energy.
Voice interaction is a common function of smart home devices. Existing smart home devices usually require a specific wake-up word: the device is woken only after the wake-up word is spoken, and only then can the user carry out voice interaction with it, such as controlling home appliances. This means the wake-up word must be spoken before every voice interaction, which is inconvenient and inefficient.
Disclosure of Invention
The technical problem mainly solved by the application is to provide a storage medium, a smart home device, and a wake-up method for the device that eliminate the need for a wake-up word and make voice interaction between the user and the smart home device more convenient.
To solve the above technical problem, one technical solution adopted in the embodiments of the present application is a method for waking up a smart home device, comprising the following steps: acquiring an image of the environment around the smart home device; judging whether a human face is present in the image; when a face is present, recognizing the opening degree of the mouth in the face, turning on a sound pickup of the smart home device to collect voice around it, and storing a first voice segment containing the most recently collected preset duration of audio; judging whether the mouth opening degree is greater than a preset opening-degree threshold; when it is, keeping the sound pickup on, continuously collecting and storing a second voice segment around the device, and splicing the first voice segment and the second voice segment to form a third voice segment; analyzing the semantic content of the third voice segment and responding to it according to that content.
The wake-up method further comprises: when the mouth opening degree is judged to be less than or equal to the preset opening-degree threshold, deleting the first voice segment and returning to the step of acquiring the environment image around the smart home device.
The wake-up method further comprises: when the mouth opening degree is judged to be less than or equal to the preset opening-degree threshold, turning off the sound pickup.
The step of splicing the first voice segment and the second voice segment to form a third voice segment comprises: splicing the complete first voice segment with the second voice segment to form the third voice segment.
In another embodiment, the step of splicing the first voice segment and the second voice segment to form a third voice segment comprises: performing voice endpoint detection on the first voice segment to obtain the speech-interval time point closest to the voice ending time point of the first voice segment; generating a fourth voice segment from that speech-interval time point to the voice ending time point; and splicing the fourth voice segment with the second voice segment to form the third voice segment.
In another embodiment, the step of splicing the first voice segment and the second voice segment to form a third voice segment comprises: recognizing the first voice segment and judging whether a preset keyword is present in it; when a preset keyword is present, generating a fifth voice segment from the keyword to the voice ending time point of the first voice segment, and splicing the fifth voice segment with the second voice segment to form the third voice segment.
The step of splicing the first voice segment and the second voice segment to form a third voice segment further includes: when it is judged that no preset keyword is present in the first voice segment, splicing the complete first voice segment with the second voice segment to form the third voice segment.
The step of turning on the sound pickup of the smart home device to collect voice around it and storing a first voice segment of the most recently collected preset duration includes: counting back from the current time point, deleting the voice data stored earlier than the preset duration.
In order to solve the above technical problem, another technical solution adopted in the embodiment of the present application is: the intelligent household equipment comprises a processor and a memory electrically connected with the processor, wherein the memory is used for storing a computer program, and the processor is used for calling the computer program to execute the method.
In order to solve the above technical problem, another technical solution adopted in the embodiments of the present application is: a storage medium is provided which stores a computer program executable by a processor to implement the above-described method.
In the embodiments of the application, an image of the environment around the smart home device is acquired; whether a human face is present in the image is judged; when a face is present, the opening degree of the mouth in the face is recognized, a sound pickup of the smart home device is turned on to collect voice around it, and a first voice segment containing the most recently collected preset duration of audio is stored; whether the mouth opening degree is greater than a preset opening-degree threshold is judged; when it is, the sound pickup is kept on, a second voice segment around the device is continuously collected and stored, and the first and second voice segments are spliced to form a third voice segment; the semantic content of the third voice segment is analyzed and a response is made accordingly. In this way, the smart home device can be woken without a wake-up word, making voice interaction between the user and the device more convenient.
Drawings
Fig. 1 is a schematic flowchart of a wake-up method of smart home devices according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating a first implementation manner of splicing a first speech segment and a second speech segment to form a third speech segment according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a second implementation manner of splicing a first speech segment and a second speech segment to form a third speech segment according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating a third implementation manner of splicing a first speech segment and a second speech segment to form a third speech segment according to an embodiment of the present application;
fig. 5 is a schematic diagram of a hardware structure of the smart home device according to the embodiment of the application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a wake-up method of an intelligent home device according to an embodiment of the present application.
In this embodiment, the method for waking up the smart home device may include the following steps:
step S11: and acquiring an environment image around the intelligent household equipment.
The smart home devices can be smart panels, smart sound boxes and the like. The intelligent household equipment comprises a camera, and an environment image around the intelligent household equipment is obtained through the camera of the intelligent household equipment. Optionally, the intelligent panel may be an intelligent panel installed on an indoor wall, or a placement type intelligent panel placed on a desktop, or the like.
Step S12: and judging whether the face exists in the environment image.
The intelligent household equipment further comprises a processor, and the processor judges whether a human face exists in the environment image acquired by the camera.
In step S12, if yes, that is, if it is determined that a human face exists in the environment image, steps S13 to S14 are performed.
Step S13: the opening degree of the mouth in the face is recognized, a sound pick-up of the intelligent household equipment is started to collect voice around the intelligent household equipment, and the newly collected first voice section with the preset duration is stored.
Whether the mouth of the user is open is judged first; if it is, the opening degree of the mouth is further recognized. Meanwhile, the sound pickup of the smart home device is turned on to collect voice, and a first voice segment of a preset duration is stored in the form of a queue: as the first voice segment is continuously updated, voice data stored earlier than the preset duration is deleted, so that only the most recently stored data is kept.
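The queue-style storage described above can be sketched as a fixed-capacity rolling buffer. The frame length, preset duration, and class name below are illustrative assumptions, not values from the disclosure:

```python
from collections import deque

FRAME_MS = 20             # assumed duration of one audio frame
PRESET_MS = 2000          # assumed preset duration of the first voice segment
MAX_FRAMES = PRESET_MS // FRAME_MS

class RollingVoiceBuffer:
    """Keeps only the most recently collected PRESET_MS of audio.

    Appending beyond capacity silently drops the oldest frame, which
    mirrors deleting the voice data stored earlier than the preset
    duration, counted back from the current time point.
    """

    def __init__(self, max_frames=MAX_FRAMES):
        self._frames = deque(maxlen=max_frames)

    def push(self, frame):
        self._frames.append(frame)

    def snapshot(self):
        # The "first voice segment": whatever is currently buffered.
        return list(self._frames)

buf = RollingVoiceBuffer()
for i in range(150):       # 150 frames = 3000 ms, more than the capacity
    buf.push(i)
segment = buf.snapshot()   # holds only the latest 100 frames (2000 ms)
```

Switching to the "normal voice storage mode" of step S15 then amounts to appending subsequent frames to an unbounded list instead of this bounded queue.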
The step of turning on the sound pickup of the smart home device to collect voice around it and storing a first voice segment of the most recently collected preset duration includes: counting back from the current time point, deleting the voice data stored earlier than the preset duration.
Step S14: and judging whether the opening degree of the mouth is larger than a preset opening degree threshold value or not.
In step S14, if yes, that is, if it is determined that the mouth opening degree is greater than the preset opening degree threshold, steps S15-S17 are performed.
In step S14, if no, that is, if it is determined that the mouth opening degree is less than or equal to the preset opening-degree threshold, step S18 is executed, and after step S18 is completed, the process returns to step S11.
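One plausible way to compute the opening degree compared in step S14 is a mouth aspect ratio derived from facial landmarks: the vertical lip gap divided by the mouth width. The landmark coordinates and the 0.4 threshold below are hypothetical, not values from the disclosure:

```python
import math

def mouth_opening_degree(upper_lip, lower_lip, left_corner, right_corner):
    """Ratio of the vertical lip gap to the mouth width."""
    gap = math.dist(upper_lip, lower_lip)
    width = math.dist(left_corner, right_corner)
    return gap / width

OPEN_THRESHOLD = 0.4  # hypothetical preset opening-degree threshold

# Hypothetical landmark positions (x, y) in image pixels.
closed = mouth_opening_degree((50, 60), (50, 63), (35, 62), (65, 62))
opened = mouth_opening_degree((50, 55), (50, 72), (35, 63), (65, 63))
```

With these sample points, `closed` evaluates to 0.1 and `opened` to about 0.57, so only the second face would keep the pickup on and proceed to steps S15-S17.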
Step S15: the on state of the microphone is maintained and the continuous collection and storage of a second voice clip around the smart home device begins.
If the opening degree of the mouth is recognized to be larger than the preset opening degree threshold value, a normal voice storage mode is started, the second voice segment around the intelligent household equipment is continuously collected and stored, and the voice data stored before the preset time is not deleted.
Step S16: and splicing the first voice segment and the second voice segment to form a third voice segment.
To avoid missing the speech uttered before the mouth opening degree was recognized to exceed the preset threshold, the first voice segment and the second voice segment are spliced to form a third voice segment.
The step of splicing the first voice segment and the second voice segment to form a third voice segment comprises: splicing the complete first voice segment with the second voice segment to form the third voice segment.
Step S17: the semantic content of the third speech segment is analyzed and a response is made to the third speech segment based on the semantic content.
For example, if the semantic content of the third speech segment is "How is the weather today?", the smart home device may respond with "Sunny, 15-18 °C, humidity 65%".
Step S18: the first voice segment is deleted and the microphone is turned off.
In this embodiment, whether a human face exists in the acquired surrounding image is judged. When a face exists, the latest stretch of voice is stored in the form of a queue while the opening degree of the mouth in the face is evaluated; once the opening degree exceeds the threshold, the complete voice is stored continuously.
Referring to fig. 2, fig. 2 is a flowchart illustrating a first implementation manner of splicing a first speech segment and a second speech segment to form a third speech segment according to an embodiment of the present application.
In this embodiment, the step of splicing the first voice segment and the second voice segment to form the third voice segment may specifically include:
step S21: and performing voice endpoint detection on the first voice segment to obtain a voice interval time point closest to the voice ending time point of the first voice segment.
Not all of the content of the first voice segment is valid. Voice endpoint detection locates the speech-interval (pause) time points in the segment, so the redundant part before the last pause can be removed before splicing with the second voice segment.
Step S22: and generating a fourth voice segment from the voice interval time point to the voice ending time point.
Step S23: and splicing the fourth voice segment and the second voice segment to form a third voice segment.
Referring to fig. 3, fig. 3 is a flowchart illustrating a second implementation manner of splicing a first speech segment and a second speech segment to form a third speech segment according to an embodiment of the present application.
In this embodiment, the step of splicing the first voice segment and the second voice segment to form the third voice segment may specifically include:
step S31: and identifying the first voice segment and judging whether a preset keyword exists in the first voice segment.
Not all of the first voice segment is valid content. For the smart home device, only the content after a keyword occurs is valid. There may be multiple keywords, forming a keyword database, for example "I want", "I check", and "I want to know". The content after the keyword in the first voice segment is spliced with the second voice segment to obtain the complete valid voice segment.
If yes, that is, if it is determined that the first speech segment has the predetermined keyword in step S31, step S32 is performed.
In step S31, if no, that is, if it is determined that there is no preset keyword in the first speech segment, step S33 is executed.
Step S32: and generating a fifth voice segment from the keyword to the voice ending time point of the first voice segment, and splicing the fifth voice segment and the second voice segment to form a third voice segment.
Step S33: and splicing the complete first voice segment and the second voice segment to form a third voice segment.
Referring to fig. 4, fig. 4 is a flowchart illustrating a third implementation manner of splicing a first voice segment and a second voice segment to form a third voice segment according to the embodiment of the present application.
In this embodiment, the step of splicing the first voice segment and the second voice segment to form the third voice segment may specifically include:
step S41: and identifying the first voice segment and judging whether a preset keyword exists in the first voice segment.
If yes, that is, if it is determined that the first speech segment has the predetermined keyword in step S41, step S42 is performed.
In step S41, if no, i.e. when it is determined that there is no preset keyword in the first speech segment, steps S43-S45 are performed.
Step S42: and generating a fifth voice segment from the keyword to the voice ending time point of the first voice segment, and splicing the fifth voice segment and the second voice segment to form a third voice segment.
Step S43: and performing voice endpoint detection on the first voice segment to obtain a voice interval time point closest to the voice ending time point of the first voice segment.
Step S44: and generating a fourth voice segment from the voice interval time point to the voice ending time point.
Step S45: and splicing the fourth voice segment and the second voice segment to form a third voice segment.
Referring to fig. 5, fig. 5 is a schematic diagram of a hardware structure of an intelligent home device according to an embodiment of the present application.
In this embodiment, the smart home device 50 includes a processor 51 and a memory 52 electrically connected to the processor 51, where the memory 52 is used to store a computer program, and the processor 51 is used to call the computer program to execute the method described in any one of the above embodiments.
The embodiment of the present application also provides a storage medium, which stores a computer program, and the computer program can implement the method of any one of the above embodiments when executed by a processor.
The computer program may be stored in the storage medium in the form of a software product, and includes several instructions for causing a device or a processor to execute all or part of the steps of the method according to the embodiments of the present application.
A storage medium is a medium used in a computer to store data. The aforementioned storage medium may be a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules or units is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
In the embodiments of the application, an image of the environment around the smart home device is acquired; whether a human face is present in the image is judged; when a face is present, the opening degree of the mouth in the face is recognized, a sound pickup of the smart home device is turned on to collect voice around it, and a first voice segment containing the most recently collected preset duration of audio is stored; whether the mouth opening degree is greater than a preset opening-degree threshold is judged; when it is, the sound pickup is kept on, a second voice segment around the device is continuously collected and stored, and the first and second voice segments are spliced to form a third voice segment; the semantic content of the third voice segment is analyzed and a response is made accordingly. In this way, the smart home device can be woken without a wake-up word, making voice interaction between the user and the device more convenient.
The above embodiments are merely examples and are not intended to limit the scope of the present disclosure, and all modifications, equivalents, and flow charts using the contents of the specification and drawings of the present disclosure or those directly or indirectly applied to other related technical fields are intended to be included in the scope of the present disclosure.
Claims (10)
1. A method for waking up smart home equipment is characterized by comprising the following steps:
acquiring an environment image around the intelligent household equipment;
judging whether a human face exists in the environment image or not;
when the face exists in the environment image, recognizing the opening degree of the mouth in the face, starting a sound pickup of the intelligent household equipment to collect voice around the intelligent household equipment, and storing a first voice section of a newly collected preset time length;
judging whether the opening degree of the mouth is larger than a preset opening degree threshold value or not;
when the opening degree of the mouth is judged to be larger than the preset opening degree threshold value, keeping the starting state of the sound pick-up, starting to continuously collect and store a second voice segment around the intelligent household equipment, and splicing the first voice segment and the second voice segment to form a third voice segment;
and analyzing the semantic content of the third voice fragment and responding to the third voice fragment according to the semantic content.
2. Wake-up method according to claim 1, characterized in that it further comprises:
and deleting the first voice segment when the opening degree of the mouth is judged to be smaller than or equal to the preset opening degree threshold value, and returning to the step of acquiring the environment image around the intelligent household equipment after deleting the first voice segment.
3. Wake-up method according to claim 1, characterized in that it further comprises:
and when the opening degree of the mouth part is judged to be less than or equal to the preset opening degree threshold value, closing the sound pick-up.
4. The wake-up method according to claim 1, wherein the step of splicing the first voice segment and the second voice segment to form a third voice segment comprises:
and splicing the complete first voice segment and the second voice segment to form the third voice segment.
5. The wake-up method according to claim 1, wherein the step of splicing the first voice segment and the second voice segment to form a third voice segment comprises:
performing voice endpoint detection on the first voice segment to obtain a voice interval time point closest to a voice ending time point of the first voice segment;
generating a fourth speech segment from the speech interval time point to the speech end time point;
and splicing the fourth voice segment and the second voice segment to form the third voice segment.
6. The wake-up method according to claim 1, wherein the step of splicing the first voice segment and the second voice segment to form a third voice segment comprises:
identifying the first voice segment and judging whether a preset keyword exists in the first voice segment;
and when judging that a preset keyword exists in the first voice segment, generating a fifth voice segment from the keyword to the voice ending time point of the first voice segment, and splicing the fifth voice segment and the second voice segment to form the third voice segment.
7. The wake-up method according to claim 6, wherein the step of splicing the first voice segment and the second voice segment to form a third voice segment further comprises:
and when judging that the first voice segment does not have the preset keyword, splicing the complete first voice segment and the second voice segment to form the third voice segment.
8. The wake-up method according to claim 1, wherein the step of turning on a sound pickup of the smart home device to collect voices around the smart home device and storing a most recently collected first voice segment of a predetermined time length comprises:
starting the calculation from the current time point, deleting voice data stored earlier than the predetermined time length before it.
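The rolling-buffer behaviour of claim 8 (keep only the most recent predetermined time length of audio, discard everything older) can be sketched with a bounded deque; `RollingVoiceBuffer` and the frame-count sizing are illustrative assumptions:

```python
from collections import deque

class RollingVoiceBuffer:
    """Keep only the most recent `max_frames` of captured audio;
    anything stored earlier than the predetermined time length
    (counted back from the current moment) is discarded."""

    def __init__(self, max_frames):
        # deque(maxlen=...) silently drops the oldest frame on overflow,
        # which matches the deletion step in claim 8.
        self.frames = deque(maxlen=max_frames)

    def push(self, frame):
        self.frames.append(frame)

    def snapshot(self):
        """The stored 'first voice segment', at most max_frames long."""
        return list(self.frames)
```

In practice `max_frames` would be derived from the predetermined time length and the frame rate (e.g. 2 s at 100 frames/s gives 200).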
9. A smart home device, comprising a processor and a memory electrically connected to the processor, wherein the memory is configured to store a computer program, and the processor is configured to call the computer program to perform the method according to any one of claims 1 to 8.
10. A storage medium, characterized in that the storage medium stores a computer program executable by a processor to implement the method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010071543.8A CN111314187A (en) | 2020-01-21 | 2020-01-21 | Storage medium, smart home device and awakening method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111314187A true CN111314187A (en) | 2020-06-19 |
Family
ID=71161551
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010071543.8A Pending CN111314187A (en) | 2020-01-21 | 2020-01-21 | Storage medium, smart home device and awakening method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111314187A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113450795A (en) * | 2021-06-28 | 2021-09-28 | 深圳七号家园信息技术有限公司 | Image recognition method and system with voice awakening function |
CN115588435A (en) * | 2022-11-08 | 2023-01-10 | 荣耀终端有限公司 | Voice wake-up method and electronic equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105161100A (en) * | 2015-08-24 | 2015-12-16 | 联想(北京)有限公司 | Control method and electronic device |
CN108733420A (en) * | 2018-03-21 | 2018-11-02 | 北京猎户星空科技有限公司 | Awakening method, device, smart machine and the storage medium of smart machine |
CN109767774A (en) * | 2017-11-08 | 2019-05-17 | 阿里巴巴集团控股有限公司 | A kind of exchange method and equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109410952A (en) | A kind of voice awakening method, apparatus and system | |
CN110853619A (en) | Man-machine interaction method, control device, controlled device and storage medium | |
CN111123723A (en) | Grouping interaction method, electronic device and storage medium | |
CN111314187A (en) | Storage medium, smart home device and awakening method thereof | |
US20200257254A1 (en) | Progressive profiling in an automation system | |
CN111862965A (en) | Awakening processing method and device, intelligent sound box and electronic equipment | |
CN111710339B (en) | Voice recognition interaction system and method based on data visual display technology | |
CN106251871A (en) | A kind of Voice command music this locality playing device | |
CN111061160A (en) | Storage medium, intelligent household control equipment and control method | |
CN111240221A (en) | Storage medium, intelligent panel and equipment control method based on intelligent panel | |
CN111147935A (en) | Control method of television, intelligent household control equipment and storage medium | |
CN111025930A (en) | Intelligent home control method, intelligent home control equipment and storage medium | |
CN212461143U (en) | Voice recognition interaction system based on data visualization display technology | |
US11521626B2 (en) | Device, system and method for identifying a scene based on an ordered sequence of sounds captured in an environment | |
CN109658924B (en) | Session message processing method and device and intelligent equipment | |
CN111142996B (en) | Page display method, page display system, mobile terminal and storage medium | |
CN110647050B (en) | Storage medium, intelligent panel and multi-level interaction method thereof | |
CN109254820B (en) | Window closing method, device, terminal and computer readable storage medium | |
CN110941198A (en) | Storage medium, smart panel and power-saving booting method thereof | |
CN112815610A (en) | Control method and device for ion generator in household appliance and household appliance | |
CN113470642A (en) | Method and system for realizing voice control scene based on intelligent household APP | |
CN111182349A (en) | Storage medium, interactive device and video playing method thereof | |
CN111093112A (en) | Storage medium, interaction device and video shadow rendering and playing method thereof | |
CN110806700A (en) | Household equipment control method based on intelligent umbrella, intelligent umbrella and storage medium | |
CN111600935A (en) | Storage medium, interactive device and reminding method based on interactive device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20200619 |