CN105529025B - Voice operation input method and electronic equipment - Google Patents


Info

Publication number: CN105529025B
Application number: CN201410509616.1A
Authority: CN (China)
Prior art keywords: processing unit, information, syllable, command, template
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN105529025A (en)
Inventors: 章丹峰, 靳玉茹, 钟荣标
Current assignee: Lenovo Beijing Ltd
Original assignee: Lenovo Beijing Ltd
Application filed by Lenovo Beijing Ltd; publication of application CN105529025A; grant published as CN105529025B

Landscapes

  • Telephone Function (AREA)

Abstract

The invention discloses a voice operation input method and an electronic device. The method is applied to an electronic device having a sound acquisition unit, a first processing unit and a second processing unit, and comprises the following steps: acquiring sound information through the sound acquisition unit; recognizing the sound information with the first processing unit; extracting features of the sound information when the sound information meets a preset condition; generating an information set from the features of the sound information; sending the information set to the second processing unit; and executing, by the second processing unit, a command corresponding to the information set. With this method and device, a voice operation can be input directly, without first triggering an application that supports voice operation, and can be input even when the electronic device is in a standby state.

Description

Voice operation input method and electronic equipment
Technical Field
The present invention relates to the field of control, and in particular, to a voice operation input method and an electronic device.
Background
Currently, electronic devices offer increasingly rich functionality, and more and more of them support voice operations.
For example, an application such as a voice assistant on a smartphone allows a user to operate the phone by voice.
In the prior art, a voice operation is input roughly as follows: an application supporting voice operation is first triggered so that it enters an enabled state; the application then receives the voice input by the user, recognizes it, and finally converts it into a certain operation on the electronic device.
As can be seen from the above, the prior-art method must first trigger the application supporting voice operation, and only after the application enters the enabled state can it receive the user's voice. This makes the operation process cumbersome; moreover, when the electronic device is in a standby state, no voice operation can be input at all.
Disclosure of Invention
The invention aims to provide a voice operation input method and an electronic device with which a voice operation can be input directly, without triggering an application supporting voice operation, even when the electronic device is in a standby state.
In order to achieve the purpose, the invention provides the following scheme:
a voice operation input method is applied to an electronic device with a sound acquisition unit, a first processing unit and a second processing unit, and comprises the following steps:
acquiring sound information through the sound acquisition unit;
the first processing unit identifies the sound information;
when the sound information meets a preset condition, extracting the characteristics of the sound information;
generating an information set according to the characteristics of the sound information;
sending the set of information to the second processing unit;
the second processing unit executes a command corresponding to the set of information.
Optionally, the power consumption of the first processing unit is lower than the power consumption of the second processing unit.
Optionally, the executing, by the second processing unit, a command corresponding to the information set specifically includes:
the second processing unit is switched from a first state to a second state; wherein the power consumption of the second processing unit in the first state is lower than the power consumption of the second processing unit in the second state;
the second processing unit in the second state searches for a command corresponding to the information set;
the command is executed.
Optionally, before the sound information is acquired by the sound acquisition unit, the method further includes:
detecting syllable template input operation of a user;
acquiring syllable template information input by a user after the syllable template input operation;
and saving the syllable template represented by the syllable template information.
Optionally, after saving the syllable template represented by the syllable template information, the method further includes:
distributing corresponding marks for the saved syllable templates;
displaying the corresponding relation between the syllable template and the identification;
acquiring the arrangement sequence of the identifiers input by the user;
acquiring an operation command option selected by a user; the operation command option is used for representing a command needing to be executed;
and establishing a corresponding relation between the arrangement sequence and the operation command options.
An electronic device, the electronic device comprising:
the sound acquisition unit is used for acquiring sound information;
a first processing unit for recognizing the sound information;
when the sound information meets a preset condition, extracting the characteristics of the sound information; generating an information set according to the characteristics of the sound information and then sending the information set to a second processing unit;
and the second processing unit is used for executing a command corresponding to the information set after receiving the information set.
Optionally, the power consumption of the first processing unit is lower than the power consumption of the second processing unit.
Optionally, the second processing unit specifically includes:
the state switching subunit is used for controlling the second processing unit to be switched from a first state to a second state; wherein the power consumption of the second processing unit in the first state is lower than the power consumption of the second processing unit in the second state;
the command searching subunit is used for controlling the second processing unit in the second state to search for the command corresponding to the information set;
and the command execution subunit is used for executing the command.
Optionally, the first processing unit specifically includes:
a syllable identifying subunit configured to identify a plurality of syllables included in the speech information;
the matching subunit is used for matching the syllables with a plurality of preset syllable templates respectively;
the arrangement order determining subunit is configured to determine an arrangement order of the identifiers of the syllable templates corresponding to the plurality of syllables when the plurality of syllables are successfully matched with one of a plurality of preset syllable templates respectively;
an information set generating subunit, configured to generate an information set including the arrangement order;
the information sending subunit is used for sending the information set containing the arrangement sequence to the second processing unit;
the second processing unit specifically includes:
the command determining subunit is used for determining the commands corresponding to the arrangement sequence according to the set mapping relationship between the arrangement sequence and the commands;
and the command execution subunit is used for executing the command.
Optionally, the second processing unit further includes:
an entry operation acquisition unit for detecting a syllable template entry operation of a user before acquiring sound information;
a syllable template information acquisition unit for acquiring syllable template information input by a user after the syllable template input operation;
and the syllable template storage unit is used for storing the syllable template represented by the syllable template information.
Optionally, the second processing unit further includes:
the identification distribution unit is used for distributing corresponding identifications to the stored syllable templates after the syllable templates represented by the syllable template information are stored;
the corresponding relation display unit is used for displaying the corresponding relation between the syllable template and the identification;
an arrangement order acquisition unit for acquiring an arrangement order of the identifiers input by the user;
the operation command option acquisition unit is used for acquiring operation command options selected by a user; the operation command option is used for representing a command needing to be executed;
and the corresponding relation establishing unit is used for establishing the corresponding relation between the arrangement sequence and the operation command options.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
according to the voice operation input method and the electronic equipment, the voice information is identified by adopting the first processing unit; when the sound information meets a preset condition, extracting the characteristics of the sound information; generating an information set according to the characteristics of the sound information; sending the set of information to the second processing unit; executing the command corresponding to the information set by the second processing unit; the voice operation can be directly input without triggering the application program supporting the voice operation, and the voice operation can be input even if the electronic equipment is in a standby state.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a voice operation input method embodiment 1 of the present invention;
FIG. 2 is a flowchart of a voice operation input method embodiment 2 of the present invention;
FIG. 3 is a flowchart of the voice operation input method embodiment 3 of the present invention;
FIG. 4 is a flowchart illustrating syllable template setup in an embodiment of the voice operation input method of the present invention;
fig. 5 is a block diagram of an embodiment of an electronic device of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The voice operation input method is applied to electronic equipment with a sound acquisition unit, a first processing unit and a second processing unit.
The electronic device can be a mobile phone, a tablet computer and the like. The sound collection unit may be a microphone. The first processing unit may be an Application Specific Integrated Circuit (ASIC), and the second processing unit may be an Application Processor (AP).
Fig. 1 is a flowchart of embodiment 1 of the voice operation input method of the present invention. As shown in fig. 1, the method may include:
step 101: acquiring sound information through the sound acquisition unit;
the sound collection unit can acquire external sound information in real time. The first processing unit may be arranged to respond only to certain specific speech uttered by the user.
Step 102: the first processing unit identifies the sound information;
the first processing unit may pre-store some speech information as syllable templates. After the sound acquisition unit acquires the sound information, the sound acquisition unit can be matched with the syllable template. The sound information may include a plurality of voices matched to the syllable template. The first processing unit may match each syllable unit in the sound information with each syllable template, and may determine that the syllable unit is a syllable unit matched with a syllable template if the syllable unit is successfully matched with any one syllable template.
Step 103: when the sound information meets a preset condition, extracting the characteristics of the sound information;
if each syllable unit in the sound information can identify the matched syllable template, the sound information can be judged to be in accordance with the preset condition.
The feature of the sound information may be the arrangement order of the syllable templates corresponding to the syllable units contained in the sound information.
Step 104: generating an information set according to the characteristics of the sound information;
different sets of information may be generated based on different characteristics of the sound information. The information set includes characteristics of the sound information.
Step 105: sending the set of information to the second processing unit;
the second processing unit may have different operating states, for example, the second processing unit may have a sleep state and a wake state. When the second processing unit receives the set of information, a state change may occur, e.g., a switch may be made from a sleep state to an awake state.
Step 106: the second processing unit executes a command corresponding to the set of information.
After receiving the information set, the second processing unit may parse the information contained in it, find the corresponding command, and execute that command.
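Steps 101 to 106 can be sketched end to end as follows. The patent does not prescribe an implementation; the template identifiers, the command table and all names below are illustrative assumptions, and "executing" a command is reduced to returning its name.

```python
# Illustrative sketch of steps 101-106; the template identifiers and the
# command table are assumed examples, not taken from the patent.
SYLLABLE_TEMPLATES = {"A": "1", "B": "2", "C": "3"}  # template -> identifier
COMMANDS = {"123": "call user A", "321": "start navigation"}  # assumed mapping

def first_processor(syllable_units):
    """Steps 102-105: recognize the sound, check the preset condition,
    extract the feature (identifier arrangement order), build the info set."""
    if not all(u in SYLLABLE_TEMPLATES for u in syllable_units):
        return None  # preset condition not met: a unit matches no template
    order = "".join(SYLLABLE_TEMPLATES[u] for u in syllable_units)
    return {"order": order, "interrupt": "wake"}  # the information set

def second_processor(info_set):
    """Step 106: execute the command corresponding to the information set."""
    if info_set is None:
        return None
    return COMMANDS.get(info_set["order"])  # stub: a device would dispatch it

assert second_processor(first_processor(list("ABC"))) == "call user A"
```

On a real device the second processing unit would carry out the operation itself (dial the call, launch the application); the stub only resolves which command the information set selects.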
In summary, in this embodiment, the first processing unit is adopted to identify the sound information; when the sound information meets a preset condition, extracting the characteristics of the sound information; generating an information set according to the characteristics of the sound information; sending the set of information to the second processing unit; executing the command corresponding to the information set by the second processing unit; the voice operation can be directly input without triggering the application program supporting the voice operation, and the voice operation can be input even if the electronic equipment is in a standby state.
It should be noted that, in the embodiment of the present invention, the power consumption of the first processing unit may be lower than that of the second processing unit. The first processing unit may be an ASIC chip, and may remain in the working state from the moment the electronic device is powered on. The second processing unit may stay in a sleep state whenever it is not required to work in the wake state. Because the first processing unit consumes less power, keeping it always in the working state still consumes less power than keeping the second processing unit continuously in the working state.
Fig. 2 is a flowchart of a voice operation input method embodiment 2 of the present invention. As shown in fig. 2, the method may include:
step 201: acquiring sound information through a microphone;
step 202: recognizing the sound information by adopting an ASIC chip;
step 203: when the sound information meets a preset condition, extracting the characteristics of the sound information;
step 204: generating an information set according to the characteristics of the sound information;
step 205: sending the information set to an application processor;
step 206: the application processor is switched from a first state to a second state; wherein the power consumption of the application processor in the first state is lower than the power consumption of the application processor in the second state;
for example, the first state may be a sleep state and the second state may be an awake state.
Step 207: the application processor in the second state searches for a command corresponding to the information set;
step 208: the command is executed.
For example, when the operation corresponding to the command is to make a call to the user a, the electronic device may automatically perform the operation of making a call to the user a. When the operation corresponding to the command is to start the navigation application program, the electronic device can automatically execute the operation of starting the navigation application program.
In this embodiment, using an ASIC chip as the first processing unit reduces the power consumption of the unit that must recognize sound information in real time. Moreover, the second processing unit switches from the first state to the second state only when it receives the information set, and its power consumption in the first state is lower than in the second state; this further reduces the power consumption of the voice operation input method of the embodiment of the invention.
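Steps 206 to 208 amount to a small state machine on the application processor. A minimal sketch, assuming a hypothetical class and command table (the patent specifies neither):

```python
from enum import Enum

class State(Enum):
    FIRST = "sleep"   # lower-power state
    SECOND = "awake"  # higher-power state

class ApplicationProcessor:
    """Sketch of steps 206-208; the command table is an assumed example."""
    COMMANDS = {"123": "call user A", "321": "start navigation application"}

    def __init__(self):
        self.state = State.FIRST

    def on_information_set(self, info_set):
        self.state = State.SECOND                       # step 206: switch states
        command = self.COMMANDS.get(info_set["order"])  # step 207: look up
        return command                                  # step 208: execute (stub)

ap = ApplicationProcessor()
assert ap.state is State.FIRST
assert ap.on_information_set({"order": "321"}) == "start navigation application"
assert ap.state is State.SECOND
```

The processor is woken only by an arriving information set, which is what lets it idle in the low-power first state the rest of the time.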
Fig. 3 is a flowchart of a voice operation input method embodiment 3 of the present invention. As shown in fig. 3, the method may include:
step 301: acquiring sound information through the sound acquisition unit;
step 302: identifying a plurality of syllables contained in the speech information;
the syllable may refer to a pronunciation unit into which the voice information may be divided. Assuming that the voice information is "ABC", the syllables may be "a", "B", "C"; assuming that the voice message is "123", the syllable may be "1", "2", "3"; assuming that the voice message is "make a call," the syllable may be "make", "call", "phone".
Step 303: matching the syllables with a plurality of preset syllable templates respectively;
the syllable template may be preset. The user can input the self-uttered voice into the electronic equipment and store the voice as the syllable template in the electronic equipment.
For example, the user may speak the voice "A" and enter the voice "A" into the electronic device to be saved as a syllable template. On the basis, the user can also save the voices "B" and "C" as syllable templates.
When the stored syllable templates are "A", "B" and "C" respectively, the sound information acquired by the sound acquisition unit can be matched against "A", "B" and "C" in turn, to determine whether each syllable contained in the sound information matches a syllable template.
Step 304: when the plurality of syllables are successfully matched with one syllable template in a plurality of preset syllable templates respectively, determining the arrangement sequence of the identifiers of the syllable templates corresponding to the plurality of syllables;
as long as the syllables contained in the voice information can be successfully matched with the syllable template, the voice information can be judged to be in accordance with the preset condition.
In the above example, when the stored syllable templates are "A", "B" and "C" respectively, the voice information can be judged to meet the preset condition regardless of whether it is "ABC", "BCA", "CBA", "CAB", "ACB" or "BAC", since every syllable it contains can be successfully matched with a syllable template.
When the plurality of syllables are successfully matched with one syllable template in a plurality of preset syllable templates respectively, the arrangement sequence of the identifications of the syllable templates corresponding to the plurality of syllables can be determined.
When setting the syllable templates, a corresponding identifier may be set for each syllable template. For example, the flag 1 may be set for the syllable template "a", the flag 2 may be set for the syllable template "B", and the flag 3 may be set for the syllable template "C". In practical application, which kind of identifier is specifically set can be selected according to actual requirements.
In the above example, assuming the sound information is "ABC", the arrangement order of the identifiers of the corresponding syllable templates is "123"; assuming the sound information is "CBA", the arrangement order is "321".
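The derivation of the arrangement order in the example above can be sketched as follows. Exact string matching stands in for the acoustic template matching, which the patent leaves unspecified; the identifier assignments are those of the example.

```python
TEMPLATE_IDS = {"A": "1", "B": "2", "C": "3"}  # identifiers from the example

def arrangement_order(syllables):
    """Return the identifier arrangement order, or None when any syllable
    matches no stored template (the preset condition then fails)."""
    ids = [TEMPLATE_IDS.get(s) for s in syllables]
    return None if None in ids else "".join(ids)

assert arrangement_order(list("ABC")) == "123"
assert arrangement_order(list("CBA")) == "321"
assert arrangement_order(list("ABX")) is None  # unmatched syllable
```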
Step 305: generating an information set containing the arrangement sequence;
the information set may include the arrangement order and may further include an interrupt instruction. The interrupt instruction may be to switch the second processing unit from a sleep state to a wake state.
Step 306: sending the set of information to the second processing unit;
and after receiving the information set, the second processing unit can be switched from a dormant state to an awakening state.
Step 307: the second processing unit determines the commands corresponding to the arrangement sequence according to the set mapping relation between the arrangement sequence and the commands;
when the syllable template is set in advance, the correspondence between the order of arrangement of the identifiers of the syllable template and the command may be set. For example, the command corresponding to the arrangement sequence "123" may be a command for controlling the electronic device to dial a telephone call to the user a; the command corresponding to the arrangement sequence "321" may be a command for controlling the electronic device to start a navigation application.
Step 308: the command is executed.
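The lookup in step 307 is a direct mapping from arrangement orders to commands. A minimal sketch; the two entries mirror the examples in the text and are assumptions, not claimed mappings:

```python
# Assumed mapping; the two entries mirror the examples given in the text.
ORDER_TO_COMMAND = {
    "123": "dial a telephone call to user A",
    "321": "start the navigation application",
}

def command_for(order):
    return ORDER_TO_COMMAND.get(order)  # None when the order is unmapped

assert command_for("123") == "dial a telephone call to user A"
assert command_for("999") is None
```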
In summary, in the embodiment, a plurality of syllables included in the voice information are identified;
matching the syllables with a plurality of preset syllable templates respectively; when the plurality of syllables are successfully matched with one syllable template in a plurality of preset syllable templates respectively, determining the arrangement sequence of the identifiers of the syllable templates corresponding to the plurality of syllables; determining commands corresponding to the arrangement sequence according to a mapping relation between the set arrangement sequence and the commands; a limited number of syllable templates can be used to correspond to different commands through different arrangement sequences. Because the number of the used syllable templates is less, the complexity of the first processing unit in recognizing the voice information can be reduced, so that the first processing unit can adopt an ASIC chip with simpler structure, lower cost and lower power consumption, and the cost and the power consumption of the voice operation input method of the embodiment of the invention are reduced. In addition, because the arrangement sequence of the syllable templates with less quantity can be various, fewer syllable templates can be adopted to correspond to more commands, and the number of the commands which can be corresponding to the voice operation input method of the embodiment of the invention is enriched.
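The claim that few templates can cover many commands can be checked with a quick count: n templates arranged without repetition give P(n, k) distinct orders of length k, and allowing a syllable to repeat within a k-syllable utterance gives n^k orders.

```python
from math import perm

n = 3  # number of stored syllable templates, as in the "A", "B", "C" example
assert perm(n, n) == 6                                 # "ABC", "ACB", ..., "BAC"
assert sum(perm(n, k) for k in range(1, n + 1)) == 15  # all lengths, no repeats
assert n ** 4 == 81                                    # 4 syllables, repeats allowed
```

So three recorded templates already distinguish dozens of commands, which is the basis for the cost and power-consumption argument above.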
FIG. 4 is a flowchart illustrating the syllable template setup procedure in an embodiment of the voice operation input method of the present invention. As shown in fig. 4, the process may include:
step 401: detecting syllable template input operation of a user;
the syllable template entry operation indicates that the user is about to input a voice, and the voice is taken as a syllable template.
The syllable template entry operation can be realized in various ways. For example, a program for setting a syllable template may be opened, and a syllable template entry operation may be input by clicking a key corresponding to the syllable template entry operation in the program interface.
Step 402: acquiring syllable template information input by a user after the syllable template input operation;
after the syllable template entry operation is input, the sound collection unit of the electronic equipment can be in a working state. And the sound acquisition unit in the working state acquires the voice sent by the user in real time. The voice input by the user after the syllable template entry operation is referred to as syllable template information in the present flow.
Step 403: and saving the syllable template represented by the syllable template information.
Specifically, every time a user utters a voice, the electronic device may store the voice as a syllable template. In this way, the user can speak multiple voices in sequence, which the electronic device saves as syllable templates in sequence.
Step 404: distributing corresponding marks for the saved syllable templates;
after saving the syllable template, a corresponding identifier may be assigned to the syllable template. The specific identifier is not limited herein. As long as the corresponding identifications of different syllable templates are different. Assuming that there are four syllable templates, these four syllable templates can be identified individually as A, B, C, D, 1, 2, 3, 4, and of course, other forms of identification.
Step 405: displaying the corresponding relation between the syllable template and the identification;
after the identifier is allocated, the electronic device may further display a corresponding relationship between the syllable template and the identifier through a display unit. For example, a "1: a ", indicates that the first syllable template is identified as A.
Step 406: acquiring the arrangement sequence of the identifiers input by the user;
after the user knows the corresponding relationship between the syllable template and the identifier, the user can also use the identifier to represent the corresponding syllable template. For example, the user may input "ABC" indicating that the syllable templates are arranged in the order of: the first input syllable template is in the front, the second input syllable template is in the middle, and the third input syllable template is in the last.
Step 407: acquiring an operation command option selected by a user; the operation command option is used for representing a command needing to be executed;
the electronic device can display a plurality of operation command options through the display unit. For example, an operation command option for calling the user a, an option for starting a navigation application program, or other operation command options may be displayed. The user may select one of the plurality of operation command options as an operation command option corresponding to the identified arrangement order.
Step 408: and establishing a corresponding relation between the arrangement sequence and the operation command options.
After the corresponding relation is established, the user can trigger the electronic equipment to execute a corresponding command by sending out the voice which accords with the arrangement sequence in the subsequent process of using the electronic equipment.
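Steps 401 to 408 can be condensed into a small registry sketch. The class and method names are invented for illustration; the patent describes the flow, not a data structure.

```python
class TemplateRegistry:
    """Sketch of steps 401-408; class and method names are invented."""

    def __init__(self):
        self.templates = {}  # identifier -> recorded syllable template
        self.bindings = {}   # identifier arrangement order -> command option

    def enter_template(self, identifier, recorded_voice):
        # steps 401-405: detect the entry operation, record the voice,
        # save it as a template, and assign it an identifier
        self.templates[identifier] = recorded_voice

    def bind(self, order, command_option):
        # steps 406-408: take the identifier order and the command option
        # chosen by the user, and establish the correspondence
        if not all(ident in self.templates for ident in order):
            raise ValueError("order uses an unknown template identifier")
        self.bindings[order] = command_option

reg = TemplateRegistry()
for ident, voice in [("A", "voice 1"), ("B", "voice 2"), ("C", "voice 3")]:
    reg.enter_template(ident, voice)
reg.bind("ABC", "call user A")
reg.bind("CBA", "start navigation")
assert reg.bindings["ABC"] == "call user A"
```

Note that each template is entered once, after which any arrangement order of the existing identifiers can be bound to a command option, which is the efficiency gain summarized below.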
In summary, in this flow, a syllable template is saved, a corresponding identifier is assigned to it, the arrangement order of identifiers input by the user and the operation command option selected by the user are obtained, and a correspondence between the arrangement order and the operation command option is established. By entering the syllable templates once and binding different arrangement orders to different operation command options, the user can later trigger the electronic device to execute the corresponding command by uttering syllables in different arrangement orders, without having to enter syllables in every arrangement order into the electronic device as training voices. This improves the efficiency of setting up voices and their corresponding operation commands.
The invention also discloses an electronic device. The electronic device is provided with a sound collection unit, a first processing unit and a second processing unit. The electronic device can be a mobile phone, a tablet computer and the like. The sound collection unit may be a microphone. The first processing unit may be an Application Specific Integrated Circuit (ASIC), and the second processing unit may be an Application Processor (AP).
Fig. 5 is a block diagram of an embodiment of an electronic device of the present invention. As shown in fig. 5, the electronic device may include:
a sound information acquisition unit 501 for acquiring sound information;
the sound collection unit can acquire external sound information in real time. The first processing unit may be arranged to respond only to certain specific speech uttered by the user.
A first processing unit 502, configured to recognize the sound information; to extract features of the sound information when the sound information meets a preset condition; to generate an information set according to the features of the sound information; and to send the information set to the second processing unit 503.
The first processing unit may pre-store some speech information as syllable templates. After the sound acquisition unit acquires the sound information, the sound acquisition unit can be matched with the syllable template. The sound information may include a plurality of voices matched to the syllable template. The first processing unit may match each syllable unit in the sound information with each syllable template, and may determine that the syllable unit is a syllable unit matched with a syllable template if the syllable unit is successfully matched with any one syllable template.
If a matched syllable template can be identified for each syllable unit in the sound information, it can be determined that the sound information meets the predetermined condition.
A feature of the sound information may be the arrangement order of the syllable templates corresponding to the syllable units included in the sound information.
Different information sets may be generated based on different features of the sound information; the information set includes the features of the sound information.
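The matching and feature-extraction steps above can be sketched as follows. This is only an illustrative toy (a real first processing unit would compare acoustic features, not strings); `TEMPLATES`, `match_syllable`, and `extract_information_set` are hypothetical names:

```python
# Hypothetical sketch of the first processing unit's matching step.
# Template matching is reduced to string equality for illustration.

TEMPLATES = {1: "ni", 2: "hao", 3: "ma"}   # identifier -> syllable template

def match_syllable(syllable_unit):
    """Return the identifier of the matching template, or None."""
    for identifier, template in TEMPLATES.items():
        if syllable_unit == template:
            return identifier
    return None

def extract_information_set(sound_info):
    """If every syllable unit matches some template (the predetermined
    condition), return an information set containing the arrangement
    order of the matched identifiers; otherwise return None."""
    order = []
    for unit in sound_info:
        identifier = match_syllable(unit)
        if identifier is None:
            return None            # condition not met: ignore this sound
        order.append(identifier)
    return {"arrangement_order": tuple(order)}

info = extract_information_set(["hao", "ni"])
# info["arrangement_order"] is (2, 1) under these toy templates
```

The key point the passage makes is visible here: the extracted feature is only the *order* of matched template identifiers, so the same templates uttered in different orders yield different information sets.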
The second processing unit 503 may have different operating states; for example, it may have a sleep state and an awake state. When the second processing unit receives the information set, a state change may occur, for example, a switch from the sleep state to the awake state.
The second processing unit 503 is configured to execute a command corresponding to the information set after receiving the information set.
In summary, in this embodiment, the first processing unit identifies the sound information, extracts features of the sound information when the sound information meets a predetermined condition, generates an information set according to those features, and sends the information set to the second processing unit, which executes the command corresponding to the information set. Voice operations can thus be input directly, without first triggering an application program that supports voice operation, and even while the electronic device is in a standby state.
In practical applications, the power consumption of the first processing unit may be lower than that of the second processing unit.
In practical applications, the second processing unit 503 may specifically include:
the state switching subunit is used for controlling the second processing unit to be switched from a first state to a second state; wherein the power consumption of the second processing unit in the first state is lower than the power consumption of the second processing unit in the second state;
the command searching subunit is used for controlling the second processing unit in the second state to search for the command corresponding to the information set;
and the command execution subunit is used for executing the command.
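A minimal sketch of the three subunits above (state switching, command lookup, command execution), assuming a dictionary-based mapping and hypothetical names throughout, might look like this:

```python
# Hypothetical sketch of the second processing unit's behavior: switch
# from a low-power first state to a higher-power second state, search
# for the command corresponding to the received information set, and
# execute it.

COMMAND_MAP = {(1, 2): "unlock_screen", (2, 1): "open_camera"}

class SecondProcessingUnit:
    def __init__(self):
        self.state = "sleep"       # first state: lower power consumption

    def receive(self, information_set):
        self.state = "awake"       # second state: higher power consumption
        order = information_set["arrangement_order"]
        command = COMMAND_MAP.get(order)   # search for the matching command
        if command is not None:
            self.execute(command)
        return command

    def execute(self, command):
        print(f"executing {command}")

ap = SecondProcessingUnit()
ap.receive({"arrangement_order": (1, 2)})   # executes "unlock_screen"
```

The design point is that the application processor stays in the low-power state until the (cheaper) first processing unit has already verified the sound against the templates, so only valid information sets wake it.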
In practical applications, the first processing unit 502 may specifically include:
a syllable identifying subunit, configured to identify a plurality of syllables included in the sound information;
the matching subunit is used for matching the syllables with a plurality of preset syllable templates respectively;
the arrangement order determining subunit is configured to determine an arrangement order of the identifiers of the syllable templates corresponding to the plurality of syllables when the plurality of syllables are successfully matched with one of a plurality of preset syllable templates respectively;
an information set generating subunit, configured to generate an information set including the arrangement order;
the information sending subunit is used for sending the information set containing the arrangement sequence to the second processing unit;
the second processing unit 503 may specifically include:
the command determining subunit is used for determining the command corresponding to the arrangement order according to the set mapping relationship between arrangement orders and commands;
and the command execution subunit is used for executing the command.
In practical applications, the second processing unit 503 may further include:
an entry operation acquisition unit for detecting a syllable template entry operation of a user before acquiring sound information;
a syllable template information acquisition unit for acquiring syllable template information input by the user after the syllable template entry operation;
and the syllable template storage unit is used for storing the syllable template represented by the syllable template information.
In practical applications, the second processing unit 503 may further include:
the identifier allocation unit is used for allocating corresponding identifiers to the saved syllable templates after the syllable templates represented by the syllable template information are saved;
the corresponding relation display unit is used for displaying the corresponding relation between the syllable templates and the identifiers;
an arrangement order acquisition unit for acquiring the arrangement order of the identifiers input by the user;
the operation command option acquisition unit is used for acquiring the operation command option selected by the user; the operation command option is used for representing a command that needs to be executed;
and the corresponding relation establishing unit is used for establishing the corresponding relation between the arrangement order and the operation command option.
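The setup units listed above can be sketched end to end as plain functions. This is a hypothetical illustration only; the identifier allocation, the correspondence display, and the final binding are named here for clarity, not taken from the patent:

```python
# Hypothetical end-to-end sketch of the setup units, in the order they
# act: save a user-entered template, allocate an identifier, show the
# template/identifier correspondence, then bind the identifier order
# the user types to the operation command option the user selects.

saved_templates = []               # list index + 1 serves as the identifier
order_to_command = {}

def save_syllable_template(template_info):
    """Syllable template storage unit + identifier allocation unit."""
    saved_templates.append(template_info)
    return len(saved_templates)    # allocated identifier

def correspondence_table():
    """What the corresponding relation display unit would show the user."""
    return {i + 1: t for i, t in enumerate(saved_templates)}

def establish_correspondence(identifier_order, command_option):
    """Corresponding relation establishing unit."""
    order_to_command[tuple(identifier_order)] = command_option

id1 = save_syllable_template("template-for-'open'")
id2 = save_syllable_template("template-for-'light'")
establish_correspondence([id1, id2], "turn_on_flashlight")
# correspondence_table() -> {1: "template-for-'open'", 2: "template-for-'light'"}
```

Displaying the correspondence table is what lets the user compose arrangement orders from identifiers rather than re-recording speech for each new command.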
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus a necessary hardware platform, and certainly may also be implemented entirely in hardware, though in many cases the former is the preferred implementation. Based on this understanding, all or part of the technical solutions of the present invention that contribute over the prior art can be embodied in the form of a software product. The software product can be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and includes instructions for causing a computer device (which can be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in some parts of the embodiments, of the present invention.
The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts among the embodiments, reference may be made from one to another. The electronic device disclosed in the embodiments is described relatively briefly because it corresponds to the method disclosed in the embodiments; for relevant details, reference may be made to the description of the method.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and core concept of the invention. Meanwhile, a person skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the application scope. In view of the above, the contents of this specification should not be construed as limiting the invention.

Claims (10)

1. A voice operation input method is applied to an electronic device with a sound acquisition unit, a first processing unit and a second processing unit, and comprises the following steps:
acquiring sound information through the sound acquisition unit;
the first processing unit identifies the sound information;
when the sound information meets a preset condition, extracting the characteristics of the sound information;
generating an information set according to the characteristics of the sound information;
sending the set of information to the second processing unit;
the second processing unit executes a command corresponding to the set of information;
the first processing unit identifies the sound information, and specifically includes:
identifying a plurality of syllables contained in the sound information;
matching the syllables with a plurality of preset syllable templates respectively;
when the sound information meets the predetermined condition, extracting the characteristics of the sound information specifically includes:
when the plurality of syllables are successfully matched with one syllable template in a plurality of preset syllable templates respectively, determining the arrangement sequence of the identifiers of the syllable templates corresponding to the plurality of syllables;
the generating an information set according to the characteristics of the sound information specifically includes:
generating an information set containing the arrangement sequence;
the second processing unit executes a command corresponding to the information set, and specifically includes:
determining the command corresponding to the arrangement sequence according to a set mapping relationship between arrangement sequences and commands;
the command is executed.
2. The method of claim 1, wherein the power consumption of the first processing unit is lower than the power consumption of the second processing unit.
3. The method according to claim 1, wherein the second processing unit executes a command corresponding to the set of information, specifically comprising:
the second processing unit is switched from a first state to a second state; wherein the power consumption of the second processing unit in the first state is lower than the power consumption of the second processing unit in the second state;
the second processing unit in the second state searches for a command corresponding to the information set;
the command is executed.
4. The method of claim 1, wherein before the obtaining of the sound information by the sound collection unit, further comprising:
detecting syllable template input operation of a user;
acquiring syllable template information input by a user after the syllable template input operation;
and saving the syllable template represented by the syllable template information.
5. The method according to claim 4, wherein after saving the syllable template represented by the syllable template information, further comprising:
allocating corresponding identifiers to the saved syllable templates;
displaying the corresponding relation between the syllable templates and the identifiers;
acquiring the arrangement sequence of the identifiers input by the user;
acquiring an operation command option selected by a user; the operation command option is used for representing a command needing to be executed;
and establishing a corresponding relation between the arrangement sequence and the operation command options.
6. An electronic device, characterized in that the electronic device comprises:
the sound acquisition unit is used for acquiring sound information;
a first processing unit for recognizing the sound information;
when the sound information meets a preset condition, extracting the characteristics of the sound information; generating an information set according to the characteristics of the sound information and then sending the information set to a second processing unit;
the second processing unit is used for executing a command corresponding to the information set after receiving the information set;
the first processing unit specifically includes:
a syllable identifying subunit configured to identify a plurality of syllables included in the sound information;
the matching subunit is used for matching the syllables with a plurality of preset syllable templates respectively;
the arrangement order determining subunit is configured to determine an arrangement order of the identifiers of the syllable templates corresponding to the plurality of syllables when the plurality of syllables are successfully matched with one of a plurality of preset syllable templates respectively;
an information set generating subunit, configured to generate an information set including the arrangement order;
the information sending subunit is used for sending the information set containing the arrangement sequence to the second processing unit;
the second processing unit specifically includes:
the command determining subunit is used for determining the command corresponding to the arrangement order according to the set mapping relationship between arrangement orders and commands;
and the command execution subunit is used for executing the command.
7. The electronic device of claim 6, wherein the power consumption of the first processing unit is lower than the power consumption of the second processing unit.
8. The electronic device according to claim 7, wherein the second processing unit specifically includes:
the state switching subunit is used for controlling the second processing unit to be switched from a first state to a second state; wherein the power consumption of the second processing unit in the first state is lower than the power consumption of the second processing unit in the second state;
the command searching subunit is used for controlling the second processing unit in the second state to search for the command corresponding to the information set;
and the command execution subunit is used for executing the command.
9. The electronic device of claim 6, wherein the second processing unit further comprises:
an entry operation acquisition unit for detecting a syllable template entry operation of a user before acquiring sound information;
a syllable template information acquisition unit for acquiring syllable template information input by a user after the syllable template input operation;
and the syllable template storage unit is used for storing the syllable template represented by the syllable template information.
10. The electronic device of claim 9, wherein the second processing unit further comprises:
the identifier allocation unit is used for allocating corresponding identifiers to the saved syllable templates after the syllable templates represented by the syllable template information are saved;
the corresponding relation display unit is used for displaying the corresponding relation between the syllable templates and the identifiers;
an arrangement order acquisition unit for acquiring an arrangement order of the identifiers input by the user;
the operation command option acquisition unit is used for acquiring operation command options selected by a user; the operation command option is used for representing a command needing to be executed;
and the corresponding relation establishing unit is used for establishing the corresponding relation between the arrangement sequence and the operation command options.
CN201410509616.1A 2014-09-28 2014-09-28 Voice operation input method and electronic equipment Active CN105529025B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410509616.1A CN105529025B (en) 2014-09-28 2014-09-28 Voice operation input method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410509616.1A CN105529025B (en) 2014-09-28 2014-09-28 Voice operation input method and electronic equipment

Publications (2)

Publication Number Publication Date
CN105529025A CN105529025A (en) 2016-04-27
CN105529025B true CN105529025B (en) 2019-12-24

Family

ID=55771203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410509616.1A Active CN105529025B (en) 2014-09-28 2014-09-28 Voice operation input method and electronic equipment

Country Status (1)

Country Link
CN (1) CN105529025B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106098066B (en) * 2016-06-02 2020-01-17 深圳市智物联网络有限公司 Voice recognition method and device
CN108806673B (en) * 2017-05-04 2021-01-15 北京猎户星空科技有限公司 Intelligent device control method and device and intelligent device
KR102441067B1 (en) * 2017-10-12 2022-09-06 현대자동차주식회사 Apparatus and method for processing user input for vehicle
WO2019239582A1 (en) * 2018-06-15 2019-12-19 三菱電機株式会社 Apparatus control device, apparatus control system, apparatus control method, and apparatus control program
CN110265011B (en) * 2019-06-10 2020-10-23 龙马智芯(珠海横琴)科技有限公司 Electronic equipment interaction method and electronic equipment

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH067357B2 (en) * 1982-10-19 1994-01-26 シャープ株式会社 Voice recognizer
CN1337670A (en) * 2001-09-28 2002-02-27 北京安可尔通讯技术有限公司 Fast voice identifying method for Chinese phrase of specific person
CN1991976A (en) * 2005-12-31 2007-07-04 潘建强 Phoneme based voice recognition method and system
US8880400B2 (en) * 2009-03-03 2014-11-04 Mitsubishi Electric Corporation Voice recognition device
US8768707B2 (en) * 2011-09-27 2014-07-01 Sensory Incorporated Background speech recognition assistant using speaker verification
US8793136B2 (en) * 2012-02-17 2014-07-29 Lg Electronics Inc. Method and apparatus for smart voice recognition
CN103811003B (en) * 2012-11-13 2019-09-24 联想(北京)有限公司 A kind of audio recognition method and electronic equipment
CN103841248A (en) * 2012-11-20 2014-06-04 联想(北京)有限公司 Method and electronic equipment for information processing
CN103594089A (en) * 2013-11-18 2014-02-19 联想(北京)有限公司 Voice recognition method and electronic device
CN103730120A (en) * 2013-12-27 2014-04-16 深圳市亚略特生物识别科技有限公司 Voice control method and system for electronic device
CN103885596B (en) * 2014-03-24 2017-05-24 联想(北京)有限公司 Information processing method and electronic device
CN103943105A (en) * 2014-04-18 2014-07-23 安徽科大讯飞信息科技股份有限公司 Voice interaction method and system
CN104036778A (en) * 2014-05-20 2014-09-10 安徽科大讯飞信息科技股份有限公司 Equipment control method, device and system

Also Published As

Publication number Publication date
CN105529025A (en) 2016-04-27

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant