CN107613132A - Voice answering method and mobile terminal apparatus

Info

Publication number
CN107613132A
Authority
CN
China
Prior art keywords
voice
voice signal
mobile terminal
terminal apparatus
user
Prior art date
Legal status
Pending
Application number
CN201710903738.2A
Other languages
Chinese (zh)
Inventor
寻亮
张国峰
Current Assignee
Via Technologies Inc
Original Assignee
Via Technologies Inc
Application filed by Via Technologies Inc
Publication of CN107613132A

Landscapes

  • Telephonic Communication Services (AREA)
  • Telephone Function (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A voice answering method for a mobile terminal apparatus includes: determining whether a first voice signal matching identification information is received; when the first voice signal matching the identification information is received, activating a voice receiving unit to receive audio; determining whether a second voice signal is received after the first voice signal; when the second voice signal is received, parsing the second voice signal according to a semantic database to obtain a voice recognition result; determining whether the voice recognition result contains an executable request; and when the voice recognition result contains an executable request, performing a response operation and stopping reception of a third voice signal. When the second voice signal is not received, or when the voice recognition result contains no executable request, a voice dialogue mode is performed.

Description

Voice answering method and mobile terminal apparatus
This application is a divisional application of the invention patent application filed on July 11, 2013 with Application No. 201310291083.X, entitled "Voice answering method and mobile terminal apparatus".
Technical field
The present invention relates to a voice control technology, and more particularly to a voice answering method that automatically activates a hands-free system, and a mobile terminal apparatus using the method.
Background technology
With the development of technology, mobile terminal apparatuses equipped with voice systems have become increasingly popular. Such a voice system uses speech understanding technology to let the user communicate with the mobile terminal apparatus. For example, the user only needs to tell the mobile terminal apparatus a request, such as looking up a train schedule, checking the weather, or making a call, and the system takes a corresponding action according to the user's voice signal. The action may be answering the user's question by voice, or driving the system of the mobile terminal apparatus to act according to the user's instruction.
At present, the voice system is usually started by tapping an application shown on the screen of the mobile terminal apparatus, or by pressing a physical button provided on the mobile terminal apparatus. The user therefore has to touch the screen or the physical button directly to start the voice system, which is rather inconvenient in some situations, for example while driving, or while cooking in the kitchen and needing to use the mobile phone in the living room to ask a friend about recipe details; in such cases the user cannot touch the mobile terminal apparatus immediately but still needs to open the voice system. Furthermore, after a voice dialogue is opened, there is the question of how to carry out multiple rounds of completely hands-free interaction that better follows the natural rules of human conversation. In other words, the user currently still has to operate by hand to start the voice system of the mobile terminal apparatus, and cannot be completely freed from manual operation.
Accordingly, how to overcome the above shortcomings has become an issue to be solved urgently.
The content of the invention
The present invention provides a voice answering method and a mobile terminal apparatus. When the mobile terminal apparatus receives an incoming call, it automatically turns on its hands-free system, allowing the user to communicate with the mobile terminal apparatus by voice, and the mobile terminal apparatus can respond to the incoming call according to what the user says, so that the user no longer needs to participate manually during the dialogue. Thereby, the invention achieves completely hands-free interaction and provides voice services more conveniently and quickly.
The present invention proposes a voice answering method for a mobile terminal apparatus having a normal mode and a first mode. The voice answering method includes the following steps: switching from the normal mode to the first mode; when an incoming call is received in the first mode, sending a voice notification and starting to receive a voice signal; parsing the voice signal to obtain a voice recognition result; and performing a corresponding response operation according to the voice recognition result.
The present invention further proposes a mobile terminal apparatus, which includes a voice output unit, a voice receiving unit, a language understanding module, and an incoming-call communication unit. The voice output unit sends a voice notification. The voice receiving unit receives a voice signal. The language understanding module is coupled to the voice receiving unit and parses the voice signal. The incoming-call communication unit is coupled to the voice output unit and the language understanding module; it receives an incoming call and performs a response operation. The mobile terminal apparatus switches from the normal mode to the first mode, and when the incoming-call communication unit receives an incoming call, it sends a voice notification through the voice output unit and activates the voice receiving unit to receive a voice signal. The language understanding module parses the voice signal to obtain a voice recognition result, and the incoming-call communication unit performs the corresponding response operation according to the voice recognition result.
Based on the above, when the mobile terminal apparatus receives an incoming call in the first mode, it automatically sends a voice notification to ask the user, and lets the user respond by voice according to the notification to control the mobile terminal apparatus. Furthermore, the mobile terminal apparatus performs the corresponding response operation according to what the user says. In this way, the mobile terminal apparatus can quickly and automatically turn on its hands-free system to provide voice services, allowing the user to control the mobile terminal apparatus more conveniently and easily by voice; thereby, when the mobile terminal apparatus receives an incoming call, the user can respond without any manual operation.
To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a block diagram of a mobile terminal apparatus according to an embodiment of the invention.
Fig. 2 is a flow chart of a voice answering method according to an embodiment of the invention.
Fig. 3 is a block diagram of a mobile terminal apparatus according to an embodiment of the invention.
Fig. 4 is a flow chart of a voice control method according to an embodiment of the invention.
Fig. 5 is a flow chart of a voice control method according to an embodiment of the invention.
【Symbol description】
100, 300: Mobile terminal apparatus
104, 304: Auxiliary operation device
106, 306: Semantic database
110, 310: Voice output unit
120, 320: Voice receiving unit
130, 330: Language understanding module
140, 340: Incoming-call communication unit
350: Voice wake-up module
A1: Voice reply
C: Incoming call
V1, V2, V3: Voice signals
SD: Voice recognition result
SO: Voice notification
SI: Voice signal
S202, S204, S206, S208: Steps of the voice answering method
S402, S404, S406, S408, S410, S412, S414, S502, S504, S506, S508, S510: Steps of the voice control method
Embodiment
Although current mobile terminal apparatuses can provide voice systems that allow the user to communicate with the device by voice, the user still has to operate the mobile terminal apparatus itself to start such a voice system. Therefore, in situations where the user cannot touch the mobile terminal apparatus immediately but needs to open the voice system, the user's needs often cannot be met in time. Furthermore, even if the speech dialogue system can be woken up, current mobile devices still require frequent manual participation during the dialogue; for example, after a question has been asked and answered, the user must manually reopen the speech dialogue system to ask again, which is very inconvenient. Therefore, the present invention proposes a voice answering method, a voice control method, and a mobile terminal apparatus that allow the user to open the voice system more conveniently. Furthermore, the invention enables the user to be free of manual operation throughout the dialogue, making the dialogue more convenient, quick, and natural. To make the content of the invention clearer, embodiments are given below as examples according to which the invention can actually be implemented.
Fig. 1 is a block diagram of a mobile terminal apparatus according to an embodiment of the invention. Referring to Fig. 1, the mobile terminal apparatus 100 has a voice output unit 110, a voice receiving unit 120, a language understanding module 130, and an incoming-call communication unit 140. The mobile terminal apparatus 100 is, for example, a cell phone, a personal digital assistant (PDA) phone, a smart phone, a pocket PC with communication software installed, a tablet PC, or a notebook computer. The mobile terminal apparatus 100 can be any portable mobile device with a communication function, and its scope is not limited here. In addition, the mobile terminal apparatus 100 may use an Android operating system, a Microsoft operating system, a Linux operating system, and so on, without being limited to the above. In this embodiment, the mobile terminal apparatus 100 receives an incoming call C through the incoming-call communication unit 140. When the incoming-call communication unit 140 receives the incoming call C, the mobile terminal apparatus 100 automatically sends a voice notification SO through the voice output unit 110 to ask the user how to respond. The mobile terminal apparatus 100 then receives the voice signal SI from the user through the voice receiving unit 120, and parses this voice signal SI through the language understanding module 130 to produce a voice recognition result SD. Finally, the mobile terminal apparatus 100 performs the corresponding response operation through the incoming-call communication unit 140 according to the voice recognition result SD. The functions of the above modules and units are described below.
The voice output unit 110 is, for example, a speaker. The voice output unit 110 has a sound amplification function for outputting the voice notification and the voice of the other party of the call. Specifically, when the mobile terminal apparatus 100 receives the incoming call C, the mobile terminal apparatus 100 sends the voice notification SO through the voice output unit 110 to inform the user of the source of the incoming call C (e.g., the calling party) or to ask whether the user wants to answer the incoming call C. For example, the incoming-call communication unit 140 may announce the phone number carried by the incoming call C through the voice output unit 110, or look up in the contact records the contact name associated with the incoming call C, without being limited to the above. For example, the incoming-call communication unit 140 may say through the voice output unit 110: "Wang Daming is calling you, answer now?", "Company X is calling you, answer now?", "The incoming call is 0922-123564, answer now?", or "The incoming call is 886922-123564, answer now?", and so on, to convey information about the incoming call C. In addition, if the incoming call C does not provide a phone number, the incoming-call communication unit 140 may also send a default voice notification SO through the voice output unit 110, such as "This is an unknown caller, answer now?". On the other hand, after the user answers the incoming call C, the user can also hear the other party through the voice output unit 110.
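As a rough illustration of the announcement logic just described (prefer the contact name, fall back to reading the number, and use a default prompt for an unknown caller), the following Python sketch may help. The function name and the contact store are assumptions for illustration only, not part of the patent.
```python
# Minimal sketch of the voice notification SO described above; the function and
# contact-store names are assumptions, not the patent's implementation.
from typing import Optional

CONTACTS = {"0922-123564": "Wang Daming"}  # hypothetical contact records

def build_voice_notification(caller_number: Optional[str]) -> str:
    """Return the sentence the voice output unit 110 would speak."""
    if not caller_number:                   # incoming call C carries no number
        return "This is an unknown caller, answer now?"
    name = CONTACTS.get(caller_number)      # look up the contact name, if any
    if name:
        return f"{name} is calling you, answer now?"
    return f"The incoming call is {caller_number}, answer now?"

print(build_voice_notification("0922-123564"))  # "Wang Daming is calling you, answer now?"
print(build_voice_notification("0988-000111"))  # falls back to reading the number
print(build_voice_notification(None))           # default unknown-caller prompt
```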
The voice receiving unit 120 is, for example, a microphone, for receiving the user's voice so as to obtain the voice signal SI from the user.
The language understanding module 130 is coupled to the voice receiving unit 120 and parses the voice signal SI received by the voice receiving unit 120 to obtain a voice recognition result. Specifically, the language understanding module 130 may include a voice recognition module and a voice processing module (not illustrated). The voice recognition module receives the voice signal SI transmitted from the voice receiving unit 120 and converts the voice signal into a plurality of segment semantics (such as words or phrases). The voice processing module then parses the meaning represented by these segment semantics (such as an intention, a time, a place, and so on) according to them, and thereby determines the meaning expressed in the voice signal SI. In addition, the voice processing module may also produce corresponding response content according to the parsed result.
More specifically, in natural language understanding under a computer system architecture, a fixed-vocabulary method is usually used to extract the sentences of the voice signal SI, so as to parse the instruction or intention meant by these sentences (such as answering the incoming call C, or rejecting the incoming call C and sending a text message) and determine the meaning of the voice signal SI, thereby obtaining the voice recognition result. In this embodiment, the voice processing module of the language understanding module 130 can query the semantic database 106 to determine which instruction the segment semantics in the voice signal SI correspond to, where the semantic database 106 records the relations between various segment semantics and various commands. In this embodiment, according to the above segment semantics, the voice processing module of the language understanding module 130 can also determine which part of the voice signal SI is the information with which the user intends to respond to the incoming call C.
For example, when the user replies "OK", "answer it", "pick up", or the like, indicating that the user wants to answer the incoming call C, the language understanding module 130 can query the semantic database 106 for the commands corresponding to "OK", "answer it", "pick up", and so on, and parse the voice signal SI as indicating that the incoming call C should be answered. In another embodiment, when the user replies "don't answer", "no", "don't answer for now", or the like, indicating that the user refuses to answer the incoming call C, the language understanding module 130 can query the semantic database 106 for the commands corresponding to "don't answer", "no", "don't answer for now", and so on, and parse the voice signal SI as indicating that the incoming call C should be rejected.
In another embodiment, when the user replies something like "Don't answer it for now; tell him I will call him back when I get to the office", indicating that a message should be sent to respond to the incoming call C, the language understanding module 130 can query the semantic database 106 for the command corresponding to "don't answer for now" and parse the voice signal SI as indicating that the incoming call C should be rejected. Furthermore, the language understanding module 130 can also determine through the semantic database 106 that "tell him" represents a command to send a message, and accordingly perform a communication operation, for example producing a communication signal (such as sending a text message) according to this command. The language understanding module 130 can also determine that the voice after "tell him" is the response content to be included when the message is sent (for example, "I will call back when I get to the office").
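To illustrate how a semantic database might map such segment semantics to commands (answer, reject, or reject and send a message with extracted content), a minimal sketch is given below. The phrase lists and the naive substring matching are assumptions made for illustration; the patent does not prescribe a concrete data structure.
```python
# Illustrative sketch (assumed structure): trigger phrases mapped to call commands,
# plus extraction of the message body after a "tell him ..." marker.
ANSWER_PHRASES = ("ok", "answer it", "pick up")
REJECT_PHRASES = ("don't answer", "refuse")
MESSAGE_MARKER = "tell him"   # text after this marker becomes the reply content

def parse_call_reply(voice_text: str) -> dict:
    """Return a command dict resembling a voice recognition result SD."""
    text = voice_text.lower()
    if MESSAGE_MARKER in text:
        content = text.split(MESSAGE_MARKER, 1)[1].strip()
        return {"command": "reject_and_send_message", "content": content}
    if any(p in text for p in REJECT_PHRASES):
        return {"command": "reject"}
    if any(p in text for p in ANSWER_PHRASES):
        return {"command": "answer"}
    return {"command": "unknown"}   # no executable request -> fall back to dialogue

print(parse_call_reply("Don't answer it for now, tell him I will call back later"))
# {'command': 'reject_and_send_message', 'content': 'i will call back later'}
```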
It should be noted that, in this embodiment, the language understanding module 130 can be implemented by a hardware circuit composed of one or several logic gates, or implemented by computer program code. It is also worth noting that, in another embodiment, the language understanding module may be configured in a cloud server. That is, the mobile terminal apparatus 100 may connect to a cloud server (not illustrated) that contains a language understanding module. In this way, the mobile terminal apparatus 100 can send the received voice signal SI to the language understanding module in the cloud server for parsing, and then obtain the voice recognition result from the cloud server.
The incoming-call communication unit 140 is coupled to the voice receiving unit 120 and the language understanding module 130. The incoming-call communication unit 140 receives the incoming call C and performs a communication operation. Specifically, after receiving the incoming call C, the incoming-call communication unit 140 can, according to the user's voice (described later), answer the incoming call C, reject the incoming call C, transmit a default voice reply to respond to the incoming call C, or transmit a reply signal such as a text message or a voice reply to respond to the incoming call C, where the reply signal carries the response content with which the user intends to respond to the incoming call C.
It should be described here that the mobile terminal apparatus 100 of this embodiment has a normal mode and a first mode. The first mode is, for example, an in-vehicle mode that the mobile terminal apparatus 100 enters when it is used in a moving vehicle. More specifically, in the first mode, when the mobile terminal apparatus 100 receives the incoming call C, it automatically sends a voice notification (such as the source of the incoming call) to ask whether the user wants to answer the incoming call C; that is, the mobile terminal apparatus 100 automatically turns on its hands-free system to interact with the user by voice. In contrast, the normal mode is, for example, the mode in which the mobile terminal apparatus 100 is not in the in-vehicle mode. That is, in the normal mode, the mobile terminal apparatus 100 does not automatically send a voice notification to ask whether the user wants to answer the incoming call C, and does not respond according to the user's voice signal; in other words, the mobile terminal apparatus 100 does not automatically turn on its hands-free system.
In this way, when the mobile terminal apparatus 100 switches to the first mode, if the mobile terminal apparatus 100 receives an incoming call, it sends a voice notification to the user and lets the user send a voice signal to the mobile terminal apparatus 100 by voice, so that the mobile terminal apparatus 100 can respond to the incoming call according to what the user says (for example, performing communication operations such as answering or rejecting the incoming call).
It should be noted that the mobile terminal apparatus 100 of this embodiment can automatically switch from the normal mode to the first mode. Specifically, when the mobile terminal apparatus 100 is connected to the auxiliary operation device 104, the mobile terminal apparatus 100 switches from the normal mode to the first mode. On the other hand, when the mobile terminal apparatus 100 is not connected to the auxiliary operation device 104, the mobile terminal apparatus 100 switches from the first mode back to the normal mode. Here, the mobile terminal apparatus 100 can be paired with the auxiliary operation device 104. When the mobile terminal apparatus 100 is connected to the auxiliary operation device 104 through a wireless transmission signal or through an electrical connection, the mobile terminal apparatus 100 automatically switches to the first mode.
In addition, in another embodiment, when the mobile terminal apparatus 100 is used in a moving vehicle, the mobile terminal apparatus 100 may decide whether to switch to the first mode according to the sensed speed of the vehicle. For example, when the speed of the vehicle exceeds a threshold, the mobile terminal apparatus 100 switches from the normal mode to the first mode. On the other hand, when the speed of the vehicle does not exceed the threshold, the mobile terminal apparatus 100 switches from the first mode back to the normal mode. In this way, the user can control the mobile terminal apparatus 100 by voice more conveniently.
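The mode-switching rules of the last two paragraphs (connection to the auxiliary operation device, or a vehicle speed above a threshold) can be condensed into a short sketch, assuming an example threshold value that the patent does not specify.
```python
# Sketch of the normal-mode / first-mode decision described above; the threshold
# value is an assumed example, the patent only speaks of "a threshold".
from typing import Optional

SPEED_THRESHOLD_KMH = 20.0

def select_mode(connected_to_auxiliary_device: bool,
                vehicle_speed_kmh: Optional[float] = None) -> str:
    """Return 'first' (hands-free / in-vehicle) or 'normal'."""
    if connected_to_auxiliary_device:      # paired over a wireless or wired link
        return "first"
    if vehicle_speed_kmh is not None and vehicle_speed_kmh > SPEED_THRESHOLD_KMH:
        return "first"                     # moving vehicle -> switch to first mode
    return "normal"

print(select_mode(True))           # first
print(select_mode(False, 60.0))    # first (speed above threshold)
print(select_mode(False, 5.0))     # normal
```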
Fig. 2 is a flow chart of a voice answering method according to an embodiment of the invention. Referring to Fig. 1 and Fig. 2, in step S202, the mobile terminal apparatus 100 switches from the normal mode to the first mode. With the mobile terminal apparatus 100 in the first mode, as shown in step S204, when the incoming-call communication unit 140 receives the incoming call C, the incoming-call communication unit 140 sends the voice notification SO through the voice output unit 110 and activates the voice receiving unit 120 to receive the voice signal SI. According to the voice notification SO, the user can learn the source of the incoming call C and can control the incoming-call communication unit 140 by voice to respond to the incoming call C. Therefore, when the incoming-call communication unit 140 receives the incoming call C, the incoming-call communication unit 140 activates the voice receiving unit 120 to receive the voice signal SI from the user.
In step S206, the language understanding module 130 parses the voice signal SI received by the voice receiving unit 120 to obtain a voice recognition result. Here, the language understanding module 130 receives the voice signal SI from the voice receiving unit 120 and divides the voice signal SI into a plurality of segment semantics. The language understanding module 130 then performs natural language understanding on these segment semantics to recognize the response information in the voice signal SI.
Then, in step S208, the incoming-call communication unit 140 performs the corresponding communication operation according to the voice recognition result parsed by the language understanding module 130. In this embodiment, since the user can order the mobile terminal apparatus 100 by voice to answer or reject the incoming call C, to send a message, or to perform another action in response to the incoming call C, the language understanding module 130 can determine the command in the voice signal SI after parsing it. The incoming-call communication unit 140 can therefore perform the communication operation according to the command in the voice signal SI. The communication operation performed by the incoming-call communication unit 140 may be answering the incoming call C, rejecting the incoming call C, transmitting a default voice reply to respond to the incoming call C, or transmitting a reply signal such as a text message or a voice reply to respond to the incoming call C, where the reply signal carries the response content with which the user intends to respond to the incoming call C.
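Putting steps S202 to S208 together, a simplified end-to-end flow might look like the following sketch. All helper names and the injected speak/listen/telephony objects are illustrative assumptions rather than the patented implementation.
```python
# Self-contained sketch of the S202-S208 flow: notify, listen, parse, act.
def parse_reply(text: str) -> str:
    """Toy stand-in for the language understanding module 130."""
    t = text.lower()
    if "tell him" in t:
        return "reject_and_send_message"
    if any(p in t for p in ("don't answer", "refuse")):
        return "reject"
    if any(p in t for p in ("ok", "answer", "pick")):
        return "answer"
    return "unknown"

def handle_incoming_call(caller: str, speak, listen, telephony) -> None:
    speak(f"{caller} is calling you, answer now?")     # S204: voice notification SO
    reply = listen()                                   # S204: voice signal SI
    command = parse_reply(reply)                       # S206: voice recognition
    if command == "answer":                            # S208: communication operation
        telephony.answer()
    elif command == "reject":
        telephony.reject()
    elif command == "reject_and_send_message":
        telephony.reject()
        telephony.send_sms(caller, "I am in a meeting and will call back later.")
    else:
        speak("Sorry, I did not understand.")          # could fall back to dialogue
```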
To help those skilled in the art further understand the communication operations performed by the incoming-call communication unit 140 in this embodiment, further examples are given below, still illustrated with the mobile terminal apparatus 100 of Fig. 1.
When the mobile terminal apparatus 100 switches to the first mode (for example, the mobile terminal apparatus 100 is used in a moving vehicle and enters the in-vehicle mode), assume that the incoming-call communication unit 140 receives the incoming call C; the incoming-call communication unit 140 then sends the voice notification SO "Wang Daming is calling you, answer now?" through the voice output unit 110. In this embodiment, if the user replies with the voice signal SI "OK", the incoming-call communication unit 140 answers the incoming call C.
On the other hand, if the user replies with the voice signal SI "don't answer", the incoming-call communication unit 140 rejects the incoming call C. In one embodiment, the incoming-call communication unit 140 may also transmit the default voice reply "The party you are calling cannot answer right now; please try again later, or leave a message after the beep" to respond to the incoming call C.
In addition, if the user replies with the voice signal SI "Don't answer it for now; tell him I will call him back when I get to the office", the incoming-call communication unit 140 rejects the incoming call C, obtains the response content "call back after getting to the office" from the voice recognition result, and sends a text message containing this response content, for example a text message saying "I am in a meeting and will call back later", to respond to the incoming call C.
In this way, when the mobile terminal apparatus 100 enters the in-vehicle mode, the mobile terminal apparatus 100 automatically asks the user whether to answer the incoming call C, allowing the user to control the mobile terminal apparatus 100 directly by voice to answer, reject, or perform other communication operations.
It should also be noted that this embodiment does not limit the user to responding to the incoming call C by voice. In other embodiments, the user can press a button (not illustrated) configured on the mobile terminal apparatus 100 to make the incoming-call communication unit 140 answer or reject the call. Alternatively, the user can control the incoming-call communication unit 140 to answer or reject the call through an auxiliary operation device (not illustrated) connected to the mobile terminal apparatus 100, for example a portable device with a Bluetooth function or a wireless transmission function.
According to the above, the mobile terminal apparatus 100 can automatically switch from the normal mode to the first mode. When the incoming-call communication unit 140 receives an incoming call in the first mode, the voice output unit 110 sends a voice notification to ask the user. When the user sends a voice signal, the language understanding module 130 parses the voice signal, and the incoming-call communication unit 140 performs the corresponding communication operation according to the voice recognition result obtained by the language understanding module 130. In this way, the mobile terminal apparatus can provide voice services more quickly; when the mobile terminal apparatus 100 is in the first mode, for example when it is used in a moving vehicle, the user can easily respond to an incoming call by voice according to the voice notification sent by the mobile terminal apparatus 100. Thereby, the user can control the mobile terminal apparatus more conveniently.
Fig. 3 is a block diagram of a mobile terminal apparatus according to an embodiment of the invention. Referring to Fig. 3, the mobile terminal apparatus 300 has a voice output unit 310, a voice receiving unit 320, a language understanding module 330, and a voice wake-up module 350. The mobile terminal apparatus 300 of this embodiment is similar to the mobile terminal apparatus 100 of Fig. 1; the difference is that the mobile terminal apparatus 300 of this embodiment further has the voice wake-up module 350.
The voice wake-up module 350 determines whether a voice signal carrying identification information is received. In this embodiment, when the voice wake-up module 350 does not receive a voice signal carrying the identification information, the voice output unit 310, the voice receiving unit 320, and the language understanding module 330 may be in a standby or off mode; that is, the mobile terminal apparatus 300 does not interact with the user by voice. When the voice wake-up module 350 receives a voice signal carrying the identification information, the mobile terminal apparatus 300 activates the voice receiving unit 320 to receive the subsequent voice signal and parses it through the language understanding module 330; that is, the mobile terminal apparatus 300 interacts with the user by voice according to this voice signal and can also perform the corresponding response operation. Therefore, in this embodiment, the user can directly say the voice carrying the identification information (for example, a specific word such as a name) to wake up the mobile terminal apparatus 300 and make it perform the voice interaction function. In addition, the voice wake-up module 350 of this embodiment can be implemented by a hardware circuit composed of one or several logic gates, or implemented by computer program code.
It is worth noting that, because the voice receiving unit 320 is activated only after the voice wake-up module 350 recognizes the identification information, the language understanding module 330 can avoid parsing non-speech signals (such as noise). In addition, because the voice wake-up module 350 only needs to recognize the audio corresponding to the identification information (for example, the audio corresponding to the identification word "Xiao Qian") in order to determine that the received voice signal carries the identification information, the voice wake-up module 350 does not need natural language understanding capability and consumes less power. In this way, when the user does not issue a voice signal carrying the identification information, the mobile terminal apparatus 300 does not start the voice interaction function, so the mobile terminal apparatus 300 not only lets the user operate it conveniently by voice but also saves power.
Therefore, in this embodiment, the mobile terminal apparatus 300 determines through the voice wake-up module 350 whether a voice signal matching the identification information (hereinafter denoted as the voice signal V1) is received. If so, the mobile terminal apparatus 300 activates the voice receiving unit 320 to receive audio, and determines through the language understanding module 330 whether the voice receiving unit 320 receives another voice signal (hereinafter denoted as the voice signal V2) after the voice signal V1. If the language understanding module 330 determines that the voice receiving unit 320 has received the voice signal V2, the language understanding module 330 parses the voice signal V2 to obtain a voice recognition result and determines whether the voice recognition result contains an executable request. If the voice recognition result contains an executable request, the mobile terminal apparatus 300 performs the response operation through the language understanding module 330 and terminates the voice interaction function.
However, if the voice receiving unit 320 does not receive another voice signal V2 after the voice signal V1, or if the voice recognition result obtained by parsing the voice signal V2 contains no executable request, the mobile terminal apparatus 300 performs a voice dialogue mode through the language understanding module 330 to communicate with the user by voice. When performing the voice dialogue mode, the language understanding module 330 automatically sends a voice reply to ask for the user's request (i.e., the user's intention). The language understanding module 330 then determines whether the voice signal output by the user matches a dialogue-termination prompt, or whether it contains an executable request. If so, the voice dialogue mode can be terminated, or terminated after the corresponding executable request has been performed; if not, the language understanding module 330 continues to perform the voice dialogue mode until the voice signal output by the user matches the dialogue-termination prompt or contains an executable request.
The voice control method is described below in conjunction with the above mobile terminal apparatus 300. Fig. 4 is a flow chart of a voice control method according to an embodiment of the invention. Referring to Fig. 3 and Fig. 4, in step S402, the voice wake-up module 350 determines whether a voice signal matching the identification information (hereinafter denoted as the voice signal V1) is received. Specifically, the identification information may be the default audio corresponding to a specific word (such as a name), where the default audio may fall within a specific audio range or a specific energy range. That is, the voice wake-up module 350 determines whether a default audio within the specific audio range or the specific energy range is received, and thereby determines whether the voice signal V1 carrying the identification information is received. In this embodiment, the user can set the identification information in advance through the system of the mobile terminal apparatus 300, for example by recording the default audio corresponding to the identification information in advance, and the voice wake-up module 350 determines whether the voice signal V1 carries the identification information by comparing whether the voice signal V1 matches this default audio. As an example, assume the identification information is the default audio corresponding to the name "Xiao Qian"; the voice wake-up module 350 then determines whether a voice signal V1 containing "Xiao Qian" is received.
If the voice wake-up module 350 does not receive a voice signal V1 matching the identification information, then as shown in step S404, the mobile terminal apparatus 300 does not start the voice interaction function. Because the voice wake-up module 350 has not received a voice signal V1 matching the identification information, the voice receiving unit 320 remains in an off or sleep state and does not receive voice signals, and the language understanding module 330 in the mobile terminal apparatus 300 therefore does not parse subsequent voice signals. For example, assume the identification information is "Xiao Qian"; if the user does not say "Xiao Qian" but says another phrase such as "Xiao Wang", the voice wake-up module 350 cannot receive a voice signal V1 matching "Xiao Qian", so the voice interaction function of the mobile terminal apparatus 300 is not activated.
In step S406, when the voice wake-up module 350 determines that the voice signal V1 matches the identification information, the mobile terminal apparatus 300 activates the voice receiving unit 320 to receive audio. The language understanding module 330 then determines, according to the audio received by the voice receiving unit 320, whether the voice receiving unit 320 receives another voice signal (hereinafter denoted as the voice signal V2) after the voice signal V1. In this embodiment, the language understanding module 330 determines whether the energy of the audio received by the voice receiving unit 320 exceeds a preset value. If the energy of the audio does not reach the preset value, the language understanding module 330 regards the audio as noise and determines that the voice receiving unit 320 has not received the voice signal V2; if the energy of the audio reaches the preset value, the language understanding module 330 determines that the voice receiving unit 320 has received the voice signal V2, and performs the subsequent steps according to this voice signal V2.
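A minimal sketch of the step S406 decision (treat low-energy audio as noise, otherwise accept it as the second voice signal V2) is shown below. The RMS-energy measure and the threshold value are assumptions; the patent only requires comparing the audio energy with a preset value.
```python
# Sketch of the step S406 check: low-energy audio counts as noise, otherwise it is
# accepted as voice signal V2. The RMS measure and threshold value are assumptions.
import math

ENERGY_PRESET_VALUE = 0.02   # hypothetical RMS threshold (normalized amplitude)

def rms_energy(samples: list) -> float:
    """Root-mean-square energy of one audio frame."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def received_second_voice_signal(samples: list) -> bool:
    """True if the captured audio is loud enough to count as voice signal V2."""
    return rms_energy(samples) >= ENERGY_PRESET_VALUE

print(received_second_voice_signal([0.001, -0.002, 0.001]))   # False -> noise
print(received_second_voice_signal([0.2, -0.3, 0.25, -0.1]))  # True  -> parse V2
```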
If the language understanding module 330 determines that the voice receiving unit 320 has not received the voice signal V2, then as shown in step S408, the language understanding module 330 performs the voice dialogue mode. In the voice dialogue mode, the language understanding module 330 sends a voice reply through the voice output unit 310, and continues to receive and parse another voice signal from the user through the voice receiving unit 320, making another voice reply or a response operation accordingly, until the language understanding module 330 determines that a voice signal carrying a dialogue-termination prompt has been provided, or until the mobile terminal apparatus 300 has completed the user's command or request. The detailed steps of the voice dialogue mode are described later (as shown in Fig. 5).
If the language understanding module 330 determines that the voice receiving unit 320 has received the voice signal V2, then as shown in step S410, the language understanding module 330 parses the voice signal V2 to obtain a voice recognition result. The language understanding module 330 receives the voice signal V2 from the voice receiving unit 320, divides the voice signal V2 into a plurality of segment semantics, and performs natural language understanding on these segment semantics to recognize the content of the voice signal V2. Like the language understanding module 130 of Fig. 1, the language understanding module 330 of this embodiment can extract the sentences of the voice signal V2 according to a fixed-vocabulary method, so as to parse the instruction or intention meant by these sentences (such as an imperative sentence or an interrogative sentence) and determine the meaning of the voice signal V2, thereby obtaining the voice recognition result. The language understanding module 330 can query the semantic database 306 to determine which instruction the segment semantics in the voice signal V2 correspond to, where the semantic database 306 records the relations between various segment semantics and various commands.
Then, as shown in step S412, the language understanding module 330 determines whether the voice recognition result contains an executable request. Specifically, an executable request refers, for example, to information that allows the mobile terminal apparatus 300 to complete a requested operation. That is, according to the executable request in the voice recognition result, the language understanding module 330 can make the mobile terminal apparatus 300 perform an action, which the mobile terminal apparatus 300 may complete, for example, through one or more application programs. For example, when the voice signal V2 is "call Wang Daming for me", "check tomorrow's weather in Taipei for me", or "what time is it now", the voice signal V2 contains an executable request; therefore, after parsing the voice signal V2, the language understanding module 330 can make the mobile terminal apparatus 300 call Wang Daming, go online to look up and report tomorrow's weather in Taipei, or look up and report the current time.
On the other hand, if the voice recognition result contains no executable request, it means that the language understanding module 330 cannot determine the user's intention from the voice recognition result and therefore cannot make the mobile terminal apparatus 300 complete a requested operation. For example, when the voice signal V2 is "make a call for me", "check the weather for me", or "now", the language understanding module 330 cannot make the mobile terminal apparatus 300 complete the requested operation after parsing the voice signal V2. That is, the language understanding module 330 cannot determine the called party in the voice signal V2, or the time or place of the weather to look up, and cannot act on a sentence that has no complete meaning.
When the voice recognition result contains an executable request, then as shown in step S414, the language understanding module 330 performs the response operation, and the mobile terminal apparatus 300 stops receiving further voice signals (hereinafter denoted as the voice signal V3), thereby turning off the voice interaction function of the mobile terminal apparatus 300.
Specifically, when the executable request is an operation command, the language understanding module 330 starts the operation function corresponding to the operation command. For example, when the executable request is "turn down the screen brightness", the language understanding module 330 sends a brightness-adjustment signal to the system of the mobile terminal apparatus 300 to make it turn down the screen brightness. In addition, when the executable request is an interrogative sentence, the language understanding module 330 sends a voice reply corresponding to the interrogative sentence. In this case, the language understanding module 330 recognizes one or more keywords in the interrogative sentence, queries a search engine for the corresponding answer according to these keywords, and then outputs the voice reply through the voice output unit 310. For example, when the executable request is "what will the temperature in Taipei be tomorrow?", the language understanding module 330 sends a query signal to look up the corresponding answer through the search engine, and outputs the voice reply "the temperature in Taipei will be 26 degrees tomorrow" through the voice output unit 310.
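Step S414 thus distinguishes an operation command, which starts the corresponding operation function, from an interrogative sentence, which is answered through a search backend and spoken aloud. One possible shape of that dispatch is sketched below; the command table, the question-mark heuristic, and the injected services are assumptions, not the patent's API.
```python
# Illustrative dispatch for step S414; the command table, the "?" heuristic for
# interrogative sentences, and the injected services are assumptions.
OPERATION_COMMANDS = {
    "turn down the screen brightness": lambda system: system.adjust_brightness(-1),
}

def handle_executable_request(request_text: str, system, speak, web_search) -> None:
    """system / speak / web_search are stand-ins for device and cloud services."""
    action = OPERATION_COMMANDS.get(request_text.lower().strip())
    if action is not None:
        action(system)                      # operation command -> operation function
        return
    if request_text.strip().endswith("?"):  # interrogative sentence -> search engine
        answer = web_search(request_text)
        speak(answer)                       # e.g. "The temperature in Taipei will be 26 degrees tomorrow"
        return
    speak("Sorry, I cannot handle that request.")
```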
It should be described here that, because the above executable request allows the mobile terminal apparatus 300 to complete the requested operation, after the language understanding module 330 performs the response operation, the voice receiving unit 320 enters an off or sleep state and no longer receives further voice signals V3. Furthermore, when the voice receiving unit 320 has stopped receiving the voice signal V3, if the user wants to make the mobile terminal apparatus 300 perform a requested operation by voice, the user has to call out the voice carrying the identification information again, so that the voice wake-up module 350 makes the determination and the voice receiving unit 320 is activated again.
When the voice recognition result contains no executable request, then as shown in step S408, the language understanding module 330 performs the voice dialogue mode (the detailed steps of the voice dialogue mode are described later, as shown in Fig. 5). Here, the language understanding module 330 sends a voice reply through the voice output unit 310 according to the voice signal V2, and continues to receive another voice signal through the voice receiving unit 320. That is, the language understanding module 330 continues to receive and parse voice signals from the user, making another voice reply or a response operation accordingly, until the language understanding module 330 determines that a voice signal carrying a dialogue-termination prompt has been provided, or until the mobile terminal apparatus 300 has completed the user's command or request. In this way, in this embodiment, the user only needs to send a voice signal carrying the identification information to communicate with the mobile terminal apparatus 300 easily by voice. Because the mobile terminal apparatus 300 can reopen the voice receiving unit 320 and automatically turn on the voice interaction function again according to a voice signal carrying the identification information, the user's hands are fully freed, and the user can converse with the mobile terminal apparatus 300 and control it entirely by voice to perform the corresponding response operations.
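For reference, the whole flow of Fig. 4 (wake word, second-signal check, parsing, then either respond-and-close or fall into the dialogue mode) can be condensed into the control sketch below; every helper it takes is hypothetical and only mirrors the decision points S402 to S414.
```python
# Condensed control flow of Fig. 4 (steps S402-S414); every helper passed in here
# is hypothetical and only mirrors the decision points of the flow chart.
def voice_control_flow(wake_word_detected, capture_audio, is_voice, parse,
                       has_executable_request, perform_response, run_dialogue_mode):
    if not wake_word_detected():          # S402/S404: no identification information
        return                            # voice interaction stays off
    audio = capture_audio()               # voice receiving unit 320 is activated
    if not is_voice(audio):               # S406: no voice signal V2
        run_dialogue_mode()               # S408: voice dialogue mode
        return
    result = parse(audio)                 # S410: voice recognition result
    if has_executable_request(result):    # S412
        perform_response(result)          # S414: respond, then stop receiving V3
    else:
        run_dialogue_mode()               # S408
```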
To help those skilled in the art further understand the voice dialogue mode performed by the language understanding module 330, further examples are given below, still illustrated with the mobile terminal apparatus 300 of Fig. 3.
Fig. 5 is a flow chart of a voice control method according to an embodiment of the invention. Referring to Fig. 3, Fig. 4, and Fig. 5, when the language understanding module 330 performs the voice dialogue mode (step S408 of Fig. 4), in step S502 of Fig. 5 the language understanding module 330 produces a voice reply, hereinafter denoted as the voice reply A1, and outputs it through the voice output unit 310. The language understanding module 330 may perform the voice dialogue mode because the voice signal V2 was not received (step S406 of Fig. 4), or because the received voice signal V2 contains no executable request (step S412 of Fig. 4); in either case, the language understanding module 330 automatically sends the voice reply A1 to ask for the user's request (the user's intention).
For example, when the voice receiving unit 320 does not receive the voice signal V2, the language understanding module 330 may say through the voice output unit 310: "Is there anything I can do?" or "What service do you need?", and so on, without being limited thereto, to ask the user. In addition, when the voice signal V2 received by the language understanding module 330 contains no executable request, the language understanding module 330 may say through the voice output unit 310: "Which place's weather do you mean?", "Whose phone number do you mean?", or "What do you mean?", and so on, without being limited thereto.
It should be noted that the language understanding module 330 may also find a voice reply that matches the voice signal V2 that contains no executable request. In other words, the language understanding module 330 may enter a voice chat mode to communicate with the user. The language understanding module 330 can realize the voice chat mode through the semantic database 306. Specifically, the semantic database 306 can record a variety of candidate answers, and the language understanding module 330 chooses one of these candidate answers as the voice reply according to a priority order. For example, the language understanding module 330 can determine the priority of these candidate answers according to common usage habits, or according to the user's preferences or habits. It is also worth noting that the semantic database 306 can record the content of the voice replies previously output by the language understanding module 330, and produce a voice reply according to the previous content. The above methods of selecting the voice reply are examples, and this embodiment is not limited thereto.
After the language understanding module 330 outputs the voice reply through the voice output unit 310, in step S504 the language understanding module 330 determines whether the voice receiving unit 320 receives a further voice signal (hereinafter denoted as the voice signal V4). This step is similar to step S406 of Fig. 4, and reference can be made to the foregoing description.
When the voice receiving unit 320 receives the voice signal V4, then as shown in step S506, the language understanding module 330 determines whether the voice signal V4 matches a dialogue-termination prompt, or whether the voice signal V4 contains an executable request. The dialogue-termination prompt is, for example, a specific word that represents the end of the dialogue. That is, the language understanding module 330 parses the voice signal V4, and if the specific word is found, it determines that the voice signal V4 matches the dialogue-termination prompt. For example, when the voice signal V4 matches a dialogue-termination prompt such as "goodbye" or "that's all", the voice receiving unit 320 does not continue to receive voice signals. On the other hand, if the voice signal V4 contains an executable request, the language understanding module 330 performs the response operation corresponding to the executable request. The language understanding module 330 also terminates the voice dialogue mode, and the voice receiving unit 320 does not continue to receive voice signals. This is similar to step S414 of Fig. 4, and reference can be made to the foregoing description.
In step S506, if the voice signal V4 matches the dialogue-termination prompt or contains an executable request, then as shown in step S508, the language understanding module 330 terminates the voice dialogue mode and stops receiving subsequent voice signals, thereby ending the voice communication between the mobile terminal apparatus 300 and the user. That is, if the user now wants to control the mobile terminal apparatus 300 by voice, the user has to say a voice signal carrying the identification information (such as the name "Xiao Qian") to make the mobile terminal apparatus 300 perform voice interaction again.
In addition, in step S506, if the voice signal V4 neither matches the dialogue-termination prompt nor contains an executable request, the flow returns to step S502, and the language understanding module 330 continues to send a voice reply through the voice output unit 310 to ask the user.
On the other hand, returning to step S504, when the voice receiving unit 320 does not receive the voice signal V4, then as shown in step S510, the language understanding module 330 determines whether the number of times the voice signal V4 has not been received within a preset time exceeds a preset number. Specifically, if the voice signal V4 is not received within the preset time, the language understanding module 330 records one count. When the recorded count does not exceed the preset number, the flow returns to step S502, and the language understanding module 330 continues to send a voice reply through the voice output unit 310 to ask for the user's intention; the language understanding module 330 produces the voice reply after the preset time during which the voice receiving unit 320 has not received the voice signal V4. The voice reply is, for example, a question such as "Are you still there?" or "What service do you need?", without being limited thereto.
Conversely, in step S510, when the recorded count exceeds the preset number, then as shown in step S508, the language understanding module 330 terminates the voice dialogue mode, and the voice receiving unit 320 stops receiving subsequent voice signals; that is, the mobile terminal apparatus 300 ends the voice communication with the user and terminates the voice interaction.
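The dialogue-mode behaviour of Fig. 5 (reply, wait for the voice signal V4, stop on a termination prompt or an executable request, and give up after too many silent rounds) is summarized in the sketch below, under assumed values for the preset time and the preset number.
```python
# Sketch of the Fig. 5 dialogue loop (S502-S510); the timeout, the retry limit,
# the termination words, and the injected helpers are all assumptions.
TERMINATION_PROMPTS = {"goodbye", "that's all"}
PRESET_TIMEOUT_S = 5.0     # "preset time" to wait for voice signal V4
PRESET_MAX_TIMEOUTS = 3    # "preset number" of allowed silent rounds

def dialogue_mode(speak, listen, has_executable_request, perform_response) -> None:
    """listen(timeout) returns the user's text, or None if nothing was heard."""
    silent_rounds = 0
    while True:
        speak("What service do you need?")               # S502: voice reply A1
        reply = listen(PRESET_TIMEOUT_S)                 # S504: wait for V4
        if reply is None:                                # S510: count silent rounds
            silent_rounds += 1
            if silent_rounds > PRESET_MAX_TIMEOUTS:
                return                                   # S508: end the dialogue
            continue                                     # ask again (back to S502)
        if reply.lower().strip() in TERMINATION_PROMPTS: # S506: termination prompt
            return                                       # S508
        if has_executable_request(reply):                # S506: executable request
            perform_response(reply)
            return                                       # S508
        # otherwise keep asking for a usable request (back to S502)
```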
It is worth noting that, after the mobile terminal apparatus 300 terminates the voice interaction function, the user can not only call out a voice signal carrying the identification information to communicate with the mobile terminal apparatus 300, but can also use the auxiliary operation device 304 to send a wireless transmission signal to the mobile terminal apparatus 300 to start the voice interaction function. In this case, the mobile terminal apparatus 300 activates the voice receiving unit 320 to receive voice signals.
According to the above, the mobile terminal apparatus 300 of this embodiment can start its voice interaction function according to a voice signal matching the identification information, so as to provide voice services more quickly. When the voice interaction function of the mobile terminal apparatus 300 is not started, the voice wake-up module 350 detects whether a voice signal matching the identification information is received. If the voice wake-up module 350 receives such a voice signal, the voice receiving unit 320 is activated to receive another voice signal after that voice signal. Afterwards, the language understanding module 330 can either perform the response operation according to the other voice signal and terminate the voice interaction function of the mobile terminal apparatus 300, or send a voice reply according to the other voice signal to obtain the user's intention or converse with the user, until a dialogue-termination prompt is parsed or the response operation is performed. In this way, the user only needs to send a voice signal carrying the identification information to communicate with the mobile terminal apparatus 300 easily by voice, and can keep both hands free throughout the communication, because the mobile terminal apparatus 300 automatically turns on the voice interaction function after each dialogue turn. Thereby, the user can control the mobile terminal apparatus 300 more conveniently.
In summary, in the voice answering method and the mobile terminal apparatus of the invention, the mobile terminal apparatus can automatically switch from the normal mode to the first mode. When the mobile terminal apparatus receives an incoming call in the first mode, it sends a voice notification to ask the user, and lets the user send a voice signal by voice to control the mobile terminal apparatus to respond. The mobile terminal apparatus then parses the voice signal from the user and performs the corresponding response operation according to the voice recognition result obtained from the parsing. In this way, the user can easily respond to the incoming call by voice according to the voice notification sent by the mobile terminal apparatus.
In addition, in the voice control method and the mobile terminal apparatus of the invention, the mobile terminal apparatus can start the voice interaction function according to a voice signal matching the identification information. When the voice interaction function of the mobile terminal apparatus is not started, if the mobile terminal apparatus receives a voice signal matching the identification information, it can then receive another voice signal after that voice signal. Afterwards, the mobile terminal apparatus can either perform the response operation according to the other voice signal and terminate the voice interaction function, or send a voice reply according to the other voice signal to obtain the user's intention or converse with the user, until a dialogue-termination prompt is parsed or the response operation is performed. In this way, the user only needs to send a voice signal carrying the identification information to communicate with the mobile terminal apparatus easily by voice, and can keep both hands free throughout the communication, because the mobile terminal apparatus always automatically opens voice input after each dialogue turn. Moreover, the mobile terminal apparatus can terminate the voice interaction according to what the user says, so that voice services can be provided more quickly. Based on this, the voice answering method, the voice control method, and the mobile terminal apparatus of the invention allow the user to control the mobile terminal apparatus more conveniently.
Although the present invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Those skilled in the art may make various changes and modifications without departing from the spirit and scope of the invention; the scope of protection of the present invention is therefore defined by the appended claims.

Claims (12)

1. A voice answering method for a mobile terminal apparatus, the method comprising:
judging whether a first voice signal matching identification information is received;
when the first voice signal matching the identification information is received, starting a voice receiving unit to receive a message;
judging whether a second voice signal is received after the first voice signal;
when the second voice signal is received, parsing the second voice signal according to a semantic database to obtain a voice recognition result;
judging whether the voice recognition result contains executable request information; and
when the voice recognition result contains executable request information, performing a response operation and stopping reception of a third voice signal,
wherein, when the second voice signal is not received or the voice recognition result does not contain executable request information, a voice dialogue mode is performed.
2. The voice answering method as claimed in claim 1, wherein the identification information is a preset vocabulary or a preset volume.
3. The voice answering method as claimed in claim 1, wherein judging whether the second voice signal is received comprises judging whether the energy of the message exceeds a preset value.
4. The voice answering method as claimed in claim 1, wherein performing the voice dialogue mode comprises sending a voice reply to query the user for request information, and receiving and parsing a voice signal from the user.
5. The voice answering method as claimed in claim 4, wherein performing the voice dialogue mode further comprises: if the number of times that no voice signal is received within a preset time exceeds a preset number, terminating the voice dialogue mode.
6. The voice answering method as claimed in claim 4, wherein performing the voice dialogue mode further comprises: if the voice signal contains a termination prompt message or the user's request has been completed, terminating the voice dialogue mode.
7. A mobile terminal apparatus, comprising:
a voice wake-up module, configured to judge whether a first voice signal with identification information is received;
a voice output unit, configured to send a voice notification;
a voice receiving unit, configured to receive a voice signal;
a semantic database, configured to record relations between various semantics and commands;
a language understanding module, coupled to the voice receiving unit and the semantic database, configured to parse the voice signal by means of the semantic database; and
an incoming-call communication unit, coupled to the voice output unit and the language understanding module, configured to receive an incoming call and perform a call operation,
wherein, when the voice wake-up module receives the first voice signal with the identification information, the voice receiving unit is started to receive a message; when the language understanding module judges that a second voice signal is received after the first voice signal, the second voice signal is parsed according to the semantic database to obtain a voice recognition result; and when the voice recognition result contains executable request information, the mobile terminal apparatus performs a response operation and stops receiving a third voice signal, and
wherein, when the second voice signal is not received or the voice recognition result does not contain executable request information, the mobile terminal apparatus performs a voice dialogue mode.
8. The mobile terminal apparatus as claimed in claim 7, wherein the identification information is a preset vocabulary or a preset volume.
9. The mobile terminal apparatus as claimed in claim 7, wherein the language understanding module judging whether the second voice signal is received comprises judging whether the energy of the message exceeds a preset value.
10. The mobile terminal apparatus as claimed in claim 7, wherein the mobile terminal apparatus performing the voice dialogue mode comprises sending a voice reply to query the user for request information, and receiving and parsing a voice signal from the user.
11. The mobile terminal apparatus as claimed in claim 10, wherein the mobile terminal apparatus performing the voice dialogue mode further comprises: if the number of times that no voice signal is received within a preset time exceeds a preset number, terminating the voice dialogue mode.
12. The mobile terminal apparatus as claimed in claim 10, wherein the mobile terminal apparatus performing the voice dialogue mode further comprises: if the voice signal contains a termination prompt message or the user's request has been completed, terminating the voice dialogue mode.
CN201710903738.2A 2013-04-10 2013-07-11 Voice answering method and mobile terminal apparatus Pending CN107613132A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN 201310122236 CN103220423A (en) 2013-04-10 2013-04-10 Voice answering method and mobile terminal device
CN2013101222368 2013-04-10
CN201310291083.XA CN104104789A (en) 2013-04-10 2013-07-11 Voice answering method and mobile terminal device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201310291083.XA Division CN104104789A (en) 2013-04-10 2013-07-11 Voice answering method and mobile terminal device

Publications (1)

Publication Number Publication Date
CN107613132A true CN107613132A (en) 2018-01-19

Family

ID=48817867

Family Applications (3)

Application Number Title Priority Date Filing Date
CN 201310122236 Pending CN103220423A (en) 2013-04-10 2013-04-10 Voice answering method and mobile terminal device
CN201710903738.2A Pending CN107613132A (en) 2013-04-10 2013-07-11 Voice answering method and mobile terminal apparatus
CN201310291083.XA Pending CN104104789A (en) 2013-04-10 2013-07-11 Voice answering method and mobile terminal device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN 201310122236 Pending CN103220423A (en) 2013-04-10 2013-04-10 Voice answering method and mobile terminal device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201310291083.XA Pending CN104104789A (en) 2013-04-10 2013-07-11 Voice answering method and mobile terminal device

Country Status (2)

Country Link
CN (3) CN103220423A (en)
TW (1) TWI535258B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103929532A (en) * 2014-03-18 2014-07-16 联想(北京)有限公司 Information processing method and electronic equipment
CN104464723B (en) * 2014-12-16 2018-03-20 科大讯飞股份有限公司 A kind of voice interactive method and system
CN107395867B (en) * 2015-03-06 2020-05-05 Oppo广东移动通信有限公司 Convenient call method and system for mobile terminal
CN105049591A (en) * 2015-05-26 2015-11-11 腾讯科技(深圳)有限公司 Method and device for processing incoming call
CN105007375A (en) * 2015-07-20 2015-10-28 广东小天才科技有限公司 Method and device for automatically answering external incoming call
CN105472152A (en) * 2015-12-03 2016-04-06 广东小天才科技有限公司 Method and system for intelligent terminal to automatically answer call
CN105810194B (en) * 2016-05-11 2019-07-05 北京奇虎科技有限公司 Speech-controlled information acquisition methods and intelligent terminal under standby mode
JP6508251B2 (en) * 2017-04-27 2019-05-08 トヨタ自動車株式会社 Voice dialogue system and information processing apparatus
CN107465805A (en) * 2017-06-28 2017-12-12 深圳天珑无线科技有限公司 A kind of incoming call answering method, the device and communication terminal with store function
TWI639115B (en) 2017-11-01 2018-10-21 塞席爾商元鼎音訊股份有限公司 Method of detecting audio inputting mode
CN108880993A (en) * 2018-07-02 2018-11-23 广东小天才科技有限公司 Voice instant messaging method, system and mobile terminal
CN112995929A (en) * 2019-11-29 2021-06-18 长城汽车股份有限公司 Short message sending method and device and vehicle

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1494299A (en) * 2002-10-30 2004-05-05 英华达(上海)电子有限公司 Device and method for converting speech sound input into characters on handset
CN102843471A (en) * 2012-08-17 2012-12-26 广东欧珀移动通信有限公司 Method for intelligently controlling answer mode of mobile phone and mobile phone
CN103139396A (en) * 2013-03-28 2013-06-05 上海斐讯数据通信技术有限公司 Implementation method of contextual model and mobile terminal

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101211504A (en) * 2006-12-31 2008-07-02 康佳集团股份有限公司 Method, system and apparatus for remote control for TV through voice
US8165886B1 (en) * 2007-10-04 2012-04-24 Great Northern Research LLC Speech interface system and method for control and interaction with applications on a computing system
CN101657033A (en) * 2008-08-22 2010-02-24 环达电脑(上海)有限公司 Portable communication apparatus and method with voice control
TW201013635A (en) * 2008-09-24 2010-04-01 Mitac Int Corp Intelligent voice system and method thereof
CN202413790U (en) * 2011-12-15 2012-09-05 浙江吉利汽车研究院有限公司 Automobile self-adapting speech prompting system
CN102932595A (en) * 2012-10-22 2013-02-13 北京小米科技有限责任公司 Method and device for sound-control photographing and terminal
CN103024177A (en) * 2012-12-13 2013-04-03 广东欧珀移动通信有限公司 Mobile terminal driving mode operation method and mobile terminal

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108847236A (en) * 2018-07-26 2018-11-20 珠海格力电器股份有限公司 Method and device for receiving voice information and method and device for analyzing voice information
CN110060678A (en) * 2019-04-16 2019-07-26 深圳欧博思智能科技有限公司 A kind of virtual role control method and smart machine based on smart machine
CN111160002A (en) * 2019-12-27 2020-05-15 北京百度网讯科技有限公司 Method and device for analyzing abnormal information in output spoken language understanding
CN111191005A (en) * 2019-12-27 2020-05-22 恒大智慧科技有限公司 Community query method and system, community server and computer readable storage medium
KR20210084207A (en) * 2019-12-27 2021-07-07 베이징 바이두 넷컴 사이언스 앤 테크놀로지 코., 엘티디. Method and apparatus for outputting analysis abnormality information in spoken language understanding
CN111160002B (en) * 2019-12-27 2022-03-01 北京百度网讯科技有限公司 Method and device for analyzing abnormal information in output spoken language understanding
KR102382421B1 (en) 2019-12-27 2022-04-05 베이징 바이두 넷컴 사이언스 앤 테크놀로지 코., 엘티디. Method and apparatus for outputting analysis abnormality information in spoken language understanding
US11482211B2 (en) 2019-12-27 2022-10-25 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for outputting analysis abnormality information in spoken language understanding

Also Published As

Publication number Publication date
TW201440482A (en) 2014-10-16
CN103220423A (en) 2013-07-24
TWI535258B (en) 2016-05-21
CN104104789A (en) 2014-10-15

Similar Documents

Publication Publication Date Title
CN107274897A (en) Voice control method and mobile terminal apparatus
CN107613132A (en) Voice answering method and mobile terminal apparatus
CN103279508B (en) Revise method and the natural language dialogue system of voice answer-back
US9111538B2 (en) Genius button secondary commands
US8099289B2 (en) Voice interface and search for electronic devices including bluetooth headsets and remote systems
US7400712B2 (en) Network provided information using text-to-speech and speech recognition and text or speech activated network control sequences for complimentary feature access
CN1220176C (en) Method for training or adapting to phonetic recognizer
CN101971250B (en) Mobile electronic device with active speech recognition
CN104168353B (en) Bluetooth headset and its interactive voice control method
CN108141498B (en) Translation method and terminal
CN108108142A (en) Voice information processing method, device, terminal device and storage medium
JP2007529916A (en) Voice communication with a computer
US6563911B2 (en) Speech enabled, automatic telephone dialer using names, including seamless interface with computer-based address book programs
TW201246899A (en) Handling a voice communication request
CN101415257A (en) Man-machine conversation chatting method
KR20140067687A (en) Car system for interactive voice recognition
CN111554280A (en) Real-time interpretation service system for mixing interpretation contents using artificial intelligence and interpretation contents of interpretation experts
WO2022012413A1 (en) Three-party call terminal for use in mobile man-machine collaborative calling robot
CN111835923B (en) Mobile voice interactive dialogue system based on artificial intelligence
CN109036401A (en) A method of opening speech control system
CN104575496A (en) Method and device for automatically sending multimedia documents and mobile terminal
CN111775165A (en) System, robot terminal and back-end processing module for realizing mobile intelligent customer service robot
CN107465823A (en) A kind of audio communication method, remote control and audio communication system
CN102456305A (en) Portable intelligent multimedia navigation system based on voice recognition
CN111274828A (en) Language translation method, system, computer program and handheld terminal based on message leaving

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180119