CN106657543B - Voice information processing method and device
- Publication number: CN106657543B (application number CN201610932254.6A)
- Authority: CN (China)
Classifications
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04M—TELEPHONIC COMMUNICATION
      - H04M1/00—Substation equipment, e.g. for use by subscribers
        - H04M1/64—Automatic arrangements for answering calls; Automatic arrangements for recording messages for absent subscribers; Arrangements for recording conversations
          - H04M1/65—Recording arrangements for recording a message from the calling party
            - H04M1/656—Recording arrangements for recording a message from the calling party for recording conversations
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04M—TELEPHONIC COMMUNICATION
      - H04M2250/00—Details of telephonic subscriber devices
        - H04M2250/60—Details of telephonic subscriber devices: logging of communication history, e.g. outgoing or incoming calls, missed calls, messages or URLs
Abstract
The disclosure relates to a voice information processing method and device. The method comprises the following steps: acquiring voice information of an opposite-end user in the current call; acquiring the language type of the voice information according to the voice information; and storing the call record of the current call according to the language type, wherein the call record comprises the contact telephone, the call time or the call duration of the opposite-end user. According to this technical scheme, the call record of the call with the opposite-end user is stored according to the opposite-end user's language type. When the local-end user later searches for the call record, the user can first recall the language type the opposite-end user spoke during that call, and the terminal then searches the call records by the language type the local-end user indicates. This improves the accuracy of the search, shortens the search time, and thus improves the user experience.
Description
Technical Field
The present disclosure relates to the field of information processing technologies, and in particular, to a method and an apparatus for processing voice information.
Background
With the development of communication technology, communication devices are widely used and people's social circles have greatly expanded. As the social circle grows, users make voice calls with their communication devices more and more frequently. In practice, a user answers calls not only from acquaintances and friends but also from unfamiliar numbers, and information such as the number, time, and duration of each call is stored in the call record of the communication device.
Disclosure of Invention
In order to overcome the problems in the related art, embodiments of the present disclosure provide a method and an apparatus for processing voice information. The technical scheme is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a method for processing voice information, including:
acquiring voice information of an opposite-end user in the current call;
acquiring the language type of the voice information according to the voice information;
and storing the call record of the current call according to the language type, wherein the call record comprises the contact telephone, the call time or the call duration of the opposite-end user.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the call record of the call with the opposite-end user is stored according to the opposite-end user's language type, so that when the local-end user later searches for the call record, the user can first recall the language type the opposite-end user spoke during that call, and the terminal then searches the call records by the language type the local-end user indicates. This improves the accuracy of searching the call records, shortens the search time, and thus improves the user experience.
In one embodiment, the method further comprises:
determining whether the contact information of the opposite-end user is stored in an address book;
and if the contact information of the opposite-end user is stored in the address book, adding the language type in the contact information of the opposite-end user.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: when the contact information of the opposite-end user is stored in the address book, the acquired language type can be added to that contact entry, so that when the local-end user later searches for the opposite-end user's contact information, the user can first recall the language type the opposite-end user spoke during that call, and the terminal then searches the address book by the language type the local-end user indicates. This improves the accuracy of finding the opposite-end user's contact information, shortens the search time, and thus improves the user experience.
In one embodiment, the method further comprises:
acquiring a first operation instruction, wherein the first operation instruction comprises a reference language type;
and displaying at least one call record corresponding to the reference language type and/or a contact way of at least one contact corresponding to the reference language type in the address book according to the first operation instruction.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: when the home terminal user needs to search the contact way corresponding to the reference language type, a first operation instruction can be input, wherein the first operation instruction comprises the reference language type. After receiving a first operation instruction input by a home terminal user, the terminal displays the contact ways or call records of all contacts corresponding to the reference language type specified by the home terminal user, so that the user can conveniently search, and the user experience is improved.
In one embodiment, the obtaining of the first operation instruction, where the first operation instruction includes a reference language type, includes:
receiving input first voice information;
acquiring the language type of the first voice information;
determining a language type of the first speech information as the reference language type.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the terminal can receive the voice information input by the home terminal user, and the language type of the voice information is determined as the reference language type, so that the contact way or the call record corresponding to the language type of the voice information is displayed, inconvenience caused when the home terminal user manually inputs the language type is avoided, and user experience is improved.
In one embodiment, the obtaining the language type of the voice information includes:
sending a query request to a server, wherein the query request comprises voice information of the opposite-end user;
and receiving query result information sent by the server, wherein the query result information comprises the language type of the voice information queried by the server.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the language type of the voice information of the opposite-end user is acquired through the server, so that hardware or software resources occupied when the language type is acquired at the terminal side are avoided, and the convenience of acquiring the language type is improved.
In one embodiment, the obtaining the language type of the voice information includes:
acquiring tone characteristics of the voice information;
and acquiring the language type of the voice information according to the corresponding relation between the tone characteristics and the language type.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the language type of the voice information can be obtained on the terminal side through the corresponding relation between the tone characteristics and the language type, and the timeliness and convenience for obtaining the language type of the voice information are improved.
In one embodiment, the obtaining the language type of the voice information includes:
acquiring a second operation instruction, wherein the second operation instruction comprises a first standard language type;
and storing the language type of the voice information as the first standard language type according to the second operation instruction.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: if the terminal or the server cannot automatically identify the language type of the voice information, the user can manually input the language type of the voice information, and the terminal stores the language type of the voice information according to the information input by the user, so that the condition that the terminal cannot acquire the language type of the voice information is avoided, and the user experience is improved.
In one embodiment, the method further comprises:
acquiring a third operation instruction, wherein the third operation instruction comprises a second standard language type;
and modifying the language type of the voice information into the second standard language type according to the third operation instruction.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: when the language type of the voice information automatically identified by the terminal or the server is wrong, the user can input the correct language type, and the terminal can modify the language type of the stored voice information according to the information input by the user, so that the accuracy of the terminal for acquiring the language type is improved, and the user experience is further improved.
According to a second aspect of the embodiments of the present disclosure, there is provided a voice information processing apparatus including:
the first acquisition module is used for acquiring the voice information of an opposite terminal user in the current call;
the second acquisition module is used for acquiring the language type of the voice information according to the voice information;
and the storage module is used for storing the call record of the current call according to the language type, wherein the call record comprises the contact telephone, the call time or the call duration of the opposite-end user.
In one embodiment, the apparatus further comprises:
the first determining module is used for determining whether the contact information of the opposite-end user is stored in the address book;
and the processing module is used for adding the language type in the contact way of the opposite-end user if the contact way of the opposite-end user is stored in the address book.
In one embodiment, the apparatus further comprises:
the third acquisition module is used for acquiring a first operation instruction, and the first operation instruction comprises a reference language type;
and the display module is used for displaying at least one call record corresponding to the reference language type and/or the contact way of at least one contact corresponding to the reference language type in the address book according to the first operation instruction.
In one embodiment, the third obtaining module comprises:
the first receiving submodule is used for receiving input first voice information;
the first obtaining submodule is used for obtaining the language type of the first voice information;
a determining submodule, configured to determine a language type of the first speech information as the reference language type.
In one embodiment, the second obtaining module comprises:
the sending submodule is used for sending a query request to a server, wherein the query request comprises the voice information of the opposite-end user;
and the second receiving submodule is used for receiving query result information sent by the server, wherein the query result information comprises the language type of the voice information queried by the server.
In one embodiment, the second obtaining module comprises:
the second obtaining submodule is used for obtaining tone characteristics of the voice information;
and the third obtaining submodule is used for obtaining the language type of the voice information according to the corresponding relation between the tone characteristic and the language type.
In one embodiment, the second obtaining module comprises:
the fourth obtaining submodule is used for obtaining a second operation instruction, and the second operation instruction comprises a first standard language type;
and the storage submodule is used for storing the language type of the voice information as the first standard language type according to the second operation instruction.
In one embodiment, the apparatus further comprises:
the fourth obtaining module is used for obtaining a third operation instruction, and the third operation instruction comprises a second standard language type;
and the modifying module is used for modifying the language type of the voice information into the second standard language type according to the third operation instruction.
According to a third aspect of the embodiments of the present disclosure, there is provided a voice information processing apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring voice information of an opposite-end user in the current call;
acquiring the language type of the voice information according to the voice information;
and storing the call record of the current call according to the language type, wherein the call record comprises the contact telephone, the call time or the call duration of the opposite-end user.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1a is a flowchart 1 illustrating a voice information processing method according to an exemplary embodiment.
FIG. 1b is a flowchart 2 illustrating a voice information processing method according to an exemplary embodiment.
FIG. 1c is a flowchart 3 illustrating a voice information processing method according to an exemplary embodiment.
FIG. 1d is a flowchart 4 illustrating a voice information processing method according to an exemplary embodiment.
Fig. 2 is an interaction diagram illustrating a voice information processing method according to an exemplary embodiment.
Fig. 3 is a flowchart 5 illustrating a voice information processing method according to an example embodiment.
Fig. 4 is a flowchart 6 illustrating a voice information processing method according to an example embodiment.
Fig. 5a is a schematic diagram 1 illustrating the structure of a speech information processing apparatus according to an exemplary embodiment.
Fig. 5b is a schematic diagram 2 illustrating the structure of a speech information processing apparatus according to an exemplary embodiment.
Fig. 5c is a schematic diagram 3 illustrating the structure of a speech information processing apparatus according to an exemplary embodiment.
Fig. 5d is a schematic diagram 4 illustrating the structure of a speech information processing apparatus according to an exemplary embodiment.
Fig. 5e is a schematic diagram 5 illustrating the structure of a speech information processing apparatus according to an exemplary embodiment.
Fig. 5f is a schematic diagram 6 illustrating the structure of a speech information processing apparatus according to an exemplary embodiment.
Fig. 5g is a schematic diagram 7 illustrating the structure of a speech information processing apparatus according to an exemplary embodiment.
Fig. 5h is a schematic diagram 8 illustrating the structure of a speech information processing apparatus according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating a voice information processing apparatus according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Usually, a user answers not only calls from acquaintances or friends but also calls from unfamiliar numbers. Although the terminal stores the call records of these unfamiliar numbers, there may be records of several unfamiliar numbers within the same day or even the same hour, and as time passes the user finds it hard to remember the exact time and duration of each call. When searching again, the user can hardly distinguish the records by call time or duration, but usually remembers firmly which language was spoken during the call. Therefore, if the call records are searched according to the language type used by the opposite-end user during the call, the record the user needs can be found more accurately.
Fig. 1a is a flowchart illustrating a voice information processing method according to an exemplary embodiment, and as shown in fig. 1a, the voice information processing method is applied to a terminal, which may be a mobile phone, a tablet computer, a smart watch, or other devices capable of performing a voice call, which is not limited in this disclosure. The voice information processing method includes the following steps 101 to 103:
in step 101, the voice information of the opposite terminal user in the current call is obtained.
For example, when the terminal is currently in a call, the opposite-end user can communicate with the home-end user through voice, so that the terminal can monitor a voice channel during the call through a system interface, and further acquire voice information of the opposite-end user.
Because the voice channel carries not only the opposite-end user's voice but also environmental noise or electromagnetic noise from the opposite end, when monitoring the voice channel the terminal may first capture all the sound information in the channel and then extract the opposite-end user's voice information from it by means such as amplification and filtering.
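The patent does not prescribe a particular extraction algorithm. As a hedged illustration only, the sketch below shows how downlink samples might be cleaned up with a simple first-order band-pass filter plus a gain stage before language identification; the function name extractVoice, the 8 kHz sample rate, the cutoff frequencies, and the fixed gain are assumptions.

```kotlin
import kotlin.math.PI

// Minimal sketch: separate the opposite-end voice from channel noise with a
// crude band-pass filter (high-pass then low-pass) and a fixed gain stage.
// Sample rate, cutoffs, gain, and names are assumptions, not the patent's method.
fun extractVoice(samples: DoubleArray, sampleRate: Int = 8000): DoubleArray {
    // Simple first-order low-pass filter.
    fun lowPass(input: DoubleArray, cutoffHz: Double): DoubleArray {
        val rc = 1.0 / (2 * PI * cutoffHz)
        val dt = 1.0 / sampleRate
        val alpha = dt / (rc + dt)
        val out = DoubleArray(input.size)
        var prev = 0.0
        for (i in input.indices) {
            prev += alpha * (input[i] - prev)
            out[i] = prev
        }
        return out
    }
    // High-pass = original minus its low-passed version (removes hum below ~300 Hz).
    val below300 = lowPass(samples, 300.0)
    val highPassed = DoubleArray(samples.size) { samples[it] - below300[it] }
    // Low-pass near the top of the telephone voice band (~3400 Hz).
    val bandPassed = lowPass(highPassed, 3400.0)
    // Fixed amplification; a real terminal would likely use automatic gain control.
    val gain = 2.0
    return DoubleArray(bandPassed.size) { bandPassed[it] * gain }
}
```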
In step 102, the language type of the voice message is obtained according to the voice message.
For example, the language type describes the language or the accent of the opposite-end user's voice information. If the opposite-end user communicates with the local-end user in English, the language type of the voice information is English; if the opposite-end user communicates in Russian, the language type is Russian; if the opposite-end user speaks with a Henan accent, the language type is the Henan dialect; if the opposite-end user speaks with a southern Fujian accent, the language type is the Minnan (southern Fujian) dialect.
In step 103, storing a call record of the current call according to the language type, wherein the call record comprises a contact telephone, a call time or a call duration of the opposite-end user.
For example, after the language type of the voice information is acquired, the terminal may save the call record with the opposite-end user by using the language type as an identifier.
For example, if the language type of the voice information is English, English may be used as the identifier of the call record; if the language type is the Henan dialect, the Henan dialect may be used as the identifier.
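To make step 103 concrete, here is a minimal Kotlin sketch of a call record tagged with the recognized language type and stored for later lookup. The names CallRecord, CallLog, and findByLanguage are illustrative assumptions, not the patent's data model.

```kotlin
// Minimal sketch of a call record keyed by language type (all names are assumptions).
data class CallRecord(
    val contactPhone: String,
    val callTimeMillis: Long,
    val callDurationSec: Long,
    val languageType: String // e.g. "English", "Russian", "Henan dialect"
)

class CallLog {
    private val records = mutableListOf<CallRecord>()

    // Step 103: store the record of the current call together with its language type.
    fun store(record: CallRecord) {
        records += record
    }

    // Later search: return every record whose identifier matches the language type.
    fun findByLanguage(languageType: String): List<CallRecord> =
        records.filter { it.languageType.equals(languageType, ignoreCase = true) }
}

fun main() {
    val log = CallLog()
    log.store(CallRecord("13800000000", System.currentTimeMillis(), 95, "English"))
    log.store(CallRecord("13900000000", System.currentTimeMillis(), 40, "Henan dialect"))
    println(log.findByLanguage("English")) // prints the single English-tagged record
}
```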
According to the technical scheme provided by the embodiment of the disclosure, the call record of the call with the opposite-end user is stored according to the language type of the opposite-end user, so that when the local-end user searches the call record again, the language type of the opposite-end user during the current call can be firstly recalled, and then the terminal searches the call record according to the language type indicated by the local-end user, so that the accuracy of searching the call record is improved, the time of searching the call record is shortened, and further the user experience is improved.
In one embodiment, as shown in fig. 1b, the method further comprises the following steps 104 to 105:
in step 104, it is determined whether the contact information of the opposite-end user is stored in the address book.
For example, the address book stores contact addresses of a plurality of contacts, and each contact address includes information such as contact phone, address or mailbox of the contact. After the communication with the opposite-end user is finished, the terminal can sequentially traverse the contact way of each contact in the address book according to the contact telephone of the opposite-end user, and determine whether the contact way of the opposite-end user is stored in the local address book.
In step 105, if the contact address of the opposite end user is stored in the address book, a language type is added to the contact address of the opposite end user.
For example, contacts in the address book are usually sorted by the first letter of the surname, so many contacts may be listed under the same surname: some are family members and elders of the local-end user, some are friends, and some are colleagues. Therefore, if the contact information of the opposite-end user is stored in the address book, the language type can be added to that contact entry, so that when the local-end user needs to look up the opposite-end user's contact information, the query can be narrowed by language type under the corresponding surname, which improves the accuracy of the search.
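A minimal sketch of steps 104 and 105, assuming a simple in-memory address book keyed by contact phone; Contact, AddressBook, and tagLanguage are hypothetical names rather than terms from the patent.

```kotlin
// Minimal sketch of steps 104-105 (all names are assumptions).
data class Contact(
    val name: String,
    val phone: String,
    var languageType: String? = null // filled in after a call, if the contact exists
)

class AddressBook(private val contacts: MutableList<Contact>) {

    // Step 104: check whether the opposite-end user's phone is already stored.
    fun find(phone: String): Contact? = contacts.find { it.phone == phone }

    // Step 105: if it is stored, add the recognized language type to the contact entry.
    fun tagLanguage(phone: String, languageType: String): Boolean {
        val contact = find(phone) ?: return false // not stored: nothing to tag
        contact.languageType = languageType
        return true
    }

    // Used later when the local-end user searches contacts by language (step 107).
    fun findByLanguage(languageType: String): List<Contact> =
        contacts.filter { it.languageType.equals(languageType, ignoreCase = true) }
}
```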
According to the technical scheme provided by the embodiment of the disclosure, when the contact way of the opposite-end user is stored in the address book, the acquired language type can be added in the contact way of the opposite-end user, so that when the local-end user searches the contact way of the opposite-end user again, the language type of the opposite-end user during the current call can be firstly recalled, then the terminal searches the address book according to the language type indicated by the local-end user, the accuracy of searching the contact way of the opposite-end user is improved, the time of searching the contact way of the opposite-end user is shortened, and further the user experience is improved.
In one embodiment, as shown in fig. 1c, the method further comprises steps 106 and 107:
in step 106, a first operation instruction is obtained, wherein the first operation instruction comprises a reference language type.
For example, when the local-end user searches for the contact information of the opposite-end user, the user may first recall the language type of the opposite-end user and then input the recalled language type to the terminal, that is, input a first operation instruction to the terminal. The first operation instruction includes the language type the user recalled, and this language type is the reference language type.
In step 107, according to the first operation instruction, at least one call record corresponding to the reference language type and/or a contact way of at least one contact corresponding to the reference language type in the address book is displayed.
For example, after receiving the first operation instruction, the terminal obtains a reference language type in the first operation instruction, and then displays the contact ways of all contacts corresponding to the reference language type in the locally stored address book, or all call records corresponding to the reference language type, so that the local end user searches the contact way of the opposite end user in the information displayed by the terminal.
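A hedged sketch of how steps 106 and 107 might tie together: a first operation instruction carrying the reference language type drives a filtered display of call records and/or contacts. FirstOperationInstruction and the simplified record and contact types below are assumptions.

```kotlin
// Self-contained sketch of steps 106-107 (all names and fields are assumptions).
data class FirstOperationInstruction(val referenceLanguageType: String)

data class RecordEntry(val phone: String, val languageType: String)                   // simplified call record
data class BookEntry(val name: String, val phone: String, val languageType: String?)  // simplified contact

fun handleFirstInstruction(
    instruction: FirstOperationInstruction,
    callRecords: List<RecordEntry>,
    addressBook: List<BookEntry>
) {
    val lang = instruction.referenceLanguageType
    // Step 107: display every call record and/or contact tagged with the reference language type.
    callRecords.filter { it.languageType.equals(lang, ignoreCase = true) }
        .forEach { println("Call record: ${it.phone} (${it.languageType})") }
    addressBook.filter { it.languageType.equals(lang, ignoreCase = true) }
        .forEach { println("Contact: ${it.name} ${it.phone}") }
}
```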
In the technical scheme provided by the embodiment of the disclosure, when the home terminal user needs to search the contact way corresponding to the reference language type, a first operation instruction can be input, and the first operation instruction comprises the reference language type. After receiving a first operation instruction input by a home terminal user, the terminal displays the contact ways or call records of all contacts corresponding to the reference language type specified by the home terminal user, so that the user can conveniently search, and the user experience is improved.
In one embodiment, to obtain the first operation instruction including the reference language type in step 106, the terminal may first receive first voice information, then obtain the language type of the first voice information, and determine that language type as the reference language type.
For example, suppose the local-end user is currently riding a bus and must hold a handle with both hands to keep balance while the bus is moving. If the user needs to query the contact information of the opposite-end user at this moment, the language type to be queried cannot be typed by hand. The terminal can therefore turn on the microphone according to the user's instruction, and the user reads a passage aloud in the opposite-end user's language type: for example, the terminal may display a piece of text on the screen and the user reads it aloud in that language. The terminal then receives, through the microphone, the voice of the user reading the text, that is, the first voice information input by the local-end user; it obtains the language type of this first voice information and determines it as the reference language type. The terminal then displays the contact information of all contacts in the address book corresponding to the reference language type and/or all call records corresponding to the reference language type, which avoids the inconvenience of manually entering a language type.
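A minimal sketch of this hands-free path, with language identification stubbed out (the patent delegates it to the server-side query or the terminal-side correspondence table described in the following embodiments); identifyLanguage and referenceLanguageFromVoice are hypothetical names.

```kotlin
// Minimal sketch: derive the reference language type from spoken input (names are assumptions).
data class FirstVoiceInformation(val samples: DoubleArray)

// Stand-in for the terminal- or server-side language identification described below.
fun identifyLanguage(voice: FirstVoiceInformation): String? =
    if (voice.samples.isEmpty()) null else "English" // placeholder result

// Receive the first voice information, identify its language type, and use it as the
// reference language type; fall back to a manual prompt when identification fails.
fun referenceLanguageFromVoice(voice: FirstVoiceInformation, manualFallback: () -> String): String =
    identifyLanguage(voice) ?: manualFallback()
```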
According to the technical scheme provided by the embodiment of the disclosure, the terminal can receive the voice information input by the home terminal user, and the language type of the voice information is determined as the reference language type, so that the contact or call record corresponding to the language type of the voice information is displayed, inconvenience caused when the home terminal user manually inputs the language type is avoided, and user experience is improved.
In one embodiment, to obtain the language type of the voice information in step 102, the terminal may send a query request including the opposite-end user's voice information to the server and then receive query result information sent by the server, where the query result information includes the language type of the voice information queried by the server.
For example, if a device for recognizing the language type were provided on the terminal side, it could increase the terminal's hardware cost or consume computing resources and affect the terminal's processing speed. To avoid this, the device for identifying the language type may be deployed in the server: after the terminal acquires the voice information, it sends the query request to the server so that the server can identify the language type of the voice information. After identifying the language type of the voice information included in the query request, the server sends that language type to the terminal as query result information, and the terminal thereby obtains the language type of the voice information.
For example, the server is provided with a corresponding relationship between a tone feature and a language type, the tone feature indicates different tones obtained when the same text is read aloud through different accents, the accent of the voice information can be distinguished through the tone of the voice information, and the language type of the voice information can be further acquired according to the accent of the voice information. After receiving a query request sent by a terminal, a server firstly acquires voice information included in the query request, acquires tone characteristics of the voice information, and then acquires the language type of the voice information according to the tone characteristics.
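A hedged sketch of this terminal-server exchange: the query request carries the voice samples and the result carries the identified language type. The message shapes, the mean-absolute-amplitude stand-in for a tone feature, and the threshold values are all assumptions; a real system would use proper acoustic features and models.

```kotlin
import kotlin.math.abs

// Minimal sketch of the query exchange (message and field names are assumptions;
// transport and serialization are omitted).
data class QueryRequest(val voiceSamples: DoubleArray)
data class QueryResult(val languageType: String)

// Server side: reduce the voice to a crude "tone feature". Mean absolute amplitude
// is only a placeholder for a real acoustic feature such as pitch statistics.
fun extractToneFeature(samples: DoubleArray): Double =
    if (samples.isEmpty()) 0.0 else samples.map { abs(it) }.average()

// Server side: map the tone feature to a language type via an assumed correspondence.
fun handleQuery(request: QueryRequest): QueryResult {
    val feature = extractToneFeature(request.voiceSamples)
    val languageType = when {
        feature < 0.2 -> "Mandarin"
        feature < 0.5 -> "Henan dialect"
        else -> "English"
    }
    return QueryResult(languageType)
}
```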
According to the technical scheme provided by the embodiment of the disclosure, the language type of the voice information of the opposite-end user is acquired through the server, so that hardware or software resources occupied when the language type is acquired at the terminal side are avoided, and the convenience for acquiring the language type is improved.
In one embodiment, to obtain the language type of the voice information in step 102, the terminal may first obtain the tone feature of the voice information and then obtain the language type according to the correspondence between tone features and language types.
For example, during initialization, a corresponding relationship between a tone feature and a language type may be set in the terminal, where the corresponding relationship describes a corresponding relationship between different tones and language types, and the tone feature describes different tones obtained when reading the same text through different accents, so that the accent of the voice information may be distinguished through the tone of the voice information, and the language type of the voice information may be obtained according to the accent of the voice information.
After the terminal acquires the voice information, it may first obtain the tone feature of the voice information and then look up the language type corresponding to that tone feature; this language type is the language type of the voice information. Obtaining the language type on the terminal avoids the time spent on interaction between the terminal and the server and speeds up acquisition of the language type.
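A minimal sketch of such a terminal-side correspondence table, again using a single scalar stand-in for the tone feature; the table contents, the nearest-feature rule, and the names are assumptions.

```kotlin
import kotlin.math.abs

// Minimal sketch of a terminal-side correspondence between tone features and
// language types (feature values, matching rule, and names are all assumptions).
val toneFeatureTable: Map<Double, String> = mapOf(
    0.15 to "Mandarin",
    0.35 to "Henan dialect",
    0.55 to "Minnan dialect",
    0.75 to "English"
)

// Pick the language type whose reference feature is closest to the measured one.
fun languageFromToneFeature(toneFeature: Double): String =
    toneFeatureTable.entries.minByOrNull { abs(it.key - toneFeature) }!!.value
```

Installing the table at initialization and doing the lookup locally reflects the timeliness argument above: no round trip to the server is needed.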
In the technical scheme provided by the embodiment of the disclosure, the language type of the voice information can be obtained on the terminal side through the corresponding relation between the tone characteristic and the language type, so that the timeliness and convenience for obtaining the language type of the voice information are improved.
In one embodiment, in step 102, the language type of the voice information may be obtained by first obtaining a second operation instruction, where the second operation instruction includes a first standard language type, and then storing the language type of the voice information as the first standard language type according to the second operation instruction.
For example, in practice the opposite-end user's accent may be so unusual that neither the terminal nor the server can recognize it automatically, and only the local-end user can identify it. Therefore, after acquiring the voice information of the opposite-end user, the terminal may display an input box on the screen to prompt the local-end user to enter the language type of the voice information. The user enters the identified first standard language type; after receiving this input, the terminal stores the first standard language type as the language type of the voice information and stores the call record of the opposite-end user according to the first standard language type.
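A short sketch of this fallback, assuming the on-screen prompt is modeled as a callback that returns the second operation instruction; SecondOperationInstruction and resolveLanguageType are hypothetical names.

```kotlin
// Minimal sketch of the manual fallback (all names are assumptions).
data class SecondOperationInstruction(val firstStandardLanguageType: String)

fun resolveLanguageType(
    autoRecognized: String?,                        // null when automatic recognition failed
    promptUser: () -> SecondOperationInstruction    // e.g. read the on-screen input box
): String =
    // Keep the automatically recognized type if there is one; otherwise ask the local-end user.
    autoRecognized ?: promptUser().firstStandardLanguageType

fun main() {
    val languageType = resolveLanguageType(autoRecognized = null) {
        SecondOperationInstruction("Minnan dialect") // value typed by the local-end user
    }
    println(languageType) // Minnan dialect
}
```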
According to the technical scheme provided by the embodiment of the disclosure, if the terminal or the server cannot automatically identify the language type of the voice information, the user can manually input the language type of the voice information, and the terminal stores the language type of the voice information according to the information input by the user, so that the situation that the terminal cannot acquire the language type of the voice information is avoided, and the user experience is improved.
In one embodiment, as shown in fig. 1d, the method further comprises step 108 and step 109:
in step 108, a third operation instruction is obtained, wherein the third operation instruction comprises a second standard language type.
For example, if the opposite-end user's accent is unusual, the terminal or the server may have difficulty recognizing it accurately and recognition errors can occur. Therefore, after the terminal recognizes the language type of the voice information, it can display the language type on the screen together with a modification option.
In step 109, the language type of the voice message is modified to the second standard language type according to the third operation instruction.
In an example, after receiving a third operation instruction input by a user, the terminal acquires a second standard language type in the third operation instruction, then modifies the language type of the voice information into the second standard language type, and stores a call record of an opposite-end user according to the second standard language type.
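A minimal sketch of steps 108 and 109, assuming the recognized language types are kept in a simple per-number store; ThirdOperationInstruction, LanguageTypeStore, and modify are hypothetical names.

```kotlin
// Minimal sketch of correcting a mis-recognized language type (names are assumptions).
data class ThirdOperationInstruction(val secondStandardLanguageType: String)

class LanguageTypeStore {
    private val byPhone = mutableMapOf<String, String>() // contact phone -> language type

    fun save(phone: String, languageType: String) {
        byPhone[phone] = languageType
    }

    // Steps 108-109: overwrite the stored type with the user-supplied correction.
    fun modify(phone: String, instruction: ThirdOperationInstruction) {
        byPhone[phone] = instruction.secondStandardLanguageType
    }

    fun get(phone: String): String? = byPhone[phone]
}
```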
The above embodiments are equally applicable to the solutions shown in Fig. 1b and Fig. 1c.
According to the technical scheme, when the language type of the voice information automatically identified by the terminal or the server is wrong, the user can input the correct language type, the terminal can modify the language type of the stored voice information according to the information input by the user, the accuracy of the terminal for acquiring the language type is improved, and further the user experience is improved.
The implementation is described in detail below by way of several embodiments.
Fig. 2 is an interaction diagram illustrating a voice information processing method according to an exemplary embodiment. The method is performed by a terminal interacting with a server; the terminal may be a mobile phone, a tablet computer, a smart watch, or another device capable of making voice calls, which is not limited in this disclosure. As shown in Fig. 2, the voice information processing method includes the following steps:
in step 201, the terminal obtains the voice information of the opposite terminal user in the current call.
In step 202, the terminal sends a query request to the server, where the query request includes voice information of the peer user.
In step 203, the server obtains the pitch characteristics of the voice information.
In step 204, the server obtains the language type of the voice message according to the tone feature of the voice message.
In step 205, the server transmits query result information including the language type of the voice information to the terminal.
In step 206, the terminal stores the call record of the current call according to the language type.
In step 207, the terminal receives the reference language type input by the home terminal user.
In step 208, the terminal presents a plurality of call records corresponding to the reference language type.
The embodiment of the disclosure provides a voice information processing method, in the method, a call record for communicating with an opposite terminal user is stored according to a language type of the opposite terminal user, when a local terminal user searches the call record again, the language type of the opposite terminal user during the current communication can be firstly recalled, then the terminal searches the call record according to the language type indicated by the local terminal user, the accuracy of searching the call record is improved, the time for searching the call record is shortened, and further the user experience is improved.
Fig. 3 is a flowchart illustrating a voice information processing method according to an exemplary embodiment, where an execution subject is a terminal, and the terminal may be a mobile phone, a tablet computer, a smart watch, or other devices capable of performing voice call, which is not limited in this disclosure, and as shown in fig. 3, the voice information processing method includes the following steps:
in step 301, the terminal obtains the voice information of the opposite terminal user in the current call, and executes step 302.
In step 302, the terminal acquires the tone characteristic of the voice message and performs step 303.
In step 303, the terminal obtains the language type of the voice message according to the corresponding relationship between the tone feature and the language type, and executes step 304.
In step 304, the terminal stores the call record of the current call according to the language type, and executes step 305.
In step 305, the terminal determines whether a contact way of the opposite terminal user is stored in the address book; when the contact information of the opposite terminal user is not stored in the address book, executing step 306; when the contact information of the opposite terminal user is stored in the address book, step 307 is executed.
In step 306, the terminal prompts the user whether to save the contact information of the opposite terminal user, and the process is ended.
In step 307, the terminal adds a language type to the contact information of the opposite terminal user, and then executes step 308.
In step 308, the terminal receives the reference language type input by the user, and executes step 309.
In step 309, the terminal presents the contact addresses of the plurality of contacts corresponding to the reference language type input by the user, and/or the plurality of call records.
The embodiment of the disclosure provides a voice information processing method, in the method, a call record for communicating with an opposite terminal user is stored according to a language type of the opposite terminal user, when a local terminal user searches the call record again, the language type of the opposite terminal user during the current communication can be firstly recalled, then the terminal searches the call record according to the language type indicated by the local terminal user, the accuracy of searching the call record is improved, the time for searching the call record is shortened, and further the user experience is improved.
Fig. 4 is a flowchart illustrating a voice information processing method according to an exemplary embodiment, where an execution subject is a terminal, and the terminal may be a mobile phone, a tablet computer, a smart watch, or other devices capable of performing voice call, which is not limited in this disclosure, and as shown in fig. 4, the voice information processing method includes the following steps:
in step 401, the terminal obtains the voice information of the opposite terminal user in the current call, and executes step 402.
In step 402, the terminal acquires the tone characteristic of the voice information and performs step 403.
In step 403, the terminal obtains the language type of the voice message according to the corresponding relationship between the tone feature and the language type, and executes step 404.
In step 404, the terminal obtains a third operation instruction, where the third operation instruction includes a second standard language type, and performs step 405.
In step 405, the terminal modifies the language type of the voice message to the second standard language type according to a third operation instruction, and executes step 406.
In step 406, the terminal determines whether a contact way of the opposite-end user is stored in the address book; when the contact information of the opposite-end user is not stored in the address book, executing step 407; when the contact address of the opposite terminal user is stored in the address book, step 408 is executed.
In step 407, the terminal prompts the user whether to save the contact information of the opposite terminal user, and the process is ended.
In step 408, the terminal stores the language type of the opposite user as the second standard language type, and executes step 409.
In step 409, the terminal receives the first voice message input by the home terminal user, and step 410 is executed.
In step 410, the terminal acquires the language type of the first voice message, and performs step 411.
In step 411, the terminal displays the contact information of the plurality of contacts corresponding to the language type of the first voice message in the address book and/or the plurality of call records corresponding to the language type of the first voice message.
The embodiment of the disclosure provides a voice information processing method, in the method, a call record for communicating with an opposite terminal user is stored according to a language type of the opposite terminal user, when a local terminal user searches the call record again, the language type of the opposite terminal user during the current communication can be firstly recalled, then the terminal searches the call record according to the language type indicated by the local terminal user, the accuracy of searching the call record is improved, the time for searching the call record is shortened, and further the user experience is improved.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 5a is a block diagram illustrating a speech information processing apparatus 50 according to an exemplary embodiment, where the apparatus 50 may be implemented as part or all of an electronic device through software, hardware or a combination of both. As shown in fig. 5a, the speech information processing apparatus 50 includes:
a first obtaining module 501, configured to obtain voice information of an opposite-end user in a current call.
A second obtaining module 502, configured to obtain a language type of the voice information according to the voice information.
A storage module 503, configured to store a call record of the current call according to the language type, where the call record includes a contact telephone, a call time, or a call duration of the peer user.
In one embodiment, as shown in fig. 5b, the apparatus 50 further comprises:
a first determining module 504, configured to determine whether the contact information of the peer user is stored in the address book.
And the processing module 505 is configured to add the language type to the contact information of the opposite-end user if the contact information of the opposite-end user is stored in the address book.
In one embodiment, as shown in fig. 5c, the apparatus 50 further comprises:
a third obtaining module 506, configured to obtain a first operation instruction, where the first operation instruction includes a reference language type;
the displaying module 507 is configured to display, according to the first operation instruction, at least one call record corresponding to the reference language type and/or a contact address of at least one contact corresponding to the reference language type in the address book.
In one embodiment, as shown in fig. 5d, the third obtaining module 506 includes:
the first receiving sub-module 5061 is configured to receive the input first voice information.
The first obtaining sub-module 5062 is configured to obtain a language type of the first speech information.
A determining submodule 5063, configured to determine a language type of the first speech information as the reference language type.
In one embodiment, as shown in fig. 5e, the second obtaining module 502 includes:
the sending submodule 5021 is configured to send a query request to a server, where the query request includes the voice information of the peer user.
The second receiving sub-module 5022 is configured to receive query result information sent by the server, where the query result information includes a language type of the voice information queried by the server.
The above-described embodiments are equally applicable to the speech information processing apparatus 50 shown in Fig. 5a, 5b, 5c and 5d.
In one embodiment, as shown in fig. 5f, the second obtaining module 502 includes:
the second obtaining submodule 5023 is used for obtaining the tone characteristics of the voice information.
The third obtaining sub-module 5024 is configured to obtain the language type of the voice message according to the correspondence between the tone feature and the language type.
The above-described embodiments are equally applicable to the speech information processing apparatus 50 shown in Fig. 5a, 5b, 5c and 5d.
In one embodiment, as shown in fig. 5g, the second obtaining module 502 includes:
the fourth obtaining sub-module 5025 is configured to obtain a second operation instruction, where the second operation instruction includes the first standard language type.
The storage submodule 5026 is configured to store the language type of the voice message as the first standard language type according to the second operation instruction.
The above-described embodiments are equally applicable to the speech information processing apparatus 50 shown in Fig. 5a, 5b, 5c and 5d.
In one embodiment, as shown in fig. 5h, the apparatus 50 further comprises:
a fourth obtaining module 508, configured to obtain a third operation instruction, where the third operation instruction includes a second standard language type;
a modifying module 509, configured to modify the language type of the voice message into the second standard language type according to the third operation instruction.
The above-described embodiments are equally applicable to the speech information processing apparatus 50 shown in Fig. 5a, 5b, 5c, 5d, 5e, 5f and 5g.
The embodiment of the disclosure provides a voice information processing device, which can store a call record for communicating with an opposite-end user according to a language type of the opposite-end user, when a local-end user searches the call record again, the user firstly remembers the language type of the opposite-end user during the current communication, and then the device searches the call record according to the language type, so that the accuracy of searching the call record is improved, the time for searching the call record is shortened, and further the user experience is improved.
An exemplary embodiment of the present disclosure shows a voice information processing apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: acquiring voice information of an opposite-end user in the current call; acquiring the language type of the voice information; and storing the call record of the current call according to the language type, wherein the call record comprises the contact telephone, the call time or the call duration of the opposite-end user.
In one embodiment, the processor may be further configured to: determining whether the contact information of the opposite-end user is stored in an address book; and if the contact information of the opposite-end user is stored in the address book, adding the language type in the contact information of the opposite-end user.
In one embodiment, the processor may be further configured to: and displaying at least one call record corresponding to the language type and/or the contact way of at least one contact corresponding to the language type in the address book according to the indication.
In one embodiment, the processor may be further configured to: receiving input first voice information; acquiring the language type of the first voice information; and displaying at least one call record corresponding to the language type of the first voice message and/or the contact way of at least one contact corresponding to the language type of the first voice message in the address book.
In one embodiment, the processor may be further configured to: sending a query request to a server, wherein the query request comprises voice information of the opposite-end user; and receiving query result information sent by the server, wherein the query result information comprises the language type of the voice information queried by the server.
In one embodiment, the processor may be further configured to: acquiring tone characteristics of the voice information; and acquiring the language type of the voice information according to the corresponding relation between the tone characteristics and the language type.
In one embodiment, the processor may be further configured to: receiving the language type of the input voice information.
In one embodiment, the processor may be further configured to: and modifying the language type of the voice information according to the indication.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 6 is a block diagram illustrating a voice information processing apparatus 60, which is suitable for a terminal device, according to an exemplary embodiment. For example, the apparatus 60 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
The apparatus 60 may include one or more of the following components: processing component 602, memory 604, power component 606, multimedia component 608, audio component 610, input/output (I/O) interface 612, sensor component 614, and communication component 616.
The processing component 602 generally controls overall operation of the device 60, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the apparatus 60. Examples of such data include instructions for any application or method operating on the device 60, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The multimedia component 608 includes a screen that provides an output interface between the device 60 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 60 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, audio component 610 includes a Microphone (MIC) configured to receive external audio signals when apparatus 60 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 614 includes one or more sensors for providing various aspects of status assessment for the device 60. For example, the sensor assembly 614 may detect the open/closed status of the device 60 and the relative positioning of components such as the display and keypad of the device 60. The sensor assembly 614 may also detect a change in the position of the device 60 or of a component of the device 60, the presence or absence of user contact with the device 60, the orientation or acceleration/deceleration of the device 60, and a change in the temperature of the device 60. The sensor assembly 614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communications between the apparatus 60 and other devices in a wired or wireless manner. The device 60 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 60 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 604 including instructions executable by the processor 620 of the apparatus 60 to perform the above-described method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
There is also provided a non-transitory computer-readable storage medium having stored therein instructions that, when executed by a processor of the apparatus 60, enable the apparatus 60 to perform the voice information processing method described above, the method comprising:
acquiring voice information of an opposite-end user in the current call; acquiring the language type of the voice information; and storing the call record of the current call according to the language type, wherein the call record comprises the contact telephone, the call time or the call duration of the opposite-end user.
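For illustration only, the following Python sketch shows one way such a flow could be wired together; `detect_language()`, `CallRecord`, `store_current_call()`, and the field names are assumptions of this sketch rather than elements of the disclosure.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List


@dataclass
class CallRecord:
    contact_phone: str
    call_time: datetime
    duration_seconds: int
    language_type: str


def detect_language(voice_info: bytes) -> str:
    # Placeholder for the recognition step described below (server query or
    # local tone-feature lookup); always returns a fixed value here.
    return "Mandarin"


call_log: List[CallRecord] = []


def store_current_call(contact_phone: str, voice_info: bytes, duration_seconds: int) -> CallRecord:
    # Acquire the language type from the opposite-end user's voice information,
    # then save the call record tagged with that language type.
    language_type = detect_language(voice_info)
    record = CallRecord(contact_phone, datetime.now(), duration_seconds, language_type)
    call_log.append(record)
    return record
```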
In one embodiment, the method further comprises: determining whether the contact information of the opposite-end user is stored in an address book; and if the contact information of the opposite-end user is stored in the address book, adding the language type to the contact information of the opposite-end user.
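A minimal sketch of this tagging step, assuming the address book is a plain dictionary keyed by phone number; `address_book`, its entries, and `tag_contact_language()` are illustrative names only.

```python
from typing import Dict, Optional

# Illustrative address book: phone number -> contact details.
address_book: Dict[str, dict] = {
    "13800000001": {"name": "Li Lei", "language_type": None},
}


def tag_contact_language(contact_phone: str, language_type: str) -> bool:
    # Add the language type to the contact's entry only if the opposite-end
    # user is already stored in the address book.
    contact: Optional[dict] = address_book.get(contact_phone)
    if contact is None:
        return False
    contact["language_type"] = language_type
    return True
```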
In one embodiment, the method further comprises: displaying, according to the indication, at least one call record corresponding to the language type and/or the contact information of at least one contact in the address book corresponding to the language type.
In one embodiment, the displaying, according to the indication, of at least one call record corresponding to the language type and/or the contact information of at least one contact in the address book corresponding to the language type includes: receiving input first voice information; acquiring the language type of the first voice information; and displaying at least one call record corresponding to the language type of the first voice information and/or the contact information of at least one contact in the address book corresponding to the language type of the first voice information.
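A sketch of this lookup, assuming call records carry a `language_type` attribute and address-book entries a `language_type` key as in the earlier sketches; the recognizer is passed in as a callable so the block stands alone.

```python
from typing import Callable, Dict, List, Tuple


def find_by_spoken_language(
    first_voice_info: bytes,
    detect_language: Callable[[bytes], str],
    call_log: List,                     # objects with a .language_type attribute
    address_book: Dict[str, dict],      # phone number -> contact details
) -> Tuple[list, Dict[str, dict]]:
    # Use the language type of the local-end user's spoken input as the filter key.
    reference_type = detect_language(first_voice_info)
    records = [r for r in call_log if r.language_type == reference_type]
    contacts = {phone: entry for phone, entry in address_book.items()
                if entry.get("language_type") == reference_type}
    return records, contacts
```

In this arrangement the user never has to remember a name or number, only the language spoken during the call, which is the retrieval cue the disclosure is built around.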
In one embodiment, the obtaining the language type of the voice information includes: sending a query request to a server, wherein the query request comprises voice information of the opposite-end user; and receiving query result information sent by the server, wherein the query result information comprises the language type of the voice information queried by the server.
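A sketch of the server round trip, assuming a hypothetical endpoint URL and a JSON response of the form `{"language_type": ...}`; neither the URL nor the response format is specified by the disclosure.

```python
import json
import urllib.request


def query_language_type(voice_info: bytes,
                        url: str = "https://example.com/language-type") -> str:
    # Send the opposite-end user's voice information in the query request and
    # read the language type out of the query result information.
    request = urllib.request.Request(
        url,
        data=voice_info,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.loads(response.read().decode("utf-8"))["language_type"]
```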
In one embodiment, the obtaining the language type of the voice information includes: acquiring tone characteristics of the voice information; and acquiring the language type of the voice information according to the corresponding relation between the tone characteristics and the language type.
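A sketch of the locally pre-stored correspondence, assuming the tone characteristics have already been reduced to a small numeric feature vector by the audio front end; the reference vectors and the distance threshold are illustrative values.

```python
import math
from typing import Dict, Optional, Sequence

# Hypothetical locally pre-stored correspondence:
# language type -> reference tone-feature vector.
LOCAL_TONE_TABLE: Dict[str, Sequence[float]] = {
    "Mandarin":  (0.62, 0.18, 0.45),
    "Cantonese": (0.71, 0.25, 0.38),
    "English":   (0.40, 0.30, 0.55),
}


def match_language_type(tone_features: Sequence[float],
                        max_distance: float = 0.5) -> Optional[str]:
    # Pick the language type whose stored tone characteristics are closest to
    # the extracted features; return None if nothing is close enough.
    best_type, best_dist = None, math.inf
    for language_type, reference in LOCAL_TONE_TABLE.items():
        dist = math.dist(tone_features, reference)
        if dist < best_dist:
            best_type, best_dist = language_type, dist
    return best_type if best_dist <= max_distance else None
```

A nearest-neighbour lookup with a rejection threshold keeps the matching cheap and entirely on-device, which fits the locally pre-stored embodiment; a production recognizer would use richer features and a trained model.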
In one embodiment, the obtaining the language type of the voice information includes: receiving an input language type of the voice information.
In one embodiment, the method further comprises: modifying the language type of the voice information according to the indication.
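Both the manual entry path and the later modification path reduce to overwriting a stored value, as in this sketch where the call record is represented as a plain dictionary and `set_language_type()` is a hypothetical helper.

```python
def set_language_type(call_record: dict, standard_language_type: str) -> dict:
    # Store or overwrite the language type carried by the user's operation
    # instruction on the given call record.
    call_record["language_type"] = standard_language_type
    return call_record


# Usage: correcting a record that was tagged with the wrong language type.
record = {"contact_phone": "13800000001", "language_type": "Mandarin"}
set_language_type(record, "Cantonese")
```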
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (14)
1. A voice information processing method, comprising:
acquiring voice information of an opposite-end user in the current call;
acquiring, according to the voice information, the language type of the voice information through information interaction with a server or through locally pre-stored information;
storing a call record of the current call according to the language type, wherein the call record comprises a contact telephone, call time or call duration of the opposite-end user;
the method further comprises the following steps:
acquiring a first operation instruction, wherein the first operation instruction comprises a reference language type;
displaying, according to the first operation instruction, at least one call record corresponding to the reference language type and/or the contact information of at least one contact in the address book corresponding to the reference language type;
the obtaining the first operation instruction, where the first operation instruction includes the reference language type, includes:
starting a microphone and receiving first voice information input by a local-end user, wherein the language type of the first voice information is the same as that of the voice information of the opposite-end user;
acquiring the language type of the first voice information;
determining the language type of the first voice information as the reference language type.
2. The method of claim 1, further comprising:
determining whether the contact information of the opposite-end user is stored in an address book;
and if the contact information of the opposite-end user is stored in the address book, adding the language type to the contact information of the opposite-end user.
3. The method according to claim 1 or 2, wherein the obtaining the language type of the voice information through information interaction with a server according to the voice information comprises:
sending a query request to a server, wherein the query request comprises voice information of the opposite-end user;
and receiving query result information sent by the server, wherein the query result information comprises the language type of the voice information queried by the server.
4. The method according to claim 1 or 2, wherein the obtaining the language type of the voice information through the locally pre-stored information according to the voice information comprises:
acquiring tone characteristics of the voice information;
and acquiring the language type of the voice information according to the corresponding relation between the tone characteristics and the language type which are pre-stored locally.
5. The method according to claim 1 or 2, wherein the obtaining the language type of the voice information comprises:
acquiring a second operation instruction, wherein the second operation instruction comprises a first standard language type;
and storing the language type of the voice information as the first standard language type according to the second operation instruction.
6. The method according to claim 1 or 2, characterized in that the method further comprises:
acquiring a third operation instruction, wherein the third operation instruction comprises a second standard language type;
and modifying the language type of the voice information into the second standard language type according to the third operation instruction.
7. A voice information processing apparatus, characterized by comprising:
the first acquisition module is used for acquiring the voice information of an opposite-end user in the current call;
the second acquisition module is used for acquiring, according to the voice information, the language type of the voice information through information interaction with a server or through locally pre-stored information;
the storage module is used for storing the call record of the current call according to the language type, wherein the call record comprises the contact telephone, the call time or the call duration of the opposite-end user;
the device further comprises:
the third acquisition module is used for acquiring a first operation instruction, and the first operation instruction comprises a reference language type;
the display module is used for displaying, according to the first operation instruction, at least one call record corresponding to the reference language type and/or the contact information of at least one contact in the address book corresponding to the reference language type;
the third acquisition module includes:
the first receiving submodule is used for starting a microphone and receiving first voice information input by a local-end user, wherein the language type of the first voice information is the same as that of the voice information of the opposite-end user;
the first obtaining submodule is used for obtaining the language type of the first voice information;
the determining submodule is used for determining the language type of the first voice information as the reference language type.
8. The apparatus of claim 7, further comprising:
the first determining module is used for determining whether the contact information of the opposite-end user is stored in the address book;
and the processing module is used for adding the language type to the contact information of the opposite-end user if the contact information of the opposite-end user is stored in the address book.
9. The apparatus of claim 7 or 8, wherein the second acquisition module comprises:
the sending submodule is used for sending a query request to a server, wherein the query request comprises the voice information of the opposite-end user;
and the second receiving submodule is used for receiving query result information sent by the server, wherein the query result information comprises the language type of the voice information queried by the server.
10. The apparatus of claim 7 or 8, wherein the second acquisition module comprises:
the second obtaining submodule is used for obtaining tone characteristics of the voice information;
and the third obtaining submodule is used for obtaining the language type of the voice information according to the corresponding relation between the tone characteristics and the language type which are pre-stored locally.
11. The apparatus of claim 7 or 8, wherein the second acquisition module comprises:
the fourth obtaining submodule is used for obtaining a second operation instruction, and the second operation instruction comprises a first standard language type;
and the storage submodule is used for storing the language type of the voice information as the first standard language type according to the second operation instruction.
12. The apparatus of claim 7 or 8, further comprising:
the fourth obtaining module is used for obtaining a third operation instruction, and the third operation instruction comprises a second standard language type;
and the modifying module is used for modifying the language type of the voice information into the second standard language type according to the third operation instruction.
13. A voice information processing apparatus, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring voice information of an opposite-end user in the current call;
acquiring, according to the voice information, the language type of the voice information through information interaction with a server or through locally pre-stored information;
storing a call record of the current call according to the language type, wherein the call record comprises a contact telephone, call time or call duration of the opposite-end user;
the processor is further configured to:
acquiring a first operation instruction, wherein the first operation instruction comprises a reference language type;
displaying, according to the first operation instruction, at least one call record corresponding to the reference language type and/or the contact information of at least one contact in the address book corresponding to the reference language type;
the obtaining the first operation instruction, where the first operation instruction includes the reference language type, includes:
starting a microphone and receiving first voice information input by a local-end user, wherein the language type of the first voice information is the same as that of the voice information of the opposite-end user;
acquiring the language type of the first voice information;
determining the language type of the first voice information as the reference language type.
14. A computer-readable storage medium having stored thereon computer instructions, which when executed by a processor, perform the steps of the method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610932254.6A CN106657543B (en) | 2016-10-31 | 2016-10-31 | Voice information processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106657543A CN106657543A (en) | 2017-05-10 |
CN106657543B (en) | 2020-02-07 |
Family
ID=58820431
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610932254.6A Active CN106657543B (en) | Voice information processing method and device | 2016-10-31 | 2016-10-31 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106657543B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107707459A (en) * | 2017-10-09 | 2018-02-16 | 深圳市沃特沃德股份有限公司 | Method for sending voice information by an intercom, and intercom |
CN109582976A (en) * | 2018-10-15 | 2019-04-05 | 华为技术有限公司 | Translation method and electronic device based on voice call |
CN109448699A (en) * | 2018-12-15 | 2019-03-08 | 深圳壹账通智能科技有限公司 | Voice-to-text conversion method, apparatus, computer device and storage medium |
CN112995564A (en) * | 2019-12-17 | 2021-06-18 | 佛山市云米电器科技有限公司 | Call method based on a display device, display device and storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103067608A (en) * | 2013-01-23 | 2013-04-24 | 广东欧珀移动通信有限公司 | Method and system for mobile terminal recent call searching |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103581410B (en) * | 2012-07-24 | 2017-12-08 | 中兴通讯股份有限公司 | Call record remarking method, device and mobile terminal |
CN103634472B (en) * | 2013-12-06 | 2016-11-23 | 惠州Tcl移动通信有限公司 | Method, system and mobile phone for judging user mood and personality according to call voice |
CN105654950B (en) * | 2016-01-28 | 2019-07-16 | 百度在线网络技术(北京)有限公司 | Adaptive voice feedback method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||