WO2023219686A1 - Speculative execution of dataflow program nodes - Google Patents
- Publication number
- WO2023219686A1 (PCT/US2023/013449)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- program
- utterance
- prefix
- predicted
- complete
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/1822—Parsing for meaning understanding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/16—Speech classification or search using artificial neural networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Definitions
- Conversational systems process user utterances and respond by automatically performing actions, such as answering questions, invoking application programming interfaces (APIs), or otherwise assisting a user based on the user utterances.
- a computer-implemented method comprises sequentially receiving a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance. For each sequentially received utterance prefix, a complete dataflow program is predicted based on the utterance prefix.
- the complete dataflow program includes a plurality of program nodes to be executed to satisfy the full user utterance. One or more program nodes are selected from the predicted complete dataflow program to speculatively execute based on at least the utterance prefix.
- FIGS. 1A-1D show an example data-flow architecture for an automated assistant.
- FIG. 1E shows an example dialogue between a user and an automated assistant.
- FIG. 2 is a flow diagram illustrating an example method for speculatively executing nodes of a data-flow program.
- FIG. 3 schematically depicts the execution of nodes of a data-flow program based on received utterance prefixes.
- FIG. 4 is a flow diagram illustrating an example method for training a data-flow program based on execution costs.
- FIG. 5 is a flow diagram illustrating an example method for speculatively executing nodes of a data-flow program.
- FIG. 6 shows an example computing system.
- Conversational computing interfaces may be used to interact with users via natural language, for example via speech.
- Although the present disclosure describes an automated assistant as an example conversational computing interface, this example is non-limiting; conversational computing interfaces may be implemented according to the present disclosure for any suitable purpose, for example, to permit a user to use natural language to interact with any suitable computer hardware and/or computer software.
- Every reference to an automated assistant in the present disclosure applies equally to any other conversational computing interface or other computing framework configured to respond to speech.
- Automated assistants can use natural language processing (NLP) techniques to process an input user utterance, e.g., user speech, in order to perform a predefined, hard-coded action related to the input user utterance.
- an automated assistant may support a predefined plurality of hard-coded templates, where each template has a number of slots that can be filled to parametrize a hard-coded action.
- an automated assistant may support a pre-defined interaction to invoke an application programming interface (API), e.g., to reserve a seat at a restaurant, call a ride-hailing service, or look up the weather.
- Although automated assistants may support a plurality of different predefined actions via predefined templates, an automated assistant that only supports predefined actions via templates may not be configured to perform more complex or novel behaviors.
- the present disclosure is directed to an automated assistant that processes user utterances using data-flow programs in a data-flow programming language, e.g., in addition to or instead of using templates.
- the automated assistant uses a previously-trained code generation machine to generate and/or output a data-flow program for a user utterance.
- the data-flow program uses a plurality of pre-defined functions to define individual steps for processing the user utterance.
- "Data-flow program" is used synonymously with "data-flow graph" herein.
- Processing user utterances by using data-flow programs generated by a previously-trained code generation machine may result in an improved user experience, improved efficiency of an automated assistant or other interactive computer service, e.g., improved storage usage and/or improved processing time, and/or improved ability for responding to different user utterances.
- the data-flow programs may encode a variety of different processing strategies for different user utterances, including performing calculations based on the user utterances, accessing APIs to respond to the user utterances, etc.
- the code generation machine may generate data-flow programs that are specific to a user utterance being processed, which may enable processing the user utterance more efficiently (e.g., without performing irrelevant steps) and with improved user satisfaction (e.g., by generating programs that better address requests expressed in the user utterance).
- FIG. 1A shows a data-flow architecture for an automated assistant system 100.
- Automated assistant system 100 is configured to process a user utterance 102 by operating a previously-trained code-generation machine 104 configured to output a data-flow program 106 for the user utterance 102.
- conversational computing interfaces such as automated assistant system 100 may further permit interaction via any other suitable input methods, for example, via touch screen inputs and/or button presses.
- the present disclosure focuses on processing a user utterance 102, other inputs such as button press events may be processed in similar fashion.
- previously-trained code-generation machine 104 may be configured to output a data-flow program 106 for one or more button press event(s) supplementing user speech.
- Previously-trained code generation machine 104 may be trained to recognize different kinds of non-linguistic input events, e.g., based on a specific button being pressed, timing of the button press relative to other input events, etc.
- Data-flow program 106 is shown as a graph including a plurality of function nodes, wherein the function nodes are depicted with inputs and outputs shown by edges/arrows.
- the data-flow program 106 is configured to produce a return value indicated by the bottom-most arrow.
- Previously-trained code-generation machine 104 is configured to add any of a plurality of predefined functions 110 to the data-flow program based on the user utterance. Each pre-defined function defines one or more individual steps for processing the user utterance 102.
- the data-flow program 106 is executable to cause the automated assistant to respond to the user utterance, for example by performing any suitable response action.
- the pre-defined functions of the data-flow program 106 may be executable to cause the automated assistant to perform any suitable response action, for example, outputting a response as speech and/or text, e.g., outputting an assistant response 120 as shown in FIG. 1 A, and/or invoking an API to perform an action using the API, e.g., ordering food from a restaurant, scheduling a ride with a ride-hailing service, scheduling a meeting in a calendar service, placing a phone call.
- the present disclosure focuses on examples in which the automated assistant responds to utterances by outputting an assistant response, e.g., as speech and/or text, these examples are non-limiting and the pre-defined functions and data-flow programs described herein may be configured to cause the automated assistant to respond to utterances in any suitable manner, for example by performing one or more actions using an API instead of or in addition to outputting an assistant response.
- The code-generation machine 104 described herein may be used to respond to user utterances in any suitable fashion.
- For example, the code-generation machine 104 may be configured to recognize a user utterance and produce, from the user utterance, a data-flow program that defines an executable plan for responding to the user utterance.
- the resulting data-flow program may be executed, for example by an automated assistant, to process the user utterance.
- the data-flow program for responding to the user utterance may be executed to respond to the user utterance without needing to generate any additional code using the code-generation machine 104.
- code-generation machine 104 is configured to output a complete plan for processing a user utterance.
- code-generation machine 104 may be configured to output a data-flow program for responding to a user utterance, wherein some or all of the data-flow program is executed before code-generation machine 104 is used to generate further code defining a further plan for completing processing of the user utterance.
- Code-generation machine 104 may be used to plan and execute data-flow programs in any suitable fashion, including completing planning before execution and/or interleaving planning and execution in any suitable fashion.
- the previously-trained code-generation machine 104 may be based on any suitable technology, such as state-of-the-art or future machine learning (ML), artificial intelligence (AI), and/or natural language processing (NLP) technologies.
- The previously-trained code-generation machine 104 includes an encoder machine configured to encode the user utterance 102 as a semantic feature, e.g., a vector in a semantic vector space learned by the previously-trained code-generation machine 104, and a decoder machine configured to decode the semantic feature by outputting one or more functions from the plurality of pre-defined functions 110.
- the decoder machine is configured to output the one or more functions according to a typed grammar for composing functions from the plurality of pre-defined functions 110, thereby constraining the outputs of the decoder machine to be well-typed, fully-executable data-flow programs.
- the plurality of pre-defined, composable functions 110 supports a range of different automated assistant behaviors, e.g., invoking APIs, answering questions, looking up data, and/or utilizing historical context from a context-specific dialogue history 130 maintained for the automated assistant.
- the context-specific dialogue history 130 may include any suitable aspects of the dialogue, for example user utterance 102, data-flow program 106, and/or a resulting assistant response 120.
- the user utterance 102, data-flow program 106 generated for the user utterance 102, and/or any relevant assistant response 120 by the automated assistant may be stored in the context-specific dialogue history 130.
- the context-specific dialogue history 130 defines a plurality of concepts, e.g., concepts 130A through 130N. “Concept” is used herein to refer to any relevant or potentially relevant aspects of the interaction between the user and the automated assistant.
- a concept may include an entity (e.g., a person, place, thing, number, date), an intent of a user query (e.g., an intent to order food, an intent to look up the weather, an intent to schedule a meeting), an action performed by the automated assistant (e.g., ordering food, looking up the weather, invoking an API, looking up information pertaining to an entity, recognizing a specific user utterance, performing a composite action composed of more than one action), or any other suitable feature.
- a concept may be defined in any suitable manner, for example based on text content of the user utterance 102.
- a concept 130A is defined in terms of a data-flow program fragment 132A.
- the data-flow program fragment 132A may include one or more functions configured to look up information pertaining to a specific entity, and/or one or more functions configured to cause the automated assistant to perform a specific action.
- the plurality of pre-defined functions 110 includes one or more history-accessing functions configured to access the context-specific dialogue history 130. Accordingly, data-flow program 106 may include such history-accessing functions.
- the plurality of pre-defined functions 110 and data-flow program 106 each include a history-accessing function 112 configured to access the context-specific dialogue history 130 as shown by the arrow.
- the plurality of pre-defined functions 110 includes a search-history function configured to look up a concept from the context-specific dialogue history 130, e.g., a getSalient() function configured to look up an entity that was previously discussed, or to look up an action that was previously performed.
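- By way of non-limiting illustration, the sketch below shows one way a search-history function in the spirit of getSalient() might behave; the Concept structure, attribute matching, and recency ordering are assumptions for illustration rather than the disclosed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    kind: str                      # e.g., "entity", "action", "intent"
    program_fragment: str          # data-flow code that defines/recomputes the concept
    attributes: dict = field(default_factory=dict)

def get_salient(history: list[Concept], kind: str, **constraints) -> Concept | None:
    # Walk the context-specific dialogue history from most recent to oldest and
    # return the first concept of the requested kind matching every constraint.
    for concept in reversed(history):
        if concept.kind != kind:
            continue
        if all(concept.attributes.get(k) == v for k, v in constraints.items()):
            return concept
    return None

# e.g., looking up a previously discussed person by partial name:
# get_salient(history, "entity", first_name="Tom")
```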
- the plurality of pre-defined functions 110 includes a program-rewriting function parametrized by a designated concept stored in the context-specific dialogue history and configured to generate a new data-flow program fragment related to the designated concept based on concepts from context-specific dialogue history 130, e.g., a Clobber() function configured to generate a new, re-written program based on the designated concept, for example, to write a new program for performing a previous action with regard to a new entity, or for performing a new action with regard to a previously-discussed entity.
- the designated concept stored in the context-specific dialogue history includes a target sub-concept
- the program-rewriting function is further parametrized by a replacing sub-concept for replacing the target sub-concept.
- the new data-flow program fragment corresponds to the designated concept with the target sub-concept being replaced by the replacing sub-concept.
- the new data-flow program fragment includes a rewritten dataflow program fragment, based on a historical data-flow program fragment corresponding to the designated concept, wherein a sub-program fragment corresponding to the target sub-concept is replaced by a different sub-program fragment corresponding to the replacing sub-concept.
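- Continuing the illustration, a program-rewriting function in the spirit of Clobber() might be sketched as follows, reusing the hypothetical Concept structure above; the naive textual substitution stands in for a structured rewrite of the program tree.

```python
def clobber(designated: Concept, target: str, replacement: str) -> Concept:
    # Produce a rewritten concept: the designated concept's program fragment
    # with the sub-fragment for the target sub-concept replaced by the
    # sub-fragment for the replacing sub-concept.
    return Concept(
        kind=designated.kind,
        program_fragment=designated.program_fragment.replace(target, replacement),
        attributes=dict(designated.attributes),
    )

# e.g., redirecting a reservation action to a different restaurant:
# clobber(reservation, 'Find("sushi restaurant")', 'Find("burrito restaurant")')
```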
- the automated assistant may be able to repeat actions, perform actions with modifications, look up relevant entities or other details related to previous actions, etc. For example, the automated assistant may repeat an action or perform an action with modifications, by re-executing code based on a program fragment 132A corresponding to a concept 130A from the context-specific dialogue history 130. Furthermore, if any error condition is reached during execution of data-flow program 106 prior to outputting an assistant response 120, the data-flow program 106 and/or other program fragments stored in context-specific dialogue history 130 may be re-executed either with or without modifications to recover from the error.
- FIG. IB shows a different view of automated assistant system 100 focusing on a search-history function 112 of the plurality of pre-defined functions 110.
- the search-history function 112 is configured to process an ambiguous user utterance 102 by resolving any ambiguities using concepts from context-specific dialogue history 130.
- search-history function 112 is configured to access one or more concepts from the context-specific dialogue history 130 to determine a disambiguating concept 134.
- disambiguating concept 134 is defined by a program fragment 136.
- search-history function 112 may be configured to search for the ambiguous entity, e.g., based on the partial name “Tom”, in the context-specific dialogue history 130 to find a clarifying entity that matches the ambiguous entity, e.g., an entity having a first name “Tom” or a related name such as “Thomas”.
- the clarifying entity may be defined by a full name, e.g., “Thomas Jones”.
- the clarifying entity may be defined by code configured to find the clarifying entity, e.g., code to lookup people named “Tom” in an address book of the user.
- disambiguating concept 134 indicates a bridging anaphora between a user utterance and a previous user utterance from the context-specific dialogue history.
- search-history function 112 may be configured to search for a concept corresponding to the location of the lunch with Charles.
- the concept may include a data-flow program fragment that recursively uses the search-history function 112 to find a salient event, namely the meeting with Charles, as well as further instructions to obtain the location of the salient event.
- concepts found by the search-history function 112 may include data-flow programs that recursively invoke the search-history function 112 to find any suitable sub-concepts, e.g., so that a salient concept is defined in terms of searching for other salient sub-concepts.
- FIG. 1C shows another different view of automated assistant system 100 focusing on a program-rewriting function 112’.
- Program-rewriting function 112’ is configured to produce a rewritten concept 140, e.g., defined by a rewritten program fragment 142, by starting with a designated concept 152 from the context-specific dialogue history 130, wherein the designated concept 152 includes at least one replacement target sub-concept 154 to be modified/replaced with a different, replacing sub-concept 156.
- designated concept 152 may refer to an action by the automated assistant including “placing a reservation at a local sushi restaurant”.
- target sub-concept 154 may be the description “sushi restaurant,” and the replacing sub-concept 156 may be an alternate description “burrito restaurant.”
- designated concept 152 may be defined by a program fragment for invoking an API to place restaurant reservations, wherein the API is parametrized by a program fragment corresponding to target sub-concept 154 that indicates a restaurant at which to place reservations.
- program-rewriting function 112’ is configured to output a re-written concept 140 that corresponds to “placing a reservation at the local burrito restaurant,” so that re-written concept 140 may be used to perform a new action that includes placing the reservation at the local burrito restaurant, e.g., by executing program fragment 142.
- FIG. ID shows another view of automated assistant system 100 focusing on handling errors that may arise during execution of data-flow program 106.
- the user utterance 102, data-flow program 106 and/or assistant response 120 may be stored into context-specific dialogue history 130 while processing user utterances.
- Error-handling execution machine 170 is configured to recover from the error condition in order to produce the assistant response 120’.
- error-handling execution machine 170 may implement method 400, which is described with regard to FIG. 4, below.
- error-handling execution machine 170 may modify and/or re-execute the suspended execution 160.
- error-handling execution machine 170 may execute an alternate program fragment 180.
- error-handling execution machine 170 may modify the suspended execution 160 by using a program-rewriting function to replace a program fragment of the data-flow program 106 with the alternate program fragment 180, and/or by executing the alternate program fragment instead of the data-flow program 106.
- the alternate program fragment 180 may be derived from the context-specific dialogue history 130, e.g., the alternate program fragment 180 may be a previously-executed program fragment, built from the plurality of pre-defined functions 110 and/or output by the previously-trained code-generation machine 104.
- previously-trained code-generation machine 104 is configured to recognize the error condition and to output a new program fragment configured to recover from the error.
- previously-trained code-generation machine 104 may be trained with regard to one or more training examples, each training example including an example error condition, and an example data-flow program fragment for responding to the error.
- previously-trained code-generation machine 104 may be trained with regard to a large plurality of annotated dialogue histories, where some or all of the annotated dialogue histories include an occurrence of an error condition.
- the error condition may arise due to an ambiguity in the user utterance 102 in which insufficient information is provided by the user utterance 102 to fully serve the user based on the user utterance.
- error-handling execution machine 170 is configured to execute code to produce an initial assistant response 120’ including a clarifying question 122.
- the clarifying question 122 is output for the user to respond to in a new, clarifying user utterance 102.
- the clarifying user utterance can be processed using the previously-trained code-generation machine 104 to produce a new data-flow program 106 for responding to the clarifying user utterance.
- the previously-trained code-generation machine 104 may be trained via supervised training on a plurality of annotated dialogue histories.
- the previously-trained code-generation machine 104 is trained on a large plurality of annotated dialogue histories, e.g., hundreds, thousands, tens of thousands or more.
- Annotated dialogue histories describe state information associated with a user interacting with an automated assistant, annotated with an example data-flow program for responding to the user interaction.
- an annotated dialogue history may include a context-specific dialogue history according to the present disclosure, annotated with an example data-flow program suitable for execution by the automated assistant in response to the context established in the context-specific dialogue history.
- a context-specific dialogue history includes a plurality of events arranged in a temporal order, e.g., time-stamped events, including user utterances, data-flow programs executed by an automated assistant, responses output by the automated assistant, and/or error conditions reached while executing data-flow programs.
- an annotated dialogue history may include a context-specific dialogue history in which the most recent event is an example user utterance, annotated with an example data-flow program for responding to the example user utterance with regard to the context established by the context-specific dialogue history.
- the previously-trained code-generation machine 104 may be trained to reproduce the example data-flow program given the example user utterance and the context-specific dialogue history.
- the example data-flow program may include any suitable functions in any suitable sequence/arrangement, so that via training, the code-generation machine is configured to output suitable functions in a suitable sequence.
- the example data-flow program may include the search-history function and accordingly, the code-generation machine may be trained to output the search-history function along with other functions in a suitable sequence for responding to a user utterance, e.g., to cause an automated assistant to perform any suitable response action, such as outputting a response as text and/or speech, or invoking an API.
- an annotated dialogue history includes a context-specific dialogue history in which the most recent event is the occurrence of an error condition, e.g., instead of a user utterance being the most recent event, annotated with a data-flow program for recovering from the error. Accordingly, the previously-trained code-generation machine 104 may be trained with regard to such annotated dialogue histories to generate suitable data-flow programs for recovering from error conditions.
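- As a non-limiting sketch, an annotated dialogue history might be represented with structures like the following; the field names and event kinds are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DialogueEvent:
    timestamp: float    # events are kept in temporal order
    kind: str           # "user_utterance", "program", "response", or "error"
    payload: str        # utterance text, program source, response text, etc.

@dataclass
class AnnotatedDialogue:
    history: list[DialogueEvent]    # the context-specific dialogue history
    target_program: str             # example data-flow program annotation
```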
- Annotated dialogue histories may be obtained in any suitable manner, for example, from human demonstrators.
- one or more human demonstrators may be shown context-specific dialogue histories, e.g., context-specific dialogue histories derived from usage data obtained from interaction with humans, and/or machine-generated context-specific dialogue histories, and for each context-specific dialogue history, be asked to provide a suitable data-flow program for responding to the context-specific dialogue history.
- the data-flow programs provided by the human demonstrators may use the pre-defined functions in any suitable manner to perform a wide range of tasks responsive to user utterances and/or error conditions.
- the human demonstrators may provide example data-flow programs that perform any suitable calculations, output responses, e.g., to ask clarifying questions of the user, or answer a query issued by the user, listen for utterances from a user, e.g., to obtain clarifications from the user, invoke APIs, etc.
- the example data-flow programs may include the search-history function, e.g., “getSalient()”, and/or program-rewriting function, e.g., “Clobber()”, invoked with any suitable parameters, for example, to execute data-flow program fragments from the context-specific dialogue history.
- the code-generation machine 104 may be trained to generate data-flow programs similar to those provided by the human demonstrators, in order to respond to user utterances and/or recover from errors.
- The data-flow programs, e.g., data-flow programs generated by the previously-trained code-generation machine and/or example data-flow programs, are built from a plurality of pre-defined, composable functions.
- The pre-defined, composable functions may be composed into programs that invoke the pre-defined, composable functions in any suitable order and parametrize the pre-defined, composable functions in any suitable fashion.
- the previously-trained code-generation machine may be trained to output suitable data-flow programs for a user utterance, based on the example data-flow programs provided during training.
- the previously-trained code-generation machine is not limited to hard-coded behaviors.
- the previously-trained code-generation machine is configured to process novel user utterances (that may not have been provided during training) by generating corresponding, novel data-flow programs (that also may not have been provided during training).
- the previously-trained code-generation machine may be trained in any suitable fashion, e.g., using any suitable ML, Al, and/or NLP models as will be described below with regard to FIG. 6, on any suitable training data, for example, a large plurality of annotated dialogue histories.
- The previously-trained code-generation machine may be trained with regard to a loss function for assessing whether a data-flow program is a suitable response to a user utterance, wherein the loss function is configured to indicate zero loss or a relatively small loss, e.g., requiring no adjustment or a relatively smaller adjustment of training parameters, when the code-generation machine successfully reproduces a training example, e.g., by producing the same data-flow program as was provided by a human annotator, given a context-specific dialogue history.
- the loss function may also be configured for a relatively small loss when the code-generation machine reproduces a different data-flow program, for example, a data-flow program that has a similar effect when executed, and/or a data-flow program that is indicated to be satisfactory by a human user, e.g., a human demonstrator and/or end-user of an automated assistant device.
- error conditions are recognized by operating a previously-trained error detection model.
- the previously-trained error detection model may be trained via supervised training on a plurality of annotated dialogue histories, wherein the annotated dialogue histories are annotated to indicate when errors occur.
- the annotated dialogue histories may be obtained by showing context-specific dialogue histories to one or more human demonstrators and asking the human demonstrators to indicate when the context-specific dialogue histories indicate an erroneous state.
- the plurality of pre-defined composable functions may include any suitable functions, for example, a listen function configured to listen for a specific user utterance before continuing with execution of a data-flow program and/or while speculatively executing selected nodes of a data-flow program, a response function configured to output a description of a value determined during execution of a data-flow program, and/or a primitive calculation function for processing values obtained from user utterances and/or computed during execution of the dataflow program, e.g., data structure operations such as forming tuples from data, or arithmetic operations.
- the plurality of pre-defined composable functions includes a foreign function configured to invoke a foreign (i.e., third party) API.
- foreign APIs may be invoked to interact with real-world services, e.g., to schedule a car in a ride-hailing service, order food, or make a reservation at a restaurant.
- the plurality of pre-defined composable functions includes an inference function configured to perform a calculation with regard to a result of a foreign function. The inference function may encapsulate high-level behaviors with regard to the API, which would otherwise require multiple different low-level functions using the API.
- a foreign ride-hailing API may support functions for scheduling a car, adding a stop to a route, and finalizing a scheduled route.
- an inference function may be configured to receive a destination and, based on the destination, schedule a car, add a stop corresponding to a pickup location for the user, add a stop corresponding to the destination, and finalize the scheduled route including the stops corresponding to the pickup location and destination.
- an inference function may be defined with regard to one or more constraints and executing the inference function may include running a constraint satisfaction program to satisfy the one or more constraints, before invoking a foreign API using parameters defined by the solution for the constraints.
- the one or more constraints may include “fuzzy” or “soft” constraints and accordingly, solving the constraints may include executing an inference program suitable for “fuzzy” logical inference, for example a Markov logic inference program.
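- As a simplified, non-limiting sketch of the ride-hailing inference function described above (the constraint-satisfaction variant is omitted, and all api.* method names are hypothetical):

```python
def ride_inference(api, pickup: str, destination: str):
    # Encapsulates several low-level foreign-API calls behind one high-level
    # behavior; schedule_car, add_stop, and finalize_route are hypothetical.
    ride = api.schedule_car()
    api.add_stop(ride, pickup)         # stop at the user's pickup location
    api.add_stop(ride, destination)    # stop at the requested destination
    return api.finalize_route(ride)
```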
- the plurality of pre-defined composable functions includes a user-customized function that is configured to access a user customization setting and to perform a calculation based on the user customization setting.
- the user-customized function may be configured to determine whether the user is free based on a user-customized schedule, e.g., calendar data.
- User-customized functions may be implemented using foreign functions configured to invoke a foreign API, e.g., an API for looking up calendar data.
- the plurality of pre-defined composable functions includes an intelligent decision function, wherein the intelligent decision function is configured to use a previously-trained machine learning model to perform a calculation.
- the search-history function may be an intelligent decision function configured to use a previously-trained relevance detection machine to search the context-specific dialogue history.
- the plurality of pre-defined composable functions may include an intelligent decision function configured to assess whether it is “morning” in a user-specific and/or population-specific way, e.g., the function may be configured to recognize that what time the user considers “morning” could vary depending on the day of the week or time of the year.
- the intelligent decision function may be configured to assess that it is “morning” if it is between 6 AM and 11 AM on a weekday, but may be configured to assess that it is morning if it is between 9 AM and 12 PM on a weekend.
- the intelligent decision function may be trained in any suitable fashion, for example based on labeled examples of times and whether or not a user considers the time morning.
- the intelligent decision function may take into account auxiliary information such as a work schedule, calendar, and/or mobile phone usage of a user, for example to determine if it is “morning” based on whether the user has likely woken up yet on a given day.
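- A rule-based stand-in for the “morning” intelligent decision function described above might look like the following; in practice a trained model could replace the hard-coded thresholds.

```python
from datetime import datetime

def is_morning(now: datetime) -> bool:
    # Weekday mornings run 6 AM-11 AM; weekend mornings run 9 AM-12 PM,
    # per the example above. A trained model could replace these rules.
    if now.weekday() >= 5:             # Saturday (5) or Sunday (6)
        return 9 <= now.hour < 12
    return 6 <= now.hour < 11
```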
- an intelligent decision function may be configured to assess an ambiguity, e.g., an ambiguous user utterance, or an ambiguous constraint, and select a disambiguating data-flow program to respond to the ambiguity.
- the plurality of pre-defined composable functions includes a macro function, wherein the macro function includes a plurality of other pre-defined composable functions, and is configured to execute the plurality of other pre-defined composable functions.
- a macro function can be used to sequence and organize related low-level steps of a high-level task using low-level functions.
- the code-generation machine may be able to readily output well-typed code for performing the high-level behaviors without needing to output individual steps using low-level functions.
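- As a non-limiting sketch, a macro function for a high-level “order food” task might sequence low-level steps as follows (all api.* calls are hypothetical placeholders):

```python
def order_food(api, restaurant: str, items: list[str]):
    # Macro function: sequences the low-level steps of the high-level
    # "order food" task so the code-generation machine need not emit them
    # individually.
    order = api.create_order(restaurant)
    for item in items:
        api.add_item(order, item)
    return api.place(order)
```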
- FIG. IE shows a first example of a dialogue between a user and an automated assistant, in which the automated assistant processes a user utterance 102’ in which there is no ambiguity.
- Context-specific dialogue history 130 is shown including concepts 130A, 130B, and further concepts through 130N.
- the user utterance 102’ is unambiguous and may be processed without referring to the context-specific dialogue history 130.
- the user asks, “When is my next meeting with Tom Jones?” and the previously-trained code-generation machine outputs a data-flow program 106’.
- Data-flow program 106’ is shown with a non-limiting example syntax in which square brackets indicate return values of expressions, e.g., [events] indicates a return value of an expression for finding all events matching a set of criteria, and [time] indicates a start time of the first such event in the set.
- the example syntax includes various functions including the search-history function, e.g., “getSalient()”, and program-rewriting function, e.g., “Clobber()”, as well as other functions such as primitive functions, API functions, etc., as described herein.
- the example functions are shown in a function-call syntax indicating the name of a function, e.g., “Find”, “getSalient”, “Clobber”, among other named functions, and parentheses enclosing parameters for invoking the function.
- the example function-call syntax is non-limiting, and functions may be invoked and parametrized in any suitable manner, e.g., using any suitable formal-language syntax.
- the example function names represent pre-defined functions with implementations that are not shown herein.
- each pre-defined function may be implemented by any suitable sequence of one or more instructions executable by the automated assistant to perform any suitable steps, e.g., to result in behavior indicated by the function name, behavior set forth in this disclosure, and/or any other suitable behavior such as performing calculations, invoking APIs, outputting audio, visually presenting information via a display.
- the “Find” function may be implemented in any suitable manner, for example, by invoking an API to look up information stored in a user calendar.
- the data-flow program finds events that have an attendee named “Tom Jones.”
- the data-flow program computes values such as [events] and [time] and then outputs a response 120’ using a “describe” function configured to output a description of the value [time], e.g., as speech via a speaker of an automated assistant device. Accordingly, response 120’ indicates the meeting time for the next meeting with Tom Jones, namely “at 12:30”.
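- Since the full listing of data-flow program 106’ is not reproduced here, the following runnable sketch approximates its behavior with stubbed Find and describe functions; the data and argument names are illustrative assumptions.

```python
from datetime import datetime

def find_events(attendee: str, start_after: datetime) -> list[dict]:
    # Stub for the pre-defined "Find" function, which the disclosure notes
    # may be implemented by invoking a calendar API.
    return [{"start": "12:30", "attendees": [attendee]}]

def describe(value: str) -> str:
    # Stub for the "describe" response function, which outputs a description
    # of a value, e.g., as speech via a speaker.
    return f"Your next meeting is at {value}."

events = find_events("Tom Jones", datetime.now())   # [events]
time = events[0]["start"]                           # [time] of the first event
print(describe(time))                               # response 120': "... at 12:30."
```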
- Standard conversational semantic parsing maps a complete user utterance into an executable program, e.g., a data-flow program, executes the program to query information sources or effect external actions, then, after execution, responds to the user. This process can be slow and will lag behind the user utterance when the program contains expensive function calls, particularly if the function calls are nested and cannot be parallelized.
- latency may be reduced by predicting and speculatively executing function calls while the user is still speaking based on partial results from automatic speech recognition (ASR) and the current state of execution. Identifying and pre-executing partial programs while the user is still speaking provides the technical benefit of expediting the final response to the user, thus improving user satisfaction due to improved computer-human interaction.
- Such predictive decisions may include spotting user intents that have already been expressed (in some cases, without the help of aligned training data) and anticipating, from context, user intents that have not been expressed yet.
- Disclosed herein is a generalized framework comprising a two-stage approach.
- In a first stage, a learned prefix-to-program prediction module is invoked.
- One example employs a neural graph-based semantic parser, which is specially trained to parse an incomplete utterance into a full program.
- Alternatively, a pipeline that uses a language model (LM) may be employed to predict how a user will finish the incomplete utterance. The predicted completed utterance may then be parsed.
- In a second stage, subprograms may be selected for execution as soon as the semantic parser predicts that the subprogram has a high probability of being in the correct parse.
- For example, a thresholding heuristic may be applied for selecting subprograms for early execution.
- FIG. 2 shows a flow-diagram for an example computer-implemented method 200.
- method 200 is executed by a computing system configured to receive speech input, the output of automatic speech recognition algorithms, text input, etc., such as a digital assistant, kiosk, smart phone, tablet device, computer, or driving assistant.
- Method 200 describes example approaches for reducing latency that, in some examples, include predicting and executing function calls while the user is still speaking.
- “graph” and “program” are used interchangeably herein.
- method 200 includes sequentially receiving a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance.
- each of the plurality of utterance prefixes is received from a speech-to-text module that employs automatic speech recognition algorithms.
- the plurality of utterance prefixes is accompanied by contextual information, such as the identity of the speaker, the location of the speaker, any prompts presented prior to receiving speech, etc.
- actions are described that apply to each sequentially received utterance prefix.
- In contrast, an offline system models p(G | c, u), predicting a complete program G only from the context c and the full utterance u.
- method 200 allows the system to simultaneously parse each partial utterance as the user utters it, so as to pre-execute subprograms and thus reduce the final response time.
- Unlike simultaneous machine translation (MT) approaches, in some examples the user is not presented with any output until their utterance is complete.
- speculatively executing a predicted subprogram in the background does not commit the system to using the predicted subprogram in the final result.
- Any parse of u[m-1] therefore does not constrain a parse of u[m]. Rather, in some examples, each prefix is re-parsed from scratch after each partial utterance is received.
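- A minimal sketch of this online loop, assuming hypothetical parser, selector, and executor interfaces:

```python
def online_parse(asr_prefixes, parse, select, executor, context):
    # Re-parse each successively longer ASR prefix from scratch; the node set
    # selected from each new parse replaces the previous selection.
    graph = None
    for prefix in asr_prefixes:                      # u[1], u[2], ..., u
        graph = parse(context, prefix)               # predicted complete program G_m
        executor.update_selection(select(graph, prefix))
        executor.run_ready_nodes()                   # nodes whose children have returned
    return executor.finish(graph)                    # execute remaining nodes of final graph
```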
- method 200 includes predicting a complete dataflow program based on the utterance prefix, the complete dataflow program including a plurality of program nodes to be executed to satisfy the full user utterance.
- an executable complete dataflow program G is predicted in response to a user utterance prefix u[m] and in some examples is predicted further based on context c of a dialogue history.
- The utterance is encoded as a sequence of tokens u = (u1, u2, . . .) and the context is encoded as a sequence c = (c1, c2, . . . , cn).
- The value u[m] is used to denote the m-th prefix presented to an online system, and t_m is used to denote the time at which it is presented.
- t denotes the time at which the complete utterance u is presented.
- different time points are recorded, e.g., one or more times at which a function invocation is selected by the system for execution, the time it is actually executed, and the time it returns.
- a selected function invocation is generally not considered to be executed until its arguments have returned from execution. However, by that point, the system may have deselected the function, and thus will not execute it, as the system’s predictions may have changed based on additional input.
- a system uses the noisy partial outputs returned by an ASR system from successively longer speech prefixes.
- each partial ASR output u[m] is returned at some time t_m ∈ ℝ.
- the system appends one or more words to the previous output u[m-1], and/or revises some words.
- the complete graph G is predicted based on the current prefix u[m] and context c. As one example, the single most probable G is predicted. Additionally or alternatively, an estimate of the distribution p(G | c, u[m]) may be used.
- the complete dataflow program is predicted by a machine learning model previously trained with training data including pairs of training utterance prefixes and training complete dataflow programs.
- Each element of a graph can be regarded as an action that adds a node or a sequence of edges to the graph.
- Subgraphs often correspond to contiguous substrings of a, but this is not always the case.
- Each v_k corresponds to a node, representing a constant value or function invocation, while each e_k is a subsequence that lists all edges between v_k and earlier nodes.
- graphs are converted to action sequences that enable training a sequence-to-sequence model.
- the model may output the action sequence, from which the graph can be constructed.
- Each node and each edge in G corresponds to one token in a, except for string literals, which can span multiple action tokens.
- a token of a string literal may appear directly as an action or can be copied from the j-th token of the source.
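- A simplified sketch of converting a program graph into such an action sequence, emitting each node after its children so that edge actions only reference earlier nodes; the string encoding of actions is an assumption:

```python
def linearize(graph: dict[str, list[str]], root: str) -> list[str]:
    # Emit one action per node and one per edge; each node is emitted after
    # its children so that edge actions only reference earlier nodes.
    actions: list[str] = []
    seen: set[str] = set()

    def visit(node: str) -> None:
        for child in graph.get(node, []):
            if child not in seen:
                visit(child)
        seen.add(node)
        actions.append(f"node:{node}")
        for child in graph.get(node, []):
            actions.append(f"edge:{node}->{child}")

    visit(root)
    return actions

# linearize({"A": ["B", "C"]}, "A")
# -> ['node:B', 'node:C', 'node:A', 'edge:A->B', 'edge:A->C']
```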
- the machine learning model is previously trained to at least locally maximize p(a_i | c, u[m], a_{1:i-1}), where: a is a sequence of actions, each of which contributes to adding a node and/or a sequence of edges to the complete dataflow program; c is a context of a dialogue history; and u[m] is the utterance prefix.
- A model of p(a | c, u[m]) is trained to map each prefix of u to the complete graph.
- every ((c, u); G) pair in the original training data is multiplied into many training pairs ((c, u[m]); G), one for each prefix u[m] of u.
- the full graph is used as the target, rather than attempting to predict only the part of the graph that aligns to the prefix. In this way, the method does not require any alignment.
- Such a model may attempt to predict any function calls that are likely based on the current prefix, even if they have not been explicitly mentioned yet.
- In some cases, the target graph is unreachable, such as when the target graph contains string literals that have not been seen yet, e.g., a copy action for a source position beyond the current prefix.
- In such cases, the target action sequence may be modified to instead copy the final position of the prefix, e.g., using a MASK token as a placeholder for all future tokens. In this way, hallucination of free text by the model is reduced or disabled, while keeping the graph structure intact with the MASK token-based placeholder.
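- A minimal sketch of this target-sequence modification, assuming copy actions are encoded as "copy:&lt;position&gt;" and the prefix is terminated with a MASK token at its final position:

```python
MASK = "<MASK>"

def mask_unreachable_copies(actions: list[str], prefix_len: int) -> list[str]:
    # Copy actions pointing past the observed prefix are redirected to the
    # prefix's final position, which holds the MASK placeholder; the graph
    # structure is otherwise kept intact.
    fixed = []
    for action in actions:
        if action.startswith("copy:"):
            position = int(action.split(":")[1])
            if position >= prefix_len:               # source token not yet seen
                action = f"copy:{prefix_len - 1}"    # final position = MASK token
        fixed.append(action)
    return fixed
```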
- hallucination generally refers to the generation of one or more predicted utterances by a language model based on previously received utterances, utterance prefixes, and/or applicable context.
- the hallucinated utterances are generated with incomplete intent, and thus include utterances that are not ultimately verifiable or supported by the complete dataflow program.
- the complete dataflow program is predicted by hallucinating a predicted full utterance from the utterance prefix with a previously-trained language model.
- the predicted full utterance is hallucinated from the utterance prefix by a Transformer-based language model.
- the machine learning model is previously trained to at least locally maximize p(a_i | c, u, a_{1:i-1}), where: a is a sequence of actions, each of which contributes to adding a node and/or a sequence of edges to the complete dataflow program; c is a context of a dialogue history; and u is the predicted full utterance.
- the full utterance may first be predicted from the prefix.
- a pretrained Bidirectional and Auto-Regressive Transformers (BART) model is fine-tuned so that it can map an utterance prefix, e.g., terminated with the MASK symbol, to a full utterance, e.g., by freely hallucinating content words.
- the training data includes one example for each prefix of each utterance, so the fine-tuning objective is to maximize the sum of log p(u | c, u[m]) over all prefixes u[m].
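- A sketch of constructing such fine-tuning pairs, one per prefix, each prefix terminated with the MASK symbol (whitespace tokenization is a simplification):

```python
def completion_pairs(utterance: str, mask: str = "<MASK>") -> list[tuple[str, str]]:
    # One (prefix + MASK, full utterance) example per proper prefix, so the
    # fine-tuned LM learns to map u[m] to a predicted full utterance u.
    tokens = utterance.split()
    return [(" ".join(tokens[:m]) + " " + mask, utterance)
            for m in range(1, len(tokens))]

# completion_pairs("book a table for two")[0]
# -> ('book <MASK>', 'book a table for two')
```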
- method 200 includes selecting, from the predicted complete dataflow program, one or more program nodes to speculatively execute based on at least the utterance prefix.
- nodes are selected based on the probability that they will eventually be executed, e.g., using a heuristic approximation to their marginal probability.
- selecting a node is additionally or alternatively based on a predicted execution cost and a predicted effect on overall latency. Additionally or alternatively, the proposal and selection steps may be combined into a single step that directly predicts some graph fragments to execute.
- steps are presented for selecting program nodes for each candidate node of the complete dataflow program predicted based on the utterance prefix, i.e., for each candidate node of the graph G_m proposed from u[m].
- selection includes determining which program nodes, e.g., function invocations, are worth executing at this time, given what is known about received utterance prefixes, context, etc.
- the selection process may iterate for each received utterance prefix.
- the subset of program nodes that are selected functions as an update that replaces the previous list of selected nodes. Any formerly selected nodes that were not executed, e.g., still waiting for their arguments, are not executed until they are selected again.
- method 200 includes determining a conditional probability of a candidate subgraph including the candidate node and its descendants. For example, calculating the conditional probability of the candidate subgraph includes multiplying predicted probabilities of all relevant actions responsible for building the candidate subgraph. In some examples, the conditional probability of a subgraph is obtained by marginalizing over all possible action sequences, as well as all completions u where applicable. This probability may be approximated by sampling from the models. For simplicity and efficiency, the probability of a subgraph of G_m is optionally approximated by the product of the conditional probabilities of the predicted actions that actually built that subgraph; that is, in some examples, each subgraph of the predicted G_m is built by a subset of the predicted actions a. This approximates the marginal probability of the relevant action subsequence by its conditional probability given preceding actions. Nodes v ∈ G_m are then selected such that the subgraph rooted at v has probability above a constant threshold.
- the conditional probability is based on at least the predicted unseen tokens of u and/or the seen tokens of u. Because all of the tokens potentially contribute to such a subgraph, there is no alignment. As such, when p(u | c, u[m]) is used to predict a completion, the probability of the predicted completion may be factored into the conditional probability of the subgraph.
- method 200 includes selecting the candidate node if the conditional probability is greater than a threshold.
- a node whose subgraph has previously executed, e.g., after predicting and selecting it from a previous prefix, would not be selected again. This would be redundant, as the results of execution are either known or pending.
- Nodes whose function invocations have side effects are generally not selected until the utterance is complete, as these nodes may be unsafe to pre-execute.
- once the utterance is complete, all unexecuted nodes of the final predicted graph, Ĝ, may be selected, as they are both safe and necessary to execute.
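- Putting the selection rules above together, a non-limiting sketch of the thresholding policy; the mapping from each candidate node to the log-probabilities of the actions that built its subgraph is assumed to be available from the parser:

```python
import math

def select_nodes(subgraph_logprobs: dict[str, list[float]],
                 executed: set[str],
                 side_effecting: set[str],
                 utterance_complete: bool,
                 threshold: float = 0.5) -> set[str]:
    # subgraph_logprobs maps each candidate node to the log-probabilities of
    # the predicted actions that built the subgraph rooted at that node.
    selected = set()
    for node, logprobs in subgraph_logprobs.items():
        if node in executed:                          # results known or pending
            continue
        if node in side_effecting and not utterance_complete:
            continue                                  # unsafe to pre-execute
        if math.exp(sum(logprobs)) > threshold:       # product of action probs
            selected.add(node)
    return selected
```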
- method 200 includes speculatively executing a selected program node prior to receiving a final version of the full user utterance.
- the selected program node is executed after the arguments of the selected program node, i.e., its child nodes, have returned from execution, provided the selected program node remains selected based on subsequently received utterance prefixes, subsequently predicted complete dataflow programs, and/or subsequently understood context.
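A scheduling pass consistent with this description might be sketched as below, reusing the Node structure from the previous sketch; `invoke` is a stand-in for the actual function-invocation API and is an assumption.

```python
def ready_to_execute(node, selected: set, returned: dict) -> bool:
    """A node is ready once it is still selected and all of its children,
    which supply its arguments, have returned values."""
    return node.name in selected and all(c.name in returned for c in node.children)

def execution_pass(nodes, selected: set, returned: dict, invoke) -> dict:
    """One pass: speculatively invoke every ready node not yet executed.
    `invoke` stands in for the underlying API call."""
    for node in nodes:
        if node.name not in returned and ready_to_execute(node, selected, returned):
            returned[node.name] = invoke(node)
    return returned
```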
- FIG. 3 schematically depicts the execution of nodes of a data-flow program based on received utterance prefixes.
- An example framework is shown for simultaneous semantic parsing. Graph nodes and edges are represented as actions in the target (following a depth-first traversal order).
- a full utterance 305 is received in an offline parsing scenario.
- a full graph 310 is predicted including a series of actions. Nodes (circles) and edges (connecting arrows) can be executed, presuming they are not dependent on any child actions returning values.
- a partial online parsing scenario is depicted.
- the model first proposes a full graph which is then pruned based on predicted probabilities.
- the surviving (selected) nodes, in black, (also listed in bold-italics) can be executed once their children have returned.
- the graph is amended.
- upstream nodes that depend on those nodes can be executed.
- a first utterance prefix 330 is received.
- the system may predict programs responsive to the receipt of each partial utterance and/or utterance prefix from the ASR.
- Graph 335 is predicted, and node D is selected for speculative execution.
- a second utterance prefix 345 is received. This causes graph 335 to be updated to graph 350.
- an additional node X connected to node B is added, and an additional node C is added between nodes A and E.
- nodes B and X can be speculatively executed, as can node E.
- a third utterance prefix 360 is added. This causes graph 350 to be updated to graph 365.
- an additional node Z connected to node C is added, a new node Y is added to replace node B, and node E is moved to depend on nodes Y and C, replacing node X.
- because nodes D and E have returned values, nodes C, Y, and A can be speculatively executed.
- the subgraph selection decisions are made by a learned model that considers the cost and benefit of each call, rather than using a fixed threshold.
- the semantic parser could also condition on the execution status, instead of operating separately from the learned model. All subgraphs with the same probability have the same risk of being useless work, but they do not all represent the same amount of useless work. Some incorrect subgraphs will require more computation than others. Similarly, subgraphs with the same probability share the same likelihood of being useful work, but they are not all equally useful in reducing latency. Some correct subgraphs can be postponed for longer without affecting the overall latency, because they will run more quickly or will not be needed as soon. As such, it may be appropriate to raise the threshold for invoking a subgraph and wait for more evidence that the subgraph is actually correct.
- Method 200 describes a probability-based policy for deciding whether to execute a predicted function call based on whether its estimated probability is above a constant threshold.
- this policy may not always be effective, as it does not consider the cost of executing a possibly incorrect call or the benefit of executing a call before more information is available.
- FIG. 4 is a flow diagram illustrating an example method 400 for training a data-flow program based on execution costs.
- method 400 is executed by a computing system employing numerous models and subsystems, such as an API for calling function invocations, an ASR for translating speech to utterance prefixes, a model for predicting full data-flow graphs based on the utterance prefixes, a translator, such as seq2seq for predicting future utterance prefixes, and a policy trainer.
- Method 400 is presented for training a system that may begin with a probability-based policy, but is iteratively improved using a policy trainer, such as the locally optimal learning to search (LOLS) algorithm.
- the policy trainer may then use the resulting cost measurements to improve the parameters of the policy.
- the LOLS algorithm, as a non-limiting example, is initialized using a non-optimal reference policy, iteratively improving the policy through sequential rounds of policy execution in a search space, selecting the action leading to the minimal expected loss for each state.
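The sketch below illustrates the flavor of one such LOLS-style round, not the patent's exact procedure: roll-out costs are gathered for one-step deviations from each state, and the lowest-cost action becomes a cost-sensitive training example. All callables (`rollout_cost`, `actions`) are assumptions standing in for the trainer's environment.

```python
import random

def lols_round(policy, reference, states, actions, rollout_cost, mix=0.5):
    """One simplified LOLS-style round. For each rolled-in state, each
    candidate action is scored by a roll-out with a mixture of the
    reference and learned policies; the minimal-cost action is recorded
    as a cost-sensitive example for the next policy update."""
    examples = []
    for state in states:
        rollout_policy = reference if random.random() < mix else policy
        costs = {a: rollout_cost(state, a, rollout_policy) for a in actions(state)}
        best = min(costs, key=costs.get)
        examples.append((state, best, costs))
    return examples  # consumed by a cost-sensitive classifier update
```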
- method 400 includes sequentially receiving a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance.
- the output of the ASR, e.g., the utterance prefix, is received as text.
- the original speech and/or audio file itself is received.
- the utterance prefix evolves by appending new words and/or by revising older words and prefixes as more context becomes available. As such, a previously received utterance prefix may be adjusted based on new information.
- the system predicts programs responsive to the receipt of each partial utterance and/or utterance prefix from the ASR.
- method 400 includes predicting a complete dataflow program comprising a plurality of program nodes to be executed to satisfy the full user utterance.
- the utterance prefix is mapped to possible full utterances using a sequence to sequence model, then a semantic parser is used to map each possible full utterance to one or more possible plans.
- the utterance prefix is mapped directly to possible plans. In some cases, this mapping is performed with a lightweight version of the semantic parser.
- the prefixes are mapped directly to complete plans without using the complete (predicted) utterance as an intermediate step. The target is then the complete plan that would be predicted from the complete utterance.
- Training may be performed on unannotated user data, actual ASR prefixes, and/or the predicted plan.
- the policy may be used to evaluate the function invocations in the k-best model-predicted plans and decide which plans have execution cost benefits.
- the policy is used to map the utterance prefix directly to a set of function invocations that are worth speculatively executing.
- method 400 includes selecting, from the predicted complete dataflow program, based on a current policy, one or more program nodes to speculatively execute based on at least the utterance prefix.
- method 400 includes selecting, from the predicted complete dataflow program, based on the current policy, one or more program nodes to not select for execution until at least a next utterance prefix is received.
- nodes are given a “selected” status or a “non-selected” status, indicating that the node has been selected for speculative execution or will not be selected for speculative execution at least until additional information is received.
- the same architecture is used for prediction and modeling as for choosing the API calls.
- the architecture may use the same model that makes the pre-fetched calls and the residual calls when an utterance is completed.
- the current policy selects program nodes to speculatively execute based on an execution cost.
- Each action associated with a node has an execution cost, including the passage of time. For example, it may be preferable to terminate the dataflow program sooner rather than later.
- the execution cost of a latency may be nonlinear in its length. The execution cost may be based on an execution time per action, a total number of actions, a latency on terminating the dataflow program, a number and/or complexity of processing operations performed while completing each action, etc.
- the policy may determine an execution cost for speculatively selecting and executing a candidate node and invoking an associated function call.
- the execution cost is based on a probability that the invocation will appear in the final dataflow program, and/or an expected latency of that invocation.
- the probability that the invocation will appear in the final data flow program evolves, either increasing or decreasing.
- the execution cost is also based on a distribution over how the prefix will evolve. This distribution may be updated when a prefix has changed and/or when invocations have returned. During an interval when the distribution remains constant, taking additional action is unlikely to be informative.
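One illustrative (not the patent's) way to combine these quantities is a linear expected-benefit rule; as noted above, the true cost of latency may be nonlinear, so the form below is purely an assumption.

```python
def speculative_value(p_final: float, latency_saved_ms: float,
                      compute_cost: float) -> float:
    """Expected net benefit of invoking a call now: probability-weighted
    latency saving minus probability-weighted wasted compute."""
    return p_final * latency_saved_ms - (1.0 - p_final) * compute_cost

# Execute speculatively only when the expected benefit is positive.
should_execute = speculative_value(p_final=0.8, latency_saved_ms=300.0,
                                   compute_cost=40.0) > 0.0
```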
- method 400 includes, responsive to receiving the full user utterance, determining a change in execution cost for the predicted complete dataflow program based on a change in which program nodes were speculatively executed and which program nodes were not executed.
- a final plan may be predicted, and any or all of the function invocations in that plan that have not already been invoked may be run.
- the plan execution waits for all results to return before terminating.
- the policy trainer speeds up the execution of the final predicted program by making certain API calls early.
- the cost of an action sequence is evaluated on the predicted program, not the true program, even if known.
- the policy trainer is continually updated based on live unannotated user data.
- a one-step change for the policy trainer includes changing a node that is ready to run from selected status to non-selected status, or vice-versa.
- determining a change in the execution cost includes considering the current environmental states.
- Example environmental states include: a current utterance prefix as reported by the ASR system, a delay since the prefix last changed, e.g., increasing until the prefix changes again or the ASR system declares the utterance as having terminated; actions taken so far (needed to determine the reward in the final state); and the return values from those actions.
- the current utterance prefix and the delay since the prefix last changed evolve stochastically, independent of any actions taken. Actions taken so far have deterministic transitions, where actions are added to the state as they are taken.
- the return values evolve stochastically, with stochastic return values arriving after a stochastic latency.
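Collecting these quantities into a single state record, the policy's view of the environment could be sketched as the following dataclass; the field names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class PolicyState:
    """Environment state visible to the policy, per the list above."""
    prefix: str                     # current utterance prefix from the ASR
    delay_since_change: float       # grows until the prefix changes again
    actions_taken: list = field(default_factory=list)  # deterministic transitions
    return_values: dict = field(default_factory=dict)  # arrive stochastically
```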
- the API can handle multiple requests in parallel, so that the distribution of (return value, latency) does not depend on when the function is invoked, or what other functions are invoked.
- the stochastic evolution is known, and the latencies for function invocations in the actual final plan are known.
- the API calls have no side effects that are visible to the user, such as interruptions, highlighting display entities, reprompting, etc.
- function invocations have return values, but no side effects. Accordingly, the effects on latency, accuracy, and computing cost can be determined without considering such side effects. These effects will not change if the actions are changed, as invoking a function earlier will not change its latency or the utterance.
- the recorded action sequence will change along with the action sequence itself, but in a known deterministic way.
- the seq2seq or other suitable model may be used to sample a number of possible completions of the prefix. In some examples, these possible completions are run in parallel, with the resulting plans yielding an estimate of the distribution across plans. In some instances, when the utterance is complete, this collapses to an exact point distribution that puts a probability of 1 on the true final plan. In such examples, marginal probabilities of different function invocations are then computed given the prefix. In some examples, calls intelligently hallucinate entities that are not in the prefix but have been predicted by the seq2seq or other suitable model.
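A sketch of estimating per-invocation marginals by sampling completions might look as follows; `sample_completion` and `parse` are stand-ins for the seq2seq model and semantic parser, and invocations are assumed hashable (e.g., canonical strings).

```python
from collections import Counter

def invocation_marginals(prefix: str, sample_completion, parse, k: int = 20):
    """Estimate the probability that each function invocation appears in
    the final plan by sampling k completions of the prefix, parsing each
    into a plan, and counting invocation occurrences."""
    counts = Counter()
    for _ in range(k):
        plan = parse(sample_completion(prefix))
        counts.update(set(plan))    # count each invocation once per plan
    return {inv: n / k for inv, n in counts.items()}
```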
- the target plan in the training data is abstracted into a partial plan, for prediction.
- the complete plan is then modeled as a sequence of function invocations in a canonical order.
- an invocation can be deterministically deleted if it depends on an argument that is unpredictable from the prefix, or depends on such an invocation. Such invocations may be considered “inaccessible” from this prefix, and deleting these inaccessible invocations will delete many of the function invocations that do not correspond to any words in the prefix.
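The deterministic deletion described here can be sketched as a single pass over a canonically ordered plan; the (name, args) representation is an assumption.

```python
def prune_inaccessible(plan, predictable):
    """Keep only invocations whose arguments are all predictable from the
    prefix or produced by an invocation that was itself kept; everything
    else is 'inaccessible' and deleted. `plan` is a list of (name, args)
    pairs in canonical (topological) order."""
    kept, kept_names = [], set()
    for name, args in plan:
        if all(a in predictable or a in kept_names for a in args):
            kept.append((name, args))
            kept_names.add(name)
    return kept
```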
- the same predicted autoregressive logits for each function invocation are used, for example, with multi-task finetuning. If the same invocation appears in multiple plans with different embeddings, these embeddings can be pooled before converting to a score.
- the classifier also takes as an input the log-marginal probability.
- function invocations are targeted if they include arguments that are currently available.
- references to previous calls may only be allowed for calls that have already returned or if they include arguments that indicate that there are no more actions worth taking at this time.
- Context for predicting a next action includes one or more of the dialogue history, possibly reduced to a set of types; the current ASR prefix; and the previous function calls that have actually been made as opposed to the calls that have been predicted but classified as not worth making. In some examples, calls are not required to have returned yet, as it may be beneficial to consider launching siblings in parallel.
- Previous calls may be de-weighted as an argument to an invocation if the call is estimated to be wrong given the current updated prefix. A previous call may thus be rescored given the current prefix, and that new score incorporated into the score of the proposed invocation.
- the policy is run repeatedly to choose a sequence of actions, but this process may be suspended once it predicts that there are no more potentially meaningful actions.
- the process may resume whenever the ASR prefix has changed, a call returns, and/or an assessment of whether any calls are worthwhile has changed.
- method 400 includes generating an updated policy from the current policy based on a determined reduction to the execution cost for the predicted complete dataflow program.
- the execution cost (or alternatively, the reward) of invoking each function compared to not invoking that function is considered.
- Any suitable cost-sensitive binary classifier can be used, e.g., perceptron, SVM, logistic regression.
- an utterance can be chosen at random and the current policy simulated on it.
- Various selected interventions change a single action, e.g., adding or suppressing a function invocation at an “action time”.
- the prediction may then continue according to the current policy, and a change in execution cost then determined.
- such measurements are used to improve the policy, with the policy adjusted to prefer actions that lead to a lower cost.
- the policy is operated with an assumption of a special environment where it can be determined whether an action improves the reward by actually executing a counterfactual. That is, multiple competing actions can be executed from the same state, yielding similar variance reduction as for paired experimental designs.
- the special environment assumptions include a state where actions do not change the utterance or sequence of prefixes. This can be enacted if the user and ASR system is not shown the actions of the system. Further, the assumption may be made that any action does not change the predictions from each prefix, which can be enacted if the predictor is not shown the return values from those actions. Further, in some examples the special environment assumptions include that a sequence of predicted graphs G_m does not change when an action is modified. The only change that occurs responsive to a modified action is a change to a single e(v) value.
- importance sampling can be applied. For example, a random subset of alternative actions can be applied at each state, e.g., favoring actions that score most highly under the current policy.
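A paired counterfactual loop in the spirit of this description is sketched below; `simulate`, `alternatives`, and the uniform subsampling (in place of importance-weighted sampling) are assumptions.

```python
import random

def intervention_examples(simulate, states, alternatives, policy, n_alt=3):
    """From each state, execute the policy's action and a small random
    subset of alternative actions, recording each cost difference. Under
    the special-environment assumptions above, the prefix sequence and
    predicted graphs are unchanged by the intervention, so these paired
    cost deltas are directly comparable (variance reduction)."""
    examples = []
    for state in states:
        base_cost = simulate(state, policy(state))
        alts = alternatives(state)
        for alt in random.sample(alts, k=min(n_alt, len(alts))):
            examples.append((state, alt, simulate(state, alt) - base_cost))
    return examples  # cost deltas used to adjust the policy
```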
- FIG. 5 is a flow diagram illustrating an example computer-implemented method 500 for speculatively executing nodes of a data-flow program.
- method 500 is executed by a computing system employing numerous models and subsystems, such as an API for calling function invocations, an ASR for translating speech to utterance prefixes, a modeler for predicting full data-flow graphs based on the utterance prefixes, and a translator for predicting future utterance prefixes.
- the computing system employs a trained cost policy, such as a cost policy trained as described with regard to FIG. 4.
- method 500 includes sequentially receiving a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance.
- the output of the ASR, e.g., the utterance prefix, is received as text.
- actions are described that apply to each sequentially received utterance prefix.
- the system predicts programs responsive to the receipt of each partial utterance and/or utterance prefix from the ASR.
- method 500 includes predicting a complete dataflow program based on the utterance prefix, the complete dataflow program including a plurality of program nodes to be executed to satisfy the full user utterance.
- the complete dataflow program is predicted directly from the utterance prefix, or is predicted after first predicting a full user utterance based on the received utterance prefixes and any available context information.
- method 500 includes determining an execution cost for the complete dataflow program based on speculatively executing a candidate subgraph, the candidate subgraph including the program node and its descendants based on the predicted dataflow program.
- the execution cost for the complete dataflow program may be determined for scenarios where the candidate subgraph is speculatively executed, and where the candidate subgraph is not executed until further information is received, e.g., a next utterance prefix.
- the execution cost is based on one or more of a latency, a total computing cost, and a likelihood distribution indicating a probability that the candidate subgraph will be executed in the final dataflow program.
- method 500 includes selecting the candidate node to speculatively execute if speculatively executing the candidate subgraph would result in a decrease in the execution cost for the complete predicted dataflow program.
- the candidate subgraph is speculatively executed if doing so would result in an above threshold decrease in the execution cost for the complete predicted dataflow program, e.g., decrease a latency by more than a threshold number of milliseconds.
- the candidate subgraph may not be speculatively executed if it is unlikely to be executed in the final dataflow program, even if speculative execution would decrease latency in the scenarios where the candidate subgraph were ultimately invoked.
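The decision rule of method 500 could be condensed into a sketch like the following; the particular thresholds are illustrative assumptions.

```python
def should_speculate(cost_if_executed: float, cost_if_deferred: float,
                     p_in_final_plan: float, min_saving_ms: float = 50.0,
                     min_likelihood: float = 0.5) -> bool:
    """Speculatively execute the candidate subgraph only if doing so
    reduces expected execution cost by more than a threshold and the
    subgraph is sufficiently likely to appear in the final program."""
    saving = cost_if_deferred - cost_if_executed
    return saving > min_saving_ms and p_in_final_plan >= min_likelihood
```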
- the methods and processes described herein may be tied to a computing system of one or more computing devices.
- such methods and processes may be implemented as an executable computer-application program, a network-accessible computing service, an application-programming interface (API), a library, or a combination of the above and/or other compute resources.
- FIG. 6 schematically shows a simplified representation of a computing system 600 configured to provide any or all of the compute functionality described herein.
- Computing system 600 may take the form of one or more personal computers, network-accessible server computers, tablet computers, home-entertainment computers, gaming devices, mobile computing devices, mobile communication devices, e.g., smart phone, virtual/augmented/mixed reality computing devices, wearable computing devices, Internet of Things (IoT) devices, embedded computing devices, and/or other computing devices.
- Computing system 600 includes a logic subsystem 602 and a storage subsystem 604. Computing system 600 may optionally include an input/output subsystem 606, communication subsystem 608, and/or other subsystems not shown in FIG. 6.
- Logic subsystem 602 includes one or more physical devices configured to execute instructions.
- the logic subsystem may be configured to execute instructions that are part of one or more applications, services, or other logical constructs.
- the logic subsystem may include one or more hardware processors configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware devices configured to execute hardware or firmware instructions.
- Processors of the logic subsystem may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic subsystem optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic subsystem may be virtualized and executed by remotely-accessible, networked computing devices configured in a cloud-computing configuration.
- Storage subsystem 604 includes one or more physical devices configured to temporarily and/or permanently hold computer information such as data and instructions executable by the logic subsystem. When the storage subsystem includes two or more devices, the devices may be collocated and/or remotely located. Storage subsystem 604 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices. Storage subsystem 604 may include removable and/or built-in devices. When the logic subsystem executes instructions, the state of storage subsystem 604 may be transformed - e.g., to hold different data.
- logic subsystem 602 and storage subsystem 604 may be integrated together into one or more hardware-logic components.
- Such hardware-logic components may include program- and application-specific integrated circuits (PASIC / ASICs), program- and application-specific standard products (PSSP / ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
- the logic subsystem and the storage subsystem may cooperate to instantiate one or more logic machines.
- the term “machine” is used to collectively refer to the combination of hardware, firmware, software, instructions, and/or any other components cooperating to provide computer functionality. In other words, “machines” are never abstract ideas and always have a tangible form.
- a machine may be instantiated by a single computing device, or a machine may include two or more sub-components instantiated by two or more different computing devices.
- a machine includes a local component, e.g., software application executed by a computer processor, cooperating with a remote component, e.g., cloud computing service provided by a network of server computers.
- the software and/or other instructions that give a particular machine its functionality may optionally be saved as one or more unexecuted modules on one or more suitable storage devices.
- the previously-trained code-generation machine, previously-trained relevance detection machine, and/or error-handling execution machine are examples of machines according to the present disclosure.
- Machines may be implemented using any suitable combination of state-of-the-art and/or future machine learning (ML), artificial intelligence (AI), and/or natural language processing (NLP) techniques.
- the previously-trained code-generation machine and/or previously-trained relevance detection machine may incorporate any suitable ML, AI, and/or NLP techniques, including any suitable language models.
- Non-limiting examples of techniques that may be incorporated in an implementation of one or more machines include support vector machines, multi-layer neural networks, convolutional neural networks (e.g., including spatial convolutional networks for processing images and/or videos, temporal convolutional neural networks for processing audio signals and/or natural language sentences, and/or any other suitable convolutional neural networks configured to convolve and pool features across one or more temporal and/or spatial dimensions), recurrent neural networks (e.g., long short-term memory networks), associative memories (e.g., lookup tables, hash tables, Bloom Filters, Neural Turing Machine and/or Neural Random Access Memory), word embedding models (e.g., GloVe or Word2Vec), unsupervised spatial and/or clustering methods (e.g., nearest neighbor algorithms, topological data analysis, and/or k-means clustering), graphical models (e.g., (hidden) Markov models, Markov random fields, (hidden) conditional random fields, and/or AI knowledge bases), and/or natural language processing techniques (e.g., tokenization, stemming, constituency and/or dependency parsing, and/or intent recognition).
- the methods and processes described herein may be implemented using one or more differentiable functions, wherein a gradient of the differentiable functions may be calculated and/or estimated with regard to inputs and/or outputs of the differentiable functions, e.g., with regard to training data, and/or with regard to an objective function.
- Such methods and processes may be at least partially determined by a set of trainable parameters. Accordingly, the trainable parameters for a particular method or process may be adjusted through any suitable training procedure, in order to continually improve functioning of the method or process.
- Non-limiting examples of training procedures for adjusting trainable parameters include supervised training (e.g., using gradient descent or any other suitable optimization method), zero-shot, few-shot, unsupervised learning methods (e.g., classification based on classes derived from unsupervised clustering methods), reinforcement learning (e.g., deep Q learning based on feedback) and/or generative adversarial neural network training methods, belief propagation, RANSAC (random sample consensus), contextual bandit methods, maximum likelihood methods, and/or expectation maximization.
- a plurality of methods, processes, and/or components of systems described herein may be trained simultaneously with regard to an objective function measuring performance of collective functioning of the plurality of components (e.g., with regard to reinforcement feedback and/or with regard to labelled training data). Simultaneously training the plurality of methods, processes, and/or components may improve such collective functioning.
- one or more methods, processes, and/or components may be trained independently of other components (e.g., offline training on historical data).
- the previously-trained code-generation machine and/or previously-trained relevance detection machine may incorporate any suitable language models.
- Language models may utilize vocabulary features to guide sampling/searching for words for recognition of speech.
- a language model may be at least partially defined by a statistical distribution of words or other vocabulary features.
- a language model may be defined by a statistical distribution of n-grams, defining transition probabilities between candidate words according to vocabulary statistics.
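As a toy illustration of such n-gram transition probabilities, a bigram model can be estimated by counting; this sketch is generic and not specific to the disclosed system.

```python
from collections import Counter, defaultdict

def bigram_model(corpus):
    """Estimate p(word | prev) from bigram counts over a tokenized corpus."""
    counts, totals = defaultdict(Counter), Counter()
    for sentence in corpus:
        for prev, word in zip(sentence, sentence[1:]):
            counts[prev][word] += 1
            totals[prev] += 1
    return lambda prev, word: (counts[prev][word] / totals[prev]
                               if totals[prev] else 0.0)

p = bigram_model([["book", "a", "table"], ["book", "a", "ride"]])
p("book", "a")   # 1.0
p("a", "table")  # 0.5
```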
- the language model may be further based on any other appropriate statistical features, and/or results of processing the statistical features with one or more machine learning and/or statistical algorithms, e.g., confidence values resulting from such processing.
- a statistical model may constrain what words may be recognized for an audio signal, e.g., based on an assumption that words in the audio signal come from a particular vocabulary.
- the language model may be based on one or more neural networks previously trained to represent audio inputs and words in a shared latent space, e.g., a vector space learned by one or more audio and/or word models, e.g., wav2letter and/or word2vec.
- finding a candidate word may include searching the shared latent space based on a vector encoded by the audio model for an audio input, in order to find a candidate word vector for decoding with the word model.
- the shared latent space may be utilized to assess, for one or more candidate words, a confidence that the candidate word is featured in the speech.
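Searching such a shared latent space reduces to nearest-neighbor lookup; the sketch below ranks candidate words by cosine similarity to an audio-encoded vector, with the embedding sources assumed.

```python
import numpy as np

def candidate_words(audio_vec, word_vecs, k=5):
    """Rank candidate words by cosine similarity between the audio-encoded
    vector and each word vector; the score doubles as a confidence that
    the word is featured in the speech."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scored = sorted(((cos(audio_vec, v), w) for w, v in word_vecs.items()),
                    reverse=True)
    return scored[:k]
```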
- the language model may be used in conjunction with an acoustical model configured to assess, for a candidate word and an audio signal, a confidence that the candidate word is included in speech in the audio signal based on acoustical features of the word, e.g., mel-frequency cepstral coefficients, formants, etc.
- the language model may incorporate the acoustical model, e.g., assessment and/or training of the language model may be based on the acoustical model.
- the acoustical model defines a mapping between acoustic signals and basic sound units such as phonemes, e.g., based on labelled speech.
- the acoustical model may be based on any suitable combination of state-of-the-art or future machine learning (ML) and/or artificial intelligence (AI) models, for example: deep neural networks (e.g., long short-term memory, temporal convolutional neural network, restricted Boltzmann machine, deep belief network), hidden Markov models (HMM), conditional random fields (CRF) and/or Markov random fields, Gaussian mixture models, and/or other graphical models (e.g., deep Bayesian network).
- Audio signals to be processed with the acoustic model may be pre-processed in any suitable manner, e.g., encoding at any suitable sampling rate, Fourier transform, band-pass filters.
- the acoustical model may be trained to recognize the mapping between acoustic signals and sound units based on training with labelled audio data.
- the acoustical model may be trained based on labelled audio data comprising speech and corrected text, in order to learn the mapping between the speech signals and sound units denoted by the corrected text. Accordingly, the acoustical model may be continually improved to improve its utility for correctly recognizing speech.
- the language model may incorporate any suitable graphical model, e.g., a hidden Markov model (HMM) or a conditional random field (CRF).
- the graphical model may utilize statistical features, e.g., transition probabilities, and/or confidence values to determine a probability of recognizing a word, given the speech and/or other words recognized so far. Accordingly, the graphical model may utilize the statistical features, previously trained machine learning models, and/or acoustical models to define transition probabilities between states represented in the graphical model.
- input/output subsystem 606 may comprise one or more displays, which may be used to present a visual representation of data held by storage subsystem 604. This visual representation may take the form of a graphical user interface (GUI).
- Input/output subsystem 606 may include one or more display devices utilizing virtually any type of technology.
- display subsystem may include one or more virtual-, augmented-, or mixed reality displays.
- input/output subsystem 606 may further comprise one or more speakers configured to output speech, e.g., to present an audible representation of data held by storage subsystem 604, such as automated assistant responses.
- input/output subsystem 606 may comprise or interface with one or more input devices.
- An input device may include a sensor device or a user input device. Examples of user input devices include a keyboard, mouse, touch screen, or game controller.
- the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board.
- NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition.
- communication subsystem 608 may be configured to communicatively couple computing system 600 with one or more other computing devices.
- Communication subsystem 608 may include wired and/or wireless communication devices compatible with one or more different communication protocols.
- the communication subsystem may be configured for communication via personal-, local- and/or wide-area networks.
- a computer-implemented method comprises sequentially receiving a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance; for each sequentially received utterance prefix, predicting a complete dataflow program based on the utterance prefix, the complete dataflow program including a plurality of program nodes to be executed to satisfy the full user utterance; and selecting, from the predicted complete dataflow program, one or more program nodes to speculatively execute based on at least the utterance prefix.
- the technical effect of speculatively executing one or more program nodes based on at least the utterance prefix is an increased user interaction performance due to at least reduced latency in generating a response.
- the computer-implemented method additionally or alternatively comprises speculatively executing a selected program node prior to receiving a final version of the full user utterance.
- execution of the selected program node is additionally or alternatively performed after arguments of the selected program node have returned from execution, provided the selected program node remains selected based on subsequently received utterance prefixes, subsequently predicted complete dataflow programs, and/or subsequently understood context.
- the complete dataflow program is additionally or alternatively predicted based also on a context of a dialogue history.
- the complete dataflow program is additionally or alternatively predicted by a machine learning model previously trained with training data including pairs of training utterance prefixes and training complete dataflow programs.
- the machine learning model is additionally or alternatively previously trained to at least locally maximize a product based on at least: a sequence of actions, each of which contributes to adding a node and/or a sequence of edges to the complete dataflow program; a context of a dialogue history; and the utterance prefix.
- the complete dataflow program is additionally or alternatively predicted by hallucinating a predicted full utterance from the utterance prefix with a previously-trained language model and predicting the complete dataflow program with a machine learning model previously trained with training data including pairs of training full utterances and training complete dataflow programs.
- the previously-trained language model is additionally or alternatively a Transformer-based language model.
- selecting one or more program nodes to speculatively execute is additionally or alternatively based at least on a probability the program node will eventually be executed.
- selecting program nodes additionally or alternatively includes, for each candidate node of the complete dataflow program predicted based on the utterance prefix, calculating a conditional probability of a candidate subgraph including the candidate node and its descendants; and selecting the candidate node if the conditional probability is greater than a threshold.
- calculating the conditional probability of the candidate subgraph additionally or alternatively includes multiplying predicted probabilities of all relevant actions responsible for building the candidate subgraph.
- each of the plurality of utterance prefixes are additionally or alternatively received from a speech to text module.
- a computer-implemented method comprises sequentially receiving a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance; for each sequentially received utterance prefix, predicting a complete dataflow program comprising a plurality of program nodes to be executed to satisfy the full user utterance; selecting from the predicted complete dataflow program, based on a current policy, one or more program nodes to speculatively execute based on at least the utterance prefix; selecting from the predicted complete dataflow program, based on the current policy, one or more program nodes to not execute until at least a next utterance prefix is received; responsive to receiving the full user utterance, determining a change in execution cost for the predicted complete dataflow program based on a change in which program nodes were speculatively executed and which program nodes were not executed; and generating an updated policy from the current policy based on a determined reduction to the execution cost for the predicted complete dataflow program.
- predicting a plurality of program nodes to be executed to satisfy the full user utterance additionally or alternatively includes mapping the received utterance prefix to a plurality of possible full user utterances; and mapping each possible full user utterance to one or more possible complete dataflow programs.
- predicting a plurality of program nodes to be executed to satisfy the full user utterance additionally or alternatively includes mapping the received utterance prefix to one or more possible complete dataflow programs.
- predicting a plurality of program nodes to be executed to satisfy the full user utterance additionally or alternatively includes directly mapping the received utterance prefix to a set of program nodes.
- the change in execution cost is additionally or alternatively based on one or more of an expected latency, a total computing cost, and a likelihood distribution indicating a probability that the program node will appear in a final dataflow program.
- a computer-implemented method comprises sequentially receiving a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance; for each sequentially received utterance prefix, predicting a complete dataflow program based on the utterance prefix, the complete dataflow program including a plurality of program nodes to be executed to satisfy the full user utterance; and for each program node of the complete dataflow program predicted based on the utterance prefix, determining an execution cost for the complete dataflow program based on speculatively executing a candidate subgraph, the candidate subgraph including the program node and its descendants based on the predicted dataflow program; and selecting the candidate node to speculatively execute if speculatively executing the candidate subgraph would result in a decrease in the execution cost for the complete predicted dataflow program.
- determining the execution cost for the complete dataflow program additionally or alternatively includes determining an execution cost for one or more scenarios where the candidate subgraph is speculatively executed, and for one or more scenarios where the candidate subgraph is not executed until further information is received.
- the computer-implemented method additionally or alternatively comprises not executing the candidate subgraph until at least a next utterance prefix is received if the likelihood of executing the candidate subgraph in the complete predicted dataflow program is less than a likelihood threshold.
Abstract
A computer-implemented method is presented. The method comprises sequentially receiving a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance. For each sequentially received utterance prefix, a complete dataflow program is predicted based on the utterance prefix. The complete dataflow program includes a plurality of program nodes to be executed to satisfy the full user utterance. One or more program nodes are selected from the predicted complete dataflow program to speculatively execute based on at least the utterance prefix.
Description
SPECULATIVE EXECUTION OF DATAFLOW PROGRAM NODES
BACKGROUND
Conversational systems process user utterances and respond by automatically performing actions, such as answering questions, invoking application programming interfaces (APIs), or otherwise assisting a user based on the user utterances.
SUMMARY
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
A computer-implemented method is presented. The method comprises sequentially receiving a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance. For each sequentially received utterance prefix, a complete dataflow program is predicted based on the utterance prefix. The complete dataflow program includes a plurality of program nodes to be executed to satisfy the full user utterance. One or more program nodes are selected from the predicted complete dataflow program to speculatively execute based on at least the utterance prefix.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1A-1D show an example data-flow architecture for an automated assistant.
FIG. 1E shows an example dialogue between a user and an automated assistant.
FIG. 2 is a flow diagram illustrating an example method for speculatively executing nodes of a data-flow program.
FIG. 3 schematically depicts the execution of nodes of a data-flow program based on received utterance prefixes.
FIG. 4 is a flow diagram illustrating an example method for training a data-flow program based on execution costs.
FIG. 5 is a flow diagram illustrating an example method for speculatively executing nodes of a data-flow program.
FIG. 6 shows an example computing system.
DETAILED DESCRIPTION
Conversational computing interfaces may be used to interact with users via natural language, for example via speech. Although the present disclosure uses an automated assistant as an example conversational computing interface, this example is non-limiting and conversational computing
interfaces may be implemented according to the present disclosure for any suitable purpose, for example, to permit a user to use natural language to interact with any suitable computer hardware and/or computer software. As such, every reference to an automated assistant in the present disclosure applies equally to any other conversational computer interface or other computing framework configured to respond to speech.
Automated assistants can use natural language processing (NLP) techniques to process an input user utterance, e.g., user speech, in order to perform a predefined, hard-coded action related to the input user utterance. For example, an automated assistant may support a predefined plurality of hard-coded templates, where each template has a number of slots that can be filled to parametrize a hard-coded action. As an example, an automated assistant may support a pre-defined interaction to invoke an application programming interface (API), e.g., to reserve a seat at a restaurant, call a ride-hailing service, or look up the weather. However, although automated assistants may support a plurality of different predefined actions via the predefined templates, an automated assistant that only supports predefined actions via templates may not be configured to perform more complex or novel behaviors.
The present disclosure is directed to an automated assistant that processes user utterances using data-flow programs in a data-flow programming language, e.g., in addition to or instead of using templates. The automated assistant uses a previously-trained code generation machine to generate and/or output a data-flow program for a user utterance. The data-flow program uses a plurality of pre-defined functions to define individual steps for processing the user utterance. As used herein, data-flow programs are used synonymously with dataflow graphs.
Processing user utterances by using data-flow programs generated by a previously-trained code generation machine may result in an improved user experience, improved efficiency of an automated assistant or other interactive computer service, e.g., improved storage usage and/or improved processing time, and/or improved ability for responding to different user utterances. As an example, the data-flow programs may encode a variety of different processing strategies for different user utterances, including performing calculations based on the user utterances, accessing APIs to respond to the user utterances, etc. The code generation machine may generate data-flow programs that are specific to a user utterance being processed, which may enable processing the user utterance more efficiently (e.g., without performing irrelevant steps) and with improved user satisfaction (e.g., by generating programs that better address requests expressed in the user utterance).
Accordingly, FIG. 1A shows a data-flow architecture for an automated assistant system 100. Automated assistant system 100 is configured to process a user utterance 102 by operating a previously-trained code-generation machine 104 configured to output a data-flow program 106
for the user utterance 102. Although the present disclosure focuses on interaction via natural language, e.g., speech, conversational computing interfaces such as automated assistant system 100 may further permit interaction via any other suitable input methods, for example, via touch screen inputs and/or button presses. Similarly, although the present disclosure focuses on processing a user utterance 102, other inputs such as button press events may be processed in similar fashion. For example, previously-trained code-generation machine 104 may be configured to output a data-flow program 106 for one or more button press event(s) supplementing user speech. Previously-trained code generation machine 104 may be trained to recognize different kinds of non-linguistic input events, e.g., based on a specific button being pressed, timing of the button press relative to other input events, etc.
Data-flow program 106 is shown as a graph including a plurality of function nodes, wherein the function nodes are depicted with inputs and outputs shown by edges/arrows. The data-flow program 106 is configured to produce a return value indicated by the bottom-most arrow. Previously-trained code-generation machine 104 is configured to add any of a plurality of predefined functions 110 to the data-flow program based on the user utterance. Each pre-defined function defines one or more individual steps for processing the user utterance 102. The data-flow program 106 is executable to cause the automated assistant to respond to the user utterance, for example by performing any suitable response action. The pre-defined functions of the data-flow program 106 may be executable to cause the automated assistant to perform any suitable response action, for example, outputting a response as speech and/or text, e.g., outputting an assistant response 120 as shown in FIG. 1A, and/or invoking an API to perform an action using the API, e.g., ordering food from a restaurant, scheduling a ride with a ride-hailing service, scheduling a meeting in a calendar service, placing a phone call. Although the present disclosure focuses on examples in which the automated assistant responds to utterances by outputting an assistant response, e.g., as speech and/or text, these examples are non-limiting and the pre-defined functions and data-flow programs described herein may be configured to cause the automated assistant to respond to utterances in any suitable manner, for example by performing one or more actions using an API instead of or in addition to outputting an assistant response.
The previously-trained code-generation machine 104 described herein may be used to respond to user utterances in any suitable fashion. For example, the previously-trained code-generation machine 104 may be configured to recognize a user utterance, and produce, from the user utterance, a data-flow program that defines an executable plan for responding to the user utterance. The resulting data-flow program may be executed, for example by an automated assistant, to process the user utterance. In some examples, the data-flow program for responding to the user utterance may be executed to respond to the user utterance without needing to generate any
additional code using the code-generation machine 104. In other words, code-generation machine 104 is configured to output a complete plan for processing a user utterance. Alternately or additionally, code-generation machine 104 may be configured to output a data-flow program for responding to a user utterance, wherein some or all of the data-flow program is executed before code-generation machine 104 is used to generate further code defining a further plan for completing processing of the user utterance. Code-generation machine 104 may be used to plan and execute data-flow programs in any suitable fashion, including completing planning before execution and/or interleaving planning and execution in any suitable fashion.
The previously-trained code-generation machine 104 may be based on any suitable technology, such as state-of-the-art or future machine learning (ML), artificial intelligence (AI), and/or natural language processing (NLP) technologies. In some examples, the previously-trained code-generation machine 104 includes an encoder machine configured to encode the user utterance 102 as a semantic feature, e.g., a vector in a semantic vector space learned by the previously-trained code-generation machine 104, and a decoder machine configured to decode the semantic feature by outputting one or more functions from the plurality of pre-defined functions 110. In some examples, the decoder machine is configured to output the one or more functions according to a typed grammar for composing functions from the plurality of pre-defined functions 110, thereby constraining the outputs of the decoder machine to be well-typed, fully-executable data-flow programs. The plurality of pre-defined, composable functions 110 supports a range of different automated assistant behaviors, e.g., invoking APIs, answering questions, looking up data, and/or utilizing historical context from a context-specific dialogue history 130 maintained for the automated assistant. As shown by the dashed arrows in FIG. 1A, the context-specific dialogue history 130 may include any suitable aspects of the dialogue, for example user utterance 102, data-flow program 106, and/or a resulting assistant response 120.
Accordingly, the user utterance 102, data-flow program 106 generated for the user utterance 102, and/or any relevant assistant response 120 by the automated assistant may be stored in the context-specific dialogue history 130. Accordingly, the context-specific dialogue history 130 defines a plurality of concepts, e.g., concepts 130A through 130N. “Concept” is used herein to refer to any relevant or potentially relevant aspects of the interaction between the user and the automated assistant. For example, a concept may include an entity (e.g., a person, place, thing, number, date), an intent of a user query (e.g., an intent to order food, an intent to look up the weather, an intent to schedule a meeting), an action performed by the automated assistant (e.g., ordering food, looking up the weather, invoking an API, looking up information pertaining to an entity, recognizing a specific user utterance, performing a composite action composed of more than one action), or any other suitable feature. A concept may be defined in any suitable manner, for
example based on text content of the user utterance 102. In some examples, a concept 130A is defined in terms of a data-flow program fragment 132A. For example, the data-flow program fragment 132A may include one or more functions configured to look up information pertaining to a specific entity, and/or one or more functions configured to cause the automated assistant to perform a specific action.
In some examples, the plurality of pre-defined functions 110 includes one or more history-accessing functions configured to access the context-specific dialogue history 130. Accordingly, data-flow program 106 may include such history-accessing functions. For example, the plurality of pre-defined functions 110 and data-flow program 106 each include a history-accessing function 112 configured to access the context-specific dialogue history 130 as shown by the arrow. In some examples, the plurality of pre-defined functions 110 includes a search-history function configured to look up a concept from the context-specific dialogue history 130, e.g., a getSalient() function configured to look up an entity that was previously discussed, or to look up an action that was previously performed. In some examples, the plurality of pre-defined functions 110 includes a program-rewriting function parametrized by a designated concept stored in the context-specific dialogue history and configured to generate a new data-flow program fragment related to the designated concept based on concepts from context-specific dialogue history 130, e.g., a Clobber() function configured to generate a new, re-written program based on the designated concept, for example, to write a new program for performing a previous action with regard to a new entity, or for performing a new action with regard to a previously-discussed entity. In some examples, the designated concept stored in the context-specific dialogue history includes a target sub-concept, and the program-rewriting function is further parametrized by a replacing sub-concept for replacing the target sub-concept. Accordingly, the new data-flow program fragment corresponds to the designated concept with the target sub-concept being replaced by the replacing sub-concept. In particular, in some examples, the new data-flow program fragment includes a rewritten data-flow program fragment, based on a historical data-flow program fragment corresponding to the designated concept, wherein a sub-program fragment corresponding to the target sub-concept is replaced by a different sub-program fragment corresponding to the replacing sub-concept.
By looking up and/or rewriting concepts from context-specific dialogue history 130, the automated assistant may be able to repeat actions, perform actions with modifications, look up relevant entities or other details related to previous actions, etc. For example, the automated assistant may repeat an action or perform an action with modifications, by re-executing code based on a program fragment 132A corresponding to a concept 130A from the context-specific dialogue history 130. Furthermore, if any error condition is reached during execution of data-flow program 106 prior to outputting an assistant response 120, the data-flow program 106 and/or other program
fragments stored in context-specific dialogue history 130 may be re-executed either with or without modifications to recover from the error.
Accordingly, FIG. 1B shows a different view of automated assistant system 100 focusing on a search-history function 112 of the plurality of pre-defined functions 110. The search-history function 112 is configured to process an ambiguous user utterance 102 by resolving any ambiguities using concepts from context-specific dialogue history 130. As shown by the dashed arrow leading from context-specific dialogue history 130 to search-history function 112, search-history function 112 is configured to access one or more concepts from the context-specific dialogue history 130 to determine a disambiguating concept 134.
In some examples, disambiguating concept 134 is defined by a program fragment 136. As an example, if user utterance 102 refers to an ambiguous entity, e.g., by a pronoun or partial name, such as “Tom”, search-history function 112 may be configured to search for the ambiguous entity, e.g., based on the partial name “Tom”, in the context-specific dialogue history 130 to find a clarifying entity that matches the ambiguous entity, e.g., an entity having a first name “Tom” or a related name such as “Thomas”. In some examples, the clarifying entity may be defined by a full name, e.g., “Thomas Jones”. In other examples, the clarifying entity may be defined by code configured to find the clarifying entity, e.g., code to look up people named “Tom” in an address book of the user.
In some examples, disambiguating concept 134 indicates a bridging anaphora between a user utterance and a previous user utterance from the context-specific dialogue history. As an example, if a user asks, “when is my lunch with Charles?” and then subsequently asks, “how long will it take to get there?” the word “there” in the subsequent user utterance refers to the location of the lunch with Charles. Accordingly, search-history function 112 may be configured to search for a concept corresponding to the location of the lunch with Charles. For example, the concept may include a data-flow program fragment that recursively uses the search-history function 112 to find a salient event, namely the meeting with Charles, as well as further instructions to obtain the location of the salient event. More generally, concepts found by the search-history function 112 may include data-flow programs that recursively invoke the search-history function 112 to find any suitable sub-concepts, e.g., so that a salient concept is defined in terms of searching for other salient sub-concepts.
FIG. 1C shows another view of automated assistant system 100 focusing on a program-rewriting function 112’. Program-rewriting function 112’ is configured to produce a rewritten concept 140, e.g., defined by a rewritten program fragment 142, by starting with a designated concept 152 from the context-specific dialogue history 130, wherein the designated concept 152 includes at least one replacement target sub-concept 154 to be modified/replaced with a different,
replacing sub-concept 156. As an example, designated concept 152 may refer to an action by the automated assistant including “placing a reservation at a local sushi restaurant”. Accordingly, target sub-concept 154 may be the description “sushi restaurant,” and the replacing sub-concept 156 may be an alternate description “burrito restaurant.” In particular, designated concept 152 may be defined by a program fragment for invoking an API to place restaurant reservations, wherein the API is parametrized by a program fragment corresponding to target sub-concept 154 that indicates a restaurant at which to place reservations. Accordingly, program-rewriting function 112’ is configured to output a re-written concept 140 that corresponds to “placing a reservation at the local burrito restaurant,” so that re-written concept 140 may be used to perform a new action that includes placing the reservation at the local burrito restaurant, e.g., by executing program fragment 142.
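Continuing the earlier sketch, a Clobber()-style program-rewriting function might be approximated as below. Treating program fragments as plain strings is a simplification for illustration, and the Concept type is the hypothetical one from the previous sketch, not the representation of this disclosure.

```python
# Hypothetical Clobber()-style rewrite: copy the designated concept's
# fragment, substituting the replacing sub-concept for the target.
def clobber(designated: Concept, target: Concept, replacing: Concept) -> Concept:
    return Concept(
        kind=designated.kind,
        description=designated.description.replace(
            target.description, replacing.description),
        program_fragment=designated.program_fragment.replace(
            target.program_fragment, replacing.program_fragment))
```

For instance, rewriting a reserve-at-“sushi restaurant” fragment with a “burrito restaurant” sub-concept yields a fragment that, when executed, places the reservation at the burrito restaurant instead.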
FIG. 1D shows another view of automated assistant system 100 focusing on handling errors that may arise during execution of data-flow program 106. Although not shown in FIG. 1D, as in FIGS. 1A-1C, the user utterance 102, data-flow program 106 and/or assistant response 120 may be stored into context-specific dialogue history 130 while processing user utterances.
If an error condition is reached during execution of data-flow program 106 prior to outputting an assistant response 120, then the data-flow program may be suspended and saved as suspended execution 160. Error-handling execution machine 170 is configured to recover from the error condition in order to produce the assistant response 120’. For example, error-handling execution machine 170 may implement method 400, which is described with regard to FIG. 4, below. To recover from the error condition, error-handling execution machine 170 may modify and/or re-execute the suspended execution 160. Alternately or additionally, error-handling execution machine 170 may execute an alternate program fragment 180. For example, error-handling execution machine 170 may modify the suspended execution 160 by using a program-rewriting function to replace a program fragment of the data-flow program 106 with the alternate program fragment 180, and/or by executing the alternate program fragment instead of the data-flow program 106. The alternate program fragment 180 may be derived from the context-specific dialogue history 130, e.g., the alternate program fragment 180 may be a previously-executed program fragment, built from the plurality of pre-defined functions 110 and/or output by the previously-trained code-generation machine 104. In some examples, previously-trained code-generation machine 104 is configured to recognize the error condition and to output a new program fragment configured to recover from the error. For example, previously-trained code-generation machine 104 may be trained with regard to one or more training examples, each training example including an example error condition, and an example data-flow program fragment for responding to the error. For example, previously-trained code-generation machine 104 may be trained with
regard to a large plurality of annotated dialogue histories, where some or all of the annotated dialogue histories include an occurrence of an error condition.
In some examples, the error condition may arise due to an ambiguity in the user utterance 102 in which insufficient information is provided by the user utterance 102 to fully serve the user based on the user utterance. As an example, if the user utterance is “Schedule a meeting with Tom” but there is more than one “Tom” in the user’s address book, it may not be clear with whom to schedule the meeting. Accordingly, in some examples, error-handling execution machine 170 is configured to execute code to produce an initial assistant response 120’ including a clarifying question 122. The clarifying question 122 is output for the user to respond to in a new, clarifying user utterance 102. Accordingly, the clarifying user utterance can be processed using the previously-trained code-generation machine 104 to produce a new data-flow program 106 for responding to the clarifying user utterance. The previously-trained code-generation machine 104 may be trained via supervised training on a plurality of annotated dialogue histories. In many examples, the previously-trained code-generation machine 104 is trained on a large plurality of annotated dialogue histories, e.g., hundreds, thousands, tens of thousands or more. Annotated dialogue histories describe state information associated with a user interacting with an automated assistant, annotated with an example data-flow program for responding to the user interaction. For example, an annotated dialogue history may include a context-specific dialogue history according to the present disclosure, annotated with an example data-flow program suitable for execution by the automated assistant in response to the context established in the context-specific dialogue history. In examples, a context-specific dialogue history includes a plurality of events arranged in a temporal order, e.g., time-stamped events, including user utterances, data-flow programs executed by an automated assistant, responses output by the automated assistant, and/or error conditions reached while executing data-flow programs.
As a non-limiting example, an annotated dialogue history may include a context-specific dialogue history in which the most recent event is an example user utterance, annotated with an example data-flow program for responding to the example user utterance with regard to the context established by the context-specific dialogue history. Accordingly, the previously-trained code-generation machine 104 may be trained to reproduce the example data-flow program given the example user utterance and the context-specific dialogue history. The example data-flow program may include any suitable functions in any suitable sequence/arrangement, so that via training, the code-generation machine is configured to output suitable functions in a suitable sequence. For example, the example data-flow program may include the search-history function and accordingly, the code-generation machine may be trained to output the search-history function along with other functions in a suitable sequence for responding to a user utterance, e.g., to cause
an automated assistant to perform any suitable response action, such as outputting a response as text and/or speech, or invoking an API. In some examples, an annotated dialogue history includes a context-specific dialogue history in which the most recent event is the occurrence of an error condition, e.g., instead of a user utterance being the most recent event, annotated with a data-flow program for recovering from the error. Accordingly, the previously-trained code-generation machine 104 may be trained with regard to such annotated dialogue histories to generate suitable data-flow programs for recovering from error conditions.
Annotated dialogue histories may be obtained in any suitable manner, for example, from human demonstrators. For example, one or more human demonstrators may be shown context-specific dialogue histories, e.g., context-specific dialogue histories derived from usage data obtained from interaction with humans, and/or machine-generated context-specific dialogue histories, and for each context-specific dialogue history, be asked to provide a suitable data-flow program for responding to the context-specific dialogue history. The data-flow programs provided by the human demonstrators may use the pre-defined functions in any suitable manner to perform a wide range of tasks responsive to user utterances and/or error conditions.
For example, based on being shown an example user utterance or an example error condition, the human demonstrators may provide example data-flow programs that perform any suitable calculations, output responses, e.g., to ask clarifying questions of the user, or answer a query issued by the user, listen for utterances from a user, e.g., to obtain clarifications from the user, invoke APIs, etc. Furthermore, the example data-flow programs may include the search-history function, e.g., “getSalient()”, and/or program-rewriting function, e.g., “Clobber()”, invoked with any suitable parameters, for example, to execute data-flow program fragments from the contextspecific dialogue history. Accordingly, by training on a plurality of annotated dialogue histories, the code-generation machine 104 may be trained to generate data-flow programs similar to those provided by the human demonstrators, in order to respond to user utterances and/or recover from errors.
The data-flow programs, e.g., data-flow programs generated by the previously-trained code-generation machine and/or example data-flow programs, are built from a plurality of pre-defined, composable functions. The pre-defined, composable functions may be composed into programs that invoke the pre-defined, composable functions in any suitable order and parametrize the pre-defined, composable functions in any suitable fashion. Accordingly, the previously-trained code-generation machine may be trained to output suitable data-flow programs for a user utterance, based on the example data-flow programs provided during training. The previously-trained code-generation machine is not limited to hard-coded behaviors. For example, instead of or in addition to responding to example user utterances seen during training, the previously-trained
code-generation machine is configured to process novel user utterances (that may not have been provided during training) by generating corresponding, novel data-flow programs (that also may not have been provided during training).
In order to generalize from specific training examples seen during training and respond to novel user utterances with novel data-flow programs, the previously-trained code-generation machine may be trained in any suitable fashion, e.g., using any suitable ML, AI, and/or NLP models as will be described below with regard to FIG. 6, on any suitable training data, for example, a large plurality of annotated dialogue histories. In some examples, the previously-trained code-generation machine may be trained with regard to a loss function for assessing whether a dataflow program is a suitable response to a user utterance, wherein the loss function is configured to indicate zero loss or a relatively small loss, e.g., requiring no adjustment, or a relatively smaller adjustment of training parameters, when the code-generation machine successfully reproduces a training example, e.g., by producing the same data-flow program as was provided by a human annotator, given a context-specific dialogue history. However, although the loss function may be configured for a relatively small loss when the code-generation machine successfully reproduces a training example, the loss function may also be configured for a relatively small loss when the code-generation machine reproduces a different data-flow program, for example, a data-flow program that has a similar effect when executed, and/or a data-flow program that is indicated to be satisfactory by a human user, e.g., a human demonstrator and/or end-user of an automated assistant device. These example approaches for generalization during training are non-limiting, and any suitable AI, ML, and/or NLP techniques may be utilized to suitably train a code-generation machine to generate suitable data-flow programs in response to a wide variety of user utterances.
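As one way to picture the relaxed loss described above, the following sketch is an assumption for illustration; this disclosure does not give a formula. It treats the loss as token-level negative log-likelihood that is waived when the generated program matches the annotation, has the same effect when executed, or is marked satisfactory by a human.

```python
def program_loss(token_log_probs: list[float],
                 generated_program: str,
                 annotated_program: str,
                 same_effect: bool,
                 human_approved: bool) -> float:
    """Cross-entropy-style loss, waived for acceptable alternative programs."""
    if generated_program == annotated_program or same_effect or human_approved:
        return 0.0
    return -sum(token_log_probs) / max(len(token_log_probs), 1)
```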
In some examples, error conditions are recognized by operating a previously-trained error detection model. For example, the previously-trained error detection model may be trained via supervised training on a plurality of annotated dialogue histories, wherein the annotated dialogue histories are annotated to indicate when errors occur. For example, the annotated dialogue histories may be obtained by showing context-specific dialogue histories to one or more human demonstrators and asking the human demonstrators to indicate when the context-specific dialogue histories indicate an erroneous state.
In addition to history-accessing functions such as a search-history function and a program-rewriting function, the plurality of pre-defined composable functions may include any suitable functions, for example, a listen function configured to listen for a specific user utterance before continuing with execution of a data-flow program and/or while speculatively executing selected nodes of a data-flow program, a response function configured to output a description of a value
determined during execution of a data-flow program, and/or a primitive calculation function for processing values obtained from user utterances and/or computed during execution of the dataflow program, e.g., data structure operations such as forming tuples from data, or arithmetic operations.
In some examples, the plurality of pre-defined composable functions includes a foreign function configured to invoke a foreign (i.e., third party) API. For example, foreign APIs may be invoked to interact with real-world services, e.g., schedule a car in a ride-hailing service, order food, or make a reservation at a restaurant. In some examples, the plurality of pre-defined composable functions includes an inference function configured to perform a calculation with regard to a result of a foreign function. The inference function may encapsulate high-level behaviors with regard to the API, which would otherwise require multiple different low-level functions using the API. For example, a foreign ride-hailing API may support functions for scheduling a car, adding a stop to a route, and finalizing a scheduled route. Accordingly, an inference function may be configured to receive a destination and, based on the destination, schedule a car, add a stop corresponding to a pickup location for the user, add a stop corresponding to the destination, and finalize the scheduled route including the stops corresponding to the pickup location and destination. By using inference functions to encapsulate high-level behaviors, the code-generation machine may be able to readily output well-typed code for performing the high-level behaviors using a foreign API, without needing to output individual steps using low-level functions of the foreign API. In some examples, an inference function may be defined with regard to one or more constraints and executing the inference function may include running a constraint satisfaction program to satisfy the one or more constraints, before invoking a foreign API using parameters defined by the solution for the constraints. In some examples, the one or more constraints may include “fuzzy” or “soft” constraints and accordingly, solving the constraints may include executing an inference program suitable for “fuzzy” logical inference, for example a Markov logic inference program.
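The ride-hailing example might look like the following sketch, where the api object and its schedule_car, add_stop, and finalize_route methods are assumed names standing in for an arbitrary foreign API rather than any real service.

```python
# Illustrative inference function: one high-level call expands into the
# low-level foreign-API sequence the code-generation machine would
# otherwise have to emit step by step.
def ride_to(api, pickup_location: str, destination: str) -> str:
    ride_id = api.schedule_car()
    api.add_stop(ride_id, pickup_location)
    api.add_stop(ride_id, destination)
    return api.finalize_route(ride_id)
```

Because the code-generation machine only needs to emit the single ride_to call, it has no opportunity to mis-order or omit the low-level steps.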
In some examples, the plurality of pre-defined composable functions includes a user-customized function that is configured to access a user customization setting and to perform a calculation based on the user customization setting. For example, the user-customized function may be configured to determine whether the user is free based on a user-customized schedule, e.g., calendar data. User-customized functions may be implemented using foreign functions configured to invoke a foreign API, e.g., an API for looking up calendar data.
In some examples, the plurality of pre-defined composable functions includes an intelligent decision function, wherein the intelligent decision function is configured to use a previously- trained machine learning model to perform a calculation. As an example, the search-history function may be an intelligent decision function configured to use a previously-trained relevance
detection machine to search the context-specific dialogue history. As another example, the plurality of pre-defined composable functions may include an intelligent decision function configured to assess whether it is “morning” in a user-specific and/or population-specific way, e.g., the function may be configured to recognize that what time the user considers “morning” could vary depending on the day of the week or time of the year. For example, the intelligent decision function may be configured to assess that it is “morning” if it is between 6 AM and 11 AM on a weekday, but may be configured to assess that it is morning if it is between 9 AM and 12 PM on a weekend. The intelligent decision function may be trained in any suitable fashion, for example based on labeled examples of times and whether or not a user considers the time morning. As with user-customized functions, the intelligent decision function may take into account auxiliary information such as a work schedule, calendar, and/or mobile phone usage of a user, for example to determine if it is “morning” based on whether the user has likely woken up yet on a given day. In some examples, an intelligent decision function may be configured to assess an ambiguity, e.g., an ambiguous user utterance, or an ambiguous constraint, and select a disambiguating data-flow program to respond to the ambiguity.
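A toy, rule-based stand-in for such an intelligent decision function is sketched below; a deployed version would instead use a trained model and the per-user signals discussed above.

```python
from datetime import datetime

def is_morning(now: datetime) -> bool:
    """Weekday mornings: 6 AM-11 AM; weekend mornings: 9 AM-12 PM."""
    if now.weekday() >= 5:  # Saturday or Sunday
        return 9 <= now.hour < 12
    return 6 <= now.hour < 11
```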
In some examples, the plurality of pre-defined composable functions includes a macro function, wherein the macro function includes a plurality of other pre-defined composable functions, and is configured to execute the plurality of other pre-defined composable functions. For example, a macro function can be used to sequence and organize related low-level steps of a high-level task using low-level functions. By using macro functions to encapsulate high-level behaviors, the code-generation machine may be able to readily output well-typed code for performing the high-level behaviors without needing to output individual steps using low-level functions.
Turning briefly to FIG. 1E, FIG. 1E shows a first example of a dialogue between a user and an automated assistant, in which the automated assistant processes a user utterance 102’ in which there is no ambiguity. Context-specific dialogue history 130 is shown including concepts 130A, 130B, and further concepts through 130N. However, in this example, the user utterance 102’ is unambiguous and may be processed without referring to the context-specific dialogue history 130. The user asks, “When is my next meeting with Tom Jones?” and the previously-trained code-generation machine outputs a data-flow program 106’.
Data-flow program 106’ is shown with a non-limiting example syntax in which square brackets indicate return values of expressions, e.g., [events] indicates a return value of an expression for finding all events matching a set of criteria, and [time] indicates a start time of the first such event in the set. The example syntax includes various functions including the search-history function, e.g., “getSalient()”, and program-rewriting function, e.g., “Clobber()”, as well as other functions such as primitive functions, API functions, etc., as described herein. The example functions are
shown in a function-call syntax indicating the name of a function, e.g., “Find”, “getSalient”, “Clobber”, among other named functions, and parentheses enclosing parameters for invoking the function. The example function-call syntax is non-limiting, and functions may be invoked and parametrized in any suitable manner, e.g., using any suitable formal-language syntax. The example function names represent pre-defined functions with implementations that are not shown herein. For example, each pre-defined function may be implemented by any suitable sequence of one or more instructions executable by the automated assistant to perform any suitable steps, e.g., to result in behavior indicated by the function name, behavior set forth in this disclosure, and/or any other suitable behavior such as performing calculations, invoking APIs, outputting audio, visually presenting information via a display. For example, the “Find” function may be implemented in any suitable manner, for example, by invoking an API to look up information stored in a user calendar.
As shown, the data-flow program finds events that have an attendee named “Tom Jones.” The data-flow program computes values such as [events] and [time] and then outputs a response 120’ using a “describe” function configured to output a description of the value [time], e.g., as speech via a speaker of an automated assistant device. Accordingly, response 120’ indicates the meeting time for the next meeting with Tom Jones, namely “at 12:30”.
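The following is one possible Python rendering of data-flow program 106’, with stub implementations standing in for the pre-defined Find and describe functions; their real implementations, e.g., a calendar API lookup, are left unspecified in this disclosure, and the stub calendar data is purely illustrative.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    start: datetime
    attendees: list[str]

# Stub calendar; a real Find function might invoke a calendar API.
CALENDAR = [Event(datetime(2023, 5, 1, 12, 30), ["Tom Jones", "me"])]

def find(attendee: str) -> list[Event]:
    return [e for e in CALENDAR if attendee in e.attendees]

def describe(value: datetime) -> str:
    return f"at {value:%H:%M}"

events = find("Tom Jones")   # [events]
time = events[0].start       # [time]
print(describe(time))        # -> "at 12:30"
```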
Standard conversational semantic parsing maps a complete user utterance into an executable program, e.g., a data-flow program, executes the program to query information sources or effect external actions, then, after execution, responds to the user. This process can be slow and will lag behind the user utterance when the program contains expensive function calls, particularly if the function calls are nested and cannot be parallelized.
As an alternative, latency may be reduced by predicting and speculatively executing function calls while the user is still speaking based on partial results from automatic speech recognition (ASR) and the current state of execution. Identifying and pre-executing partial programs while the user is still speaking provides the technical benefit of expediting the final response to the user, thus improving user satisfaction due to improved computer-human interaction.
Effectively, this constitutes an online decision problem, wherein decisions to invoke particular functions on particular arguments may be made before all information has arrived. This is referred to herein as “online semantic parsing”. Such predictive decisions may include spotting user intents that have already been expressed (in some cases, without the help of aligned training data) and anticipating user intents from context that have not been expressed yet.
A generalized framework is provided herein, comprising a two-stage approach. First, a learned prefix-to-program prediction module is invoked. One example employs a neural graph-based semantic parser, which is specially trained to parse an incomplete utterance into a full program.
Additionally or alternatively, a pipeline that uses a language model (LM) may be employed to predict how a user will finish the incomplete utterance. The predicted completed utterance may then be parsed. For either approach, subprograms may be selected for execution as soon as the semantic parser predicts that the subprogram has a high probability of being in the correct parse. For example, a thresholding heuristic may be applied for selecting subprograms for early execution.
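The two-stage pipeline can be condensed into the following sketch, where complete_utterance (the LM completion stage), parse (the semantic parser), and subgraph_probability are assumed wrappers around the models described herein, not concrete APIs.

```python
def propose_subprograms(prefix: str, context: str, threshold: float = 0.8):
    """Predict a full program from an utterance prefix, then keep only the
    nodes whose subgraphs clear the probability threshold."""
    utterance = complete_utterance(prefix, context)  # stage 1: LM completion
    program = parse(utterance, context)              # stage 2: semantic parse
    return [node for node in program.nodes
            if subgraph_probability(program, node) >= threshold]
```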
FIG. 2 shows a flow-diagram for an example computer-implemented method 200. In some examples, method 200 is executed by a computing system configured to receive speech input, the output of automatic speech recognition algorithms, text input, etc. such as a digital assistant, kiosk, smart phone, tablet device, computer, driving assistant, etc. Method 200 describes example approaches for reducing latency that, in some examples, include predicting and executing function calls while the user is still speaking.
This example is primarily described with reference to data-flow graphs, though the method is applicable to other notations, programming languages and styles. In a data-flow graph, a program is represented as a labeled directed acyclic graph G = (V, E), where each node v ∈ V represents a function invocation or a constant value, and each directed edge u → v ∈ E represents that v fills an argument of the function u. Positional arguments are given edge labels such as arg0, arg1, etc. The terms “graph” and “program” are used interchangeably herein.
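One plausible in-memory encoding of such a labeled graph is sketched below; this is an illustrative data structure, not the representation mandated by this disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int
    op: str  # function name, e.g., "Find", or a constant literal

@dataclass
class Graph:
    nodes: dict[int, Node] = field(default_factory=dict)
    # Edge (u, v, label): node v fills the `label` argument (e.g., "arg0") of u.
    edges: list[tuple[int, int, str]] = field(default_factory=list)

    def arguments(self, u: int) -> list[int]:
        """Nodes that must return before node u can be executed."""
        return [v for (src, v, _label) in self.edges if src == u]
```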
At 210, method 200 includes sequentially receiving a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance. In some examples, each of the plurality of utterance prefixes are received from a speech to text module that employs automatic speech recognition algorithms. In some examples, the plurality of utterance prefixes are accompanied by contextual information such as the identity of the speaker, the location of the speaker, any prompts presented prior to receiving speech etc.
Continuing at 215, actions are described that apply to each sequentially received utterance prefix. For example, an offline system that models p(G | c, u) (where p is a distribution, G is an executable dataflow program, c is a context of a dialog history, and u is a user utterance) may only predict the program after the user utterance u has been fully received. In contrast, method 200 allows the system to simultaneously parse each partial utterance as the user utters it, so as to pre-execute subprograms and thus reduce the final response time. In contrast to simultaneous machine translation (MT) approaches, in some examples, the user is not presented any output until their utterance is complete. As such, speculatively executing a predicted subprogram in the background does not commit the system to using the predicted subprogram in the final result. Any parse of u[m−1] therefore does not constrain a parse of u[m]. Rather, in some examples, each prefix is re-parsed from scratch after each partial utterance is received.
At 220, method 200 includes predicting a complete dataflow program based on the utterance prefix, the complete dataflow program including a plurality of program nodes to be executed to satisfy the full user utterance.
For task-oriented dialogue systems, an executable complete dataflow program G is predicted in response to a user utterance prefix u[m] and in some examples is predicted further based on context c of a dialogue history. As an example, the utterance prefix is part of a token sequence u = (u1, u2, . . . , u|u|) and the context is encoded as a sequence c = (c1, c2, . . . , c|c|).
As an example, the value u[m] is used to denote the m-th prefix presented to an online system, and tm is used to denote the time at which it is presented. Additionally or alternatively, t denotes the time at which the complete utterance u is presented. In some examples, different time points are recorded, e.g., one or more times at which a function invocation is selected by the system for execution, the time it is actually executed, and the time it returns.
A selected function invocation is generally not considered to be executed until its arguments have returned from execution. However, by that point, the system may have deselected the function, and thus will not execute it, as the system’s predictions may have changed based on additional input.
In some examples, a system uses the noisy partial outputs returned by an ASR system from successively longer speech prefixes. In some examples, each partial ASR output u[m] is returned at some time tm ∈ ℝ. In some such examples, the system appends one or more words to the previous output u[m−1], and/or revises some words.
In some examples, after each successive utterance prefix u[m], the complete graph G is predicted based on the current prefix u[m] and context c. As one example, the single most probable G is predicted. Additionally or alternatively, an estimate of the distribution p(G | c, u[m]) is constructed. By predicting complete programs, more accurate fragments are likely to be yielded, by making the latent structure explicit. In some examples, the complete dataflow program is predicted by a machine learning model previously trained with training data including pairs of training utterance prefixes and training complete dataflow programs.
Generally, any graph G is able to be encoded as a sequence a = (v1, e1, v2, e2, . . . , v|V|, e|V|). Each element of a can be regarded as an action that adds a node or a sequence of edges to the graph. In some examples, subgraphs correspond to contiguous substrings of a, but this is not always the case.
Each vk corresponds to a node, representing a constant value or function invocation, while each ek is a subsequence that lists all edges between vk and earlier nodes. In some examples, at training
time, graphs are converted to action sequences that enable training a sequence-to-sequence model. At inference time, the model may output the action sequence, from which the graph can be constructed. Each node and each edge in G corresponds to one token in a, except for string literals, which can span multiple action tokens. A token of a string literal may appear directly as an action or can be copied from the i-th token of the source.
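A simplified version of that conversion is sketched below, reusing the illustrative Graph type from the earlier sketch and assuming the given node order is already a valid traversal order; the action string format is a made-up placeholder.

```python
def linearize(graph: Graph, node_order: list[int]) -> list[str]:
    """Emit one node action (v_k), then its edge actions (e_k), per node."""
    actions: list[str] = []
    emitted: set[int] = set()
    for k in node_order:
        actions.append(graph.nodes[k].op)          # v_k: node action
        for (u, v, label) in graph.edges:          # e_k: edges to earlier nodes
            if (u == k and v in emitted) or (v == k and u in emitted):
                actions.append(f"{label}:{u}->{v}")
        emitted.add(k)
    return actions
```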
In some examples, the machine learning model is previously trained to at least locally maximize Σn log p(an | c, u[m], a1:n−1), where: a is a sequence of actions, each of which contributes to adding a node and/or a sequence of edges to the complete dataflow program; c is a context of a dialogue history; and u[m] is the utterance prefix.
In some examples, the model p(G | c, u[m]) is trained to map each prefix of u to the complete graph. In some such examples, every ((c, u); G) pair in the original training data is multiplied into many training pairs ((c, u[m]), G). In some such examples, the full graph is used as the target, rather than attempting to predict only the part of the graph that aligns to the prefix. In this way, the method does not require any alignment.
Such a model may attempt to predict any function calls that are likely based on the current prefix, even if they have not been explicitly mentioned yet. In some cases, the target graph is unreachable, such as when the target graph contains string literals that have not been seen yet, e.g., a copy function for a prefix in a position beyond the current prefix. To address this, the target action sequence may be modified to instead copy the final position of the prefix, e.g., using a MASK token as a placeholder for all future tokens. In this way, hallucination of free text by the model is reduced or disabled, while keeping the graph structure intact with the MASK token-based placeholder. As used herein, hallucination generally refers to the generation of one or more predicted utterances by a language model based on previously received utterances, utterance prefixes, and/or applicable context. In some examples, the hallucinated utterances are generated with incomplete intent, and thus include utterances that are not ultimately verifiable or supported by the complete dataflow program.
As such, in some examples, the complete dataflow program is predicted by hallucinating a predicted full utterance from the utterance prefix with a previously-trained language model. As a non-limiting example, the predicted full utterance is hallucinated from the utterance prefix by a Transformer-based language model.
In some examples, the machine learning model is previously trained to at least locally maximize Σn log p(an | c, u, a1:n−1), where: a is a sequence of actions, each of which contributes to adding a node and/or a sequence of edges to the complete dataflow program; c is a context of a dialogue history; and u is the predicted full utterance.
In other words, the full utterance may first be predicted from the prefix. In some examples, a pretrained bidirectional and auto-regressive transformers (BART) model is fine-tuned so that it can map an utterance prefix, e.g., terminated with the MASK symbol, to a full utterance, e.g., freely hallucinating content words. In some embodiments, the training data includes one example for each prefix of each utterance, so the fine-tuning objective is to maximize the sum of log p(u | c, u[m]) over all prefixes u[m] of all utterances u.
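Preparing such fine-tuning data might look like the following sketch, where each utterance yields one (prefix + MASK, full utterance) pair per prefix length; the field names and MASK string are illustrative assumptions.

```python
MASK = "<MASK>"

def make_completion_examples(context: str, utterance: str):
    """Yield one (prefix + MASK -> full utterance) pair per prefix length."""
    tokens = utterance.split()
    for m in range(1, len(tokens)):
        prefix = " ".join(tokens[:m]) + " " + MASK
        yield {"source": context + " | " + prefix, "target": utterance}
```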
Returning to FIG. 2, at 230, method 200 includes selecting, from the predicted complete dataflow program, one or more program nodes to speculatively execute based on at least the utterance prefix. In some examples, once the data-flow program has been predicted, nodes are selected based on the probability that they will eventually be executed, e.g., using a heuristic approximation to their marginal probability. As described further herein and with regard to FIGS. 4 and 5, selecting a node is additionally or alternatively based on a predicted execution cost and a predicted effect on overall latency. Additionally or alternatively, the proposal and selection steps may be combined into a single step that directly predicts some graph fragments to execute.
For example, at 235, steps are presented for selecting program nodes for each candidate node of the complete dataflow program predicted based on the utterance prefix. For a given graph Gm proposed from u[m], it is beneficial to speculatively execute subgraphs that are likely to be executed. In some examples, selection includes determining which program nodes, e.g., function invocations, are worth executing at this time, given what is known about received utterance prefixes, context, etc. The selection process may iterate for each received utterance prefix. In some such examples, the subset of program nodes that are selected function as an update that replaces the previous list of selected nodes. Any formerly selected nodes that were not executed, e.g., still waiting for their arguments, are not executed until they are selected again.
In some embodiments, at 240, method 200 includes determining a conditional probability of a candidate subgraph including the candidate node and its descendants. For example, calculating the conditional probability of the candidate subgraph includes multiplying predicted probabilities of all relevant actions responsible for building the candidate subgraph. In some examples, the conditional probability of a subgraph is obtained by marginalizing over all possible action sequences, as well as all completions u where applicable. This probability may be approximated by sampling from the models. For simplicity and efficiency, the probability of a subgraph of Gm is optionally approximated by the product of the conditional probabilities of the predicted actions that actually built that subgraph; that is, in some examples, each subgraph of the predicted Gm is built by a subset of the predicted actions a. This approximates the marginal probability of the relevant action subsequence by its conditional probability given preceding actions. Nodes v ∈ Gm are then selected such that the subgraph rooted at v has probability above a constant
threshold.
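The product-of-action-probabilities approximation can be sketched as follows, with children mapping each node to its argument nodes and action_prob giving the parser's conditional probability for the action that emitted each node; both are assumed inputs for illustration.

```python
import math

def subgraph_probability(children: dict[int, list[int]], root: int,
                         action_prob: dict[int, float]) -> float:
    """Approximate p(subgraph rooted at `root`) as the product of the
    conditional probabilities of the actions that built it."""
    stack, seen, log_p = [root], set(), 0.0
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        log_p += math.log(action_prob[v])
        stack.extend(children.get(v, []))
    return math.exp(log_p)

def select_nodes(children, action_prob, threshold: float = 0.8) -> list[int]:
    return [v for v in action_prob
            if subgraph_probability(children, v, action_prob) >= threshold]
```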
In some examples, when the entire dataflow program is predicted, the conditional probability is based on at least the predicted unseen tokens of u and/or the seen tokens of u. Because all of the tokens potentially contribute to such a subgraph, there is no alignment. As such, when p(u | c, u[m]) is small (e.g., below a threshold), all subgraphs are regarded as uncertain.
At 250, method 200 includes selecting the candidate node if the conditional probability is greater than a threshold. Certain exceptions to this process may apply. For example, a node whose subgraph has previously executed, e.g., after predicting and selecting it from a previous prefix, would not be selected again. This would be redundant, as either the results of execution are known or pending. Nodes whose function invocations have side effects are generally not selected until the utterance is complete, as these nodes may be unsafe to pre-execute. Once the utterance is complete, all unexecuted nodes of the final predicted graph, G, may be selected, as they are both safe and necessary to execute.
In some embodiments, at 260, method 200 includes speculatively executing a selected program node prior to receiving a final version of the full user utterance. In some examples, the selected program node is executed after arguments of children of the selected program node have returned from execution, provided the selected program node remains selected based on subsequently received utterance prefixes, subsequently predicted complete dataflow programs, and/or subsequently understood context.
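A minimal scheduling loop consistent with that rule is sketched below; selected, children, results, and execute are assumed inputs, and side-effect checks and deselection on new prefixes are handled outside this sketch.

```python
def run_ready_nodes(selected: set[int],
                    children: dict[int, list[int]],
                    results: dict[int, object],
                    execute) -> None:
    """Execute selected nodes whose argument nodes have all returned."""
    progress = True
    while progress:
        progress = False
        for v in list(selected):
            args = children.get(v, [])
            if v not in results and all(c in results for c in args):
                results[v] = execute(v, [results[c] for c in args])
                progress = True
```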
By implementing a strong graph-based semantic parser, either trained to parse prefixes directly or combined with a pre-trained language model for utterance completion, followed by a heuristic for subgraph selection, it is possible to reduce latency significantly. As such, the technical effects of implementing such a method include improved human-computer interactions. Different types of parsers and executable semantic representations may be implemented with the general framework of method 200.
As a visual example, FIG. 3 schematically depicts the execution of nodes of a data-flow program based on received utterance prefixes. An example framework is shown for simultaneous semantic parsing. Graph nodes and edges are represented as actions in the target (following a depth-first traversal order).
At 300, a full utterance 305 is received in an offline parsing scenario. A full graph 310 is predicted including a series of actions. Nodes (circles) and edges (connecting arrows) can be executed, presuming they are not dependent on any child actions returning values.
At 320, a partial online parsing scenario is depicted. At each prefix position, the model first proposes a full graph, which is then pruned based on predicted probabilities. The surviving (selected) nodes, shown in black (also listed in bold italics), can be executed once their children have
returned. As new utterance prefixes are received, the graph is amended. As speculatively executed nodes return values, upstream nodes that depend on those nodes can be executed.
At 325, a first utterance prefix 330 is received. As described with regard to FIG. 2, the system may predict programs responsive to the receipt of each partial utterance and/or utterance prefix from the ASR. Graph 335 is predicted, and node D is selected for speculative execution.
At 340, a second utterance prefix 345 is received. This causes graph 335 to be updated to graph 350. In graph 350, an additional node X connected to node B is added, and an additional node C is added between nodes A and E. As node D has returned a value, nodes B and X can be speculatively executed, as can node E.
At 355, a third utterance prefix 360 is added. This causes graph 350 to be updated to graph 365. In graph 365, an additional node Z connected to node C is added, a new node Y is added to replace node B, and node E has been moved to depend from nodes Y and C, replacing node X. As nodes D and E have returned values, nodes C, Y, and A can be speculatively executed.
In some examples, the subgraph selection decisions are made by a learned model that considers the cost and benefit of each call, rather than using a fixed threshold. The semantic parser could also condition on the execution status, instead of operating separately from the learned model. All subgraphs with the same probability have the same risk of being useless work, but they do not all represent the same amount of useless work. Some incorrect subgraphs will require more computation than others. Similarly, subgraphs with the same probability share the same likelihood of being useful work, but they are not all equally useful in reducing latency. Some correct subgraphs can be postponed for longer without affecting the overall latency, because they will run more quickly or will not be needed as soon. As such, it may be appropriate to raise the threshold for invoking a subgraph and wait for more evidence that the subgraph is actually correct.
Method 200 describes a probability-based policy for deciding whether to execute a predicted function call based on whether its estimated probability is above a constant threshold. However, even with an accurate probability estimator, this policy may not always be effective, as it does not consider the cost of executing a possibly incorrect call or the benefit of executing a call before more information is available. In FIG. 4, a flow diagram illustrates an example method 400 for training a speculative-execution policy based on execution costs. In some examples, method 400 is executed by a computing system employing numerous models and subsystems, such as an API for calling function invocations, an ASR for translating speech to utterance prefixes, a model for predicting full data-flow graphs based on the utterance prefixes, a translator, such as a seq2seq model, for predicting future utterance prefixes, and a policy trainer.
Method 400 is presented for training a system that may begin with a probability-based policy, but is iteratively improved using a policy trainer, such as the locally optimal learning to search (LOLS)
algorithm. For each utterance on which the system runs, it can be measured how much changing each one of the system's decisions would have helped or hurt, assuming the preceding and subsequent decisions were still made by the current policy. Such counterfactual measurements may be obtained rapidly and do not require supervised data. The policy trainer may then use them to improve the parameters of the policy. The LOLS algorithm, as a non-limiting example, is initialized using a non-optimal reference policy, iteratively improving the policy through sequential rounds of policy execution in a search space, selecting the action leading to the minimal expected loss for each state.
At 410, method 400 includes sequentially receiving a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance.
For example, the output of the ASR, e.g., utterance prefix, may be received. However, in some examples, the original speech and/or audio file itself is received. It should be noted that the utterance prefix evolves by appending new prefixes and/or by revising older words and prefixes as more context becomes available. As such, a previously received utterance prefix may be adjusted based on new information.
At 415, actions are described that apply to each sequentially received utterance prefix. In some examples, the system predicts programs responsive to the receipt of each partial utterance and/or utterance prefix from the ASR.
At 420, method 400 includes predicting a complete dataflow program comprising a plurality of program nodes to be executed to satisfy the full user utterance. In some examples, the utterance prefix is mapped to possible full utterances using a sequence-to-sequence model, then a semantic parser is used to map each possible full utterance to one or more possible plans. In some examples, the utterance prefix is mapped directly to possible plans. In some cases, this mapping is performed with a lightweight version of the semantic parser. In some examples, the prefixes are mapped directly to complete plans without using the complete (predicted) utterance as an intermediate step. The target is then the complete plan that would be predicted from the complete utterance. Training may be performed on unannotated user data, actual ASR prefixes, and/or the predicted plan. Once trained, the policy may be used to evaluate the function invocations in the k-best model-predicted plans and decide which plans have execution cost benefits. In yet another example, the policy is used to map the utterance prefix directly to a set of function invocations that are worth speculatively executing.
At 430, method 400 includes selecting from the predicted complete dataflow program, based on a current policy, one or more program nodes to speculatively execute based on at least the utterance prefix. Continuing at 440, method 400 includes selecting from the predicted complete dataflow program, based on the current policy, one or more program nodes to not select for execution until at least a next utterance prefix is received. As a non-limiting example, nodes are given a “selected” status or a “non-selected” status, indicating that the node has been selected for speculative execution or not selected for speculative execution at least until additional information is received.
In some examples, the same architecture is used for prediction and modeling as for choosing the API calls. For consistency, the architecture may use the same model that makes the pre-fetched calls and the residual calls when an utterance is completed.
In some examples, the current policy selects program nodes to speculatively execute based on an execution cost. Each action associated with a node has an execution cost including the passage of time. For example, it may be preferable to terminate the dataflow program sooner rather than later. The execution cost of a latency may be nonlinear in its length. The execution cost may be based on an execution time per action, a total number of actions, a latency on terminating the dataflow program, a number and/or complexity of processing operations performed while completing each action, etc.
The policy may determine an execution cost for speculatively selecting and executing a candidate node and invoking an associated function call. In some examples, the execution cost is based on a probability that the invocation will appear in the final dataflow program, and/or an expected latency of that invocation. As new information is received (new utterance prefixes, adjusted utterance prefixes, associated context), the probability that the invocation will appear in the final dataflow program evolves, either increasing or decreasing. As such, in some examples the execution cost is also based on a distribution as to how the prefix will evolve. This may occur when a prefix has changed, and/or when invocations have returned. During an interval when the distribution remains constant, taking additional action is unlikely to be informative. Further, as a practical limitation, a function f(y) cannot be invoked until y has been computed, e.g., where y = g(x) is the result of a previous function invocation such as getSalient().
At 450, method 400 includes, responsive to receiving the full user utterance, determining a change in execution cost for the predicted complete dataflow program based on a change in which program nodes were speculatively executed and which program nodes were not executed.
Once the utterance is complete, a final plan may be predicted, and any or all of the function invocations in that plan that have already been called but not invoked may be run. In some examples, the plan execution waits for all results to return before terminating. In some examples, the policy trainer speeds up the execution of the final predicted program by making certain API calls early. Thus, in some examples, the cost of an action sequence is evaluated on the predicted program, not the true program, even if known. In some embodiments, the policy trainer is
continually updated based on live unannotated user data. In some examples, a one-step change for the policy trainer includes changing a node that is ready to run from selected status to non-selected status, or vice-versa.
In some examples, determining a change in the execution cost includes considering the current environmental states. Example environmental states include: a current utterance prefix as reported by the ASR system; a delay since the prefix last changed, e.g., increasing until the prefix changes again or the ASR system declares the utterance as having terminated; actions taken so far (needed to determine the reward in the final state); and the return values from those actions. The current utterance prefix and the delay since the prefix last changed evolve stochastically, independent of any actions taken. Actions taken so far have deterministic transitions, where actions are added to the state as they are taken. The return values evolve stochastically, with stochastic return values arriving after a stochastic latency.
In some examples, the API can handle multiple requests in parallel, so that the distribution of (return value, latency) does not depend on when the function is invoked, or what other functions are invoked. For any received utterance, the stochastic evolution is known, and the latencies for function invocations in the actual final plan are known. In a simplified system, the API calls have no side effects that are visible to the user, such as interruptions, highlighting display entities, reprompting, etc. As such, function invocations have return values, but no side effects. Accordingly, the effects on latency, accuracy, and computing cost can be determined without considering such side effects. These effects will not change if the actions are changed, as invoking a function earlier will not change its latency or the utterance. The recorded action sequence will change along with the action sequence itself, but in a known deterministic way.
To estimate marginal probabilities of function calls from an ASR prefix, the seq2seq or other suitable model may be used to sample a number of possible completions of the prefix. In some examples, these possible completions are run in parallel, with the resulting plans yielding an estimate of the distribution across plans. In some instances, when the utterance is complete, this collapses to an exact point distribution that puts a probability of 1 on the true final plan. In such examples, marginal probabilities of different function invocations are then computed given the prefix. In some examples, calls intelligently hallucinate entities that are not in the prefix but have been predicted by the seq2seq or other suitable model.
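Such a sampling-based estimate might be sketched as follows, where sample_completion and parse are assumed wrappers around the seq2seq model and semantic parser, and plan.invocations() is assumed to return hashable identifiers for function calls.

```python
from collections import Counter

def estimate_call_marginals(prefix: str, context: str, k: int = 10):
    """Estimate p(call appears in final plan) by sampling k completions."""
    counts: Counter[str] = Counter()
    for _ in range(k):
        utterance = sample_completion(prefix, context)
        plan = parse(utterance, context)
        counts.update(set(plan.invocations()))
    return {call: n / k for call, n in counts.items()}
```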
In practice, there are plural sources for function arguments - e.g., references to the return values of previous calls, copied spans, detected entities, and constants. Constants are generally used for arguments that cannot be obtained in any other way, but constants are not allowed for all types. In some examples, predicting a complete plan based on only the prefix would require enlarging the set of constants to include copied spans and detected entities that have not yet been seen, and
allowing all types to have constants.
Additionally or alternatively, the target plan in the training data is abstracted into a partial plan, for prediction. In some examples, the complete plan is then modeled as a sequence of function invocations in a canonical order. To prepare training data, an invocation can be deterministically deleted if it depends on an argument that is unpredictable from the prefix, or depends on such an invocation. Such invocations may be considered “inaccessible” from this prefix, and deleting these inaccessible invocations removes many of the function invocations that do not correspond to any words in the prefix.
Even if a function invocation is classified as positive, the actual invocation must be delayed until its arguments have been computed. As an example, if f(...) is invoked based on a prefix, by the time the result returns, the prefix may have changed, and there is now a new current set of delayed invocations. Some of these, such as g(f(...)), can be run immediately when f(...) returns.
In some examples, to convert the function invocation to a classification score, the same predicted autoregressive logits for each function invocation are used, for example, with multi-task finetuning. If the same invocation appears in multiple plans with different embeddings, these embeddings can be pooled before converting to a score. In some examples, the classifier also takes as an input the log-marginal probability.
In some examples, function invocations are targeted if they include arguments that are currently available. In particular, references to previous calls may only be allowed for calls that have already returned, or the selection may indicate that there are no more actions worth taking at this time.
Context for predicting a next action includes one or more of the dialogue history, possibly reduced to a set of types; the current ASR prefix; and the previous function calls that have actually been made as opposed to the calls that have been predicted but classified as not worth making. In some examples, calls are not required to have returned yet, as it may be beneficial to consider launching siblings in parallel.
Previous calls may be de-weighted as an argument to an invocation if the call is estimated to be wrong given the current updated prefix. A previous call may thus be rescored given the current prefix, and that new score incorporated into the score of the proposed invocation.
In some examples, the policy is run repeatedly to choose a sequence of actions, but this process may be suspended once it predicts that there are no more potentially meaningful actions. The process may resume whenever the ASR prefix has changed, a call returns, and/or an assessment of whether any calls are worthwhile has changed.
Returning to FIG. 4, at 460, method 400 includes generating an updated policy from the current policy based on a determined reduction to the execution cost for the predicted complete dataflow
program.
In some examples, to train the policy, the execution cost (or alternatively, the reward) of invoking each function compared to not invoking that function is considered. Any suitable cost-sensitive binary classifier can be used, e.g., perceptron, SVM, logistic regression.
For example, based on the current policy, an utterance can be chosen at random and the current policy simulated on it. Various selected interventions change a single action, e.g., adding or suppressing a function invocation at an “action time”. The prediction may then continue according to the current policy, and a change in execution cost then determined. In some examples, such measurements are used to improve the policy, with the policy adjusted to prefer actions that lead to a lower cost.
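Schematically, this LOLS-style improvement loop can be sketched as below; simulate, flip_action, and policy.update are assumed interfaces, and real training would batch updates and account for stochastic return latencies.

```python
import random

def improve_policy(policy, utterances, n_rounds: int = 100):
    for _ in range(n_rounds):
        u = random.choice(utterances)
        trajectory, base_cost = simulate(policy, u)  # roll-in with current policy
        for t, action in enumerate(trajectory):
            # One-step deviation: flip a single select/non-select decision,
            # keep the current policy elsewhere, and compare execution costs.
            deviated = flip_action(action)
            _, dev_cost = simulate(policy, u, override={t: deviated})
            if dev_cost < base_cost:
                policy.update(utterance=u, step=t, prefer=deviated)
    return policy
```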
In some examples, the policy is operated under the assumption of a special environment where it can be determined whether an action improves the reward by actually executing a counterfactual. That is, multiple competing actions can be executed from the same state, yielding variance reduction similar to that of paired experimental designs.
In some examples, the special environment assumptions include a state where actions do not change the utterance or sequence of prefixes. This can be enacted if the user and the ASR system are not shown the actions of the system. Further, the assumption may be made that an action does not change the predictions from each prefix, which can be enacted if the predictor is not shown the return values from those actions. Further, in some examples the special environment assumptions include that a sequence of predicted graphs Gm does not change when an action is modified; the only change that occurs responsive to a modified action is a change to a single e(v) value.
However, if changing an action does enact one or more changes on a future graph, importance sampling can be applied. For example, a random subset of alternative actions can be applied at each state, e.g., favoring actions that score most highly under the current policy.
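One way to realize this, sketched under the assumption that policy scores are unnormalized logits, is to sample alternative actions from a softmax over those scores and attach importance weights that keep a uniform average over all actions unbiased; the function below is illustrative, not the disclosed estimator.

```python
import numpy as np

def sample_alternatives(actions, scores, k, rng=None):
    """Sample k alternative actions, favoring high-scoring ones, with
    importance weights that make a mean over the samples unbiased for a
    uniform average over all actions."""
    rng = rng or np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    p = np.exp(scores - scores.max())
    p /= p.sum()                             # softmax proposal over actions
    idx = rng.choice(len(actions), size=k, replace=True, p=p)
    n = len(actions)
    return [(actions[i], 1.0 / (n * p[i])) for i in idx]
```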
FIG. 5 is a flow diagram illustrating an example computer-implemented method 500 for speculatively executing nodes of a dataflow program. In some examples, method 500 is executed by a computing system employing numerous models and subsystems, such as an API for calling function invocations, an ASR system for translating speech to utterance prefixes, a modeler for predicting full dataflow graphs based on the utterance prefixes, and a translator for predicting future utterance prefixes. In some examples, the computing system employs a trained cost policy, such as a cost policy trained as described with regard to FIG. 4.
At 510, method 500 includes sequentially receiving a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance. In some examples, the output of the ASR, e.g., utterance prefix, is received.
At 515, actions are described that apply to each sequentially received utterance prefix. In some examples, the system predicts programs responsive to the receipt of each partial utterance and/or utterance prefix from the ASR.
At 520, method 500 includes predicting a complete dataflow program based on the utterance prefix, the complete dataflow program including a plurality of program nodes to be executed to satisfy the full user utterance. As described with regard to FIG. 4, in some examples, the complete dataflow program is predicted directly from the utterance prefix, or is predicted after first predicting a full user utterance based on the received utterance prefixes and any available context information.
At 525, actions are described that apply to each program node of the complete dataflow program predicted based on the utterance prefix. At 530, method 500 includes determining an execution cost for the complete dataflow program based on speculatively executing a candidate subgraph, the candidate subgraph including the program node and its descendants based on the predicted dataflow program. For example, the execution cost for the complete dataflow program may be determined for scenarios where the candidate subgraph is speculatively executed, and where the candidate subgraph is not executed until further information is received, e.g., a next utterance prefix. As described with regard to FIG. 4, in some examples, the execution cost is based on one or more of a latency, a total computing cost, and a likelihood distribution indicating a probability that the candidate subgraph will be executed in the final dataflow program.
At 540, method 500 includes selecting the candidate node to speculatively execute if speculatively executing the candidate subgraph would result in a decrease in the execution cost for the complete predicted dataflow program. In some examples, the candidate subgraph is speculatively executed if doing so would result in an above-threshold decrease in the execution cost for the complete predicted dataflow program, e.g., a decrease in latency of more than a threshold number of milliseconds. Conversely, the candidate subgraph may not be speculatively executed if it is unlikely to be executed in the final dataflow program, even if speculative execution would decrease latency in scenarios where the candidate subgraph is ultimately invoked.
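A minimal sketch of such a thresholded decision rule at 530/540 follows; the probability and cost inputs are assumed to come from the learned estimates described above, and the names, units, and default threshold are illustrative.

```python
def should_speculate(p_needed, latency_saved_ms, wasted_compute_ms,
                     threshold_ms=50.0):
    """Speculate only when the expected latency saved clears a threshold
    and outweighs the compute expected to be wasted on discarded work."""
    expected_saving = p_needed * latency_saved_ms          # saved if needed
    expected_waste = (1.0 - p_needed) * wasted_compute_ms  # wasted otherwise
    return expected_saving > threshold_ms and expected_saving > expected_waste
```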
The methods and processes described herein may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as an executable computer-application program, a network-accessible computing service, an application-programming interface (API), a library, or a combination of the above and/or other compute resources.
FIG. 6 schematically shows a simplified representation of a computing system 600 configured to provide any to all of the compute functionality described herein. Computing system 600 may take the form of one or more personal computers, network-accessible server computers, tablet computers, home-entertainment computers, gaming devices, mobile computing devices, mobile communication devices, e.g., smart phone, virtual/augmented/mixed reality computing devices, wearable computing devices, Internet of Things (IoT) devices, embedded computing devices, and/or other computing devices.
Computing system 600 includes a logic subsystem 602 and a storage subsystem 604. Computing system 600 may optionally include an input/output subsystem 606, communication subsystem 608, and/or other subsystems not shown in FIG. 6.
Logic subsystem 602 includes one or more physical devices configured to execute instructions. For example, the logic subsystem may be configured to execute instructions that are part of one or more applications, services, or other logical constructs. The logic subsystem may include one or more hardware processors configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware devices configured to execute hardware or firmware instructions. Processors of the logic subsystem may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic subsystem optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic subsystem may be virtualized and executed by remotely-accessible, networked computing devices configured in a cloud-computing configuration.
Storage subsystem 604 includes one or more physical devices configured to temporarily and/or permanently hold computer information such as data and instructions executable by the logic subsystem. When the storage subsystem includes two or more devices, the devices may be collocated and/or remotely located. Storage subsystem 604 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices. Storage subsystem 604 may include removable and/or built-in devices. When the logic subsystem executes instructions, the state of storage subsystem 604 may be transformed - e.g., to hold different data.
Aspects of logic subsystem 602 and storage subsystem 604 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include program- and application-specific integrated circuits (PASIC / ASICs), program- and application-specific standard products (PSSP / ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
The logic subsystem and the storage subsystem may cooperate to instantiate one or more logic machines. As used herein, the term “machine” is used to collectively refer to the combination of hardware, firmware, software, instructions, and/or any other components cooperating to provide computer functionality. In other words, “machines” are never abstract ideas and always have a tangible form. A machine may be instantiated by a single computing device, or a machine may include two or more sub-components instantiated by two or more different computing devices. In some implementations a machine includes a local component, e.g., a software application executed by a computer processor, cooperating with a remote component, e.g., a cloud computing service provided by a network of server computers. The software and/or other instructions that give a particular machine its functionality may optionally be saved as one or more unexecuted modules on one or more suitable storage devices. For example, the previously-trained code-generation machine, previously-trained relevance detection machine, and/or error-handling execution machine are examples of machines according to the present disclosure.
Machines may be implemented using any suitable combination of state-of-the-art and/or future machine learning (ML), artificial intelligence (AI), and/or natural language processing (NLP) techniques. For example, the previously-trained code-generation machine and/or previously-trained relevance detection machine may incorporate any suitable ML, AI, and/or NLP techniques, including any suitable language models.
Non-limiting examples of techniques that may be incorporated in an implementation of one or more machines include support vector machines, multi-layer neural networks, convolutional neural networks (e.g., including spatial convolutional networks for processing images and/or videos, temporal convolutional neural networks for processing audio signals and/or natural language sentences, and/or any other suitable convolutional neural networks configured to convolve and pool features across one or more temporal and/or spatial dimensions), recurrent neural networks (e.g., long short-term memory networks), associative memories (e.g., lookup tables, hash tables, Bloom filters, Neural Turing Machines, and/or Neural Random Access Memory), word embedding models (e.g., GloVe or Word2Vec), unsupervised spatial and/or clustering methods (e.g., nearest neighbor algorithms, topological data analysis, and/or k-means clustering), graphical models (e.g., (hidden) Markov models, Markov random fields, (hidden) conditional random fields, and/or AI knowledge bases), and/or natural language processing techniques (e.g., tokenization, stemming, constituency and/or dependency parsing, intent recognition, segmental models, and/or super-segmental models (e.g., hidden dynamic models)).
In some examples, the methods and processes described herein may be implemented using one or more differentiable functions, wherein a gradient of the differentiable functions may be calculated and/or estimated with regard to inputs and/or outputs of the differentiable functions, e.g., with regard to training data, and/or with regard to an objective function. Such methods and processes may be at least partially determined by a set of trainable parameters. Accordingly, the trainable parameters for a particular method or process may be adjusted through any suitable training procedure, in order to continually improve functioning of the method or process.
Non-limiting examples of training procedures for adjusting trainable parameters include supervised training (e.g., using gradient descent or any other suitable optimization method), zero-shot, few-shot, and unsupervised learning methods (e.g., classification based on classes derived from unsupervised clustering methods), reinforcement learning (e.g., deep Q learning based on feedback), generative adversarial neural network training methods, belief propagation, RANSAC (random sample consensus), contextual bandit methods, maximum likelihood methods, and/or expectation maximization. In some examples, a plurality of methods, processes, and/or components of systems described herein may be trained simultaneously with regard to an objective function measuring performance of collective functioning of the plurality of components (e.g., with regard to reinforcement feedback and/or with regard to labelled training data). Simultaneously training the plurality of methods, processes, and/or components may improve such collective functioning. In some examples, one or more methods, processes, and/or components may be trained independently of other components (e.g., offline training on historical data).
The previously-trained code-generation machine and/or previously-trained relevance detection machine may incorporate any suitable language models. Language models may utilize vocabulary features to guide sampling/searching for words for recognition of speech. For example, a language model may be at least partially defined by a statistical distribution of words or other vocabulary features. For example, a language model may be defined by a statistical distribution of n-grams, defining transition probabilities between candidate words according to vocabulary statistics. The language model may be further based on any other appropriate statistical features, and/or results of processing the statistical features with one or more machine learning and/or statistical algorithms, e.g., confidence values resulting from such processing. In some examples, a statistical model may constrain what words may be recognized for an audio signal, e.g., based on an assumption that words in the audio signal come from a particular vocabulary.
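For illustration only, a bigram model of the kind described above can be estimated by counting; the corpus, the add-alpha smoothing choice, and the function name are assumptions, not the disclosed language model.

```python
from collections import Counter

def bigram_probs(corpus_tokens, alpha=1.0):
    """Estimate add-alpha-smoothed bigram transition probabilities
    P(next | current) from a token sequence."""
    unigrams = Counter(corpus_tokens[:-1])           # counts of each context
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    vocab = set(corpus_tokens)
    return {
        (w1, w2): (bigrams[(w1, w2)] + alpha)
                  / (unigrams[w1] + alpha * len(vocab))
        for w1 in unigrams for w2 in vocab
    }
```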
Alternately or additionally, the language model may be based on one or more neural networks previously trained to represent audio inputs and words in a shared latent space, e.g., a vector space learned by one or more audio and/or word models, e.g., wav2letter and/or word2vec. Accordingly, finding a candidate word may include searching the shared latent space based on a vector encoded by the audio model for an audio input, in order to find a candidate word vector for decoding with the word model. The shared latent space may be utilized to assess, for one or more candidate words, a confidence that the candidate word is featured in the speech.
The language model may be used in conjunction with an acoustical model configured to assess, for a candidate word and an audio signal, a confidence that the candidate word is included in speech in the audio signal based on acoustical features of the word, e.g., mel-frequency cepstral coefficients, formants, etc. Optionally, in some examples, the language model may incorporate the acoustical model, e.g., assessment and/or training of the language model may be based on the acoustical model. The acoustical model defines a mapping between acoustic signals and basic sound units such as phonemes, e.g., based on labelled speech. The acoustical model may be based on any suitable combination of state-of-the-art or future machine learning (ML) and/or artificial intelligence (AI) models, for example: deep neural networks (e.g., long short-term memory, temporal convolutional neural network, restricted Boltzmann machine, deep belief network), hidden Markov models (HMM), conditional random fields (CRF) and/or Markov random fields, Gaussian mixture models, and/or other graphical models (e.g., deep Bayesian network). Audio signals to be processed with the acoustic model may be pre-processed in any suitable manner, e.g., encoding at any suitable sampling rate, Fourier transform, band-pass filters. The acoustical model may be trained to recognize the mapping between acoustic signals and sound units based on training with labelled audio data. For example, the acoustical model may be trained based on labelled audio data comprising speech and corrected text, in order to learn the mapping between the speech signals and sound units denoted by the corrected text. Accordingly, the acoustical model may be continually improved to improve its utility for correctly recognizing speech.
In some examples, in addition to statistical models, neural networks, and/or acoustical models, the language model may incorporate any suitable graphical model, e.g., a hidden Markov model (HMM) or a conditional random field (CRF). The graphical model may utilize statistical features, e.g., transition probabilities, and/or confidence values to determine a probability of recognizing a word, given the speech and/or other words recognized so far. Accordingly, the graphical model may utilize the statistical features, previously trained machine learning models, and/or acoustical models to define transition probabilities between states represented in the graphical model.
When included, input/output subsystem 606 may comprise one or more displays, which may be used to present a visual representation of data held by storage subsystem 604. This visual representation may take the form of a graphical user interface (GUI). Input/output subsystem 606 may include one or more display devices utilizing virtually any type of technology. In some implementations, the input/output subsystem may include one or more virtual-, augmented-, or mixed reality displays. When included, input/output subsystem 606 may further comprise one or more speakers configured to output speech, e.g., to present an audible representation of data held by storage subsystem 604, such as automated assistant responses.
When included, input/output subsystem 606 may comprise or interface with one or more input devices. An input device may include a sensor device or a user input device. Examples of user input devices include a keyboard, mouse, touch screen, or game controller. In some embodiments, input/output subsystem 606 may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition.
When included, communication subsystem 608 may be configured to communicatively couple computing system 600 with one or more other computing devices. Communication subsystem 608 may include wired and/or wireless communication devices compatible with one or more different communication protocols. The communication subsystem may be configured for communication via personal-, local- and/or wide-area networks.
This disclosure is presented by way of example and with reference to the associated drawing figures. Components, process steps, and other elements that may be substantially the same in one or more of the figures are identified coordinately and are described with minimal repetition. It will be noted, however, that elements identified coordinately may also differ to some degree. It will be further noted that some figures may be schematic and not drawn to scale. The various drawing scales, aspect ratios, and numbers of components shown in the figures may be purposely distorted to make certain features or relationships easier to see.

In one example, a computer-implemented method comprises sequentially receiving a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance; for each sequentially received utterance prefix, predicting a complete dataflow program based on the utterance prefix, the complete dataflow program including a plurality of program nodes to be executed to satisfy the full user utterance; and selecting, from the predicted complete dataflow program, one or more program nodes to speculatively execute based on at least the utterance prefix. The technical effect of speculatively executing one or more program nodes based on at least the utterance prefix is increased user interaction performance due to at least reduced latency in generating a response. In such an example, or any other example, the computer-implemented method additionally or alternatively comprises speculatively executing a selected program node prior to receiving a final version of the full user utterance. In any of the preceding examples, or any other example, execution of the selected program node is additionally or alternatively performed after arguments of the selected program node have returned from execution, provided the selected program node remains selected based on subsequently received utterance prefixes, subsequently predicted complete dataflow programs, and/or subsequently understood context. In any of the preceding examples, or any other example, the complete dataflow program is additionally or alternatively predicted based also on a context of a dialogue history. In any of the preceding examples, or any other example, the complete dataflow program is additionally or alternatively predicted by a machine learning model previously trained with training data including pairs of training utterance prefixes and training complete dataflow programs. In any of the preceding examples, or any other example, the machine learning model is additionally or alternatively previously trained to at least locally maximize a product based on at least: a sequence of actions, each of which contributes to adding a node and/or a sequence of edges to the complete dataflow program; a context of a dialogue history; and the utterance prefix. In any of the preceding examples, or any other example, the complete dataflow program is additionally or alternatively predicted by hallucinating a predicted full utterance from the utterance prefix with a previously-trained language model and predicting the complete dataflow program with a machine learning model previously trained with training data including pairs of training full utterances and training complete dataflow programs. In any of the preceding examples, or any other example, the previously-trained language model is additionally or alternatively a Transformer-based language model. In any of the preceding examples, or any other example, selecting one or more program nodes to speculatively execute is additionally or alternatively based at least on a probability the program node will eventually be executed. In any of the preceding examples, or any other example, selecting program nodes additionally or alternatively includes, for each candidate node of the complete dataflow program predicted based on the utterance prefix, calculating a conditional probability of a candidate subgraph including the candidate node and its descendants; and selecting the candidate node if the conditional probability is greater than a threshold. In any of the preceding examples, or any other example, calculating the conditional probability of the candidate subgraph additionally or alternatively includes multiplying predicted probabilities of all relevant actions responsible for building the candidate subgraph. In any of the preceding examples, or any other example, each of the plurality of utterance prefixes is additionally or alternatively received from a speech-to-text module.
In another example, a computer-implemented method comprises sequentially receiving a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance; for each sequentially received utterance prefix, predicting a complete dataflow program comprising a plurality of program nodes to be executed to satisfy the full user utterance; selecting from the predicted complete dataflow program, based on a current policy, one or more program nodes to speculatively execute based on at least the utterance prefix; selecting from the predicted complete dataflow program, based on the current policy, one or more program nodes to not execute until at least a next utterance prefix is received; responsive to receiving the full user utterance, determining a change in execution cost for the predicted complete dataflow program based on a change in which program nodes were speculatively executed and which program nodes were not executed; and generating an updated policy from the current policy based on a determined reduction to the execution cost for the predicted complete dataflow program. The technical effect of updating a policy based on a determined reduction to the execution cost is a reduction in consumption of computing resources. In such an example, or any other example, predicting a plurality of program nodes to be executed to satisfy the full user utterance additionally or alternatively includes mapping the received utterance prefix to a plurality of possible full user utterances; and mapping each possible full user utterance to one or more possible complete dataflow programs. In any of the preceding examples, or any other example, predicting a plurality of program nodes to be executed to satisfy the full user utterance additionally or alternatively includes mapping the received utterance prefix to one or more possible complete dataflow programs. In any of the preceding examples, or any other example, predicting a plurality of program nodes to be executed to satisfy the full user utterance additionally or alternatively includes directly mapping the received utterance prefix to a set of program nodes. In any of the preceding examples, or any other example, the change in execution cost is additionally or alternatively based on one or more of an expected latency, a total computing cost, and a likelihood distribution indicating a probability that the program node will appear in a final dataflow program.
In yet another example, a computer-implemented method comprises sequentially receiving a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance; for each sequentially received utterance prefix, predicting a complete dataflow program based on the utterance prefix, the complete dataflow program including a plurality of program nodes to be executed to satisfy the full user utterance; and for each program node of the complete dataflow program predicted based on the utterance prefix, determining an execution cost for the complete dataflow program based on speculatively executing a candidate subgraph, the candidate subgraph including the program node and its descendants based on the predicted dataflow program; and selecting the candidate node to speculatively execute if speculatively executing the candidate subgraph would result in a decrease in the execution cost for the complete predicted dataflow program. The technical effect of selecting candidate nodes to speculatively execute based on a decrease in the execution cost for the complete predicted dataflow program includes at least improved human-computer interactions, as the latency of generating a response is decreased. In such an example, or any other example, determining the execution cost for the complete dataflow program additionally or alternatively includes determining an execution cost for one or more scenarios where the candidate subgraph is speculatively executed, and for one or more scenarios where the candidate subgraph is not executed until further information is received. In any of the preceding examples, or any other example, the computer-implemented method additionally or alternatively comprises not executing the candidate subgraph until at least a next utterance prefix is received if the likelihood of executing the candidate subgraph in the complete predicted dataflow program is less than a likelihood threshold.

It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
Claims
1. A computer-implemented method, comprising: sequentially receiving a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance; for each sequentially received utterance prefix: predicting a complete dataflow program based on the utterance prefix, the complete dataflow program including a plurality of program nodes to be executed to satisfy the full user utterance; and selecting, from the predicted complete dataflow program, one or more program nodes to speculatively execute based on at least the utterance prefix.
2. The computer-implemented method of claim 1, further comprising speculatively executing a selected program node prior to receiving a final version of the full user utterance.
3. The computer-implemented method of claim 2, wherein execution of the selected program node is performed after arguments of the selected program node have returned from execution provided the selected program node remains selected based on subsequently received utterance prefixes, subsequently predicted complete dataflow programs, and/or subsequently understood context.
4. The computer-implemented method of claim 1, wherein the complete dataflow program is predicted based also on a context of a dialogue history.
5. The computer-implemented method of claim 1, wherein the complete dataflow program is predicted by a machine learning model previously trained with training data including pairs of training utterance prefixes and training complete dataflow programs.
6. The computer-implemented method of claim 5, wherein the machine learning model is previously trained to at least locally maximize a product based on at least: a sequence of actions, each of which contributes to adding a node and/or a sequence of edges to the complete dataflow program; a context of a dialogue history; and the utterance prefix.
7. The computer-implemented method of claim 1, wherein the complete dataflow program is predicted by hallucinating a predicted full utterance from the utterance prefix with a previously-trained language model and predicting the complete dataflow program with a machine learning model previously trained with training data including pairs of training full utterances and training complete dataflow programs.
8. The computer-implemented method of claim 1, wherein selecting one or more program nodes to speculatively execute is based at least on a probability the program node will eventually be executed.
9. The computer-implemented method of claim 8, wherein selecting program nodes includes: for each candidate node of the complete dataflow program predicted based on the utterance prefix: calculating a conditional probability of a candidate subgraph including the candidate node and its descendants; and selecting the candidate node if the conditional probability is greater than a threshold.
10. The computer-implemented method of claim 9, wherein calculating the conditional probability of the candidate subgraph includes multiplying predicted probabilities of all relevant actions responsible for building the candidate subgraph.
11. A computing system, comprising: a logic machine; and a storage machine holding instructions that when executed, cause the logic machine to: sequentially receive a plurality of utterance prefixes, each sequentially received utterance prefix derived from a progressively longer incomplete portion of a full user utterance; for each sequentially received utterance prefix: predict a complete dataflow program comprising a plurality of program nodes to be executed to satisfy the full user utterance; select from the predicted complete dataflow program, based on a current policy, one or more program nodes to speculatively execute based on at least the utterance prefix; and select from the predicted complete dataflow program, based on the current policy, one or more program nodes to not execute until at least a next utterance prefix is received; responsive to receiving the full user utterance, determine a change in execution cost for the predicted complete dataflow program based on a change in which program nodes were speculatively executed and which program nodes were not executed; and generate an updated policy from the current policy based on a determined reduction to the execution cost for the predicted complete dataflow program.
12. The computing system of claim 11, wherein predicting a plurality of program nodes to be executed to satisfy the full user utterance includes: mapping the received utterance prefix to a plurality of possible full user utterances; and mapping each possible full user utterance to one or more possible complete dataflow programs.
13. The computing system of claim 11, wherein predicting a plurality of program nodes to be executed to satisfy the full user utterance includes mapping the received utterance prefix to one or more possible complete dataflow programs.
14. The computing system of claim 11, wherein predicting a plurality of program nodes to be executed to satisfy the full user utterance includes directly mapping the received utterance prefix to a set of program nodes.
15. The computing system of claim 11, wherein the change in execution cost is based on one or more of an expected latency, a total computing cost, and a likelihood distribution indicating a probability that the program node will appear in a final dataflow program.
Applications Claiming Priority (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202263364702P | 2022-05-13 | 2022-05-13 | |
| US63/364,702 | 2022-05-13 | | |
| US202263368131P | 2022-07-11 | 2022-07-11 | |
| US63/368,131 | 2022-07-11 | | |
| US17/815,904 | 2022-07-28 | | |
| US17/815,904 US12032964B2 (en) | 2022-05-13 | 2022-07-28 | Speculative execution of dataflow program nodes |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023219686A1 (en) | 2023-11-16 |
Family
ID=85704756
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2023/013449 WO2023219686A1 (en) | Speculative execution of dataflow program nodes | 2022-05-13 | 2023-02-21 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2023219686A1 (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160148610A1 * | 2014-11-26 | 2016-05-26 | Voicebox Technologies Corporation | System and method of providing intent predictions for an utterance prior to a system detection of an end of the utterance |
| US20200074993A1 * | 2016-12-20 | 2020-03-05 | Samsung Electronics Co., Ltd. | Electronic device, method for determining utterance intention of user thereof, and non-transitory computer-readable recording medium |
| US20210027771A1 * | 2019-07-23 | 2021-01-28 | Microsoft Technology Licensing, Llc | Ambiguity resolution with dialogue search history |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23712118; Country of ref document: EP; Kind code of ref document: A1 |