US8972256B2 - System and method for dynamic noise adaptation for robust automatic speech recognition - Google Patents

Publication number: US8972256B2 (application published as US20130096915A1)
Application number: US13/274,694
Authority: US (United States)
Inventors: Steven J. Rennie, Pierre Dognin, Petr Fousek
Original assignee: Nuance Communications, Inc.
Listed current assignee: Nuance Communications, Inc. (later assigned to Cerence Inc., corrected to Cerence Operating Company)
Legal status: Active, expiration adjusted
Prior art keywords: model, dna, noise, dna model, null
Continuation: US14/600,503, granted as US9741341B2

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 — Speech recognition
    • G10L 15/20 — Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
    • G10L 21/00 — Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02 — Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208 — Noise filtering


Abstract

A speech processing method and arrangement are described. A dynamic noise adaptation (DNA) model characterizes a speech input reflecting effects of background noise. A null noise DNA model characterizes the speech input based on reflecting a null noise mismatch condition. A DNA interaction model performs Bayesian model selection and re-weighting of the DNA model and the null noise DNA model to realize a modified DNA model characterizing the speech input for automatic speech recognition and compensating for noise to a varying degree depending on relative probabilities of the DNA model and the null noise DNA model.

Description

TECHNICAL FIELD
The present invention relates to speech processing, and more specifically to noise adaptation in automatic speech recognition.
BACKGROUND ART
Automatic speech recognition (ASR) systems try to determine a representative meaning (e.g., text) corresponding to speech inputs. Typically, the speech input is processed into a sequence of digital frames which are multi-dimensional vectors that represent various characteristics of the speech signal present during a short time window of the speech. In a continuous speech recognition system, variable numbers of frames are organized as “utterances” representing a period of speech followed by a pause which in real life loosely corresponds to a spoken sentence or phrase. The ASR system compares the input utterances to find statistical acoustic models that best match the vector sequence characteristics and determines corresponding representative text associated with the acoustic models. More formally, given some input observations A, the probability that some string of words W were spoken is represented as P(W|A), where the ASR system attempts to determine the most likely word string:
Ŵ = argmax_W P(W | A)
Given a system of statistical acoustic models, this formula can be re-expressed as:
Ŵ = argmax_W P(W) P(A | W)
where P(A|W) corresponds to the acoustic models and P(W) represents the value of a statistical language model reflecting the probability of a given word in the recognition vocabulary occurring.
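As a toy illustration of this decision rule, the sketch below scores two candidate word strings by combining a language-model prior P(W) with an acoustic likelihood P(A|W) and taking the argmax. All probability values and the two-entry vocabulary are invented for illustration only.

```python
import math

# Invented toy models: P(W) and P(A|W) for two candidate word strings.
language_model = {"recognize speech": 0.6, "wreck a nice beach": 0.4}          # P(W)
acoustic_likelihood = {"recognize speech": 0.01, "wreck a nice beach": 0.012}  # P(A|W)

def best_hypothesis(lm, am):
    # Score each candidate by log P(W) + log P(A|W) and return the argmax.
    return max(lm, key=lambda w: math.log(lm[w]) + math.log(am[w]))

print(best_hypothesis(language_model, acoustic_likelihood))  # "recognize speech"
```

Here the slightly better acoustic score of the second hypothesis is outweighed by the language-model prior of the first.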
The acoustic models are typically probabilistic state sequence models such as hidden Markov models (HMMs) that model speech sounds using mixtures of probability distribution functions (Gaussians). Acoustic models often represent phonemes in specific contexts, referred to as PELs (Phonetic Elements), e.g. triphones or phonemes with known left and/or right contexts. State sequence models can be scaled up to represent words as connected sequences of acoustically modeled phonemes, and phrases or sentences as connected sequences of words. When the models are organized together as words, phrases, and sentences, additional language-related information is also typically incorporated into the models in the form of a statistical language model.
The words or phrases associated with the best matching model structures are referred to as recognition candidates or hypotheses. A system may produce a single best recognition candidate—the recognition result—or multiple recognition hypotheses in various forms such as an N-best list, a recognition lattice, or a confusion network. Further details regarding continuous speech recognition are provided in U.S. Pat. No. 5,794,189, entitled “Continuous Speech Recognition,” and U.S. Pat. No. 6,167,377, entitled “Speech Recognition Language Models,” the contents of which are incorporated herein by reference.
Some ASR systems pre-process the input speech frames (observation vectors) to account for channel effects and noise, for example, using explicit models of noise, channel distortion, and their interaction with speech. Many interesting and effective approximate modeling and inference techniques have been developed to represent these acoustic entities and the reasonably well understood but complicated interactions between them. While there are many results showing the promise of these techniques on less sophisticated systems trained on small amounts of artificially mixed data, there has been little evidence that these techniques can improve state of the art large vocabulary ASR systems.
There are a number of fundamental challenges in designing noise-robust ASR systems. Efficient modeling and inference are needed to balance the trade-off between computational complexity and performance. Modeling also needs to be robust, improving ASR performance in noisy conditions without degrading performance in clean (low-noise) conditions. Robust adaptation is also desired, improving performance in noise conditions not seen during system training.
Dynamic noise adaptation (DNA) is a model-based technique for improving ASR performance in the presence of noise. See Rennie et al. Dynamic Noise Adaptation, Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing 2006, 14-19 May 2006; Rennie and Dognin, Beyond Linear Transforms: Efficient Non-Linear Dynamic Adaptation For Noise Robust Speech Recognition, in Proceedings of the 9th International Conference of Interspeech 2008, Brisbane, Australia, Sep. 23-26, 2008; Rennie et al., Robust Speech Recognition Using Dynamic Noise Adaptation, in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2011, Prague, Czech Republic, May 22-27, 2011; all incorporated herein by reference. DNA is designed to compensate for mismatch between training and testing conditions, and recently, DNA has been shown to improve the performance of even commercial-grade ASR systems trained on large amounts of data. However, new investigations with yet more data and yet stronger baseline systems have revealed that conventional DNA can sometimes harm ASR performance, especially when the existing noise conditions are well characterized by the back-end acoustic models. Such issues could be mitigated by applying the model-based approach to the recognizer itself and training acoustic models of speech that recover a canonical representation of speech, together with a noise model, which could be adapted. But this paradigm is not yet fully mature.
SUMMARY
Embodiments of the present invention are directed to a speech processing method and arrangement. A dynamic noise adaptation (DNA) model characterizes a speech input reflecting effects of background noise. A null noise DNA model characterizes the speech input based on reflecting a null noise mismatch condition. A model adaptation module performs Bayesian model selection and re-weighting of the DNA model and the null noise DNA model to realize a modified DNA model characterizing the speech input for automatic speech recognition and compensating for noise to a varying degree depending on relative probabilities of the DNA model and the null noise DNA model.
The Bayesian model selection and re-weighting may reflect a competing likelihood of which model best characterizes the speech input, for example, by averaging the models, and/or by further decreasing the probability of the DNA model when it does not best characterize the speech input, for example, to zero, and/or by increasing the probability of the DNA model when it best characterizes the input, for example by doubling the probability, and then subtracting 1. The DNA model may include a probability based noise model reflecting transient and evolving components of a current noise estimate.
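One way to read the hard re-weighting described above (driving the DNA model's probability to zero when it does not best characterize the input, and doubling it then subtracting 1 when it does) is the function p′ = max(0, 2p − 1). The sketch below implements that reading; it is an interpretation for illustration, not the patent's exact procedure.

```python
# Hedged sketch of one reading of the re-weighting: posteriors below 0.5 are
# zeroed out; above 0.5 the probability is doubled and 1 is subtracted,
# i.e. p' = max(0, 2p - 1), which maps 0.5 -> 0.0 and 1.0 -> 1.0.
def reweight_dna_posterior(p):
    return max(0.0, 2.0 * p - 1.0)

for p in (0.3, 0.5, 0.75, 1.0):
    print(p, reweight_dna_posterior(p))
```

Under this mapping the DNA model only influences the output when it is the more probable explanation of the observed frame.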
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows various hardware components of an ASR system according to an embodiment of the present invention.
FIG. 2 shows an arrangement for null noise DNA processing according to an embodiment.
FIG. 3 shows a graph illustrating use of a hard threshold probability between the competing DNA models.
DETAILED DESCRIPTION
Various embodiments of the present invention are directed to an enhancement of dynamic noise adaptation (DNA) that substantially improves the performance of commercial grade speech recognizers trained on large amounts of data. Embodiments of the present invention automatically detect when mismatch noise modeling is not beneficial. Online Bayesian model selection and averaging is performed to regularize the influence that mismatch noise modeling has on the output clean feature estimate. Specifically, a Null Noise Model (NN) is introduced as a degenerate DNA model which is clamped to a noise-free condition. The NN model competes with the current DNA model which tracks the evolving state of the background noise. The importance of the DNA and the noise-free streams is adaptively inferred and their relative weighting adjusted based on their ability to explain the observed speech features. There is significant performance improvement in low SNR conditions without degrading performance in clean conditions. No prior knowledge about the noise conditions is needed, no system re-training is required, and there is low computational complexity.
More specifically, system noise modeling rapidly adapts during a speech utterance, effectively instantaneously when the noise in a frequency band is inferred to be observed. The uncertainty associated with the current noise estimate is modeled so that the speech/noise decision in each frequency band is more robust than previous noise adaptive techniques. The noise model can decompose noise into transient and evolving components and model the uncertainty associated with these estimates. Such arrangements aid in automatically detecting when explicitly modeling the noise background is not advantageous, so that explicit noise modeling can be shut off. More generally, the noise can be compensated for to a varying degree depending on how much the noise modeling improves the probability of the data under a speech model. This avoids degradation in clean conditions and actually improves ASR performance in low SNR conditions.
FIG. 1 shows various hardware components of an embodiment of an ASR system which uses a language model according to the present invention. A computer system 10 includes a speech input microphone 11 which is connected through a suitable preamplifier 13 to an analog-to-digital (A/D) converter 15. A front-end DNA pre-processor 17 typically performs a Fourier transform so as to extract spectral features to characterize the input speech as a sequence of representative multi-dimensional vectors, and performs the DNA analysis and adaptation in a potentially derived feature space. A speech recognition processor 12, e.g., an Intel Core i7 processor or the like, is programmed to run one or more specialized computer software processes to determine a recognition output corresponding to the speech input. To that end, processor memory 120, e.g., random access memory (RAM) and/or read-only memory (ROM), stores the speech processing software routines, the speech recognition models, and data for use by the speech recognition processor 12. The recognition output may be displayed, for example, as representative text on computer workstation display 14. Such a computer workstation would also typically include a keyboard 16 and a mouse 18 for user interaction with the system 10. Of course, many other typical arrangements are also familiar, such as an ASR implemented for a mobile device such as a cell phone, ASR for the cabin of an automobile, client-server based ASR, etc.
A DNA model includes a speech model, a noise model, a channel model, and an interaction model which describes how these acoustic entities combine to generate the observed speech data. The interaction between speech x, noise n and channel effects h is modeled in time domain as:
y(t)=h(t)*x(t)+n(t).  (1)
where * denotes linear convolution. In the frequency domain:
|Y|² = |H|²|X|² + |N|² + 2|H||X||N| cos θ = |H|²|X|² + |N|² + ε,  (2)
where |X| and θ_x represent the magnitude and phase spectrum of x(t), and θ = θ_x + θ_h − θ_n. Ignoring the phase term ε and assuming that the channel response |H| is constant over each Mel frequency band, in the log Mel spectral domain:
y≈f(x+h, n)=log(exp(x+h)+exp(n))  (3)
where y represents the log Mel transform of |Y|2. The error of this approximation can be modeled as zero mean and Gaussian distributed:
p(y|x+h,n)=
Figure US08972256-20150303-P00001
(y:f(x+h+n), ψ2).  (4)
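The interaction function of Eq. (3) is a log-sum-exp of the channel-filtered speech and the noise: whichever term dominates in a band essentially determines the observation. The sketch below evaluates it for made-up feature values in a single Mel band (the numbers are assumptions for illustration).

```python
import math

# Sketch of the log-Mel interaction function y ≈ log(exp(x+h) + exp(n)) of Eq. (3),
# computed stably for one frequency band; x, h, n values are invented.
def interaction(x, h, n):
    a, b = x + h, n
    m = max(a, b)
    # log-sum-exp trick: factor out the larger exponent to avoid overflow
    return m + math.log(math.exp(a - m) + math.exp(b - m))

# When speech dominates (x+h >> n), y is close to x+h; when noise dominates, y -> n.
print(round(interaction(10.0, 0.0, -5.0), 3))
print(round(interaction(-5.0, 0.0, 10.0), 3))
```

Both calls print a value very close to 10.0, showing that the observation tracks whichever of the two log-spectral energies is larger.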
FIG. 2 shows a simplified diagram of the DNA architecture (omitting an explicit channel distortion model). In this visually simplified diagram, it can be seen that for a given frame of data at time t, the interaction model for that frame y_t includes a speech component x_t and a noise component n_t, as in Eq. (4) above:
p(y_t | x_t, n_t) = N[y_t; ln(exp(x_t) + exp(n_t)), Ψ] ≈ N[y_t; A_x(x_t^i, n_t^i) x_t + A_n(x_t^i, n_t^i) n_t, Ψ]
The speech model can specifically use a band-quantized Gaussian mixture model (BQ-GMM), a constrained, diagonal-covariance Gaussian mixture model (GMM). BQ-GMMs have B ≪ S shared Gaussians per feature, where S is the number of acoustic components, and so can be evaluated very efficiently.
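The efficiency of band quantization comes from sharing: per feature, only the B shared Gaussians are evaluated, and each of the S acoustic components looks its score up in that small table. The following sketch shows the idea with invented parameters (B, S, means, variances, and the codebook are all assumptions).

```python
import math

def gauss_logpdf(y, mean, var):
    # Log-density of a scalar Gaussian
    return -0.5 * (math.log(2.0 * math.pi * var) + (y - mean) ** 2 / var)

B, S = 2, 4
shared = [(0.0, 1.0), (5.0, 2.0)]  # B shared (mean, var) pairs for this feature
codebook = [0, 0, 1, 1]            # each of the S components indexes a shared Gaussian

def component_loglikes(y):
    # Evaluate only the B shared Gaussians, then look up per-component scores.
    table = [gauss_logpdf(y, m, v) for m, v in shared]
    return [table[codebook[s]] for s in range(S)]

print(component_loglikes(0.5))
```

For a real BQ-GMM the per-feature evaluation cost drops from S density evaluations to B, with the table lookup shared across all components.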
DNA models noise in the Mel spectrum as a Gaussian process. Noise can be separated into evolving and transient components, which facilitates robust tracking of the noise level during inference. The dynamically evolving component of this noise—the noise level—is assumed to be changing slowly relative to the frame rate, and can be modeled as follows:
p(l_{f,0}) = N(l_{f,0}; μ_f, ω_{f,0}²),  (5)
p(l_{f,τ} | l_{f,τ−1}) = N(l_{f,τ}; l_{f,τ−1}, γ_f²),  (6)
where l_{f,τ} is a random variable representing the noise level in frequency band f at frame τ. Note that it is assumed that the noise evolves independently at each frequency band. The transient component of the noise process at each frequency band is modeled as zero-mean and Gaussian:
p(n_{f,τ} | l_{f,τ}) = N(n_{f,τ}; l_{f,τ}, σ_f²).  (7)
Channel distortion h can be modeled as a parameter which is stochastically adapted:
p(h_{f,τ}) = δ(h_{f,τ} − ĥ_f(τ)),  (8)
where ĥ_f(τ) is the current estimate of the channel in frequency bin f at frame τ.
The DNA model can be evaluated in sequential fashion. For a GMM speech model with |s|=K components and an utterance with T frames, the exact noise posterior for a given frame τ is a KT component GMM, so approximations need to be made for inference to be tractable. The noise posterior at each given frame may be approximated as Gaussian:
p(l_{f,τ+1}) ≈ N(l_{f,τ+1}; μ_{f,τ+1}, ω_{f,τ+1}²)  (9), (10)
A variation of Algonquin can be used to iteratively estimate the conditional posterior of the noise level and speech for each speech Gaussian. Algonquin iteratively linearizes the interaction function given a context-dependent expansion point, usually taken as the current estimates of the speech and noise. For a given Gaussian α:
p(y | x, n, h) ≈ N(y; α_a(x + h) + (1 − α_a) n + b_a, ψ²),  (11)
α_a = ∂f/∂x |_{x̂_a, ĥ_a, n̂_a} = |Ĥ_a|²|X̂_a|² / (|Ĥ_a|²|X̂_a|² + |N̂_a|²),  (12)
b_a = f(x̂_a + ĥ_a, n̂_a) − α_a(x̂_a + ĥ_a − n̂_a) − n̂_a.  (13)
Given α_a, the posterior distribution of x and n is Gaussian. Once the final estimate of α_a has been determined, the posterior distribution of l can be obtained by integrating out the speech and transient noise to form a Gaussian likelihood for l, which is then combined with the current noise level prior. This is more efficient than computing the full joint posterior of x, n, and l.
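A minimal sketch of one Algonquin linearization step, under the assumption that f is the standard log-sum-exp interaction f(x + h, n) = log(e^(x+h) + e^n) in the log-Mel domain (the function names are illustrative):

```python
import numpy as np

def interaction(xh, n):
    """Log-sum-exp interaction y = f(x + h, n) = log(e^(x+h) + e^n),
    the standard model of how speech and noise combine in the log-Mel domain."""
    return np.logaddexp(xh, n)

def linearize(x_hat, h_hat, n_hat):
    """One Algonquin linearization step at the expansion point
    (x_hat, h_hat, n_hat), yielding the slope and offset of
    Eqs. (12) and (13)."""
    xh = x_hat + h_hat
    alpha = np.exp(xh) / (np.exp(xh) + np.exp(n_hat))          # Eq. (12)
    b = interaction(xh, n_hat) - alpha * (xh - n_hat) - n_hat  # Eq. (13)
    return alpha, b

alpha, b = linearize(0.0, 0.0, 0.0)  # equal speech and noise: alpha = 0.5
```

Note that the linear approximation α_a(x + h) + (1 − α_a)n + b_a reproduces f exactly at the expansion point, which is why Algonquin re-linearizes iteratively as the speech and noise estimates move.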
The approximate Minimum Mean Square Error (MMSE) estimate of the Mel speech features for frame τ under DNA is:
\hat{x}_{f,\tau} = E[x_{f,\tau} \mid y_{0:\tau}] = \sum_{s_\tau} p(s_\tau \mid y_{0:\tau})\, E[x_{f,\tau} \mid y_{0:\tau}, s_\tau].  (14)
These features can be passed to the ASR backend for speech recognition.
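Eq. (14) is a posterior-weighted average over the K speech Gaussians; a minimal sketch, with illustrative names:

```python
import numpy as np

def mmse_speech_estimate(post_s, cond_means):
    """Eq. (14): MMSE clean-speech estimate as the posterior-weighted
    average of the per-Gaussian conditional means.

    post_s     -- (K,) posteriors p(s_tau | y_0:tau) over speech components
    cond_means -- (K, F) conditional means E[x_f,tau | y_0:tau, s_tau]
    """
    return np.asarray(post_s) @ np.asarray(cond_means)

x_hat = mmse_speech_estimate([0.25, 0.75], [[0.0, 0.0], [4.0, 8.0]])
# -> array([3., 6.])
```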
To detect matched noise conditions, a Null Noise (NN) model, a degenerate DNA model, is introduced to compete with the current DNA model. Let M_DNA and M_matched denote the current estimates of the DNA model and the NN model, respectively. The posterior probability of the DNA model for a given frame of data is given by:
p(\mathcal{M}_{\mathrm{DNA}} \mid y_t) = \frac{1}{1 + \exp(-\alpha f(y_t))},  (15)
f(y_t) = g(y_t) + c,  (16)
g(y_t) = \log \frac{p(y_t \mid \mathcal{M}_{\mathrm{DNA}})}{p(y_t \mid \mathcal{M}_{\mathrm{matched}})}, \qquad c = \log \frac{p(\mathcal{M}_{\mathrm{DNA}})}{p(\mathcal{M}_{\mathrm{matched}})},  (17)
and α = 1. This is simply Bayes' rule for a binary random variable with states M_DNA and M_matched. α can be tuned to control how "sharp" the posterior estimate is. f(y_t) consists of two terms: g(y_t), the log-likelihood ratio of the two models, and c, a bias term equal to the log of the prior ratio of the models.
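Eqs. (15)-(17) amount to a logistic function of the log-likelihood ratio plus the prior bias; a minimal sketch, with illustrative names and example values:

```python
import numpy as np

def dna_posterior(loglik_dna, loglik_nn, log_prior_ratio=0.0, alpha=1.0):
    """Frame-level posterior of the DNA model, Eqs. (15)-(17):
    a logistic function of the log-likelihood ratio g(y_t) plus the
    prior bias c, with sharpness controlled by alpha."""
    g = loglik_dna - loglik_nn               # Eq. (17): log-likelihood ratio
    f = g + log_prior_ratio                  # Eq. (16): add prior bias c
    return 1.0 / (1.0 + np.exp(-alpha * f))  # Eq. (15)

p_equal = dna_posterior(-50.0, -50.0)  # equal evidence, flat prior -> 0.5
```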
Equation (15) does not directly take into account the relative complexity of the models that are competing to explain the observed speech data. When deciding what model best represents the observed test features, it makes sense to penalize model complexity. In this case, one model is actually contained within the other. If the clean model can explain the speech data just as well as the DNA model, then the clean model should have higher posterior probability because it has fewer parameters. Equation (15) estimates a frame-level model posterior for the DNA model which itself evolves stochastically in online fashion to adapt to changing noise conditions. Here the model posterior at time t given all previous data y0:t can be approximated as:
p(\mathcal{M}_{\mathrm{DNA}} \mid y_{0:t}) = \gamma\, p(\mathcal{M}_{\mathrm{DNA}} \mid y_{0:t-1}) + (1 - \gamma)\, p(\mathcal{M}_{\mathrm{DNA}} \mid y_t), \qquad \gamma \in (0, 1).  (18)
The clean speech estimate output at time t is then given by:
E[x_t \mid y_{0:t}] = p(\mathcal{M}_{\mathrm{DNA}} \mid y_{0:t})\, E_{\mathrm{DNA}}[x_t \mid y_{0:t}] + \left(1 - p(\mathcal{M}_{\mathrm{DNA}} \mid y_{0:t})\right) E_{\mathrm{matched}}[x_t \mid y_{0:t}].  (19)
Note that the state of the DNA noise model is not affected by the current posterior probability of the competing model. In a previous investigation a competing noise model was introduced to make DNA more robust to abrupt changes in the noise level. When a reset condition was triggered by a high noise model probability, the evolving noise model in DNA would be re-initialized. But in embodiments of the present invention, the NN model competes with DNA only for influence in the reconstructed speech estimate.
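The online update of Eq. (18) and the blended estimate of Eq. (19) can be sketched as follows; the γ value and names are illustrative:

```python
import numpy as np

def smooth_posterior(p_prev, p_frame, gamma=0.95):
    """Eq. (18): exponentially smoothed online posterior of the DNA model."""
    return gamma * p_prev + (1.0 - gamma) * p_frame

def blended_estimate(p_dna, x_dna, x_nn):
    """Eq. (19): clean-speech estimate as a posterior-weighted blend of the
    DNA and null-noise reconstructions."""
    return p_dna * np.asarray(x_dna) + (1.0 - p_dna) * np.asarray(x_nn)

p = smooth_posterior(0.9, 0.0, gamma=0.95)  # posterior decays toward frame evidence
x = blended_estimate(p, [2.0], [0.0])
```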
Several criteria (the Akaike information criterion, minimum description length, etc.) exist for penalizing the number of parameters in a model when doing model selection. For example, a simple online adaptive model selection scheme could assign zero probability to the DNA model when the clean model explains the observed speech data just as well, and correspondingly increase the probability under the standard model averaging update when DNA is the better explanation. FIG. 3 shows one example of use of such a thresholding arrangement where:
p_t \leftarrow \begin{cases} 2 p_t - 1, & p_t > \tfrac{1}{2} \\ 0, & \text{otherwise} \end{cases}
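One way to read the piecewise rule above is as a rescaling that zeroes posteriors at or below 1/2 and maps the remainder to 2p_t − 1; a minimal sketch under that reading (function name illustrative):

```python
def threshold_posterior(p):
    """Hard online model selection: posteriors at or below 1/2 are clipped
    to zero (the null-noise model wins outright); above 1/2 the posterior
    is rescaled to 2p - 1 so it still ranges over (0, 1]."""
    return 2.0 * p - 1.0 if p > 0.5 else 0.0
```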
Embodiments of the present invention such as those described above improve ASR performance in clean noise conditions by allowing a noise-free NN speech model to compete with the DNA model. Experimental results indicate that use of the NN model improves the Sentence Error Rate (SER) of a state-of-the-art embedded speech recognizer, which uses commercial-grade feature-space Maximum Mutual Information (fMMI), boosted MMI (bMMI), and feature-space Maximum Likelihood Linear Regression (fMLLR) compensation, by 15% relative at signal-to-noise ratios (SNRs) below 10 dB, and by over 8% relative overall.
Embodiments of the invention may be implemented in whole or in part in any conventional computer programming language. For example, preferred embodiments may be implemented in a procedural programming language (e.g., "C") or an object-oriented programming language (e.g., "C++", Python). Alternative embodiments of the invention may be implemented as pre-programmed hardware elements, other related components, or as a combination of hardware and software components. For example, a pseudo-code representation of such an embodiment might be set forth as follows:
    • Process DNA_Null_Noise
      • DNA(speech_input);
      • DNA_NN(speech_input);
      • DNA_select(DNA, DNA_NN).
Embodiments can be implemented in whole or in part as a computer program product for use with a computer system. Such implementation may include a series of computer instructions fixed either on a tangible medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, or fixed disk) or transmittable to a computer system, via a modem or other interface device, such as a communications adapter connected to a network over a medium. The medium may be either a tangible medium (e.g., optical or analog communications lines) or a medium implemented with wireless techniques (e.g., microwave, infrared or other transmission techniques). The series of computer instructions embodies all or part of the functionality previously described herein with respect to the system. Those skilled in the art should appreciate that such computer instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Furthermore, such instructions may be stored in any memory device, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies. It is expected that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the network (e.g., the Internet or World Wide Web). Of course, some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention are implemented as entirely hardware, or entirely software (e.g., a computer program product).
Although various exemplary embodiments of the invention have been disclosed, it should be apparent to those skilled in the art that various changes and modifications can be made which will achieve some of the advantages of the invention without departing from the true scope of the invention.

Claims (12)

What is claimed is:
1. A method comprising:
characterizing, by a computing device, a speech input based on a dynamic noise adaptation (DNA) model reflecting effects of background noise;
characterizing the speech input based on a null noise DNA model reflecting a null noise mismatch condition; and
performing Bayesian model selection and re-weighting of the DNA model and the null noise DNA model to realize a modified DNA model characterizing the speech input for automatic speech recognition and compensating for noise to a varying degree depending on relative probabilities of the DNA model and the null noise DNA model,
wherein the Bayesian model selection and re-weighting reflects a competing likelihood of which model best characterizes the speech input, and
wherein re-weighting the DNA model and the null noise DNA model includes assigning zero probability to the DNA model predicted by Bayesian model averaging when the DNA model does not best characterize the speech input.
2. The method of claim 1, wherein re-weighting the DNA model and the null noise DNA model includes averaging.
3. The method of claim 1, wherein re-weighting the DNA model and the null noise DNA model includes increasing the probability of the DNA model predicted by Bayesian model averaging when the DNA model best characterizes the speech input.
4. The method of claim 1, wherein the DNA model includes a probability based noise model reflecting transient and evolving components of a current noise estimate.
5. A non-transitory computer-readable medium storing computer-readable instructions that, when executed by a processor, cause a device to:
characterize a speech input based on a dynamic noise adaptation (DNA) model reflecting effects of background noise;
characterize the speech input based on a null noise DNA model reflecting a null noise mismatch condition; and
perform Bayesian model selection and re-weighting of the DNA model and the null noise DNA model to realize a modified DNA model characterizing the speech input for automatic speech recognition and compensating for noise to a varying degree depending on relative probabilities of the DNA model and the null noise DNA model,
wherein the Bayesian model selection and re-weighting reflects a competing likelihood of which model best characterizes the speech input, and
wherein re-weighting the DNA model and the null noise DNA model includes assigning zero probability to the DNA model predicted by Bayesian model averaging when the DNA model does not best characterize the speech input.
6. The non-transitory computer-readable medium of claim 5, wherein re-weighting the DNA model and the null noise DNA model includes averaging.
7. The non-transitory computer-readable medium of claim 5, wherein re-weighting the DNA model and the null noise DNA model includes increasing the probability of the DNA model predicted by Bayesian model averaging when the DNA model best characterizes the speech input.
8. The non-transitory computer-readable medium of claim 5, wherein the DNA model includes a probability based noise model reflecting transient and evolving components of a current noise estimate.
9. A method comprising:
characterizing, by a computing device, a speech input based on a dynamic noise adaptation (DNA) model reflecting effects of background noise;
characterizing the speech input based on a null noise DNA model reflecting a null noise mismatch condition; and
performing Bayesian model selection and re-weighting of the DNA model and the null noise DNA model to realize a modified DNA model characterizing the speech input for automatic speech recognition and compensating for noise to a varying degree depending on relative probabilities of the DNA model and the null noise DNA model,
wherein the Bayesian model selection and re-weighting reflects a competing likelihood of which model best characterizes the speech input,
wherein re-weighting the DNA model and the null noise DNA model includes reducing the probability of the DNA model predicted by Bayesian model averaging when the DNA model does not best characterize the speech input, and
wherein re-weighting the DNA model and the null noise DNA model includes increasing the probability of the DNA model predicted by Bayesian model averaging when the DNA model best characterizes the speech input.
10. The method of claim 9, wherein re-weighting the DNA model and the null noise DNA model includes averaging.
11. The method of claim 9, wherein re-weighting the DNA model and the null noise DNA model includes assigning zero probability to the DNA model predicted by Bayesian model averaging when the DNA model does not best characterize the speech input.
12. The method of claim 9, wherein the DNA model includes a probability based noise model reflecting transient and evolving components of a current noise estimate.
US13/274,694 2011-10-17 2011-10-17 System and method for dynamic noise adaptation for robust automatic speech recognition Active 2034-01-01 US8972256B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/274,694 US8972256B2 (en) 2011-10-17 2011-10-17 System and method for dynamic noise adaptation for robust automatic speech recognition
US14/600,503 US9741341B2 (en) 2011-10-17 2015-01-20 System and method for dynamic noise adaptation for robust automatic speech recognition


Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/600,503 Continuation US9741341B2 (en) 2011-10-17 2015-01-20 System and method for dynamic noise adaptation for robust automatic speech recognition

Publications (2)

Publication Number Publication Date
US20130096915A1 US20130096915A1 (en) 2013-04-18
US8972256B2 true US8972256B2 (en) 2015-03-03

Family

ID=48086575

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/274,694 Active 2034-01-01 US8972256B2 (en) 2011-10-17 2011-10-17 System and method for dynamic noise adaptation for robust automatic speech recognition
US14/600,503 Active 2032-02-03 US9741341B2 (en) 2011-10-17 2015-01-20 System and method for dynamic noise adaptation for robust automatic speech recognition


Country Status (1)

Country Link
US (2) US8972256B2 (en)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10075630B2 (en) 2013-07-03 2018-09-11 HJ Laboratories, LLC Providing real-time, personal services by accessing components on a mobile device
US9373324B2 (en) 2013-12-06 2016-06-21 International Business Machines Corporation Applying speaker adaption techniques to correlated features
US9378735B1 (en) * 2013-12-19 2016-06-28 Amazon Technologies, Inc. Estimating speaker-specific affine transforms for neural network based speech recognition systems
US9530408B2 (en) 2014-10-31 2016-12-27 At&T Intellectual Property I, L.P. Acoustic environment recognizer for optimal speech processing
CN109087659A (en) * 2018-08-03 2018-12-25 三星电子(中国)研发中心 Audio optimization method and apparatus
US11887583B1 (en) * 2021-06-09 2024-01-30 Amazon Technologies, Inc. Updating models with trained model update objects

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5749068A (en) * 1996-03-25 1998-05-05 Mitsubishi Denki Kabushiki Kaisha Speech recognition apparatus and method in noisy circumstances
US5970446A (en) * 1997-11-25 1999-10-19 At&T Corp Selective noise/channel/coding models and recognizers for automatic speech recognition
US6188982B1 (en) * 1997-12-01 2001-02-13 Industrial Technology Research Institute On-line background noise adaptation of parallel model combination HMM with discriminative learning using weighted HMM for noisy speech recognition
US20020087306A1 (en) * 2000-12-29 2002-07-04 Lee Victor Wai Leung Computer-implemented noise normalization method and system
US20020165712A1 (en) * 2000-04-18 2002-11-07 Younes Souilmi Method and apparatus for feature domain joint channel and additive noise compensation
US20030115055A1 (en) * 2001-12-12 2003-06-19 Yifan Gong Method of speech recognition resistant to convolutive distortion and additive distortion
US20030182114A1 (en) * 2000-05-04 2003-09-25 Stephane Dupont Robust parameters for noisy speech recognition
US20030191636A1 (en) * 2002-04-05 2003-10-09 Guojun Zhou Adapting to adverse acoustic environment in speech processing using playback training data
US20040064314A1 (en) * 2002-09-27 2004-04-01 Aubert Nicolas De Saint Methods and apparatus for speech end-point detection
US20040093210A1 (en) * 2002-09-18 2004-05-13 Soichi Toyama Apparatus and method for speech recognition
US20040158465A1 (en) * 1998-10-20 2004-08-12 Cannon Kabushiki Kaisha Speech processing apparatus and method
US20040260546A1 (en) * 2003-04-25 2004-12-23 Hiroshi Seo System and method for speech recognition
US20050071159A1 (en) * 2003-09-26 2005-03-31 Robert Boman Speech recognizer performance in car and home applications utilizing novel multiple microphone configurations
US20060195317A1 (en) * 2001-08-15 2006-08-31 Martin Graciarena Method and apparatus for recognizing speech in a noisy environment
US20070050189A1 (en) * 2005-08-31 2007-03-01 Cruz-Zeno Edgardo M Method and apparatus for comfort noise generation in speech communication systems
US20070055508A1 (en) * 2005-09-03 2007-03-08 Gn Resound A/S Method and apparatus for improved estimation of non-stationary noise for speech enhancement
US7236930B2 (en) * 2004-04-12 2007-06-26 Texas Instruments Incorporated Method to extend operating range of joint additive and convolutive compensating algorithms
US20090076813A1 (en) * 2007-09-19 2009-03-19 Electronics And Telecommunications Research Institute Method for speech recognition using uncertainty information for sub-bands in noise environment and apparatus thereof
US20090187402A1 (en) * 2004-06-04 2009-07-23 Koninklijke Philips Electronics, N.V. Performance Prediction For An Interactive Speech Recognition System
US20090271188A1 (en) * 2008-04-24 2009-10-29 International Business Machines Corporation Adjusting A Speech Engine For A Mobile Computing Device Based On Background Noise
US20100204988A1 (en) * 2008-09-29 2010-08-12 Xu Haitian Speech recognition method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005036525A1 (en) * 2003-10-08 2005-04-21 Philips Intellectual Property & Standards Gmbh Adaptation of environment mismatch for speech recognition systems
US8180635B2 (en) * 2008-12-31 2012-05-15 Texas Instruments Incorporated Weighted sequential variance adaptation with prior knowledge for noise robust speech recognition


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Kristjansson, et al. "Towards non-stationary model-based noise adaptation for large vocabulary speech recognition." Acoustics, Speech, and Signal Processing, 2001. Proceedings.(ICASSP'01). 2001 IEEE International Conference on. vol. 1. IEEE, 2001. *
Steven J. Rennie et al. "Dynamic noise adaptation." Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on. vol. 1. IEEE, 2006. *
Steven J. Rennie, "Graphical Models for Robust Speech Recognition in Adverse Environments", A PhD thesis submit to Department of Electrical and Computer Engineering University of Toronto, 2008. *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170076719A1 (en) * 2015-09-10 2017-03-16 Samsung Electronics Co., Ltd. Apparatus and method for generating acoustic model, and apparatus and method for speech recognition
US10127905B2 (en) * 2015-09-10 2018-11-13 Samsung Electronics Co., Ltd. Apparatus and method for generating acoustic model for speech, and apparatus and method for speech recognition using acoustic model
CN106683663A (en) * 2015-11-06 2017-05-17 三星电子株式会社 Neural network training apparatus and method, and speech recognition apparatus and method
CN106683663B (en) * 2015-11-06 2022-01-25 三星电子株式会社 Neural network training apparatus and method, and speech recognition apparatus and method
US11881211B2 (en) 2020-03-24 2024-01-23 Samsung Electronics Co., Ltd. Electronic device and controlling method of electronic device for augmenting learning data for a recognition model

Also Published As

Publication number Publication date
US9741341B2 (en) 2017-08-22
US20130096915A1 (en) 2013-04-18
US20150199964A1 (en) 2015-07-16

Similar Documents

Publication Publication Date Title
US9741341B2 (en) System and method for dynamic noise adaptation for robust automatic speech recognition
Tu et al. Speech enhancement based on teacher–student deep learning using improved speech presence probability for noise-robust speech recognition
US9406299B2 (en) Differential acoustic model representation and linear transform-based adaptation for efficient user profile update techniques in automatic speech recognition
EP2216775B1 (en) Speaker recognition
US8612224B2 (en) Speech processing system and method
US7664643B2 (en) System and method for speech separation and multi-talker speech recognition
US9280979B2 (en) Online maximum-likelihood mean and variance normalization for speech recognition
US8386254B2 (en) Multi-class constrained maximum likelihood linear regression
US10460729B1 (en) Binary target acoustic trigger detecton
EP2189976A1 (en) Method for adapting a codebook for speech recognition
Chowdhury et al. Bayesian on-line spectral change point detection: a soft computing approach for on-line ASR
US9037463B2 (en) Efficient exploitation of model complementariness by low confidence re-scoring in automatic speech recognition
US20070143112A1 (en) Time asynchronous decoding for long-span trajectory model
US10460722B1 (en) Acoustic trigger detection
Stouten et al. Model-based feature enhancement with uncertainty decoding for noise robust ASR
US20040064315A1 (en) Acoustic confidence driven front-end preprocessing for speech recognition in adverse environments
Soe Naing et al. Discrete Wavelet Denoising into MFCC for Noise Suppressive in Automatic Speech Recognition System.
US9478216B2 (en) Guest speaker robust adapted speech recognition
CN113327596A (en) Training method of voice recognition model, voice recognition method and device
Yu et al. Bayesian adaptive inference and adaptive training
Li et al. Improved cepstra minimum-mean-square-error noise reduction algorithm for robust speech recognition
Raj Real-time pre-processing for improved feature extraction of noisy speech
Delcroix et al. Discriminative feature transforms using differenced maximum mutual information
BabaAli et al. A model distance maximizing framework for speech recognizer-based speech enhancement
Shigli et al. Automatic dialect and accent speech recognition of South Indian English

Legal Events

Date Code Title Description
AS Assignment

Owner name: NUANCE COMMUNICATIONS, INC., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:RENNIE, STEVEN J.;DOGNIN, PIERRE;FOUSEK, PETR;SIGNING DATES FROM 20111003 TO 20111004;REEL/FRAME:027160/0029

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

AS Assignment

Owner name: CERENCE INC., MASSACHUSETTS

Free format text: INTELLECTUAL PROPERTY AGREEMENT;ASSIGNOR:NUANCE COMMUNICATIONS, INC.;REEL/FRAME:050836/0191

Effective date: 20190930

AS Assignment

Owner name: CERENCE OPERATING COMPANY, MASSACHUSETTS

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNEE NAME PREVIOUSLY RECORDED AT REEL: 050836 FRAME: 0191. ASSIGNOR(S) HEREBY CONFIRMS THE INTELLECTUAL PROPERTY AGREEMENT;ASSIGNOR:NUANCE COMMUNICATIONS, INC.;REEL/FRAME:050871/0001

Effective date: 20190930

AS Assignment

Owner name: BARCLAYS BANK PLC, NEW YORK

Free format text: SECURITY AGREEMENT;ASSIGNOR:CERENCE OPERATING COMPANY;REEL/FRAME:050953/0133

Effective date: 20191001

AS Assignment

Owner name: CERENCE OPERATING COMPANY, MASSACHUSETTS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BARCLAYS BANK PLC;REEL/FRAME:052927/0335

Effective date: 20200612

AS Assignment

Owner name: WELLS FARGO BANK, N.A., NORTH CAROLINA

Free format text: SECURITY AGREEMENT;ASSIGNOR:CERENCE OPERATING COMPANY;REEL/FRAME:052935/0584

Effective date: 20200612

AS Assignment

Owner name: CERENCE OPERATING COMPANY, MASSACHUSETTS

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE REPLACE THE CONVEYANCE DOCUMENT WITH THE NEW ASSIGNMENT PREVIOUSLY RECORDED AT REEL: 050836 FRAME: 0191. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:NUANCE COMMUNICATIONS, INC.;REEL/FRAME:059804/0186

Effective date: 20190930

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8