EP3386215B1 - Hearing aid and method for operating a hearing aid - Google Patents
Hearing aid and method for operating a hearing aid
- Publication number
- EP3386215B1 (application EP18157220.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- assigned
- signal
- hearing
- acoustic
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims description 62
- 238000012545 processing Methods 0.000 claims description 62
- 230000006870 function Effects 0.000 claims description 13
- 230000006641 stabilisation Effects 0.000 claims description 9
- 238000011105 stabilization Methods 0.000 claims description 9
- 238000011156 evaluation Methods 0.000 claims description 8
- 230000003595 spectral effect Effects 0.000 claims description 6
- 230000000694 effects Effects 0.000 claims description 4
- 230000008569 process Effects 0.000 claims description 4
- 230000004048 modification Effects 0.000 claims description 2
- 238000012986 modification Methods 0.000 claims description 2
- 230000002123 temporal effect Effects 0.000 claims description 2
- 230000005484 gravity Effects 0.000 claims 2
- 238000006243 chemical reaction Methods 0.000 claims 1
- 238000001514 detection method Methods 0.000 description 5
- 230000004927 fusion Effects 0.000 description 5
- 238000012546 transfer Methods 0.000 description 5
- 230000008859 change Effects 0.000 description 4
- 238000011161 development Methods 0.000 description 4
- 230000018109 developmental process Effects 0.000 description 4
- 238000010586 diagram Methods 0.000 description 4
- 230000005236 sound signal Effects 0.000 description 4
- 230000006978 adaptation Effects 0.000 description 3
- 230000001419 dependent effect Effects 0.000 description 3
- 238000013528 artificial neural network Methods 0.000 description 2
- 230000003203 everyday effect Effects 0.000 description 2
- 238000000605 extraction Methods 0.000 description 2
- 206010011878 Deafness Diseases 0.000 description 1
- 208000009205 Tinnitus Diseases 0.000 description 1
- 230000001133 acceleration Effects 0.000 description 1
- 230000003044 adaptive effect Effects 0.000 description 1
- 230000002238 attenuated effect Effects 0.000 description 1
- 230000003190 augmentative effect Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 210000000988 bone and bone Anatomy 0.000 description 1
- 230000008094 contradictory effect Effects 0.000 description 1
- 230000008878 coupling Effects 0.000 description 1
- 238000010168 coupling process Methods 0.000 description 1
- 238000005859 coupling reaction Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 230000010370 hearing loss Effects 0.000 description 1
- 231100000888 hearing loss Toxicity 0.000 description 1
- 208000016354 hearing loss disease Diseases 0.000 description 1
- 239000007943 implant Substances 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 230000008054 signal transmission Effects 0.000 description 1
- 231100000886 tinnitus Toxicity 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
- 230000001960 triggered effect Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/70—Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/50—Customised settings for obtaining desired overall acoustical characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/50—Customised settings for obtaining desired overall acoustical characteristics
- H04R25/505—Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
- G10L25/81—Detection of presence or absence of voice signals for discriminating voice from music
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
- G10L25/84—Detection of presence or absence of voice signals for discriminating voice from noise
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/43—Electronic input selection or mixing based on input signal analysis, e.g. mixing or selection between microphone and telecoil or between microphones with different directivity characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/50—Customised settings for obtaining desired overall acoustical characteristics
- H04R25/505—Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
- H04R25/507—Customised settings for obtaining desired overall acoustical characteristics using digital signal processing implemented by neural network or fuzzy logic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/39—Aspects relating to automatic logging of sound environment parameters and the performance of the hearing aid during use, e.g. histogram logging, or of user selected programs or settings in the hearing aid, e.g. usage logging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/41—Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/43—Signal processing in hearing aids to enhance the speech intelligibility
Definitions
- the invention relates to a method for operating a hearing device and a hearing device which is set up in particular to carry out the method.
- Hearing devices are usually used to output a sound signal to the hearing of the wearer of this hearing device.
- The output takes place by means of an output transducer, mostly acoustically via airborne sound by means of a loudspeaker (also referred to as the "receiver").
- Such hearing devices are often used as so-called hearing aid devices (also known as hearing aids for short).
- For this purpose, the hearing devices normally include an acoustic input transducer (in particular a microphone) and a signal processor which is set up to process the input signal (also: microphone signal) generated by the input transducer from the ambient sound, using at least one signal processing algorithm that is usually stored user-specifically, in such a way that a hearing loss of the wearer of the hearing device is at least partially compensated for.
- As an alternative to a loudspeaker, the output transducer can also be a so-called bone conduction receiver or a cochlear implant, which are set up for mechanical or electrical coupling of the audio signal into the wearer's hearing.
- The term hearing devices also includes, in particular, devices such as so-called tinnitus maskers, headsets, headphones and the like.
- Modern hearing devices, in particular hearing aids, often include a so-called classifier, which is usually designed as part of the signal processor that executes the respective signal processing algorithm or algorithms.
- a classifier is usually in turn an algorithm which is used to infer an existing hearing situation on the basis of the ambient sound recorded by means of the microphone.
- the respective signal processing algorithm or algorithms are then usually adapted to the characteristic properties of the present hearing situation.
- In particular, the hearing device is intended to pass on the information relevant to the user in accordance with the hearing situation. For example, different settings (parameter values of different parameters) of the signal processing algorithm(s) are required for the clearest possible output of music than for the intelligible output of speech in loud ambient noise.
- the correspondingly assigned parameters are then changed as a function of the recognized hearing situation.
- Usual hearing situations are, for example, speech in quiet, speech in background noise, listening to music, and (driving in) a vehicle.
- a classifier is often "trained" for the respective hearing situation by means of databases in which a large number of different representative audio samples are stored for the respective hearing situations.
- the disadvantage of this is that in most cases not all combinations of noises that may occur in everyday life can be mapped in such a database. This can therefore lead to misclassification of some listening situations.
- EP1858291 A1 describes a method for operating a hearing system which comprises a transfer unit and input/output units linked to it.
- A transfer function of the transfer unit, which can be set by one or more transfer parameters, describes how audio signals generated by the input unit are processed in order to derive audio signals which are fed to the output unit.
- US 2003/0144838 A1 describes a method and a device for identifying an acoustic scene, wherein an acoustic input signal is processed in at least two processing stages so that an extraction phase is provided in at least one of the processing stages, in which characteristic features are extracted from the input signal, and wherein in each processing stage an identification phase is provided in which the extracted characteristic features are classified.
- class information that characterizes or identifies the acoustic scene is generated in at least one of the processing stages.
- WO 2008/084116 A2 describes a method for operating a hearing apparatus comprising an input transducer, an output transducer and a signal processing unit for processing an output signal of the input transducer in order to obtain an input signal for the output transducer by applying a transfer function to the output signal of the input transducer.
- The method comprises the steps of: extracting features of the output signal of the input transducer; classifying the extracted features by at least two classification experts; weighting the outputs of the at least two classification experts by a weight vector in order to obtain a classification output; setting at least some parameters of the transfer function according to the classification output; monitoring user feedback received by the hearing device; and updating the weight vector and/or one of the at least two classification experts in accordance with the user feedback.
- One aspect of the present subject matter includes a method of operating a hearing aid for a wearer. Acoustic inputs are received, and a variety of acoustic environments are determined by parallel signal processing based on the received acoustic inputs. According to various embodiments, an audiological parameter of the hearing aid device is adapted based on the determined plurality of acoustic environments.
- the invention is based on the object of making an improved hearing device possible.
- the method according to the invention is used to operate a hearing device which comprises at least one microphone for converting ambient sound into a microphone signal.
- According to the method, a number of characteristics (also referred to as "features") are derived from the microphone signal or an input signal formed therefrom.
- At least three classifiers which are implemented independently of one another for the analysis of a (preferably permanently) assigned acoustic dimension, are each supplied with a specifically assigned selection from these features. By means of the respective classifier, information is then generated in each case about a characteristic of the acoustic dimension assigned to this classifier.
- Depending on at least one of the at least three pieces of information about the respective expression of the assigned acoustic dimension, at least one signal processing algorithm, which is executed to process the microphone signal or the input signal into an output signal, is then changed.
- Changing the signal processing algorithm is understood here and below in particular to mean that at least one parameter contained in the signal processing algorithm is set to a different parameter value as a function of the characteristic of the acoustic dimension or at least one of the acoustic dimensions. In other words, another setting of the signal processing algorithm is "approached" (i.e., effected or made).
- The term "acoustic dimension" is understood here and below to mean a group of hearing situations that are related due to their specific properties.
- the hearing situations depicted in such an acoustic dimension are preferably each described by the same features and differ in particular on the basis of the current value of the respective features.
- the term "expression" of the respective acoustic dimension is understood here and in the following in particular as to whether (in the sense of a binary distinction) or (in a preferred variant) to what degree (for example, to what percentage) the respective in the respective acoustic dimension the listening situation shown is present.
- a degree or percentage preferably represents a probability value for the presence of the respective hearing situation.
- For example, in an acoustic dimension directed at the presence of speech, the hearing situations "speech in quiet", "speech in background noise" or (in particular only) "background noise" (i.e. no speech is present) can be mapped; the information about the expression then preferably again contains percentages (for example, a 30% probability of speech in background noise and a 70% probability of background noise only).
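To make the notion of "expression" concrete, here is a minimal Python sketch (the names and structure are hypothetical, not taken from the patent) representing the output of a speech-dimension classifier as a probability distribution over the three hearing situations named above:

```python
from dataclasses import dataclass

@dataclass
class SpeechExpression:
    """Expression of the speech dimension as probabilities of the
    hearing situations it maps (names invented for illustration)."""
    speech_in_quiet: float
    speech_in_noise: float
    noise_only: float

    def validate(self) -> None:
        total = self.speech_in_quiet + self.speech_in_noise + self.noise_only
        assert abs(total - 1.0) < 1e-6, "probabilities must sum to 1"

# the example from the text: 30 % speech in noise, 70 % noise only
expression = SpeechExpression(speech_in_quiet=0.0,
                              speech_in_noise=0.3,
                              noise_only=0.7)
expression.validate()
```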
- the hearing device comprises at least one microphone for converting the ambient sound into the microphone signal and a signal processor in which at least the three above-described classifiers are implemented independently of one another for analyzing the respectively (preferably permanently) assigned acoustic dimension.
- the signal processor is set up to carry out the method according to the invention, preferably automatically.
- In other words, the signal processor is set up to derive the number of features from the microphone signal or the input signal formed therefrom, to supply each of the three classifiers with a specifically assigned selection of the features, to generate, with the aid of the respective classifier, information about the expression of the respectively assigned acoustic dimension and, depending on at least one of the three items of information, to change at least one signal processing algorithm (preferably assigned to the corresponding acoustic dimension) and preferably to apply it to the microphone signal or the input signal.
- the signal processor (also referred to as a signal processing unit) is formed at least in its core by a microcontroller with a processor and a data memory in which the functionality for carrying out the method according to the invention is implemented in the form of operating software ("firmware"), so that the method - possibly in interaction with a user of the hearing device - is carried out automatically when the operating software is executed in the microcontroller.
- Alternatively, the signal processor is formed by a non-programmable electronic component, e.g. an ASIC, in which the functionality for carrying out the method according to the invention is implemented by means of circuitry.
- The fact that at least three classifiers are set up and provided for analyzing one assigned acoustic dimension each, and thus in particular for recognizing one hearing situation each, advantageously enables at least three hearing situations to be recognized independently of one another.
- This advantageously increases the flexibility of the hearing device in recognizing hearing situations.
- The invention is based on the insight that at least some hearing situations can also be present completely independently of one another (i.e., in particular, not influencing one another, or only to an insignificant extent) and in parallel with one another.
- The risk of mutually exclusive and, in particular, contradictory classifications (i.e., assessments of the currently existing acoustic situation) can thus be reduced, at least with regard to the at least three acoustic dimensions analyzed by means of the respectively assigned classifiers.
- In particular, (completely) parallel hearing situations can be recognized in a simple manner and taken into account when changing the signal processing algorithm.
- the hearing device according to the invention has the same advantages as the method according to the invention for operating the hearing device.
- In a preferred variant of the method, several (i.e., at least two or more) signal processing algorithms are used, in particular in parallel, for processing the microphone signal or the input signal.
- The signal processing algorithms preferably each "work" on (at least) one assigned acoustic dimension, i.e. the signal processing algorithms serve to process (e.g. filter, amplify, attenuate) signal components that are relevant for the hearing situations contained or mapped in the respectively assigned acoustic dimension.
- the signal processing algorithms comprise at least one, preferably several parameters, the parameter values of which can be changed.
- the parameter values can preferably also be changed in several steps (gradually or continuously) depending on the respective probability of the expression. This enables signal processing that is particularly flexible and advantageously adaptable to a large number of gradual differences between a number of listening situations.
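The graded parameter change described above could, for example, look like the following sketch, which linearly blends a single parameter between two endpoint settings according to the probability reported by a classifier (the function name and the 12 dB figure are invented for illustration):

```python
def interpolate_parameter(p: float, value_off: float, value_on: float) -> float:
    """Blend a parameter between two settings according to p, the
    probability (0..1) that the assigned hearing situation is present,
    so that settings change gradually rather than switching hard."""
    p = min(max(p, 0.0), 1.0)  # clamp for safety
    return (1.0 - p) * value_off + p * value_on

# e.g. a noise-reduction attenuation in dB: 0 dB when the situation is
# absent, up to 12 dB when it is certain (both endpoint values invented)
attenuation_db = interpolate_parameter(0.7, value_off=0.0, value_on=12.0)
```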
- In addition, at least two of the at least three classifiers are each supplied with a different selection of the features. This is understood here and below to mean, in particular, that a different number of features and/or different features are selected for the respective classifier and supplied to it.
- In this case, each of the classifiers is supplied, via the correspondingly assigned selection, in particular with those features that are relevant only for the analysis of the assigned acoustic dimension. In other words, for each classifier only those features are selected and supplied that are actually required to determine the hearing situation mapped in the respective acoustic dimension.
- In this way, computational effort and implementation effort for the respective classifier can advantageously be saved, since features that are insignificant for the respective acoustic dimension are not taken into account from the outset. The risk of a misclassification due to erroneous consideration of non-relevant features can thereby advantageously be further reduced.
- Each of the classifiers is thus "tailored" (i.e., adapted or designed) to a specific "problem", i.e., with regard to its own analysis algorithm, to the acoustic dimension specifically assigned to this classifier.
- The dimensions "vehicle", "music" and "speech" are used as the at least three acoustic dimensions.
- These three acoustic dimensions are, in particular, the dimensions that usually occur particularly frequently in the everyday life of a user of the hearing device and are also independent of one another.
- a fourth classifier is used to analyze a fourth acoustic dimension, which is in particular the loudness (also: “volume”) of ambient noises (also referred to as “interference noises”).
- the characteristics of this acoustic dimension extend gradually or continuously over several intermediate stages from very quiet to very loud.
- The information on the expression, in particular of the acoustic dimensions vehicle and music, can optionally be "binary", i.e. it is only recognized whether or not a vehicle is being driven in, or whether or not music is being heard.
- Preferably, however, all information on the other three acoustic dimensions is continuously available as a kind of probability value. This is particularly advantageous because errors in the analysis of the respective acoustic dimension cannot be ruled out, and because, in contrast to binary information, this also allows "smoother" transitions between different settings to be achieved in a simple way.
- Further classifiers are used in each case for wind and/or reverberation estimation and for the detection of the wearer's own voice.
- Features are derived from the microphone signal or the input signal that are selected from an (in particular non-exhaustive) group which includes, in particular, the features: signal level; 4 Hz envelope modulation; onset content; level of the background noise (also referred to as the "noise floor level", optionally at a predetermined frequency); spectral centroid ("spectral focus") of the background noise; stationarity (in particular at a predetermined frequency); tonality; and wind activity.
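As an illustration of how two of the named features might be derived from an audio frame, the following Python sketch computes a signal level and a rough 4 Hz envelope modulation measure; the patent does not specify formulas, so these definitions are assumptions:

```python
import numpy as np

def frame_features(x: np.ndarray, fs: int) -> dict:
    """Derive two of the named features from one mono audio frame x."""
    # signal level in dB (relative, with a small floor against log(0))
    level_db = 10.0 * np.log10(np.mean(x ** 2) + 1e-12)

    # 4 Hz envelope modulation: share of amplitude-envelope energy near
    # the ~4 Hz syllable rate, relative to the total envelope energy
    env = np.abs(x)
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    band = (freqs >= 2.0) & (freqs <= 8.0)
    mod_4hz = spec[band].sum() / (spec.sum() + 1e-12)

    return {"signal_level_db": level_db, "envelope_mod_4hz": mod_4hz}

# one second of white noise at 16 kHz as a stand-in input signal
features = frame_features(np.random.randn(16000), fs=16000)
```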
- At least the features level of the background noise, spectral centroid of the background noise and stationarity (and optionally also the feature wind activity) are assigned to the acoustic dimension vehicle.
- The features onset content, tonality and level of the background noise are preferably assigned to the acoustic dimension music.
- In particular, the features onset content and 4 Hz envelope modulation are assigned to the acoustic dimension speech.
- The dimension loudness of any ambient noise present is assigned, in particular, the features level of the background noise, signal level and spectral centroid of the background noise.
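The feature-to-dimension assignments just listed can be summarized as a routing table. The following sketch (the feature identifiers are shorthand names invented here) supplies each classifier only its assigned selection:

```python
# routing table following the assignments described above
FEATURES_BY_DIMENSION = {
    "vehicle":  ["noise_floor_level", "spectral_centroid_noise",
                 "stationarity", "wind_activity"],
    "music":    ["onset_content", "tonality", "noise_floor_level"],
    "speech":   ["onset_content", "envelope_mod_4hz"],
    "loudness": ["noise_floor_level", "signal_level",
                 "spectral_centroid_noise"],
}

def select_features(all_features: dict, dimension: str) -> dict:
    """Supply a classifier only the features assigned to its dimension."""
    return {name: all_features[name]
            for name in FEATURES_BY_DIMENSION[dimension]}

all_features = {"noise_floor_level": -55.0, "spectral_centroid_noise": 1.2e3,
                "stationarity": 0.8, "wind_activity": 0.05,
                "onset_content": 0.4, "tonality": 0.2,
                "envelope_mod_4hz": 0.35, "signal_level": -40.0}
vehicle_inputs = select_features(all_features, "vehicle")
```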
- a specifically assigned temporal stabilization is taken into account for each classifier.
- In this case, if a state (i.e., in particular a certain expression of the acoustic dimension) was present in the past (for example, in a previous time segment of predetermined duration), it is assumed that this expression is still present with a high degree of probability at the current point in time.
- a sliding mean value is formed over (in particular a predetermined number of) previous time segments.
- a type of "dead time element" can also be provided, by means of which, in a subsequent time segment, the probability is increased that the expression present in the previous time segment is still present.
- A further optional variant for stabilization can also take place via a counting principle, in which a counter is incremented with a comparatively fast detection cycle (e.g. 100 milliseconds to a few seconds), and the "detection" of the respective hearing situation is only triggered when a limit value for this counter is exceeded.
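A possible combination of the sliding mean and the counting principle described above is sketched below; the window size, threshold and counter limit are invented illustration values:

```python
from collections import deque

class StabilizedDetector:
    """Sliding-mean smoothing combined with the counting principle."""

    def __init__(self, window: int = 10, threshold: float = 0.6, limit: int = 5):
        self.history = deque(maxlen=window)  # recent probability values
        self.threshold = threshold
        self.limit = limit
        self.counter = 0

    def update(self, probability: float) -> bool:
        """Feed one fast detection cycle; return True once stabilized."""
        self.history.append(probability)
        smoothed = sum(self.history) / len(self.history)  # sliding mean
        if smoothed >= self.threshold:
            self.counter = min(self.counter + 1, self.limit)
        else:
            self.counter = max(self.counter - 1, 0)
        # the "detection" of the hearing situation is only triggered
        # once the counter reaches its limit value
        return self.counter >= self.limit

detector = StabilizedDetector()
for p in [0.7, 0.8, 0.75, 0.9, 0.85, 0.8]:
    detected = detector.update(p)
```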
- The respective signal processing algorithm or algorithms is or are adapted as a function of at least two of the at least three pieces of information about the expression of the respectively assigned acoustic dimension.
- the information from several classifiers is therefore taken into account in at least one signal processing algorithm.
- The respective information from the individual classifiers is, in particular, first fed to a fusion element for a joint evaluation ("merged").
- On the basis of this joint evaluation of all the information, overall information about the hearing situations present is created, in particular.
- a dominant hearing situation is preferably determined in this case - in particular on the basis of the degree of expression reflecting the probability.
- the respective signal processing algorithm or algorithms are adapted to this dominant hearing situation.
- A hearing situation (namely the dominant one) is optionally prioritized by changing the respective signal processing algorithm only as a function of the dominant hearing situation, while other signal processing algorithms and/or the parameters dependent on other hearing situations remain unchanged or are set to a parameter value that has no impact on the signal processing.
- a hearing situation referred to as a sub-situation is determined, which has a lower dominance compared to the dominant hearing situation.
- This sub-situation, or each respective sub-situation, is additionally taken into account in the aforementioned adaptation of the respective signal processing algorithm or algorithms to the dominant hearing situation, and/or is used to adapt a signal processing algorithm specifically assigned to the acoustic dimension of this sub-situation.
- this sub-situation leads to a smaller change in the parameter or parameters assigned in each case compared to the dominant hearing situation.
- For example, one or more parameters of a signal processing algorithm which serves to ensure the clearest possible speech intelligibility in background noise are changed comparatively strongly in order to achieve the highest possible speech intelligibility. Since music is also present, however, parameters that serve to attenuate ambient noise are set less strongly (than if only background noise were present), so that the tones of the music are not completely attenuated.
- An (in particular additional) signal processing algorithm serving for clear sound reproduction of music is likewise set less strongly than it would be with music as the dominant hearing situation (but more strongly than with no music at all), so as not to mask the speech components.
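The fusion logic described in this example, determining a dominant hearing situation and scaling down parameter changes for sub-situations, might be sketched as follows (all names and the proportional scaling rule are assumptions, not the patent's prescribed method):

```python
def fuse(expressions: dict) -> tuple:
    """Rank hearing situations by probability: the most probable one is
    treated as dominant, the remaining ones as sub-situations."""
    ranked = sorted(expressions.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[0], ranked[1:]

def scaled_change(base_change: float, probability: float,
                  dominant_probability: float) -> float:
    """A sub-situation changes its assigned parameters less strongly
    than the dominant situation, here simply in proportion to their
    relative probabilities (an invented scaling rule)."""
    return base_change * probability / max(dominant_probability, 1e-6)

dominant, subs = fuse({"speech_in_noise": 0.6, "music": 0.3, "vehicle": 0.1})
# e.g. a music-related parameter is still adjusted, but more weakly
# than it would be if music were the dominant hearing situation:
music_change = scaled_change(base_change=6.0, probability=0.3,
                             dominant_probability=dominant[1])
```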
- the parallel presence of several hearing situations is preferably taken into account in at least one of the possibly several signal processing algorithms.
- each signal processing algorithm is assigned to at least one of the classifiers.
- at least one parameter of each signal processing algorithm is changed (in particular directly) as a function of the information output by the respective classifier about the characteristic of the assigned acoustic dimension.
- This parameter, or its parameter value, is preferably set as a function of the respective information.
- each classifier "controls" at least one parameter of at least one signal processing algorithm. A joint evaluation of all information can be omitted here.
- At least one of the classifiers is supplied with status information that is generated independently of the microphone signal or the input signal.
- This status information is also taken into account, in particular, for evaluating the respective acoustic dimension. For example, this involves movement and/or location information that is used, for instance, to evaluate the acoustic dimension vehicle.
- This movement and/or location information is generated, for example, by an acceleration sensor or a (global) position sensor arranged in the hearing device itself or in a system connected to it for signal transmission (e.g. a smartphone).
- In this way, the probability of the presence of the hearing situation "driving in a vehicle" can be increased in a simple manner in addition to the acoustic evaluation.
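One simple way such motion or location information could be combined with the acoustic evaluation is an additive probability boost, as in this hedged sketch (the boost value is invented):

```python
def vehicle_probability(acoustic_p: float, moving_fast: bool,
                        boost: float = 0.2) -> float:
    """Raise the acoustically estimated vehicle probability when
    independent motion/location data (e.g. from an acceleration sensor
    in the hearing aid or a paired smartphone) indicates driving.
    The additive boost of 0.2 is an invented illustration value."""
    return min(acoustic_p + boost, 1.0) if moving_fast else acoustic_p

p = vehicle_probability(acoustic_p=0.55, moving_fast=True)  # -> 0.75
```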
- As the hearing device, a hearing aid device, referred to in short as "hearing aid 1", is shown.
- The hearing aid 1 comprises, as electrical components housed in a housing 2, two microphones 3, a signal processor 4, a loudspeaker 5 and a battery 6, which can be designed as a primary cell or as a secondary cell (i.e. as a rechargeable battery).
- In each case, a microphone signal S_M is generated therefrom.
- These two microphone signals S_M are fed to the signal processor 4, which, executing four signal processing algorithms A_1, A_2, A_3 and A_4, generates an output signal S_A from these microphone signals S_M and outputs it to the loudspeaker 5, which serves as an output transducer.
- The loudspeaker 5 converts the output signal S_A into airborne sound which, when the hearing aid 1 is properly worn, is output to the hearing of a user or wearer (in short: hearing aid wearer) of the hearing aid 1 via a sound tube 7 connected to the housing 2 and an earpiece 8 attached to its end.
- The hearing aid 1, specifically its signal processor 4, is set up to automatically carry out the method described in more detail below with reference to Figure 2 and Figure 3.
- The hearing aid 1, specifically its signal processor 4, comprises at least three classifiers K_S, K_M and K_F. These three classifiers K_S, K_M and K_F are each set up and designed to analyze a specifically assigned acoustic dimension.
- The classifier K_S is specifically designed to evaluate the acoustic dimension "speech", i.e. whether speech in quiet, speech in background noise or only background noise is present.
- The classifier K_M is specifically designed to evaluate the acoustic dimension "music", i.e. whether the ambient sound is dominated by music.
- The classifier K_F is specifically designed to evaluate the acoustic dimension "vehicle", i.e. to determine whether the hearing aid wearer is driving in a vehicle.
- The signal processor 4 further comprises a feature analysis module 10 (also referred to as a "feature extraction module") which is set up to derive a number of (signal) features from the microphone signals S_M, specifically from an input signal S_E formed from these microphone signals S_M.
- The classifiers K_S, K_M and K_F are each supplied with a different, specifically assigned selection of these features.
- The respective classifier K_S, K_M or K_F determines an expression of the respectively assigned acoustic dimension, i.e. the degree to which a hearing situation specifically assigned to the acoustic dimension is present, and outputs this expression as the respective information.
- The microphone signals S_M are generated from the detected ambient sound and are combined by the signal processor 4 to form the input signal S_E (specifically, mixed to form a directional microphone signal).
- The input signal S_E formed from the microphone signals S_M is fed to the feature analysis module 10, and the number of features is derived therefrom.
- a level of the background noise feature "M_P"
- a spectral centroid of the background noise feature "M_Z"
- a stationarity of the signal feature "M_M"
- a wind activity feature "M_W"
- an onset content of the signal feature "M_O"
- a tonality feature "M_T"
- a 4 Hz envelope modulation feature "M_E"
- The features M_E and M_O are fed to the classifier K_S for analyzing the acoustic dimension speech.
- The features M_O, M_T and M_P are fed to the classifier K_M for analyzing the acoustic dimension music.
- The features M_P, M_W, M_Z and M_M are fed to the classifier K_F for analyzing the acoustic dimension driving in the vehicle.
- The classifiers K_S, K_M and K_F then use specifically adapted analysis algorithms to determine the extent, i.e. the degree, to which the respective acoustic dimension is pronounced on the basis of the features supplied in each case.
- The classifier K_S is used to determine the probability with which speech in quiet, speech in background noise or only background noise is present.
- The classifier K_M is used to determine the probability with which music is present.
- The classifier K_F is used to determine the probability with which the hearing aid wearer is or is not driving in a vehicle.
- The respective expressions of the acoustic dimensions are output to a fusion module 60 in a method step 50 (see Figure 2), which brings the respective pieces of information together and compares them with one another.
- a decision is made in the fusion module 60 as to which dimension, specifically which hearing situation depicted therein, is currently to be regarded as dominant and which hearing situations are currently of subordinate importance or can be completely excluded.
- The fusion module 60 then changes, in a number of the stored signal processing algorithms A_1 to A_4, a number of the parameters relating to the dominant and the less relevant hearing situations, so that the signal processing is primarily adapted to the dominant hearing situation and, to a lesser extent, to the less relevant hearing situations.
- Each of the signal processing algorithms A_1 to A_4 is in each case adapted to the presence of one hearing situation, possibly also in parallel with other hearing situations.
- The classifier K_F includes a temporal stabilization in a manner not shown in detail. This is geared in particular to the fact that a journey in a vehicle usually lasts a longer time; thus, in the event that driving in the vehicle has already been recognized in previous time segments (for example, of 30 seconds to five minutes each), and on the assumption that the driving situation is still ongoing, the probability that this hearing situation is present is increased in advance. The same is set up and provided in the classifier K_M.
- In the signal flow diagram shown in Figure 3, the fusion module 60 is omitted.
- Each of the classifiers K_S, K_M and K_F is assigned at least one of the signal processing algorithms A_1, A_2, A_3 and A_4 in such a way that several parameters contained in the respective signal processing algorithm A_1, A_2, A_3 or A_4 are designed to be changeable as a function of the expression of the respective acoustic dimension. This means that, on the basis of the respective information about the respective expression, at least one parameter is changed directly - that is, without intermediate fusion.
- The signal processing algorithm A_1 is only dependent on the information from the classifier K_S.
- The information from all classifiers K_S, K_M and K_F flows into the signal processing algorithm A_3 and leads to a change in several parameters there.
- the subject matter of the invention is not limited to the exemplary embodiments described above. Rather, further embodiments of the invention can be derived from the above description by a person skilled in the art. In particular, the individual features of the invention described on the basis of the various exemplary embodiments and their design variants can also be combined with one another in other ways.
- the hearing aid 1 can also be designed as an in-the-ear hearing aid instead of the behind-the-ear hearing aid shown.
- the subject matter of the invention is defined in the following claims.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Acoustics & Sound (AREA)
- Health & Medical Sciences (AREA)
- Otolaryngology (AREA)
- Neurosurgery (AREA)
- General Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Fuzzy Systems (AREA)
- Evolutionary Computation (AREA)
- Automation & Control Theory (AREA)
- Artificial Intelligence (AREA)
- Quality & Reliability (AREA)
- Circuit For Audible Band Transducer (AREA)
- Soundproofing, Sound Blocking, And Sound Damping (AREA)
Description
The invention relates to a method for operating a hearing device and a hearing device which is set up in particular to carry out the method.
Hearing devices are usually used to output a sound signal to the hearing of the wearer of this hearing device. The output takes place by means of an output transducer, mostly acoustically via airborne sound by means of a loudspeaker (also referred to as the "receiver"). Such hearing devices are often used as so-called hearing aid devices (also known as hearing aids for short). For this purpose, the hearing devices normally include an acoustic input transducer (in particular a microphone) and a signal processor which is set up to process the input signal (also: microphone signal) generated by the input transducer from the ambient sound, using at least one signal processing algorithm that is usually stored user-specifically, in such a way that a hearing loss of the wearer of the hearing device is at least partially compensated for. In the case of a hearing aid in particular, the output transducer can, as an alternative to a loudspeaker, also be a so-called bone conduction receiver or a cochlear implant, which are set up for mechanical or electrical coupling of the audio signal into the wearer's hearing. The term hearing devices also includes, in particular, devices such as so-called tinnitus maskers, headsets, headphones and the like.
Modern hearing devices, in particular hearing aids, often include a so-called classifier, which is usually designed as part of the signal processor that executes the respective signal processing algorithm or algorithms. Such a classifier is usually in turn an algorithm which is used to infer an existing hearing situation on the basis of the ambient sound recorded by means of the microphone. On the basis of the recognized hearing situation, the respective signal processing algorithm or algorithms are then usually adapted to the characteristic properties of the present hearing situation. In particular, the hearing device is thereby intended to pass on the information relevant to the user in accordance with the hearing situation. For example, different settings (parameter values of different parameters) of the signal processing algorithm(s) are required for the clearest possible output of music than for the intelligible output of speech in loud ambient noise. The correspondingly assigned parameters are then changed as a function of the recognized hearing situation.
Usual hearing situations are, for example, speech in quiet, speech in background noise, listening to music, and (driving in) a vehicle. To analyze the ambient sound (specifically the microphone signal) and to identify the respective hearing situation, various features (often also referred to as "features") are first derived from the microphone signal (or an input signal formed from it). These features are fed to the classifier, which, with the help of analysis models such as a so-called Gaussian mixture analysis, a hidden Markov model, a neural network or the like, outputs probabilities for the presence of certain hearing situations.
A classifier is often "trained" for the respective hearing situation by means of databases in which a large number of different representative audio samples are stored for the respective hearing situations. The disadvantage of this, however, is that such a database usually cannot map all combinations of noises that may occur in everyday life. For this reason alone, misclassifications of some hearing situations can occur.
The invention is based on the object of making an improved hearing device possible.
This object is achieved according to the invention by a method for operating a hearing device with the features of claim 1. Furthermore, this object is achieved according to the invention by a hearing device with the features of claim 13. Advantageous embodiments and further developments of the invention are set out in the subclaims and in the following description.
The method according to the invention is used to operate a hearing device which comprises at least one microphone for converting ambient sound into a microphone signal. According to the method, a number of features (also referred to as "features") are derived from the microphone signal or an input signal formed therefrom. At least three classifiers, which are implemented independently of one another for the analysis of one (preferably permanently) assigned acoustic dimension each, are each supplied with a specifically assigned selection of these features. By means of the respective classifier, information about an expression of the acoustic dimension assigned to this classifier is then generated in each case. Depending on at least one of the at least three pieces of information about the respective expression of the assigned acoustic dimension, at least one signal processing algorithm, which is executed to process the microphone signal or the input signal into an output signal, is then changed.
Changing the signal processing algorithm is understood here and below to mean, in particular, that at least one parameter contained in the signal processing algorithm is set to a different parameter value as a function of the expression of the acoustic dimension or of at least one of the acoustic dimensions. In other words, another setting of the signal processing algorithm is "approached" (i.e., effected or made).
Unter dem Begriff "akustische Dimension" wird hier und im Folgenden eine Gruppe von Hörsituationen verstanden, die aufgrund ihrer spezifischen Eigenschaften zusammenhängen. Vorzugsweise werden die in einer solchen akustischen Dimension abgebildeten Hörsituationen jeweils durch die gleichen Merkmale (Features) beschrieben und unterscheiden sich dabei insbesondere aufgrund des aktuellen Werts der jeweiligen Merkmale.The term “acoustic dimension” is understood here and below to mean a group of listening situations that are related due to their specific properties. The hearing situations depicted in such an acoustic dimension are preferably each described by the same features and differ in particular on the basis of the current value of the respective features.
Unter dem Begriff "Ausprägung" der jeweiligen akustischen Dimension wird hier und im Folgenden insbesondere verstanden, ob (im Sinne einer binären Unterscheidung) oder (in bevorzugter Variante) zu welchem Grad (beispielsweise zu welchem Prozentsatz) die oder die jeweilige in der jeweiligen akustischen Dimension abgebildete Hörsituation vorliegt. Ein solcher Grad bzw. Prozentsatz stellt dabei vorzugsweise einen Wahrscheinlichkeitswert für das Vorliegen der jeweiligen Hörsituation dar. Beispielsweise können hierbei in einer auf das Vorhandensein von Sprache gerichteten akustischen Dimension die Hörsituationen "Sprache in Ruhe", "Sprache im Störgeräusch" oder (insbesondere nur) "Störgeräusch" (d. h. es liegt keine Sprache vor) abgebildet sein, wobei die Information über die Ausprägung vorzugsweise wiederum jeweils Prozentangaben enthält (bspw. 30 % Wahrscheinlichkeit für Sprache im Störgeräusch und 70 % Wahrscheinlichkeit für nur Störgeräusch).The term "expression" of the respective acoustic dimension is understood here and in the following in particular as to whether (in the sense of a binary distinction) or (in a preferred variant) to what degree (for example, to what percentage) the respective in the respective acoustic dimension the listening situation shown is present. Such a degree or percentage preferably represents a probability value for the presence of the respective hearing situation. For example, in an acoustic dimension directed to the presence of speech, the listening situations "speech in rest", "speech in background noise" or (in particular only) "Background noise" (ie there is no speech), the information about the expression preferably again contains percentages (for example 30% probability for speech in background noise and 70% probability for only background noise).
As described above, the hearing device according to the invention comprises at least the one microphone for converting the ambient sound into the microphone signal and a signal processor in which at least the three classifiers described above are implemented independently of one another for analyzing the respectively (preferably permanently) assigned acoustic dimension. The signal processor is set up to carry out the method according to the invention, preferably automatically. In other words, the signal processor is set up to derive the number of features from the microphone signal or the input signal formed therefrom, to supply each of the three classifiers with a specifically assigned selection of the features, to generate, with the aid of the respective classifier, information about the expression of the respectively assigned acoustic dimension and, depending on at least one of the three items of information, to change at least one signal processing algorithm (preferably assigned to the corresponding acoustic dimension) and preferably to apply it to the microphone signal or the input signal.
In a preferred embodiment, the signal processor (also referred to as a signal processing unit) is formed at least in its core by a microcontroller with a processor and a data memory in which the functionality for carrying out the method according to the invention is implemented in the form of operating software ("firmware"), so that the method - possibly in interaction with a user of the hearing device - is carried out automatically when the operating software is executed in the microcontroller. Alternatively, the signal processor is formed by a non-programmable electronic component, e.g. an ASIC, in which the functionality for carrying out the method according to the invention is implemented by means of circuitry.
Because at least three classifiers are provided and set up according to the invention, each for analyzing an assigned acoustic dimension and thus in particular for recognizing one hearing situation each, at least three hearing situations can advantageously be recognized independently of one another. This advantageously increases the flexibility of the hearing device in recognizing hearing situations. The invention is based on the insight that at least some hearing situations can exist completely independently of one another (i.e., in particular, not influencing one another, or only to an insignificant extent) and in parallel with one another. The method according to the invention and the hearing device according to the invention thus reduce the risk of mutually exclusive and, in particular, contradictory classifications (i.e., assessments of the currently present acoustic situation), at least with regard to the at least three acoustic dimensions analyzed by the respectively assigned classifiers. In particular, (completely) parallel hearing situations can be recognized in a simple manner and taken into account when modifying the signal processing algorithm.
The hearing device according to the invention offers the same advantages as the method according to the invention for operating the hearing device.
In a preferred variant of the method, several, i.e. at least two or more, signal processing algorithms are used, in particular in parallel, for processing the microphone signal or the input signal. The signal processing algorithms preferably each "work" on (at least) one assigned acoustic dimension, i.e. they serve to process (e.g. filter, amplify, attenuate) signal components that are relevant for the hearing situations contained in or mapped onto the respectively assigned acoustic dimension. To adapt the signal processing as a function of the manifestation of the respective acoustic dimension, the signal processing algorithms comprise at least one, preferably several, parameters whose parameter values can be changed. Preferably, the parameter values can also be changed in several steps (gradually or continuously) as a function of the respective probability of the manifestation. This enables signal processing that is particularly flexible and advantageously adaptable to a large number of gradual differences between several hearing situations.
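As a hedged illustration of such a gradually adjustable parameter, the following sketch interpolates a hypothetical attenuation parameter between two assumed limits as a function of the probability reported by a classifier; the function name and the decibel range are not from the patent:

```python
# Hypothetical sketch: a noise-reduction strength whose parameter value is
# adjusted gradually (not just on/off) with the probability of the dimension.
def noise_reduction_strength(p_noise: float,
                             min_db: float = 0.0,
                             max_db: float = 12.0) -> float:
    """Interpolate an attenuation parameter between two limits.

    p_noise is the (continuous) probability/degree reported by a classifier.
    """
    p = min(max(p_noise, 0.0), 1.0)   # clamp to [0, 1]
    return min_db + p * (max_db - min_db)

for p in (0.0, 0.25, 0.5, 1.0):
    print(p, noise_reduction_strength(p))  # 0.0, 3.0, 6.0, 12.0 dB
```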
According to the method, at least two of the at least three classifiers are moreover each supplied with a different selection from the features. Here and in the following, this is understood in particular to mean that a different number of features and/or different features are selected for and supplied to the respective classifier.
The conjunction "and/or" is to be understood here and below in such a way that the features linked by this conjunction can be implemented both together and as alternatives to one another.
In a further expedient variant of the method, each of the classifiers is supplied, via the correspondingly assigned selection, in particular only with features relevant for analyzing the assigned acoustic dimension. In other words, for each classifier preferably only those features are selected and supplied that are actually required to determine the hearing situation mapped in the respective acoustic dimension. This advantageously saves computational effort in analyzing the respective acoustic dimension, as well as effort in implementing the respective classifier, since features that are insignificant for the respective acoustic dimension are disregarded from the outset. Advantageously, this also further reduces the risk of a misclassification due to an erroneous consideration of irrelevant features.
In an advantageous variant of the method, in particular in the event that each classifier uses only the features relevant to it, a specific analysis algorithm is used for each of the classifiers to evaluate the (specifically) supplied features. This, in turn, advantageously saves computational effort. Furthermore, comparatively complicated algorithms or analysis models, such as Gaussian mixture models, neural networks or hidden Markov models, which are used in particular for analyzing a large number of different, mutually independent features, can be dispensed with. Rather, each of the classifiers is thus "tailored" (i.e. adapted or designed) to a concrete "problem", that is, its analysis algorithm is tailored to the acoustic dimension specifically assigned to that classifier. The comparatively complex analysis models described above can nevertheless be used for specific acoustic dimensions within the scope of the invention; here too, because the corresponding classifier is directed at one or a few hearing situations covered by the specific acoustic dimension, effort in implementing such a comparatively elaborate model can be saved.
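A minimal sketch of such a "tailored" analysis algorithm might look as follows, here for a hypothetical music classifier; the rule, weights and thresholds are invented for illustration, since the patent does not specify any concrete analysis algorithm:

```python
# Illustrative, dimension-specific rule instead of a generic model over all
# features. All weights and thresholds are assumed values.
def music_classifier(onset_content: float,
                     tonality: float,
                     noise_floor_level_db: float) -> float:
    """Return a probability-like score for the dimension 'music'."""
    score = 0.0
    score += 0.4 * tonality                     # music tends to be tonal
    score += 0.3 * onset_content                # note onsets occur frequently
    score += 0.3 * (1.0 if noise_floor_level_db < -50.0 else 0.0)
    return min(max(score, 0.0), 1.0)

print(music_classifier(onset_content=0.8, tonality=0.9,
                       noise_floor_level_db=-60.0))  # high score for music
```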
In a preferred variant of the method, the dimensions "vehicle", "music" and "speech" in particular are used as the at least three acoustic dimensions. Within the respective acoustic dimension it is thus determined, in particular, whether the user of the hearing device is in a vehicle and is actually driving in it, whether music is being listened to, and whether speech is present. In the latter case, it is preferably determined within this acoustic dimension whether speech in quiet, speech in noise, or no speech (and preferably only background noise) is present. These three acoustic dimensions are, in particular, the ones that typically occur especially frequently in the everyday life of a user of the hearing device and that are, moreover, independent of one another. In an optional development of this variant of the method, a fourth classifier is used to analyze a fourth acoustic dimension, namely in particular the loudness (also: "volume") of ambient noise (also referred to as "background noise"). The manifestations of this acoustic dimension preferably extend gradually or continuously over several intermediate levels from very quiet to very loud. The information on the manifestations of the acoustic dimensions vehicle and music, in particular, can by contrast optionally be "binary", i.e. it is only recognized whether or not driving in a vehicle is taking place, or whether or not music is being listened to. Preferably, however, all information of the other three acoustic dimensions is continuously available as a kind of probability value. This is particularly advantageous because errors in the analysis of the respective acoustic dimension cannot be ruled out, and because, in contrast to binary information, "smoother" transitions between different settings can thus be achieved in a simple manner.
In additional or optionally alternative developments, further classifiers are used for wind and/or reverberation estimation and for detecting the own voice of the wearer of the hearing device.
In an expedient variant of the method, features are derived from the microphone signal or the input signal that are selected from a (in particular non-exhaustive) group comprising, in particular, the features signal level, 4 Hz envelope modulation, onset content, level of a background noise (also referred to as the "noise floor level", optionally at a predetermined frequency), spectral centroid of the background noise, stationarity (in particular at a predetermined frequency), tonality and wind activity.
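The patent names these features without prescribing their computation; the following sketch shows one plausible, assumed definition for two of them (4 Hz envelope modulation and noise floor level), with an assumed sampling rate:

```python
# Sketch under assumed definitions; the patent only names the features.
import numpy as np

FS = 16000  # assumed sampling rate in Hz

def envelope_modulation_4hz(x: np.ndarray, fs: int = FS) -> float:
    """Rough 4 Hz envelope-modulation measure (speech syllable rate)."""
    env = np.abs(x)                                  # crude amplitude envelope
    env = env - np.mean(env)
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    band = (freqs >= 2.0) & (freqs <= 8.0)           # energy around 4 Hz
    return float(np.sum(spec[band]) / (np.sum(spec) + 1e-12))

def noise_floor_level(x: np.ndarray, frame: int = 256) -> float:
    """Estimate the background-noise level as a low percentile of frame RMS."""
    frames = x[: len(x) // frame * frame].reshape(-1, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return float(20 * np.log10(np.percentile(rms, 10) + 1e-12))

x = np.random.randn(FS)  # one second of stand-in signal
print(envelope_modulation_4hz(x), noise_floor_level(x))
```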
In a further expedient variant of the method, at least the features level of the background noise, spectral centroid of the background noise and stationarity (and optionally also the feature wind activity) are assigned to the acoustic dimension vehicle. The features onset content, tonality and level of the background noise are preferably assigned to the acoustic dimension music. The features onset content and 4 Hz envelope modulation, in particular, are assigned to the acoustic dimension speech. The dimension loudness of the ambient noise, if present, is assigned in particular the features level of the background noise, signal level and spectral centroid of the background noise.
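This feature-to-dimension assignment can be pictured as a simple lookup; the sketch below follows the assignments from the text, while the dictionary layout, key names and stand-in values are illustrative:

```python
# The mapping itself follows the text above; names and values are illustrative.
FEATURES_PER_DIMENSION = {
    "vehicle":  ["noise_floor_level", "noise_floor_centroid", "stationarity"],
    "music":    ["onset_content", "tonality", "noise_floor_level"],
    "speech":   ["onset_content", "envelope_modulation_4hz"],
    "loudness": ["noise_floor_level", "signal_level", "noise_floor_centroid"],
}

def select_features(all_features: dict, dimension: str) -> dict:
    """Supply a classifier only with its specifically assigned selection."""
    return {k: all_features[k] for k in FEATURES_PER_DIMENSION[dimension]}

all_features = {  # stand-in values as they might come from feature extraction
    "noise_floor_level": -48.0, "noise_floor_centroid": 1200.0,
    "stationarity": 0.9, "onset_content": 0.2, "tonality": 0.7,
    "envelope_modulation_4hz": 0.1, "signal_level": -30.0,
}
print(select_features(all_features, "music"))
```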
In a further expedient variant of the method, a specifically assigned temporal stabilization is taken into account for each classifier. In particular, for some of the classifiers, if the presence of a hearing situation (i.e., in particular, a certain manifestation of the acoustic dimension) has already been recognized in the past (e.g. in a preceding time segment of predetermined duration), it is assumed that this state (the manifestation) is then still present with high probability at the current point in time. For example, a moving average is formed over (in particular a predetermined number of) preceding time segments. Alternatively, a kind of "dead-time element" can be provided, by means of which the probability is increased in a subsequent time segment that the manifestation present in the preceding time segment still persists. For example, if driving in a vehicle was recognized during the preceding five minutes, it is assumed that this situation continues to exist. Comparatively "strong" stabilizations are preferably used for the dimensions vehicle and music, i.e. only comparatively slow or rare changes of the correspondingly assigned hearing situations are assumed. For the dimension speech, on the other hand, expediently no stabilization or only a "weak" stabilization is applied, since rapid and/or frequent changes of the hearing situations are assumed here. Speech situations often last only a few seconds (e.g. about 5 seconds) or a few minutes, whereas driving in a vehicle usually persists for several minutes (e.g. more than 3 to 30 minutes or even hours). A further optional variant of stabilization can be implemented via a counting principle, in which a counter is incremented at a comparatively fast detection rate (e.g. every 100 milliseconds to a few seconds) and the "recognition" of the respective hearing situation is only triggered when this counter exceeds a limit value. This is expedient, for example, as short-term stabilization in the case of a common classifier for "all" hearing situations. As a modification for stabilization in the present case, it is conceivable, for example, to assign each hearing situation its own limit value and to lower it, in particular for the hearing situations "driving in a vehicle" and/or "listening to music", if the respective hearing situation has already been recognized for a predetermined preceding period of time.
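The counting principle described above might look as follows; the thresholds, the detection rate and the per-situation threshold reduction are assumed values, not taken from the patent:

```python
# Sketch of the counter-based stabilization variant; all values are assumed.
class CounterStabilizer:
    def __init__(self, threshold: int, reduced_threshold: int):
        self.threshold = threshold                  # detections needed initially
        self.reduced_threshold = reduced_threshold  # lowered once situation known
        self.count = 0
        self.active = False

    def update(self, detected: bool) -> bool:
        limit = self.reduced_threshold if self.active else self.threshold
        self.count = self.count + 1 if detected else max(self.count - 1, 0)
        self.active = self.count >= limit
        return self.active

# "Vehicle" changes slowly -> strong stabilization (high threshold);
# "speech" changes quickly -> weak stabilization (low threshold).
vehicle = CounterStabilizer(threshold=50, reduced_threshold=20)
speech = CounterStabilizer(threshold=3, reduced_threshold=3)

for raw in [True] * 60:            # raw per-frame detections, e.g. every 100 ms
    v = vehicle.update(raw)
    s = speech.update(raw)
print(v, s)  # speech activates after 3 frames, vehicle only after 50
```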
In a further expedient variant of the method, the signal processing algorithm, or the respective signal processing algorithm, is adapted as a function of at least two of the at least three pieces of information about the manifestation of the respectively assigned acoustic dimension. The information from several classifiers is thus taken into account in at least one signal processing algorithm.
In an expedient variant of the method, the respective pieces of information from the individual classifiers are, in particular, first supplied to a fusion element for a joint evaluation ("fused"). On the basis of this joint evaluation of all information, in particular an overall picture of the hearing situations present is created. Preferably, a dominant hearing situation is determined in the process, in particular on the basis of the degree of manifestation reflecting the probability. The signal processing algorithm, or the respective signal processing algorithm, is adapted to this dominant hearing situation. Optionally, one hearing situation (namely the dominant one) is prioritized in that the signal processing algorithm, or the respective one, is modified only as a function of the dominant hearing situation, while other signal processing algorithms and/or the parameters dependent on other hearing situations remain unchanged or are set to a parameter value that has no influence on the signal processing.
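A minimal sketch of this fusion step, assuming the classifier outputs are probability-like values and the dominant hearing situation is simply the one with the highest value:

```python
# Minimal fusion sketch; situation names and values are stand-ins.
def fuse(information: dict) -> tuple:
    """Return (dominant situation, its probability) from classifier outputs."""
    dominant = max(information, key=information.get)
    return dominant, information[dominant]

information = {"speech_in_noise": 0.8, "music": 0.5, "vehicle": 0.1}
dominant, p = fuse(information)
print(dominant, p)  # only the dominant situation drives the adaptation here
```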
In a development of the variant of the method described above, the joint evaluation of all information is used in particular to determine a hearing situation referred to as a sub-situation, which has a lower dominance than the dominant hearing situation. This sub-situation, or the respective one, is additionally taken into account in the aforementioned adaptation of the signal processing algorithm(s) to the dominant hearing situation and/or for adapting a signal processing algorithm specifically assigned to the acoustic dimension of this sub-situation. In particular, this sub-situation leads to a smaller change of the respectively assigned parameter(s) than the dominant hearing situation does. If, for example, speech in noise is determined as the dominant hearing situation and music as the sub-situation, a signal processing algorithm serving to achieve the clearest possible speech intelligibility in noise is changed comparatively strongly in one or more parameters in order to achieve the highest possible speech intelligibility. However, since music is also present, parameters serving to attenuate ambient noise are set less strongly (than if only background noise were present) so as not to completely attenuate the tones of the music. A (in particular additional) signal processing algorithm serving the clear sound reproduction of music is, moreover, set less strongly than with music as the dominant hearing situation (but more strongly than with no music at all) so as not to mask the speech components. Thus, in particular because of the mutually independent detection of different hearing situations and the finer adaptation of the signal processing algorithms made possible by it, a particularly precise adaptation of the signal processing of the hearing device to the hearing situation actually present can take place.
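The speech-in-noise/music example above can be sketched as follows; all decibel figures and weighting factors are invented for illustration:

```python
# Sketch of blending a dominant situation with a sub-situation; all numbers
# are assumed, not taken from the patent.
def noise_attenuation(p_speech_in_noise: float, p_music: float) -> float:
    full = 12.0 * p_speech_in_noise   # attenuation wanted for speech in noise
    relief = 6.0 * p_music            # music also present: attenuate less
    return max(full - relief, 0.0)

def music_enhancement(p_music: float, p_speech_in_noise: float) -> float:
    full = 8.0 * p_music              # enhancement wanted for pure music
    return full * (1.0 - 0.5 * p_speech_in_noise)  # reduced so speech stays clear

print(noise_attenuation(0.8, 0.5))   # 6.6 dB, less than the full 9.6 dB
print(music_enhancement(0.5, 0.8))   # 2.4, less than with music alone (4.0)
```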
As already described above, the parallel presence of several hearing situations is preferably taken into account in at least one of the possibly several signal processing algorithms.
In an alternative variant of the method, the signal processing algorithm, or preferably each signal processing algorithm, is assigned to at least one of the classifiers. In this case, preferably at least one parameter of each signal processing algorithm is changed (in particular directly) as a function of the information output by the respective classifier about the manifestation of the assigned acoustic dimension. Preferably, this parameter, or its parameter value, is designed as a function of the respective information. The information about the manifestation of the respective acoustic dimension is thus used, in particular, directly for adapting the signal processing. In other words, each classifier "controls" at least one parameter of at least one signal processing algorithm. A joint evaluation of all information can be omitted here. In particular, in this case an especially large amount of information about the distribution of the mutually independent hearing situations in the current "picture" described by the ambient sound is taken into account, which in turn promotes a particularly fine adaptation of the signal processing. In particular, completely parallel hearing situations (e.g. 100% speech in noise while 100% driving in a vehicle, or 100% music while 100% driving in a vehicle) can be taken into account in a simple manner and with little loss of information.
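A sketch of this fusion-free alternative, in which each (hypothetical) parameter value is a direct function of one classifier's output; the parameter names and functions are assumptions:

```python
# Each classifier output directly drives a parameter of "its" algorithm;
# all parameter names and functions are invented for illustration.
PARAMETER_FUNCTIONS = {
    "directionality":  lambda p_speech: 0.5 + 0.5 * p_speech,
    "wind_noise_gate": lambda p_vehicle: 6.0 * p_vehicle,
    "music_bandwidth": lambda p_music: 8000.0 + 4000.0 * p_music,
}

outputs = {"p_speech": 1.0, "p_vehicle": 1.0, "p_music": 0.0}  # fully parallel
params = {
    "directionality":  PARAMETER_FUNCTIONS["directionality"](outputs["p_speech"]),
    "wind_noise_gate": PARAMETER_FUNCTIONS["wind_noise_gate"](outputs["p_vehicle"]),
    "music_bandwidth": PARAMETER_FUNCTIONS["music_bandwidth"](outputs["p_music"]),
}
print(params)  # 100 % speech in noise while 100 % driving is representable
```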
In a further expedient variant of the method, at least one of the classifiers is supplied with status information that is generated independently of the microphone signal or the input signal. This status information is then taken into account, in particular, in addition to the evaluation of the respective acoustic dimension. For example, it is movement and/or location information that is used, for instance, to evaluate the acoustic dimension vehicle. This movement and/or location information is generated, for example, with an acceleration or (global) position sensor arranged in the hearing device itself or in a system connected to it for signal transmission (e.g. a smartphone). For example, when evaluating the acoustic dimension vehicle, the presence of a movement speed above a predetermined value can be used to increase the probability of the presence of the hearing situation driving in a vehicle in a simple manner, in addition to the acoustic evaluation. This is also referred to as "augmenting" a classifier.
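A hedged sketch of such an augmentation, assuming a simple additive boost of the acoustically determined vehicle probability once a measured speed exceeds a threshold; the combination rule and all numbers are assumptions, as the text does not specify them:

```python
# Sketch of "augmenting" the vehicle classifier with movement data from a
# sensor (e.g. in a paired smartphone); the combination rule is assumed.
def augmented_vehicle_probability(p_acoustic: float,
                                  speed_kmh: float,
                                  speed_threshold: float = 30.0) -> float:
    """Raise the vehicle probability when the measured speed is high."""
    if speed_kmh >= speed_threshold:
        return min(1.0, p_acoustic + 0.3)   # assumed boost
    return p_acoustic

print(augmented_vehicle_probability(0.55, speed_kmh=80.0))  # 0.85
```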
Exemplary embodiments of the invention are explained in more detail below with reference to a drawing, in which:
- Fig. 1 shows a hearing device in a schematic overview representation,
- Fig. 2 shows a signal flow diagram of the hearing device according to Fig. 1 in a schematic block diagram,
- Fig. 3 shows a method for operating the hearing device according to Fig. 1 in a schematic flow chart, and
- Fig. 4 shows, in a view according to Fig. 2, an alternative exemplary embodiment of the signal flow diagram.
Parts and quantities that correspond to one another are always provided with the same reference symbols in all figures.
To recognize different hearing situations and to subsequently adapt the signal processing, the hearing aid 1, specifically its signal processor 4, is set up to carry out a method that is explained in more detail below.
In an alternative exemplary embodiment, it is only determined in a "binary" manner whether speech (possibly in noise, or only background noise) is present, or whether music or driving in a vehicle is present or not.
In a method step 50, the respective manifestation of the acoustic dimensions is output to a fusion module 60.
The classifier KF comprises, in a manner not shown in more detail, a stabilization over time. This is geared in particular to the fact that a journey in a vehicle usually lasts a relatively long time; thus, if driving in a vehicle has already been recognized in preceding time segments of, for example, 30 seconds to five minutes each, and on the assumption that the situation of driving in a vehicle still persists, the probability of the presence of this hearing situation is already increased in advance. A corresponding mechanism is also set up and provided in the classifier KM.
An alternative exemplary embodiment of the signal flow diagram is shown in Fig. 4.
The subject matter of the invention is not limited to the exemplary embodiments described above. Rather, further embodiments of the invention can be derived from the above description by a person skilled in the art. In particular, the individual features of the invention described with reference to the various exemplary embodiments, and their design variants, can also be combined with one another in other ways. For example, the hearing aid 1 can also be designed as an in-the-ear hearing aid instead of the behind-the-ear hearing aid shown. The subject matter of the invention is defined in the following claims.
List of reference signs
- 1 Hearing aid
- 2 Housing
- 3 Microphone
- 4 Signal processor
- 5 Loudspeaker
- 6 Battery
- 7 Sound tube
- 8 Earpiece
- 10 Feature analysis module
- 20 Method step
- 30 Method step
- 40 Method step
- 50 Method step
- 60 Fusion module
- A1-A4 Signal processing algorithm
- KS, KM, KF Classifier
- ME, MO, MT, MP, MW, MZ, MM Feature
- SA Output signal
- SE Input signal
- SM Microphone signal
Claims (13)
- Method for operating a hearing device (1), which comprises at least one microphone (3) for converting ambient sound into a microphone signal (SM), wherein according to the method
- a plurality of features (ME, MO, MT, MP, MW, MZ, MM) are derived from the microphone signal (SM) or from an input signal (SE) formed therefrom,
- at least three classifiers (KS, KM, KF), which are implemented independently of each other for respectively analyzing an assigned acoustic dimension - i.e. a group of hearing situations which are related on the basis of their specific properties - are each supplied with a specifically assigned selection from these features (ME, MO, MT, MP, MW, MZ, MM), wherein at least two of the at least three classifiers (KS, KM, KF) are each supplied with a different selection from the features (ME, MO, MT, MP, MW, MZ, MM),
- by means of the respective classifier (KS, KM, KF), in each case information is generated about a characteristic of the acoustic dimension assigned to this classifier (KS, KM, KF), and
- at least one signal processing algorithm (A1, A2, A3, A4), which is executed in order to process the microphone signal (SM) or the input signal (SE) into an output signal (SA), is modified as a function of at least one of the at least three pieces of information about the respective characteristic of the assigned acoustic dimension.
- Method according to claim 1,
wherein only features (ME, MO, MT, MP, MW, MZ, MM) relevant for analyzing the respectively assigned acoustic dimension are supplied to each of the classifiers (KS, KM, KF) with the correspondingly assigned selection. - Method according to one of claims 1 to 2,
wherein for each of the classifiers (KS, KM, KF) a specific analysis algorithm is used for evaluating the respective supplied features (ME, MO, MT, MP, MW, MZ, MM). - Method according to one of claims 1 to 3,
wherein vehicle, music and speech are used as the at least three acoustic dimensions. - Method according to one of claims 1 to 4,
wherein characteristics selected from signal level, 4-hertz envelope modulation (ME), onset content (MO), level of a background noise (MP), spectral center of gravity of the background noise (MZ), stationarity (MM), tonality (MT), wind activity (MW) are derived from the microphone signal (SM) or respectively the input signal (SE). - Method according to one of claims 4 and 5,
wherein at least the features level of background noise (MP), spectral center of gravity of background noise (MZ) and stationarity (MM) are assigned to the acoustic dimension vehicle, wherein the features onset content (MO), tonality (MT) and level of background noise (MP) are assigned to the acoustic dimension music, and wherein the features onset content (MO) and 4-hertz envelope modulation (ME) are assigned to the acoustic dimension speech. - Method according to one of claims 1 to 6,
wherein for each classifier (KS, KM, KF) a specifically assigned temporal stabilization is taken into consideration. - Method according to one of claims 1 to 7,
wherein the or the respective signal processing algorithm (A1, A2, A3, A4) is modified as a function of at least two of the at least three pieces of information about the manifestation of the respectively assigned acoustic dimension. - Method according to one of claims 1 to 8,
wherein the information of the respective classifiers (KS, KM, KF) is supplied to a joint evaluation, wherein a dominant hearing situation is determined on the basis of this joint evaluation, and wherein the or the respective signal processing algorithm (A1, A2, A3, A4) is adapted to this dominant hearing situation. - Method according to claim 9,
wherein at least one subsituation with lower dominance in comparison to the dominant listening situation is determined, and wherein this or the respective subsituation is taken into account in the modification of the signal processing algorithm (A1, A2, A3, A4) or at least one of the signal processing algorithms (A1, A2, A3, A4). - Method according to one of claims 1 to 7,
wherein each signal processing algorithm (A1, A2, A3, A4) is assigned to at least one of the classifiers (KS, KM, KF), and wherein at least one parameter of each signal processing algorithm (A1, A2, A3, A4) is modified as a function of the information about the manifestation of the corresponding acoustic dimension output by the assigned classifier (KS, KM, KF). - Method according to one of claims 1 to 11,
wherein at least one of the classifiers (KS, KM, KF) is supplied with a status information generated independently of the microphone signal (SM) or the input signal (SE), which status information is taken into account in addition to the evaluation of the respective acoustic dimension. - Hearing device (1),- with at least one microphone (3) for the conversion for converting ambient sound into a microphone signal (SM), and- with a signal processor (4), in which at least three classifiers (Ks, KM, KF) are implemented independently of each other for respectively analyzing an assigned acoustic dimension - i.e. a group of hearing situations, which are related on the basis of their specific properties - and wherein the signal processor (4) is configured to perform the method according to one of claims 1 to 12.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102017205652.5A DE102017205652B3 (en) | 2017-04-03 | 2017-04-03 | Method for operating a hearing device and hearing device |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3386215A1 EP3386215A1 (en) | 2018-10-10 |
EP3386215B1 true EP3386215B1 (en) | 2021-11-17 |
Family
ID=61231167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18157220.7A Active EP3386215B1 (en) | 2017-04-03 | 2018-02-16 | Hearing aid and method for operating a hearing aid |
Country Status (5)
Country | Link |
---|---|
US (1) | US10462584B2 (en) |
EP (1) | EP3386215B1 (en) |
CN (1) | CN108696813B (en) |
DE (1) | DE102017205652B3 (en) |
DK (1) | DK3386215T3 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102019203786A1 (en) * | 2019-03-20 | 2020-02-13 | Sivantos Pte. Ltd. | Hearing aid system |
DE102019218808B3 (en) * | 2019-12-03 | 2021-03-11 | Sivantos Pte. Ltd. | Method for training a hearing situation classifier for a hearing aid |
DE102020208720B4 (en) * | 2019-12-06 | 2023-10-05 | Sivantos Pte. Ltd. | Method for operating a hearing system depending on the environment |
US11601765B2 (en) * | 2019-12-20 | 2023-03-07 | Sivantos Pte. Ltd. | Method for adapting a hearing instrument and hearing system therefor |
DE102022212035A1 (en) | 2022-11-14 | 2024-05-16 | Sivantos Pte. Ltd. | Method for operating a hearing aid and hearing aid |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2307582A (en) * | 1994-09-07 | 1997-05-28 | Motorola Inc | System for recognizing spoken sounds from continuous speech and method of using same |
DK1273205T3 (en) * | 2000-04-04 | 2006-10-09 | Gn Resound As | A hearing prosthesis with automatic classification of the listening environment |
US7158931B2 (en) * | 2002-01-28 | 2007-01-02 | Phonak Ag | Method for identifying a momentary acoustic scene, use of the method and hearing device |
EP1513371B1 (en) * | 2004-10-19 | 2012-08-15 | Phonak Ag | Method for operating a hearing device as well as a hearing device |
US8249284B2 (en) * | 2006-05-16 | 2012-08-21 | Phonak Ag | Hearing system and method for deriving information on an acoustic scene |
EP1858291B1 (en) * | 2006-05-16 | 2011-10-05 | Phonak AG | Hearing system and method for deriving information on an acoustic scene |
CN101529929B (en) * | 2006-09-05 | 2012-11-07 | Gn瑞声达A/S | A hearing aid with histogram based sound environment classification |
US8948428B2 (en) * | 2006-09-05 | 2015-02-03 | Gn Resound A/S | Hearing aid with histogram based sound environment classification |
EP2255548B1 (en) * | 2008-03-27 | 2013-05-08 | Phonak AG | Method for operating a hearing device |
US20100002782A1 (en) * | 2008-07-02 | 2010-01-07 | Yutaka Asanuma | Radio communication system and radio communication method |
DK2792165T3 (en) | 2012-01-27 | 2019-01-21 | Sivantos Pte Ltd | CUSTOMIZING A CLASSIFICATION OF A SOUND SIGN IN A HEARING DEVICE |
EP2670168A1 (en) * | 2012-06-01 | 2013-12-04 | Starkey Laboratories, Inc. | Adaptive hearing assistance device using plural environment detection and classification |
WO2015024585A1 (en) * | 2013-08-20 | 2015-02-26 | Widex A/S | Hearing aid having an adaptive classifier |
DE102014207311A1 (en) * | 2014-04-16 | 2015-03-05 | Siemens Medical Instruments Pte. Ltd. | Automatic selection of listening situations |
EP3360136B1 (en) | 2015-10-05 | 2020-12-23 | Widex A/S | Hearing aid system and a method of operating a hearing aid system |
JP6402810B1 (en) * | 2016-07-22 | 2018-10-10 | 株式会社リコー | Three-dimensional modeling resin powder, three-dimensional model manufacturing apparatus, and three-dimensional model manufacturing method |
2017
- 2017-04-03 DE DE102017205652.5A patent/DE102017205652B3/en not_active Expired - Fee Related

2018
- 2018-02-16 DK DK18157220.7T patent/DK3386215T3/en active
- 2018-02-16 EP EP18157220.7A patent/EP3386215B1/en active Active
- 2018-03-30 US US15/941,106 patent/US10462584B2/en active Active
- 2018-04-03 CN CN201810287586.2A patent/CN108696813B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN108696813A (en) | 2018-10-23 |
DE102017205652B3 (en) | 2018-06-14 |
US20180288534A1 (en) | 2018-10-04 |
EP3386215A1 (en) | 2018-10-10 |
US10462584B2 (en) | 2019-10-29 |
CN108696813B (en) | 2021-02-19 |
DK3386215T3 (en) | 2022-02-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3386215B1 (en) | Hearing aid and method for operating a hearing aid | |
EP3451705B1 (en) | Method and apparatus for the rapid detection of own voice | |
EP2603018B1 (en) | Hearing aid with speaking activity recognition and method for operating a hearing aid | |
EP1379102B1 (en) | Sound localization in binaural hearing aids | |
DE102019206743A1 (en) | Hearing aid system and method for processing audio signals | |
DE102017214164B3 (en) | Method for operating a hearing aid and hearing aid | |
WO2001020965A2 (en) | Method for determining a current acoustic environment, use of said method and a hearing-aid | |
EP3873108A1 (en) | Hearing system with at least one hearing instrument worn in or on the ear of the user and method for operating such a hearing system | |
EP2991379B1 (en) | Method and device for improved perception of own voice | |
EP3840418A1 (en) | Method for adjusting a hearing aid and corresponding hearing system | |
EP3396978B1 (en) | Hearing aid and method for operating a hearing aid | |
EP3836139A1 (en) | Hearing aid and method for coupling two hearing aids together | |
EP2141941A2 (en) | Method for suppressing interference noises and corresponding hearing aid | |
EP2182741B1 (en) | Hearing aid with special situation recognition unit and method for operating a hearing aid | |
EP3693960B1 (en) | Method for individualized signal processing of an audio signal of a hearing aid | |
EP3585073A1 (en) | Method for controlling data transmission between at least one hearing aid and a peripheral device of a hearing aid and corresponding hearing aid system | |
DE102008046040A1 (en) | Method for operating a hearing device with directivity and associated hearing device | |
EP1926087A1 (en) | Adjustment of a hearing device to a speech signal | |
EP2658289B1 (en) | Method for controlling an alignment characteristic and hearing aid | |
EP1881738A2 (en) | Method of operating a hearing aid and assembly with a hearing aid | |
EP3116236A1 (en) | Method for processing signals for a hearing aid, hearing aid, hearing aid system and interference transmitter for a hearing aid system | |
DE102020216439A1 (en) | Method for operating a hearing system with a hearing instrument | |
DE102007043081A1 (en) | Method and arrangements for detecting the type of a sound signal source with a hearing aid | |
EP3048813B1 (en) | Method and device for suppressing noise based on inter-subband correlation | |
EP3985997A1 (en) | Hearing aid and method for its operation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20190409 |
|
RBV | Designated contracting states (corrected) |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20191014 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20200225 |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: AUBREVILLE, MARC Inventor name: LUGGER, MARKO |
|
GRAJ | Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted |
Free format text: ORIGINAL CODE: EPIDOSDIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
INTC | Intention to grant announced (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20210407 |
|
GRAJ | Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted |
Free format text: ORIGINAL CODE: EPIDOSDIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R079 Ref document number: 502018007846 Country of ref document: DE Free format text: PREVIOUS MAIN CLASS: H04R0025000000 Ipc: G10L0025810000 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTC | Intention to grant announced (deleted) | ||
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: AUBREVILLE, MARC Inventor name: LUGGER, MARKO |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G10L 25/81 20130101AFI20210610BHEP Ipc: H04R 25/00 20060101ALI20210610BHEP Ipc: G10L 25/84 20130101ALI20210610BHEP |
|
INTG | Intention to grant announced |
Effective date: 20210629 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 502018007846 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D Free format text: LANGUAGE OF EP DOCUMENT: GERMAN |
|
REG | Reference to a national code |
Ref country code: AT | Ref legal event code: REF | Ref document number: 1448738 | Country of ref document: AT | Kind code of ref document: T | Effective date: 20211215 |
|
REG | Reference to a national code |
Ref country code: DK | Ref legal event code: T3 | Effective date: 20220201 |
|
REG | Reference to a national code |
Ref country code: LT | Ref legal event code: MG9D |
|
REG | Reference to a national code |
Ref country code: NL | Ref legal event code: MP | Effective date: 20211117 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RS | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
Ref country code: LT | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
Ref country code: FI | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
Ref country code: BG | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20220217 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20220317 |
Ref country code: SE | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
Ref country code: PT | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20220317 |
Ref country code: PL | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
Ref country code: NO | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20220217 |
Ref country code: NL | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
Ref country code: LV | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
Ref country code: HR | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
Ref country code: GR | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20220218 |
Ref country code: ES | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
Ref country code: SK | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
Ref country code: RO | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
Ref country code: EE | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
Ref country code: CZ | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
|
REG | Reference to a national code |
Ref country code: DE | Ref legal event code: R097 | Ref document number: 502018007846 | Country of ref document: DE |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
|
REG | Reference to a national code |
Ref country code: BE | Ref legal event code: MM | Effective date: 20220228 |
|
26N | No opposition filed |
Effective date: 20220818 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU | Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES | Effective date: 20220216 |
Ref country code: AL | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE | Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES | Effective date: 20220216 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE | Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES | Effective date: 20220228 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO | Effective date: 20180216 |
|
REG | Reference to a national code |
Ref country code: AT | Ref legal event code: MM01 | Ref document number: 1448738 | Country of ref document: AT | Kind code of ref document: T | Effective date: 20230216 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT | Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES | Effective date: 20230216 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
Ref country code: CY | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE | Payment date: 20240216 | Year of fee payment: 7 |
Ref country code: CH | Payment date: 20240301 | Year of fee payment: 7 |
Ref country code: GB | Payment date: 20240222 | Year of fee payment: 7 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR | Payment date: 20240222 | Year of fee payment: 7 |
Ref country code: DK | Payment date: 20240221 | Year of fee payment: 7 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT | Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT | Effective date: 20211117 |