EP3346468B1 - Musical-piece analysis device, musical-piece analysis method, and musical-piece analysis program - Google Patents

Musical-piece analysis device, musical-piece analysis method, and musical-piece analysis program

Info

Publication number
EP3346468B1
Authority
EP
European Patent Office
Prior art keywords
sound data
reproduction time
fft
execution interval
music piece
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP15903039.4A
Other languages
German (de)
French (fr)
Other versions
EP3346468A1 (en)
EP3346468A4 (en)
Inventor
Shiro Suzuki
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AlphaTheta Corp
Original Assignee
AlphaTheta Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AlphaTheta Corp filed Critical AlphaTheta Corp
Publication of EP3346468A1 publication Critical patent/EP3346468A1/en
Publication of EP3346468A4 publication Critical patent/EP3346468A4/en
Application granted granted Critical
Publication of EP3346468B1 publication Critical patent/EP3346468B1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/45Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of analysis window
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/18Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Auxiliary Devices For Music (AREA)
  • Electrophonic Musical Instruments (AREA)

Description

    TECHNICAL FIELD
  • The present invention relates to a music piece analyzer, a music piece analysis method, and a music piece analysis program.
  • BACKGROUND ART
  • There has been typically known a technology of automatically analyzing a music piece in terms of a beat, a tempo, and a key and a scale of the music piece based on sound data of the music piece and the like (see, for instance, Patent Literature 1).
  • In such analysis, sound data for a certain period of time is sampled and the inputted waveforms are analyzed using, for instance, an FFT (Fast Fourier Transform).
  • Such sound data analysis has also been used as a technology relating to the BPM (Beats Per Minute) of a music piece in the field of DJ-related devices.
  • When the sound data analysis is used in a DJ-related device, a music piece can be smoothly connected to another music piece using the analyzed tempo, key, scale, and the like. Accordingly, a high DJ performance can be provided.
  • JP 2003 263170 A discloses a method for tone/tonality estimation in music data. For tone detection, peaks in the spectrum of the sound signal are localised. The analysis of the input sound signal is based on a sequence of windowed time signals with a hop size of 1/8 of a detected fundamental frequency period, and again on a second sequence of signal frames windowed with a second window and a hop size of 1/64 of the detected fundamental frequency period.
  • Katy C. Noland: "Computational tonality estimation: signal processing and hidden Markov models", 31 March 2009 (2009-03-31), pages 1-169, XP055568467, London, UK, discloses methods for tonality estimation. Pre-processing for extracting musical features from the input signal is disclosed, and the framing prior to Fourier-transforming the input signal is discussed. The influence of the window and hop sizes on the estimation is discussed, and the optimal hop size is found to be linked to the rate of key changes in the sound signal as well as to the tempo and the rate of harmonic change.
  • JP 2007 052394 A is concerned with beat/tempo estimation of music signals based on an FFT-transformed input signal. A good compromise between frequency and time resolution has to be found.
  • CITATION LIST PATENT LITERATURE(S)
  • Patent Literature 1 : JP200-97084A
  • SUMMARY OF THE INVENTION PROBLEM(S) TO BE SOLVED BY THE INVENTION
  • An FFT is typically executed at a fixed interval. Accordingly, in order to analyze long sound data, the FFT needs to be executed a large number of times, so that analyzing the sound data takes a long time.
  • An object of the invention is to provide a music piece analyzer, a music piece analysis method, and a music piece analysis program, which are capable of shortening an analysis time irrespective of a length of sound data.
  • MEANS FOR SOLVING THE PROBLEM(S)
  • According to an aspect of the invention, a music piece analyzer includes:
    • a reproduction time detector configured to detect a reproduction time of an inputted sound data;
    • an execution interval setting unit configured to set an execution interval of Fast Fourier Transform (FFT) depending on the reproduction time detected by the reproduction time detector; and
    • a sound data analyzing unit configured to execute the FFT at the execution interval set by the execution interval setting unit to analyze the inputted sound data.
  • According to another aspect of the invention, a music piece analyzer includes:
    • a data length detector configured to detect a data length of an inputted sound data;
    • an execution interval setting unit configured to set an execution interval of Fast Fourier Transform (FFT) depending on the data length detected by the data length detector; and
    • a sound data analyzing unit configured to execute the FFT at the execution interval set by the execution interval setting unit to analyze the inputted sound data.
  • According to still another aspect of the invention, a music piece analysis method includes:
    • detecting a reproduction time of an inputted sound data;
    • setting an execution interval of Fast Fourier Transform (FFT) depending on the detected reproduction time; and
    • executing the FFT at the set execution interval to analyze the inputted sound data.
  • According to a further aspect of the invention, a music piece analysis program to be run on a computer includes:
    • detecting a reproduction time of an inputted sound data;
    • setting an execution interval of Fast Fourier Transform (FFT) depending on the detected reproduction time; and
    • executing the FFT at the set execution interval to analyze the inputted sound data.
    BRIEF DESCRIPTION OF DRAWING(S)
    • Fig. 1 is a block diagram showing a music piece analyzer according to an exemplary embodiment of the invention.
    • Fig. 2 is a schematic illustration for explaining copying of sound data in the exemplary embodiment.
    • Fig. 3 is a schematic illustration for explaining a window function in the exemplary embodiment.
    • Fig. 4 is a schematic illustration for explaining an execution interval of FFT in the exemplary embodiment.
    • Fig. 5 is a schematic illustration for explaining an execution interval of sound data requiring a long reproduction time in the exemplary embodiment.
    • Fig. 6 is a schematic illustration for explaining an execution interval of sound data requiring a short reproduction time in the exemplary embodiment.
    • Fig. 7 is a schematic illustration for explaining a key judgement after the FFT is executed in the exemplary embodiment.
    • Fig. 8 is another schematic illustration for explaining a key judgement after the FFT is executed in the exemplary embodiment.
    • Fig. 9 is a flowchart for explaining a music piece analysis method in the exemplary embodiment.
    DESCRIPTION OF EMBODIMENT(S)
  • An exemplary embodiment of the invention will be described below.
  • Fig. 1 shows a music piece analyzer 1 according to the exemplary embodiment. The music piece analyzer 1 is configured to analyze sound data SD obtained by digitizing inputted PCM data and the like, judge a key of the sound data SD, and display the key as a key display KD of the inputted sound data on a display screen of a display device or the like.
  • The music piece analyzer 1, which is in the form of a software application to be run on a general computer or a mobile information terminal installed with an OS (Operating System), includes a reproduction time detector 2, a sound data judging unit 3, a sound data copier 4, a sound data analyzing unit 5, an execution interval setting unit 6, and a key judging unit 7.
  • The reproduction time detector 2 is configured to detect a reproduction time of the inputted sound data SD. Specifically, the reproduction time detector 2 detects the reproduction time of the inputted sound data SD by counting the number of samples from the start to the end of the sound data SD. After detecting the reproduction time, the reproduction time detector 2 outputs the detected reproduction time to the sound data judging unit 3 and the execution interval setting unit 6.
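  • As an illustration, a minimal sketch (not the patent's actual implementation) of how such a reproduction time can be obtained from a sample count, assuming mono PCM data held in a NumPy array and a known sampling rate:

```python
# Hypothetical sketch of the reproduction time detector 2: the reproduction time
# follows from the number of samples and the sampling rate.
import numpy as np

def detect_reproduction_time(sound_data: np.ndarray, sampling_rate_hz: float) -> float:
    """Return the reproduction time in seconds of mono PCM sound data."""
    num_samples = sound_data.shape[0]          # count the samples from start to end
    return num_samples / sampling_rate_hz      # seconds = samples / (samples per second)

# Example: 0.5 s of silence sampled at 44.1 kHz.
sd = np.zeros(22050)
print(detect_reproduction_time(sd, 44100.0))   # -> 0.5
```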
  • The sound data judging unit 3 is configured to judge whether the inputted sound data SD has a reproduction time equal to or more than a predetermined length, based on the reproduction time detected by the reproduction time detector 2. Specifically, the sound data judging unit 3 judges whether the sound data SD has a reproduction time long enough for the sound data analyzing unit 5 (described later) to analyze the sound data SD.
  • Whether the sound data SD can be analyzed is judged based on whether the sound data SD has a reproduction time equal to or longer than the shortest time required by the window function applied in the sound data analyzing unit 5.
  • A time window length of the window function is determined based on a sampling frequency, a lowest frequency to be detected and a frequency resolution of the sound data SD.
  • For instance, in a typical BPM 200 music piece in four-quarter time, one beat takes 300 msec and a semiquaver takes 75 msec. When the FFT is executed to analyze sound data SD containing a low sound at 27.5 Hz, corresponding to the musical note A0, at least 1.2 sec of the sound data SD is required.
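  • The 1.2 sec figure can be reproduced under one plausible reading of the window-length criterion; the sketch below assumes (this is not stated in the patent) that the FFT bin spacing 1/T must be finer than about half the gap between the lowest note to be detected and its neighbouring semitone:

```python
# Hypothetical derivation of a minimum analyzable length, e.g. for A0 at 27.5 Hz.

def min_window_length_sec(lowest_freq_hz: float, resolution_fraction: float = 0.5) -> float:
    """Window length T such that 1/T <= resolution_fraction * (semitone gap at lowest_freq_hz)."""
    semitone_gap_hz = lowest_freq_hz * (2 ** (1 / 12) - 1)   # gap to the next semitone up
    required_resolution_hz = resolution_fraction * semitone_gap_hz
    return 1.0 / required_resolution_hz

print(round(min_window_length_sec(27.5), 2))   # -> about 1.22 s, close to the 1.2 sec cited above
```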
  • When judging that the reproduction time of the sound data SD is less than the predetermined length, the sound data judging unit 3 outputs this judgement result to the sound data copier 4.
  • Based on the judgement result of the sound data judging unit 3, the sound data copier 4 is configured to copy the inputted sound data SD and to join the inputted sound data and the copied sound data so that the reproduction time of the resulting sound data becomes equal to or more than the predetermined length. Specifically, as shown in Fig. 2, the sound data copier 4 copies the sound data SD of the inputted reproduction time t1 and pastes the copied data CD after the sound data SD, thereby generating continuous sound data SD' of a reproduction time t2.
  • For instance, when the reproduction time t1 of the sound data SD is shorter than 1.2 sec in the above example, the sound data copier 4 repeats copying the sound data SD to generate the copied data CD, until the reproduction time t2 of the continuous sound data SD' becomes equal to or more than 1.2 sec.
  • It should be noted that the copying of the sound data SD only needs to be repeated until the reproduction time t2 is long enough for the sound data analyzing unit 5 to perform its analysis. The number N of copies is therefore not necessarily an integer.
  • The sound data copier 4 outputs the sound data SD', the reproduction time of which is made equal to or more than a predetermined length by the copying, to the sound data analyzing unit 5.
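  • A minimal sketch of the copying step, assuming the same NumPy representation as above; the final copy is truncated, which is consistent with the number N of copies not being an integer:

```python
# Hypothetical sketch of the sound data copier 4: append copies of the input
# until the total reproduction time reaches the required minimum length.
import numpy as np

def extend_by_copying(sound_data: np.ndarray, sampling_rate_hz: float,
                      min_length_sec: float) -> np.ndarray:
    """Return sound data whose reproduction time is at least min_length_sec."""
    required_samples = int(np.ceil(min_length_sec * sampling_rate_hz))
    if sound_data.shape[0] >= required_samples:
        return sound_data                       # already long enough, no copying needed
    repeats = int(np.ceil(required_samples / sound_data.shape[0]))
    extended = np.tile(sound_data, repeats)     # SD followed by copied data CD, CD, ...
    return extended[:required_samples]          # the last copy may therefore be partial

# Example: a 0.4 s fragment at 44.1 kHz is extended to 1.2 s of continuous data SD'.
fragment = np.random.randn(int(0.4 * 44100))
sd_prime = extend_by_copying(fragment, 44100.0, 1.2)
print(sd_prime.shape[0] / 44100.0)              # -> 1.2
```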
  • The sound data analyzing unit 5 is configured to analyze frequency spectra of the sound data SD or SD'. In the exemplary embodiment, the analysis is conducted using the FFT. However, the analysis method is not limited to the FFT. For instance, an analysis using a DCT (Discrete Cosine Transform), or analyses in terms of the time axis, the signal level, or the sound volume and attack, may be conducted.
  • As shown in Fig. 3, a hamming window HMW (window function) is usually applied to the FFT. The hamming window HMW attenuates the signal intensity at both ends of the time axis during the FFT execution, so that the analysis is less affected by discontinuous joints between the sampled waveforms.
  • Accordingly, the signal intensities at both ends of the time axis of the FFT-processed data are too weak to be used as analysis data.
  • For this reason, in the exemplary embodiment, only the signal intensities in an analysis period T0, in which the signal intensity is not weakened during the FFT execution, are used as analyzable data for the frequency spectra. The analysis period T0 can be determined as needed. In the exemplary embodiment, the analysis period T0 is set to the portion of the frame in which the value of the hamming window HMW is 0.7 (70%) or more.
  • Although the hamming window HMW is used in the exemplary embodiment, the window function is not limited to the hamming window HMW and may be a hanning window, a flat-top window, or the like.
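  • A minimal sketch of the windowing step, interpreting (as an assumption) the 0.7 (70%) figure as the span of samples whose hamming-window weight is at least 0.7:

```python
# Hypothetical sketch: hamming-window an FFT frame and locate the analysis period T0.
import numpy as np

def windowed_spectrum(frame: np.ndarray):
    """Return the magnitude spectrum of a hamming-windowed frame and the T0 sample range."""
    window = np.hamming(frame.shape[0])           # tapers both ends of the frame
    spectrum = np.abs(np.fft.rfft(frame * window))
    inside_t0 = np.flatnonzero(window >= 0.7)     # samples whose weight is not overly weakened
    t0_range = (inside_t0[0], inside_t0[-1])      # first and last sample of the analysis period
    return spectrum, t0_range

frame = np.random.randn(4096)
spec, (t0_start, t0_end) = windowed_spectrum(frame)
print(t0_start, t0_end)                           # roughly the central 39% of the frame
```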
  • Based on the reproduction time detected by the reproduction time detector 2, the execution interval setting unit 6 is configured to set an interval at which the sound data analyzing unit 5 executes the FFT.
  • Specifically, the execution interval setting unit 6 sets, as the execution interval TI, the time from the start of the first FFT1 to the start of the second FFT2. In the exemplary embodiment, the third FFT3 is started after the elapse of twice the execution interval TI (i.e., a time 2TI) from the start of FFT1, and the subsequent FFTs are executed sequentially in the same manner.
  • The setting of the execution interval TI depends on the reproduction time of the sound data SD or SD'.
  • For instance, for long sound data SD having a reproduction time of 30 sec or more, the execution interval setting unit 6 sets the execution interval TI to a large value as shown in Fig. 5. For short sound data SD having a reproduction time of less than 30 sec, the execution interval setting unit 6 sets the execution interval TI to a small value as shown in Fig. 6. The minimum value of the execution interval TI is determined so that the analysis period T0 of each of FFT1, FFT2 and the like is continuous with the subsequent analysis period T0.
  • The execution interval setting unit 6 outputs the set execution interval TI to the above sound data analyzing unit 5.
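  • A minimal sketch of the execution interval setting; the 30 sec threshold comes from the example above, the concrete long and short interval values are illustrative assumptions, and the lower clamp keeps consecutive analysis periods T0 contiguous:

```python
# Hypothetical sketch of the execution interval setting unit 6.

def set_execution_interval(reproduction_time_sec: float, t0_sec: float,
                           long_interval_sec: float = 2.0,
                           short_interval_sec: float = 0.5) -> float:
    """Return the FFT execution interval TI in seconds."""
    ti = long_interval_sec if reproduction_time_sec >= 30.0 else short_interval_sec
    return max(ti, t0_sec)   # at the minimum TI = T0, successive analysis periods exactly abut

def fft_start_times(reproduction_time_sec: float, ti_sec: float):
    """Start times of FFT1, FFT2, FFT3, ... at 0, TI, 2*TI, ..."""
    times, t = [], 0.0
    while t < reproduction_time_sec:
        times.append(t)
        t += ti_sec
    return times

print(set_execution_interval(45.0, t0_sec=0.45))   # -> 2.0 for long sound data
print(fft_start_times(10.0, 2.5))                  # -> [0.0, 2.5, 5.0, 7.5]
```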
  • The sound data analyzing unit 5 repeats executing the FFT based on the execution interval TI. Each time the sound data analyzing unit 5 executes the FFT, the sound data analyzing unit 5 outputs an analysis result to the key judging unit 7.
  • The key judging unit 7 is configured to judge a key of the sound data SD or SD' based on the analysis result outputted from the sound data analyzing unit 5.
  • Specifically, the key judging unit 7 stores reference frequencies corresponding to 24 keys, i.e., a minor key and a major key for each of the 12 musical notes in an octave.
  • The key judging unit 7 sums up, in the time-axis direction, the analysis results inputted at each execution interval TI to provide a total value, selects the reference frequencies close to the frequencies having strong signal intensities based on the obtained total value, and thereby obtains the signal intensity of each musical note as shown in Fig. 7.
  • Next, as shown in Fig. 8, the key judging unit 7 rearranges the signal intensities in descending order, normalizes them, selects some of the musical notes having high signal intensities, and judges the key of the sound data SD or SD'. The key judging unit 7 displays the key judgement result of the sound data SD or SD' as the key display KD on a display of a computer or a screen of a mobile terminal.
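  • A heavily simplified, hypothetical sketch in the spirit of this key judgement: the per-interval spectra are summed over time, the energy is folded onto the 12 pitch classes by nearest reference frequency, and the 24 candidate keys (a major and a minor key per note) are scored by the energy falling on their scale tones. The exact selection and normalization rule of the patent is not reproduced here:

```python
# Hypothetical key judgement sketch (simplified, not the patent's exact procedure).
import numpy as np

NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]          # scale degrees relative to the tonic
MINOR_STEPS = [0, 2, 3, 5, 7, 8, 10]

def pitch_class_profile(summed_spectrum: np.ndarray, freqs_hz: np.ndarray) -> np.ndarray:
    """Fold a summed magnitude spectrum onto 12 pitch classes (A = 0) by nearest note."""
    profile = np.zeros(12)
    valid = freqs_hz > 20.0                                  # ignore DC and sub-audio bins
    semitones_above_a0 = 12.0 * np.log2(freqs_hz[valid] / 27.5)
    classes = np.round(semitones_above_a0).astype(int) % 12
    np.add.at(profile, classes, summed_spectrum[valid])      # accumulate energy per pitch class
    return profile / (profile.max() + 1e-12)                 # normalize

def judge_key(profile: np.ndarray) -> str:
    """Score each of the 24 candidate keys by the energy on its scale tones."""
    best_key, best_score = "", -1.0
    for tonic in range(12):
        for name, steps in (("major", MAJOR_STEPS), ("minor", MINOR_STEPS)):
            score = profile[[(tonic + s) % 12 for s in steps]].sum()
            if score > best_score:
                best_key, best_score = f"{NOTE_NAMES[tonic]} {name}", score
    return best_key

# Example: energy on A, C# and E bins suggests A major (the relative minor ties but loses the tie-break).
freqs = np.fft.rfftfreq(8192, d=1 / 44100)
spectrum = np.zeros_like(freqs)
for f in (220.0, 277.18, 329.63):                            # A3, C#4, E4
    spectrum[np.argmin(np.abs(freqs - f))] = 1.0
print(judge_key(pitch_class_profile(spectrum, freqs)))       # -> "A major"
```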
  • Next, the key judgement of the sound data SD by the music piece analyzer 1 with the above arrangement will be described with reference to a flowchart shown in Fig. 9.
  • First, when a user of the computer or the mobile terminal selects the music piece analyzer 1 on the screen to start the program and selects the sound data SD to be analyzed, the sound data SD is inputted to the music piece analyzer 1 (Step S1).
  • After the sound data SD is inputted, the reproduction time detector 2 detects the reproduction time of the sound data SD (Step S2).
  • The sound data judging unit 3 judges whether the reproduction time of the sound data SD is equal to or more than a predetermined length (Step S3).
  • When the reproduction time of the sound data SD is judged to be less than the predetermined length, the sound data copier 4 copies the sound data SD (Step S4) and pastes the copied data CD to the sound data SD to generate the continuous sound data SD'.
  • When the reproduction time of the initial sound data SD is equal to or more than the predetermined length, or when the reproduction time of the sound data SD' is equal to or more than the predetermined length, the execution interval setting unit 6 sets the execution interval TI at the sound data analyzing unit 5 based on the reproduction time of the sound data SD or SD' (Step S6).
  • The sound data analyzing unit 5 repeats the FFT based on the set execution interval TI to analyze the frequency spectra of the sound data SD or SD' (Step S7).
  • The sound data analyzing unit 5 judges whether the sound data SD or SD' has ended (Step S8). After judging that the sound data SD or SD' has ended, the sound data analyzing unit 5 outputs the analysis result to the key judging unit 7.
  • The key judging unit 7 judges a key of the sound data SD or SD' based on the analysis result (Step S9).
  • The key judging unit 7 displays the key of the sound data SD or SD' as the judgement result on a display of a computer or a screen of a mobile terminal (Step S10).
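  • Tying Steps S1 to S10 together, a hypothetical end-to-end driver is sketched below, assuming the helper functions from the preceding sketches (detect_reproduction_time, extend_by_copying, windowed_spectrum, set_execution_interval, pitch_class_profile, judge_key) are defined in the same module; the frame length and thresholds are illustrative:

```python
# Hypothetical end-to-end sketch of the flow of Fig. 9 (the display in Step S10 is
# replaced here by a return value).
import numpy as np

def analyze_key(sound_data: np.ndarray, fs: float = 44100.0,
                min_length_sec: float = 1.2, frame_len: int = 8192) -> str:
    t = detect_reproduction_time(sound_data, fs)                        # Step S2
    if t < min_length_sec:                                              # Step S3
        sound_data = extend_by_copying(sound_data, fs, min_length_sec)  # Step S4
        t = detect_reproduction_time(sound_data, fs)
    ti_sec = set_execution_interval(t, t0_sec=frame_len / fs)           # Step S6
    hop = max(1, int(ti_sec * fs))
    freqs = np.fft.rfftfreq(frame_len, d=1 / fs)
    summed = np.zeros(freqs.shape[0])
    for start in range(0, sound_data.shape[0] - frame_len + 1, hop):    # Steps S7 and S8
        spectrum, _ = windowed_spectrum(sound_data[start:start + frame_len])
        summed += spectrum
    return judge_key(pitch_class_profile(summed, freqs))                # Step S9
```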
  • The exemplary embodiment provides the following advantages.
  • Since the music piece analyzer 1 includes the sound data copier 4, even very short sound data SD can be transformed by copying into sound data SD' having a reproduction time equal to or more than the predetermined length. Accordingly, irrespective of the reproduction time of the sound data SD, the sound data analyzing unit 5 can execute the FFT to analyze the frequency spectra, so that the key of the sound data SD or SD' can be judged.
  • With this arrangement, various sound data SD used with DJ-related devices can be handled irrespective of the length of their reproduction time, so that a high DJ performance can be provided.
  • Since the music piece analyzer 1 includes the reproduction time detector 2 and the execution interval setting unit 6, the execution interval TI of the FFT executed by the sound data analyzing unit 5 can be changed depending on the reproduction time of the sound data SD. Accordingly, when the reproduction time of the sound data SD is short, the analysis accuracy of the sound data SD can be improved by decreasing the execution interval TI and increasing the number of FFT executions.
  • On the other hand, when the reproduction time of the sound data SD is long, the analysis time of the sound data SD can be shortened by prolonging the execution interval TI and decreasing the number of FFT executions over the reproduction time of the sound data SD. Although long sound data SD tends to be analyzed more coarsely since the number of FFT executions is relatively small, this number of FFT executions is sufficient for the key judgement and the like, so that a favorable result can be obtained without any problem.
  • The invention is by no means limited to the above exemplary embodiment, but includes the following modification(s).
  • Although the music piece analyzer 1 of the above exemplary embodiment judges the key of the sound data SD, the invention is not limited to the music piece analyzer 1 for the key judgement. The music piece analyzer 1 may be used for judging a key and a scale.
  • Although the execution interval setting unit 6 of the above exemplary embodiment sets the execution interval TI on the basis of the reproduction time of the sound data SD or SD', the execution interval TI in the invention is not necessarily set on the basis of the reproduction time. The execution interval may be set on the basis of a data length of the inputted sound data.
  • Any other arrangements compatible with the invention may be applied.
  • EXPLANATION OF CODE(S)
  • 1...music piece analyzer, 2...reproduction time detector, 3...sound data judging unit, 4...sound data copier, 5...sound data analyzing unit, 6...execution interval setting unit, 7...key judging unit, CD...copied data, HMW...hamming window, KD...key display, S1...Step, S2...Step, S3...Step, S4...Step, S6...Step, S7...Step, S8...Step, S9...Step, S10...Step, SD...sound data, T0...analysis period, t1...reproduction time, t2...reproduction time, TI...execution interval

Claims (6)

  1. A music piece analyzer (1) comprising:
    a reproduction time detector (2) configured to detect a reproduction time of an inputted sound data;
    an execution interval setting unit (6) configured to set an execution interval TI of Fast Fourier Transform (FFT) depending on the reproduction time detected by the reproduction time detector (2),
    characterized by
    a sound data analyzing unit (5) configured to execute the FFT at the execution interval TI set by the execution interval setting unit (6) to analyze the inputted sound data.
  2. A music piece analyzer (1) comprising:
    a data length detector configured to detect a data length of an inputted sound data;
    an execution interval setting unit (6) configured to set an execution interval TI of Fast Fourier Transform (FFT) depending on the data length detected by the data length detector,
    characterized by
    a sound data analyzing unit (5) configured to execute the FFT at the execution interval TI set by the execution interval setting unit (6) to analyze the inputted sound data.
  3. The music piece analyzer (1) according to claim 1 or 2, wherein
    the execution interval setting unit (6) is configured:
    to prolong the set execution interval of the FFT when the reproduction time or the data length of the inputted sound data is longer than a predetermined reproduction time or a predetermined data length, and
    to shorten the set execution interval of the FFT when the reproduction time or the data length of the inputted sound data is shorter than a predetermined reproduction time or a predetermined data length.
  4. The music piece analyzer (1) according to claim 1 or 2, wherein
    the music piece analyzer (1) further comprises a sound data copier (4) configured to copy the inputted sound data until the reproduction time or the data length of the inputted sound data is equal to or longer than a reproduction time or a data length analyzable by the FFT when the reproduction time or the data length of the inputted sound data is shorter than the reproduction time or the data length analyzable by the FFT, and
    the sound data analyzing unit (5) analyzes the copied sound data.
  5. A music piece analysis method comprising:
    detecting a reproduction time of an inputted sound data;
    setting an execution interval TI of Fast Fourier Transform (FFT) depending on the detected reproduction time; and
    executing the FFT at the set execution interval TI to analyze the inputted sound data.
  6. A music piece analysis program to be run on a computer, the program comprising:
    detecting a reproduction time of an inputted sound data;
    setting an execution interval TI of Fast Fourier Transform (FFT) depending on the detected reproduction time; and
    executing the FFT at the set execution interval TI to analyze the inputted sound data.
EP15903039.4A 2015-09-03 2015-09-03 Musical-piece analysis device, musical-piece analysis method, and musical-piece analysis program Active EP3346468B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/075077 WO2017037920A1 (en) 2015-09-03 2015-09-03 Musical-piece analysis device, musical-piece analysis method, and musical-piece analysis program

Publications (3)

Publication Number Publication Date
EP3346468A1 EP3346468A1 (en) 2018-07-11
EP3346468A4 EP3346468A4 (en) 2019-04-24
EP3346468B1 true EP3346468B1 (en) 2021-11-03

Family

ID=58186787

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15903039.4A Active EP3346468B1 (en) 2015-09-03 2015-09-03 Musical-piece analysis device, musical-piece analysis method, and musical-piece analysis program

Country Status (3)

Country Link
EP (1) EP3346468B1 (en)
JP (1) JP6549234B2 (en)
WO (1) WO2017037920A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3741106B2 (en) * 2003-02-21 Yamaha Corporation Musical sound waveform analysis method and musical sound waveform analysis synthesis method
JP4767691B2 (en) * 2005-07-19 Kawai Musical Instruments Mfg. Co., Ltd. Tempo detection device, chord name detection device, and program
JP4823804B2 (en) * 2006-08-09 Kawai Musical Instruments Mfg. Co., Ltd. Chord name detection device and chord name detection program
JP2013235050A (en) * 2012-05-07 2013-11-21 Sony Corp Information processing apparatus and method, and program

Also Published As

Publication number Publication date
EP3346468A1 (en) 2018-07-11
EP3346468A4 (en) 2019-04-24
WO2017037920A1 (en) 2017-03-09
JP6549234B2 (en) 2019-07-24
JPWO2017037920A1 (en) 2018-06-14

Similar Documents

Publication Publication Date Title
RU2743315C1 (en) Method of music classification and a method of detecting music beat parts, a data medium and a computer device
Peeters et al. The timbre toolbox: Extracting audio descriptors from musical signals
Davies et al. Context-dependent beat tracking of musical audio
US10497348B2 (en) Evaluation device and evaluation method
CN107210029B (en) Method and apparatus for processing a series of signals for polyphonic note recognition
Brossier et al. Fast labelling of notes in music signals.
CN101194304B (en) Sound signal processing device capable of identifying sound generating period and sound signal processing method
JP2015079151A (en) Music discrimination device, discrimination method of music discrimination device, and program
EP3346468B1 (en) Musical-piece analysis device, musical-piece analysis method, and musical-piece analysis program
JP5035815B2 (en) Frequency measuring device
Rosenzweig et al. libf0: A Python library for fundamental frequency estimation
JP2008065153A (en) Musical piece structure analyzing method, program and device
US20220215818A1 (en) Musical analysis device, musical analysis method, and non-transitory computer-readable medium
WO2017037919A1 (en) Musical-piece analysis device, musical-piece analysis method, and musical-piece analysis program
Turchet Hard real-time onset detection of percussive sounds.
JP2015200685A (en) Attack position detection program and attack position detection device
JP4242281B2 (en) Method for characterizing a timbre of an acoustic signal based on at least one descriptor
Thirumuru et al. Improved vowel region detection from a continuous speech using post processing of vowel onset points and vowel end-points
CN112908289B (en) Beat determining method, device, equipment and storage medium
JP6946442B2 (en) Music analysis device and music analysis program
JPS5876891A (en) Voice pitch extraction
Devaney et al. Score-informed estimation of performance parameters from polyphonic audio using ampact
Schutz et al. Periodic signal modeling for the octave problem in music transcription
Reyes et al. New algorithm based on spectral distance maximization to deal with the overlapping partial problem in note–event detection
Bhaduri et al. A novel method for tempo detection of INDIC Tala-s

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20180307

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20190321

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 25/48 20130101ALI20190315BHEP

Ipc: G10L 25/45 20130101AFI20190315BHEP

Ipc: G10L 25/18 20130101ALI20190315BHEP

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: ALPHATHETA CORPORATION

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 25/45 20130101AFI20210506BHEP

Ipc: G10L 25/48 20130101ALI20210506BHEP

Ipc: G10L 25/18 20130101ALI20210506BHEP

INTG Intention to grant announced

Effective date: 20210528

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

REG Reference to a national code

Ref country code: DE

Ref legal event code: R084

Ref document number: 602015074792

Country of ref document: DE

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1444695

Country of ref document: AT

Kind code of ref document: T

Effective date: 20211115

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602015074792

Country of ref document: DE

REG Reference to a national code

Ref country code: GB

Ref legal event code: 746

Effective date: 20211110

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20211103

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1444695

Country of ref document: AT

Kind code of ref document: T

Effective date: 20211103

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220203

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220303

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220303

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220203

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220204

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602015074792

Country of ref document: DE

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20220804

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20220930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20220903

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20220930

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20220903

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20220930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20220930

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20150903

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20211103

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240730

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20240801

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20240808

Year of fee payment: 10