WO1996004642A1 - Timbral apparatus and method for musical sounds - Google Patents
- Publication number
- WO1996004642A1 (PCT/US1995/009619)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- musical instrument
- responsive
- sound
- signal
- pitch
- Prior art date
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H3/00—Instruments in which the tones are generated by electromechanical means
- G10H3/12—Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument
- G10H3/14—Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument using mechanically actuated vibrators with pick-up means
- G10H3/18—Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument using mechanically actuated vibrators with pick-up means using a string, e.g. electric guitar
- G10H3/186—Means for processing the signal picked up from the strings
- G10H3/188—Means for processing the signal picked up from the strings for converting the signal to digital format
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H1/00—Details of electrophonic musical instruments
- G10H1/02—Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos
- G10H1/06—Circuits for establishing the harmonic content of tones, or other arrangements for changing the tone colour
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H3/00—Instruments in which the tones are generated by electromechanical means
- G10H3/12—Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussive instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument
- G10H3/125—Extracting or recognising the pitch or fundamental frequency of the picked up signal
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
- G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
- G10H2210/066—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for pitch analysis as part of wider processing for musical purposes, e.g. transcription, musical performance evaluation; Pitch recognition, e.g. in polyphonic sounds; Estimation or use of missing fundamental
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2250/00—Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
- G10H2250/131—Mathematical functions for musical analysis, processing, synthesis or composition
- G10H2250/215—Transforms, i.e. mathematical transforms into domains appropriate for musical signal processing, coding or compression
- G10H2250/235—Fourier transform; Discrete Fourier Transform [DFT]; Fast Fourier Transform [FFT]
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2250/00—Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
- G10H2250/541—Details of musical waveform synthesis, i.e. audio waveshape processing from individual wavetable samples, independently of their origin or of the sound they represent
- G10H2250/631—Waveform resampling, i.e. sample rate conversion or sample depth conversion
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10H—ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
- G10H2250/00—Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
- G10H2250/541—Details of musical waveform synthesis, i.e. audio waveshape processing from individual wavetable samples, independently of their origin or of the sound they represent
- G10H2250/641—Waveform sampler, i.e. music samplers; Sampled music loop processing, wherein a loop is a sample of a performance that has been edited to repeat seamlessly without clicks or artifacts
Definitions
- This invention relates to a musical instrument responsive controller wherein control signals are generated not only in response to pitch and amplitude of a sound from the instrument, but also in response to timbre of the sound. As a result, continuous control is obtained for each note and during each note.
- This has particular application in, but is not limited to, a musical sound synthesis apparatus and method.
- Electronic musical sound generating devices (referred to herein as synthesizers) have become very important in live musical performance and studio production. A significant advantage is that they provide a wide palette of sounds with precise control of rhythm and pitch.
- the predominant musical instrument used to control a synthesizer has been the piano or organ keyboard. Beside the widespread availability of pianos and organs and the people who play them, the very nature of the keyboard makes it a good choice from an implementation point of view.
- prior synthesizers have several disadvantages. These stem principally from the keyboard model on which most electronic instruments are based.
- the keyboard has severe limitations as a musical interface and control device because control of musical events is limited basically to timing of attack and release, and velocity of attack (some keyboards also support aftertouch) .
- Keyboard responsive synthesizers decouple the player from the sound generating element. The strike of a finger on a key starts a chain of events that produces a sound. After a key is struck, however, the greatest creative choice left to the musician is when to release it. This series of key closures and releases is the simplest form of information that can be used to control a synthesizer.
- MIDI protocol is used for controlling, recording, and communicating performances of electronic musical instruments.
- MIDI protocol has two chief problems which relate to its origin as a way of communicating keyboard data: relatively slow speed of communication (also referred to as bandwidth) , which can result in noticeable delays in musical events; and lack of a flexible and powerful way to specify modifications to ongoing notes.
- keyboard driven synthesizers can be well served by the MIDI protocol.
- the speed of the MIDI protocol (31.25 Kbaud) is adequate for transmitting the event-based nature of a keyboard.
- a ten note chord can be sent in 6.7 ms, which is on the borderline of being imperceptible.
- the continuous controller information generated from external devices is usually no more than 3 channels (pitch bend, modulation, and aftertouch) , keeping the bit count low.
- the MIDI protocol also represents data in a manner that assumes the controller is a keyboard or at least a percussive device.
- the MIDI protocol note-on command is an indivisible integration of timing, pitch, and loudness (velocity) information. This is appropriate for a keyboard because each key is associated with a particular pitch and each key strike occurs with a velocity and starts a sound of the particular pitch. Therefore, every time a key is struck, pitch, velocity, and timing information is available. Every modification of one of these three values is accompanied by a change or at least a reassertion of the other two.
- MIDI protocol does not preclude processing of the audio signal out of the synthesizer.
- Several synthesizers have individual outputs per voice. These can be mapped to a specific string from the controller and modulated in the analog domain based on information extracted from the string.
- One of the most satisfying examples of this is a Zeta Mirror 6 fret-scanning guitar controller connected to a Yamaha TX-802 FM synthesizer operating in legato mode.
- Each of six outputs from the TX-802 synthesizer is routed back to the Mirror 6 guitar where it passes through a respective voltage controlled amplifier (VCA) .
- Each VCA is then controlled by an envelope follower tracking the energy of one of the strings.
- the six VCA outputs are summed and fed to an amp.
- MIDI synthesizers respond to legato style commands.
- Pitch bend can vary pitch up and down to one octave but with a resolution of only 5.1 divisions per semitone (19.6 cents).
- MIDI protocol pitch bend is per-channel, not per-note, so one cannot just plug a MIDI guitar into a synthesizer and have pitch-bend work correctly. Instead, bending any of the notes on a guitar will cause all of the synthesizer pitches to bend by the same amount. If the synthesizer is polyphonic, however, one can have each string of the guitar control a separate monophonic voice on its own MIDI protocol channel, which would allow separate pitch bend information for each string. Otherwise, there is no way to control continuous variation in pitch polyphonically.
- the exact pick position affects the ratio of odd to even harmonic components.
- When the string is touched lightly at its midpoint and plucked, the effect is to produce a "harmonic," in which only the even-numbered harmonics of the tone are allowed to sound. If the string is plucked at that same midpoint, only odd-numbered harmonics are produced. The guitarist can control this ratio in an expressive way that is lost to MIDI guitars.
- the guitarist can control the noisiness or distortion of the signal. Depending on picking or fingering style, the guitarist can control the amount of noise in the attack of a particular note. Also, by muting the strings and "scratching," the guitarist can create a sound that is nothing but noise, which can be useful in a percussive way. This, too, is lost to MIDI guitars.
- the present invention meets the foregoing need in that it provides a novel and improved musical instrument responsive controller and method that generate control signals not only in response to pitch and amplitude, but also in response to timbre of an actual musical sound produced by a musician playing a controlling musical instrument.
- These control signals are typically used to generate a musical sound having these three characteristics controlled by the respective control signals; however, these control signals can be transformed or mapped to control different sound characteristics and/or to control effects other than sound (e.g., lighting).
- the present invention achieves pitch, amplitude and timbre control merely from a controlling musical instrument without the need for external devices such as wheels, knobs and pedals.
- This control can come from any sound source providing sound with pitch, amplitude and timbre characteristics that can be sensed and input into a digital or analog system embodying the present invention. Furthermore, this control is responsive to each note played on the controlling device and this control can affect a respectively generated note produced by the invention in its sound generating embodiment. That is, each note played by a musician on the controlling instrument produces its own respective set of pitch, amplitude and timbre control signals.
- the present invention can do everything current MIDI guitars can do, and more. It has fret scanning like current Zeta guitars. It does pitch and amplitude tracking on six channels.
- the present invention also analyzes the three timbre parameters of brightness, even/odd harmonic balance, and pitched/unpitched balance (a noise or distortion parameter) in real time.
- real time means that the parameters are updated about once every 10ms. So as the controlling musical instrument is played, the present invention tracks the following five parameters continuously in real time on each string of the guitar: pitch, amplitude, brightness, even/odd harmonic balance, and pitched/unpitched balance.
- Although not preferred, this continuous control can be accomplished (at least to some degree) using MIDI protocol.
- the present invention can send out the same sort of MIDI protocol information that a Zeta Mirror-6 guitar sends out, plus additional controller information for brightness, even/odd balance, and pitched/unpitched balance.
- synthesizer programming on certain kinds of synthesizers, it would be possible to construct synthesizer patches that respond to these parameters.
- a superior alternative to using MIDI protocol is to use ZIPI™ protocol, a new protocol from Zeta Music Partners, a partnership of Zeta Music Systems, Inc. and Gibson Ventures, Inc. This has a much higher bandwidth, no keyboard biases towards notes having a single overall pitch and volume, and dedicated real-time continuous control parameters for brightness, even/odd balance, and pitched/unpitched balance.
- the present invention can send all of the results of its analysis via ZIPI™ protocol.
- the present invention also has an on-board sample playback engine designed to respond to the five parameters.
- the present invention provides a musical instrument responsive controller. This controller comprises means for receiving a sound signal from a musical instrument, wherein the sound signal is a signal responsive to sound produced from the musical instrument in response to a musician's manipulation thereof.
- the controller further comprises means, responsive to a received sound signal, for generating at least three separate control signals wherein one of the control signals is responsive to a pitch of the received sound signal, another of the control signals is responsive to an amplitude of the received sound signal, and a further control signal is responsive to a timbre characteristic of the received sound signal.
- the controller preferably further comprises means for receiving a manipulation signal from the musical instrument, wherein the manipulation signal is a signal responsive to at least one type of a musician's manipulation of the musical instrument.
- This manipulation signal is usually related to the approximate pitch that the instrument will produce when played.
- the means for generating is also responsive to a received manipulation signal.
- the musical instrument responsive controller also preferably further comprises means, connected to the means for generating, for generating a synthesized musical sound having a different voice from the sound produced by the musical instrument but having pitch, amplitude and timbre of the voice responsive to the control signals.
- the controller also preferably comprises a non-keyboard musical instrument defining the musical instrument, wherein the non-keyboard musical instrument is connected to the means for receiving.
- the present invention also provides a musical instrument responsive control method.
- This method comprises receiving a sound signal from a musical instrument, wherein the sound signal is a signal responsive to sound produced from the musical instrument in response to a musician's manipulation thereof. It also comprises generating at least three separate control signals wherein one of the control signals is responsive to a pitch of the received sound signal, another of the control signals is responsive to an amplitude of the received sound signal, and a further control signal is responsive to a timbre characteristic of the received sound signal.
- This method can produce a non-musical sound effect in response to at least one of the control signals.
- the method can also or alternatively produce a musical sound in response to the control signals.
- the present invention still further provides a musical sound synthesis method, comprising: (a) detecting a pitch selecting manipulation of a musical instrument; (b) receiving an electrical signal representing a sound generated from the musical instrument in response to a detected pitch selecting manipulation; (c) performing a frequency analysis of the electrical signal to determine frequencies present in the sound; (d) performing, responsive to a detected pitch selecting manipulation of the musical instrument, a time analysis of the electrical signal to determine a fundamental frequency of the electrical signal; (e) determining from the frequency analysis and the determined fundamental frequency how much energy is in harmonics of the fundamental frequency present in the frequency analysis of the sound signal; and (f) generating timbre control signals in response to said step (e) .
- FIG. 1 is a schematic and block diagram of the musical instrument responsive controller, in its preferred embodiment as a musical sound synthesis controller, of the present invention.
- FIG. 2 is a more detailed schematic and block diagram of the embodiment of FIG. 1.
- FIG. 3 is a block diagram showing various signal flows among the four processing units shown in FIG. 2.
- FIGS. 4-25 are schematic circuit diagrams showing a particular implementation of the preferred embodiment of FIGS. 1-3.
Detailed Description of Preferred Embodiments
Overview
- the present invention provides to a musician 2 (FIG. 1) intimate real-time control of synthesized sound, and it also provides a wider palette of sounds and sound-modification algorithms.
- the preferred embodiment of the present invention will be described with reference to a second-generation instrument 4 based on an electric guitar.
- the basic aim of the preferred embodiment is to enable the use of the electric guitar as the controlling instrument in such a way that the skilled guitarist can employ the large repertoire of instrumental techniques he or she has laboriously developed to control the synthesis and processing of musical sounds.
- Other instruments, including violin, woodwinds, voice, and drums, can also be fitted with electronic pickups and used as inputs to this new invention, giving a similar range of expressive power to skilled performers on these instruments.
- the illustrated preferred embodiment of the present invention employs several high-speed digital processors, including CISC, RISC, and DSP technology. These processors perform all functions for analyzing and synthesizing waveforms, for user interface, and for communications, allowing functions of the device to be upgraded and customized in the field by loading new firmware.
- these processors perform signal analysis to determine various parameters from input signals received from the guitar 4. These processors also map these parameters into respective control signals. These functions are generally identified in FIG. 1 by the reference numeral 6.
- control signals can be used to control any desired effect, whether sound or non-sound (e.g., lighting, smoke effects, slides, motion around a stage or platforms that musicians are standing on, videographics, and music notation (i.e., storing and printing of the control information in musical form)).
- the effect controlled is musical sound; therefore, FIG. 1 further shows that the preferred embodiment also includes a musical sound synthesis function 8 performed by the processors.
- the four processing units shown in FIG. 2 are a time domain analysis processor 10, a mapping processor 12, a frequency domain analysis processor 14, and a synthesis processor 16.
- the invention performs a time domain analysis of the waveform of each individual string of the guitar 4.
- Each waveform is obtained in response to a conventional pickup (e.g., piezoelectric or magnetic) adapted to sense the respective sounds from individual strings.
- Such pickups are available from Zeta Music and others.
- the time periods are measured between successive inflection points of the waveforms and a variety of heuristics (techniques understood in the art as useful in generating sound parameters based on recent control parameters such as to prevent unintended anomalous sounds from being produced) are applied to these raw measurements. These heuristics are aided by knowledge (via sensors on the fret board in accordance with the sensing system disclosed in United States Patent No.
- the time domain analysis is performed with the time domain analysis processor 10, which is implemented in the preferred embodiment with a Motorola MC68332 (see U114 in FIG. 14) .
- This processor combines on one chip a 68020-type central processing unit (CPU) core with a specialized time processor unit (TPU) .
- the TPU contains several high speed counters that can be used to count external events, measure time periods, and generate programmable patterns of events.
- the TPU rapidly performs a time domain analysis of the waveform of each individual string to determine the pitch. This is specifically done by converting each sensed fret position of the musician's hand to a respective period value corresponding to the pitch that the respective string would have for that particular fret position.
- This conversion is done in the illustrated embodiment by using a look-up table relating sensed fret position with period, which table is stored in memory of the processor 10.
- a particular retrieved value is compared with the sum of partial period counts received from the TPU. Typically multiple occurrences of the same period count would be needed to confirm a pitch value if only a count were used.
- the fret position value speeds this up by a cycle or more as the first parts of a waveform of a sound produced from the controlling instrument 4 are unstable.
- sensing the fret position which is selected by the musician just prior to striking the string, allows the processor 10 to anticipate the pitch and establish a value from the look-up table, against which value a running sum of the periods from inflection point to inflection point of the waveform for the actual sound from the guitar 4 is compared. If a sum of these partials is detected as equaling the look-up table value, the pitch is confirmed. This can occur at the end of the first stable period of the waveform.
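The fret-anticipated pitch confirmation just described lends itself to a compact illustration. The sketch below is a hypothetical Python rendering of that logic, not the firmware of the preferred embodiment; the function name, the tolerance value, and the restart heuristic are assumptions.

```python
# Hypothetical sketch of fret-assisted pitch confirmation; names and the
# tolerance are illustrative, not taken from the patent.

def confirm_pitch(fret_period, inflection_intervals, tolerance=0.01):
    """Accumulate intervals between successive waveform inflection points
    and report a confirmed pitch as soon as their running sum matches the
    period predicted by the sensed fret position.

    fret_period          -- period (seconds) looked up from the fret table
    inflection_intervals -- iterable of measured inflection-to-inflection times
    """
    running_sum = 0.0
    for interval in inflection_intervals:
        running_sum += interval
        # The sum of partial periods equals one full period once a stable
        # cycle has elapsed; allow a small tolerance for measurement jitter.
        if abs(running_sum - fret_period) <= tolerance * fret_period:
            return 1.0 / fret_period   # pitch confirmed, in hertz
        if running_sum > fret_period * (1.0 + tolerance):
            running_sum = 0.0          # overshoot: restart on the next cycle
    return None                        # not yet confirmed
```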
- the main function of the processor 10 is to do pitch and amplitude tracking of the six audio inputs received in the processor 10 through analog-to-digital converters (see U136 in FIG. 19) . This runs with a delay of less than ten microseconds, and updates once per inflection point.
- the processor 10 also handles communications with MIDI protocol that can connect to external devices and systems 18.
- the processor 10 is also connected to a front-panel user interface 20, which includes in the preferred embodiment a 24 column X 2 row display and a number of push buttons and rotary encoders.
Mapping processor 12
- the mapping processor 12 integrates control information between the analysis engines (the time domain analysis processor 10 and the subsequently described frequency domain analysis processor 14) and the synthesis processor 16.
- the processor 12 is implemented in the preferred embodiment with another Motorola MC68332 (see U59 in FIG. 4).
- the processor 12 is also in charge of booting the processors 14, 16 since their preferred embodiment implementations have intricate booting requirements.
- the processor 12 is also in charge of producing envelopes to provide continuous values for sample playback via the synthesis processor 16.
- the loop points of the samples, in the steady state, are such that the amplitude is constant as long as the note is held.
- the mapping processor 12 can shape amplitude with a pre-stored envelope, which is stored as part of a preset.
- the processor 12 is in charge of adding vibrato to a sound, in cases where the musician wants vibrato added automatically.
- the mapping processor 12 can also transform incoming control data. For example, the processor can perform compression, allowing the synthesized sounds to sustain longer than the sound of a guitar string.
- Another transformation is the opposite of compression: heightening the effect of a parameter. For example, it might be very effective to have the analyzed brightness of the guitar string have a heightened effect on the brightness of the sample playback.
- the processor 12 can apply the analysis results from the analysis processors 10, 14 to the synthesis processor 16 to obtain literal control of the sound produced by the processor 16.
- the mapping processor 12 can, however, change the character of the analysis results in either the amplitude or time domain. Rapid changes can be slowed; small change values can be multiplied.
- Values can be shaped by the use of internally stored look-up tables or by using an envelope generator (a function generator that has adjustable rates for each segment: attack, decay, sustain, release) . Such mapping allows greater variety of internal control over the synthesized sound.
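As an illustration of the kinds of transformations described above, the following Python sketch combines a slew limit (slowing rapid changes), a gain on small changes, and an optional look-up table. The function and its parameter values are illustrative assumptions and do not appear in the patent.

```python
# Illustrative mapping transformation: slew limiting, gain scaling, and
# table lookup. Inputs are assumed normalized to the 0..1 range.

def map_parameter(value, previous, max_step=0.05, gain=2.0, table=None):
    """Transform one analysis value into a control value.

    max_step -- limits how fast the output may change (slows rapid changes)
    gain     -- multiplies deviations from the previous value (heightens
                the effect of small changes)
    table    -- optional look-up table of 0..1 shaping values
    """
    delta = (value - previous) * gain
    delta = max(-max_step, min(max_step, delta))   # slew limit
    shaped = previous + delta
    if table is not None:
        index = min(len(table) - 1, max(0, int(shaped * (len(table) - 1))))
        shaped = table[index]                      # table-based shaping
    return shaped
```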
- the processor 12 also handles most of the burden of the ZIPI™ protocol. This includes removing messages that have not yet been sent and have become superseded, and filtering the outgoing timbral updates to remove duplicates.
- A particular implementation of mapping control functions is illustrated in FIG. 3.
- Various signals are also labeled in FIG. 3.
- These include the following signals to and from the mapping processor 12:
- from time domain analysis processor 10 to mapping processor 12:
  1. string_pitches: a. note - MIDI number (7 bits), MIDI fraction (8 bits)
  2. string_periods: timer counts (16 bits)
  3. string_frets (8 bits)
  4. string_events: a. string_note_on; b. string_note_off; c. string_repluck; d. string_trill
  5. front panel controls: a. instrument/voice selections; b. LFOs; c. envelopes; d. master volume; e. master pan
  6. external ZIPI controls: a. external front panel controls; b. external analysis (i.e., string_odd, string_even)
- from mapping processor 12 to time domain analysis processor 10:
  1. string_filter controls, every 8 msec: a. string_filter_coefficients (16/24 bits); b. string_gains (16/24 bits)
- from frequency domain analysis processor 14 to mapping processor 12:
  1. string_analysis: a. string_odd (16/24 bits); b. string_even (16/24 bits); c. string_noise (16/24 bits); d. string_centroid (16/24 bits)
- PCMCIA data: a. number of instruments; b. for each instrument: 1. name; 2. number of sound files
- the preferred embodiment of the present invention further includes the digital processor 14, which performs a spectral analysis of the sound signal from each string of the guitar 4. From this analysis, parameters reflecting the timbre of the string signal are determined. Spectral tilt is determined as a weighted ratio between high- and low-frequency components. The processor 14 also measures the proportion of three different components of the string signal: odd harmonics; even harmonics; and non-harmonic vibrations (i.e., components of the sound not resulting from the vibration of the string). These kinds of information are computed in real time (approximately every eight milliseconds in the preferred embodiment) and combined according to the user-specified program or "patch." The results are used as control information for the synthesis engine.
- the frequency domain analysis processor 14 of the preferred embodiment includes a Motorola DSP56002 (see U44 in FIG. 13) . It is directly attached to a high speed 16-bit analog-to-digital converter (see U60 in FIG. 23) which is fed with the sound signals from all six strings of the guitar 4, which sound signals are time-multiplexed in the analog domain (FIG. 24) .
- the processor 14 uses its high-speed calculation capabilities to analyze separately the signals from each string. First, a spectral analysis is performed using fast Fourier transform (FFT) or equivalent techniques.
- the FFT can be calculated in known manner without knowing the fundamental frequency of the string sound being analyzed.
- the fundamental frequency is, however, used in the frequency domain analysis processor 14 to correlate the FFT results with respect to the harmonics or partials of the string sound.
- a suitable FFT window or bin size (e.g., 20 hertz bins) is selected to support this correlation.
- the frequency domain analysis processor 14 has two main tasks.
- One task is to calculate the frequency centroid of each signal, which is essentially an energy-weighted average of the frequency components of a signal. This measures the amount of high-frequency energy in the signal, and is a very good indicator of perceived brightness. This is calculated by summing the products obtained by multiplying the energy of each harmonic by the respective harmonic frequency present in the spectrum, and dividing the sum by the total energy of all the frequencies in the spectrum.
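The centroid calculation described above reduces to a few lines. The sketch below assumes FFT magnitudes for one string (from an even-length real FFT frame) are already available; the function name and the use of NumPy are illustrative, not from the patent.

```python
# Minimal spectral-centroid computation matching the description above:
# an energy-weighted average of the frequency components.

import numpy as np

def spectral_centroid(magnitudes, sample_rate):
    """Return the energy-weighted mean frequency (a brightness proxy)."""
    energies = np.asarray(magnitudes, dtype=float) ** 2
    # Frequencies of the rfft bins, assuming an even-length analysis frame.
    freqs = np.fft.rfftfreq(2 * (len(energies) - 1), d=1.0 / sample_rate)
    total = energies.sum()
    return float((freqs * energies).sum() / total) if total > 0 else 0.0
```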
- The other task is to divide the frequency spectrum into three categories: energy that is close in frequency to an even multiple of the fundamental (i.e., even harmonic energy), energy that is close to an odd multiple (i.e., odd harmonic energy), and other energy (i.e., noise).
- the even/odd determination is made by dividing the sum of energies for the respective even/odd harmonics in the spectrum by the total spectrum energy.
- the noise determination is made by adding together the energy in the frequency bands not related to the fundamental frequency and its harmonics and dividing that sum by the total energy of the signal; the result is referred to as the pitched/unpitched balance.
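The even/odd/noise split can be sketched the same way. In the following hypothetical Python example, bins within a tolerance of an even or odd multiple of the fundamental count as even or odd harmonic energy and everything else as noise; the tolerance `width` is an assumption, since the patent does not specify how closeness to a harmonic is judged.

```python
# Hedged sketch of the even/odd/noise energy split described above.

import numpy as np

def harmonic_balance(magnitudes, sample_rate, f0, width=10.0):
    """Return (even, odd, noise) energy fractions for one string signal."""
    energies = np.asarray(magnitudes, dtype=float) ** 2
    freqs = np.fft.rfftfreq(2 * (len(energies) - 1), d=1.0 / sample_rate)
    harmonic = np.rint(freqs / f0)                  # nearest harmonic index
    near = np.abs(freqs - harmonic * f0) <= width   # close to a harmonic?
    even = near & (harmonic % 2 == 0) & (harmonic > 0)
    odd = near & (harmonic % 2 == 1)
    total = energies.sum()
    if total == 0:
        return 0.0, 0.0, 0.0
    even_e, odd_e = energies[even].sum(), energies[odd].sum()
    noise_e = total - even_e - odd_e                # everything unrelated to f0
    return even_e / total, odd_e / total, noise_e / total
```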
- the frequency domain analysis processor 14 passes the results of these analyses, namely the respective four parameters of brightness, even harmonic, odd harmonic, and noise (pitched/unpitched balance) for each of the six channels corresponding to the six strings of the guitar 4, to the mapping processor 12. The mapping processor 12 then sends them to the synthesis processor 16 to control the sample playback, and to the time domain analysis processor 10 to send them externally. Typically some mapping, or transforming, of the analysis signals will occur in the mapping processor 12 to create the control signals sent to at least the synthesis processor 16.
- the frequency domain analysis processor 14 can also pass these six channels of digital audio directly to the synthesis processor 16 with a minimum of overhead. This allows the processor 14 to do spectral analysis, while the synthesis processor 16 does spectrum-synchronous hex (i.e., six, for the six-string guitar 4) effects based on the results from the processor 14.
Synthesis processor 16
- The digital synthesis processor 16, which in the preferred embodiment includes a Motorola DSP56002 (see U43 in FIG. 11), is used for synthesis of output waveforms. Sample files representing different instrument voices are accessed from built-in ROM or from a PCMCIA card which can be inserted through an opening in the front panel 20.
- By default, the synthesized pitch would be the actual instantaneous pitch of the guitar string; however, one major advantage of the present invention is that the pitch can be transposed (in the mapping processor 12) by a fixed tonal interval or otherwise modified for musical effects.
Basic sound generation by the processor 16
- In general, the present invention uses denatured samples in sets of five, so it must play five samples at a time, times six voices of polyphony (i.e., one per string of the guitar 4). The musician can select which five sounds make up a voice, and each of the six strings can play a separate voice. So the processor 16 could conceivably have to play thirty different samples back at the same time.
- the relative volumes of all five components of a voice can change in real time, in response to analysis information from the frequency domain analysis processor 14 or from external input source 18.
- the pitch of a voice can change continuously, but each of the five samples within a voice must be pitch shifted by the same amount.
- Each of the thirty samples also has a second-order filter, as another way to continuously control brightness.
- Each of the six voices has a fourth-order filter shaping the sum of the five samples.
- the synthesis processor 16 also mixes together these thirty digital signals, and it can add digital audio coming from effects processing. This resultant stereo signal goes to two digital audio channels, which connect to 16-bit digital-to-analog converters (DACs) (U66 in FIG. 23).
- Each of the six voices is individually pannable, but the five samples within a voice are not. There are only stereo outputs, since there are only two DACs. More particularly, the sample playback engine implemented by the synthesis processor 16 responds to the five analysis parameters in real time.
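The per-voice structure described above (five gained samples, each through its own second-order filter, summed into a fourth-order voice filter, then panned to the stereo bus) can be sketched as follows. This is a structural illustration, not the DSP56002 firmware; the Butterworth filter designs, the linear pan law, and all names are assumptions.

```python
# Structural sketch of one voice's signal chain, under assumed filter types.

import numpy as np
from scipy.signal import butter, lfilter

def render_voice(samples, gains, cutoffs, voice_cutoff, pan, sr=44100):
    """samples: list of five equal-length 1-D arrays
    (even/odd x bright/dull plus noise); pan: 0 = left, 1 = right."""
    mix = np.zeros_like(samples[0], dtype=float)
    for sig, g, fc in zip(samples, gains, cutoffs):
        b, a = butter(2, fc / (sr / 2))        # per-sample 2nd-order filter
        mix += g * lfilter(b, a, sig)
    b, a = butter(4, voice_cutoff / (sr / 2))  # per-voice 4th-order filter
    mix = lfilter(b, a, mix)
    return np.stack([(1 - pan) * mix, pan * mix])  # stereo pair
```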
- Prior samplers control the volume of each note by controlling the gain of the sample as it is played back, and they control the pitch of each note by reading through the sample at a faster or slower rate.
- the present invention's sample playback engine will do this as well, continuously on each of the six voices.
- the present invention also responds to the other three analysis parameters (i.e., brightness, odd/even harmonics, noise) because the synthesis processor 16 has a library of "denatured" samples.
- the sampler itself controls the volume of each note it plays by adjusting the gain as it plays back the sample.
- the volume of a particular note produced by a sampler depends not on the sample itself, but on the way that the musician plays the note. This is a primitive form of denaturing: removing the volume parameter from the samples to make them controllable by the musician.
- Pitch is somewhat denatured in typical samplers, because they only have a few samples of each timbre and use pitch shifting to control the actual pitch produced. Unfortunately, most samples, especially bowed string sounds, come with vibrato built into the sample itself. Since the present invention has real-time control of vibrato (the preferred embodiment's pitch detection is fine-grained enough to detect the actual vibrato played by the musician) , users may not always want the sampler to add vibrato for them. Thus, the present invention's preferred embodiment sample library contains entirely vibrato-less samples.
- Brightness is usually fairly consistent across the samples for a particular timbre. Many samplers offer sophisticated filtering features to control the brightness (and other aspects of the spectral shape) of the tones produced.
- the present invention has separate filters for each polyphonic voice, to control the brightness of each note continuously.
- sample library contains pairs of samples for each particular instrument, one soft and one loud. The analyzed brightness of a guitar note can then control the relative volumes of these two tones.
- Denatured samples are used to respond to the control signals that specify even/odd harmonic balance and pitched/unpitched balance.
- each sample that is stored has been split into three parts.
- the present invention preferably stores three sounds that, when played together, sound like a trumpet. The advantage is that by changing the relative volumes of the three samples, it is possible to drastically alter the timbre of the trumpet.
- the present invention is actually playing back five samples: even bright, odd bright, even dull, odd dull, and noise. So the present invention's six-voice polyphony is actually thirty-voice polyphony, with each guitar note requiring five samples. (By the way, one can individually select all five samples; they do not have to come from the same instrument; thus, one can have trumpet even harmonics and clarinet odd harmonics if desired.) In the present invention, as soon as the analysis detects some sound on a particular string, it tells the sampler to start playing something. For example, it could simply repeat the pitch of the note that was last played, or it could play nothing but the noise sample.
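One plausible mapping from the analysis parameters to the five denatured-sample gains is sketched below. The patent states only that the relative volumes respond to the analysis; the particular crossfade laws here are assumptions.

```python
# Hypothetical mapping from analysis parameters to the five playback gains.

def denatured_gains(amplitude, brightness, even_odd, pitched_unpitched):
    """All inputs normalized to 0..1; returns the five sample gains."""
    pitched = amplitude * pitched_unpitched
    return {
        "even_bright": pitched * even_odd * brightness,
        "odd_bright":  pitched * (1 - even_odd) * brightness,
        "even_dull":   pitched * even_odd * (1 - brightness),
        "odd_dull":    pitched * (1 - even_odd) * (1 - brightness),
        "noise":       amplitude * (1 - pitched_unpitched),
    }
```

With this law the four pitched gains sum to amplitude times the pitched/unpitched balance, so overall loudness tracks the analyzed amplitude while the timbre parameters only redistribute energy among the samples.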
- the present invention in addition to tracking five parameters in real time to control sample playback, has a mode for digital effects processing. There are already a number of digital effects processors on the market, so the present invention need not have standard effects like reverb, flanging, and so on.
- the present invention also has "pitch-synchronous" effects processing, meaning that the effect depends on the pitch of the note being affected. So there can be a digital delay whose length depends on the pitch of the note being delayed. Pitch-synchronous harmonization allows effects like "add a third above every note, major or minor depending on which fits into the key of B flat."
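A pitch-synchronous delay of the kind just described can be sketched as a comb filter whose delay is a whole number of periods of the detected fundamental. The period multiple and feedback amount below are illustrative assumptions.

```python
# Sketch of a pitch-synchronous delay: delay length tied to the fundamental.

import numpy as np

def pitch_synchronous_delay(signal, f0, sr=44100, periods=4, feedback=0.5):
    delay = max(1, int(round(periods * sr / f0)))  # delay in samples
    out = np.copy(np.asarray(signal, dtype=float))
    for n in range(delay, len(out)):
        out[n] += feedback * out[n - delay]        # feedback comb filter
    return out
```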
- the present invention's effects can also depend on the analyzed brightness, even/odd balance, and pitched/unpitched balance.
- a possible use for this is to have brightness affect one of the parameters of a digital filter, producing a compressor of brightness instead of volume. Or one could map the parameter backwards, producing a brightness difference enhancer to make dull notes duller and bright notes brighter. Since distortion generally adds odd harmonic energy, another possible effect is to control distortion amount based on even/odd balance.
- the processors perform a variety of algorithms which act directly on the waveforms generated by the strings and modify them; it is these modified waveforms that are sent to the output, rather than synthesized waveforms.
- although the present invention in its preferred embodiment is primarily designed with the guitar in mind, it will work with other musical instruments (both keyboard and non-keyboard types) as well.
- the choice of six channels of analysis and sample playback is biased towards the guitar, but few non-keyboard instruments have more than six voice polyphony, so six is enough for most purposes.
- monophonic or less than 6-voice polyphonic instruments will not be a problem.
- Instruments that provide some sort of gestural information to the present invention (for example, a saxophone with sensors on the pads) will work with the invention. They will receive all the benefits that guitars receive in terms of real time analysis of timbral parameters, close coupling to the synthesized sound, and responsiveness.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Electrophonic Musical Instruments (AREA)
Abstract
A musical instrument responsive controller (6) and method generate at least three control signals which are respectively responsive to pitch, amplitude and at least one timbral characteristic of a sound produced from the instrument (4). Although these signals can control various output effects regardless of their relationship to the input sound or to sound in general, in one application the control signals drive a synthesizer (8) to produce a synthesized musical sound that has not only a controlled pitch and amplitude but also a controlled timbre. This control can occur with respect to each note played on the controlling musical instrument and with respect to each note produced in response.
Description
Timbral Apparatus and Method for Musical Sounds.
Background of the Invention
This invention relates to a musical instrument responsive controller wherein control signals are generated not only in response to pitch and amplitude of a sound from the instrument, but also in response to timbre of the sound. As a result, continuous control is obtained for each note and during each note. This has particular application in, but is not limited to, a musical sound synthesis apparatus and method.
Electronic musical sound generating devices (referred to herein as synthesizers) have become very important in live musical performance and studio production. A significant advantage is that they provide a wide palette of sounds with precise control of rhythm and pitch.
The predominant musical instrument used to control a synthesizer has been the piano or organ keyboard. Beside the widespread availability of pianos and organs and the people who play them, the very nature of the keyboard makes it a good choice from an implementation point of view.
The earliest commercially available synthesizers (e.g., MiniMoog, ARP Odyssey, Putney VCS3) were monophonic and non-dynamic. As the technology evolved, instruments became more polyphonic and capable of wide dynamic response (e.g., Yamaha DX-7). The control information fed to the synthesis engine within the synthesizer grew to include how fast the key was struck. Joysticks, mod wheels, aftertouch and footpedals added the continuous element to keyboard control. That is, timbre control of the synthesized sound was implemented by external control devices (e.g., wheels, knobs, pedals) which affected the synthesis engine but which were unrelated to actual timbre (if any) arising from the musician's key-playing of the controlling keyboard instrument.
Historically, this is not unfamiliar to keyboard players. Pipe organs are non-dynamic but volume (amplitude) can be controlled by foot pedal. In many ways the connection of keyboards to synthesizers resulted in very little loss of familiarity of control with a large gain in the choices of instrumental sounds. A keyboard could now sound like any instrument.
Despite their advantages, prior synthesizers have several disadvantages. These stem principally from the keyboard model on which most electronic instruments are based. The keyboard has severe limitations as a musical interface and control device because control of musical events is limited basically to timing of attack and release, and velocity of attack (some keyboards also support aftertouch) . Keyboard responsive synthesizers decouple the player from the sound generating element. The strike of a finger on a key starts a chain of events that produces a sound. After a key is struck, however, the greatest creative choice left to the musician is when to release it. This series of key closures and releases is the simplest form of information that can be used to control a synthesizer.
Many non-keyboard instruments, on the other hand, put the musician directly in touch with the sound producing vibrations. In wind instruments, a musician can make subtle and rapid changes in timbre and pitch by changing the position and/or pressure of his lips, tongue, and mouth. In plucked string instruments such as the guitar, a musician can change the timbre by where and how she plucks the strings and by touching the strings before and during the note, and she can change the pitch by bending the strings. In bowed string instruments, the possibilities of control are even greater.
Although there has been a need for non-keyboard controlled synthesizers wherein these characteristics that are unavailable from keyboards can be used to control synthesized musical sounds, this need still exists because the success of alternate (i.e., non-keyboard) controllers has been less than overwhelming in the history of electronic music. A significant reason for such lack of success is the conventional protocol used to define and communicate control information from the controlling instrument to the sound generating synthesizer. This protocol is known as the MIDI protocol.
MIDI protocol is used for controlling, recording, and communicating performances of electronic musical instruments. The MIDI protocol has two chief problems which relate to its origin as a way of communicating keyboard data: relatively slow speed of communication (also referred to as bandwidth), which can result in noticeable delays in musical events; and lack of a flexible and powerful way to specify modifications to ongoing notes.
These are not always critical shortcomings with regard to keyboard driven synthesizers as they can be well served by the MIDI protocol. The speed of the MIDI protocol (31.25 Kbaud) is adequate for transmitting the event-based nature of a keyboard. A ten note chord can be sent in 6.7 ms, which is on the borderline of being imperceptible. The continuous controller information generated from external devices such as wheels, knobs or pedals is usually no more than 3 channels (pitch bend, modulation, and aftertouch) , keeping the bit count low.
Problems can occur, however, when trying to interface alternate controllers to synthesizers. Polyphonic instruments such as the guitar and violin can outstrip the bandwidth of the MIDI protocol. Just updating 7 bit pitchbend and volume parameters 100 times a second for six guitar strings would exceed the MIDI protocol bandwidth:
6 strings x (3 pitch bytes + 3 amplitude bytes) x 10 bits / 0.01 seconds = 36.0 Kbaud. (MIDI protocol takes 10 bits to transmit a 7 bit value.)
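That figure can be checked with a one-line computation; the quantities are exactly those given above.

```python
# Reproduce the bandwidth arithmetic: six strings, three pitch-bend bytes
# plus three volume bytes per update, ten wire bits per MIDI byte,
# one hundred updates per second.
bits_per_update = 6 * (3 + 3) * 10        # 360 bits per update
print(bits_per_update / 0.01 / 1000)      # 36.0 Kbaud, above MIDI's 31.25
```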
Furthermore, serious speed problems can arise when several controllers are sharing the same MIDI network, which is a common practice when musicians try to use techniques that depend on very accurate time relationships between events, such as flam techniques on percussion controllers, and especially when information from foot pedals, joysticks and other continuous controllers needs to be updated at a rapid rate.
Independent of bandwidth, the MIDI protocol also represents data in a manner that assumes the controller is a keyboard or at least a percussive device. The MIDI protocol note-on command is an indivisible integration of timing, pitch, and loudness (velocity) information. This is appropriate for a keyboard because each key is associated with a particular pitch and each key strike occurs with a velocity and starts a sound of the particular pitch. Therefore, every time a key is struck, pitch, velocity, and timing information is available. Every modification of one of these three values is accompanied by a change or at least a reassertion of the other two.
For an instrument of continuous nature, such as a violin, these parameters are decoupled. One hand generally determines timing and loudness and the other pitch. They can and do change independently of each other. Furthermore, the timing of a note is not as concise as the pressing of a button or key. Notes can crescendo into audibility. The MIDI protocol, however, requires that an on/off decision be made at some volume threshold. When this threshold is met, the velocity value sent in a MIDI protocol command will usually be the value of this threshold, making the velocity data useless.
MIDI protocol does not preclude processing of the audio signal out of the synthesizer. Several synthesizers have individual outputs per voice. These can be mapped to a specific string from the controller and modulated in the analog domain based on information extracted from the string.
One of the most satisfying examples of this is a Zeta Mirror 6 fret-scanning guitar controller connected to a Yamaha TX-802 FM synthesizer operating in legato mode. Each of six outputs from the TX-802 synthesizer is routed back to the Mirror 6 guitar where it passes through a respective voltage controlled amplifier (VCA) . Each VCA is then controlled by an envelope follower tracking the energy of one of the strings. The six VCA outputs are summed and fed to an amp. The foregoing gives continuous control over the amplitude envelopes of each output, and this use of continuous dynamic control returns much of the nuance to the interface; however, this still does not use timbre of the sound played on the guitar as a control parameter for the synthesis engine of the synthesizer.
Some MIDI synthesizers respond to legato style commands. Pitch bend can vary pitch up and down to one octave but with a resolution of only 5.1 divisions per semitone (19.6 cents). In general, MIDI protocol pitch bend is per-channel, not per-note, so one cannot just plug a MIDI guitar into a synthesizer and have pitch-bend work correctly. Instead, bending any of the notes on a guitar will cause all of the synthesizer pitches to bend by the same amount. If the synthesizer is polyphonic, however, one can have each string of the guitar control a separate monophonic voice on its own MIDI protocol channel, which would allow separate pitch bend information for each string. Otherwise, there is no way to control continuous variation in pitch polyphonically.
Before the use of the MIDI protocol, most available synthesizers were analog and used voltages to represent musical values. Articulation was separate from pitch and all controllable parameters were on equal footing. Bandwidth and resolution were not concerns, but good intonation (i.e., tuning) was a perpetual effort (a lot like playing a violin). As with the later synthesizers referred to above, these early synthesizers did not use timbre-generated control extracted from a controlling musical instrument.
The integration of the eight-bit microprocessor into synthesizers mostly solved the tuning issues in these early synthesizers. Dividing the octave so that pitches could be easily represented within 8 bits produced a strong bias toward semitones. Combined with the ability of CPUs to communicate, this soon led to the evolution of the MIDI protocol.
Interfacing violins and guitars to digitally controlled analog synthesizers was possible. With some cooperation from the manufacturers (e.g., Sequential, Oberheim, and Moog) , analog controls extracted from the string and injected into control voltage sum nodes could produce an intimate connection between the musical instrument manipulated by the musician and the synthesizer it controlled. Pitch bends and dynamics were smooth and responsive, but pitch stability remained a problem as this method bypassed some of the automatic tune functions.
The emergence of frequency modulation (FM) and other methods of digital synthesis made analog voltage controlled oscillators (VCO) and filters instantly quaint and won great popularity because of the clarity and musical sound variety of the technique.
However, these forms of sound generation closed off many of the control entry points to synthesis. At this time alternate controller manufacturers were forced out of the hardware, and the only practical point of entry was through software using MIDI protocol. While simplifying the connection, the loss of control was disappointing. At best the style of playing guitar or violin was forced into the language of the keyboard.
This limited the choice of synthesizers for the users of the non-keyboard controllers. The emergence of sampling and its virtual monopolization of the synthesizer market created new problems for interfacing. FM and VCOs are continuously variable over the entire range of pitch. Samples, as the name implies, are not and require swapping of files that cover specific pitch ranges. Playing a trill across a sample boundary results in discontinuous spectral envelopes for many sounds.
Articulation for FM and VCOs comes from external envelopes that can be varied based upon input parameters. With sampling, the attack character of the sampled instrument is inherent in the wavetable and timbral changes are restricted to, at best, simple filtering or crossfading between fixed sounds. Even something as personal as vibrato is often captured with the sample and not under the musician's control.
The skin deep beauty of sampling has left many musicians longing for a more meaningful conversation with their instruments. Nostalgia has even created a demand for older analog voltage controlled synthesizers.
From the foregoing, it can be said that the development of electronic music synthesizers has addressed, to some extent, both pitch and amplitude control that is responsive to the musician's playing of a controlling musical instrument. Musicians of non-keyboard instruments can, however, also change the timbre of a note as it evolves over time.
For example, playing a guitar close to the bridge produces a brighter sound, while playing farther from the bridge produces a duller sound; however, on a current guitar controlled synthesizer this form of expression ("brightness") is ignored.
Also, the exact pick position affects the ratio of odd to even harmonic components. When the string is touched lightly at its midpoint and plucked, the effect is to produce a "harmonic," in which only the even-numbered harmonics of the tone are allowed to sound. If the string is plucked at that same midpoint, only odd-numbered harmonics are produced. The guitarist can control this ratio in an expressive way that is lost to MIDI guitars.
Additionally, the guitarist can control the noisiness or distortion of the signal. Depending on picking or fingering style, the guitarist can control the amount of noise in the attack of a particular note. Also, by muting the strings and "scratching," the guitarist can create a sound that is nothing but noise, which can be useful in a percussive way. This, too, is lost to MIDI guitars.
As a result of all these factors, musicians have been frustrated by the inability to combine the advantages of electronic and conventional instruments. The benefits of electronic sound generation and manipulation have not been extended to the widest possible range of different kinds of music. Thus, the mapping of timbral information extracted from the controlling musical instrument onto the synthetic voice or voices is the next step for returning control to the player. This could be handled in the analog audio path after processing under the MIDI protocol, but greater flexibility and more elaborate processing of control information would be better addressed in the digital domain.
Summary of the Invention
The present invention meets the foregoing need in that it provides a novel and improved musical instrument responsive controller and method that generate control signals not only in response to pitch and amplitude, but also in response to timbre of an actual musical sound produced by a musician playing a controlling musical instrument. These control signals are typically used to generate a musical sound having these three characteristics controlled by the respective control signals; however, these control signals can be transformed or mapped to control different sound characteristics and/or to control effects other than sound (e.g., lighting). Thus, the present invention achieves pitch, amplitude and timbre control merely from a controlling musical instrument without the need for external devices such as wheels, knobs and pedals. This control can come from any sound source providing sound with pitch, amplitude and timbre characteristics that can be sensed and input into a digital or analog system embodying the present invention. Furthermore, this control is responsive to each note played on the controlling device and this control can affect a respectively generated note produced by the invention in its sound generating embodiment. That is, each note played by a musician on the controlling instrument produces its own respective set of pitch, amplitude and timbre control signals.
The present invention can do everything current MIDI guitars can do, and more. It has fret scanning like current Zeta guitars. It does pitch and amplitude tracking on six channels.
The present invention also analyzes the three timbre parameters of brightness, even/odd harmonic balance, and pitched/unpitched balance (a noise or distortion parameter) in real time. In a particular implementation "real time" means that the parameters are updated about once every 10ms. So as the controlling musical instrument is played, the present invention tracks the following five parameters continuously in real time on each string of the guitar: pitch, amplitude, brightness, even/odd harmonic balance, and pitched/unpitched balance.
Current MIDI guitars allow a synthesizer to more or less follow the pitch, amplitude, and timing of a guitar; the preferred embodiment of the present invention provides a synthesizer which follows all five of the above parameters. It is possible to map these parameters in other ways, such as having even/odd ratio control the ratio of violin to trumpet sound, or having brightness
control the amount of detuning, or having one or more of the control signals control a non-sound effect as mentioned above. But the usual case is literal control of the electronic sound: when the guitar's brightness goes up, so does the synthesizer's. In general, making effective use of this analysis information requires continuous control of the sound source.
Although not preferred, this can be accomplished (at least to some degree) using MIDI protocol. The present invention can send out the same sort of MIDI protocol information that a Zeta Mirror-6 guitar sends out, plus additional controller information for brightness, even/odd balance, and pitched/unpitched balance. With special synthesizer programming on certain kinds of synthesizers, it would be possible to construct synthesizer patches that respond to these parameters. One problem, though, is that the low bandwidth of the MIDI protocol is not sufficient to update five parameters on six strings continuously.
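As a rough arithmetic illustration of this bandwidth limit (the figures below are standard MIDI 1.0 facts, not taken from this disclosure): MIDI runs at 31,250 bits per second with 10 bits per transmitted byte, or about 3,125 bytes per second. A continuous-controller message occupies three bytes, so one full update of five parameters on six strings takes 30 messages, roughly 90 bytes, allowing at most about 35 complete updates per second, well below the roughly 100 updates per second (one per 10 ms) contemplated above.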
A superior alternative to using MIDI protocol is to use ZIPI™ protocol, a new protocol from Zeta Music Partners, a partnership of Zeta Music Systems, Inc. and Gibson Ventures, Inc. This has a much higher bandwidth, no keyboard biases towards notes having a single overall pitch and volume, and dedicated real-time continuous control parameters for brightness, even/odd balance, and pitched/unpitched balance. The present invention can send all of the results of its analysis via ZIPI™ protocol. The present invention also has an on-board sample playback engine designed to respond to the five parameters.
The present invention provides a musical instrument responsive controller. This controller comprises means for receiving a sound signal from a musical instrument, wherein the sound signal is a signal responsive to sound produced from the musical instrument in response to a musician's manipulation thereof. The controller further comprises means, responsive to a received sound signal, for generating at least three separate control signals wherein one of the control signals is responsive to a pitch of the received sound signal, another of the control signals is responsive to an amplitude of the received sound signal, and a further control signal is responsive to a timbre characteristic of the received sound signal.
The controller preferably further comprises means for receiving a manipulation signal from the musical instrument, wherein the manipulation signal is a signal responsive to at least one type of a musician's manipulation of the musical instrument. This manipulation signal is usually related to the approximate pitch that the instrument will produce when played. In this embodiment, the means for generating is also responsive to a received manipulation signal.
The musical instrument responsive controller also preferably further comprises means, connected to the means for generating, for generating a synthesized musical sound having a different voice from the sound produced by the musical instrument but having pitch, amplitude and timbre of the voice responsive to the control signals. The controller also preferably comprises a non-keyboard
musical instrument defining the musical instrument, wherein the non-keyboard musical instrument is connected to the means for receiving.
The present invention also provides a musical instrument responsive control method. This method comprises receiving a sound signal from a musical instrument, wherein the sound signal is a signal responsive to sound produced from the musical instrument in response to a musician's manipulation thereof. It also comprises generating at least three separate control signals wherein one of the control signals is responsive to a pitch of the received sound signal, another of the control signals is responsive to an amplitude of the received sound signal, and a further control signal is responsive to a timbre characteristic of the received sound signal. This method can produce a non-musical sound effect in response to at least one of the control signals. The method can also or alternatively produce a musical sound in response to the control signals.
The present invention still further provides a musical sound synthesis method, comprising: (a) detecting a pitch selecting manipulation of a musical instrument; (b) receiving an electrical signal representing a sound generated from the musical instrument in response to a detected pitch selecting manipulation; (c) performing a frequency analysis of the electrical signal to determine frequencies present in the sound; (d) performing, responsive to a detected pitch selecting manipulation of the musical instrument, a time analysis of the electrical signal to
determine a fundamental frequency of the electrical signal; (e) determining from the frequency analysis and the determined fundamental frequency how much energy is in harmonics of the fundamental frequency present in the frequency analysis of the sound signal; and (f) generating timbre control signals in response to said step (e).
Therefore, from the foregoing, it is a general object of the present invention to provide a novel and improved musical instrument responsive controller and method and more particularly, but not by way of limitation, a novel and improved musical sound synthesis controller and method. Other and further objects, features and advantages of the present invention will be readily apparent to those skilled in the art when the following description of the preferred embodiments is read in conjunction with the accompanying drawings.
Brief Description of the Drawings
FIG. 1 is a schematic and block diagram of the musical instrument responsive controller, in its preferred embodiment as a musical sound synthesis controller, of the present invention.
FIG. 2 is a more detailed schematic and block diagram of the embodiment of FIG. 1.
FIG. 3 is a block diagram showing various signal flows among the four processing units shown in FIG. 2.
FIGS. 4-25 are schematic circuit diagrams showing a particular implementation of the preferred embodiment of FIGS. 1-3.
Detailed Description of Preferred Embodiments
Overview
The present invention provides to a musician 2 (FIG. 1) intimate real-time control of synthesized sound, and it also provides a wider palette of sounds and sound-modification algorithms. The preferred embodiment of the present invention will be described with reference to a second-generation instrument 4 based on an electric guitar. The basic aim of the preferred embodiment is to enable the use of the electric guitar as the controlling instrument in such a way that the skilled guitarist can employ the large repertoire of instrumental techniques he or she has laboriously developed to control the synthesis and processing of musical sounds. Other instruments, including violin, woodwinds, voice, and drums, can also be fitted with electronic pickups and used as inputs to this new invention, giving a similar range of expressive power to skilled performers on these instruments.
This work builds on experience gained with the guitar- and violin-based synthesizers developed and marketed by ZETA Music Partners. The principal predecessor instrument, the ZETA Mirror-6 MIDI guitar, uses a combination of fret scanning, independent electronic pickups on each string, and real-time pitch analysis to produce MIDI protocol signals that can be used to control a conventional electronic synthesizer.
The present invention goes beyond the earlier ZETA instruments. Features of the preferred embodiment of the present invention include:
* Built-in 16-bit sample playback synthesis engine, with high-quality sounds in internal ROM
* PCMCIA card interface for libraries of additional sounds, patch storage, and firmware upgrades
* Stereo output, with controllable pan for each voice
* Fast and accurate tracking of guitar-string vibration parameters
* Pitch (fret pitch and instantaneous pitch)
* Amplitude
* Spectral envelope
* Spectral timbre analysis
* Complex synthesis engine
* Multiple-component sound wave files (odd harmonics; even harmonics; non-pitched component)
* Post-synthesis filter with real-time controllable parameters
* Real-time matrix mapping of analysis parameters and external inputs to control synthesis
* ZIPI™ (and MIDI) protocol for fast flexible communication of control parameters
* Can control other synthesizers
* Can respond to external controllers
* Hexaphonic waveform mode processes string signals individually
* Six channels of 16-bit low-noise analog-to-digital conversion
* Independent processing of each string signal
* Variable pitch-shifting and harmonizing
* Other nonlinear algorithms
* Pre- and post-filtering, controllable in real time
Implementation
The illustrated preferred embodiment of the present invention employs several high-speed digital processors, including CISC, RISC, and DSP technology. These processors perform all functions for analyzing and synthesizing waveforms, for the user interface, and for communications, allowing functions of the device to be upgraded and customized in the field by loading new firmware.
Referring to FIG. 1, these processors perform signal analysis to determine various parameters from input signals received from the guitar 4. These processors also map these parameters into respective control signals. These functions are generally identified in FIG. 1 by the reference numeral 6.
These analysis and mapping functions are the basic functions of the invention in that the resulting control signals can be used to control any desired effect, whether sound or non-sound (e.g., lighting, smoke effects, slides, motion around a stage or platforms that musicians are standing on, videographics, and music notation (i.e., storing and printing of the control information in musical form)). In the preferred embodiment particularly described herein, however, the effect controlled is musical sound; therefore, FIG. 1 further shows that the preferred embodiment also includes a musical sound synthesis function 8 performed by the processors.
In the preferred embodiment described herein, the foregoing functions are performed by four digital processors; however, it is contemplated that other hardware and software implementations can be used to obtain the present invention (it is also contemplated that the invention can be implemented as an analog system). In particular, it is contemplated, and more preferred, to implement the invention with a single processor; but with present technology the four-processor implementation is utilized. These four processors are identified in FIG. 2 as a time domain analysis processor 10, a mapping processor 12, a frequency domain analysis processor 14, and a synthesis processor 16.
time domain analysis processor 10
To obtain accurate string pitch information, the invention performs a time domain analysis of the waveform of each individual string of the guitar 4. Each waveform is obtained in response to a conventional pickup (e.g., piezoelectric or magnetic) adapted to sense the respective sounds from individual strings. Such pickups are available from Zeta Music and others. The time periods are measured between successive inflection points of the waveforms and a variety of heuristics (techniques understood in the art as useful in generating sound parameters based on recent control parameters such as to prevent unintended anomalous sounds from being produced) are applied to these raw measurements. These heuristics are aided by knowledge (via sensors on the fret board in accordance with the sensing system disclosed in United States Patent No. 4,468,997 to Young, incorporated herein by reference) of the position at which
the string is fretted; allowance has to be made, however, for special techniques such as harmonics and deep whammy-bar pitch bends. A very accurate value for string vibration frequency can be determined in a little more than one complete period of the waveform. Accurate control signals for synthesis are produced with a time delay that is musically minimal.
The time domain analysis is performed with the time domain analysis processor 10, which is implemented in the preferred embodiment with a Motorola MC68332 (see U114 in FIG. 14). This processor combines on one chip a 68020-type central processing unit (CPU) core with a specialized time processor unit (TPU). The TPU contains several high-speed counters that can be used to count external events, measure time periods, and generate programmable patterns of events. In the present invention, the TPU rapidly performs a time domain analysis of the waveform of each individual string to determine the pitch. This is specifically done by converting each sensed fret position of the musician's hand to a respective period value corresponding to the pitch that the respective string would have for that particular fret position. This conversion is done in the illustrated embodiment by using a look-up table relating sensed fret position with period, which table is stored in memory of the processor 10. A particular retrieved value is compared with the sum of partial period counts received from the TPU. Typically, multiple occurrences of the same period count would be needed to confirm a pitch value if only a count were used. The fret position value speeds this up by a cycle or more, as the first parts of a waveform of a sound produced from the controlling instrument 4 are unstable. That is, sensing the fret position, which is selected by the musician just prior to striking the string, allows the processor 10 to anticipate the pitch and establish a value from the look-up table, against which value a running sum of the periods from inflection point to inflection point of the waveform for the actual sound from the guitar 4 is compared. If a sum of these partials is detected as equaling the look-up table value, the pitch is confirmed. This can occur at the end of the first stable period of the waveform.
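By way of illustration only (the sample rate, table contents, and tolerance below are assumptions for the sketch, not values from this disclosure), the fret-assisted pitch confirmation just described might be expressed as follows:

```python
# A minimal sketch of fret-assisted pitch confirmation: the sensed fret
# anticipates a period, and partial periods measured between inflection
# points are summed until they match it. All constants are illustrative.

SAMPLE_RATE = 44100  # timer ticks per second (hypothetical)

# Hypothetical lookup table: expected waveform period (in ticks) for each
# fret of one string, e.g. a high E string at ~329.6 Hz when open.
OPEN_FREQ = 329.63
EXPECTED_PERIOD = {
    fret: SAMPLE_RATE / (OPEN_FREQ * 2 ** (fret / 12.0))
    for fret in range(23)
}

def confirm_pitch(fret, partial_periods, tolerance=0.03):
    """Compare a running sum of inflection-point intervals against the
    period predicted by the sensed fret position.

    partial_periods -- intervals (in ticks) between successive inflection
    points of the string waveform, in the order measured.
    Returns the confirmed frequency in Hz, or None if not yet confirmed.
    """
    expected = EXPECTED_PERIOD[fret]
    running = 0.0
    for p in partial_periods:
        running += p
        # Pitch is confirmed as soon as the accumulated partials span
        # one full period of the anticipated pitch.
        if abs(running - expected) <= tolerance * expected:
            return SAMPLE_RATE / running
        if running > expected * (1 + tolerance):
            running = 0.0  # restart on an unstable early cycle
    return None
```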
The main function of the processor 10 is to do pitch and amplitude tracking of the six audio inputs received in the processor 10 through analog-to-digital converters (see U136 in FIG. 19). This runs with a delay of less than ten microseconds, and updates once per inflection point.
There are analog low pass filters (see FIGS. 20-22) on each of the six inputs, passing those frequencies that could be played by a particular string of the guitar and attenuating others. This gives the pitch tracking algorithm an advantage, since it only looks at frequencies that could conceivably be the fundamental. These cutoff frequencies are adjustable, by means of various sets of resistors and capacitors that can be switched in and out at the request of a CPU. (This allows for instruments tuned differently than a guitar, or alternate tunings on a guitar.) The processor 10 also handles communications with MIDI protocol that can connect to external devices and systems 18.
The processor 10 is also connected to a front-panel user interface 20, which includes in the preferred embodiment a 24-column x 2-row display and a number of push buttons and rotary encoders.
mapping processor 12
The mapping processor 12 integrates control information between the analysis engines (the time domain analysis processor 10 and the subsequently described frequency domain analysis processor
14) and the synthesis engine (the subsequently described synthesis processor 16).
The processor 12 is implemented in the preferred embodiment with another Motorola MC68332 (see U59 in FIG. 4).
The processor 12 is also in charge of booting the processors 14, 16 since their preferred embodiment implementations have intricate booting requirements.
The processor 12 is also in charge of producing envelopes to provide continuous values for sample playback via the synthesis processor 16. The loop points of the samples, in the steady state, are such that the amplitude is constant as long as the note is held. The mapping processor 12 can shape amplitude with a pre-stored envelope, which is stored as part of a preset. Likewise, the processor 12 is in charge of adding vibrato to a sound, in cases where the musician wants vibrato added automatically. The mapping processor 12 can also transform incoming control data. For example, the processor can perform compression, allowing
the synthesized sounds to sustain longer than the sound of a guitar string. Another transformation is the opposite of compression: heightening the effect of a parameter. For example, it might be very effective to have the analyzed brightness of the guitar string have a heightened effect on the brightness of the sample playback.
In general, the processor 12 can apply the analysis results from the analysis processors 10, 14 to the synthesis processor 16 to obtain literal control of the sound produced by the processor
16. The mapping processor 12 can, however, change the character of the analysis results in either the amplitude or time domain. Rapid changes can be slowed; small change values can be multiplied.
Values can be shaped by the use of internally stored look-up tables or by using an envelope generator (a function generator that has adjustable rates for each segment: attack, decay, sustain, release). Such mapping allows greater variety of internal control over the synthesized sound.
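As an illustrative sketch (the function names, ranges, and curves below are assumptions, not from this disclosure), the kinds of control-signal mappings described above (compression of rapid changes, heightening of a parameter, and table shaping) might look like:

```python
# Minimal sketches of three control mappings: a one-pole smoother to slow
# rapid changes, an expander to heighten a parameter's effect, and a stored
# lookup table with linear interpolation. All parameters are illustrative.

def smooth(value, previous, rate=0.2):
    """Slow rapid changes in a control stream with a one-pole lowpass."""
    return previous + rate * (value - previous)

def expand(value, amount=2.0, center=0.5):
    """Heighten the effect of a normalized (0..1) parameter about a center."""
    return min(1.0, max(0.0, center + amount * (value - center)))

def shape(value, table):
    """Shape a normalized (0..1) parameter through a stored lookup table,
    interpolating linearly between adjacent entries."""
    pos = value * (len(table) - 1)
    i = int(pos)
    if i >= len(table) - 1:
        return table[-1]
    frac = pos - i
    return table[i] * (1.0 - frac) + table[i + 1] * frac

# Example: give the guitar's analyzed brightness a heightened effect on
# the sample-playback brightness.
brightness_out = expand(0.6, amount=2.0)  # 0.6 -> 0.7
```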
The processor 12 also handles most of the burden of the ZIPI™ protocol. This includes removing messages that have become superseded before being sent, and filtering the outgoing timbral updates to remove duplicates.
A particular implementation of mapping control functions is illustrated in FIG. 3. Various signals are also labeled in FIG. 3.
These include the following:
from time domain analysis processor 10 to mapping processor 12
1. string_pitches:
   a. note - MIDI number (7 bits), MIDI fraction (8 bits)
2. string_periods (timer counts - 16 bits)
3. string_frets (8 bits)
4. string_events:
   a. string_note_on
   b. string_note_off
   c. string_repluck
   d. string_trill
5. front panel controls:
   a. instrument/voice selections
   b. LFOs
   c. envelopes
   d. master volume
   e. master pan
6. external ZIPI controls:
   a. external front panel controls
   b. external analysis (i.e., string_odd, string_even)
from mapping processor 12 to time domain analysis processor 10
1. string_analysis:
   a. string_odd, every 8 msecs (16 bits)
   b. string_even, every 8 msecs (16 bits)
   c. string_noise, every 8 msecs (16 bits)
   d. string_centroid, every 8 msecs (16 bits)
2. string_amplitudes (16 bits)
3. instrument names (from PCMCIA card)
from mapping processor 12 to frequency domain analysis processor 14
1. string_period (16 bits)
2. boot-up (w/tables, ~16 KB)
from mapping processor 12 to synthesis processor 16
1. voice_gains - 5 per string, every 8 msecs (16/24 bits)
2. string_filter controls, every 8 msecs:
   a. string_filter_coefficients (16/24 bits)
   b. string_gains (16/24 bits)
3. voice file number - 5 per string (16/24 bits) (pitch based, file number offset)
4. voice_pitch_increments - 5 per string (16/24 bits)
5. overall pan (16/24 bits)
6. boot-up (w/tables, ~16 KB)
from frequency domain analysis processor 14 to mapping processor 12
1. string_analysis:
   a. string_odd (16/24 bits)
   b. string_even (16/24 bits)
   c. string_noise (16/24 bits)
   d. string_centroid (16/24 bits)
2. string_amplitudes (16/24 bits)
from synthesis processor 16 to mapping processor 12
1. PCMCIA data:
   a. number of instruments
   b. for each instrument:
      1. name
      2. number of sound files
      3. pitch range per file
      4. lowest pitch range
frequency domain analysis processor 14
The preferred embodiment of the present invention further includes the digital processor 14, which performs a spectral analysis of the sound signal from each string of the guitar 4. From this analysis, parameters reflecting the timbre of the string signal are determined. Spectral tilt is determined as a weighted ratio between high- and low-frequency components. The processor 14 also measures the proportion of three different components of the string signal: odd harmonics; even harmonics; and non-harmonic vibrations (i.e., components of the sound not resulting from the vibration of the string). These kinds of information are computed in real time (approximately every eight milliseconds in the preferred embodiment) and combined according to the user-specified program or "patch." The results are used as control information for the synthesis engine.
The frequency domain analysis processor 14 of the preferred embodiment includes a Motorola DSP56002 (see U44 in FIG. 13). It is directly attached to a high-speed 16-bit analog-to-digital converter (see U60 in FIG. 23), which is fed with the sound signals from all six strings of the guitar 4, which sound signals are time-multiplexed in the analog domain (FIG. 24). The processor 14 uses its high-speed calculation capabilities to analyze separately the signals from each string. First, a spectral analysis is performed using fast Fourier transform (FFT) or equivalent techniques.
The FFT can be calculated in known manner without knowing the fundamental frequency of the string sound being analyzed. The fundamental frequency is, however, used in the frequency domain analysis processor 14 to correlate the FFT results with respect to the harmonics or partials of the string sound. Selecting a suitable FFT window or bin (e.g., 20 hertz bins) allows adequate correlation of the FFT analysis with the fundamental frequency (determined by the time domain analysis processor 10) and the harmonics of that frequency. This is faster than using only the FFT analysis to determine the fundamental and its harmonics, and speed is important in the synthesis of musical sounds in that it is preferred to have as short a delay as possible between playing the controlling musical instrument and generating the synthesized sound. Having made the FFT analysis, the frequency domain analysis processor 14 has two main tasks. It calculates the frequency
centroid of each signal, which is essentially an energy-weighted average of the frequency components of a signal. This measures the amount of high-frequency energy in the signal, and is a very good indicator of perceived brightness. It is calculated by summing the products obtained by multiplying the energy of each harmonic by the respective harmonic frequency present in the spectrum, and dividing the sum by the total energy of all the frequencies in the spectrum.
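By way of illustration only (this sketch is not part of the original disclosure; the Hann window and the use of all spectrum bins, rather than only identified harmonics, are simplifying assumptions), the centroid computation just described can be expressed as:

```python
# A minimal sketch of the brightness (spectral centroid) calculation:
# an energy-weighted average of the frequency components of one frame.
import numpy as np

def spectral_centroid(frame, sample_rate):
    """Energy-weighted mean frequency of a windowed frame, in Hz."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    energy = spectrum ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    total = energy.sum()
    if total == 0.0:
        return 0.0  # silent frame: no meaningful centroid
    return float((energy * freqs).sum() / total)
```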
Knowing the fundamental frequency of the signal (which is calculated by the time domain analysis processor 10 and passed to the processor 14 via the mapping processor 12) , the frequency domain analysis processor 14 can divide the frequency spectrum into three categories: energy that is close in frequency to an even multiple of the fundamental (i.e., even harmonic energy), energy that is close to an odd multiple (i.e., odd harmonic energy), and other energy (i.e., noise). The even/odd determination is made by dividing the sum of energies for the respective even/odd harmonics in the spectrum by the total spectrum energy. The noise determination is made by adding together the energy in the frequency bands not related to the fundamental frequency and its harmonics and dividing that sum by the total energy of the signal; the result is referred to as the pitched/unpitched balance.
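Again purely as an illustrative sketch (the bin tolerance, windowing, and classification rule below are assumptions, not values from this disclosure), the even/odd/noise split might look like:

```python
# A minimal sketch of splitting a frame's spectral energy into even-harmonic,
# odd-harmonic, and noise fractions, given the fundamental supplied by the
# time domain analysis. Bins within tol*fundamental (in Hz) of an integer
# multiple of the fundamental are counted as harmonic energy.
import numpy as np

def harmonic_balance(frame, sample_rate, fundamental, tol=0.05):
    """Return (even, odd, noise) energy fractions for one frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    even = odd = 0.0
    total = spectrum.sum()
    for e, f in zip(spectrum, freqs):
        if f <= 0.0:
            continue                 # skip the DC bin
        n = f / fundamental          # harmonic number this bin is nearest
        nearest = round(n)
        if nearest >= 1 and abs(n - nearest) <= tol:
            if nearest % 2 == 0:
                even += e            # energy near an even multiple
            else:
                odd += e             # energy near an odd multiple
    if total == 0.0:
        return 0.0, 0.0, 0.0
    noise = total - even - odd       # everything not near a harmonic
    return even / total, odd / total, noise / total
```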
The frequency domain analysis processor 14 passes the results of these analyses, namely the respective four parameters of brightness, even harmonic, odd harmonic, and noise (pitched/unpitched balance), for each of the six channels corresponding to the six strings of the guitar 4, to the mapping processor 12, which then sends them to the synthesis processor 16 to control the sample playback, and to the time domain analysis processor 10 to send them externally (typically some mapping, or transforming, of the analysis signals will occur in the mapping processor 12 to create the control signals sent to at least the synthesis processor 16).
The frequency domain analysis processor 14 can also pass these six channels of digital audio directly to the synthesis processor 16 with a minimum of overhead. This allows the processor 14 to do spectral analysis, while the synthesis processor 16 does spectrum-synchronous hex (i.e., six, for the six-string guitar 4) effects based on the results from the processor 14.
synthesis processor 16
The digital synthesis processor 16, which in the preferred embodiment includes a Motorola DSP56002 (see U43 in FIG. 11), is used for synthesis of output waveforms. Sample files representing different instrument voices are accessed from built-in ROM or from a PCMCIA card which can be inserted through an opening in the front panel 20. As in conventional sampling synthesizers, these instrument samples are interpolated and resampled to shift them to the pitch determined by the user patch. Often, the synthesized pitch would be the actual instantaneous pitch of the guitar string; however, one major advantage of the present invention is that the pitch can be transposed (in the mapping processor 12) by a fixed tonal interval or otherwise modified for musical effects.
basic sound generation by the processor 16
In general, the present invention uses denatured samples in sets of five, so it must play five samples at a time, times six voices of polyphony (i.e., one per string of the guitar 4). The musician can select which five sounds make up a voice, and each of the six strings can play a separate voice. So the processor 16 could conceivably have to play thirty different samples back at the same time.
The relative volumes of all five components of a voice can change in real time, in response to analysis information from the frequency domain analysis processor 14 or from external input source 18.
The pitch of a voice can change continuously, but each of the five samples within a voice must be pitch shifted by the same amount.
Each of the thirty samples also has a second-order filter, as another way to continuously control brightness. Each of the six voices has a fourth-order filter shaping the sum of the five samples. The synthesis processor 16 also mixes together these thirty digital signals, and it can add digital audio coming from effects processing. This resultant stereo signal goes to two digital audio channels, which connect to 16-bit digital-to-analog converters (DACs) (see U66 in FIG. 23). Each of the six voices is individually pannable, but the five samples within a voice are not. There are only stereo outputs, since there are only two DACs.
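As an illustrative aside (not taken from this disclosure), a second-order filter of the general kind described is commonly realized as a biquad section; the coefficient formulas below follow the widely used audio-EQ "cookbook" lowpass form and are an assumption, not the patented filter design:

```python
# A minimal sketch of a second-order (biquad) lowpass whose cutoff can be
# recomputed continuously from a brightness control signal.
import math

def lowpass_biquad_coeffs(cutoff_hz, sample_rate, q=0.707):
    """Second-order lowpass coefficients, normalized so a0 == 1."""
    w0 = 2.0 * math.pi * cutoff_hz / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    cosw = math.cos(w0)
    a0 = 1.0 + alpha
    b = [(1.0 - cosw) / 2.0 / a0, (1.0 - cosw) / a0, (1.0 - cosw) / 2.0 / a0]
    a = [1.0, -2.0 * cosw / a0, (1.0 - alpha) / a0]
    return b, a

def biquad(x, b, a, state):
    """Process one sample, Direct Form II transposed; state is a
    two-element list updated in place."""
    y = b[0] * x + state[0]
    state[0] = b[1] * x - a[1] * y + state[1]
    state[1] = b[2] * x - a[2] * y
    return y

# Example: a 2 kHz cutoff at a 44.1 kHz sample rate.
b, a = lowpass_biquad_coeffs(2000.0, 44100.0)
state = [0.0, 0.0]
y = biquad(1.0, b, a, state)  # filter an impulse sample
```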
More particularly, the sample playback engine implemented by the synthesis processor 16 responds to the five analysis parameters in real time. Prior samplers control the volume of each note by controlling the gain of the sample as it is played back, and they control the pitch of each note by reading through the sample at a faster or slower rate. The present invention's sample playback engine will do this as well, continuously on each of the six voices.
The present invention also responds to the other three analysis parameters (i.e., brightness, odd/even harmonics, noise) because the synthesis processor 16 has a library of "denatured" samples. To understand denaturing, think about the way a regular sampler allows one to control volume. Typically, all of the samples of a particular instrument will have approximately the same volume. The sampler itself controls the volume of each note it plays by adjusting the gain as it plays back the sample.
Thus, the volume of a particular note produced by a sampler depends not on the sample itself, but on the way that the musician plays the note. This is a primitive form of denaturing: removing the volume parameter from the samples to make them controllable by the musician.
Pitch is somewhat denatured in typical samplers, because they only have a few samples of each timbre and use pitch shifting to control the actual pitch produced. Unfortunately, most samples, especially bowed string sounds, come with vibrato built into the sample itself. Since the present invention has real-time control
of vibrato (the preferred embodiment's pitch detection is fine-grained enough to detect the actual vibrato played by the musician) , users may not always want the sampler to add vibrato for them. Thus, the present invention's preferred embodiment sample library contains entirely vibrato-less samples.
Brightness is usually fairly consistent across the samples for a particular timbre. Many samplers offer sophisticated filtering features to control the brightness (and other aspects of the spectral shape) of the tones produced. The present invention has separate filters for each polyphonic voice, to control the brightness of each note continuously.
But filtering is not always able to capture the way an instrument's timbre changes as it gets louder and softer. For example, in string instruments, loudly played notes are not just brighter than softly played ones; they are also somewhat more inharmonic. So the present invention's preferred embodiment sample library contains pairs of samples for each particular instrument, one soft and one loud. The analyzed brightness of a guitar note can then control the relative volumes of these two tones.
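A minimal sketch, assuming a normalized brightness value and an equal-power crossfade law (neither specified by this disclosure), of using analyzed brightness to balance the soft and loud samples:

```python
# Map a normalized (0..1) brightness control to gains for the soft/loud
# sample pair of one instrument. The equal-power law is an assumption.
import math

def loud_soft_gains(brightness):
    """Return (soft_gain, loud_gain) for a 0..1 brightness value."""
    theta = brightness * math.pi / 2.0
    return math.cos(theta), math.sin(theta)

soft_g, loud_g = loud_soft_gains(0.25)
# mixed = soft_g * soft_sample + loud_g * loud_sample
```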
Denatured samples are used to respond to the control signals that specify even/odd harmonic balance and pitched/unpitched balance. Using sophisticated analysis and resynthesis techniques not germane to the presently claimed invention, a sample that is stored has been split into three parts. One contains the energy from even-numbered harmonics, one contains the energy from odd-numbered harmonics, and one contains the energy that is not
harmonic at all, i.e., the noise. So instead of storing one trumpet sound (for example) , the present invention preferably stores three sounds that, when played together, sound like a trumpet. The advantage is that by changing the relative volumes of the three samples, it is possible to drastically alter the timbre of the trumpet.
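Purely as an illustrative sketch (the gain law below is an assumption, not the disclosed resynthesis technique), recombining the three stored components under the analysis parameters might look like:

```python
# Mix the even-harmonic, odd-harmonic, and noise components of one stored
# sound under gains derived from the analyzed even/odd balance and
# pitched/unpitched balance. Arrays must be equal length.
import numpy as np

def mix_components(even, odd, noise, even_odd_balance, noise_level):
    """even, odd, noise: numpy sample arrays for one stored sound.
    even_odd_balance, noise_level: normalized (0..1) analysis values."""
    pitched = 1.0 - noise_level
    return (pitched * even_odd_balance * even
            + pitched * (1.0 - even_odd_balance) * odd
            + noise_level * noise)
```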
This means that every time a synthesized note is generated, the present invention is actually playing back five samples: even bright, odd bright, even dull, odd dull, and noise. So the present invention's six-voice polyphony is actually thirty-voice polyphony, with each guitar note requiring five samples. (By the way, one can individually select all five samples; they do not have to come from the same instrument; thus, one can have trumpet even harmonics and clarinet odd harmonics if desired.)
In the present invention, as soon as the analysis detects some sound on a particular string, it tells the sampler to start playing something. For example, it could simply repeat the pitch of the note that was last played, or it could play nothing but the noise sample. A few milliseconds later, as the analysis becomes more sure of the pitch and amplitude, it can easily adjust them in the sample playback. This avoids any perceptible lag between when a note is played by the musician on the guitar 4 and when a synthesized sound begins.
effects processing by the processors 14, 16
The present invention, in addition to tracking five parameters in real time to control sample playback, has a mode for digital
effects processing. There are already a number of digital effects processors on the market, so the present invention need not have standard effects like reverb, flanging, and so on.
What makes the present invention's digital effects processing special is the fact that the effects can be applied individually to each of the six audio inputs. For a guitar, this means that each string's sound will go through separate effects, instead of the combined sound of all six strings being processed as a single signal. This can be useful, for example, in producing desirable distortion from the electric guitar 4 (see, e.g., Non-Linear Distortion Using Chebyshev Polynomials, by Arfib & LeBrun). With the conventional, non-hexaphonic distortion common today, many musicians feel that the amount of distortion they use is either correct for chords or correct for monophonic lines, but not both at the same time. In the present invention, however, each string can be individually distorted, so there will be the same amount of distortion on both lead lines and rhythm parts.
Another hex effect is harmonization of each string individually, to give the effect of a 12-string guitar from a 6-string input. (And, of course, the added voices could be tuned to different intervals for each string.)
The present invention also has "pitch-synchronous" effects processing, meaning that the effect depends on the pitch of the note being affected. So there can be a digital delay whose length depends on the pitch of the note being delayed. Pitch-synchronous
harmonization allows effects like "add a third above every note, major or minor depending on which fits into the key of B flat."
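As an illustrative sketch only (the delay length in periods and the feedback amount are assumed parameters, not from this disclosure), a pitch-synchronous delay ties its delay line to the period of the analyzed note:

```python
# A minimal sketch of a pitch-synchronous delay: the delay length is a
# whole number of periods of the note being delayed, so the effect tracks
# the analyzed pitch of each string.
import numpy as np

def pitch_synchronous_delay(signal, sample_rate, pitch_hz,
                            periods=4, feedback=0.4):
    """Delay a string signal by `periods` cycles of its own pitch."""
    delay = int(round(periods * sample_rate / pitch_hz))
    out = np.array(signal, dtype=float)  # copy so the input is untouched
    for i in range(delay, len(out)):
        out[i] += feedback * out[i - delay]
    return out
```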
The present invention's effects can also depend on the analyzed brightness, even/odd balance, and pitched/unpitched balance. A possible use for this is to have brightness affect one of the parameters of a digital filter, producing a compressor of brightness instead of volume. Or one could map the parameter backwards, producing a brightness difference enhancer to make dull notes duller and bright notes brighter. Since distortion generally adds odd harmonic energy, another possible effect is to control distortion amount based on even/odd balance.
In the hexaphonic waveform mode (i.e., six signal processing paths for the six-string guitar 4), the processors perform a variety of algorithms which act directly on the waveforms generated by the strings and modify them; it is these modified waveforms that are sent to the output, rather than synthesized waveforms.
other musical instruments
Although the present invention in its preferred embodiment is primarily designed with the guitar in mind, it will work with other musical instruments (both keyboard and non-keyboard types) as well. The choice of six channels of analysis and sample playback is biased towards the guitar, but few non-keyboard instruments have more than six voice polyphony, so six is enough for most purposes. Of course, monophonic or less than 6-voice polyphonic instruments will not be a problem.
Instruments that provide some sort of gestural information to the present invention (for example, a saxophone with sensors on the pads) will work with the invention. They will receive all the benefits that guitars receive in terms of real time analysis of timbral parameters, close coupling to the synthesized sound, and responsiveness.
Regular monophonic acoustic instruments will receive most of these benefits also, even without any special sensors. The only difference is that the lack of gestural information will cause responsiveness problems over the MIDI protocol and will introduce slightly more indeterminacy about the first moments of a tone.
Thus, the present invention is well adapted to carry out the objects and attain the ends and advantages mentioned above as well as those inherent therein. While preferred embodiments of the invention have been described for the purpose of this disclosure, changes in the construction and arrangement of parts and the performance of steps can be made by those skilled in the art, which changes are encompassed within the spirit of this invention as defined by the appended claims. What is claimed is:
Claims
1. A musical instrument responsive controller, comprising: means for receiving a sound signal from a musical instrument, wherein said sound signal is a signal responsive to sound produced from the musical instrument in response to a musician's manipulation thereof; and means, responsive to a received sound signal, for generating at least three separate control signals wherein one of said control signals is responsive to a pitch of the received sound signal, another of said control signals is responsive to an amplitude of the received sound signal, and a further control signal is responsive to a timbre characteristic of the received sound signal.
2. A musical instrument responsive controller as defined in claim 1, wherein said timbre characteristic is brightness.
3. A musical instrument responsive controller as defined in claim 1, wherein said timbre characteristic is even/odd harmonic balance.
4. A musical instrument responsive controller as defined in claim 1, wherein said timbre characteristic is pitched/unpitched balance.
5. A musical instrument responsive controller as defined in claim 1, wherein: said controller further comprises means for receiving a manipulation signal from the musical instrument, wherein said manipulation signal is a signal responsive to at least one type of the musician's manipulation of the musical instrument; and said means for generating is also responsive to a received manipulation signal.
6. A musical instrument responsive controller as defined in claim 5, wherein said means for generating includes means for determining a fundamental pitch of said sound signal in response to both said manipulation signal and a time domain analysis of said sound signal.
7. A musical instrument responsive controller as defined in claim 1, wherein the musical instrument is a guitar.
8. A musical instrument responsive controller as defined in claim 1, wherein: said further control signal defines a brightness characteristic for a synthesized sound produced in response to said control signals; and said means for generating generates additional control signals in response to other timbre characteristics of the received sound, wherein one of said additional timbre-responsive control signals defines an odd/even harmonic characteristic for the synthesized sound and wherein another of said additional timbre-responsive control signals defines a noise characteristic for the synthesized sound.
9. A musical instrument responsive controller as defined in claim 1, further comprising means, connected to said means for generating, for generating a synthesized musical sound having a different voice from the sound produced by the musical instrument but having pitch, amplitude and timbre of the voice responsive to said control signals.
10. A musical instrument responsive controller as defined in claim 9, further comprising a non-keyboard musical instrument defining the musical instrument, wherein said non-keyboard musical instrument is connected to said means for receiving.
11. A musical instrument responsive controller as defined in claim 1, wherein said means for generating includes means for determining frequency composition of said sound signal and means for defining a brightness characteristic of said sound signal in response to a determined frequency composition.
12. A musical instrument responsive control method, comprising: receiving a sound signal from a musical instrument, wherein said sound signal is a signal responsive to sound produced from the musical instrument in response to a musician's manipulation thereof; and generating at least three separate control signals wherein one of said control signals is responsive to a pitch of the received sound signal, another of said control signals is responsive to an amplitude of the received sound signal, and a further control signal is responsive to a timbre characteristic of the received sound signal.
13. A musical instrument responsive control method as defined in claim 12, further comprising producing a non-musical sound effect in response to at least one of said control signals.
14. A musical instrument responsive control method as defined in claim 12, further comprising producing a musical sound in response to said control signals.
15. A musical sound synthesis method, comprising:
(a) detecting a pitch selecting manipulation of a musical instrument;
(b) receiving an electrical signal representing a sound generated from the musical instrument in response to a detected pitch selecting manipulation;
(c) performing a frequency analysis of the electrical signal to determine frequencies present in the sound; (d) performing, responsive to a detected pitch selecting manipulation of the musical instrument, a time analysis of the electrical signal to determine a fundamental frequency of the electrical signal;
(e) determining from the frequency analysis and the determined fundamental frequency how much energy is in harmonics of the fundamental frequency present in the frequency analysis of the sound signal; and
(f) generating timbre control signals in response to said step (e).
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US28370894A | 1994-08-01 | 1994-08-01 | |
US08/283,708 | 1994-08-01 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1996004642A1 true WO1996004642A1 (en) | 1996-02-15 |
Family ID: 23087215
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US1995/009619 WO1996004642A1 (en) | 1994-08-01 | 1995-07-31 | Timbral apparatus and method for musical sounds |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO1996004642A1 (en) |
1995-07-31: WO PCT/US1995/009619 patent/WO1996004642A1/en (active, Application Filing)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4429609A (en) * | 1981-12-14 | 1984-02-07 | Warrender David J | Pitch analyzer |
US5014589A (en) * | 1988-03-31 | 1991-05-14 | Casio Computer Co., Ltd. | Control apparatus for electronic musical instrument for generating musical tone having tone pitch corresponding to input waveform signal |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2000026896A2 (en) * | 1998-10-29 | 2000-05-11 | Paul Reed Smith Guitars, Limited Partnership | Fast find fundamental method |
WO2000026896A3 (en) * | 1998-10-29 | 2000-08-10 | Paul Reed Smith Guitars Limite | Fast find fundamental method |
US6766288B1 (en) | 1998-10-29 | 2004-07-20 | Paul Reed Smith Guitars | Fast find fundamental method |
WO2009094180A1 (en) * | 2008-01-24 | 2009-07-30 | 745 Llc | Method and apparatus for stringed controllers and/or instruments |
US8017857B2 (en) | 2008-01-24 | 2011-09-13 | 745 Llc | Methods and apparatus for stringed controllers and/or instruments |
US8246461B2 (en) | 2008-01-24 | 2012-08-21 | 745 Llc | Methods and apparatus for stringed controllers and/or instruments |
US11348562B2 (en) * | 2017-11-07 | 2022-05-31 | Yamaha Corporation | Acoustic device and acoustic control program |
CN111696500A (en) * | 2020-06-17 | 2020-09-22 | 不亦乐乎科技(杭州)有限责任公司 | Method and device for identifying MIDI sequence chord |
CN111696500B (en) * | 2020-06-17 | 2023-06-23 | 不亦乐乎科技(杭州)有限责任公司 | MIDI sequence chord identification method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6816833B1 (en) | Audio signal processor with pitch and effect control | |
JP3812328B2 (en) | Automatic accompaniment pattern generation apparatus and method | |
US6946595B2 (en) | Performance data processing and tone signal synthesizing methods and apparatus | |
US7750230B2 (en) | Automatic rendition style determining apparatus and method | |
US7816599B2 (en) | Tone synthesis apparatus and method | |
US5430244A (en) | Dynamic correction of musical instrument input data stream | |
WO1996004642A1 (en) | Timbral apparatus and method for musical sounds | |
JP4407473B2 (en) | Performance method determining device and program | |
US7420113B2 (en) | Rendition style determination apparatus and method | |
US6657115B1 (en) | Method for transforming chords | |
JP3812510B2 (en) | Performance data processing method and tone signal synthesis method | |
JP3530601B2 (en) | Frequency characteristic control apparatus and frequency characteristic control method for musical tone signal | |
JP3530600B2 (en) | Frequency characteristic control apparatus and frequency characteristic control method for musical tone signal | |
US10643594B2 (en) | Effects device for a musical instrument and a method for producing the effects | |
US5942711A (en) | Roll-sound performance device and method | |
KR20040101192A (en) | Generating percussive sounds in embedded devices | |
JP4802947B2 (en) | Performance method determining device and program | |
JP2002297139A (en) | Playing data modification processor | |
CN112634847B (en) | Electronic musical instrument, control method, and storage medium | |
JP3812509B2 (en) | Performance data processing method and tone signal synthesis method | |
JPH08106291A (en) | Level control device of musical sound signal | |
JP2000003175A (en) | Musical tone forming method, musical tone data forming method, musical tone waveform data forming method, musical tone data forming method and memory medium | |
JP3706371B2 (en) | Musical signal frequency characteristic control device and frequency characteristic control method | |
JP3760909B2 (en) | Musical sound generating apparatus and method | |
JP3455976B2 (en) | Music generator |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states | Kind code of ref document: A1; Designated state(s): DE JP |
AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): KE MW SD SZ UG AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG |
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
REG | Reference to national code | Ref country code: DE; Ref legal event code: 8642 |
122 | Ep: pct application non-entry in european phase |