WO2009046460A2 - Phase-amplitude 3-d stereo encoder and decoder - Google Patents

Phase-amplitude 3-d stereo encoder and decoder

Info

Publication number
WO2009046460A2
Authority
WO
WIPO (PCT)
Prior art keywords
channel
signal
audio
encoding
localization
Prior art date
Application number
PCT/US2008/079004
Other languages
French (fr)
Other versions
WO2009046460A3 (en)
Inventor
Jean-Marc Jot
Martin Walsh
Edward Stein
Juha Oskari Merimaa
Michael M. Goodwin
Original Assignee
Creative Technology Ltd
Priority claimed from US12/047,285 (US8345899B2)
Application filed by Creative Technology Ltd
Priority to CN200880119420.4A (CN101889307B)
Priority to GB1006666.0A (GB2467247B)
Publication of WO2009046460A2
Publication of WO2009046460A3


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S3/00Systems employing more than two channels, e.g. quadraphonic
    • H04S3/02Systems employing more than two channels, e.g. quadraphonic of the matrix type, i.e. in which input signals are combined algebraically, e.g. after having been phase shifted with respect to each other
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • H04S2420/00Techniques used stereophonic systems covered by H04S but not provided for in its groups
    • H04S2420/03Application of parametric coding in stereophonic audio systems

Definitions

  • the present invention relates to signal processing techniques. More particularly, the present invention relates to methods for processing audio signals.
  • Two-channel phase-amplitude stereo encoding, also known as “matrixed surround encoding” or “matrix encoding”, is widely used for connecting the audio output of a video gaming system to a home theater system for multichannel surround sound reproduction, and for low-bandwidth or two-channel transmission or recording of surround sound movie soundtracks.
  • a multichannel audio mix is computed in real time (during game play) by an interactive audio spatialization engine and down-mixed to two channels by use of a matrixed surround encoding process identical to those used for matrix encoding multi-channel movie soundtracks.
  • the surround sound mix can be transmitted via a single standard stereo audio connection or via an S/PDIF coaxial or optical cable connection commonly available in current home theater equipment.
  • the multichannel mix composed in the interactive audio rendering engine is typically obtained as a combination (mixing) of localized sound components reproducing point sources (primary sound components) and of reverberation or spatially diffuse sound components (ambient sound components).
  • an advantage of phase-amplitude stereo encoding compared to alternative discrete multi-channel audio data formats (such as Dolby Digital or DTS) is that the encoded data stream is a two-channel audio signal that can be played back directly (without any decoding) over standard two-channel stereo loudspeakers or headphones.
  • a matrixed surround decoder can be used to recover a multichannel signal from the matrix-encoded two-channel signal.
  • the fidelity of the spatial reproduction typically suffers from inaccurate source loudness reproduction, inaccurate spatial reproduction, localization steering artifacts, and lack of "discreteness” (or “source separation”), when compared to direct multi-channel reproduction without matrixed surround encoding/decoding.
  • MPEG Surround technology enables the transmission, over one low-bit-rate digital audio connection, of a two-channel matrix-encoded signal compatible with existing commercial matrixed surround decoders, along with an auxiliary spatial information data stream that an MPEG Surround decoder utilizes in order to recover a faithful reproduction of the original discrete multi-channel mix.
  • the transmission of auxiliary data along with the audio signal requires a new digital connection format incompatible with standard stereo equipment.
  • Another limitation of the above audio encoding-decoding technologies is their restriction to horizontal-only spatialization, their bias towards a particular multichannel loudspeaker layout, and their reliance on the spatial audio rendering technique known as multi-channel amplitude panning.
  • a method for two-channel phase-amplitude stereo encoding of one or more sound sources in the time domain or in the frequency domain, such that the energy of each sound source is preserved in the matrix encoded signal.
  • a method operating in the time domain or in the frequency domain, for two-channel phase-amplitude stereo encoding of one or more localized sound sources and one or more unlocalized sound sources such that the contribution of an unlocalized source in the matrix encoded signal is substantially uncorrelated between the left and right encoded output channels.
  • a method for two-channel phase-amplitude stereo encoding of one or more localized sound sources operating in the time domain or in the frequency domain, such that each sound source is assigned a localization in three dimensions (including up-down discrimination in addition to left-right and front-back discrimination) by use of frequency-independent inter-channel phase and amplitude differences.
  • a frequency-domain method for phase-amplitude stereo decoding of a two-channel stereo signal including frequency-domain spatial analysis of 2-D or 3-D localization cues in the recording and re-synthesis of these localization cues using any preferred spatialization technique, thereby allowing faithful reproduction of 2-D or 3-D positional audio cues and reverberation or ambient cues over headphones or arbitrary multi-channel loudspeaker reproduction formats, while preserving source separation despite prior encoding over only two audio channels.
  • FIG. 1A is a simplified functional diagram of an interactive gaming audio engine with single-cable audio output connection to a home theater system for audio playback in a standard 5-channel horizontal-only surround sound reproduction format.
  • FIG. IB is a diagram illustrating a prior-art 5-2-5 matrixed surround encoding-decoding scheme where a 5-channel recording feeds a multichannel matrixed surround encoder to produce a 2-channel matrix-encoded signal and the matrix-encoded signal then feeds a matrixed surround decoder to produce 5 output signals for reproduction over loudspeakers.
  • FIG. 1C is a diagram illustrating a prior-art multichannel matrixed surround encoder for encoding 2-D positional audio cues into a two-channel signal, from a source in a standard 5-channel horizontal-only spatial audio recording format.
  • FIG. 2A is a diagram illustrating peripheral phase-amplitude matrixed surround encoding according to the amplitude panning angle a on a notional encoding circle in the horizontal plane, and the dominance vector δ used in active matrixed surround decoders, as described in the prior art. The values of the physical azimuth angle θ are indicated for standard loudspeaker locations in the horizontal plane.
  • FIG. 2B is a diagram illustrating phase-amplitude matrixed surround encoding on a notional encoding sphere known as the "Scheiber sphere," as described in the prior art, represented by the amplitude panning angle a and the inter-channel phase-difference angle β.
  • FIG. 3 is an illustration of the Gerzon vector on the listening circle in the horizontal plane, computed for a sound component amplitude-panned between loudspeaker channels L and Ls.
  • FIG. 4A is a 2-D plot of the Gerzon velocity vector obtained by 4-channel peripheral panning in 10-degree azimuth increments and radial panning in 9 increments, for loudspeakers Ls , L, R, and Rs respectively located at azimuth angles -110, -30, 30 and 110 degrees on the listening circle in the horizontal plane.
  • FIG. 4B is a 2-D plot of the Gerzon velocity vector obtained by 4-channel peripheral panning in 10-degree azimuth increments and radial panning in 9 increments, for loudspeakers Ls, L, R, and Rs respectively located at azimuth angles -130, -40, 40 and 130 degrees on the listening circle in the horizontal plane.
  • FIG. 5A is a 2-D plot of the dominance vector on the phase-amplitude encoding circle for the panning localizations and loudspeaker positions represented in FIG. 4A, with the surround encoding angle a_S set to -148 degrees, in accordance with one embodiment of the invention.
  • FIG. 5B is a 2-D plot of the dominance vector on the phase-amplitude encoding circle for the panning localizations and loudspeaker positions represented in FIG. 4B, with the surround encoding angle a_S set to -135 degrees, in accordance with another embodiment of the invention.
  • FIG. 6A is a diagram illustrating a 6-channel 3-D positional audio panning module in accordance with one embodiment of the invention.
  • FIG. 6B is a diagram illustrating a multichannel phase-amplitude encoding matrix for converting a 6-channel 3-D audio signal into a two-channel phase-amplitude matrix-encoded 3-D audio signal, in accordance with one embodiment of the invention.
  • FIG. 6C depicts a complete interactive phase-amplitude 3-D stereo encoder, in accordance with one embodiment of the invention.
  • FIG. 7A is a signal flow diagram illustrating a phase-amplitude matrixed surround decoder in accordance with one embodiment of the present invention.
  • FIG. 7B is a signal flow diagram illustrating a phase-amplitude matrixed surround decoder for multichannel loudspeaker reproduction, in accordance with one embodiment of the present invention.
  • FIG. 8 is a signal flow diagram illustrating a phase-amplitude stereo encoder in accordance with one embodiment of the present invention.
  • FIG. 1B depicts a 5-2-5 matrix encoding-decoding scheme where a 5-channel recording feeds a matrixed surround encoder and the resulting 2-channel matrix-encoded signal feeds a matrixed surround decoder.
  • the purpose of such a matrix encoding-decoding scheme is to reproduce a listening experience that closely approaches that of listening to the original N-channel signal over loudspeakers located at the same N positions around a listener.
  • FIG. 1C depicts a multichannel phase-amplitude matrixed surround encoder for encoding 2-D positional audio cues into a two-channel signal by downmixing a 5-channel signal in the standard horizontal-only "3-2 stereo" format (Ls, L, C, R, Rs) corresponding to the loudspeaker layout depicted in FIG. 1A.
  • L_T = L + √(1/2) C + j (cos σ_S Ls + sin σ_S Rs); R_T = R + √(1/2) C − j (sin σ_S Ls + cos σ_S Rs) (1.) where j denotes an idealized 90-degree phase shift and the angle σ_S is within [0, π/4].
  • the relative 90-degree phase shift applied on the surround channels Ls and Rs in equation (1) is commonly realized by use of an all-pass filter applying a phase shift φ on the front input channels and an all-pass filter applying a phase shift φ + 90 degrees on the surround channels.
  • a "passive" decoding matrix can be defined as the Hermitian transpose of the encoding matrix. If the encoding equations (1) are formulated in matrix form:
  • the encoding matrix E is preferably energy-preserving (i.e. the sum of the squared left and right encoding coefficients in each column of E is unity)
  • the diagonal coefficients of the combined 5x5 encoding/decoding matrix E H E are all unity. This implies that each channel of the original multichannel signal is exactly transmitted to the corresponding decoder output channel. However, each decoder output channel also receives significant additional contributions (i.e. "bleeding") from the other encoder input channels, which results in significant spatial audio reproduction discrepancy between the original multichannel signal ⁇ L s , L, C, R, Rs] and the reproduced signal ⁇ Ls', L', C, R', Rs' ⁇ after matrixed surround encoding and decoding.
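By way of illustration, here is a minimal numerical sketch of this passive encode-decode behavior, built from the equation (1) matrix; the value of sigma_s below is an assumption for illustration, as the text only constrains it to [0, π/4]:

```python
# Sketch of the 5-to-2 encoding matrix E of equation (1) and its passive
# decode E^H; the printed combined matrix shows the unit diagonal and the
# off-diagonal "bleeding" discussed in the text.
import numpy as np

sigma_s = np.pi / 5        # assumed surround encoding angle in [0, pi/4]
j = 1j                     # idealized 90-degree phase shift

# Columns ordered (Ls, L, C, R, Rs); rows are (L_T, R_T).
E = np.array([
    [ j * np.cos(sigma_s), 1, np.sqrt(0.5), 0,  j * np.sin(sigma_s)],
    [-j * np.sin(sigma_s), 0, np.sqrt(0.5), 1, -j * np.cos(sigma_s)],
])

D = E.conj().T             # passive decoding matrix (Hermitian transpose)
G = D @ E                  # combined 5x5 encode/decode matrix

print(np.round(np.abs(np.diag(G)), 3))   # diagonal coefficients are all unity
print(np.round(np.abs(G), 3))            # off-diagonal terms: channel bleeding
```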
  • an active matrixed surround decoder can improve the "source separation" performance compared to that of a passive matrixed surround decoder in conditions where the matrix-encoded signal presents a strong directional dominance.
  • the effect of the steering logic is to redistribute signal power towards the channels indicated by the direction of the dominance vector ⁇ observed on the encoding circle, as illustrated in FIG. 2A.
  • When the magnitude |δ| of the dominance vector is near zero, an active matrixed surround decoder must revert to the passive behavior described previously (or to some other passive matrix). This occurs whenever the signals L_T and R_T are uncorrelated or weakly correlated (i.e. contain mostly ambient components) or in the presence of a plurality of concurrent primary sound sources distributed around the encoding circle.
  • prior art 5-2-5 matrix encoding/decoding schemes based on time-domain active matrixed surround decoders are able to accurately reproduce the pairwise amplitude panning of a single primary source anywhere on the encoding circle.
  • they cannot produce an effective and accurate directional enhancement in the presence of multiple concurrent primary sound components, nor preserve the diffuse spatial distribution of ambient sound in the presence of a dominant primary source.
  • noticeable steering artifacts tend to occur (e.g. shifting of sound effect localization or narrowing of the stereo image in the presence of centered dialogue).
  • this precaution is not possible in a gaming application where the mix is automatically driven by real-time game play.
  • the multichannel signal representing the spatial audio scene can be modeled as a superposition of primary and ambient sound components.
  • a primary component may be directionally encoded by use of a "panning" module (labeled pan in FIG. 1A) that receives a monophonic source signal and produces a multichannel signal for adding into the output mix.
  • the function of this spatial panning module is to assign to the source a perceived direction observed on the listening sphere centered on the listener, while preserving source loudness and spectral content.
  • the panning weight vector is denoted P = [P_1 … P_N]; the Gerzon vector is defined as g = Σ_m w_m e_m (6.) where e_m is the unit vector pointing from the listening position towards channel m, with weights w_m = P_m / Σ_n P_n (7.) for the velocity vector and w_m = P_m² / Σ_n P_n² (8.) for the energy vector.
  • the Gerzon "velocity vector" defined by equations (6, 7) is proportional to the active acoustic intensity vector measured at the listening location. It is adequate for describing the perceived localization of primary components at low frequencies (below roughly 700 Hz) for a centrally located listener, whereas the "energy vector” defined by equations (6, 8) may be considered more adequate for representing the perceived sound localization at higher frequencies.
  • Multi-channel sound spatialization techniques such as Ambisonics or VBAP can be regarded as different approaches to solving for the set of panning weights p m in equation (6) given the desired direction of the Gerzon vector.
  • the magnitude of the Gerzon vector characterizes the spatial "sharpness" or "focus" of sound images and, when less than 1, may reflect interior panning across the loudspeaker array (such as a "fly-by" or "fly-over" sound event).
  • the Gerzon vector may also be applied for characterizing the directional distribution of ambient sound components in multichannel reproduction, such as room reverberation or spatially extended sound events (e.g. surrounding applause, or the more localized sound of a nearby waterfall).
  • for ambient components, the loudspeaker signals should be mutually uncorrelated, and the Gerzon energy vector is then proportional to the active acoustic intensity. Its magnitude is zero for evenly distributed ambient sound and otherwise increases in the direction of spatial emphasis.
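As an illustration of equations (6-8) as reconstructed above, the following sketch computes both Gerzon vectors for an amplitude-panned source; the FIG. 4A loudspeaker azimuths are used, and the axis convention (x to the right, y to the front) is an assumption:

```python
# Gerzon velocity and energy vectors: g = sum_m w_m * e_m, with velocity
# weights P_m / sum(P) and energy weights P_m^2 / sum(P^2).
import numpy as np

speaker_az = np.radians([-110, -30, 30, 110])            # Ls, L, R, Rs
e = np.stack([np.sin(speaker_az), np.cos(speaker_az)])   # unit vectors per channel

def gerzon(p, energy=False):
    w = p**2 if energy else p
    return e @ (w / w.sum())

# Unit-energy source panned unequally between Ls and L (cf. FIG. 3):
p = np.array([0.8, 0.6, 0.0, 0.0])
print(gerzon(p))                # velocity vector: magnitude < 1 between speakers
print(gerzon(p, energy=True))   # energy vector: different radius and direction
```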
  • the design requirements for a matrix encode- decode system in terms of spatial audio scene reproduction can be formulated as follows: the power and the Gerzon vector direction of each individual sound component (primary or ambient) in the scene, hereafter referred to as the spatial cues associated to each sound source, should be correctly reproduced.
  • it is assumed that ambient components are spatially diffuse, i.e. that their Gerzon energy vector is null. This assumption is not restrictive in practice for simulating room reverberation or surrounding background ambience in the virtual environment.
  • an additional design requirement for a matrixed surround encoding-decoding scheme arises from technology compatibility considerations: it is desirable that the proposed interactive matrix encoder consistently produce an output suitable for decoding with prior-art matrixed surround decoders, which assume specific phase-amplitude relationships between the encoded channel signals L_T and R_T for a sound component panned to one of the five channels (Ls, L, C, R, Rs), as indicated by equation (1).
  • the matrixed surround decoder is compatible with legacy matrix encoded content, i.e. responds to strong directional dominance in its input signal in a manner consistent with the response of a prior-art matrixed surround decoder.
  • the matrixed surround decoder should produce a natural sounding "upmix" when subjected to any standard stereo source (not necessarily matrix encoded), ideally without need to modify its operation (such as switching from "movie mode” to "music mode", as is common in prior-art matrixed surround decoders).
  • An improved phase-amplitude matrixed surround encoder is elaborated in the following.
  • the positional encoding of primary sound components in the 2-D horizontal circle is considered.
  • a 3-D spherical encoding scheme is derived.
  • the encoding scheme is completed by including the addition of spatially diffuse ambient sound components in the encoded signal.
  • spatial cues are provided for each individual sound source by a gaming engine or by a studio mixing application and the encoder operates on a time domain or frequency-domain representation of the source signals.
  • a multi-channel source signal is provided in a known spatial audio recording format, this signal is converted to or received in a frequency domain representation, and the spatial cues for each time and frequency are derived by spatial analysis of the multi-channel source signal.
  • L ⁇ [t] ⁇ m L m S m [t]
  • R(a) sin( ⁇ /2 + ⁇ /4) (10.)
  • when a spans an interval extended to [-π, π], all positions on the encoding circle of FIG. 2A are uniquely encoded by equations (10), with panning coefficients of opposite polarity for positions on the surround arc (L-Ls-Rs-R).
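A minimal sketch of this peripheral encoding, directly transcribing equations (9, 10); the function names are illustrative, not from the patent:

```python
# 2-D peripheral phase-amplitude encoding: each source S_m panned at angle
# a_m on the encoding circle contributes L(a) = cos(a/2 + pi/4) and
# R(a) = sin(a/2 + pi/4). Energy is preserved since L(a)^2 + R(a)^2 = 1.
import numpy as np

def peripheral_gains(a):
    """Encoding gains for amplitude panning angle a in [-pi, pi]."""
    return np.cos(a / 2 + np.pi / 4), np.sin(a / 2 + np.pi / 4)

def encode(sources, angles):
    """Mix mono source signals into the matrix-encoded pair (equation 9)."""
    Lt = sum(peripheral_gains(a)[0] * s for s, a in zip(sources, angles))
    Rt = sum(peripheral_gains(a)[1] * s for s, a in zip(sources, angles))
    return Lt, Rt

# a beyond pi/2 yields gains of opposite polarity (the surround arc):
print(peripheral_gains(np.pi * 0.75))   # approx (-0.383, 0.924)
```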
  • the application of the phase-amplitude panning equations (10) involves mapping the desired azimuth angle θ, measured on the listening circle, to the amplitude panning angle a measured on the encoding circle; any monotonic mapping from θ to a is in principle appropriate.
  • a suitable θ-to-a angular mapping function is one which is equivalent to 5-channel pairwise amplitude panning, using a well-known prior art panning technique such as the vector-based amplitude panning method (VBAP), followed by 5-to-2 matrix encoding.
  • the 5-to-2 encoding matrix is not actually energy preserving when its inputs are not mutually uncorrelated, as is the case when a source is amplitude panned between channels. For instance, it boosts signal power by 1 + sin(2σ_S), i.e. approximately 3 dB, for a sound panned to rear center, and by 1 + √2/2, or 2.3 dB, for a sound panned equally between C and L.
  • such energy deviations are eliminated by scaling each source signal according to its panning position.
  • the preferred solution for the set of non-directional panning weights is the one that exhibits left-right symmetry and a front-to-back amplitude panning ratio equal to |cos θ_S / cos θ_F|.
  • FIG. 4A shows a plot of the Gerzon velocity vector g derived from P(θ, χ) by equations (6, 7) when θ and χ vary in 10-degree increments, with loudspeakers Ls, L, R, and Rs respectively located at azimuth angles -110, -30, 30 and 110 degrees on the listening circle in the horizontal plane.
  • the radial panning positions for a given azimuth value are connected by a solid line, which is prolonged by a dotted line connecting to the corresponding point on the edge of the listening circle.
  • FIG. 4B illustrates an alternative embodiment of the invention where loudspeakers Ls, L, R, and Rs are respectively located at azimuth angles -130, -40, 40 and 130 degrees on the listening circle.
  • the encoding positions for a given azimuth value are connected by a solid line.
  • this solid line is prolonged by a dotted segment connecting to the corresponding encoding point on the edge of the encoding circle, defined by the peripheral encoding equations (10) and assuming linear mapping from ⁇ to a.
  • the choice of the mapping functions from the radial panning angle χ to the radius r and to the elevation angle φ is not critical, provided that the mapping functions be monotonic and such that, when χ increases from 0 to 90 degrees, the radius r decreases from 1 to 0 and the elevation angle φ increases from 0 to 90 degrees.
  • any source localization on the upper hemisphere or the horizontal circle is thereby encoded by inter-channel amplitude and phase differences in the 2-channel signal {L_T, R_T}.
  • L(a, ⁇ ) cos(a/2 + ⁇ /4)
  • R(a, ⁇ ) sin( ⁇ /2 + ⁇ /4) e "#2 . (17.)
  • the inter-channel phase difference angle ⁇ is interpreted as a rotation around the left-right axis of the plane in which the amplitude panning angle a is measured. If a spans [- ⁇ /2, ⁇ /2] and ⁇ spans ]- ⁇ , ⁇ ], the angle coordinates (a, ⁇ ) uniquely map any inter-channel phase and/or amplitude difference to a position on the "Scheiber sphere".
  • positive values of ⁇ will correspond to the upper hemisphere and negative values of ⁇ to the lower hemisphere.
  • a useful property is that the dominance vector δ derived by equations (5) coincides with the vertical projection onto the horizontal plane of the position (a, β) on the Scheiber sphere: δ = (sin a, cos a cos β).
  • a dominance plot such as FIG. 5A or 5B is therefore also a "top-down" view of the notional encoding positions on the Scheiber sphere.
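The following sketch illustrates this projection property, assuming the reconstructed form of equations (17) above and the commonly used definition of the dominance vector from normalized channel powers and cross-terms (the exact equations (5) are not reproduced in this excerpt, so the left-right sign convention is an assumption):

```python
# 3-D encoding on the Scheiber sphere, and verification that the dominance
# vector equals the vertical projection of the notional position (a, beta).
import numpy as np

def scheiber_gains(a, beta):
    """Complex encoding gains for panning angle a and phase difference beta."""
    L = np.cos(a / 2 + np.pi / 4) * np.exp(+1j * beta / 2)
    R = np.sin(a / 2 + np.pi / 4) * np.exp(-1j * beta / 2)
    return L, R

def dominance(L, R):
    """Dominance vector (assumed standard form of equations 5)."""
    n = abs(L) ** 2 + abs(R) ** 2
    return np.array([(abs(R) ** 2 - abs(L) ** 2) / n,
                     2 * (L * R.conjugate()).real / n])

a, beta = 0.3, 1.1
L, R = scheiber_gains(a, beta)
print(dominance(L, R))                        # equals ...
print(np.sin(a), np.cos(a) * np.cos(beta))    # ... (sin a, cos a cos beta)
```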
  • FIG. 6A depicts a 6-channel panning module (600) for assigning a 3-D positional audio localization (θ_m, φ_m) to a primary sound source signal S_m in the 6-channel format (Ls, L, T, B, R, Rs), where T denotes the Top channel and B denotes the Bottom channel, as described previously.
  • FIG. 6B depicts a phase-amplitude 3-D stereo encoding matrix module (610), where the resulting 6-channel signal (606) is matrix encoded into a two-channel phase-amplitude stereo encoded signal {L_T, R_T} according to the encoding equations described below.
  • the coefficients Ls(θ), L(θ), R(θ) and Rs(θ) in equation (21) are energy-preserving 4-channel 2-D peripheral amplitude panning coefficients derived from the azimuth angle θ using the VBAP method, according to the front and surround loudspeaker azimuth angles respectively denoted as θ_F and θ_S and assigned respectively to the front channel pair (L, R) and to the surround channel pair (Ls, Rs).
  • the resulting encoding matrix is an extension of the prior-art encoding matrix depicted in FIG. 1C, where the input C is optional.
  • the encoding matrix receives 6 input channels 606 produced by the panning module 600.
  • the input channels Ls, L, R and Rs are processed exactly as in the legacy encoding matrix shown in FIG. 1C, using multipliers 614 and all-pass filters 616.
  • the encoding matrix also receives two additional channels T and B, derives their sum and difference signals, and applies to the sum and difference signals the scaling coefficients 612, respectively cos(β_T/2) and sin(β_T/2).
  • the scaled sum and difference signals are then further attenuated and combined, respectively, with the front input channels and the scaled surround input channels.
  • Alternative embodiments of the phase-amplitude matrixed surround encoding scheme according to the present invention may be realized, within the scope of the present invention, by selecting an arbitrary value within [0, π] for β_T, instead of the value derived by equation (18).
  • the combined effect of the 3-D positional panning module 600 and of the 3-D stereo encoding matrix 610 is to map the localization (θ, φ) on the listening sphere to a notional position (a, β) on the Scheiber sphere.
  • This mapping can be configured by setting the values of the angular parameters defined previously: θ_F within [0, π/2]; θ_S within [π/2, π]; σ_S within [0, π/4]; and β_T within [0, π]. Two examples of such mapping are illustrated in FIG. 5A and 5B.
  • the setting of these parameters determines the compatibility of the encoding-decoding scheme according to the invention with legacy matrixed surround decoders and matrix-encoded content.
  • the range of possible encoding schemes can be further extended by introducing a front encoding angle parameter σ_F within [0, π/4], and replacing L and R respectively by (cos σ_F L + sin σ_F R) and (cos σ_F R + sin σ_F L) prior to applying equation (20) or (23).
  • if σ_F = 0, the channels L and R are passed unmodified to the encoded channels L_T and R_T, respectively.
  • any intermediate P-channel format (C_1, C_2, … C_P) may be used instead of the preferred 6-channel format (Ls, L, T, B, R, Rs), associated to additional or alternative intermediate channel positions {(θ_p, φ_p)} in the horizontal plane or anywhere on the listening sphere, using any 2-D or 3-D multi-channel panning technique to implement the multichannel positional panning module for each sound source signal S_m, and matrix-encoding each intermediate channel C_p as a 3-D source with localization (θ_p, φ_p) according to the panning and encoding scheme defined by equations (21, 23) or (21, 20).
  • the localization of a sound source on the listening sphere is expressed according to the Duda-Algazi angular coordinate system, where the azimuth angle µ is measured in a plane containing the source and the left-right ear axis, and the elevation angle ν measures the rotation of this plane with respect to the left-right ear axis.
  • the localization coordinates µ and ν can be mapped separately to the amplitude panning angle a and the inter-channel phase difference angle β.
  • phase-amplitude stereo encoding of the signals according to the invention can be realized in the frequency domain by applying encoding coefficients L(a_m, β_m) and R(a_m, β_m) to a frequency-domain representation of the sound source signal S_m.
  • the interactive phase-amplitude stereo encoder includes means for incorporating spatially diffuse ambience and reverberation components in the 2-channel encoded output signal {L_T, R_T}.
  • this bias is avoided by mixing the ambient components directly into the two-channel output {L_T, R_T} of the phase-amplitude encoder or into the input channels L and R of the encoding matrix 610 (whereas, in a prior-art encoding scheme, a significant amount of ambient signal energy would be mixed into the surround input channels of the encoding matrix).
  • FIG. 6C depicts an interactive phase-amplitude 3-D stereo encoder, according to a preferred embodiment of the invention.
  • Each source S_m generates a primary sound component panned by a panning module 600 described previously and depicted in FIG. 6A, which assigns the localization (θ_m, φ_m) to the source signal.
  • the output of each panning module 600 is added into the master multichannel bus 622 which feeds the encoding matrix 610 described previously and illustrated in FIG. 6B.
  • each source signal S_m generates a contribution 623 to the reverb send bus 624, which feeds a reverberation module 626, thereby producing the ambient sound component associated to the source signal S_m.
  • the reverberation module 626 simulates the reverberation of a virtual room and generates two substantially uncorrelated reverberation signals by methods well known in the prior art, such as feedback delay networks.
  • the two output signals of the reverberation module 626 are combined directly into the output {L_T, R_T} of the encoding matrix 610.
  • the per-source processing module 623 that generates the primary sound component and the ambient sound component for each source signal S_m may include filtering and delaying modules 629 to simulate distance, air absorption, source directivity, or acoustic occlusion and obstruction effects caused by acoustic obstacles in the virtual scene, using methods known in the prior art.
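A hedged sketch of such a reverberator: a small feedback delay network of the kind cited above, with two output taps mixed with orthogonal sign patterns so that the two returns are substantially uncorrelated. All delay lengths, decay constants and tap choices are illustrative assumptions, not values from the patent.

```python
# Minimal 4-line feedback delay network (FDN) producing two substantially
# uncorrelated reverberation outputs from the reverb send bus signal x.
import numpy as np

def fdn_reverb(x, fs=48000, t60=1.5):
    delays = np.array([1031, 1327, 1523, 1871])        # mutually prime, samples
    H = 0.5 * np.array([[1, 1, 1, 1], [1, -1, 1, -1],  # orthogonal Hadamard
                        [1, 1, -1, -1], [1, -1, -1, 1]])
    g = 10 ** (-3 * delays / (t60 * fs))               # per-line decay for T60
    lines = [np.zeros(d) for d in delays]
    ptr = np.zeros(4, dtype=int)
    outL, outR = np.zeros(len(x)), np.zeros(len(x))
    for n, xn in enumerate(x):
        v = np.array([lines[i][ptr[i]] for i in range(4)])  # delay-line outputs
        fb = H @ (g * v)                                    # lossy feedback
        for i in range(4):
            lines[i][ptr[i]] = xn + fb[i]
            ptr[i] = (ptr[i] + 1) % delays[i]
        # Two taps with orthogonal mixing signs -> low mutual correlation.
        outL[n] = v[0] + v[1] - v[2] - v[3]
        outR[n] = v[0] - v[1] + v[2] - v[3]
    return outL, outR
```

The two returned channels would then be mixed directly into {L_T, R_T}, as the text describes for the output of module 626.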
  • a frequency domain method for phase-amplitude matrixed surround decoding of 2-channel stereo signals such as music recordings and movie or video game soundtracks, based on spatial analysis of 2-D or 3-D directional cues in the input signal and re-synthesis of these cues for reproduction on any headphone or loudspeaker playback system, using any chosen sound spatialization technique.
  • this invention enables the decoding of 3-D localization cues from two-channel audio recordings while preserving backward compatibility with prior-art two-channel horizontal-only phase-amplitude matrixed surround encoding-decoding techniques such as described previously.
  • the present invention uses a time/frequency analysis and synthesis framework to significantly improve the source separation performance of the matrixed surround decoder.
  • the fundamental advantage of performing the analysis as a function of both time and frequency is that it significantly reduces the likelihood of concurrence or overlap of multiple sources in the signal representation, and thereby improves source separation. If the frequency resolution of the analysis is comparable to that of the human auditory system, the possible effects of any overlap of concurrent sources in the frequency-domain representation are substantially masked during reproduction of the decoder's output signal over headphones or loudspeakers.
  • FIG. 7A is a signal flow diagram illustrating a phase-amplitude matrixed surround decoder in accordance with one embodiment of the present invention. Initially, a time/frequency conversion takes place in block 702 according to any conventional method known to those of skill in the relevant arts, including but not limited to the use of a short term Fourier transform (STFT) or any subband signal representation.
  • in block 704, a primary-ambient decomposition occurs.
  • This decomposition is advantageous because primary signal components (typically direct- path sounds) and ambient components (such as reverberation or applause) generally require different spatial synthesis strategies.
  • Frequency-domain methods for primary-ambient decomposition are described in the prior art, for instance by Merimaa et al. in "Correlation-Based Ambience Extraction from Stereo Recordings", presented at the 123rd Convention of the Audio Engineering Society (October 2007).
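In the spirit of that reference, here is a sketch of a correlation-based primary-ambient split in the STFT domain; the smoothing constant, window size and the specific coherence-based estimator are assumptions for illustration, not the decoder's prescribed method:

```python
# Split a stereo signal into correlated (primary) and residual (ambient)
# STFT components using a recursively smoothed inter-channel coherence.
import numpy as np
from scipy.signal import stft

def primary_ambient(l, r, fs=48000, nfft=1024, alpha=0.9):
    _, _, L = stft(l, fs, nperseg=nfft)
    _, _, R = stft(r, fs, nperseg=nfft)
    Pll = Prr = Plr = 0.0                      # smoothed auto/cross spectra
    P_L = np.zeros_like(L)
    P_R = np.zeros_like(R)
    for t in range(L.shape[1]):
        Pll = alpha * Pll + (1 - alpha) * np.abs(L[:, t]) ** 2
        Prr = alpha * Prr + (1 - alpha) * np.abs(R[:, t]) ** 2
        Plr = alpha * Plr + (1 - alpha) * L[:, t] * np.conj(R[:, t])
        coh = np.abs(Plr) / np.maximum(np.sqrt(Pll * Prr), 1e-12)
        P_L[:, t] = coh * L[:, t]              # coherent part -> primary
        P_R[:, t] = coh * R[:, t]
    A_L, A_R = L - P_L, R - P_R                # residual -> ambient
    return P_L, P_R, A_L, A_R
```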
  • the primary signal S_P = {P_L, P_R} is then subjected to a localization analysis in block 706.
  • the spatial analysis derives a spatial localization vector d representative of a physical position relative to the listener's head. This localization vector may be three-dimensional or two-dimensional, depending on the desired mode of reproduction of the decoder's output signal.
  • in the three-dimensional case, the localization vector represents a position on a listening sphere centered on the listener's head, characterized by an azimuth angle θ and an elevation angle φ.
  • in the two-dimensional case, the localization vector may be taken to represent a position on or within a circle centered on the listener's head in the horizontal plane, characterized by an azimuth angle θ and a radius r.
  • This two-dimensional representation enables, for instance, the parametrization of fly-by and fly-through sound trajectories in a horizontal multichannel playback system.
  • the spatial localization vector d is derived, for each time and frequency, from the inter-channel amplitude and phase differences present in the signal S_P.
  • inter-channel differences can be uniquely represented by a notional position (a, ⁇ ) on the Scheiber sphere as illustrated in FIG. 2B, according to Eq. (17), where a denotes the amplitude panning angle and ⁇ denotes the inter-channel phase difference.
  • the operation of the localization analysis block 706 consists of computing the inter-channel amplitude and phase differences, followed by mapping from the notional position (a, ⁇ ) on the Scheiber sphere to the direction ( ⁇ , ⁇ ) in the three-dimensional physical space or to the position ( ⁇ , r) in the two-dimensional physical space.
  • this mapping may be defined in an arbitrary manner and may even depend on frequency.
  • the primary signal S_P is modeled as a mixture of elementary monophonic source signals S_m according to the matrix encoding equations (9, 10) or (9, 17), where the notional encoding position (a_m, β_m) of each source is defined by a known bijective mapping from a two-dimensional or three-dimensional localization in a physical or virtual spatial sound scene.
  • such a mixture may be realized, for instance, by an audio mixing workstation or by an interactive audio rendering system such as found in video gaming systems and depicted in FIG. 1A or FIG. 6C.
  • it is advantageous to implement the localization analysis block 706 such that the derived localization vector is obtained by inversion of the mapping realized by the matrix encoding scheme, so that playback of the decoder's output signal faithfully reproduces the original spatial sound scene.
  • the localization analysis 706 is performed, at each time and frequency, by computing the dominance vector according to equations (5) and applying a mapping from the dominance vector position in the encoding circle to a physical position (θ, r) in the horizontal listening circle, as illustrated in FIG. 2A and exemplified in FIG. 5A or 5B.
  • the dominance vector position may then be mapped to a three-dimensional localization (θ, φ) by vertical projection from the listening circle to the listening sphere as follows: φ = sign(β) arccos(r) (25.) where the sign of the inter-channel phase difference β is used to differentiate the upper hemisphere from the lower hemisphere.
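A compact sketch of this per-bin analysis follows. It assumes the standard dominance formula, and reads the azimuth directly off the dominance angle, whereas the patent applies a configurable dominance-to-azimuth mapping (see the lookup-table discussion further below); the axis convention is an assumption:

```python
# Per-bin spatial analysis: dominance -> (theta, phi) via equation (25).
import numpy as np

def analyze(PL, PR):
    """PL, PR: complex STFT bins of the primary signal. Returns (theta, phi)."""
    n = np.abs(PL) ** 2 + np.abs(PR) ** 2 + 1e-12
    dx = (np.abs(PR) ** 2 - np.abs(PL) ** 2) / n        # assumed eq. (5) form
    dy = 2 * (PL * np.conj(PR)).real / n
    r = np.minimum(np.hypot(dx, dy), 1.0)               # radius in encoding circle
    theta = np.arctan2(dx, dy)                          # azimuth (0 = front, assumed)
    beta = np.angle(PL * np.conj(PR))                   # inter-channel phase difference
    phi = np.sign(beta) * np.arccos(r)                  # elevation, equation (25)
    return theta, phi
```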
  • Block 708 realizes, in the frequency domain, the spatial synthesis of the primary components in the decoder output signal by applying to the primary signal S_P the spatial cues 707 derived by the localization analysis 706.
  • a variety of approaches may be used for the spatial synthesis (or "spatialization") of the primary components from a monophonic signal, including ambisonic or binaural techniques as well as conventional amplitude panning methods.
  • in one embodiment, a mono primary signal P to be spatialized is derived, at each time and frequency, by a conventional mono downmix of the channels of S_P.
  • in a preferred embodiment, the computation of the mono signal P uses downmix coefficients that depend on time and frequency, by application of the passive decoding equation for the notional position (a, β) derived from the inter-channel amplitude and phase differences computed in the localization analysis block 706: P = L*(a, β) P_L + R*(a, β) P_R (26.)
  • where L*(a, β) and R*(a, β) respectively denote the complex conjugates of the left and right encoding coefficients expressed by equations (17): L*(a, β) = cos(a/2 + π/4) e^(−jβ/2); R*(a, β) = sin(a/2 + π/4) e^(+jβ/2) (27.)
  • the spatialization method used in the primary component synthesis block 708 should seek to maximize the discreteness of the perceived localization of spatialized sound sources.
  • for ambient components, the spatial synthesis method implemented in block 710 should seek to reproduce (or even enhance) the spatial spread or diffuseness of sound components.
  • the ambient output signals generated in block 710 are added to the primary output signals generated in block 708.
  • a frequency/time conversion takes place in block 712, such as through the use of an inverse STFT, in order to produce the decoder's output signal.
  • the primary-ambient decomposition 704 and the spatial synthesis of ambient components 710 are omitted.
  • in this embodiment, the localization analysis 706 is applied directly to the input signal {L_T, R_T}.
  • in another embodiment, the time-frequency conversion blocks 702 and 712 and the ambient processing blocks 704 and 710 are omitted.
  • a matrixed surround decoder according to the present invention can offer significant improvements over prior art matrixed surround decoders, notably by enabling arbitrary 2-D or 3-D spatial mapping between the matrix-encoded signal representation and the reproduced sound scene.
  • the spatial analysis can recover, at each time and frequency, the localization d from the dominance δ computed by equations (5).
  • this inverse mapping operation is realized by a table-lookup method that returns the values of the azimuth angle θ and of the radius r given the coordinates δ_x and δ_y of the dominance vector δ.
  • the lookup tables are generated by sampling the localization-to-dominance mapping defined by the encoding scheme and tabulating its inverse.
  • the inverse mapping operation for the spatial analysis of the localization (θ, φ) from the dominance (δ_x, δ_y) is performed in two steps, using the first table to derive (θ', r') and then the second table to obtain (θ, φ).
  • the advantage of this two-step process is that it ensures high accuracy in the estimation of the localization coordinates θ and φ without employing extremely large lookup tables, despite the fact that the mapping function is heavily non-uniform and very "steep" in some regions of the encoding circle (as is visible in FIG. 5A or FIG. 5B).
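For illustration, here is a brute-force single-table inversion conveying the idea (the patent's two-step tables refine this for accuracy); `loc_to_dom` stands for whichever forward localization-to-dominance mapping the encoder implements and is an assumed input, not a function defined by the patent:

```python
# Build a nearest-neighbour inverse of a forward mapping
# loc_to_dom(theta, r) -> (dx, dy), sampled on a (theta, r) grid.
import numpy as np

def build_inverse_table(loc_to_dom, n_theta=360, n_r=64):
    thetas = np.linspace(-np.pi, np.pi, n_theta, endpoint=False)
    radii = np.linspace(0.0, 1.0, n_r)
    T, Rr = np.meshgrid(thetas, radii, indexing="ij")
    dx, dy = loc_to_dom(T, Rr)                      # sample the forward mapping
    pts = np.column_stack([dx.ravel(), dy.ravel()])
    locs = np.column_stack([T.ravel(), Rr.ravel()])

    def invert(dom_x, dom_y):
        i = np.argmin((pts[:, 0] - dom_x) ** 2 + (pts[:, 1] - dom_y) ** 2)
        return locs[i]                              # (theta, r); then phi = sign(beta) * arccos(r)

    return invert
```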
  • the sign of the inter-channel phase difference β, denoted sign(β), is computed in order to select the upper or lower hemisphere, and φ is replaced by its opposite if β is negative.
  • FIG. 7B is a signal flow diagram depicting a phase-amplitude matrixed surround decoder for multichannel loudspeaker reproduction, in accordance with one embodiment of the present invention.
  • the time/frequency conversion in block 702, primary-ambient decomposition in block 704 and localization analysis in block 706 are performed as described earlier.
  • in one embodiment, the primary output signals for the N output channels (for instance N = 4) are derived from the mono primary downmix signal denoted as P, obtained by applying the passive decoding equation (26) for the time- and frequency-dependent encoding position (a, β) on the Scheiber sphere determined by the computed dominance vector δ and sign(β) in the spatial analysis block 706.
  • signal components present exclusively in the left input channel P_L may contribute to output channels on the right side as a result of spatial ambiguities due to frequency-domain overlap of concurrent sources. Although such overlap can be minimized by appropriate choice of the frequency-domain representation, it is preferable to minimize its potential impact on the reproduced scene by populating the output channels with a set of signals that preserves the spatial separation already provided in the decoder's input signal.
  • the resulting N signals are then re- weighted in block 709 with gain factors computed based on the spatial cues 707.
  • the gain factors for each channel are determined by deriving multichannel panning coefficients at each time and frequency based on the localization vector d and on the output format, which may be provided by user input or determined by automated estimation.
  • the decoder's output format exactly corresponds to the 4-channel layout (Ls, L, R, Rs) characterized by the front-channel azimuth angle θ_F and the surround-channel azimuth angle θ_S.
  • an embodiment of the frequency-domain spatial synthesis block 708 may be realized using any sound spatialization or positional audio rendering technique whereby a mono signal is assigned a 3-D localization ( ⁇ , ⁇ ) on the listening sphere or a 2-D localization ( ⁇ , r) on the listening circle, for spatial reproduction over loudspeakers or headphones.
  • Such spatialization techniques include, but are not limited to, amplitude panning techniques (such as VBAP), binaural techniques, ambisonic techniques, and wave-field synthesis techniques.
  • the ambient passive upmix first distributes the ambient signals {A_L, A_R} to each output signal of the block, based on the given output format.
  • the left-right separation is maintained for pairs of output channels that are symmetric in the left-right direction. That is, A_L is distributed to the left and A_R to the right channel of such a pair.
  • passive upmix coefficients for the signals {A_L, A_R} may be obtained by passive upmix using equations (29) applied to {A_L, A_R} instead of {P_L, P_R}.
  • Each channel is then weighted so that the total energy of the output signals matches that of the input signals, and so that the resulting Gerzon energy vector, computed according to equations (6) and (8), be of zero magnitude.
  • the weighting coefficients can be computed once based on the output format alone, by assuming that A L and A R have the same energy and applying methods specified in the U.S. Patent Application Ser. No. 11/750,300 entitled Spatial Audio Coding Based on Universal Spatial Cues, incorporated herein by reference.
  • a perceptually accurate multi-channel spatial reproduction of the ambient components over loudspeakers requires that the ambient output signals be mutually uncorrelated.
  • the passively upmixed ambient signals are decorrelated in block 713 by use of all-pass (or substantially all-pass) decorrelation filters, also called "decorrelators".
  • all-pass filters are applied to a subset of the ambient channels such that all output channels of block 713 are mutually uncorrelated. Any other decorrelation method known to those of skill in the relevant arts is similarly viable, and the decorrelation processing may also include delay elements.
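A sketch of one such decorrelation bank, using cascaded Schroeder all-pass filters; the delay lengths and gain are illustrative assumptions, and many other decorrelator designs would serve equally well:

```python
# Decorrelate upmixed ambient channels with per-channel all-pass chains.
import numpy as np
from scipy.signal import lfilter

def allpass(x, d, g=0.5):
    """Schroeder all-pass: y[n] = -g*x[n] + x[n-d] + g*y[n-d] (unit magnitude)."""
    b = np.zeros(d + 1)
    a = np.zeros(d + 1)
    b[0], b[d] = -g, 1.0
    a[0], a[d] = 1.0, -g
    return lfilter(b, a, x)

def decorrelate_ambient(channels, base_delay=113):
    # Leave channel 0 untouched; give every other channel a distinct all-pass
    # chain so that all pairs of outputs are substantially uncorrelated.
    out = [channels[0]]
    for k, ch in enumerate(channels[1:], start=1):
        y = allpass(ch, base_delay * k + 17 * k ** 2)
        out.append(allpass(y, 61 * k + 7))
    return out
```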
  • the primary and ambient signals corresponding to each of the N output channels are summed and converted to the time domain in block 712.
  • the time-domain signals are then directed to the N transducers 714.
  • the matrixed surround decoding methods described result in a significant improvement in the spatial quality of reproduction of 2-channel Dolby-Surround movie soundtracks over headphones or loudspeakers. Indeed, this invention enables a listening experience that is a close approximation of that provided by direct discrete multichannel reproduction or by discrete multi-channel encoding-decoding technology such as Dolby Digital or DTS.
  • the decoding methods described enable faithful reproduction of the original spatial sound scene not only over the originally assumed target multi-channel loudspeaker layout, but also over headphones or loudspeakers with full flexibility in the number of output channels, their layout, and the spatial rendering technique.
  • FIG. 8 is a signal flow diagram illustrating a phase-amplitude stereo encoder in accordance with one embodiment of the present invention, where a multi-channel source signal is provided in a known spatial audio recording format.
  • a time/frequency conversion takes place in block 802.
  • the frequency domain representation may be generated using an STFT.
  • next, a primary-ambient decomposition takes place, according to any known or conventional method.
  • Matrix encoding of the primary components of the signal occurs in block 806, followed by the addition of the ambient signals.
  • a frequency/time conversion takes place, such as through the use of an inverse STFT. This method ensures that ambient signal components are encoded in the form of an uncorrelated signal pair, which ensures that a matrix decoder will render them with adequately diffuse spatial distribution.
  • the multi-channel source signal is a 5-channel signal in the standard "3-2 stereo" format (Ls, L, C, R, Rs) corresponding to the loudspeaker layout depicted in FIG. 1A, and the matrix encoding of primary components in block 806 is performed according to equations (1) applied at each time and frequency.
  • the multi-channel source signal is provided in a P-channel format (C_1, C_2, … C_P) where each channel C_p is intended for reproduction by a loudspeaker located at localization (θ_p, φ_p), and the matrix encoding in block 806 is performed by:
  • L_T = Σ_p L(a_p, β_p) C_p; R_T = Σ_p R(a_p, β_p) C_p (30.)
  • where (a_p, β_p) is derived by mapping each localization (θ_p, φ_p) to its corresponding notional encoding position on the Scheiber sphere, and the phase-amplitude encoding coefficients L(a_p, β_p) and R(a_p, β_p) are given by equations (17).
  • the encoding coefficients may be derived by equations (20) or by any chosen localization-to-dominance mapping convention.
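A sketch of this fixed-format encoding, transcribing equation (30) with the reconstructed gains of equations (17); the notional positions (a_p, β_p) are assumed to be precomputed from the loudspeaker localizations by whichever mapping convention is chosen:

```python
# Frequency-domain matrix encoding of a P-channel signal (equation 30).
import numpy as np

def encode_multichannel(C, positions):
    """C: (P, bins, frames) complex STFTs; positions: list of (a_p, beta_p)."""
    Lt = np.zeros(C.shape[1:], dtype=complex)
    Rt = np.zeros(C.shape[1:], dtype=complex)
    for Cp, (a, beta) in zip(C, positions):
        Lt += np.cos(a / 2 + np.pi / 4) * np.exp(+1j * beta / 2) * Cp  # L(a_p, beta_p)
        Rt += np.sin(a / 2 + np.pi / 4) * np.exp(-1j * beta / 2) * Cp  # R(a_p, beta_p)
    return Lt, Rt
```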
  • the spatial localization cues (θ, φ) are derived, at each time and frequency, by spatial analysis of the primary multi-channel signal, and the phase-amplitude encoding coefficients L(a, β) and R(a, β) are obtained by mapping (θ, φ) to (a, β), as described earlier.
  • this mapping is realized by applying, at each time and frequency, the encoding scheme described by equations (20, 21) or (21, 23) and FIG. 6A-6B.
  • the spatial analysis may be performed by various methods, including the DirAC method or the spatial analysis method described in copending U.S. Patent Application Ser. No. 11/750,300, entitled Spatial Audio Coding Based on Universal Spatial Cues.


Abstract

A two-channel phase-amplitude stereo encoding and decoding scheme enabling flexible and spatially accurate interactive 3-D audio reproduction via standard audio-only two-channel transmission. The encoding scheme allows associating a 2-D or 3-D positional localization to each of a plurality of sound sources by use of frequency-independent inter-channel phase and amplitude differences. The decoder is based on frequency-domain spatial analysis of 2-D or 3-D directional cues in a two-channel stereo signal and re-synthesis of these cues using any preferred spatialization technique, thereby allowing faithful reproduction of positional audio cues and reverberation or ambient cues over arbitrary multi-channel loudspeaker reproduction formats or over headphones, while preserving source separation despite the intermediate encoding over only two audio channels.

Description

Phase-Amplitude 3-D Stereo Encoder and Decoder

CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to and the benefit of the disclosures of U.S. Provisional Patent Application Ser. No. 60/977,432, filed on October 4, 2007, and entitled "Phase-Amplitude Stereo Decoder and Encoder" (CLIP228PRV), and of U.S. Provisional Patent Application Ser. No. 61/102,002, filed on October 1, 2008, and entitled "Phase-Amplitude Stereo Decoder and Encoder" (CLIP228PRV2), the disclosures of which are incorporated by reference herein.
This application further claims priority to and the benefit of the disclosure of U.S. Patent Application Ser. No. 12/047,285, entitled Phase-Amplitude Matrixed Surround Decoder (docket CLIP198US), filed on March 12, 2008, the disclosure of which is incorporated by reference herein.
This application is related to and incorporates by reference the disclosure of U.S. Patent Application Ser. No. 11/750,300, which is entitled Spatial Audio Coding Based on Universal Spatial Cues, attorney docket CLIP159US, and filed on May 17, 2007.

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to signal processing techniques. More particularly, the present invention relates to methods for processing audio signals.
2. Description of the Related Art
Two-channel phase-amplitude stereo encoding, also known as "matrixed surround encoding" or "matrix encoding", is widely used for connecting the audio output of a video gaming system to a home theater system for multichannel surround sound reproduction, and for low-bandwidth or two-channel transmission or recording of surround sound movie soundtracks. Typically, in the gaming application, a multichannel audio mix is computed in real time (during game play) by an interactive audio spatialization engine and down-mixed to two channels by use of a matrixed surround encoding process identical to those used for matrix encoding multi-channel movie soundtracks. As a result of the encoding-decoding process, schematically illustrated in FIG. 1A, the surround sound mix can be transmitted via a single standard stereo audio connection or via an S/PDIF coaxial or optical cable connection commonly available in current home theater equipment. The multichannel mix composed in the interactive audio rendering engine is typically obtained as a combination (mixing) of localized sound components reproducing point sources (primary sound components) and of reverberation or spatially diffuse sound components (ambient sound components).
An advantage of phase-amplitude stereo encoding compared to alternative discrete multi-channel audio data formats (such as Dolby Digital or DTS) is that the encoded data stream is a two-channel audio signal that can be played back directly (without any decoding) over standard two-channel stereo loudspeakers or headphones. For multichannel loudspeaker presentation, a matrixed surround decoder can be used to recover a multichannel signal from the matrix-encoded two-channel signal. However, with currently available time-domain matrixed surround decoders, the fidelity of the spatial reproduction typically suffers from inaccurate source loudness reproduction, inaccurate spatial reproduction, localization steering artifacts, and lack of "discreteness" (or "source separation"), when compared to direct multi-channel reproduction without matrixed surround encoding/decoding.
MPEG Surround technology enables the transmission, over one low-bit-rate digital audio connection, of a two-channel matrix-encoded signal compatible with existing commercial matrixed surround decoders, along with an auxiliary spatial information data stream that an MPEG Surround decoder utilizes in order to recover a faithful reproduction of the original discrete multi-channel mix. However, the transmission of auxiliary data along with the audio signal requires a new digital connection format incompatible with standard stereo equipment. Another limitation of the above audio encoding-decoding technologies is their restriction to horizontal-only spatialization, their bias towards a particular multichannel loudspeaker layout, and their reliance on the spatial audio rendering technique known as multi-channel amplitude panning. This makes these technologies non-ideal for reproduction using headphones or alternative loudspeaker layouts and spatialization techniques (such as ambisonic or binaural technologies, for instance), which are more effective than the amplitude panning technique for improved spatial audio reproduction in some listening conditions. For headphone playback, in particular, a superior listening experience could be obtained by use of binaural 3-D audio spatialization methods, also requiring only two audio transmission channels. However, due to the inclusion of head-related inter-channel delay and frequency-dependent amplitude difference cues in the encoded signal, a binaural transmission format would be unsuited to multi-channel surround sound reproduction over an extended home theater listening area.
It is desired to overcome the above limitations of existing matrixed surround encoding and decoding technology by providing more flexible and spatially accurate encoding and decoding schemes.
SUMMARY OF THE INVENTION
In accordance with one embodiment of the present invention, provided is a method for two-channel phase-amplitude stereo encoding of one or more sound sources, in the time domain or in the frequency domain, such that the energy of each sound source is preserved in the matrix encoded signal.
In accordance with another embodiment of the present invention, provided is a method, operating in the time domain or in the frequency domain, for two-channel phase-amplitude stereo encoding of one or more localized sound sources and one or more unlocalized sound sources such that the contribution of an unlocalized source in the matrix encoded signal is substantially uncorrelated between the left and right encoded output channels.
In accordance with another embodiment of the present invention, provided is a method for two-channel phase-amplitude stereo encoding of one or more localized sound sources, operating in the time domain or in the frequency domain, such that each sound source is assigned a localization in three dimensions (including up-down discrimination in addition to left-right and front-back discrimination) by use of frequency-independent inter-channel phase and amplitude differences.
In accordance with another embodiment of the invention, provided is a frequency-domain method for phase-amplitude stereo decoding of a two-channel stereo signal, including frequency-domain spatial analysis of 2-D or 3-D localization cues in the recording and re-synthesis of these localization cues using any preferred spatialization technique, thereby allowing faithful reproduction of 2-D or 3-D positional audio cues and reverberation or ambient cues over headphones or arbitrary multi-channel loudspeaker reproduction formats, while preserving source separation despite prior encoding over only two audio channels.
These and other features and advantages of the present invention are described below with reference to the drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1A is a simplified functional diagram of an interactive gaming audio engine with single-cable audio output connection to a home theater system for audio playback in a standard 5-channel horizontal-only surround sound reproduction format.
FIG. 1B is a diagram illustrating a prior-art 5-2-5 matrixed surround encoding-decoding scheme where a 5-channel recording feeds a multichannel matrixed surround encoder to produce a 2-channel matrix-encoded signal and the matrix-encoded signal then feeds a matrixed surround decoder to produce 5 output signals for reproduction over loudspeakers.
FIG. 1C is a diagram illustrating a prior-art multichannel matrixed surround encoder for encoding 2-D positional audio cues into a two-channel signal, from a source in a standard 5-channel horizontal-only spatial audio recording format.

FIG. 2A is a diagram illustrating peripheral phase-amplitude matrixed surround encoding according to the amplitude panning angle α on a notional encoding circle in the horizontal plane, and the dominance vector δ used in active matrixed surround decoders, as described in the prior art. The values of the physical azimuth angle θ are indicated for standard loudspeaker locations in the horizontal plane.

FIG. 2B is a diagram illustrating phase-amplitude matrixed surround encoding on a notional encoding sphere known as the "Scheiber sphere," as described in the prior art, represented by the amplitude panning angle α and the inter-channel phase-difference angle β.
FIG. 3 is an illustration of the Gerzon vector on the listening circle in the horizontal plane, computed for a sound component amplitude-panned between loudspeaker channels L and Ls.

FIG. 4A is a 2-D plot of the Gerzon velocity vector obtained by 4-channel peripheral panning in 10-degree azimuth increments and radial panning in 9 increments, for loudspeakers Ls, L, R, and Rs respectively located at azimuth angles −110, −30, 30 and 110 degrees on the listening circle in the horizontal plane.

FIG. 4B is a 2-D plot of the Gerzon velocity vector obtained by 4-channel peripheral panning in 10-degree azimuth increments and radial panning in 9 increments, for loudspeakers Ls, L, R, and Rs respectively located at azimuth angles −130, −40, 40 and 130 degrees on the listening circle in the horizontal plane.
FIG. 5A is a 2-D plot of the dominance vector on the phase-amplitude encoding circle for the panning localizations and loudspeaker positions represented in FIG. 4A, with the surround encoding angle αs set to −148 degrees, in accordance with one embodiment of the invention.
FIG. 5B is a 2-D plot of the dominance vector on the phase-amplitude encoding circle for the panning localizations and loudspeaker positions represented in FIG. 4B, with the surround encoding angle αs set to −135 degrees, in accordance with another embodiment of the invention.
FIG. 6A is a diagram illustrating a 6-channel 3-D positional audio panning module in accordance with one embodiment of the invention.
FIG. 6B is a diagram illustrating a multichannel phase-amplitude encoding matrix for converting a 6-channel 3-D audio signal into a two-channel phase-amplitude matrix-encoded 3-D audio signal, in accordance with one embodiment of the invention.
FIG. 6C depicts a complete interactive phase-amplitude 3-D stereo encoder, in accordance with one embodiment of the invention.

FIG. 7A is a signal flow diagram illustrating a phase-amplitude matrixed surround decoder in accordance with one embodiment of the present invention.
FIG. 7B is a signal flow diagram illustrating a phase-amplitude matrixed surround decoder for multichannel loudspeaker reproduction, in accordance with one embodiment of the present invention.

FIG. 8 is a signal flow diagram illustrating a phase-amplitude stereo encoder in accordance with one embodiment of the present invention.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Reference will now be made in detail to preferred embodiments of the invention. Examples of the preferred embodiments are illustrated in the accompanying drawings. While the invention will be described in conjunction with these preferred embodiments, it will be understood that it is not intended to limit the invention to such preferred embodiments. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. The present invention may be practiced without some or all of these specific details. In other instances, well known mechanisms have not been described in detail in order not to unnecessarily obscure the present invention. It should be noted herein that throughout the various drawings like numerals refer to like parts. The various drawings illustrated and described herein are used to illustrate various features of the invention. To the extent that a particular feature is illustrated in one drawing and not another, except where otherwise indicated or where the structure inherently prohibits incorporation of the feature, it is to be understood that those features may be adapted to be included in the embodiments represented in the other figures, as if they were fully illustrated in those figures. Unless otherwise indicated, the drawings are not necessarily to scale. Any dimensions provided on the drawings are not intended to be limiting as to the scope of the invention but merely illustrative.

MATRIXED SURROUND PRINCIPLES
FIG. 1B depicts a 5-2-5 matrix encoding-decoding scheme where a 5-channel recording {Ls[t], L[t], C[t], R[t], Rs[t]} feeds a multichannel matrixed surround encoder to produce the matrix-encoded 2-channel signal {Lτ[t], Rτ[t]}, and the matrix-encoded signal then feeds a matrixed surround decoder to produce a 5-channel loudspeaker output signal {Ls′[t], L′[t], C′[t], R′[t], Rs′[t]} for reproduction. In general, the purpose of such a matrix encoding-decoding scheme is to reproduce a listening experience that closely approaches that of listening to the original N-channel signal over loudspeakers located at the same N positions around a listener.
Multichannel matrixed surround encoding equations
FIG. 1C depicts a multichannel phase-amplitude matrixed surround encoder for encoding 2-D positional audio cues into a two-channel signal by downmixing a 5-channel signal in the standard horizontal-only "3-2 stereo" format (Ls, L, C, R, Rs) corresponding to the loudspeaker layout depicted in FIG. 1A. The general form of the phase-amplitude matrixed surround encoding equations in this case is:

Lτ = L + √2/2 C + j (cosσs Ls + sinσs Rs)
Rτ = R + √2/2 C − j (sinσs Ls + cosσs Rs) (1.)

where j denotes an idealized 90-degree phase shift and the angle σs is within [0, π/4]. A common choice for σs is 29 degrees, which yields:

cosσs = 0.875; sinσs = 0.485 (2.)

As illustrated in FIG. 1C, the relative 90-degree phase shift applied on the surround channels Ls and Rs in equation (1) is commonly realized by use of an all-pass filter applying a phase shift Φ on the front input channels and an all-pass filter applying a phase shift Φ + 90 degrees on the surround channels.
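As an illustration of equation (1), the following is a minimal sketch (not the patented implementation) of a time-domain 5-to-2 matrix encoder in Python/numpy. The Hilbert-transform approximation of the ideal 90-degree phase shift and the function names are assumptions of this example; as noted above, practical encoders realize the relative shift with a matched pair of all-pass filters.

```python
import numpy as np
from scipy.signal import hilbert

def shift_90(x):
    # Approximate frequency-independent +90-degree phase shift; only the
    # relative shift between the front and surround paths matters here.
    return -np.imag(hilbert(x))

def matrix_encode_5to2(Ls, L, C, R, Rs, sigma_s=np.radians(29)):
    """Equation (1) on time-domain numpy arrays, with sigma_s = 29 degrees
    as in equation (2)."""
    c, s = np.cos(sigma_s), np.sin(sigma_s)
    Ls90, Rs90 = shift_90(Ls), shift_90(Rs)
    Lt = L + np.sqrt(0.5) * C + (c * Ls90 + s * Rs90)
    Rt = R + np.sqrt(0.5) * C - (s * Ls90 + c * Rs90)
    return Lt, Rt
```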
Passive matrixed surround decoding equations
For any phase-amplitude encoding matrix, a "passive" decoding matrix can be defined as the Hermitian transpose of the encoding matrix. If the encoding equations (1) are formulated in matrix form:

[Lτ Rτ]ᵀ = E [Ls L C R Rs]ᵀ, (3.)

then the passive decoding equations produce five corresponding output channels as follows:

[Ls′ L′ C′ R′ Rs′]ᵀ = Eᴴ [Lτ Rτ]ᵀ. (4.)
Since the encoding matrix E is preferably energy-preserving (i.e. the sum of the squared left and right encoding coefficients in each column of E is unity), the diagonal coefficients of the combined 5×5 encoding/decoding matrix Eᴴ E are all unity. This implies that each channel of the original multichannel signal is exactly transmitted to the corresponding decoder output channel. However, each decoder output channel also receives significant additional contributions (i.e. "bleeding") from the other encoder input channels, which results in significant spatial audio reproduction discrepancy between the original multichannel signal {Ls, L, C, R, Rs} and the reproduced signal {Ls′, L′, C′, R′, Rs′} after matrixed surround encoding and decoding.
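A hedged numeric check of this property: transcribing the encoding coefficients of equation (1) with σs = 29° into a 2×5 complex matrix (channel order Ls, L, C, R, Rs), the product Eᴴ E shows unit diagonal entries and the off-diagonal "bleeding" terms discussed above.

```python
import numpy as np

sigma_s = np.radians(29)
c, s = np.cos(sigma_s), np.sin(sigma_s)
r = np.sqrt(0.5)
# 2x5 encoding matrix E of equation (1); 1j stands for the ideal 90-degree shift.
E = np.array([[1j * c,  1, r, 0,  1j * s],
              [-1j * s, 0, r, 1, -1j * c]])
EHE = E.conj().T @ E
print(np.round(np.abs(EHE), 3))  # ones on the diagonal; off-diagonal = bleed
```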
Active matrixed surround decoders
By varying the coefficients of the decoding matrix, an active matrixed surround decoder can improve the "source separation" performance compared to that of a passive matrixed surround decoder in conditions where the matrix-encoded signal presents a strong directional dominance. This enhancement is achieved by a "steering logic" which continuously adapts the decoding matrix according to a measured dominance vector, denoted by δ = (δx, δy), which can be derived from the 4-channel passive matrixed surround decoder output signals L′ = Lτ, R′ = Rτ, C′ = √2/2 (L′ + R′), and S′ = √2/2 (L′ − R′), as follows:
δx = ( |R′|² − |L′|² ) / ( |R′|² + |L′|² )
δy = ( |C′|² − |S′|² ) / ( |C′|² + |S′|² ), (5.)

where the squared norm | . |² denotes signal power. The magnitude of the dominance vector |δ| = (δx² + δy²)^½ measures the degree of directional dominance in the encoded signal and is never more than 1.
The effect of the steering logic is to redistribute signal power towards the channels indicated by the direction of the dominance vector δ observed on the encoding circle, as illustrated in FIG. 2A. When the magnitude |δ| of the dominance vector is near zero, an active matrixed surround decoder must revert to the passive behavior described previously (or to some other passive matrix). This occurs whenever the signals Lτ and Rτ are uncorrelated or weakly correlated (i.e. contain mostly ambient components) or in the presence of a plurality of concurrent primary sound sources distributed around the encoding circle. In general, prior-art 5-2-5 matrix encoding/decoding schemes based on time-domain active matrixed surround decoders are able to accurately reproduce the pairwise amplitude panning of a single primary source anywhere on the encoding circle. However, they cannot produce an effective and accurate directional enhancement in the presence of multiple concurrent primary sound components, nor preserve the diffuse spatial distribution of ambient sound in the presence of a dominant primary source. In such situations, noticeable steering artifacts tend to occur (e.g. shifting of sound effect localization or narrowing of the stereo image in the presence of centered dialogue). For this reason, it is recommended that mixing engineers monitor a matrix-encoded mix through the encode-decode chain in the studio, in order to detect and avoid the occurrence of such artifacts. However, this precaution is not possible in a gaming application where the mix is automatically driven by real-time game play.
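For illustration, a minimal sketch of the dominance measurement of equation (5) on a block of encoded samples; the short-term power average is an assumption of this example.

```python
import numpy as np

def dominance(Lt, Rt):
    """Dominance vector (delta_x, delta_y) of equation (5)."""
    C = np.sqrt(0.5) * (Lt + Rt)
    S = np.sqrt(0.5) * (Lt - Rt)
    p = lambda x: np.mean(np.abs(x) ** 2)  # short-term signal power
    dx = (p(Rt) - p(Lt)) / (p(Rt) + p(Lt))
    dy = (p(C) - p(S)) / (p(C) + p(S))
    return dx, dy  # magnitude never exceeds 1
```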
DESIGN CRITERIA

In order to characterize the performance of a matrixed surround encoding-decoding scheme in accordance with the present invention, it is useful to define general spatial synthesis principles applicable in the design of interactive audio rendering systems (e.g. for gaming, computer music or virtual reality), regardless of the spatial rendering technique or setup used. From these general principles, we shall derive spatial audio scene preservation requirements for the matrix encoding-decoding process, in terms of energetic and spatial properties of the primary and ambient sound components in the spatial audio scene, regardless of the playback context.
Spatial audio scene and signal model
As illustrated in FIG. 1A, the multichannel signal representing the spatial audio scene can be modeled as a superposition of primary and ambient sound components. A primary component may be directionally encoded by use of a "panning" module (labeled pan in FIG. 1A) that receives a monophonic source signal and produces a multichannel signal for adding into the output mix. Generally defined, the role of this spatial panning module is to assign to the source a perceived direction observed on the listening sphere centered on the listener, while preserving source loudness and spectral content. In reproduction of an M-channel signal P = [P1 ... PM] using loudspeakers, this perceived direction can be measured by the Gerzon vector g, defined as follows:

g = Σm pm em (6.)

where the "channel vector" em is a unit vector in the direction of the m-th output channel (FIG. 3). The weights pm in equation (6) are given by:

pm = Pm / ||P||₁ for the "velocity vector" (7.)
pm = |Pm|² / ||P||₂² for the "energy vector" (8.)

where ||P||₁ denotes the amplitude-sum of the M-channel signal, and ||P||₂² denotes its total signal power.
The Gerzon "velocity vector" defined by equations (6, 7) is proportional to the active acoustic intensity vector measured at the listening location. It is adequate for describing the perceived localization of primary components at low frequencies (below roughly 700 Hz) for a centrally located listener, whereas the "energy vector" defined by equations (6, 8) may be considered more adequate for representing the perceived sound localization at higher frequencies. Multi-channel sound spatialization techniques such as Ambisonics or VBAP can be regarded as different approaches to solving for the set of panning weights pm in equation (6) given the desired direction of the Gerzon vector. Spatialization techniques differ in their practical engineering compromises and in their ability to accurately control the magnitude of the Gerzon vector, which characterizes the spatial "sharpness" or "focus" of sound images and, when less than 1, may reflect interior panning across the loudspeaker array (such as a "fly-by" or "fly-over" sound event).
The Gerzon vector may also be applied for characterizing the directional distribution of ambient sound components in multichannel reproduction, such as room reverberation or spatially extended sound events (e.g. surrounding applause, or the more localized sound of a nearby waterfall). In this case, the loudspeaker signals should be mutually uncorrelated, and the Gerzon energy vector is then proportional to the active acoustic intensity. Its magnitude is zero for evenly distributed ambient sound and otherwise increases in the direction of spatial emphasis.

System design criteria
Based on the above principles, the design requirements for a matrix encode-decode system in terms of spatial audio scene reproduction can be formulated as follows: the power and the Gerzon vector direction of each individual sound component (primary or ambient) in the scene, hereafter referred to as the spatial cues associated with each sound source, should be correctly reproduced. In the preferred embodiments considered in the following description, it is assumed that ambient components are spatially diffuse, i.e. that their Gerzon energy vector is null. This assumption is not restrictive in practice for simulating room reverberation or surrounding background ambience in the virtual environment.
Additional design criteria for a matrixed surround encoding-decoding scheme according to a preferred embodiment of the present invention arise from technology compatibility requirements: it is desirable that the proposed interactive matrix encoder consistently produce an output suitable for decoding with prior-art matrix surround decoders, which assume specific phase-amplitude relationships between the encoded channel signals Lτ and Rτ for a sound component panned to one of the five channels (Ls, L, C, R, Rs), as indicated by equation (1). Conversely, in a preferred embodiment of the present invention, the matrixed surround decoder is compatible with legacy matrix encoded content, i.e. responds to strong directional dominance in its input signal in a manner consistent with the response of a prior-art matrixed surround decoder.
Further, in a preferred embodiment of the present invention, the matrixed surround decoder should produce a natural sounding "upmix" when subjected to any standard stereo source (not necessarily matrix encoded), ideally without need to modify its operation (such as switching from "movie mode" to "music mode", as is common in prior-art matrixed surround decoders). This implies that ambient sound components in the input stereo signal should be extracted and re-distributed by the decoder to make use of the surround output channels (Ls and Rs) in order to enhance the sense of immersion, while maintaining the original localization of primary sound components in the stereo image and making use of the center loudspeaker to improve the robustness of the sound image against lateral displacements of the listener away from the "sweet spot".
IMPROVED PHASE-AMPLITUDE STEREO ENCODER
An improved phase-amplitude matrixed surround encoder according to one embodiment of the present invention is elaborated in the following. In a first step, the positional encoding of primary sound components in the 2-D horizontal circle is considered. Then, a 3-D spherical encoding scheme is derived. Lastly, the encoding scheme is completed by including the addition of spatially diffuse ambient sound components in the encoded signal. In a preferred embodiment, spatial cues are provided for each individual sound source by a gaming engine or by a studio mixing application and the encoder operates on a time-domain or frequency-domain representation of the source signals. In other embodiments, a multi-channel source signal is provided in a known spatial audio recording format, this signal is converted to or received in a frequency domain representation, and the spatial cues for each time and frequency are derived by spatial analysis of the multi-channel source signal.
2-D peripheral encoding
Considering a set of M monophonic sound source signals {Sm[t]}, a two-channel stereo mixture {Lτ[t], Rτ[t]} of primary sound components can be expressed as:

Lτ[t] = Σm Lm Sm[t]
Rτ[t] = Σm Rm Sm[t] (9.)

where Lm and Rm denote the left and right panning coefficients for each source. For a source assigned the panning angle α on the encoding circle (as illustrated in FIG. 2A), the energy-preserving phase-amplitude panning coefficients can be expressed as:

L(α) = cos(α/2 + π/4)
R(α) = sin(α/2 + π/4) (10.)

where the panning angle α is measured clockwise from the front direction (C), and varies from α = −π/2 (radians) for a signal panned to the left channel to α = π/2 for a signal panned to the right channel. Assuming that α spans an interval extended to [−π, π], all positions on the encoding circle of FIG. 2A are uniquely encoded by equations (10), with panning coefficients of opposite polarity for positions in the surround arc (L-Ls-Rs-R). The application of the phase-amplitude panning equations (10) involves mapping the desired azimuth angle θ, measured on the listening circle shown in FIG. 3, to the panning angle α. As indicated in FIG. 2A, this mapping must be such that θ = θF maps to α = π/2 and that θ = θs maps to α = −αs, where θF denotes the azimuth angle assigned to the front channels L or R (for instance 30°), θs denotes the azimuth angle assigned to the surround channels Ls or Rs (for instance 110°), and αs verifies, for consistency with the multichannel matrix encoding equation (1):

σs = | αs/2 + π/4 |. (11.)
For encoding at intermediate positions on the circle, any monotonic mapping from θ to α is in principle appropriate. In order to ensure compatibility with the matrix encoding of 5-channel mixes using equations (1), a suitable θ-to-α angular mapping function is one which is equivalent to 5-channel pairwise amplitude panning, using a well-known prior-art panning technique such as the vector-based amplitude panning method (VBAP), followed by 5-to-2 matrix encoding.
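A minimal sketch of equations (10) and of one possible θ-to-α mapping; the piecewise-linear map below is an illustrative assumption (the text only requires a monotonic map hitting the anchor values θF → π/2 and θs → −αs), not the VBAP-equivalent map named above.

```python
import numpy as np

def peripheral_coeffs(alpha):
    """Equation (10): energy-preserving coefficients, alpha in [-pi, pi]."""
    return np.cos(alpha / 2 + np.pi / 4), np.sin(alpha / 2 + np.pi / 4)

def azimuth_to_alpha(theta, theta_f=np.radians(30), theta_s=np.radians(110),
                     alpha_surround=np.radians(148)):
    """Illustrative piecewise-linear theta-to-alpha map (radians);
    alpha_surround = -alpha_s of the text, i.e. +148 degrees here."""
    t, sgn = abs(theta), np.sign(theta)
    if t <= theta_f:      # frontal arc, C to L/R
        a = (np.pi / 2) * t / theta_f
    elif t <= theta_s:    # side arc, L/R to Ls/Rs
        a = np.pi / 2 + (alpha_surround - np.pi / 2) * (t - theta_f) / (theta_s - theta_f)
    else:                 # rear arc, Ls/Rs to rear center
        a = alpha_surround + (np.pi - alpha_surround) * (t - theta_s) / (np.pi - theta_s)
    return sgn * a
```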
However, the 5-to-2 encoding matrix is not actually energy-preserving when its inputs are not mutually uncorrelated, as is the case when a source is amplitude-panned between channels. For instance, it boosts signal power by 1 + sin(2σs), i.e. approximately 3 dB, for a sound panned to rear center, and by 1 + √2/2, or 2.3 dB, for a sound panned equally between C and L. In an encoder according to an embodiment of the present invention, such energy deviations are eliminated by scaling each source signal according to its panning position. As a simplification, it is also advantageous to pan over only 4 channels (Ls, L, R, Rs), ignoring C, before matrix encoding.

2-D encoding with interior panning
An important difference between direct 2-channel encoding using equations (10) and multichannel panning with matrix encoding using equations (1) is that the latter incorporate a 90-degree phase shift applied to the surround channels Ls and Rs, which has the effect of distributing the 180-degree phase difference equally between the left and right encoded channels. Without this phase shift, denoted by j in equation (1), a "fly-by" or "fly-over" sound effect panned between the front center position and the rear center position would be encoded as panning along the left half of the encoding circle. Denoting p(θ) the set of panning weights obtained by peripheral panning (using, for instance, the VBAP technique), the horizontal multichannel panning algorithm can be extended to include interior panning localizations as follows:

P(θ, ψ) = cosψ p(θ) + sinψ ε (12.)

where P is the resulting set of panning weights (prior to scaling for energy preservation), cosψ and sinψ are "radial panning" coefficients with ψ within [0, π/2], and ε is a set of energy-preserving non-directional (or "middle") panning weights that yields a Gerzon velocity vector of zero magnitude by equations (6, 7). In the case of 4-channel panning over (Ls, L, R, Rs), the preferred solution for the set of non-directional panning weights ε is the one that exhibits left-right symmetry and a front-to-back amplitude panning ratio equal to | cosθs / cosθF |.
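A hedged sketch of equation (12): p(θ) stands for any 4-channel peripheral panner over (Ls, L, R, Rs), and the middle weights below are one left-right symmetric choice with the stated front-to-back ratio, normalized for energy; the function names are assumptions of this example.

```python
import numpy as np

def middle_weights(theta_f=np.radians(30), theta_s=np.radians(110)):
    """Non-directional weights over (Ls, L, R, Rs): left-right symmetric,
    front-to-back amplitude ratio |cos(theta_s) / cos(theta_f)|, which makes
    the Gerzon velocity vector of equations (6, 7) vanish."""
    f, b = abs(np.cos(theta_s)), abs(np.cos(theta_f))
    w = np.array([b, f, f, b])        # order: Ls, L, R, Rs
    return w / np.linalg.norm(w)      # energy-preserving

def radial_pan(p_theta, psi, eps):
    """Equation (12): cross-fade between peripheral and middle weight sets."""
    return np.cos(psi) * np.asarray(p_theta) + np.sin(psi) * np.asarray(eps)
```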
FIG. 4A shows a plot of the Gerzon velocity vector g derived from P(θ, ψ) by equations (6, 7) when θ and ψ vary in 10-degree increments, with loudspeakers Ls, L, R, and Rs respectively located at azimuth angles −110, −30, 30 and 110 degrees on the listening circle in the horizontal plane. The radial panning positions for a given azimuth value are connected by a solid line, which is prolonged by a dotted line connecting to the corresponding point on the edge of the listening circle. Similarly, FIG. 4B illustrates an alternative embodiment of the invention where loudspeakers Ls, L, R, and Rs are respectively located at azimuth angles −130, −40, 40 and 130 degrees on the listening circle.
FIG. 5A plots the dominance vector derived from P(θ, ψ) by using equations (5) after matrix encoding by equations (1), under the same assumptions as in FIG. 4A, assuming that the surround encoding angle αs is −148 degrees (i.e. σs = 29 degrees). The encoding positions for a given azimuth value are connected by a solid line. On the side arcs (L-Ls) and (R-Rs), this solid line is prolonged by a dotted segment connecting to the corresponding encoding point on the edge of the encoding circle, defined by the peripheral encoding equations (10) and assuming linear mapping from θ to α. Similarly, FIG. 5B plots the dominance vector derived for the alternative embodiment assumed in FIG. 4B, and assuming that the surround encoding angle αs is −135 degrees (i.e. σs = 22.5 degrees).
Since the matrix encoding equations (1) are linear, the application of any 4-channel radial panning technique followed by matrix encoding can also be viewed as a cross-fading operation applied to the phase-amplitude stereo encoding coefficients:

L(α, ψ) = cosψ L(α) + sinψ εL
R(α, ψ) = cosψ R(α) + sinψ εR (13.)

where εL and εR are derived by matrix encoding from the set of "middle" panning weights ε. Because of the 90-degree phase shifts in the matrix encoding equations (1), εL and εR are conjugate complex coefficients including a phase shift:

εL = | cosθs | + j cosθF (cosσs + sinσs)
εR = | cosθs | − j cosθF (cosσs + sinσs). (14.)
Since the stereo encoding coefficients are generally not real factors, the direct implementation of 2-channel panning for each primary sound source is impractical in the time domain. Preferred time-domain embodiments of the invention use the 4-channel peripheral-radial panning and encoding scheme described above, or may use panning and mixing in the 5-channel format (Ls, L, T, R, Rs), where T represents a virtual "middle" channel as indicated in FIG. 3, followed by 5-to-2 matrix encoding using the following encoding equations:

Lτ = L + εL T + j (cosσs Ls + sinσs Rs)
Rτ = R + εR T − j (sinσs Ls + cosσs Rs). (15.)
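For illustration, a sketch of equations (14) and (15) on a complex (frequency-domain) representation; note that the raw coefficients of equation (14) are not unit-magnitude, so the per-source energy scaling described above is applied separately.

```python
import numpy as np

def middle_coeffs(theta_f=np.radians(30), theta_s=np.radians(110),
                  sigma_s=np.radians(29)):
    """Equation (14): conjugate complex 'middle'-channel coefficients."""
    re = abs(np.cos(theta_s))
    im = np.cos(theta_f) * (np.cos(sigma_s) + np.sin(sigma_s))
    return re + 1j * im, re - 1j * im  # eps_L, eps_R

def encode_with_middle(Ls, L, T, R, Rs, sigma_s=np.radians(29)):
    """Equation (15) applied to complex frequency-domain channel signals."""
    eL, eR = middle_coeffs(sigma_s=sigma_s)
    c, s = np.cos(sigma_s), np.sin(sigma_s)
    Lt = L + eL * T + 1j * (c * Ls + s * Rs)
    Rt = R + eR * T - 1j * (s * Ls + c * Rs)
    return Lt, Rt
```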
3-D positional phase-amplitude stereo encoding
When cosψ = 0 (and therefore sinψ = 1) in equation (12), the notional localization of the sound event coincides with the reference listening position. However, in 4-channel loudspeaker reproduction, a listener located at this position would perceive a sound event localized above the head. This suggests that increasing the value of the radial panning angle ψ from 0 to 90 degrees could be interpreted as increasing the elevation angle φ of the virtual source position on the listening sphere from 0 to 90 degrees. This interpretation of radial panning enables establishing an equivalence between 2-D peripheral-radial panning at a localization (θ, r) in the horizontal listening circle of FIG. 3, employing a virtual 'Middle' channel T, and 3-D multi-channel panning at a localization (θ, φ) on the upper hemisphere, where T represents a virtual or actual 'Top' channel and φ is the 3-D elevation angle, while r denotes the 2-D localization radius. The choice of mapping functions from the radial panning angle ψ to the radius r and to the elevation angle φ is not critical, provided that the mapping functions be monotonic and such that, when ψ increases from 0 to 90 degrees, the radius r decreases from 1 to 0 and the elevation angle φ increases from 0 to 90 degrees. The most straightforward assumption, adopted in the following embodiments, is that r = cosψ and φ = ψ, which implies that r and φ are related by vertical projection:

r = cosφ. (16.)
Upon matrix encoding, any source localization on the upper hemisphere or the horizontal circle is thereby encoded by inter-channel amplitude and phase differences in the 2-channel signal {Lτ, Rτ}. In order to examine the properties of phase-amplitude stereo encoding systems, it is common to employ a spherical representation of stereo phase-amplitude encoding that extends the panning equations (10) to include arbitrary inter-channel phase differences:

L(α, β) = cos(α/2 + π/4) e^(jβ/2)
R(α, β) = sin(α/2 + π/4) e^(−jβ/2). (17.)

In graphical representation, as shown in FIG. 2B, the inter-channel phase difference angle β is interpreted as a rotation around the left-right axis of the plane in which the amplitude panning angle α is measured. If α spans [−π/2, π/2] and β spans ]−π, π], the angle coordinates (α, β) uniquely map any inter-channel phase and/or amplitude difference to a position on the "Scheiber sphere". In particular, β = 0 describes the frontal arc (L-C-R) and β = π describes the rear arc (L-Ls-Rs-R). By convention, in a preferred embodiment, positive values of β will correspond to the upper hemisphere and negative values of β to the lower hemisphere. For the "top" position T, equations (14) imply that the inter-channel phase difference in the matrix-encoded stereo signal is:

βτ = 2 arctan[ (cosσs + sinσs) cosθF / | cosθs | ] (18.)

A useful property is that the dominance vector δ derived by equations (5) coincides with the vertical projection onto the horizontal plane of the position (α, β) on the Scheiber sphere:

δx = sinα
δy = cosα cosβ (19.)
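A small numeric check, under the conventions above, that the dominance of equations (5) applied to the encoding coefficients of equations (17) reproduces the projection of equations (19); the sample angles are arbitrary.

```python
import numpy as np

def scheiber_coeffs(alpha, beta):
    """Equations (17)."""
    L = np.cos(alpha / 2 + np.pi / 4) * np.exp(1j * beta / 2)
    R = np.sin(alpha / 2 + np.pi / 4) * np.exp(-1j * beta / 2)
    return L, R

alpha, beta = np.radians(40.0), np.radians(70.0)
L, R = scheiber_coeffs(alpha, beta)
C, S = np.sqrt(0.5) * (L + R), np.sqrt(0.5) * (L - R)
dx = (abs(R) ** 2 - abs(L) ** 2) / (abs(R) ** 2 + abs(L) ** 2)
dy = (abs(C) ** 2 - abs(S) ** 2) / (abs(C) ** 2 + abs(S) ** 2)
assert np.allclose([dx, dy], [np.sin(alpha), np.cos(alpha) * np.cos(beta)])
```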
Consequently, a dominance plot such as FIG. 5A or 5B is also a "top-down" view of the notional encoding positions on the Scheiber sphere. This allows extending the phase-amplitude 3-D positional encoding scheme to include symmetrical positions in the lower hemisphere, by defining a "bottom" encoding position. In a preferred embodiment, this position, denoted B, is defined as the symmetric of the "top" position T on the Scheiber sphere with respect to the horizontal plane, at (α, β) = (0, −βτ), so that the upper and lower hemispheres are equivalent for a 2-D matrix decoder. FIG. 6A and FIG. 6B together depict a 3-D positional phase-amplitude stereo encoding scheme according to a preferred embodiment of the present invention. FIG. 6A depicts a 6-channel panning module (600) for assigning a 3-D positional audio localization (θm, φm) to a primary sound source signal Sm in the 6-channel format (Ls, L, T, B, R, Rs), where T denotes the Top channel and B denotes the Bottom channel, as described previously. FIG. 6B depicts a phase-amplitude 3-D stereo encoding matrix module (610), where the resulting 6-channel signal (606) is matrix encoded into a two-channel phase-amplitude stereo encoded signal {Lτ, Rτ} according to the following encoding equations:
Lτ = L + εL T + εR B + j (cosσs Ls + sinσs Rs)
Rτ = R + εR T + εL B − j (sinσs Ls + cosσs Rs) (20.)

where εL = √2/2 exp(j βτ/2) and εR = √2/2 exp(−j βτ/2), so that |εL|² + |εR|² = 1. In the 6-channel 3-D positional panning module depicted in FIG. 6A, the source is scaled by six panning coefficients 604 derived from the azimuth angle θm and the elevation angle φm as follows (omitting the source index m for clarity):

L(θ, φ) = cosφ L(θ)    Ls(θ, φ) = cosφ Ls(θ)
R(θ, φ) = cosφ R(θ)    Rs(θ, φ) = cosφ Rs(θ)
T(θ, φ) = sinφ [φ > 0 ?]    B(θ, φ) = −sinφ [φ < 0 ?] (21.)

where [<condition> ?] denotes a logical bit (i.e. 1 if <condition> is true, 0 if it is false). In a preferred embodiment, the coefficients Ls(θ), L(θ), R(θ) and Rs(θ) in equation (21) are energy-preserving 4-channel 2-D peripheral amplitude panning coefficients derived from the azimuth angle θ using the VBAP method, according to the front and surround loudspeaker azimuth angles respectively denoted as θF and θs and assigned respectively to the front channel pair (L, R) and to the surround channel pair (Ls, Rs). Further, in a preferred embodiment of the present invention, the source signal feeding each panning module is scaled by an energy normalization factor 602, equal to:

k(θ, φ) = 1 / √( |Lτ(θ, φ)|² + |Rτ(θ, φ)|² ) (22.)

where Lτ(θ, φ) and Rτ(θ, φ) are derived by applying the encoding matrix defined by equations (20) to the panning coefficients defined by equations (21). This normalization ensures that the contribution of each source signal Sm in the matrix-encoded signal {Lτ, Rτ} is energy-preserving, regardless of its panning localization (θm, φm).
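A hedged sketch of the panning gains of equation (21); the pairwise sin/cos panner below is only an illustrative stand-in for the VBAP method named in the text, and the energy normalization of equation (22) appears with the encoding-matrix sketch that follows.

```python
import numpy as np

def pan4(theta, az=np.radians([-110.0, -30.0, 30.0, 110.0])):
    """Illustrative energy-preserving pairwise panner over (Ls, L, R, Rs)."""
    spk = np.concatenate([az, [az[0] + 2 * np.pi]])  # wrap Rs back to Ls
    th = theta if theta >= az[0] else theta + 2 * np.pi
    i = int(np.clip(np.searchsorted(spk, th, side='right') - 1, 0, 3))
    frac = (th - spk[i]) / (spk[i + 1] - spk[i])
    g = np.zeros(4)
    g[i], g[(i + 1) % 4] = np.cos(frac * np.pi / 2), np.sin(frac * np.pi / 2)
    return g  # order: Ls, L, R, Rs

def pan6(theta, phi):
    """Equation (21): 3-D panning gains over (Ls, L, T, B, R, Rs)."""
    Ls, L, R, Rs = np.cos(phi) * pan4(theta)
    T = np.sin(phi) if phi > 0 else 0.0
    B = -np.sin(phi) if phi < 0 else 0.0
    return np.array([Ls, L, T, B, R, Rs])
```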
The particular embodiment of the encoding matrix 610 in FIG. 6B is obtained by rewriting equation (20) as follows:

Lτ = L + √2/2 (T + B) cos(βτ/2) + j [ √2/2 (T − B) sin(βτ/2) + cosσs Ls + sinσs Rs ]
Rτ = R + √2/2 (T + B) cos(βτ/2) − j [ √2/2 (T − B) sin(βτ/2) + sinσs Ls + cosσs Rs ] (23.)

The resulting encoding matrix is an extension of the prior-art encoding matrix depicted in FIG. 1C, where the input C is optional. The encoding matrix receives 6 input channels 606 produced by the panning module 600. The input channels Ls, L, R and Rs are processed exactly as in the legacy encoding matrix shown in FIG. 1C, using multipliers 614 and all-pass filters 616. The encoding matrix also receives two additional channels T and B, derives their sum and difference signals, and applies to the sum and difference signals the scaling coefficients 612, respectively cos(βτ/2) and sin(βτ/2). The scaled sum and difference signals are then further attenuated by a coefficient √2/2 and combined, respectively, with the front channels and the scaled surround input channels. Alternative embodiments of the phase-amplitude matrixed surround encoding scheme according to the present invention may be realized, within the scope of the present invention, by selecting an arbitrary value within [0, π] for βτ, instead of the value derived by equation (18).
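The corresponding sketch of the 3-D encoding matrix of equation (20), with βτ from equation (18) and the per-source energy normalization of equation (22); the channel gains ch would come from a panning module such as the pan6 sketch above.

```python
import numpy as np

def beta_t(theta_f=np.radians(30), theta_s=np.radians(110),
           sigma_s=np.radians(29)):
    """Equation (18)."""
    return 2 * np.arctan((np.cos(sigma_s) + np.sin(sigma_s))
                         * np.cos(theta_f) / abs(np.cos(theta_s)))

def encode_3d(ch, sigma_s=np.radians(29), bt=None):
    """Equation (20): (Ls, L, T, B, R, Rs) complex amplitudes -> (Lt, Rt)."""
    Ls, L, T, B, R, Rs = ch
    bt = beta_t(sigma_s=sigma_s) if bt is None else bt
    eL = np.sqrt(0.5) * np.exp(1j * bt / 2)
    eR = np.sqrt(0.5) * np.exp(-1j * bt / 2)
    c, s = np.cos(sigma_s), np.sin(sigma_s)
    Lt = L + eL * T + eR * B + 1j * (c * Ls + s * Rs)
    Rt = R + eR * T + eL * B - 1j * (s * Ls + c * Rs)
    return Lt, Rt

def norm_gain(ch):
    """Equation (22): per-source scaling so the encoded energy is unity."""
    Lt, Rt = encode_3d(ch)
    return 1.0 / np.sqrt(abs(Lt) ** 2 + abs(Rt) ** 2)
```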
Mapping the listening sphere to the Scheiber sphere
The combined effect of the 3-D positional panning module 600 and of the 3-D stereo encoding matrix 610 is to map the localization (θ, φ) on the listening sphere to a notional position (α, β) on the Scheiber sphere. This mapping can be configured by setting the values of the angular parameters defined previously: θF within [0, π/2]; θs within [π/2, π]; σs within [0, π/4]; and βτ within [0, π]. Two examples of such mapping are illustrated in FIG. 5A and 5B. The setting of these parameters determines the compatibility of the encoding-decoding scheme according to the invention with legacy matrixed surround decoders and matrix-encoded content. For instance, a legacy-compatible encoder can be realized by setting θF = 30°, θs = 110°, σs = 29°, and deriving βτ according to equation (18). The range of possible encoding schemes can be further extended by introducing a front encoding angle parameter σF within [0, π/4], and replacing L and R respectively by (cosσF L + sinσF R) and (cosσF R + sinσF L) prior to applying equation (20) or (23). In a legacy-compatible embodiment of the encoding matrix, σF = 0 and the channels L and R are passed unmodified to the encoded channels Lτ and Rτ, respectively. Further, it is straightforward to extend the preferred embodiment described above, within the scope of the invention, to use any intermediate P-channel format (C1, C2, ... CP) instead of the preferred 6-channel format (Ls, L, T, B, R, Rs), associated with additional or alternative intermediate channel positions {(θp, φp)} in the horizontal plane or anywhere on the listening sphere, using any 2-D or 3-D multi-channel panning technique to implement the multichannel positional panning module for each sound source signal Sm, and matrix-encoding each intermediate channel Cp as a 3-D source with localization (θp, φp) according to the panning and encoding scheme defined by equations (21, 23) or (21, 20).
Alternatively, in another embodiment of the invention, the localization of a sound source on the listening sphere is expressed according to the Duda-Algazi angular coordinate system, where the azimuth angle μ is measured in a plane containing the source and the left-right ear axis, and the elevation angle ν measures the rotation of this plane with respect to the left-right ear axis. In this case the localization coordinates μ and ν can be mapped separately to the amplitude panning angle α and the inter-channel phase difference angle β. One embodiment consists of setting α = μ and β = ν, in which case the listening sphere maps identically to the Scheiber sphere, and phase-amplitude 3-D stereo encoding is achieved directly by applying equations (17).
It will be readily apparent that, regardless of the chosen mapping from localization to encoding position on the Scheiber sphere, the phase-amplitude stereo encoding of the signals according to the invention can be realized in the frequency domain by applying encoding coefficients L(αm, βm) and R(αm, βm) to a frequency-domain representation of the sound source signal Sm.
Ambience encoding
In a preferred embodiment of the invention, the interactive phase-amplitude stereo encoder includes means for incorporating spatially diffuse ambience and reverberation components in the 2-channel encoded output signal {Lτ, Rτ}.
Let us assume that the spatial audio scene contains only ambient components. In prior-art matrixed surround decoders, this condition is associated with zero dominance, and occurs when the signals Lτ and Rτ are uncorrelated and of equal energy (which is consistent with the signal properties of ambient components in conventional stereo recordings). In these conditions, a prior-art multichannel matrixed surround decoder falls into its passive decoding behavior, which has the effect of spreading signal energy into the surround channels. This is a desirable property both for matrixed surround decoders and for music upmixers.
However, a drawback of any matrixed surround encoding-decoding system using a prior-art time-domain matrix encoder complying with equation (1) is that the spatial distribution of an ambient sound scene reproduced by the decoder is not consistent with the original recording: it exhibits a significant systematic bias toward the rear channels Ls and Rs. An analogous phenomenon is visible in FIG. 5A and 5B for primary signals, where it is seen that a multichannel signal having a null Gerzon velocity vector is encoded with strong negative dominance, indicating strong negative correlation between the left and right encoded signals Lτ and Rτ. In the case of a diffuse ambient signal (with a null energy vector), the front-to-back channel power ratio would be equal to | cosθs | / cosθF, which by equation (5) sets the dominance at −0.434 on the y axis if θF = 30° and θs = 110°, causing a matrixed surround decoder to pan signal energy heavily into the surround channels (instead of falling into its passive behavior). In a preferred embodiment of a phase-amplitude stereo encoder according to the present invention, this bias is avoided by mixing the ambient components directly into the two-channel output {Lτ, Rτ} of the phase-amplitude encoder or into the input channels L and R of the encoding matrix 610 (whereas, in a prior-art encoding scheme, a significant amount of ambient signal energy would be mixed into the surround input channels of the encoding matrix).
FIG. 6C depicts an interactive phase-amplitude 3-D stereo encoder, according to a preferred embodiment of the invention. Each source Sm generates a primary sound component panned by a panning module 600 described previously and depicted in FIG. 6A, which assigns the localization (θm, φm) to the source signal. The output of each panning module 600 is added into the master multichannel bus 622 which feeds the encoding matrix 610 described previously and illustrated in FIG. 6B. Additionally, each source signal Sm generates a contribution 623 to the reverb send bus 624, which feeds a reverberation module 626, thereby producing the ambient sound component associated with the source signal Sm. The reverberation module 626 simulates the reverberation of a virtual room and generates two substantially uncorrelated reverberation signals by methods well known in the prior art, such as feedback delay networks. The two output signals of the reverberation module 626 are combined directly into the output {Lτ, Rτ} of the encoding matrix 610. The per-source processing module 623 that generates the primary sound component and the ambient sound component for each source signal Sm may include filtering and delaying modules 629 to simulate distance, air absorption, source directivity, or acoustic occlusion and obstruction effects caused by acoustic obstacles in the virtual scene, using methods known in the prior art.

IMPROVED PHASE-AMPLITUDE MATRIXED SURROUND DECODER
In accordance with one embodiment of the invention, provided is a frequency-domain method for phase-amplitude matrixed surround decoding of 2-channel stereo signals such as music recordings and movie or video game soundtracks, based on spatial analysis of 2-D or 3-D directional cues in the input signal and re-synthesis of these cues for reproduction on any headphone or loudspeaker playback system, using any chosen sound spatialization technique. As will be apparent in the following description, this invention enables the decoding of 3-D localization cues from two-channel audio recordings while preserving backward compatibility with prior-art two-channel horizontal-only phase-amplitude matrixed surround encoding-decoding techniques such as described previously.
The present invention uses a time/frequency analysis and synthesis framework to significantly improve the source separation performance of the matrixed surround decoder. The fundamental advantage of performing the analysis as a function of both time and frequency is that it significantly reduces the likelihood of concurrence or overlap of multiple sources in the signal representation, and thereby improves source separation. If the frequency resolution of the analysis is comparable to that of the human auditory system, the possible effects of any overlap of concurrent sources in the frequency-domain representation are substantially masked during reproduction of the decoder's output signal over headphones or loudspeakers.
By operating on frequency-domain signals and incorporating primary-ambient decomposition, a matrixed surround decoder according to the invention overcomes the limitations of prior-art matrix surround decoders in terms of diffuse ambience reproduction and directional source separation, and is able to analyze dominance information for primary sound components while avoiding confusion by the presence of ambient components in the scene, in order to accurately reproduce 2-D or 3-D positional cues via any spatial reproduction system. This enables a significant improvement in the spatial reproduction of two-channel matrix-encoded movie and game soundtracks or conventional stereo music recordings over headphones or loudspeakers.

FIG. 7A is a signal flow diagram illustrating a phase-amplitude matrixed surround decoder in accordance with one embodiment of the present invention. Initially, a time/frequency conversion takes place in block 702 according to any conventional method known to those of skill in the relevant arts, including but not limited to the use of a short-term Fourier transform (STFT) or any subband signal representation.
Next, in block 704, a primary-ambient decomposition occurs. This decomposition is advantageous because primary signal components (typically direct-path sounds) and ambient components (such as reverberation or applause) generally require different spatial synthesis strategies. The primary-ambient decomposition separates the two-channel input signal Sτ = {Lτ, Rτ} into a primary signal SP = {PL, PR} whose channels are mutually correlated and an ambient signal SA = {AL, AR} whose channels are mutually uncorrelated or weakly correlated, such that a combination of signals SP and SA reconstructs an approximation of signal Sτ and the contribution of ambient components existing in signal Sτ is significantly reduced in the primary signal SP. Frequency-domain methods for primary-ambient decomposition are described in the prior art, for instance by Merimaa et al. in "Correlation-Based Ambience Extraction from Stereo Recordings", presented at the 123rd Convention of the Audio Engineering Society (October 2007). The primary signal SP = {PL, PR} is then subjected to a localization analysis in block 706. For each time and frequency, the spatial analysis derives a spatial localization vector d representative of a physical position relative to the listener's head. This localization vector may be three-dimensional or two-dimensional, depending on the desired mode of reproduction of the decoder's output signal. In the three-dimensional case, the localization vector represents a position on a listening sphere centered on the listener's head, characterized by an azimuth angle θ and an elevation angle φ. In the two-dimensional case, the localization vector may be taken to represent a position on or within a circle centered on the listener's head in the horizontal plane, characterized by an azimuth angle θ and a radius r. This two-dimensional representation enables, for instance, the parametrization of fly-by and fly-through sound trajectories in a horizontal multichannel playback system. In the localization analysis block 706, the spatial localization vector d is derived, for each time and frequency, from the inter-channel amplitude and phase differences present in the signal SP. These inter-channel differences can be uniquely represented by a notional position (α, β) on the Scheiber sphere as illustrated in FIG. 2B, according to equations (17), where α denotes the amplitude panning angle and β denotes the inter-channel phase difference. According to equation (10) or (17), the panning angle α is related to the inter-channel level difference m = |PL| / |PR| by:

α = 2 tan⁻¹(1/m) − π/2 (24.)
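For illustration, a per-bin sketch of the cue extraction used in this analysis: α from the level difference (equation 24), the phase difference β, and its sign as later used in equation (28); the eps guard is an assumption added here to avoid division by zero.

```python
import numpy as np

def analyze_bin(PL, PR, eps=1e-12):
    """PL, PR: complex STFT bins of the primary signal."""
    m = (abs(PL) + eps) / (abs(PR) + eps)        # inter-channel level difference
    alpha = 2 * np.arctan(1.0 / m) - np.pi / 2   # equation (24)
    beta = np.angle(PL * np.conj(PR))            # inter-channel phase difference
    sign_beta = 1.0 if np.imag(PL * np.conj(PR)) >= 0 else -1.0
    return alpha, beta, sign_beta
```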
According to one embodiment of the invention, the operation of the localization analysis block 706 consists of computing the inter-channel amplitude and phase differences, followed by mapping from the notional position (α, β) on the Scheiber sphere to the direction (θ, φ) in the three-dimensional physical space or to the position (θ, r) in the two-dimensional physical space. In general, this mapping may be defined in an arbitrary manner and may even depend on frequency. According to another embodiment of the invention, the primary signal SP is modeled as a mixture of elementary monophonic source signals Sm according to the matrix encoding equations (9, 10) or (9, 17), where the notional encoding position (αm, βm) of each source is defined by a known bijective mapping from a two-dimensional or three-dimensional localization in a physical or virtual spatial sound scene. Such a mixture may be realized, for instance, by an audio mixing workstation or by an interactive audio rendering system such as found in video gaming systems and depicted in FIG. 1A or FIG. 6C. In such applications, it is advantageous to implement the localization analysis block 706 such that the derived localization vector is obtained by inversion of the mapping realized by the matrix encoding scheme, so that playback of the decoder's output signal faithfully reproduces the original spatial sound scene.
In another embodiment of the present invention, the localization analysis 706 is performed, at each time and frequency, by computing the dominance vector according to equations (5) and applying a mapping from the dominance vector position in the encoding circle to a physical position (θ, r) in the horizontal listening circle, as illustrated in FIG. 2A and exemplified in FIG. 5A or 5B. Alternatively, the dominance vector position may then be mapped to a three-dimensional localization (θ, φ) by vertical projection from the listening circle to the listening sphere as follows:

φ = sign(β) arccos(r) (25.)

where the sign of the inter-channel phase difference β is used to differentiate the upper hemisphere from the lower hemisphere.
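A minimal sketch of this projection, using r = cosφ from equation (16) and the hemisphere selection of equation (25); the direct arccos lift shown here is a simplification of the table-lookup inverse mapping preferred later in the text.

```python
import numpy as np

def dominance_to_3d(dx, dy, sign_beta):
    """Lift a dominance vector inside the encoding circle to the sphere."""
    theta = np.arctan2(dx, dy)              # azimuth, clockwise from front
    r = min(np.hypot(dx, dy), 1.0)
    phi = sign_beta * np.arccos(r)          # equation (25), since r = cos(phi)
    return theta, phi
```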
Block 708 realizes, in the frequency domain, the spatial synthesis of the primary components in the decoder output signal by applying to the primary signal SP the spatial cues 707 derived by the localization analysis 706. A variety of approaches may be used for the spatial synthesis (or "spatialization") of the primary components from a monophonic signal, including ambisonic or binaural techniques as well as conventional amplitude panning methods. In one embodiment of the present invention, a mono primary signal P to be spatialized is derived, at each time and frequency, by a conventional mono downmix where P = PL + PR. In another embodiment, the computation of the mono signal P uses downmix coefficients that depend on time and frequency by application of the passive decoding equation for the notional position (α, β) derived from the inter-channel amplitude and phase differences computed in the localization analysis block 706:

P = L*(α, β) PL + R*(α, β) PR (26.)

where L*(α, β) and R*(α, β) respectively denote the complex conjugates of the left and right encoding coefficients expressed by equations (17):

L*(α, β) = cos(α/2 + π/4) e^(−jβ/2)
R*(α, β) = sin(α/2 + π/4) e^(jβ/2). (27.)
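As a sketch, the adaptive downmix of equations (26, 27) reduces to applying the conjugate encoding coefficients of the analyzed notional position to each bin:

```python
import numpy as np

def passive_mono_downmix(PL, PR, alpha, beta):
    """Equation (26) with the conjugate coefficients of equations (27)."""
    Lc = np.cos(alpha / 2 + np.pi / 4) * np.exp(-1j * beta / 2)  # L*(alpha, beta)
    Rc = np.sin(alpha / 2 + np.pi / 4) * np.exp(1j * beta / 2)   # R*(alpha, beta)
    return Lc * PL + Rc * PR
```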
In general, the spatialization method used in the primary component synthesis block 708 should seek to maximize the discreteness of the perceived localization of spatialized sound sources. For ambient components, on the other hand, the spatial synthesis method, implemented in block 710, should seek to reproduce (or even enhance) the spatial spread or diffuseness of sound components. As illustrated in FIG. 7A, the ambient output signals generated in block 710 are added to the primary output signals generated in block 708. Finally, a frequency/time conversion takes place in block 712, such as through the use of an inverse STFT, in order to produce the decoder's output signal.
In an alternative embodiment of the present invention, the primary-ambient decomposition 704 and the spatial synthesis of ambient components 710 are omitted. In this case, the localization analysis 706 is applied directly to the input signal {Lτ, Rτ}.
In yet another embodiment of the present invention, the time/frequency conversion blocks 702 and 712 and the ambient processing blocks 704 and 710 are omitted. Despite these simplifications, a matrixed surround decoder according to the present invention can offer significant improvements over prior-art matrixed surround decoders, notably by enabling arbitrary 2-D or 3-D spatial mapping between the matrix-encoded signal representation and the reproduced sound scene.
Spatial analysis
The spatial analysis of the primary signal SP = {PL, PR} produces, at each time and frequency, a format-independent spatial localization vector d, characterized by an azimuth angle θ and an elevation angle φ or a radius r, to be used in the spatial synthesis of primary signal components, according to any chosen multi-channel audio output format or spatial reproduction technique.
In one embodiment, it is assumed that the input signal Sτ = {Lτ, Rτ} was encoded according to the phase-amplitude 3-D positional encoding method defined previously by equations (20, 21) or (21, 23) and illustrated in FIG. 6A and 6B, with the values of the encoder parameters θF, θs, σs and βτ known a priori. This defines a unique mapping from the localization d, characterized by (θ, φ) or (θ, r), to the dominance δ, characterized by (α, β), as illustrated by FIG. 5A or FIG. 5B. By application of the corresponding inverse mapping, the spatial analysis can recover, at each time and frequency, the localization d from the dominance δ computed by equations (5).
In a preferred embodiment, this inverse mapping operation is realized by a table-lookup method that returns the values of the azimuth angle θ and of the radius r given the coordinates δx and δy of the dominance vector δ. The lookup tables are generated as follows:
(a) For a high-density sampling of all possible localization values (θ, φ), with θ uniformly sampled within [0, 2π] and φ uniformly sampled within [0, π], calculate the left and right encoding coefficients Lτ(θ, φ) and Rτ(θ, φ) by applying equations (20, 21) or (21, 23), and derive the coordinates δx(θ, φ) and δy(θ, φ) of the dominance vector from Lτ(θ, φ) and Rτ(θ, φ) by applying equations (5).
(b) Define a sampling of the dominance positions in the encoding circle according to the modified dominance coordinate system (θ′, r′) centered on the 'Top' encoding position T (the dominance position that is reached when φ = 90° for any value of θ), such that, for r′ incrementing uniformly from 0 to 1, the dominance position increments linearly on a straight segment from the point T to a point on the edge of the encoding circle defined by the peripheral encoding equations (10) with θ′ as the azimuth angle. Form a first two-dimensional lookup table that returns the nearest sampled position (θ′, r′) for uniformly sampled values of δx and δy.
(c) For each of the sampled dominance positions (θ′, r′), record the localization value (θ, φ) corresponding to the nearest of the dominance positions obtained in step (a). For positions (θ′, r′) that fall beyond the side vertices (L-Ls) and (R-Rs), record φ = 0 and determine θ by selecting the nearest of the extension segments that connect each radial panning locus to its corresponding peripheral encoding position on the edge of the circle (dotted segments on FIG. 5A or 5B). Form a second two-dimensional lookup table that returns (θ, φ) for each of the sampled dominance positions (θ′, r′), with θ′ uniformly sampled within [0, 2π] and r′ uniformly sampled within [0, 1]. In the preferred embodiment, the inverse mapping operation for the spatial analysis of the localization (θ, φ) from the dominance (δx, δy) is performed in two steps, using the first table to derive (θ′, r′) and then the second table to obtain (θ, φ). The advantage of this two-step process is that it ensures high accuracy in the estimation of the localization coordinates θ and φ without employing extremely large lookup tables, despite the fact that the mapping function is heavily non-uniform and very "steep" in some regions of the encoding circle (as is visible in FIG. 5A or FIG. 5B).
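A hedged sketch of the table construction: for brevity it builds a single nearest-neighbour table on a uniform dominance grid rather than the preferred two-step (θ′, r′) refinement, and the grid sizes, the upper-hemisphere-only sampling, and the encode callback are illustrative assumptions.

```python
import numpy as np

def build_lookup(encode, n_theta=180, n_phi=46, n_dom=64):
    """encode(theta, phi) -> complex (Lt, Rt) encoding coefficients, e.g. per
    equations (20, 21). Returns an (n_dom, n_dom, 2) grid of (theta, phi)."""
    pts, locs = [], []
    for th in np.linspace(0, 2 * np.pi, n_theta, endpoint=False):
        for ph in np.linspace(0, np.pi / 2, n_phi):   # step (a), upper hemisphere
            Lt, Rt = encode(th, ph)
            C, S = np.sqrt(0.5) * (Lt + Rt), np.sqrt(0.5) * (Lt - Rt)
            dx = (abs(Rt)**2 - abs(Lt)**2) / (abs(Rt)**2 + abs(Lt)**2)
            dy = (abs(C)**2 - abs(S)**2) / (abs(C)**2 + abs(S)**2)
            pts.append((dx, dy)); locs.append((th, ph))
    pts, locs = np.array(pts), np.array(locs)
    grid = np.full((n_dom, n_dom, 2), np.nan)
    axis = np.linspace(-1, 1, n_dom)
    for i, dx in enumerate(axis):                     # steps (b)-(c), merged
        for j, dy in enumerate(axis):
            if dx * dx + dy * dy <= 1.0:
                k = int(np.argmin((pts[:, 0] - dx)**2 + (pts[:, 1] - dy)**2))
                grid[i, j] = locs[k]
    return grid
```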
In an embodiment of the spatial analysis for a 2-D matrixed stereo decoder, the 2-D localization (θ, r) is derived from (θ, φ) by taking r = cosφ. In a preferred embodiment of the spatial analysis for a 3-D phase-amplitude stereo decoder, the sign of the inter-channel phase difference β, denoted sign(β), is computed in order to select the upper or lower hemisphere, and φ is replaced by its opposite if β is negative. The sign of β may be computed from the complex values of the signals PL and PR at each time and frequency, without explicitly computing their phase difference β:

sign(β) = sign( Im(PL PR*) ) (28.)

where sign( . ) is −1 for a strictly negative value and 1 otherwise, Im( . ) denotes the imaginary part, and * denotes complex conjugation.
Spatial synthesis
FIG. 7B is a signal flow diagram depicting a phase-amplitude matrixed surround decoder for multichannel loudspeaker reproduction, in accordance with one embodiment of the present invention. The time/frequency conversion in block 702, primary-ambient decomposition in block 704 and localization analysis in block 706 are performed as described earlier. Given the time- and frequency-dependent spatial localization cues in block 707, the spatial synthesis of primary components in block 708 renders the primary signal SP = {PL, PR} to N output channels, where N corresponds to the number of transducers in block 714. In the embodiment of FIG. 7B, N = 4, but the synthesis is applicable to any number of output channels. Furthermore, the spatial synthesis of ambient components in block 710 renders the ambient signal SA = {AL, AR} to the same N output channels.
In one embodiment of block 705, the primary passive upmix forms a mono downmix of its input signal SP = {PL, PR} and populates each of its output channels with this downmix. In one embodiment, the mono primary downmix signal, denoted as P, is derived by applying the passive decoding equation (26) for the time- and frequency-dependent encoding position (α, β) on the Scheiber sphere determined by the computed dominance vector δ and sign(β) in the spatial analysis block 706. The spatial synthesis then consists of re-weighting the output channels of block 705 in block 709, at each time and frequency, with gain factors computed based on the spatial cues 707, that is d = (θ, r) or d = (θ, φ). Using an intermediate mono downmix when upmixing a two-channel signal can lead to undesired spatial "leakage" or cross-talk: signal components present exclusively in the left input channel PL may contribute to output channels on the right side as a result of spatial ambiguities due to frequency-domain overlap of concurrent sources. Although such overlap can be minimized by appropriate choice of the frequency-domain representation, it is preferable to minimize its potential impact on the reproduced scene by populating the output channels with a set of signals that preserves the spatial separation already provided in the decoder's input signal. In another embodiment of block 705, the primary passive upmix performs a passive matrix decoding into the N output signals according to equation (4) as:

Pn = L*(αn, βn) PL + R*(αn, βn) PR for n = 1...N (29.)

where (αn, βn) corresponds to the notional position of output channel n on the Scheiber sphere. The resulting N signals are then re-weighted in block 709 with gain factors computed based on the spatial cues 707. In one embodiment of block 709, the gain factors for each channel are determined by deriving multichannel panning coefficients at each time and frequency based on the localization vector d and on the output format, which may be provided by user input or determined by automated estimation. In the case where the decoder's input signal Sτ = {Lτ, Rτ} is a matrix-encoded signal generated according to an embodiment of the invention, and the decoder's output format exactly corresponds to the 4-channel layout (Ls, L, R, Rs) characterized by the front-channel azimuth angle θF and the surround-channel azimuth angle θs, then an embodiment of the spatial synthesis block 708 generating a mono downmix signal in block 705 according to equations (26, 27), and panning this downmix signal over the output channels (Ls, L, R, Rs) in block 709 according to the 2-D peripheral-radial panning method described previously, can reconstruct the original set of primary signal components {Ls, L, R, Rs} as if no intermediate matrix encoding-decoding had taken place (assuming that the primary-ambient decomposition 704 has successfully extracted all ambient signal components from the signal SP = {PL, PR} and assuming that concurrent sound sources are perfectly separated in the chosen time-frequency signal representation).
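A per-bin sketch of the passive upmix of equation (29) followed by the re-weighting of block 709; the notional channel positions and the gain rule are inputs assumed by this example.

```python
import numpy as np

def passive_upmix(PL, PR, positions):
    """positions: list of (alpha_n, beta_n), one notional position per output
    channel; returns the N signals of equation (29) for one bin."""
    out = []
    for a, b in positions:
        Lc = np.cos(a / 2 + np.pi / 4) * np.exp(-1j * b / 2)
        Rc = np.sin(a / 2 + np.pi / 4) * np.exp(1j * b / 2)
        out.append(Lc * PL + Rc * PR)
    return np.array(out)

def reweight(channels, gains):
    """Block 709: apply per-bin gains derived from the localization cues 707."""
    return np.asarray(gains) * channels
```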
Similarly, an embodiment of the frequency-domain spatial synthesis block 708 according to the invention may be realized using any sound spatialization or positional audio rendering technique whereby a mono signal is assigned a 3-D localization (θ, φ) on the listening sphere or a 2-D localization (θ, r) on the listening circle, for spatial reproduction over loudspeakers or headphones. Such spatialization techniques include, but are not limited to, amplitude panning techniques (such as VBAP), binaural techniques, ambisonic techniques, and wave-field synthesis techniques.
Methods for frequency-domain spatial synthesis using amplitude panning techniques are described in more detail in U.S. Patent Application Ser. No. 11/750,300, entitled "Spatial Audio Coding Based on Universal Spatial Cues". Methods for frequency-domain spatial synthesis using binaural, ambisonic, wave-field synthesis or other spatialization techniques based on inter-channel amplitude and phase differences are described further in U.S. Patent Application Ser. No. 12/243,963, entitled "Spatial Audio Analysis and Synthesis for Binaural Reproduction and Format Conversion", attorney docket no. CLIP227US, filed Oct. 1, 2008 and incorporated by reference.

Block 710 in FIG. 7B illustrates one embodiment of the spatial synthesis of ambient components. In general, the spatial synthesis of ambience should seek to reproduce (or even enhance) the spatial spread or diffuseness of the corresponding sound components. In block 711, the ambient passive upmix first distributes the ambient signals [AL, AR] to each output signal of the block, based on the given output format. In one embodiment, the left-right separation is maintained for pairs of output channels that are symmetric in the left-right direction; that is, AL is distributed to the left channel and AR to the right channel of such a pair. For non-symmetric channel configurations, passive upmix coefficients for the signals [AL, AR] may be obtained by applying equation (29) to [AL, AR] instead of [PL, PR]. Each channel is then weighted so that the total energy of the output signals matches that of the input signals, and so that the resulting Gerzon energy vector, computed according to equations (6) and (8), has zero magnitude. The weighting coefficients can be computed once, based on the output format alone, by assuming that AL and AR have the same energy and applying methods specified in U.S. Patent Application Ser. No. 11/750,300, entitled "Spatial Audio Coding Based on Universal Spatial Cues", incorporated herein by reference.

A perceptually accurate multichannel spatial reproduction of the ambient components over loudspeakers requires that the ambient output signals be mutually uncorrelated. This may be achieved by applying all-pass (or substantially all-pass) "decorrelation filters" (or "decorrelators") to at least some of the ambient output channel signals before combination with the primary output channel signals. In one embodiment of the spatial synthesis of ambient components in block 710 of FIG. 7B, the passively upmixed ambient signals are decorrelated in block 713. In one embodiment of block 713, depending on the operation of the passive upmix block 711, all-pass filters are applied to a subset of the ambient channels such that all output channels of block 713 are mutually uncorrelated. Any other decorrelation method known to those of skill in the relevant arts is similarly viable, and the decorrelation processing may also include delay elements.
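By way of example, one common choice of substantially all-pass decorrelator is a Schroeder all-pass section; the sketch below applies one such section to an ambient channel. The delay and gain values are illustrative assumptions, not values specified by this document.

    import numpy as np
    from scipy.signal import lfilter

    def schroeder_allpass(x, delay, g=0.7):
        # One all-pass section H(z) = (-g + z^-D) / (1 - g*z^-D): unit
        # magnitude response, frequency-dependent phase. Applying sections
        # with distinct delays to different ambient channels yields mutually
        # (near-)uncorrelated outputs with an unchanged magnitude spectrum.
        b = np.zeros(delay + 1); b[0] = -g; b[-1] = 1.0
        a = np.zeros(delay + 1); a[0] = 1.0; a[-1] = -g
        return lfilter(b, a, x)

    # e.g. decorrelate the right ambient channel against the left one:
    # AR_decorrelated = schroeder_allpass(AR, delay=113)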
Finally, the primary and ambient signals corresponding to each of the N output channels are summed and converted to the time domain in block 712. The time-domain signals are then directed to the N transducers 714. The matrixed surround decoding methods described result in a significant improvement in the spatial quality of reproduction of 2-channel Dolby Surround movie soundtracks over headphones or loudspeakers. Indeed, this invention enables a listening experience that closely approximates that provided by direct discrete multichannel reproduction, or by discrete multichannel encoding-decoding technology such as Dolby Digital or DTS. Furthermore, the decoding methods described enable faithful reproduction of the original spatial sound scene not only over the originally assumed target multichannel loudspeaker layout, but also over headphones or loudspeakers with full flexibility in the number of output channels, their layout, and the spatial rendering technique.

IMPROVED MULTI-CHANNEL MATRIXED SURROUND ENCODER
FIG. 8 is a signal flow diagram illustrating a phase-amplitude stereo encoder in accordance with one embodiment of the present invention, where a multi-channel source signal is provided in a known spatial audio recording format. Initially, a time/frequency conversion takes place in block 802; for example, the frequency-domain representation may be generated using an STFT. Next, in block 804, primary-ambient decomposition takes place, according to any known or conventional method. Matrix encoding of the primary components of the signal occurs in block 806, followed by the addition of the ambient signals. Finally, in block 808, a frequency/time conversion takes place, such as through the use of an inverse STFT. This method ensures that ambient signal components are encoded in the form of an uncorrelated signal pair, so that a matrix decoder will render them with an adequately diffuse spatial distribution.
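As an illustration only, the following Python sketch wires these blocks together. The functions decompose and encode_primary are hypothetical placeholders standing in for the primary-ambient decomposition of block 804 and the primary matrix encoding of block 806, which are specified elsewhere in this document.

    import numpy as np
    from scipy.signal import stft, istft

    def encode_pipeline(x, fs, decompose, encode_primary):
        # x : multi-channel source signal, shape (channels, samples)
        f, t, X = stft(x, fs=fs, nperseg=1024)          # block 802: STFT
        P, (AL, AR) = decompose(X)                      # block 804: primary-ambient split (assumed)
        LT, RT = encode_primary(P)                      # block 806: matrix encode primary (assumed)
        LT, RT = LT + AL, RT + AR                       # add the uncorrelated ambient pair
        _, y = istft(np.stack([LT, RT]), fs=fs, nperseg=1024)  # block 808: inverse STFT
        return y                                        # 2-channel encoded output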
In one embodiment, the multi-channel source signal is a 5-channel signal in the standard "3-2 stereo" format (Ls, L, C, R, Rs) corresponding to the loudspeaker layout depicted in FIG. 1A, and the matrix encoding of primary components in block 806 is performed according to equations (1) applied at each time and frequency. In an alternative embodiment, the multi-channel source signal is provided in a P-channel format (C1, C2, ..., CP) where each channel Cp is intended for reproduction by a loudspeaker located at localization (θp, φp), and the matrix encoding in block 806 is performed by:
LT = Σp L(ap, βp) Cp
RT = Σp R(ap, βp) Cp (30)

where (ap, βp) is derived by mapping each localization (θp, φp) to its corresponding notional encoding position on the Scheiber sphere, and the phase-amplitude encoding coefficients L(ap, βp) and R(ap, βp) are given by equations (17). Alternatively, the encoding coefficients may be derived by equations (20) or by any chosen localization-to-dominance mapping convention.
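A minimal frequency-domain sketch of equation (30) is given below. The complex coefficient vectors are assumed to have been precomputed by the localization-to-encoding-position mapping of equations (17) or (20), which is not reproduced here.

    import numpy as np

    def matrix_encode(C, L_coef, R_coef):
        # C              : complex STFT tiles of the P source channels,
        #                  shape (P, frames, bins).
        # L_coef, R_coef : length-P complex encoding coefficients
        #                  L(ap, bp), R(ap, bp) (assumed precomputed).
        # Equation (30): LT = sum_p L(ap, bp) * Cp ; RT = sum_p R(ap, bp) * Cp
        LT = np.tensordot(L_coef, C, axes=(0, 0))
        RT = np.tensordot(R_coef, C, axes=(0, 0))
        return LT, RT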
In other embodiments of the primary matrix encoding block 806, the spatial localization cues (θ, φ) are derived, at each time and frequency, by spatial analysis of the primary multi-channel signal, and the phase-amplitude encoding coefficients L(a, β) and R(a, β) are obtained by mapping (θ, φ) to (a, β), as described earlier. In one embodiment, this mapping is realized by applying, at each time and frequency, the encoding scheme described by equations (20, 21) or (21, 23) and FIG. 6A-6B. The spatial analysis may be performed by various methods, including the DirAC method or the spatial analysis method described in copending U.S. Patent Application Ser. No. 11/750,300, entitled "Spatial Audio Coding Based on Universal Spatial Cues".
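For illustration, the following sketch computes the raw per-tile inter-channel cues on which such a spatial analysis may operate; the subsequent mapping of these cues to (θ, φ) follows the document's Scheiber-sphere conventions and is not reproduced here.

    import numpy as np

    def interchannel_cues(LT, RT):
        # LT, RT : complex STFT tiles of the two-channel signal,
        #          shape (frames, bins).
        amp = np.arctan2(np.abs(RT), np.abs(LT))   # inter-channel level cue, in [0, pi/2]
        phase = np.angle(LT * np.conj(RT))         # inter-channel phase difference, in (-pi, pi]
        return amp, phase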
Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims.
Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.

Claims

What is claimed is:
1. A method for two-channel phase-amplitude stereo encoding of at least one audio source signal assigned a localization relative to a listener position, the method comprising:
scaling the at least one audio source signal by panning coefficients derived from the localization to generate a multi-channel signal corresponding to a desired multi-channel format; and
matrix encoding the multi-channel signal to generate a 2-channel encoded signal such that the localization of the at least one source is represented by inter-channel phase and amplitude differences in the 2-channel encoded signal;
such that the total power of the contribution of the source in the 2-channel encoded signal is equal to the power of the audio source signal regardless of the assigned localization.
2. The method as recited in claim 1, wherein the scaling of the at least one audio source signal is performed by frequency-independent encoding coefficients derived from the localization to generate a 2-channel encoded signal such that the position of the at least one source is represented by inter-channel phase and amplitude differences in the 2-channel encoded signal; further comprising generating a first unlocalized audio signal and a second unlocalized audio signal from an unlocalized audio source signal such that the first and second audio signals are substantially uncorrelated; and wherein the localization includes an azimuth angle and an elevation angle.
3. The method as recited in claim 1, wherein the panning coefficients are derived from the azimuth angle by the use of vector based amplitude panning (VBAP) techniques.
4. The method as recited in claim 1 wherein the scaling accommodates a top channel corresponding to an upper hemisphere located above the listening plane and a bottom channel located below the listening plane.
5. The method as recited in claim 1, wherein the scaling results in a six-channel signal and wherein the six-channel signal is matrix encoded into a two-channel phase-amplitude stereo encoded signal.
6. The method as recited in claim 1, wherein the at least one audio source signal comprises a plurality of sources and wherein the scaled multi-channel signal for each source is combined prior to matrix encoding.
7. A method for two-channel phase-amplitude stereo encoding of at least one localized audio source signal assigned a localization relative to a listener position and at least one unlocalized audio source signal, the method comprising:
scaling the at least one localized audio source signal by frequency-independent encoding coefficients derived from the localization to generate a 2-channel encoded signal such that the position of the at least one source is represented by inter-channel phase and amplitude differences in the 2-channel encoded signal;
generating a first unlocalized audio signal and a second unlocalized audio signal from the unlocalized audio source signal such that the first and second audio signals are substantially uncorrelated; and
adding the first and second audio signals respectively to the first and second encoded channel signals.
8. A method for two-channel phase-amplitude stereo encoding of at least one localized audio source signal assigned a localization in three dimensions relative to a listener, the method comprising:
scaling the at least one localized audio source signal by frequency-independent encoding coefficients derived from the localization to generate a 2-channel encoded signal such that the position of the at least one source is represented by inter-channel phase and amplitude differences in the 2-channel encoded signal; and
generating a first unlocalized audio signal and a second unlocalized audio signal from an unlocalized audio source signal such that the first and second audio signals are substantially uncorrelated;
such that the localization includes an up-down dimension, a left-right dimension and a front-back dimension.
9. A method for deriving three-dimensional encoded localization cues from an audio input signal having a first channel signal and a second channel signal comprising:
(a) converting the first and second channel signals to a frequency-domain or subband representation comprising a plurality of time-frequency tiles; and
(b) deriving a direction for each time-frequency tile in the plurality by considering the inter-channel amplitude difference and the inter-channel phase difference between the first channel signal and the second channel signal;
such that the localization cues include an up-down dimension, a left-right dimension and a front-back dimension.
10. The method as recited in claim 9 wherein the localization cues include an azimuth angle and an elevation angle.
11. The method recited in claim 9, wherein deriving the localization for each time-frequency tile includes mapping the inter-channel differences to a position on a notional sphere or within a notional circle, such that the inter-channel phase difference maps to a position coordinate along a front-back axis.
12. The method recited in claim 9, wherein the input signal is obtained by phase-amplitude matrix encoding of a multichannel recording having multichannel spatial cues, and the derived encoded spatial cues substantially match the multichannel spatial cues of the multichannel recording.
13. The method recited in claim 9 further comprising separating ambient sound components from primary sound components in the audio input signal and deriving the direction for the primary sound components only.
14. The method as recited in claim 9, further comprising decomposing the frequency-domain signal into primary and ambient components and determining, for each time and frequency of the primary component, a spatial localization vector representative of a physical position relative to the listener's head, the localization vector characterized by at least an azimuth angle, wherein the azimuth angle is derived for each time and frequency from the inter-channel phase and amplitude differences present in the primary component of the stereo signal.