WO2006004605A2 - Multi-pass video encoding - Google Patents

Multi-pass video encoding

Info

Publication number
WO2006004605A2
WO2006004605A2 (PCT/US2005/022616, US2005022616W)
Authority
WO
WIPO (PCT)
Prior art keywords
encoding
image
images
complexity
readable medium
Prior art date
Application number
PCT/US2005/022616
Other languages
French (fr)
Other versions
WO2006004605A3 (en)
WO2006004605B1 (en)
Inventor
Xin Tong
Hsi-Jung Wu
Thomas Pun
Adriana Dumitras
Barin Haskell
Jim Normile
Original Assignee
Apple Computer, Inc.
Priority date
Filing date
Publication date
Priority claimed from US 11/118,616 (US8406293B2)
Priority claimed from US 11/118,604 (US8005139B2)
Application filed by Apple Computer, Inc.
Priority to KR1020067017074A (KR100909541B1)
Priority to CN2005800063635A (CN1926863B)
Priority to JP2007518338A (JP4988567B2)
Priority to EP05773224A (EP1762093A4)
Publication of WO2006004605A2
Publication of WO2006004605A3
Publication of WO2006004605B1
Priority to HK07106057.0A (HK1101052A1)

Classifications

    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
                    • H04N19/10: using adaptive coding
                        • H04N19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
                            • H04N19/124: Quantisation
                                • H04N19/126: Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
                        • H04N19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
                            • H04N19/136: Incoming video signal characteristics or properties
                                • H04N19/137: Motion inside a coding unit, e.g. average field, frame or block difference
                                • H04N19/14: Coding unit complexity, e.g. amount of activity or edge presence estimation
                            • H04N19/142: Detection of scene cut or scene change
                            • H04N19/146: Data rate or code amount at the encoder output
                                • H04N19/15: by monitoring actual compressed data size at the memory before deciding storage at the transmission buffer
                                • H04N19/152: by measuring the fullness of the transmission buffer
                            • H04N19/154: Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
                        • H04N19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
                            • H04N19/17: the unit being an image region, e.g. an object
                                • H04N19/172: the region being a picture, frame or field
                                • H04N19/176: the region being a block, e.g. a macroblock
                            • H04N19/177: the unit being a group of pictures [GOP]
                        • H04N19/189: characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
                            • H04N19/192: the adaptation method, adaptation tool or adaptation type being iterative or recursive

Definitions

  • Video encoders encode a sequence of video images (e.g., video frames) by using a variety of encoding schemes.
  • Video encoding schemes typically encode video frames or portions of video frames (e.g., sets of pixels in the video frames) in terms of intraframes or interframes.
  • An intraframe encoded frame or pixel set is one that is encoded independently of other frames or pixel sets in other frames.
  • An interframe encoded frame or pixel set is one that is encoded by reference to one or more other frames or pixel sets in other frames.
  • When compressing video frames, some encoders implement a 'rate controller,' which provides a 'bit budget' for a video frame or a set of video frames that are to be encoded.
  • the bit budget specifies the number of bits that have been allocated to encode the video frame or set of video frames.
  • the rate controller attempts to generate the highest quality compressed video stream in view of certain constraints (e.g., a target bit rate, etc.).
  • a single-pass rate controller provides bit budgets for an encoding scheme that encodes a series of video images in one pass
  • a multi-pass rate controller provides bit budgets for an encoding scheme that encodes a series of video images in multiple passes.
  • Single-pass rate controllers are useful in real-time encoding situations, while multi-pass rate controllers optimize the encoding for a particular bit rate based on a set of constraints.
  • Few rate controllers to date consider the spatial or temporal complexity of frames or of pixel sets within the frames in controlling the bit rates of their encodings. Also, most multi-pass rate controllers do not adequately search the solution space for encoding solutions that use optimal quantization parameters for frames and/or pixel sets within frames in view of a desired bit rate.
  • Therefore, there is a need in the art for a rate controller that uses novel techniques to consider the spatial or temporal complexity of video images and/or portions of video images while controlling the bit rate for encoding a set of video images.
  • There is also a need for a multi-pass rate controller that adequately examines the available encoding solutions to identify one that uses an optimal set of quantization parameters for video images and/or portions of video images.
  • Some embodiments of the invention provide a multi-pass encoding method that encodes several images (e.g., several frames of a video sequence).
  • the method iteratively performs an encoding operation that encodes these images.
  • the encoding operation is based on a nominal quantization parameter, which the method uses to compute quantization parameters for the images.
  • the method uses several different nominal quantization parameters.
  • the method stops its iterations when it reaches a terminating criterion (e.g., it identifies an acceptable encoding of the images).
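As a compact illustration of this iterate-until-acceptable loop, here is a minimal Python sketch. It is not the patent's code: the encode_fn callback, the 0..51 QP range, and the step rule are illustrative assumptions.

```python
def multi_pass_encode(encode_fn, target_bits, qp0, tol=0.01, max_passes=10):
    """Re-encode with different nominal QPs until the bit budget is met.

    encode_fn(nominal_qp) is a hypothetical callback that runs one encoding
    pass and returns the number of bits it produced.
    """
    qp = qp0
    bits = encode_fn(qp)
    for _ in range(max_passes):
        error = (bits - target_bits) / target_bits   # signed fractional error
        if abs(error) <= tol:                        # terminating criterion met
            break
        # Coarse adjustment: raise the nominal QP when over budget (to spend
        # fewer bits), lower it when under budget (to spend more bits).
        qp = max(0, min(51, qp + (1 if error > 0 else -1)))
        bits = encode_fn(qp)
    return qp, bits
```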
  • Some embodiments of the invention provide a method for encoding video sequences.
  • the method identifies a first attribute quantifying the complexity of a first image in the video. It also identifies a quantization parameter for encoding the first image based on the identified first attribute.
  • the method then encodes the first image based on the identified quantization parameter. In some embodiments, this method performs these three operations for several images in the video.
  • Some embodiments of the invention encode a sequence of video images based on "visual masking" attributes of the video images and/or portions of the video images.
  • Visual masking of an image or a portion of an image is an indication of how much coding artifact/noise can be tolerated in the image or image portion.
  • some embodiments compute a visual masking strength that quantifies the brightness energy of the image or the image portion.
  • the brightness energy is measured as a function of the average luma or pixel energy of the image or image portion.
  • the visual masking strength of an image or image portion might also quantify activity energy of the image or image portion.
  • the activity energy expresses the complexity of the image or image portion.
  • the activity energy includes a spatial component that quantifies the spatial complexity of the image or image portion, and/or a motion component that quantifies the amount of distortion that can be tolerated/masked due to motion between images.
  • Some embodiments of the invention provide a method for encoding video sequences.
  • the method identifies a visual-masking attribute of a first image in the video. It also identifies a quantization parameter for encoding the first image based on the identified visual-masking attribute.
  • the method then encodes the first image based on the identified quantization parameter.
  • Figure 1 presents a process that conceptually illustrates the encoding method of some embodiments of the invention.
  • Figure 2 conceptually illustrates a codec system of some embodiments.
  • Figure 3 is a flow chart illustrating the encoding process of some embodiments.
  • Figure 4a is a plot of the difference between the nominal removal time and the final arrival time of images versus image number, illustrating an underflow condition in some embodiments.
  • Figure 4b illustrates the same plot for the images shown in Figure 4a after the underflow condition is eliminated.
  • Figure 5 illustrates a process that the encoder uses to perform underflow detection in some embodiments.
  • Figure 6 illustrates a process that the encoder utilizes to eliminate an underflow condition in a single segment of images in some embodiments.
  • Figure 7 illustrates an application of buffer underflow management in a video streaming application.
  • Figure 8 illustrates an application of buffer underflow management in an HD-DVD system.
  • Figure 9 presents a computer system with which one embodiment of the invention is implemented.
  • R_T represents the target bit rate, which is the desired bit rate for encoding a sequence of frames. Typically, this bit rate is expressed in bits/second and is calculated from the desired final file size, the number of frames in the sequence, and the frame rate.
  • R_p represents the bit rate of the encoded bit stream at the end of a pass p.
  • E_p represents the percentage of error in the bit rate at the end of pass p. In some embodiments, this percentage is calculated as E_p = 100 * (R_p - R_T) / R_T.
  • ε represents the error tolerance in the final bit rate.
  • ε_C represents the error tolerance in the bit rate for the first QP search stage.
  • QP represents the quantization parameter.
  • QP_Nom(p) represents the nominal quantization parameter that is used in pass p encoding for a sequence of frames. The value of QP_Nom(p) is adjusted by the invention's multi-pass encoder in a first QP adjustment stage to reach the target bit rate.
  • MQP_p(k) represents the masked frame QP, which is the quantization parameter (QP) for a frame k in pass p. Some embodiments compute this value by using the nominal QP and frame-level visual masking.
  • MQP_MB(p)(k, m) represents the masked macroblock QP, which is the quantization parameter for an individual macroblock (with a macroblock index m) in a frame k and a pass p. Some embodiments compute MQP_MB(p)(k, m) by using MQP_p(k) and macroblock-level visual masking.
  • φ_F(k) represents a value referred to as the masking strength for frame k. The masking strength φ_F(k) is a measure of complexity for the frame and, in some embodiments, this value is used to determine how visible coding artifacts/noise would appear and to compute the MQP_p(k) of frame k.
  • φ_R(p) represents the reference masking strength in pass p. This masking strength is used to compute MQP_p(k) of frame k, and it is adjusted by the invention's multi-pass encoder in a second stage to reach the target bit rate.
  • φ_MB(k, m) represents the masking strength for a macroblock with an index m in frame k. The masking strength φ_MB(k, m) is a measure of complexity for the macroblock.
  • AMQP_p represents the average masked QP over frames in pass p. In some embodiments, this value is computed as the average of MQP_p(k) over all frames in pass p.
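To make the notation concrete, here is a small Python sketch of the two bookkeeping quantities defined above (the function names are illustrative, not from the patent):

```python
def target_bit_rate(final_file_size_bytes, num_frames, frame_rate):
    """R_T in bits/second: total bits divided by the sequence duration."""
    duration_seconds = num_frames / frame_rate
    return (final_file_size_bytes * 8) / duration_seconds

def bit_rate_error_pct(r_p, r_t):
    """E_p = 100 * (R_p - R_T) / R_T, per the definition above."""
    return 100.0 * (r_p - r_t) / r_t
```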
  • Some embodiments of the invention provide an encoding method that achieves the best visual quality for encoding a sequence of frames at a given bit rate.
  • In some embodiments, this method uses a visual masking process that assigns a quantization parameter QP to every macroblock. This assignment is based on the realization that coding artifacts/noise in brighter or spatially complex areas of an image or a video frame are less visible than those in darker or flat areas.
  • In some embodiments, this visual masking process is performed as part of an inventive multi-pass encoding process. This encoding process adjusts a nominal quantization parameter and controls the visual masking process through a reference masking strength parameter φ_R, in order to have the final encoded bit stream reach the target bit rate.
  • Adjusting the nominal quantization parameter and controlling the masking algorithm adjusts the QP values for each picture (i.e., each frame in typical video encoding schemes) and for each macroblock within each picture.
  • In some embodiments, the multi-pass encoding process globally adjusts the nominal QP and φ_R for the entire sequence. In other embodiments, this process adjusts these parameters for individual portions of the sequence.
  • In some embodiments, the method has three stages of encoding: (1) an initial analysis stage that is performed in pass 0, (2) a first search stage that is performed in pass 1 through pass N_1, and (3) a second search stage that is performed in pass N_1+1 through pass N_1+N_2.
  • In the initial analysis stage, the method identifies an initial value for the nominal QP (QP_Nom(1), to be used in pass 1 of the encoding). During the initial analysis stage, the method also identifies a value of the reference masking strength φ_R, which is used in all the passes of the first search stage.
  • In the first search stage, the method performs N_1 iterations (i.e., N_1 passes) of an encoding process.
  • During each pass p, for each frame k, the process encodes the frame by using a particular quantization parameter MQP_p(k) and particular quantization parameters MQP_MB(p)(k, m) for the individual macroblocks m within the frame k, where MQP_MB(p)(k, m) is computed using MQP_p(k).
  • The quantization parameter MQP_p(k) changes between passes, as it is derived from a nominal quantization parameter QP_Nom(p) that changes between passes.
  • At the end of each pass p, the process computes a nominal QP_Nom(p+1) for pass p+1.
  • The nominal QP_Nom(p+1) is based on the nominal QP value(s) and bit rate error(s) from previous pass(es).
  • In some embodiments, the nominal QP_Nom(p+1) value is computed differently at the end of pass 1 than at the end of the subsequent passes in the first search stage.
  • In the second search stage, the method performs N_2 iterations (i.e., N_2 passes) of the encoding process.
  • As in the first stage, the process encodes each frame k during each pass p by using a particular quantization parameter MQP_p(k) and particular quantization parameters MQP_MB(p)(k, m) for the individual macroblocks m within the frame k, where MQP_MB(p)(k, m) is derived from MQP_p(k).
  • The quantization parameter MQP_p(k) again changes between passes. In this stage, however, this parameter changes as it is computed using a reference masking strength φ_R(p) that changes between passes.
  • The reference masking strength φ_R(p) is computed based on the bit rate error(s) and reference masking strength value(s) from previous pass(es); accordingly, this reference masking strength is computed to be a different value at the end of each pass in the second search stage.
  • Although the multi-pass encoding process is described in conjunction with the visual masking process, one of ordinary skill in the art will realize that an encoder does not need to use both of these processes together.
  • For instance, the multi-pass encoding process can be used to encode a bitstream near a given target bit rate without visual masking, by ignoring φ_R and omitting the second search stage.
  • Given a nominal quantization parameter, the visual masking process first computes a masked frame quantization parameter (MQP) for each frame using the reference masking strength (φ_R) and the frame masking strength (φ_F). This process then computes a masked macroblock quantization parameter (MQP_MB) for each macroblock using the frame-level and macroblock-level masking strengths.
  • The reference masking strength (φ_R) in some embodiments is identified during the initial analysis stage.
  • In some embodiments, the frame masking strength is given by equation (A) below:
    φ_F(k) = C * power(E * avgFrameLuma(k), β) * power(D * avgFrameSAD(k), β_F),   (A)
    where:
  • avgFrameSAD(k) is the average of MbSAD(k, m) over all macroblocks in frame k;
  • MbSAD(k, m) is the sum of the values given by a function Calc4x4MeanRemovedSAD(4x4_block_pixel_values) for all 4x4 blocks in the macroblock with index m; and
  • Calc4x4MeanRemovedSAD(4x4_block_pixel_values) { calculate the mean of the pixel values in the given 4x4 block; subtract the mean from the pixel values and compute their absolute values; sum the absolute values obtained in the previous step; return the sum; }
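The pseudocode above translates directly into Python; a sketch (with illustrative function names) for one 4x4 block and for a 16x16 macroblock made of sixteen such blocks:

```python
def calc_4x4_mean_removed_sad(pixels):
    """Mean-removed SAD of one 4x4 block, given its 16 pixel values:
    compute the block mean, subtract it from each pixel, and sum the
    absolute deviations."""
    mean = sum(pixels) / 16.0
    return sum(abs(p - mean) for p in pixels)

def mb_sad(blocks_4x4):
    """MbSAD(k, m): the sum of Calc4x4MeanRemovedSAD over all 4x4 blocks
    of the macroblock (16 blocks for a 16x16 macroblock)."""
    return sum(calc_4x4_mean_removed_sad(block) for block in blocks_4x4)
```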
  • In some embodiments, the frame's combined activity attribute is given by equation (C) below, where the temporal term follows the same scalar-and-power pattern as the spatial term:
    Activity_Attribute = G * power(D * Spatial_Activity_Attribute, exponent_beta) + H * power(E * Temporal_Activity_Attribute, exponent_gamma)   (C)
  • The Temporal_Activity_Attribute quantifies the amount of distortion that can be tolerated (i.e., masked) due to motion between frames.
  • In some embodiments, the Temporal_Activity_Attribute of a frame equals a constant times the sum of the absolute values of the motion-compensated error signal of pixel regions defined within the frame.
  • In other embodiments, the Temporal_Activity_Attribute is provided by equation (D) below:
    Temporal_Activity_Attribute = Σ_{j=-N..-1} W_j * avgFrameSAD(j) + Σ_{j=1..M} W_j * avgFrameSAD(j) + avgFrameSAD(0),   (D)
    where:
  • avgFrameSAD expresses (as described above) the average macroblock SAD (MbSAD(k, m)) value in a frame;
  • avgFrameSAD(0) is the avgFrameSAD of the current frame; and
  • negative j indexes time instances before the current frame and positive j indexes time instances after the current frame.
  • The variables N and M refer to the numbers of frames that are respectively before and after the current frame. Instead of simply selecting the values N and M based on a particular number of frames, some embodiments compute the values N and M based on particular durations of time before and after the time of the current frame. Correlating the motion masking to temporal durations is more advantageous than correlating it to a set number of frames, because the correlation with temporal durations is directly in line with the viewer's time-based visual perception. The correlation of such masking with the number of frames, on the other hand, suffers from a variable display duration, as different displays present video at different frame rates.
  • In equation (D), "W" refers to a weighting factor, which, in some embodiments, decreases as the frame j gets further from the current frame. Also, in this equation, the first summation expresses the amount of motion that can be masked before the current frame, the second summation expresses the amount of motion that can be masked after the current frame, and the last term (avgFrameSAD(0)) expresses the frame SAD of the current frame.
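A direct transcription of equation (D) into Python (a sketch; the container format and weight ordering are assumptions):

```python
def temporal_activity(avg_frame_sad, weights_past, weights_future):
    """Equation (D): weighted look-behind and look-ahead sums of avgFrameSAD
    plus the current frame's avgFrameSAD.

    avg_frame_sad maps a relative frame index j to avgFrameSAD(j), with
    j < 0 for past frames, j == 0 for the current frame, and j > 0 for
    future frames. The weight lists run from the nearest frame outward; a
    weight of zero drops a frame (e.g., across a scene change).
    """
    past = sum(w * avg_frame_sad[-(i + 1)] for i, w in enumerate(weights_past))
    future = sum(w * avg_frame_sad[i + 1] for i, w in enumerate(weights_future))
    return past + future + avg_frame_sad[0]
```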
  • In some embodiments, the weighting factors are adjusted to account for scene changes. For instance, some embodiments account for an upcoming scene change within the look-ahead range (i.e., within the M frames) but not for any frames after that scene change: these embodiments might set the weighting factors to zero for frames within the look-ahead range that fall after a scene change. Also, some embodiments do not account for frames prior to or on a scene change within the look-behind range (i.e., within the N frames): these embodiments might set the weighting factors to zero for frames within the look-behind range that relate to a previous scene or fall before the previous scene change.

3. Variations to the Second Approach

a) Limiting the Influence of Past and Future Frames on the Temporal_Activity_Attribute
  • Equation (D) above essentially expresses the Temporal_Activity_Attribute in the following terms:
    Temporal_Activity_Attribute = Past_Frame_Activity (PFA) + Future_Frame_Activity (FFA) + Current_Frame_Activity (CFA),
    where PFA equals Σ_{j=-N..-1} W_j * avgFrameSAD(j), FFA equals Σ_{j=1..M} W_j * avgFrameSAD(j), and CFA equals avgFrameSAD(0).
  • Some embodiments modify the calculation of the Temporal_Activity_Attribute so that neither the Past_Frame_Activity nor the Future_Frame_Activity unduly controls the value of the Temporal_Activity_Attribute. For instance, some embodiments initially define PFA and FFA to equal their weighted sums given above and then cap whichever term grows out of proportion.
  • In particular, after initially defining the PFA and FFA values based on the weighted sums, some embodiments determine whether the FFA value is bigger than a scalar times PFA. If so, these embodiments then set FFA equal to an upper FFA limit value (e.g., a scalar times PFA). In addition to setting FFA equal to an upper FFA limit value, some embodiments may also set PFA to zero and/or set CFA to zero. Other embodiments may set either or both of FFA and CFA to a weighted combination of FFA, CFA, and PFA.
  • Equation (C) above essentially expresses the Activity_Attribute in the following terms:
    Activity_Attribute = Spatial_Activity + Temporal_Activity,
    where Spatial_Activity equals a scalar * (scalar * Spatial_Activity_Attribute)^exponent, and Temporal_Activity equals a scalar * (scalar * Temporal_Activity_Attribute)^exponent.
  • Some embodiments modify the calculation of the Activity_Attribute so that neither the Spatial_Activity nor the Temporal_Activity unduly controls the value of the Activity_Attribute. For instance, some embodiments initially define the Spatial_Activity (SA) to equal a scalar * (scalar * Spatial_Activity_Attribute)^exponent and the Temporal_Activity (TA) to equal a scalar * (scalar * Temporal_Activity_Attribute)^exponent.
  • These embodiments then determine whether SA is bigger than a scalar times TA. If so, these embodiments set SA equal to an upper SA limit value (e.g., a scalar times TA). In addition to setting SA equal to an upper SA limit in such a case, some embodiments might also set the TA value to zero or to a weighted combination of TA and SA.
  • Conversely, after initially defining the SA and TA values based on the exponential equations, some embodiments also determine whether the TA value is bigger than a scalar times SA. If so, these embodiments then set TA equal to an upper TA limit value (e.g., a scalar times SA). In addition to setting TA equal to an upper TA limit in such a case, some embodiments might also set the SA value to zero or to a weighted combination of SA and TA. All of these guards follow one clamping pattern, sketched below.
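A minimal sketch of that clamping pattern, which covers the FFA-vs-PFA and TA-vs-SA cases alike (the scalar value in the example is illustrative, not taken from the patent):

```python
def limit_dominance(value, other, cap_scale):
    """Clamp `value` so it cannot unduly dominate `other`: if it exceeds
    cap_scale * other, replace it with that upper limit."""
    return min(value, cap_scale * other)

# Example (illustrative scalar): ta = limit_dominance(ta, sa, 4.0)
```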
  • In some embodiments, the macroblock-level masking strength φ_MB(k, m) is given by:
    φ_MB(k, m) = A * power(C * avgMbLuma(k, m), β) * power(B * MbSAD(k, m), β_MB),
    where β_MB, β, A, B, and C are constants and/or are adapted to the local characteristics of the video.
  • the macroblock' s Mb_Brightness_Attribute equals avgMbLuma(k,m)
  • Its Mb_Spatial_Activity_Attribute equals avgMbSAD(k). This Mb_Spatial_Activity_Attribute measures the amount of spatial innovation in a region of pixels within the macroblock that is being coded.
  • In some embodiments, the macroblock's combined activity attribute is given by equation (H) below, with the temporal term following the same pattern as the spatial term:
    Mb_Activity_Attribute = F * power(D * Mb_Spatial_Activity_Attribute, exponent_beta) + G * power(E * Mb_Temporal_Activity_Attribute, exponent_gamma)   (H)
  • The computation of the Mb_Temporal_Activity_Attribute for a macroblock can be analogous to the above-described computation of the Temporal_Activity_Attribute for a frame.
  • In some embodiments, the Mb_Temporal_Activity_Attribute is provided by equation (I) below, which mirrors equation (D) at the macroblock level:
    Mb_Temporal_Activity_Attribute = Σ_{i=-N..-1} W_i * MbSAD(i, m) + Σ_{j=1..M} W_j * MbSAD(j, m) + MbSAD(0, m)   (I)
  • In equation (I), the macroblock m in frame i or j can be the macroblock in the same location as the macroblock m in the current frame, or can be the macroblock in frame i or j that is initially predicted to correspond to the macroblock m in the current frame.
  • the Mb_Temporal_Activity_Attribute provided by equation (I) can be modified in an analogous manner to the modifications (discussed in Section III.A.3 above) of the frame Temporal_Activity_Attribute provided by equation (D). Specifically, the Mb_Temporal_Activity_Attribute provided by the equation (I) can be modified to limit the undue influence of macroblocks in the past and future frames.
  • Similarly, the Mb_Activity_Attribute provided by equation (H) can be modified in an analogous manner to the modifications (discussed in Section III.A.3 above) of the frame Activity_Attribute provided by equation (C). Specifically, the Mb_Activity_Attribute provided by equation (H) can be modified to limit the undue influence of the Mb_Spatial_Activity_Attribute and the Mb_Temporal_Activity_Attribute.
  • Given these frame-level and macroblock-level masking strengths, the visual masking process can calculate the masked frame and macroblock quantization parameters, where T is a suitably chosen threshold, and β_F and β_MB can be predetermined constants or can be adapted to the local characteristics of the video.
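The bodies of the masking functions are not reproduced above, so the following is only a plausible sketch of how a masked frame QP could be derived from the nominal QP, the strengths φ_F and φ_R, the constant β_F, and the threshold T. The logarithmic form is an assumption, not the patent's formula:

```python
import math

def calc_mqp(qp_nom, phi_f, phi_r, beta_f=1.0, threshold=1.0):
    """Sketch: raise the frame QP when the frame masks artifacts better
    than the reference (phi_f > phi_r) and lower it otherwise. The
    threshold T guards both strengths away from zero; beta_f scales the
    adjustment."""
    strength = max(phi_f, threshold)
    reference = max(phi_r, threshold)
    return qp_nom + beta_f * math.log(strength / reference)
```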
  • Figure 1 presents a process 100 that conceptually illustrates the multi-pass encoding method of some embodiments of the invention. As shown in this figure, the process 100 has three stages, which are described in the following three sub-sections.
  • In the initial analysis stage, the process 100 initially computes (at 105) the initial nominal quantization parameter QP_Nom(1) and the initial reference masking strength φ_R(1).
  • The starting value φ_R(0) can be some arbitrary value or a value known from coding experiments to work well.
  • In some embodiments, the reference masking strength φ_R(1) is set equal to an aggregate (e.g., the average) of the frame masking strengths observed in the pass-0 analysis. Other definitions of φ_R are also possible; for instance, it may be computed as the median or another statistic of those values.
  • the initial nominal QP can be selected as an arbitrary value (e.g., 26).
  • a value can be selected that is known to produce an acceptable quality for the target bit rate based on coding experiments.
  • the initial nominal QP value can also be selected from a look-up table based on spatial resolution, frame rate, spatial/temporal complexity, and target bit rate.
  • this initial nominal QP value is selected from the table using a distance measure that depends on each of these parameters, or it may be selected using a weighted distance measure of these parameters.
  • This initial nominal QP value can also be set to the adjusted average of the frame QP values as they are selected during a fast encoding with a rate controller (without masking), where the average has been adjusted based on the bit rate percentage error E_0 for pass 0.
  • the initial nominal QP can also be set to a weighted adjusted average of the frame QP values, where the weight for each frame is determined by the percentage of macroblocks in this frame that are not coded as skipped macroblocks.
  • Finally, the initial nominal QP can be set to an adjusted average or an adjusted weighted average of the frame QP values as they are selected during a fast encoding with a rate controller (with masking), as long as the effect of changing the reference masking strength from φ_R(0) to φ_R(1) is taken into account.
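A sketch of the weighted-adjusted-average option just described (the adjustment rule and the 0..51 QP range are assumptions):

```python
def initial_nominal_qp(frame_qps, non_skip_fractions, e0, gain=0.5):
    """Weight each pass-0 frame QP by the fraction of its macroblocks that
    were not coded as skipped, then nudge the average in proportion to the
    pass-0 bit rate percentage error E_0 (positive error -> raise QP)."""
    total_weight = sum(non_skip_fractions)
    weighted_avg = sum(q * w for q, w in zip(frame_qps, non_skip_fractions))
    weighted_avg /= total_weight
    adjusted = weighted_avg + gain * (e0 / 100.0)
    return round(max(0, min(51, adjusted)))
```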
  • the multi-pass encoding process 100 enters the first search stage.
  • During this stage, the process 100 performs N_1 encodings of the sequence, where N_1 represents the number of passes through the first search stage.
  • the process uses a changing nominal quantization parameter with a constant reference masking strength.
  • During each pass, the process 100 computes (at 107) a particular quantization parameter MQP_p(k) for each frame k and a particular quantization parameter MQP_MB(p)(k, m) for each individual macroblock m within the frame k.
  • The calculation of the parameters MQP_p(k) and MQP_MB(p)(k, m) for a given nominal quantization parameter QP_Nom(p) and reference masking strength φ_R(p) was described in Section III (where MQP_p(k) and MQP_MB(p)(k, m) are computed by using the functions CalcMQP and CalcMQPforMB).
  • In the first pass, the nominal quantization parameter and the first-stage reference masking strength are the parameter QP_Nom(1) and the reference masking strength φ_R(1), which were computed during the initial analysis stage.
  • the process encodes (at 110) the sequence based on the quantization parameter values computed at 107.
  • the encoding process 100 determines (at 115) whether it should terminate. Different embodiments have different criteria for terminating the overall encoding process. Examples of exit conditions that completely terminate the multi-pass encoding process include:
  • QP_Nom(p) is at the upper or lower bound of the valid range of QP values.
  • Some embodiments might use all of these exit conditions, while other embodiments might only use some of them. Yet other embodiments might use other exit conditions for terminating the encoding process.
  • In that case, the process 100 omits the second search stage and transitions to 145.
  • the process saves the bitstream from the last pass p as the final result, and then terminates.
  • If the process determines (at 115) that it should not terminate, it then determines (at 120) whether it should terminate the first search stage.
  • different embodiments have different criteria for terminating the first search stage. Examples of exit conditions that terminate the first search stage of the multi-pass encoding process include:
  • QP_Nom(p+1) is the same as QP_Nom(q) for some q ≤ p (in this case, the error in bit rate cannot be lowered any further by modifying the nominal QP).
  • Some embodiments might use all these exit conditions, while other embodiments might only use some of them. Yet other embodiments might use other exit conditions for terminating the first search stage.
  • the process 100 proceeds to the second search stage, which is described in the next sub-section.
  • If the process determines (at 120) that it should not terminate the first search stage, it updates (at 125) the nominal QP for the next pass in the first search stage (i.e., defines QP_Nom(p+1)).
  • In some embodiments, the nominal QP_Nom(p+1) is updated as follows. At the end of a pass, these embodiments define
    QP_Nom(p+1) = InterpExtrap(0, E_q1, E_q2, QP_Nom(q1), QP_Nom(q2)),
    where InterpExtrap is a function that is further described below. In the above equation, q1 and q2 are pass numbers with corresponding bit rate errors that are the lowest among all passes up to pass p, with q1 < q2 ≤ p.
  • the nominal QP value is typically rounded to an integer value and clipped to lie within the valid range of QP values.
  • One of ordinary skill in the art will realize that other embodiments might compute the nominal QP_Nom(p+1) value differently than the approach described above.
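The InterpExtrap description does not survive in this text, but the call pattern InterpExtrap(0, E_q1, E_q2, QP_Nom(q1), QP_Nom(q2)) reads naturally as linear interpolation/extrapolation that solves for the QP at zero bit rate error; a sketch under that assumption:

```python
def interp_extrap(x, x1, x2, y1, y2):
    """Value y at x on the line through (x1, y1) and (x2, y2). With x = 0
    and (x1, x2) being bit rate errors, this extrapolates the nominal QP
    that would drive the error to zero."""
    if x2 == x1:           # degenerate bracket: identical error samples
        return y1
    return y1 + (x - x1) * (y2 - y1) / (x2 - x1)

# Example: errors of +4% at QP 30 and -2% at QP 28 suggest QP ~ 28.67,
# which would then be rounded and clipped as described above.
```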
  • the process encodes (at 110) the sequence of frames based on these newly computed quantization parameters. From 110, the process then transitions to 115, which was described above.
  • the process 100 determines (at 120) that it should terminate the first search stage, it transitions to 130.
  • In the second search stage, the process 100 performs N_2 encodings of the sequence, where N_2 represents the number of passes through the second search stage. During each pass, the process uses the same nominal quantization parameter and a changing reference masking strength.
  • At 130, the process 100 computes a reference masking strength φ_R(p+1) for the next pass (i.e., for pass p+1, which at the start of this stage is pass N_1+1). Using this reference masking strength, the process 100 then encodes the sequence of frames at 135.
  • Different embodiments compute (at 130) the reference masking strength φ_R(p+1) at the end of a pass p in different ways. Two alternative approaches are described below.
  • Some embodiments compute the reference masking strength φ_R(p+1) based on the bit rate error(s) and reference masking strength value(s) of the previous pass(es). For instance, at the end of the m-th pass of the second search stage:
    φ_R(N_1+m) = InterpExtrap(0, E_N1+m-2, E_N1+m-1, φ_R(N_1+m-2), φ_R(N_1+m-1)).
  • Some embodiments that use AMQP compute a desired AMQP for pass p+1 based on the error in bit rate(s) and value(s) of AMQP from previous pass(es).
  • For instance, at the end of pass N_1, some embodiments compute AMQP_N1+1, where
    AMQP_N1+1 = InterpExtrap(0, E_N1-1, E_N1, AMQP_N1-1, AMQP_N1) when N_1 > 1, and
    φ_R(N_1+1) = Search(AMQP_N1+1, φ_R(N_1)).
  • At the end of a later pass, these embodiments compute
    AMQP_N1+m = InterpExtrap(0, E_N1+m-2, E_N1+m-1, AMQP_N1+m-2, AMQP_N1+m-1),
    and the φ_R corresponding to the desired AMQP can be found using the Search function, which has the following pseudo code in some embodiments:
  • the numbers 10, 12 and 0.05 may be replaced with suitably chosen thresholds.
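The Search pseudocode itself is not reproduced above, so the sketch below is an assumption that reads the surviving constants as an iteration cap (10), a bracketing scale (12), and a relative tolerance (0.05). amqp_of is a hypothetical callback that returns the AMQP a candidate φ_R would produce, and AMQP is assumed to vary monotonically with φ_R (flip the bracket update if the relationship is inverted):

```python
def search_phi_r(desired_amqp, phi_r_start, amqp_of,
                 rel_tol=0.05, bracket_scale=12.0, max_iter=10):
    """Bisection-style search for the reference masking strength whose
    resulting AMQP matches desired_amqp to within rel_tol."""
    lo = phi_r_start / bracket_scale
    hi = phi_r_start * bracket_scale
    phi_r = phi_r_start
    for _ in range(max_iter):
        amqp = amqp_of(phi_r)
        if abs(amqp - desired_amqp) <= rel_tol * desired_amqp:
            break
        if amqp < desired_amqp:
            lo = phi_r          # need a larger reference strength
        else:
            hi = phi_r          # need a smaller reference strength
        phi_r = 0.5 * (lo + hi)
    return phi_r
```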
  • Next, the process computes (at 132) a particular quantization parameter MQP_p(k) for each frame k and particular quantization parameters MQP_MB(p)(k, m) for the individual macroblocks m within the frame k.
  • The process then encodes (at 135) the frame sequence using the quantization parameters computed at 132. After 135, the process determines (at 140) whether it should terminate the second search stage. Different embodiments use different criteria for terminating the second search stage at the end of a pass p. Examples of such criteria are:
  • Some embodiments might use all of these exit conditions, while other embodiments might only use some of them. Yet other embodiments might use other exit conditions for terminating the second search stage.
  • the process 100 determines (at 140) that it should not terminate the second search stage, it returns to 130 to recompute the reference masking strength for the next pass of encoding. From 130, the process transitions to 132 to compute quantization parameters and then to 135 to encode the video sequence by using the newly computed quantization parameters.
  • the process decides (at 140) to terminate the second search stage, it transitions to 145.
  • the process 100 saves the bitstream from the last pass p as the final result, and then terminates.
  • Some embodiments of the invention provide a multi-pass encoding process that examines various encodings of a video sequence for a target bit rate, in order to identify an optimal encoding solution with respect to the usage of an input buffer used by the decoder.
  • this multi-pass process follows the multi-pass encoding process 100 of Figure 1.
  • The decoder input buffer ("decoder buffer") usage will fluctuate to some degree during the decoding of an encoded sequence of images (e.g., frames) because of a variety of factors, such as fluctuation in the size of encoded images, the speed with which the decoder receives encoded data, the size of the decoder buffer, the speed of the decoding process, etc.
  • a decoder buffer underflow signifies the situation where the decoder is ready to decode the next image before that image has completely arrived at the decoder side.
  • The multi-pass encoder of some embodiments therefore simulates the decoder buffer and re-encodes selected segments in the sequence to prevent decoder buffer underflow.
  • FIG. 2 conceptually illustrates a codec system 200 of some embodiments of the invention.
  • This system includes a decoder 205 and an encoder 210.
  • the encoder 210 has several components that enable it to simulate the operations of similar components of the decoder 205.
  • the decoder 205 has an input buffer 215, a decoding process 220, and an output buffer 225.
  • the encoder 210 simulates these modules by maintaining a simulated decoder input buffer 230, a simulated decoding process 235, and a simulated decoder output buffer 240.
  • Figure 2 is simplified to show the decoding process 220 and encoding process 245 as single blocks.
  • the simulated decoding process 235 and simulated decoder output buffer 240 are not utilized for buffer underflow management, and are therefore shown in this figure for illustration only.
  • the decoder maintains the input buffer 215 to smooth out variations in the rate and arrival time of incoming encoded images. If the decoder runs out of data (underflow) or fills up the input buffer (overflow), there will be visible decoding discontinuities as the picture decoding halts or incoming data is discarded. Both of these cases are undesirable.
  • To manage such underflow, the encoder 210 in some embodiments first encodes a sequence of images and stores them in a storage 255. For instance, the encoder 210 uses the multi-pass encoding process 100 to obtain a first encoding of the sequence of images. It then simulates the decoder input buffer 215 and re-encodes the images that would cause buffer underflow. After all buffer underflow conditions are removed, the re-encoded images are supplied to the decoder 205 through a connection, which may be a network connection (Internet, cable, PSTN lines, etc.), a non-network direct connection, media (a DVD, etc.), etc.
  • Figure 3 illustrates an encoding process 300 of the encoder of some embodiments. This process tries to find an optimal encoding solution that does not cause the decoder buffer to underflow. As shown in Figure 3, the process 300 identifies (at 302) a first encoding of the sequence of images that meets a desired target bit rate (e.g., the average bit rate for each image in the sequence meets a desired average target bit rate). For instance, the process 300 may use (at 302) the multi-pass encoding process 100 to obtain the first encoding of the sequence of images.
  • the encoding process 300 simulates (at 305) the decoder input buffer 215 by considering a variety of factors, such as the connection speed (i.e., the speed with which the decoder receives encoded data), the size of the decoder input buffer, the size of encoded images, the decoding process speed, etc.
  • Next, the process 300 determines (at 310) whether any segment of the encoded images will cause the decoder input buffer to underflow. The techniques that the encoder uses to determine (and subsequently eliminate) the underflow condition are described further below. If the process 300 determines (at 310) that the encoded images do not create an underflow condition, the process ends.
  • the process 300 determines (at 310) that a buffer underflow condition exists in any segment of the encoded images, it refines (at 315) the encoding parameters based on the value of these parameters from previous encoding passes. The process then re-encodes (at 320) the segment with underflow to reduce the segment bit size. After re-encoding the segment, the process 300 examines (at 325) the segment to determine if the underflow condition is eliminated.
  • If the underflow condition persists, the process 300 transitions back to 315 to further refine the encoding parameters to eliminate it.
  • Otherwise, the process specifies (at 330) the starting point for re-examining and re-encoding the video sequence as the frame after the end of the segment re-encoded in the last iteration at 320.
  • The process then re-encodes the portion of the video sequence specified at 330, up to (and excluding) the first IDR frame following the underflow segment specified at 315 and 320.
  • the process transitions back to 305 to simulate the decoder buffer to determine whether the rest of the video sequence still causes buffer underflow after re-encoding.
  • the flow of the process 300 from 305 was described above.
  • As mentioned above, the encoder simulates the decoder buffer conditions to determine whether any segment in the sequence of encoded or re-encoded images causes underflow in the decoder buffer.
  • The encoder uses a simulation model that considers the size of the encoded images, network conditions (e.g., bandwidth), and decoder factors (e.g., input buffer size, initial and nominal times to remove images, decoding process time, display time of each image, etc.).
  • the MPEG-4 AVC Coded Picture Buffer (CPB) model is used to simulate the decoder input buffer conditions.
  • the CPB is the term used in MPEG-4 H.264 standard to refer to the simulated input buffer of the Hypothetical Reference Decoder (HRD).
  • the HRD is a hypothetical decoder model that specifies constraints on the variability of conforming streams that an encoding process may produce.
  • The CPB model is well known and is described in Section 1 below for convenience. A more detailed description of the CPB and HRD can be found in the Draft ITU-T Recommendation and Final Draft International Standard of Joint Video Specification (ITU-T Rec. H.264 | ISO/IEC 14496-10 AVC).
  • the following paragraphs describe how the decoder input buffer is simulated in some embodiments using the CPB model.
  • The time at which the first bit of image n begins to enter the CPB is referred to as the initial arrival time t_ai(n).
  • initial_cpb_removal_delay is the initial buffering period.
  • the encoder makes its own calculations of the nominal removal time as described below instead of reading them from an optional part of the bit stream as in the H.264 specification.
  • The nominal removal time of the first image from the CPB is specified by t_r,n(0) = initial_cpb_removal_delay. For a subsequent image n, the nominal removal time t_r,n(n) is obtained by adding the display durations of the intervening images, where t_r,n(n) is the nominal removal time of image n and t_i is the display duration of image i.
  • The removal time of image n is then specified in terms of these arrival and nominal removal times.
  • the encoder can simulate the decoder input buffer state and obtain the number of bits in the buffer at a given time instant.
  • In particular, the encoder can track how each individual image changes the decoder input buffer state via the difference between its nominal removal time and final arrival time (i.e., t_b(n) = t_r,n(n) - t_af(n)).
  • When t_b(n) is less than 0, the buffer is suffering from underflow. Some embodiments treat the contiguous run of images over which this deficit develops as an underflow segment, as described below.
  • Figure 4 plots the difference t_b(n) between the nominal removal time and the final arrival time of images versus image number in some embodiments. Figure 4a shows an underflow segment with arrows marking its beginning and end. Note that there is another underflow segment in Figure 4a that occurs after the first underflow segment; it is not explicitly marked by arrows for simplicity.
  • Figure 5 illustrates a process 500 that the encoder uses to perform the underflow detection operation at 305.
  • As shown in Figure 5, the process 500 first determines (at 505) the final arrival time t_af and the nominal removal time t_r,n of each image by simulating the decoder input buffer conditions as explained above. Note that since this process may be called several times during the iterative process of buffer underflow management, it receives an image number as the starting point and examines the sequence of images from this given starting image. For the first iteration, the starting point is the first image in the sequence.
  • Next, the process 500 compares the final arrival time of each image at the decoder input buffer with the nominal removal time of that image by the decoder. If the process determines that there are no images with a final arrival time after the nominal removal time (i.e., no underflow condition exists), the process exits. On the other hand, when an image is found for which the final arrival time is after the nominal removal time, the process determines that there is an underflow and transitions to 515 to identify the underflow segment.
  • The process 500 identifies (at 515) the underflow segment as the segment of images over which the decoder buffer is continuously depleted, up to the next global minimum where the underflow condition starts to improve (i.e., where t_b(n) stops decreasing).
  • In some embodiments, the beginning of the underflow segment is further adjusted to start with an I-frame, which is an intra-encoded image that marks the start of a set of related inter-encoded images.
  • the encoder proceeds to eliminate the underflow. Section B below describes elimination of underflow in a single-segment case (i.e., when the entire sequence of encoded images only contains a single underflow segment). Section C then describes elimination of underflow for the multi-segment underflow cases.
  • underflow segment begins at the nearest local maximum preceding the zero-crossing point, and ends at the next global minimum between the zero-crossing point and the end of the sequence.
  • the end point of the segment could be followed by another zero-crossing point with the curve taking an ascending slope if the buffer recovers from the underflow.
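The bookkeeping described above is compact enough to sketch. The code below assumes constant-rate arrival, back-to-back image delivery, and removal times spaced by display duration; these are simplifications of the CPB model, not the H.264 definitions verbatim:

```python
def simulate_cpb(image_bits, bit_rate, initial_delay, display_durations):
    """Return the per-image slack t_b(n) = t_rn(n) - t_af(n); a negative
    slack means image n would arrive after its nominal removal time."""
    t_af_prev = 0.0              # final arrival time of the previous image
    t_rn = initial_delay         # nominal removal time of image 0
    slack = []
    for bits, duration in zip(image_bits, display_durations):
        t_af = t_af_prev + bits / bit_rate   # last bit of this image arrives
        slack.append(t_rn - t_af)            # t_b(n)
        t_af_prev = t_af
        t_rn += duration         # next image is removed one display later
    return slack

def find_underflow_segment(slack):
    """Locate the first underflow segment: from the local maximum preceding
    the first negative slack to the global minimum that follows it."""
    cross = next((i for i, s in enumerate(slack) if s < 0), None)
    if cross is None:
        return None                          # no underflow anywhere
    start = cross
    while start > 0 and slack[start - 1] >= slack[start]:
        start -= 1                           # walk back to the local maximum
    end = min(range(cross, len(slack)), key=lambda i: slack[i])
    return start, end
```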
  • Figure 6 illustrates a process 600 the encoder utilizes (at 315, 320, and 325) to eliminate underflow condition in a single segment of images in some embodiments.
  • As shown in Figure 6, the process 600 first estimates (at 605) the total number of bits to remove (ΔB) from the underflow segment by computing the product of the input bit rate into the buffer and the longest delay (e.g., the minimum t_b(n)) found at the end of the segment.
  • The process then computes (at 610) a desired average masked frame QP (AMQP) for the segment from the reduced bit budget B_T = B_p - ΔB, where B_p is the number of bits the segment consumed in the previous encoding.
  • Next, the process 600 uses the desired AMQP to modify (at 615) each masked frame QP, MQP(n), based on the masking strength φ_F(n), such that images that can mask more distortion absorb a larger share of the bit reduction.
  • the process then re-encodes (at 620) the video segment based on the parameters defined at 315.
  • the process then examines (at 625) the segment to determine whether the underflow condition is eliminated.
  • Figure 4b illustrates the elimination of the underflow condition of Figure 4a after the process 600 is applied to the underflow segment to re-encode it.
  • the process exits. Otherwise, it will transition back to 605 to further adjust encoding parameters to reduce total bit size.
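Step 605's estimate is a one-liner; a sketch with an illustrative example:

```python
def bits_to_remove(slack, bit_rate):
    """Estimate the bits to strip from an underflow segment: the product of
    the buffer's input bit rate and the worst (most negative) slack, i.e.,
    the longest late-arrival delay at the end of the segment."""
    worst_delay = -min(slack)        # seconds the last bits arrive late
    return worst_delay * bit_rate

# Example: on a 300 kb/s link, a worst slack of -0.4 s implies roughly
# 120,000 bits must be removed from the segment.
```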
  • the encoder searches for one underflow segment at a time, starting from the first zero-crossing point (i.e., at the lowest n) with a descending slope.
  • the underflow segment begins at the nearest local maximum preceding this zero-crossing point, and ends at the next global minimum between the zero-crossing point and the next zero-crossing point (or the end of the sequence if there is no more zero crossing).
  • For each segment found this way, the encoder hypothetically removes the underflow in the segment and estimates the updated buffer fullness by setting t_b(n) to zero at the end of the segment. The encoder then continues searching for the next segment using the modified buffer fullness. Once all underflow segments are identified as described above, the encoder derives the AMQPs and modifies the masked frame QPs for each segment independently of the others, just as in the single-segment case.
  • Alternatively, some embodiments do not identify all segments that cause underflow of the decoder's input buffer up front. Instead, these embodiments perform the buffer simulation described above to identify the first segment that causes underflow, correct that segment to rectify its underflow condition, and then resume encoding following the corrected portion. After encoding the remainder of the sequence, they repeat this process for the next underflow segment.
  • The decoder buffer underflow techniques described above apply to numerous encoding and decoding systems. Several examples of such systems are described below.
  • Figure 7 illustrates a network 705 connecting a video streaming server 710 and several client decoders 715-725. Clients are connected to the network 705 via links with different bandwidths such as 300 Kb/sec and 3 Mb/sec.
  • the video streaming server 710 is controlling streaming of encoded video images from an encoder 730 to the client decoders 715-725.
  • the streaming video server may decide to stream the encoded video images using the slowest bandwidth in the network (i.e., 300 Kb/sec) and the smallest client buffer size.
  • the streaming server 710 needs only one set of encoded images that are optimized for a target bit rate of 300 Kb/sec.
  • the server may generate and store different encodings that are optimized for different bandwidths and different client buffer conditions.
  • FIG. 8 illustrates another example of an application for decoder underflow management.
  • an HD-DVD player 805 is receiving encoded video images from an HD-DVD 840 that has stored encoded video data from a video encoder 810.
  • the HD-DVD player 805 has an input buffer 815, a set of decoding modules shown as one block 820 for simplicity, and an output buffer 825.
  • the output of the player 805 is sent to display devices such as TV 830 or computer display terminal 835.
  • the HD-DVD player may have a very high bandwidth, e.g. 29.4 Mb/sec.
  • In this case, the encoder ensures that the video images are encoded in such a way that no segment in the sequence of images is so large that it cannot be delivered to the decoder input buffer on time.
  • FIG. 9 presents a computer system with which one embodiment of the invention is implemented.
  • Computer system 900 includes a bus 905, a processor 910, a system memory 915, a read-only memory 920, a permanent storage device 925, input devices 930, and output devices 935.
  • the bus 905 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 900. For instance, the bus 905 communicatively connects the processor 910 with the read-only memory 920, the system memory 915, and the permanent storage device 925.
  • the processor 910 retrieves instructions to execute and data to process in order to execute the processes of the invention.
  • the read-only-memory (ROM) 920 stores static data and instructions that are needed by the processor 910 and other modules of the computer system.
  • the permanent storage device 925 is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 900 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 925.
  • the system memory 915 is a read-and-write memory device. However, unlike storage device 925, the system memory is a volatile read-and-write memory, such as a random access memory.
  • the system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 915, the permanent storage device 925, and/or the read-only memory 920.
  • the bus 905 also connects to the input and output devices 930 and 935.
  • the input devices enable the user to communicate information and select commands to the computer system.
  • the input devices 930 include alphanumeric keyboards and cursor-controllers.
  • the output devices 935 display images generated by the computer system.
  • the output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD).
  • bus 905 also couples computer 900 to a network 965 through a network adapter (not shown).
  • the computer can be a part of a network of computers (such as a local area network ("LAN"), a wide area network ("WAN"), or an Intranet) or a network of networks (such as the Internet).
  • Several embodiments described above compute the mean removed SAD to obtain an indication of the image variance in a macroblock. Other embodiments, however, might identify the image variance differently. For example, some embodiments might predict an expected image value for the pixels of a macroblock. These embodiments then generate a macroblock SAD by subtracting this predicted value from the luminance value of the pixels of the macroblock, and summing the absolute values of the subtractions. In some embodiments, the predicted value is based not only on the values of the pixels in the macroblock but also on the value of the pixels in one or more of the neighboring macroblocks.
  • the embodiments described above use the derived spatial and temporal masking values directly.
  • Other embodiments will apply a smoothing filter to successive spatial masking values and/or to successive temporal masking values before using them, in order to pick out the general trend of those values through the video images.
  • the invention is not to be limited by the foregoing illustrative details.

Abstract

Some embodiments of the invention provide a multi-pass encoding method that encodes several images (e.g., several frames of a video sequence). The method iteratively performs an encoding operation that encodes these images (Figure 1, 110). The encoding operation is based on a nominal quantization parameter, which the method uses to compute quantization parameters for the images (132). During several different iterations of the encoding operation, the method uses several different nominal quantization parameters (125). The method stops its iterations (140) when it reaches a terminating criterion (e.g., it identifies an acceptable encoding of the images).

Description

MULTI-PASS VIDEO ENCODING
BACKGROUND OF THE INVENTION
Video encoders encode a sequence of video images (e.g., video frames) by using a variety of encoding schemes. Video encoding schemes typically encode video frames or portions of video frames (e.g., sets of pixels in the video frames) in terms of intraframes or interframes. An intraframe encoded frame or pixel set is one that is encoded independently of other frames or pixel sets in other frames. An interframe encoded frame or pixel set is one that is encoded by reference to one or more other frames or pixel sets in other frames.
When compressing video frames, some encoders implement a 'rate controller,' which provides a 'bit budget' for a video frame or a set of video frames that are to be encoded. The bit budget specifies the number of bits that have been allocated to encode the video frame or set of video frames. By efficiently allocating the bit budgets, the rate controller attempts to generate the highest quality compressed video stream in view of certain constraints (e.g., a target bit rate, etc.).
To date, a variety of single-pass and multi-pass rate controllers have been proposed. A single-pass rate controller provides bit budgets for an encoding scheme that encodes a series of video images in one pass, whereas a multi-pass rate controller provides bit budgets for an encoding scheme that encodes a series of video images in multiple passes.
Single-pass rate controllers are useful in real-time encoding situations. Multi-pass rate controllers, on the other hand, optimize the encoding for a particular bit rate based on a set of constraints. Not many rate controllers to date consider the spatial or temporal complexity of frames or pixel-sets within the frames in controlling the bit rates of their encodings. Also, most multi-pass rate controllers do not adequately search the solution space for encoding solutions that use optimal quantization parameters for frames and/or pixel sets within frames in view of a desired bit rate.
Therefore, there is a need in the art for a rate controller that uses novel techniques to consider the spatial or temporal complexity of video images and/or portions of video images, while controlling the bit rate for encoding a set of video images. There is also a need in the art for a multi-pass rate controller that adequately examines the encoding solutions to identify an encoding solution that uses an optimal set of quantization parameters for video images and/or portions of video images.
SUMMARY OF THE INVENTION
Some embodiments of the invention provide a multi-pass encoding method that encodes several images (e.g., several frames of a video sequence). The method iteratively performs an encoding operation that encodes these images. The encoding operation is based on a nominal quantization parameter, which the method uses to compute quantization parameters for the images. During several different iterations of the encoding operation, the method uses several different nominal quantization parameters. The method stops its iterations when it reaches a terminating criterion (e.g., it identifies an acceptable encoding of the images).
Some embodiments of the invention provide a method for encoding video sequences. The method identifies a first attribute quantifying the complexity of a first image in the video. It also identifies a quantization parameter for encoding the first image based on the identified first attribute. The method then encodes the first image based on the identified quantization parameter. In some embodiments, this method performs these three operations for several images in the video.
Some embodiments of the invention encode a sequence of video images based on "visual masking" attributes of the video images and/or portions of the video images. Visual masking of an image or a portion of the image is an indication of how many coding artifacts can be tolerated in the image or image portion. To express the visual masking attribute of an image or an image portion, some embodiments compute a visual masking strength that quantifies the brightness energy of the image or the image portion. In some embodiments, the brightness energy is measured as a function of the average luma or pixel energy of the image or image portion.
Instead of, or in conjunction with, the brightness energy, the visual masking strength of an image or image portion might also quantify the activity energy of the image or image portion. The activity energy expresses the complexity of the image or image portion. In some embodiments, the activity energy includes a spatial component that quantifies the spatial complexity of the image or image portion, and/or a motion component that quantifies the amount of distortion that can be tolerated/masked due to motion between images.
Some embodiments of the invention provide a method for encoding video sequences. The method identifies a visual-masking attribute of a first image in the video. It also identifies a quantization parameter for encoding the first image based on the identified visual-masking attribute. The method then encodes the first image based on the identified quantization parameter.
BRIEF DESCRIPTION OF THE DRAWINGS
The novel features of the invention are set forth in the appended claims. However, for purpose of explanation, several embodiments of the invention are set forth in the following figures.
Figure 1 presents a process that conceptually illustrates the encoding method of some embodiments of the invention.
Figure 2 conceptually illustrates a codec system of some embodiments.
Figure 3 is a flow chart illustrating the encoding process of some embodiments.
Figure 4a is a plot of the difference between the nominal removal time and the final arrival time of images versus image number, illustrating an underflow condition in some embodiments.
Figure 4b illustrates a plot of the difference between the nominal removal time and the final arrival time of images versus image number for the same images shown in Figure 4a, after the underflow condition is eliminated.
Figure 5 illustrates a process that the encoder uses to perform underflow detection in some embodiments.
Figure 6 illustrates a process the encoder utilizes to eliminate an underflow condition in a single segment of images in some embodiments.
Figure 7 illustrates an application of buffer underflow management in a video streaming application.
Figure 8 illustrates an application of buffer underflow management in an HD-DVD system.
Figure 9 presents a computer system with which one embodiment of the invention is implemented.
DETAILED DESCRIPTION OF THE INVENTION
In the following detailed description of the invention, numerous details, examples and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
I. DEFINITIONS
This section provides definitions for several symbols that are used in this document.
RT represents a target bit rate, which is a desired bit rate for encoding a sequence of frames. Typically, this bit rate is expressed in units of bit/second, and is calculated from the desired final file size, the number of frames in the sequence, and the frame rate.
Rp represents the bit rate of the encoded bit stream at the end of a pass p.
Ep represents the percentage of error in the bit rate at the end of pass p. In some cases, this percentage is calculated as Ep = 100 x (Rp - RT) / RT.
ε represents the error tolerance in the final bit rate.
εc represents the error tolerance in the bit rate for the first QP search stage.
QP represents the quantization parameter.
QPNom(p) represents the nominal quantization parameter that is used in pass p encoding for a sequence of frames. The value of QPNom(p) is adjusted by the invention's multi-pass encoder in a first QP adjustment stage to reach the target bit rate.
MQPp(k) represents the masked frame QP, which is the quantization parameter (QP) for a frame k in pass p. Some embodiments compute this value by using the nominal QP and frame-level visual masking.
MQPMB(p)(k, m) represents the masked macroblock QP, which is the quantization parameter (QP) for an individual macroblock (with a macroblock index m) in a frame k and a pass p. Some embodiments compute MQPMB(p)(k, m) by using MQPp(k) and macroblock-level visual masking.
φF(k) represents a value referred to as the masking strength for frame k. The masking strength φF(k) is a measure of complexity for the frame and, in some embodiments, this value is used to determine how visible coding artifacts/noise would appear and to compute the MQPp(k) of frame k.
ΦR(p) represents the reference masking strength in pass p. The reference masking strength is used to compute MQPp(k) of frame k, and it is adjusted by the invention's multi-pass encoder in a second stage to reach the target bit rate.
φMB(k, m) represents the masking strength for a macroblock with an index m in frame k. The masking strength φMB(k, m) is a measure of complexity for the macroblock and, in some embodiments, it is used to determine how visible coding artifacts/noise would appear and to compute MQPMB(p)(k, m).
AMQPp represents an average masked QP over frames in pass p. In some embodiments, this value is computed as the average MQPp(k) over all frames in a pass p.
II. OVERVIEW
Some embodiments of the invention provide an encoding method that achieves the best visual quality for encoding a sequence of frames at a given bit rate. In some embodiments, this method uses a visual masking process that assigns a quantization parameter QP to every macroblock. This assignment is based on the realization that coding artifacts/noise in brighter or spatially complex areas in an image or a video frame are less visible than those in darker or flat areas.
In some embodiments, this visual masking process is performed as part of an inventive multi-pass encoding process. This encoding process adjusts a nominal quantization parameter and controls the visual masking process through a reference masking strength parameter ΦR, in order to have the final encoded bit stream reach the
target bit rate. As further described below, adjusting the nominal quantization parameter and controlling the masking algorithm adjusts the QP values for each picture (i.e., each frame in typically video encoding schemes) and each macroblock within each picture.
In some embodiments, the multi-pass encoding process globally adjusts the nominal QP and ΦR for the entire sequence. In other embodiments, this process
divides the video sequence into segments, with the nominal QP and ΦR adjusted for
each segment. The description below refers to a sequence of frames on which the multi-pass encoding process is employed. One of ordinary skill will realize that this sequence includes the entire sequence in some embodiments, while it includes only a segment of a sequence in other embodiments.
In some embodiments, the method has three stages of encoding. These three stages are: (1) an initial analysis stage that is performed in pass 0, (2) a first search stage that is performed in pass 1 through pass N1, and (3) a second search stage that is performed in pass N1 + 1 through pass N1 + N2.
In the initial analysis stage (i.e., during pass 0), the method identifies an initial value for the nominal QP (QPNom(1), to be used in pass 1 of the encoding). During the initial analysis stage, the method also identifies a value of the reference masking strength ΦR, which is used in all the passes of the first search stage.
In the first search stage, the method performs N1 iterations (i.e., N1 passes) of an encoding process. For each frame k during each pass p, the process encodes the frame by using a particular quantization parameter MQPp(k) and particular quantization parameters MQPMB(p)(k, m) for individual macroblocks m within the frame k, where MQPMB(p)(k, m) is computed using MQPp(k).
In the first search stage, the quantization parameter MQPp(k) changes between passes as it is derived from a nominal quantization parameter QPNom(p) that changes between passes. In other words, at the end of each pass p during the first search stage, the process computes a nominal QPNom(p+1) for pass p+1. In some embodiments, the nominal QPNom(p+1) is based on the nominal QP value(s) and bit rate error(s) from previous pass(es). In other embodiments, the nominal QPNom(p+1) value is computed differently at the end of each pass in the first search stage.
In the second search stage, the method performs N2 iterations (i.e., N2 passes) of the encoding process. As in the first search stage, the process encodes each frame k during each pass p by using a particular quantization parameter MQPp(k) and particular quantization parameters MQPMB(p)(k, m) for individual macroblocks m within the frame k, where MQPMB(p)(k, m) is derived from MQPp(k).
Also, as in the first search stage, the quantization parameter MQPp(k) changes between passes. However, during the second search stage, this parameter changes as it is computed using a reference masking strength ΦR(P) that changes between passes. In
some embodiments, the reference masking strength ΦR(P) is computed based on the
error in bit rate(s) and value(s) of ΦR from previous pass(es). In other embodiments, this reference masking strength is computed to be a different value at the end of each pass in the second search stage.
Although the multi-pass encoding process is described in conjunction with the visual masking process, one of ordinary skill in art will realize that an encoder does not need to use both these processes together. For instance, in some embodiments, the multi-pass encoding process is used to encode a bitstream near a given target bit rate without visual masking, by ignoring ΦR and omitting the second search stage
described above.
The visual masking and multi-pass encoding process are further described in Sections III and IV of this application.
III. VISUAL MASKING
Given a nominal quantization parameter, the visual masking process first computes a masked frame quantization parameter (MQP) for each frame using the reference masking strength (ΦR) and the frame masking strength (φF). This process then computes a masked macroblock quantization parameter (MQPMB) for each macroblock, based on the frame- and macroblock-level masking strengths (φF and φMB). When the visual masking process is employed in a multi-pass encoding process, the reference masking strength (ΦR) in some embodiments is identified during the first encoding pass, as mentioned above and further described below.
A. Computing the Frame-Level Masking Strength
1. First Approach
To compute the frame-level masking strength φF(k), some embodiments use the following equation (A):
φF(k) = C * power(E*avgFrameLuma(k), β) * power(D*avgFrameSAD(k), αF), (A)
where
  • avgFrameLuma(k) is the average pixel intensity in frame k computed using bxb regions, where b is an integer greater than or equal to 1 (for instance, b=1 or b=4);
  • avgFrameSAD(k) is the average of MbSAD(k, m) over all macroblocks in frame k;
• MbSAD(k, m) is the sum of the values given by a function Calc4x4MeanRemovedSAD(4x4_block_pixel_values) for all 4x4 blocks in the macroblock with index m;
  • αF, β, C, D, and E are constants and/or are adapted to the local statistics; and
  • power(a,b) means a^b.
The pseudo-code for the function Calc4x4MeanRemovedSAD is as follows:
Calc4x4MeanRemovedSAD(4x4_block_pixel_values)
{
calculate the mean of pixel values in the given 4x4 block;
subtract the mean from the pixel values and compute their absolute values;
sum the absolute values obtained in the previous step;
return the sum;
}
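As a concrete illustration, the following Python sketch implements the pseudo-code above, together with a hypothetical helper that sums the mean-removed SADs over one 16x16 macroblock (yielding MbSAD(k, m)); the function names and the macroblock layout are assumptions made for illustration rather than part of the specification:

def calc_4x4_mean_removed_sad(block):
    # block: flat list of the 16 luma values of one 4x4 block
    mean = sum(block) / len(block)
    return sum(abs(p - mean) for p in block)

def mb_sad(macroblock):
    # macroblock: 16x16 array (list of 16 rows) of luma values; sums the
    # mean-removed SADs of its sixteen 4x4 blocks, giving MbSAD(k, m)
    total = 0.0
    for by in range(0, 16, 4):
        for bx in range(0, 16, 4):
            block = [macroblock[y][x]
                     for y in range(by, by + 4)
                     for x in range(bx, bx + 4)]
            total += calc_4x4_mean_removed_sad(block)
    return total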
2. Second Approach
Other embodiments compute the frame-level masking strength differently. For instance, the above-described equation (A) computes the frame masking strength essentially as follows:
φF(k) = C * power(E*Brightness_Attribute, exponent0) * power(scalar*Spatial_Activity_Attribute, exponent1).
In equation (A), the frame's Brightness_Attribute equals avgFrameLuma(k), and the Spatial_Activity_Attribute equals avgFrameSAD(k), which is the average macroblock SAD (MbSAD(k, m)) value over all macroblocks in a frame, where the macroblock SAD equals the sum of the absolute values of the mean removed 4x4 pixel variation (as given by Calc4x4MeanRemovedSAD) for all 4x4 blocks in a macroblock. This Spatial_Activity_Attribute measures the amount of spatial innovations in a region of pixels within the frame that is being coded.
Other embodiments expand the activity measure to include the amount of temporal innovations in a region of pixels across a number of successive frames. Specifically, these embodiments compute the frame masking strength as follows:
φF(k) = C * power(E*Brightness_Attribute, exponent0) * power(scalar*Activity_Attribute, exponent1). (B)
In this equation, the Activity_Attribute is given by the following equation (C):
Activity_Attribute = G * power(D*Spatial_Activity_Attribute, exponent_beta) + E * power(F*Temporal_Activity_Attribute, exponent_delta). (C)
In some embodiments, the Temporal_Activity_Attribute quantifies the amount of distortion that can be tolerated (i.e., masked) due to motion between frames. In some of these embodiments, the Temporal_Activity_Attribute of a frame equals a constant times the sum of the absolute value of the motion compensated error signal of pixel regions defined within the frame. In other embodiments, Temporal_Activity_Attribute is provided by the equation (D) below:
Temporal_Activity_Attribute = Sum_{j=-N to -1}(Wj * avgFrameSAD(j)) + Sum_{j=1 to M}(Wj * avgFrameSAD(j)) + W0 * avgFrameSAD(0). (D)
In equation (D), "avgFrameSAD" expresses (as described above) the average macroblock SAD (MbSAD(k, m)) value in a frame, avgFrameSAD(0) is the avgFrameSAD for the current frame, a negative j indexes time instances before the current frame, and a positive j indexes time instances after the current frame. Hence, avgFrameSAD(j=-2) expresses the average frame SAD of the frame two frames before the current frame, and avgFrameSAD(j=3) expresses the average frame SAD of the frame three frames after the current frame.
Also, in equation (D), the variables N and M refer to the number of frames that are respectively before and after the current frame. Instead of simply selecting the values N and M based on a particular number of frames, some embodiments compute the values N and M based on particular durations of time before and after the time of the current frame. Correlating the motion masking to temporal durations is more advantageous than correlating the motion masking to a set number of frames. This is because the correlation of the motion masking with the temporal durations is directly in line with the viewer's time-based visual perception. The correlation of such masking with the number of frames, on the other hand, suffers from a variable display duration, as different displays present video at different frame rates.
In equation (D), "W" refers to a weighting factor, which, in some embodiments, decreases as the frame j gets further from the current frame. Also, in this equation, the first summation expresses the amount of motion that can be masked before the current frame, the second summation expresses the amount of motion that can be masked after the current frame, and the last expression (avgFrameSAD(0)) expresses the frame SAD of the current frame.
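For illustration only, the following Python sketch evaluates the weighted sum of equation (D) over look-behind and look-ahead windows; the decaying weight function is an assumption, since the embodiments leave the exact weights open:

def temporal_activity(avg_frame_sad, k, n_behind, m_ahead):
    # avg_frame_sad: one avgFrameSAD value per frame; k: current frame index
    def weight(offset):
        # assumed weighting: W_j decays as frame j gets further from frame k
        return 1.0 / (1.0 + abs(offset))
    total = avg_frame_sad[k]                 # W_0 = 1 for the current frame
    for j in range(1, n_behind + 1):         # look-behind window (N frames)
        if k - j >= 0:
            total += weight(-j) * avg_frame_sad[k - j]
    for j in range(1, m_ahead + 1):          # look-ahead window (M frames)
        if k + j < len(avg_frame_sad):
            total += weight(j) * avg_frame_sad[k + j]
    return total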
In some embodiments, the weighting factors are adjusted to account for scene changes. For instance, some embodiments account for an upcoming scene change within the look ahead range (i.e., within the M frames) but not any frames after a scene change. For instance, these embodiments might set the weighting factors to zero for frames within the look ahead range that are after a scene change. Also, some embodiments do not account for frames prior to or on a scene change within the look behind range (i.e., within the N frames). For instance, these embodiments might set the weighting factors to zero for frames within the look behind range that relate to a previous scene or fall before the previous scene change.
3. Variations to the Second Approach
a) Limiting the Influence of Past and Future Frames on the Temporal_Activity_Attribute
Equation (D) above essentially expresses the Temporal_Activity_Attribute in the following terms:
Temporal_Activity_Attribute = Past_Frame_Activity + Future_Frame_Activity + Current_Frame_Activity,
where Past_Frame_Activity (PFA) equals Sum_{i=1 to N}(Wi * avgFrameSAD(i)), Future_Frame_Activity (FFA) equals Sum_{j=1 to M}(Wj * avgFrameSAD(j)), and Current_Frame_Activity (CFA) equals avgFrameSAD(current).
Some embodiments modify the calculation of the Temporal_Activity_Attribute so that neither the Past_Frame_Activity nor the Future_Frame_Activity unduly controls the value of the Temporal_Activity_Attribute. For instance, some embodiments initially define PFA to equal Sum_{i=1 to N}(Wi * avgFrameSAD(i)), and FFA to equal Sum_{j=1 to M}(Wj * avgFrameSAD(j)). These embodiments then determine whether PFA is bigger than a scalar times FFA. If so, these embodiments then set PFA equal to an upper PFA limit value (e.g., a scalar times FFA). In addition to setting PFA equal to an upper PFA limit value, some embodiments may perform a combination of setting FFA to zero and setting CFA to zero. Other embodiments might set either of or both of PFA and CFA to a weighted combination of PFA, CFA, and FFA.
Analogously, after initially defining the PFA and FFA values based on the weighted sums, some embodiments also determine whether the FFA value is bigger than a scalar times PFA. If so, these embodiments then set FFA equal to an upper FFA limit value (e.g., a scalar times PFA). In addition to setting FFA equal to an upper FFA limit value, some embodiments may perform a combination of setting PFA to zero and setting CFA to zero. Other embodiments may set either of or both of FFA and CFA to a weighted combination of FFA, CFA, and PFA.
The potential subsequent adjustment of the PFA and FFA values (after the initial computation of these values based on the weighted sums) prevents either of these values from unduly controlling the Temporal_Activity_Attribute.
b) Limiting the Influence of the Spatial_Activity_Attribute and the Temporal_Activity_Attribute on the Activity_Attribute
Equation (C) above essentially expresses the Activity_Attribute in the following terms:
Activity_Attribute = Spatial_Activity + Temporal_Activity,
where the Spatial_Activity equals a scalar * (scalar * Spatial_Activity_Attribute)^β, and the Temporal_Activity equals a scalar * (scalar * Temporal_Activity_Attribute)^Δ.
Some embodiments modify the calculation of the Activity_Attribute so that neither the Spatial_Activity nor the Temporal_Activity unduly controls the value of the Activity_Attribute. For instance, some embodiments initially define the Spatial_Activity (SA) to equal a scalar * (scalar * Spatial_Activity_Attribute)^β, and define the Temporal_Activity (TA) to equal a scalar * (scalar * Temporal_Activity_Attribute)^Δ.
These embodiments then determine whether SA is bigger than a scalar times TA. If so, these embodiments then set SA equal to an upper SA limit value (e.g., a scalar times TA). In addition to setting SA equal to an upper SA limit in such a case, some embodiments might also set the TA value to zero or to a weighted combination of TA and SA.
Analogously, after initially defining the SA and TA values based on the exponential equations, some embodiments also determine whether TA value is bigger than a scalar times SA. If so, these embodiments then set TA equal to an upper TA limit value (e.g., a scalar times SA). In addition to setting TA equal to an upper TA limit in such a case, some embodiments might also set the SA value to zero or to a weighted combination of SA and TA.
The potential subsequent adjustment of the SA and TA values (after the initial computation of these values based on the exponential equations) prevents either of these values from unduly controlling the Activity_Attribute.
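The limiting rule is the same for both pairs discussed above (PFA/FFA and SA/TA). The Python sketch below shows one hypothetical form of it; the scalar bound of 2.0 and the helper name are illustrative assumptions:

def limit_pair(a, b, scalar=2.0):
    # If one term exceeds 'scalar' times the other, clip it to that upper
    # limit so that neither term unduly controls the combined attribute.
    if a > scalar * b:
        a = scalar * b
    elif b > scalar * a:
        b = scalar * a
    return a, b

# Example: combining spatial and temporal activity into the Activity_Attribute.
sa, ta = limit_pair(120.0, 35.0)   # SA is clipped to 2.0 * TA = 70.0
activity_attribute = sa + ta       # 105.0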
B. Computing the Macroblock-Level Masking Strength
1. First Approach
In some embodiments, the macroblock-level masking strength φMB(k, m) is calculated as follows:
φMB(k, m) = A * power(C*avgMbLuma(k, m), β) * power(B*MbSAD(k, m), αMB), (F)
where
  • avgMbLuma(k, m) is the average pixel intensity in frame k, macroblock m; and
  • αMB, β, A, B, and C are constants and/or are adapted to the local statistics.
2. Second Approach
The above-described equation (F) computes the macroblock masking strength essentially as follows:
φMB(k, m) = D * power(E*Mb_Brightness_Attribute, exponent0) * power(scalar*Mb_Spatial_Activity_Attribute, exponent1).
In equation (F), the macroblock's Mb_Brightness_Attribute equals avgMbLuma(k, m), and the Mb_Spatial_Activity_Attribute equals avgMbSAD(k). This Mb_Spatial_Activity_Attribute measures the amount of spatial innovations in a region of pixels within the macroblock that is being coded.
Just as in the case of the frame masking strength, some embodiments might expand the activity measure in the macroblock masking strength to include the amount of temporal innovations in a region of pixels across a number of successive frames. Specifically, these embodiments would compute the macroblock masking strength as follows:
φMB(k, m) = D * power(E*Mb_Brightness_Attribute, exponent0) * power(scalar*Mb_Activity_Attribute, exponent1), (G)
where the Mb_Activity_Attribute is given by the following equation (H):
Mb_Activity_Attribute = F * power(D*Mb_Spatial_Activity_Attribute, exponent_beta) + G * power(F*Mb_Temporal_Activity_Attribute, exponent_delta). (H)
The computation of the Mb_Temporal_Activity_Attribute for a macroblock can be analogous to the above-described computation of the Temporal_Activity_Attribute for a frame. For instance, in some of these embodiments, the Mb_Temporal_Activity_Attribute is provided by the equation (I) below:
Mb_Temporal_Activity_Attribute = Sum_{i=1 to N}(Wi * MbSAD(i, m)) + Sum_{j=1 to M}(Wj * MbSAD(j, m)) + MbSAD(m). (I)
The variables in equation (I) were defined in Section III.A. In equation (I), the macroblock m in frame i or j can be the macroblock in the same location as the macroblock m in the current frame, or can be the macroblock in frame i or j that is initially predicted to correspond to the macroblock m in the current frame.
The Mb_Temporal_Activity_Attribute provided by equation (I) can be modified in an analogous manner to the modifications (discussed in Section III.A.3 above) of the frame Temporal_Activity_Attribute provided by equation (D). Specifically, the Mb_Temporal_Activity_Attribute provided by the equation (I) can be modified to limit the undue influence of macroblocks in the past and future frames.
Similarly, the Mb_Activity_Attribute provided by equation (H) can be modified in an analogous manner to the modifications (discussed in Section III.A.3 above) of the frame Activity_Attribute provided by equation (C). Specifically, the Mb_Activity_Attribute provided by equation (H) can be modified to limit the undue influence of the Mb_Spatial_Activity_Attribute and the Mb_Temporal_Activity_Attribute.
C. Computing the Masked QP Values
Based on the values of the masking strengths (φF and φMB) and the value of the reference masking strength (ΦR), the visual masking process can calculate the masked QP values at the frame level and macroblock level by using two functions, CalcMQP and CalcMQPforMB. The pseudo code for these two functions is below:
CalcMQP(nominalQP, ΦR, φF(k), maxQPFrameAdjustment)
{
QPFrameAdjustment = βF * (φF(k) - ΦR) / ΦR;
clip QPFrameAdjustment to lie within [minQPFrameAdjustment, maxQPFrameAdjustment];
maskedQPofFrame = nominalQP + QPFrameAdjustment;
clip maskedQPofFrame to lie in the admissible range;
return maskedQPofFrame (for frame k);
}
CalcMQPforMB(maskedQPofFrame, φF(k), φMB(k, m), maxQPMacroblockAdjustment)
{
if (φF(k) > T), where T is a suitably chosen threshold
QPMacroblockAdjustment = βMB * (φMB(k, m) - φF(k)) / φF(k);
else
QPMacroblockAdjustment = 0;
clip QPMacroblockAdjustment so that it lies within [minQPMacroblockAdjustment, maxQPMacroblockAdjustment];
maskedQPofMacroblock = maskedQPofFrame + QPMacroblockAdjustment;
clip maskedQPofMacroblock so that it lies within the valid QP value range;
return maskedQPofMacroblock;
}
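For readers who prefer executable code, a minimal Python rendering of the two pseudo-code functions above follows; the default values for βF, βMB, the threshold T, and the clip ranges are assumptions, since the pseudo code leaves them open:

def calc_mqp(nominal_qp, phi_r, phi_f, beta_f=6.0,
             adj_range=(-6.0, 6.0), qp_range=(0.0, 51.0)):
    # Frame-level masked QP, following CalcMQP above.
    adj = beta_f * (phi_f - phi_r) / phi_r
    adj = max(adj_range[0], min(adj_range[1], adj))
    return max(qp_range[0], min(qp_range[1], nominal_qp + adj))

def calc_mqp_for_mb(mqp_frame, phi_f, phi_mb, beta_mb=6.0, threshold=1.0,
                    adj_range=(-6.0, 6.0), qp_range=(0.0, 51.0)):
    # Macroblock-level masked QP, following CalcMQPforMB above.
    adj = beta_mb * (phi_mb - phi_f) / phi_f if phi_f > threshold else 0.0
    adj = max(adj_range[0], min(adj_range[1], adj))
    return max(qp_range[0], min(qp_range[1], mqp_frame + adj))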
In the CalcMQP and CalcMQPforMB functions above, βF and βMB can be predetermined constants or adapted to local statistics.
IV. MULTI-PASS ENCODING
Figure 1 presents a process 100 that conceptually illustrates the multi-pass encoding method of some embodiments of the invention. As shown in this figure, the process 100 has three stages, which are described in the following three sub-sections.
A. Analysis and initial QP selection
As shown in Figure 1, the process 100 initially computes (at 105) the initial value of the reference masking strength (ΦR(1)) and the initial value of the nominal quantization parameter (QPNom(1)) during the initial analysis stage (i.e., during pass 0) of the multi-pass encoding process. The initial reference masking strength (ΦR(1)) is used during the first search stage, while the initial nominal quantization parameter (QPNom(1)) is used during the first pass of the first search stage (i.e., during pass 1 of the multi-pass encoding process).
At the beginning of pass 0, ΦR(0) can be some arbitrary value or a value selected based on experimental results (for instance, the middle value of a typical range of ΦR values). During an analysis of the sequence, a masking strength φF(k) is computed for each frame; then the reference masking strength, ΦR(1), is set to be equal to avg(φF(k)) at the end of pass 0. Other decisions for the reference masking strength ΦR are also possible. For instance, it may be computed as the median or another arithmetic function of the values φF(k), e.g., a weighted average of the values φF(k).
There are several approaches to initial QP selection with varying complexity. For instance, the initial nominal QP can be selected as an arbitrary value (e.g., 26). Alternatively, a value can be selected that is known to produce an acceptable quality for the target bit rate based on coding experiments. The initial nominal QP value can also be selected from a look-up table based on spatial resolution, frame rate, spatial/temporal complexity, and target bit rate. In some embodiments, this initial nominal QP value is selected from the table using a distance measure that depends on each of these parameters, or it may be selected using a weighted distance measure of these parameters.
This initial nominal QP value can also be set to the adjusted average of the frame QP values as they are selected during a fast encoding with a rate controller (without masking), where the average has been adjusted based on the bit rate percentage rate error E0 for pass 0. Similarly, the initial nominal QP can also be set to a weighted adjusted average of the frame QP values, where the weight for each frame is determined by the percentage of macroblocks in this frame that are not coded as skipped macroblocks. Alternatively, the initial nominal QP can be set to an adjusted average or an adjusted weighted average of the frame QP values as they are selected during a fast encoding with a rate controller (with masking), as long as the effect of changing the reference masking strength from ΦR(0) to ΦR(1) is taken into account.
B. First Search Stage: Nominal QP Adjustments
After 105, the multi-pass encoding process 100 enters the first search stage. In the first search stage, the process 100 performs N1 encodings of the sequence, where N1 represents the number of passes through the first search stage. During each pass of the first stage, the process uses a changing nominal quantization parameter with a constant reference masking strength.
Specifically, during each pass p in the first search stage, the process 100 computes (at 107) a particular quantization parameter MQPp(k) for each frame k and a particular quantization parameter MQPMB(p)(k, m) for each individual macroblock m within the frame k. The calculation of the parameters MQPp(k) and MQPMB(p)(k, m) for a given nominal quantization parameter QPNom(p) and reference masking strength ΦR(p) was described in Section III (where MQPp(k) and MQPMB(p)(k, m) are computed by using the functions CalcMQP and CalcMQPforMB, which were described above in Section III). In the first pass (i.e., pass 1) through 107, the nominal quantization parameter and the first-stage reference masking strength are the parameter QPNom(1) and reference masking strength ΦR(1), which were computed during the initial analysis stage 105.
After 107, the process encodes (at 110) the sequence based on the quantization parameter values computed at 107. Next, the encoding process 100 determines (at 115) whether it should terminate. Different embodiments have different criteria for terminating the overall encoding process. Examples of exit conditions that completely terminate the multi-pass encoding process include:
• |EP| < ε, where ε is the error tolerance in the final bit rate.
• QPNom(p) is at the upper or lower bound of the valid range of QP values.
  • The number of passes has exceeded the maximum number of allowable passes PMAX.
Some embodiments might use all of these exit conditions, while other embodiments might only use some of them. Yet other embodiments might use other exit conditions for terminating the encoding process.
When the multi-pass encoding process decides (at 115) to terminate, the process 100 omits the second search stage and transitions to 145. At 145, the process saves the bitstream from the last pass p as the final result, and then terminates.
On the other hand, when the process determines (at 115) that it should not terminate, it then determines (at 120) whether it should terminate the first search stage. Again, different embodiments have different criteria for terminating the first search stage. Examples of exit conditions that terminate the first search stage of the multi-pass encoding process include:
  • QPNom(p+1) is the same as QPNom(q), and q ≤ p (in this case, the error in bit rate cannot be lowered any further by modifying the nominal QP).
• |EP| < εc, εc > ε, where εc is the error tolerance in the bit rate for the first search stage.
• The number of passes has exceeded P1, where P1 is less than PMAX-
  • The number of passes has exceeded P2, which is less than P1, and |EP| < ε2, where ε2 > εc.
Some embodiments might use all these exit conditions, while other embodiments might only use some of them. Yet other embodiments might use other exit conditions for terminating the first search stage.
When the multi-pass encoding process decides (at 120) to terminate the first search stage, the process 100 proceeds to the second search stage, which is described in the next sub-section. On the other hand, when the process determines (at 120) that it should not terminate the first search stage, it updates (at 125) the nominal QP for the next pass in the first search stage (i.e., defines QPNom(p+1)). In some embodiments, the nominal QPNom(p+1) is updated as follows. At the end of pass 1, these embodiments define
QPNom(p+1) = QPNom(p) + χ * Ep, where χ is a constant. At the end of each pass from pass 2 to pass N1, these embodiments then define
QPNom(p+1) = InterpExtrap(0, Eq1, Eq2, QPNom(q1), QPNom(q2)),
where InterpExtrap is a function that is further described below. Also, in the above equation, q1 and q2 are pass numbers with corresponding bit rate errors that are the lowest among all passes up to pass p, and q1, q2, and p have the following relationship: 1 ≤ q1 < q2 ≤ p.
Below is the pseudo code for the InterpExtrap function. Note that if x is not between x1 and x2, this function is an extrapolation function. Otherwise, it is an interpolation function.
InterpExtrap(x, x1, x2, y1, y2)
{
if (x2 != x1)
y = y1 + (x - x1) * (y2 - y1) / (x2 - x1);
else
y = y1;
return y;
}
The nominal QP value is typically rounded to an integer value and clipped to lie within the valid range of QP values. One of ordinary skill in the art will realize that other embodiments might compute the nominal QPNom(p+1) value differently than the approach described above.
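The following Python sketch applies these update rules to a logged pass history; the constant χ, the QP range, and the selection of q1 and q2 by smallest absolute error are assumptions consistent with the description above:

def interp_extrap(x, x1, x2, y1, y2):
    # Same behavior as the InterpExtrap pseudo code above.
    return y1 + (x - x1) * (y2 - y1) / (x2 - x1) if x2 != x1 else y1

def next_nominal_qp(history, chi=0.5):
    # history: list of (QPNom, Ep) pairs, one per completed pass.
    if len(history) == 1:
        qp, err = history[0]
        qp_next = qp + chi * err
    else:
        # q1, q2: the two passes with the lowest bit rate errors so far.
        (qp1, e1), (qp2, e2) = sorted(history, key=lambda h: abs(h[1]))[:2]
        qp_next = interp_extrap(0.0, e1, e2, qp1, qp2)
    return max(0, min(51, round(qp_next)))  # round and clip to a valid range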
After 125, the process transitions back to 107 to start the next pass (i.e., p := p+1), and for this pass, computes (at 107) a particular quantization parameter MQPp(k) for each frame k and a particular quantization parameter MQPMB(p)(k, m) for each individual macroblock m within the frame k. Next, the process encodes (at 110) the sequence of frames based on these newly computed quantization parameters. From 110, the process then transitions to 115, which was described above.
C. Second Search Stage: Reference Masking Strength Adjustments
When the process 100 determines (at 120) that it should terminate the first search stage, it transitions to 130. In the second search stage, the process 100 performs N2 encodings of the sequence, where N2 represents the number of passes through the second search stage. During each pass, the process uses the same nominal quantization parameter and a changing reference masking strength.
At 130, the process 100 computes a reference masking strength ΦR(p+1) for the next pass, i.e., pass p+1, which is pass N1+1. In pass N1+1, the process 100 encodes the sequence of frames at 135. Different embodiments compute (at 130) the reference masking strength ΦR(p+1) at the end of a pass p in different ways. Two alternative approaches are described below.
Some embodiments compute the reference masking strength ΦR(P) based on the
error in bit rate(s) and value(s) of ΦR from previous pass(es). For instance, at the end
of pass N1, some embodiments define
ΦR(N1+1) = ΦR(N1) + ΦR(N1) * Konst * EN1.
At the end of pass N1+m, where m is an integer greater than 1, some embodiments define
ΦR(N1+m) = InterpExtrap(0, EN1+m-2, EN1+m-1, ΦR(N1+m-2), ΦR(N1+m-1)).
Alternatively, some embodiments define
ΦR(N1+m) = InterpExtrap(0, EN1+m-q2, EN1+m-q1, ΦR(N1+m-q2), ΦR(N1+m-q1)),
where q1 and q2 are previous passes that gave the best errors. Other embodiments compute the reference masking strength at the end of each pass in the second search stage by using AMQP, which was defined in Section I. One way of computing AMQP for a given nominal QP and some value for ΦR is described below by reference to the pseudo code of a function GetAvgMaskedQP.
GetAvgMaskedQP(nominalQP, ΦR)
{
sum = 0;
for (k = 0; k < numframes; k++) {
MQP(k) = maskedQP for frame k calculated using CalcMQP(nominalQP, ΦR, φF(k), maxQPFrameAdjustment); // see above
sum += MQP(k);
}
return sum / numframes;
}
Some embodiments that use AMQP compute a desired AMQP for pass p+1 based on the error in bit rate(s) and value(s) of AMQP from previous pass(es). The ΦR(p+1) corresponding to this AMQP is then found through a search procedure given by a function Search(AMQP(p+1), ΦR(p)), the pseudo code of which is given at the end of this subsection.
For instance, some embodiments at the end of pass N1 compute AMQPN1+1, where
AMQPN1+1 = InterpExtrap(0, EN1-1, EN1, AMQPN1-1, AMQPN1), when N1 > 1, and
AMQPN1+1 = AMQPN1, when N1 = 1.
These embodiments then define:
ΦR(N1+1) = Search(AMQPN1+1, ΦR(N1)).
At the end of pass N1+m (where m is an integer greater than 1), some embodiments define:
AMQPN1+m = InterpExtrap(0, EN1+m-2, EN1+m-1, AMQPN1+m-2, AMQPN1+m-1), and
ΦR(N1+m) = Search(AMQPN1+m, ΦR(N1+m-1)).
Given the desired AMQP and some default value of ΦR, the ΦR corresponding to the desired AMQP can be found using the Search function, which has the following pseudo code in some embodiments:
Search(AMQP, ΦR)
{
interpolateSuccess = True; // until set otherwise
refLumaSad0 = refLumaSad1 = refLumaSadx = ΦR;
errorInAvgMaskedQp = GetAvgMaskedQP(nominalQP, refLumaSadx) - AMQP;
if (errorInAvgMaskedQp > 0) {
ntimes = 0;
do {
ntimes++;
refLumaSad0 = refLumaSad0 * 1.1;
errorInAvgMaskedQp = GetAvgMaskedQP(nominalQP, refLumaSad0) - AMQP;
} while (errorInAvgMaskedQp > 0 && ntimes < 10);
if (ntimes >= 10) interpolateSuccess = False;
}
else { // errorInAvgMaskedQp < 0
ntimes = 0;
do {
ntimes++;
refLumaSad1 = refLumaSad1 * 0.9;
errorInAvgMaskedQp = GetAvgMaskedQP(nominalQP, refLumaSad1) - AMQP;
} while (errorInAvgMaskedQp < 0 && ntimes < 10);
if (ntimes >= 10) interpolateSuccess = False;
}
ntimes = 0;
do {
ntimes++;
refLumaSadx = (refLumaSad0 + refLumaSad1) / 2; // simple successive approximation
errorInAvgMaskedQp = GetAvgMaskedQP(nominalQP, refLumaSadx) - AMQP;
if (errorInAvgMaskedQp > 0) refLumaSad1 = refLumaSadx;
else refLumaSad0 = refLumaSadx;
} while (ABS(errorInAvgMaskedQp) > 0.05 && ntimes < 12);
if (ntimes >= 12) interpolateSuccess = False;
if (interpolateSuccess) return refLumaSadx;
else return ΦR;
}
In the above pseudo code, the numbers 10, 12 and 0.05 may be replaced with suitably chosen thresholds.
After computing the reference masking strength for the next pass (pass p+1) through the encoding of the frame sequence, the process 100 transitions to 132 and starts the next pass (i.e., p := p+1). For each frame k and each macroblock m during each encoding pass p, the process computes (at 132) a particular quantization parameter MQPp(k) for each frame k and particular quantization parameters MQPMB(p)(k, m) for individual macroblocks m within the frame k. The calculation of the parameters MQPp(k) and MQPMB(p)(k, m) for a given nominal quantization parameter QPNom(p) and reference masking strength ΦR(p) was described in Section III (where MQPp(k) and MQPMB(p)(k, m) are computed by using the functions CalcMQP and CalcMQPforMB, which were described above in Section III). During the first pass through 132, the reference masking strength is the one that was just computed at 130. Also, the nominal QP remains constant throughout the second search stage. In some embodiments, the nominal QP through the second search stage is the nominal QP that resulted in the best encoding solution (i.e., in the encoding solution with the lowest bit rate error) during the first search stage. After 132, the process encodes (at 135) the frame sequence using the quantization parameters computed at 132. After 135, the process determines (at 140) whether it should terminate the second search stage. Different embodiments use different criteria for terminating the second search stage at the end of a pass p. Examples of such criteria are:
  • |EP| < ε, where ε is the error tolerance in the final bit rate.
  • The number of passes has exceeded the maximum number of passes allowed.
Some embodiments might use all of these exit conditions, while other embodiments might only use some of them. Yet other embodiments might use other exit conditions for terminating the second search stage.
When the process 100 determines (at 140) that it should not terminate the second search stage, it returns to 130 to recompute the reference masking strength for the next pass of encoding. From 130, the process transitions to 132 to compute quantization parameters and then to 135 to encode the video sequence by using the newly computed quantization parameters.
On the other hand, when the process decides (at 140) to terminate the second search stage, it transitions to 145. At 145, the process 100 saves the bitstream from the last pass p as the final result, and then terminates.
V. DECODER INPUT BUFFER UNDERFLOW CONTROL
Some embodiments of the invention provide a multi-pass encoding process that examines various encodings of a video sequence for a target bit rate, in order to identify an optimal encoding solution with respect to the usage of the input buffer used by the decoder. In some embodiments, this multi-pass process follows the multi-pass encoding process 100 of Figure 1.
The decoder input buffer ("decoder buffer") usage will fluctuate to some degree during the decoding of an encoded sequence of images (e.g., frames), because of a variety of factors, such as fluctuation in the size of encoded images, the speed with which the decoder receives encoded data, the size of the decoder buffer, the speed of the decoding process, etc.
A decoder buffer underflow signifies the situation where the decoder is ready to decode the next image before that image has completely arrived at the decoder side. The multi-pass encoder of some embodiments simulates the decoder buffer and re-encodes selected segments in the sequence to prevent decoder buffer underflow.
Figure 2 conceptually illustrates a codec system 200 of some embodiments of the invention. This system includes a decoder 205 and an encoder 210. In this figure, the encoder 210 has several components that enable it to simulate the operations of similar components of the decoder 205.
Specifically, the decoder 205 has an input buffer 215, a decoding process 220, and an output buffer 225. The encoder 210 simulates these modules by maintaining a simulated decoder input buffer 230, a simulated decoding process 235, and a simulated decoder output buffer 240. In order not to obstruct the description of the invention, Figure 2 is simplified to show the decoding process 220 and encoding process 245 as single blocks. Also, in some embodiments, the simulated decoding process 235 and simulated decoder output buffer 240 are not utilized for buffer underflow management, and are therefore shown in this figure for illustration only.
The decoder maintains the input buffer 215 to smooth out variations in the rate and arrival time of incoming encoded images. If the decoder runs out of data (underflow) or fills up the input buffer (overflow), there will be visible decoding discontinuities as the picture decoding halts or incoming data is discarded. Both of these cases are undesirable.
To eliminate the underflow condition, the encoder 210 in some embodiments first encodes a sequence of images and stores them in a storage 255. For instance, the encoder 210 uses the multi-pass encoding process 100 to obtain a first encoding of the sequence of images. It then simulates the decoder input buffer 215 and re-encodes the images that would cause buffer underflow. After all buffer underflow conditions are removed, the re-encoded images are supplied to the decoder 205 through a connection 255, which may be a network connection (Internet, cable, PSTN lines, etc.), a non-network direct connection, media (a DVD, etc.), etc.
Figure 3 illustrates an encoding process 300 of the encoder of some embodiments. This process tries to find an optimal encoding solution that does not cause the decoder buffer to underflow. As shown in Figure 3, the process 300 identifies (at 302) a first encoding of the sequence of images that meets a desired target bit rate (e.g., the average bit rate for each image in the sequence meets a desired average target bit rate). For instance, the process 300 may use (at 302) the multi-pass encoding process 100 to obtain the first encoding of the sequence of images.
After 302, the encoding process 300 simulates (at 305) the decoder input buffer 215 by considering a variety of factors, such as the connection speed (i.e., the speed with which the decoder receives encoded data), the size of the decoder input buffer, the size of encoded images, the decoding process speed, etc. At 310, the process 300 determines if any segment of the encoded images will cause the decoder input buffer to underflow. The techniques that the encoder uses to determine (and subsequently eliminate) the underflow condition are described further below. If the process 300 determines (at 310) that the encoded images do not create an underflow condition, the process ends. On the other hand, if the process 300 determines (at 310) that a buffer underflow condition exists in any segment of the encoded images, it refines (at 315) the encoding parameters based on the value of these parameters from previous encoding passes. The process then re-encodes (at 320) the segment with underflow to reduce the segment bit size. After re-encoding the segment, the process 300 examines (at 325) the segment to determine if the underflow condition is eliminated.
When the process determines (at 325) that the segment still causes underflow, the process 300 transitions to 315 to further refine the encoding parameters to eliminate underflow. Alternatively, when the process determines (at 325) that the segment will not cause any underflow, the process specifies (at 330) the starting point for re-examining and re-encoding the video sequence as the frame after the end of the segment re-encoded in the last iteration at 320. Next, at 335, the process re-encodes the portion of the video sequence specified at 330, up to (and excluding) the first IDR frame following the underflow segment specified at 315 and 320. After 335, the process transitions back to 305 to simulate the decoder buffer to determine whether the rest of the video sequence still causes buffer underflow after re-encoding. The flow of the process 300 from 305 was described above.
A. Determining the underflow segment in the sequence of encoded images
As described above, the encoder simulates the decoder buffer conditions to determine whether any segment in the sequence of the encoded or re-encoded images causes underflow in the decoder buffer. In some embodiments, the encoder uses a simulation model that considers the size of the encoded images, network conditions such as bandwidth, and decoder factors (e.g., input buffer size, initial and nominal times to remove images, decoding process time, display time of each image, etc.).
In some embodiments, the MPEG-4 AVC Coded Picture Buffer (CPB) model is used to simulate the decoder input buffer conditions. The CPB is the term used in the MPEG-4 H.264 standard to refer to the simulated input buffer of the Hypothetical Reference Decoder (HRD). The HRD is a hypothetical decoder model that specifies constraints on the variability of conforming streams that an encoding process may produce. The CPB model is well known and is described in Section 1 below for convenience. A more detailed description of the CPB and HRD can be found in the Draft ITU-T Recommendation and Final Draft International Standard of Joint Video Specification (ITU-T Rec. H.264 / ISO/IEC 14496-10 AVC).
1. Using the CPB Model to Simulate the Decoder Buffer
The following paragraphs describe how the decoder input buffer is simulated in some embodiments using the CPB model. The time at which the first bit of image n begins to enter the CPB is referred to as the initial arrival time tai( n ), which is derived as follows:
  • tai( 0 ) = 0, when the image is the first image (i.e., image 0),
  • tai( n ) = Max( taf( n - 1 ), tai,earliest( n ) ), when the image is not the first image in the sequence being encoded or re-encoded (i.e., where n > 0).
In the above equation,
  • tai,earliest( n ) = tr,n( n ) - initial_cpb_removal_delay, where tr,n( n ) is the nominal removal time of image n from the CPB as specified below and initial_cpb_removal_delay is the initial buffering period.
The final arrival time for image n is derived by
taf( n ) = tai( n ) + b( n ) / BitRate,
where b( n ) is the size in bits of image n.
In some embodiments, the encoder makes its own calculations of the nominal removal time as described below instead of reading them from an optional part of the bit stream as in the H.264 specification. For image 0, the nominal removal time of the image from the CPB is specified by
tr,n( 0 ) = initial_cpb_removal_delay.
For image n (n > 0), the nominal removal time of the image from the CPB is specified by
tr,n( n ) = tr,n( 0 ) + Sum_{i=0 to n-1}( ti ),
where tr,n( n ) is the nominal removal time of image n, and ti is the display duration for picture i.
The removal time of image n is specified as follows:
  • tr( n ) = tr,n( n ), when tr,n( n ) >= taf( n ),
  • tr( n ) = taf( n ), when tr,n( n ) < taf( n ).
It is this latter case that indicates that the size of image n, b(n), is so large that it prevents removal at the nominal removal time.
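A minimal Python sketch of this timing recursion follows; the argument names and units (bits, seconds) are assumptions for illustration:

def simulate_cpb(sizes, durations, bit_rate, initial_delay):
    # sizes[n]: b(n) in bits; durations[i]: display duration t_i in seconds.
    # Returns (tai, taf, tr_n) per image, following the rules above.
    times = []
    taf_prev = 0.0
    elapsed = 0.0                      # sum of t_i for i = 0 .. n-1
    for n, b in enumerate(sizes):
        tr_n = initial_delay + elapsed # nominal removal time t_r,n(n)
        tai_earliest = tr_n - initial_delay
        tai = 0.0 if n == 0 else max(taf_prev, tai_earliest)
        taf = tai + b / bit_rate       # final arrival time t_af(n)
        times.append((tai, taf, tr_n))
        taf_prev = taf
        elapsed += durations[n]
    return times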
2. Detection of underflow segments
As described in the previous section, the encoder can simulate the decoder input buffer state and obtain the number of bits in the buffer at a given time instant. Alternatively, the encoder can track how each individual image changes the decoder input buffer state via the difference between its nominal removal time and final arrival time (i.e., tb(n) = tr,n(n) - taf(n)). When tb(n) is less than 0, the buffer is suffering from underflow between time instants tr,n(n) and taf(n), and possibly before tr,n(n) and after taf(n).
The images directly involved in an underflow can be easily found by testing whether tb(n) is less than 0. However, the images with tb(n) less than 0 do not
necessarily cause an underflow, and conversely the images causing an underflow might not have tb(n) less than 0. Some embodiments define an underflow segment as a
stretch of consecutive images (in decoding order) that cause underflow by continuously depleting the decoder input buffer until underflow reaches its worst point.
Figure 4 is a plot of the difference between the nominal removal time and the final arrival time of images, tb(n), versus image number in some embodiments. The plot is drawn for a sequence of 1500 encoded images. Figure 4a shows an underflow segment with arrows marking its beginning and end. Note that there is another underflow segment in Figure 4a that occurs after the first underflow segment, which is not explicitly marked by arrows for simplicity.
Figure 5 illustrates a process 500 that the encoder uses to perform the underflow detection operation at 305. The process 500 first determines (at 505) the final arrival time, t_af, and nominal removal time, t_r,n, of each image by simulating the decoder input buffer conditions as explained above. Note that since this process may be called several times during the iterative process of buffer underflow management, it receives an image number as the starting point and examines the sequence of images from this given starting image. Obviously, for the first iteration, the starting point is the first image in the sequence.
At 510, the process 500 compares the final arrival time of each image at the decoder input buffer with the nominal removal time of that image by the decoder. If the process determines that there are no images with a final arrival time after the nominal removal time (i.e., no underflow condition exists), the process exits. On the other hand, when an image is found for which the final arrival time is after the nominal removal time, the process determines that there is an underflow and transitions to 515 to identify the underflow segment.
At 515, the process 500 identifies the underflow segment as the segment of images where the decoder buffer starts to be continuously depleted, up to the next global minimum where the underflow condition starts to improve (i.e., t_b(n) does not get more negative over a stretch of images). The process 500 then exits. In some embodiments, the beginning of the underflow segment is further adjusted to start with an I-frame, which is an intra-encoded image that marks the start of a set of related inter-encoded images. Once one or more segments that are causing the underflow are identified, the encoder proceeds to eliminate the underflow. Section B below describes elimination of underflow in the single-segment case (i.e., when the entire sequence of encoded images contains only a single underflow segment). Section C then describes elimination of underflow for the multi-segment underflow cases.
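As a rough sketch of this detection step, assuming the simplified simulation above, the segment bounds can be located as follows; the function name is illustrative, the bounds follow the local-maximum / global-minimum rule only approximately, and the I-frame adjustment is omitted.

```python
# Sketch of the detection step of process 500: t_b(n) = t_r,n(n) - t_af(n),
# an underflow exists wherever t_b(n) < 0, and the segment runs from the
# local maximum preceding the first negative value down to the minimum of
# the dip, before the buffer starts to recover.
def find_underflow_segment(t_rn, t_af, start=0):
    t_b = [r - a for r, a in zip(t_rn, t_af)]
    cross = next((n for n in range(start, len(t_b)) if t_b[n] < 0), None)
    if cross is None:
        return None  # no underflow condition exists
    seg_start = cross
    while seg_start > start and t_b[seg_start - 1] > t_b[seg_start]:
        seg_start -= 1  # walk back up to the preceding local maximum
    recover = next((n for n in range(cross, len(t_b)) if t_b[n] >= 0),
                   len(t_b))
    seg_end = min(range(cross, recover), key=lambda n: t_b[n])
    return seg_start, seg_end
```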
B. Single-Segment Underflow Elimination
Referring to Figure 4(a), if the t_b(n)-versus-n curve only crosses the n-axis once with a descending slope, then there is only one underflow segment in the entire sequence. The underflow segment begins at the nearest local maximum preceding the zero-crossing point, and ends at the next global minimum between the zero-crossing point and the end of the sequence. The end point of the segment could be followed by another zero-crossing point with the curve taking an ascending slope, if the buffer recovers from the underflow.
Figure 6 illustrates a process 600 that the encoder utilizes (at 315, 320, and 325) to eliminate the underflow condition in a single segment of images in some embodiments. At 605, the process 600 estimates the total number of bits to reduce (ΔB) in the underflow segment by computing the product of the input bit rate into the buffer and the longest delay (e.g., the minimum t_b(n)) found at the end of the segment.
Next, at 610, the process 600 uses the average masked frame QP (AMQP) and the total number of bits in the current segment from the last encoding pass (or passes) to estimate a desired AMQP for achieving a desired number of bits for the segment, B_T = B - ΔB_p, where p is the current number of iterations of the process 600 for the segment. If this iteration is the first iteration of process 600 for the particular segment, the AMQP and total number of bits are the AMQP and the total number of bits for this segment that are derived from the initial encoding solution identified at 302. On the other hand, when this iteration is not the first iteration of process 600, these parameters can be derived from the encoding solution or solutions obtained in the last pass or last several passes of the process 600.
Next, at 615, the process 600 uses the desired AMQP to modify the masked frame QP, MQP(n), of each image based on its masking strength φ_F(n), such that images that can tolerate more masking get larger bit reductions. The process then re-encodes (at 620) the video segment based on the parameters defined at 315. The process then examines (at 625) the segment to determine whether the underflow condition is eliminated. Figure 4(b) illustrates the elimination of the underflow condition of Figure 4(a) after process 600 is applied to the underflow segment to re-encode it. When the underflow condition is eliminated, the process exits. Otherwise, it transitions back to 605 to further adjust the encoding parameters to reduce the total bit size.
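The overall shape of this loop can be sketched as follows. The segment object, its attributes, and the reencode callback are hypothetical stand-ins for the encoder's internal state, and the AMQP update shown is only an illustrative placeholder, since the text above does not fix a particular formula.

```python
# Structural sketch of process 600.  All attribute and callback names
# (seg.timings, seg.total_bits, seg.amqp, seg.masking, seg.mean_masking,
# reencode) are hypothetical.
def eliminate_underflow_segment(seg, bit_rate, reencode, max_passes=8):
    # p counts the re-encoding passes (the subscript in B_T = B - dB_p).
    for p in range(1, max_passes + 1):
        t_b_min = min(t_rn - t_af for t_rn, t_af in seg.timings)
        if t_b_min >= 0:
            return seg                        # 625: underflow eliminated
        delta_b = bit_rate * -t_b_min         # 605: bits to remove
        target_bits = seg.total_bits - delta_b
        # 610: estimate a desired AMQP for target_bits from the last
        # pass (simple inverse-proportional guess, purely illustrative).
        desired_amqp = seg.amqp * seg.total_bits / target_bits
        # 615: spread the QP change by masking strength phi_F(n), so
        # images that tolerate more masking absorb larger reductions.
        qps = [desired_amqp * phi / seg.mean_masking for phi in seg.masking]
        seg = reencode(seg, qps)              # 620: re-encode the segment
    return seg
```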
C. Underflow Elimination with Multiple Underflow Segments
When there are multiple underflow segments in a sequence, re-encoding a segment changes the buffer fullness time, t_b(n), for all the ensuing frames. To account for the modified buffer condition, the encoder searches for one underflow segment at a time, starting from the first zero-crossing point (i.e., at the lowest n) with a descending slope.
The underflow segment begins at the nearest local maximum preceding this zero-crossing point, and ends at the next global minimum between this zero-crossing point and the next zero-crossing point (or the end of the sequence if there is no further zero crossing). After finding one segment, the encoder hypothetically removes the underflow in this segment and estimates the updated buffer fullness by setting t_b(n) to 0 at the end of the segment and redoing the buffer simulation for all subsequent frames.
The encoder then continues searching for the next segment using the modified buffer fullness. Once all underflow segments are identified as described above, the encoder derives the AMQPs and modifies the masked frame QPs for each segment independently of the others, just as in the single-segment case.
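A sketch of this scan, reusing find_underflow_segment from the detection sketch above, might look as follows. Shifting the subsequent final arrival times is an approximation of redoing the buffer simulation after setting t_b(n) to 0 at the segment end; fix_segment is assumed to apply the single-segment correction described in Section B.

```python
# Sketch of the multi-segment scan: find one underflow segment at a time,
# hypothetically remove its underflow, and re-estimate the buffer state
# for the remaining frames before searching again.
def fix_all_segments(t_rn, t_af, fix_segment):
    t_af = list(t_af)  # work on a copy of the simulated arrival times
    start = 0
    while True:
        seg = find_underflow_segment(t_rn, t_af, start=start)
        if seg is None:
            return
        seg_start, seg_end = seg
        fix_segment(seg_start, seg_end)
        # Assume the underflow is removed: t_b(seg_end) becomes 0, so all
        # later arrivals move earlier by the same lateness (approximate).
        late = t_af[seg_end] - t_rn[seg_end]
        for m in range(seg_end, len(t_af)):
            t_af[m] -= late
        start = seg_end + 1
```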
One of ordinary skill would realize that other embodiments might be implemented differently. For instance, some embodiments would not identify multiple segments that cause underflow of the decoder's input buffer. Instead, some embodiments would perform the buffer simulation as described above to identify a first segment that causes underflow. After identifying such a segment, these embodiments correct the segment to rectify the underflow condition in that segment and then resume encoding following the corrected portion. After encoding the remainder of the sequence, these embodiments repeat this process for the next underflow segment.
D. Applications of Buffer Underflow Management
The decoder buffer underflow management techniques described above apply to numerous encoding and decoding systems. Several examples of such systems are described below.
Figure 7 illustrates a network 705 connecting a video streaming server 710 and several client decoders 715-725. The clients are connected to the network 705 via links with different bandwidths, such as 300 Kb/sec and 3 Mb/sec. The video streaming server 710 controls the streaming of encoded video images from an encoder 730 to the client decoders 715-725.
The streaming video server may decide to stream the encoded video images using the slowest bandwidth in the network (i.e., 300 Kb/sec) and the smallest client buffer size. In this case, the streaming server 710 needs only one set of encoded images that are optimized for a target bit rate of 300 Kb/sec. On the other hand, the server may generate and store different encodings that are optimized for different bandwidths and different client buffer conditions.
Figure 8 illustrates another example of an application for decoder underflow management. In this example, an HD-DVD player 805 is receiving encoded video images from an HD-DVD 840 that has stored encoded video data from a video encoder 810. The HD-DVD player 805 has an input buffer 815, a set of decoding modules shown as one block 820 for simplicity, and an output buffer 825.
The output of the player 805 is sent to display devices such as a TV 830 or a computer display terminal 835. The HD-DVD player may have a very high bandwidth, e.g., 29.4 Mb/sec. In order to maintain a high-quality image on the display devices, the encoder ensures that the video images are encoded in such a way that no segment in the sequence of images is so large that it cannot be delivered to the decoder input buffer on time.
VI. COMPUTER SYSTEM
Figure 9 presents a computer system with which one embodiment of the invention is implemented. Computer system 900 includes a bus 905, a processor 910, a system memory 915, a read-only memory 920, a permanent storage device 925, input devices 930, and output devices 935. The bus 905 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 900. For instance, the bus 905 communicatively connects the processor 910 with the read-only memory 920, the system memory 915, and the permanent storage device 925.
From these various memory units, the processor 910 retrieves instructions to execute and data to process in order to execute the processes of the invention. The read-only-memory (ROM) 920 stores static data and instructions that are needed by the processor 910 and other modules of the computer system.
The permanent storage device 925, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 900 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 925.
Other embodiments use a removable storage device (such as a floppy disk or zip® disk, and its corresponding disk drive) as the permanent storage device. Like the permanent storage device 925, the system memory 915 is a read-and-write memory device. However, unlike storage device 925, the system memory is a volatile read-and-write memory, such as a random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 915, the permanent storage device 925, and/or the read-only memory 920.
The bus 905 also connects to the input and output devices 930 and 935. The input devices enable the user to communicate information and select commands to the computer system. The input devices 930 include alphanumeric keyboards and cursor-controllers. The output devices 935 display images generated by the computer system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD).
Finally, as shown in Figure 9, bus 905 also couples computer 900 to a network 965 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network ("LAN"), a wide area network ("WAN"), or an Intranet) or a network of networks (such as the Internet). Any or all of the components of computer system 900 may be used in conjunction with the invention. However, one of ordinary skill in the art would appreciate that any other system configuration may also be used in conjunction with the present invention.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. For instance, instead of using the H.264 method of simulating the decoder input buffer, other simulation methods may be used that consider the buffer size, the arrival and removal times of images in the buffer, and the decoding and display times of the images.
Several embodiments described above compute the mean-removed SAD to obtain an indication of the image variance in a macroblock. Other embodiments, however, might identify the image variance differently. For example, some embodiments might predict an expected image value for the pixels of a macroblock. These embodiments then generate a macroblock SAD by subtracting this predicted value from the luminance value of the pixels of the macroblock, and summing the absolute values of the subtractions. In some embodiments, the predicted value is based not only on the values of the pixels in the macroblock but also on the values of the pixels in one or more of the neighboring macroblocks.
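As a concrete illustration of the simplest variant, a mean-removed SAD can be computed as below (a minimal NumPy sketch; the function name is illustrative). A neighbor-based predictor, as described above, would replace the block mean with a prediction derived from adjacent macroblocks.

```python
import numpy as np

# Mean-removed SAD for one macroblock of luma samples: subtract the block
# mean (the simplest predicted image value) and sum the absolute residuals,
# giving an indication of the image variance in the macroblock.
def mean_removed_sad(macroblock):
    mb = np.asarray(macroblock, dtype=np.float64)
    return float(np.abs(mb - mb.mean()).sum())
```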
Also, the embodiments described above use the derived spatial and temporal masking values directly. Other embodiments apply a smoothing filter to successive spatial masking values and/or to successive temporal masking values before using them, in order to pick out the general trend of those values through the video images. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details.

Claims

What is claimed is:
1. A method of encoding a plurality of images, the method comprising: a) defining a nominal quantization parameter for encoding the images; b) deriving at least an image-specific quantization parameter for at least one image based on the nominal quantization parameter; c) encoding the images based on the image-specific quantization parameter; and d) iteratively repeating the defining, deriving, and encoding operations to optimize the encoding.
2. The method of claim 1 further comprising: a) deriving a plurality of image-specific quantization parameters for a plurality of images based on the nominal quantization parameter; b) encoding the images based on the image-specific quantization parameters; and c) repeating the defining, deriving, and encoding operations to optimize the encoding.
3. The method of claim 1 further comprising stopping the iterations when an encoding operation satisfies a set of termination criteria.
4. The method of claim 3, wherein the set of termination criteria includes the identification of an acceptable encoding of the images.
5. The method of claim 4, wherein an acceptable encoding of the images is an encoding of the images that is within a particular range of a target bit rate.
6. A method of encoding a plurality of images, the method comprising: a) identifying a plurality of image attributes, each particular image attribute quantifying the complexity of at least a particular portion of a particular image; b) identifying a reference attribute that quantifies the complexity of the plurality of images; c) identifying quantization parameters for encoding the plurality of images based on the identified image attributes, the reference attribute, and the nominal quantization parameter; d) encoding the plurality of images based on the identified quantization parameters; and e) iteratively performing the identifying and encoding operations to optimize the encoding, wherein a plurality of different iterations use a plurality of different reference attributes.
7. The method of claim 6, wherein a plurality of the attributes are visual masking strengths of at least one portion of each image, the visual masking strengths for estimating the amount of encoding artifacts that are not perceptible to a viewer of the video sequence after the video sequence has been encoded according to the method and then decoded.
8. The method of claim 6, wherein a plurality of the attributes are visual masking strengths of at least one portion of each image, wherein a visual masking strength for a portion of an image quantifies the complexity of the portion of the image, wherein in quantifying the complexity of a portion of an image, the visual masking strength provides an indication of the amount of compression artifacts that can result from the encoding without visible distortions in the encoded image after the image is decoded.
9. A computer readable medium storing a computer program for encoding a plurality of images, the computer program comprising sets of instructions for: a) defining a nominal quantization parameter for encoding the images; b) deriving at least an image-specific quantization parameter for at least one image based on the nominal quantization parameter; c) encoding the images based on the image-specific quantization parameter; and d) iteratively repeating the defining, deriving, and encoding operations to optimize the encoding.
10. The computer readable medium of claim 9, wherein the computer program further comprises sets of instructions for: a) deriving a plurality of image-specific quantization parameters for a plurality of images based on the nominal quantization parameter; b) encoding the images based on the image-specific quantization parameters; and c) repeating the defining, deriving, and encoding operations to optimize the encoding.
11. The computer readable medium of claim 9 further comprising a set of instructions for stopping the iterations when an encoding operation satisfies a set of termination criteria.
12. The computer readable medium of claim 11, wherein the set of termination criteria includes the identification of an acceptable encoding of the images.
13. The computer readable medium of claim 12, wherein an acceptable encoding of the images is an encoding of the images that is within a particular range of a target bit rate.
14. A method of encoding a sequence of video images, the method comprising:
a) receiving the sequence of video images;
b) iteratively examining different encoding solutions for the sequence of video images to identify an encoding solution that optimizes image quality while meeting a target bit rate and satisfying a set of constraints regarding flow of encoded data through an input buffer of a hypothetical reference decoder for decoding the encoded video sequence.
15. The method of claim 14, wherein said iterative examining comprises for each encoding solution, determining whether the hypothetical reference decoder underflows while processing the encoding solution for any set of images within the video sequence.
16. The method of claim 14, wherein said iterative examination of different encodings comprises:
a) simulating the input buffer condition of the hypothetical reference decoder;
b) utilizing said simulation to select the number of bits to optimize image quality while maximizing input buffer usage at the hypothetical reference decoder; c) re-encoding said encoded video images to achieve said optimized buffer usage; and
d) performing said simulation, utilization, and re-encoding iteratively until an optimal encoding is identified.
17. The method of claim 16, wherein simulating the hypothetical reference decoder input buffer condition further comprises considering a rate at which the hypothetical reference decoder receives encoded data.
18. The method of claim 16, wherein simulating the hypothetical reference decoder input buffer condition further comprises considering a size of the hypothetical reference decoder input buffer.
19. The method of claim 16, wherein simulating the hypothetical reference decoder input buffer condition further comprises considering an initial removal delay from the hypothetical reference decoder's input buffer.
20. The method of claim 14 further comprising:
a) before said iterative examinations, identifying an initial encoding solution that is not based on the set of constraints relating to the buffer flow; and
b) using the initial encoding solution to start a first examination in said iterative examinations.
21. A computer readable medium storing a computer program for encoding a sequence of video images in a system having a hypothetical reference decoder with an input buffer, the computer program comprising sets of instructions for:
a) receiving the sequence of video images;
b) iteratively examining different encoding solutions for the sequence of video images to identify an encoding solution that optimizes image quality while meeting a target bit rate and satisfying a set of constraints regarding flow of encoded data through an input buffer of a hypothetical reference decoder for decoding the encoded video sequence.
22. The computer readable medium of claim 21, wherein the set of instructions for said iterative examining comprises a set of instructions for determining, for each encoding solution, whether the hypothetical reference decoder underflows while processing the encoding solution for any set of images within the video sequence.
23. The computer readable medium of claim 21, wherein the set of instructions for said iterative examination of different encodings comprises a set of instructions for:
a) simulating the input buffer condition of the hypothetical reference decoder;
b) utilizing said simulation to select the number of bits to optimize image quality while maximizing input buffer usage at the hypothetical reference decoder; c) re-encoding said encoded video images to achieve said optimized buffer usage; and
d) performing said simulation, utilization, and re-encoding iteratively until an optimal encoding is identified.
24. The computer readable medium of claim 23, wherein the set of instructions for simulating the hypothetical reference decoder input buffer condition further comprises a set of instructions for considering a rate at which the hypothetical reference decoder receives encoded data.
25. The computer readable medium of claim 23, wherein the set of instructions for simulating the hypothetical reference decoder input buffer condition further comprises a set of instructions for considering a size of the hypothetical reference decoder input buffer.
26. The computer readable medium of claim 23, wherein the set of instructions for simulating the hypothetical reference decoder input buffer condition further comprises a set of instructions for considering an initial removal delay from the hypothetical reference decoder's input buffer.
27. The computer readable medium of claim 21, wherein the computer program further comprises sets of instructions for:
a) before said iterative examinations, identifying an initial encoding solution that is not based on the set of constraints relating to the buffer flow; and b) using the initial encoding solution to start a first examination in said iterative examinations.
28. A method of encoding video, the method comprising: a) identifying a first visual masking strength for a first portion of a first image in the video sequence, wherein the visual masking strength quantifies a degree to which coding artifacts are not perceptible to a viewer due to complexity of the first portion; and b) encoding at least a part of the first image based on the identified first visual masking strength.
29. The method of claim 28, wherein the visual masking strength specifies spatial complexity of the first portion.
30. The method of claim 29, wherein the spatial complexity is calculated as a function of the pixel values of a part of the image.
31. The method of claim 30, wherein the first portion has a plurality of pixels and an image value for each pixel, wherein identifying the visual masking for the first portion comprises: a) estimating an image value for the pixels of the first portion; b) subtracting said estimated image value from the image values of the pixels of the first portion; and c) computing the visual masking strength based on the result of the subtraction.
32. The method of claim 31, wherein the estimated image value is a statistical attribute of image values of pixels of the first portion.
33. The method of claim 32, wherein the statistical attribute is a mean.
34. The method of claim 31, wherein the estimated image value is based partly on pixels that neighbor the pixels of the first portion.
35. The method of claim 28, wherein the visual masking strength specifies temporal complexity of the first portion.
36. The method of claim 35, wherein the temporal complexity is calculated as a function of the motion compensated error signal of pixel regions defined within the first portion of the first image.
37. The method of claim 35, wherein the temporal complexity is calculated as a function of the motion compensated error signal of pixel regions defined within the first portion of the first image and the motion compensated error signal of pixels defined within a set of second portions of a set of other images.
38. The method of claim 37, wherein the set of other images includes only one image.
39. The method of claim 37, wherein the set of other images includes more than one other image.
40. The method of claim 39, wherein the motion compensated error signal is an amalgamated motion compensated error signal, the method further comprising: a) defining a weighting factor for each other image, wherein the weighting factor of a second image is larger than the weighting factor of a third image, wherein the second image is closer to the first image in the video sequence than the third image is to the first image; b) calculating an individual motion compensated error signal for the first image and each image in the set of other images; and c) using the weighting factors to generate the amalgamated motion compensated error signal from the individual motion compensated error signals.
41. The method of claim 40, wherein the weighting factors for a sub-set of images in the set of other images that are not part of a scene with the first image are selected to eliminate the sub-set of images.
42. The method of claim 37, wherein the set of other images includes only images that are part of a scene with the first image and does not include any images that relate to another scene.
43. The method of claim 37, wherein the second image is selected from a set of past images occurring before the first image and a set of future images occurring after the first image.
44. The method of claim 28, wherein the visual masking strength comprises a spatial complexity component and a temporal complexity component, the method further comprising comparing the spatial complexity component and the temporal complexity component to each other and modifying them based on certain criteria to maintain the contribution of the spatial complexity component and the contribution of the temporal complexity component to the masking strength within an acceptable range of one another.
45. The method of claim 44, wherein the temporal complexity component is adjusted to account for an upcoming scene change within a look-ahead range of certain frames.
46. The method of claim 28, wherein the visual masking strength specifies a brightness attribute of the first portion.
47. The method of claim 46, wherein the brightness attribute is calculated as the average pixel intensity of the first portion.
48. The method of claim 28, wherein the first portion is the entire first image.
49. The method of claim 28, wherein the first portion is less than the entire first image.
50. The method of claim 49, wherein the first portion is a macroblock within the first image.
51. A computer readable medium storing a computer program for encoding video, the computer program comprising sets of instructions for: a) identifying a first visual masking strength that quantifies the complexity of a first portion of a first image in the video sequence; and b) encoding at least a part of the first image based on the identified first visual masking strength.
52. The computer readable medium of claim 51, wherein the visual masking strength quantifies a degree to which coding artifacts are not perceptible to a viewer due to the spatial complexity of the first portion.
53. The computer readable medium of claim 51, wherein the visual masking strength quantifies a degree to which coding artifacts are not perceptible to a viewer due to the motion in the video, wherein the motion is captured by the first image and a set of images before and after the first image.
54. The computer readable medium of claim 51, wherein the masking strength comprises a spatial complexity and a temporal complexity, the computer program further comprising a set of instructions for comparing the spatial complexity and the temporal complexity to each other and modifying them based on a set of criteria to maintain the contribution of the spatial complexity component and the contribution of the temporal complexity component to the masking strength within an acceptable range of one another.
55. The computer readable medium of claim 54, wherein the masking strength comprises a spatial complexity and a temporal complexity, the computer program further comprising a set of instructions for altering the spatial complexity and the temporal complexity by smoothing out the temporal trend of the spatial complexity and the temporal complexity within a set of images.
56. The computer readable medium of claim 54, wherein the temporal complexity component is adjusted to account for an upcoming scene change within a look-ahead range of certain frames.
57. The computer readable medium of claim 51, wherein the masking strength attribute specifies a brightness attribute of the first portion.
PCT/US2005/022616 2004-06-27 2005-06-24 Multi-pass video encoding WO2006004605A2 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
KR1020067017074A KR100909541B1 (en) 2004-06-27 2005-06-24 Multi-pass video encoding method
CN2005800063635A CN1926863B (en) 2004-06-27 2005-06-24 Multi-pass video encoding method
JP2007518338A JP4988567B2 (en) 2004-06-27 2005-06-24 Multi-pass video encoding
EP05773224A EP1762093A4 (en) 2004-06-27 2005-06-24 Multi-pass video encoding
HK07106057.0A HK1101052A1 (en) 2004-06-27 2007-06-07 Method of multi-pass video encoding

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US58341804P 2004-06-27 2004-06-27
US60/583,418 2004-06-27
US64391805P 2005-01-09 2005-01-09
US60/643,918 2005-01-09
US11/118,616 2005-04-28
US11/118,604 2005-04-28
US11/118,616 US8406293B2 (en) 2004-06-27 2005-04-28 Multi-pass video encoding based on different quantization parameters
US11/118,604 US8005139B2 (en) 2004-06-27 2005-04-28 Encoding with visual masking

Publications (3)

Publication Number Publication Date
WO2006004605A2 true WO2006004605A2 (en) 2006-01-12
WO2006004605A3 WO2006004605A3 (en) 2006-05-04
WO2006004605B1 WO2006004605B1 (en) 2006-07-13

Family

ID=35783274

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/022616 WO2006004605A2 (en) 2004-06-27 2005-06-24 Multi-pass video encoding

Country Status (6)

Country Link
EP (1) EP1762093A4 (en)
JP (2) JP4988567B2 (en)
KR (3) KR100997298B1 (en)
CN (3) CN102833539B (en)
HK (1) HK1101052A1 (en)
WO (1) WO2006004605A2 (en)


Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100918499B1 (en) * 2007-09-21 2009-09-24 주식회사 케이티 Apparatus and method for multi-pass encoding
EP2101503A1 (en) * 2008-03-11 2009-09-16 British Telecommunications Public Limited Company Video coding
CN102860010A (en) 2010-05-06 2013-01-02 日本电信电话株式会社 Video encoding control method and apparatus
JP5295429B2 (en) 2010-05-07 2013-09-18 日本電信電話株式会社 Moving picture coding control method, moving picture coding apparatus, and moving picture coding program
KR101391661B1 (en) 2010-05-12 2014-05-07 니폰덴신뎅와 가부시키가이샤 Video coding control method, video coding device and video coding program
KR101702562B1 (en) 2010-06-18 2017-02-03 삼성전자 주식회사 Storage file format for multimedia streaming file, storage method and client apparatus using the same
US9402082B2 (en) * 2012-04-13 2016-07-26 Sharp Kabushiki Kaisha Electronic devices for sending a message and buffering a bitstream
CN102946542B (en) * 2012-12-07 2015-12-23 杭州士兰微电子股份有限公司 Mirror image video interval code stream recompile and seamless access method and system are write
US10742708B2 (en) 2017-02-23 2020-08-11 Netflix, Inc. Iterative techniques for generating multiple encoded versions of a media title
US11166034B2 (en) 2017-02-23 2021-11-02 Netflix, Inc. Comparing video encoders/decoders using shot-based encoding and a perceptual visual quality metric
US11184621B2 (en) * 2017-02-23 2021-11-23 Netflix, Inc. Techniques for selecting resolutions for encoding different shot sequences
US11153585B2 (en) 2017-02-23 2021-10-19 Netflix, Inc. Optimizing encoding operations when generating encoded versions of a media title
US10666992B2 (en) 2017-07-18 2020-05-26 Netflix, Inc. Encoding techniques for optimizing distortion and bitrate
CN109756733B (en) * 2017-11-06 2022-04-12 华为技术有限公司 Video data decoding method and device

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05167998A (en) * 1991-12-16 1993-07-02 Nippon Telegr & Teleph Corp <Ntt> Image-encoding controlling method
JP3627279B2 (en) * 1995-03-31 2005-03-09 ソニー株式会社 Quantization apparatus and quantization method
US5956674A (en) * 1995-12-01 1999-09-21 Digital Theater Systems, Inc. Multi-channel predictive subband audio coder using psychoacoustic adaptive bit allocation in frequency, time and over the multiple channels
FR2753330B1 (en) * 1996-09-06 1998-11-27 Thomson Multimedia Sa QUANTIFICATION METHOD FOR VIDEO CODING
JPH10304311A (en) * 1997-04-23 1998-11-13 Matsushita Electric Ind Co Ltd Video coder and video decoder
DE69830979T2 (en) * 1997-07-29 2006-05-24 Koninklijke Philips Electronics N.V. METHOD AND DEVICE FOR VIDEO CODING WITH VARIABLE BITRATE
US6192075B1 (en) * 1997-08-21 2001-02-20 Stream Machine Company Single-pass variable bit-rate control for digital video coding
KR20010012071A (en) * 1998-02-20 2001-02-15 요트.게.아. 롤페즈 Method and device for coding a sequence of pictures
US6278735B1 (en) * 1998-03-19 2001-08-21 International Business Machines Corporation Real-time single pass variable bit rate control strategy and encoder
US6289129B1 (en) * 1998-06-19 2001-09-11 Motorola, Inc. Video rate buffer for use with push dataflow
ES2259827T3 (en) * 1998-10-13 2006-10-16 Matsushita Electric Industrial Co., Ltd. REGULATION OF THE CALCULATION AND MEMORY REQUIREMENTS OF A BIT TRAIN COMPRESSED IN A VIDEO DECODER.
US20020057739A1 (en) * 2000-10-19 2002-05-16 Takumi Hasebe Method and apparatus for encoding video
US6594316B2 (en) * 2000-12-12 2003-07-15 Scientific-Atlanta, Inc. Method and apparatus for adaptive bit rate control in an asynchronized encoding system
US6831947B2 (en) * 2001-03-23 2004-12-14 Sharp Laboratories Of America, Inc. Adaptive quantization based on bit rate prediction and prediction error energy
US7062429B2 (en) * 2001-09-07 2006-06-13 Agere Systems Inc. Distortion-based method and apparatus for buffer control in a communication system
JP3753371B2 (en) * 2001-11-13 2006-03-08 Kddi株式会社 Video compression coding rate control device
US7027982B2 (en) * 2001-12-14 2006-04-11 Microsoft Corporation Quality and rate control strategy for digital audio
KR100468726B1 (en) * 2002-04-18 2005-01-29 삼성전자주식회사 Apparatus and method for performing variable bit rate control in real time
JP2004166128A (en) * 2002-11-15 2004-06-10 Pioneer Electronic Corp Method, device and program for coding image information
US8542733B2 (en) * 2003-06-26 2013-09-24 Thomson Licensing Multipass video rate control to match sliding window channel constraints

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
None
See also references of EP1762093A4

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7822118B2 (en) 2002-11-08 2010-10-26 Apple Inc. Method and apparatus for control of rate-distortion tradeoff by mode selection in video encoders
US8355436B2 (en) 2002-11-08 2013-01-15 Apple Inc. Method and apparatus for control of rate-distortion tradeoff by mode selection in video encoders
US8406293B2 (en) 2004-06-27 2013-03-26 Apple Inc. Multi-pass video encoding based on different quantization parameters
US8594190B2 (en) 2004-06-27 2013-11-26 Apple Inc. Encoding with visual masking
US8005139B2 (en) 2004-06-27 2011-08-23 Apple Inc. Encoding with visual masking
US8811475B2 (en) 2004-06-27 2014-08-19 Apple Inc. Multi-pass video encoding solution for buffer underflow
US8208536B2 (en) 2005-04-28 2012-06-26 Apple Inc. Method and apparatus for encoding using single pass rate controller
CN101855910A (en) * 2007-09-28 2010-10-06 杜比实验室特许公司 Video compression and tranmission techniques
WO2009045683A1 (en) * 2007-09-28 2009-04-09 Athanasios Leontaris Video compression and transmission techniques
US9445110B2 (en) 2007-09-28 2016-09-13 Dolby Laboratories Licensing Corporation Video compression and transmission techniques
US12041234B2 (en) 2007-09-28 2024-07-16 Dolby Laboratories Licensing Corporation Video compression and transmission techniques
WO2011084918A1 (en) * 2010-01-06 2011-07-14 Dolby Laboratories Licensing Corporation High performance rate control for multi-layered video coding applications
CN102714725A (en) * 2010-01-06 2012-10-03 杜比实验室特许公司 High performance rate control for multi-layered video coding applications
US8908758B2 (en) 2010-01-06 2014-12-09 Dolby Laboratories Licensing Corporation High performance rate control for multi-layered video coding applications
WO2013095627A1 (en) * 2011-12-23 2013-06-27 Intel Corporation Content adaptive high precision macroblock rate control
US9497241B2 (en) 2011-12-23 2016-11-15 Intel Corporation Content adaptive high precision macroblock rate control
EP2951994A4 (en) * 2013-01-30 2016-10-12 Intel Corp Content adaptive bitrate and quality control by using frame hierarchy sensitive quantization for high efficiency next generation video coding
EP3044960A4 (en) * 2013-09-12 2017-08-02 Magnum Semiconductor, Inc. Methods and apparatuses including an encoding system with temporally adaptive quantization
US10313675B1 (en) 2015-01-30 2019-06-04 Google Llc Adaptive multi-pass video encoder control

Also Published As

Publication number Publication date
WO2006004605A3 (en) 2006-05-04
KR100909541B1 (en) 2009-07-27
CN102833539B (en) 2015-03-25
JP2008504750A (en) 2008-02-14
JP2011151838A (en) 2011-08-04
KR100997298B1 (en) 2010-11-29
KR20070011294A (en) 2007-01-24
JP4988567B2 (en) 2012-08-01
WO2006004605B1 (en) 2006-07-13
KR100988402B1 (en) 2010-10-18
KR20090034992A (en) 2009-04-08
CN102833539A (en) 2012-12-19
KR20090037475A (en) 2009-04-15
CN102833538B (en) 2015-04-22
HK1101052A1 (en) 2007-10-05
CN1926863A (en) 2007-03-07
CN1926863B (en) 2012-09-19
JP5318134B2 (en) 2013-10-16
CN102833538A (en) 2012-12-19
EP1762093A2 (en) 2007-03-14
EP1762093A4 (en) 2011-06-29

Similar Documents

Publication Publication Date Title
US8594190B2 (en) Encoding with visual masking
US8811475B2 (en) Multi-pass video encoding solution for buffer underflow
JP5318134B2 (en) Multi-pass video encoding
Guo et al. Optimal bit allocation at frame level for rate control in HEVC
US6529631B1 (en) Apparatus and method for optimizing encoding and performing automated steerable image compression in an image coding system using a perceptual metric
CA2688249C (en) A buffer-based rate control exploiting frame complexity, buffer level and position of intra frames in video coding
US20060233237A1 (en) Single pass constrained constant bit-rate encoding
EP1994758A1 (en) Method and apparatus for determining in picture signal encoding the bit allocation for groups of pixel blocks in a picture
WO2007143876A1 (en) Method and apparatus for adaptively determining a bit budget for encoding video pictures
EP4333433A1 (en) Video coding method and apparatus, and electronic device
Wu et al. Adaptive initial quantization parameter determination for H. 264/AVC video transcoding
Chi et al. Region-of-interest video coding based on rate and distortion variations for H. 263+
Zhang et al. A two-pass rate control algorithm for H. 264/AVC high definition video coding
Overmeire et al. Constant quality video coding using video content analysis
Hoang Real-time VBR rate control of MPEG video based upon lexicographic bit allocation
Chang et al. A two-layer characteristic-based rate control framework for low delay video transmission
Guan et al. A Novel Video Compression Algorithm Based on Wireless Sensor Network.
Tun et al. A novel rate control algorithm for the Dirac video codec based upon the quality factor optimization

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

WWE Wipo information: entry into national phase

Ref document number: 2005773224

Country of ref document: EP

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2007518338

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 1020067017074

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 200580006363.5

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

WWP Wipo information: published in national office

Ref document number: 1020067017074

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 2005773224

Country of ref document: EP