US20070053429A1 - Color video codec method and system
- Publication number
- US20070053429A1 (application US 11/594,144)
- Authority
- US
- United States
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/59—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
- H04N19/587—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/87—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing involving scene cut or scene change detection in combination with video compression
- H04N19/93—Run-length coding
Definitions
- Data compression methods are used to reduce the amount of data necessary to represent information. Compression is often used when data storage space, transmission bandwidth, or transmitter/receiver data rate is limited. Data is compressed to a smaller size for storage or transmission and then decompressed back to original size when needed.
- Compression schemes can be classified as either “lossless” or “lossy.” In a lossless compression scheme, the data that is reconstructed at decompression is an exact match to the original data—no information is lost. In a lossy compression scheme, some information may be lost in the compression process. The goal of a lossy compression scheme is to choose the discarded information wisely, so that the data reconstructed at decompression is as close as possible to the original data, or at least so that the difference between the original and the reconstructed data is acceptable.
- Video signals are a common type of data for use in compression systems. Raw video data tends to be large, so that working with raw, uncompressed video would require large amounts of storage space or transmission bandwidth. However, characteristics of typical video allow fairly aggressive compression. For instance, there is high correlation between adjacent pixels in a single video frame (the set of all picture elements that represent one complete image), since objects in video tend to be of fairly uniform color and texture. In addition, there is high correlation between pixels in the same position in adjacent video frames, since motion in video usually occurs slowly in relation to the video frame rate. These high correlations mean that video signals contain a large amount of redundant information, and these redundancies are typically exploited by compression schemes for video. In addition, most video applications do not require lossless compression—the quality constraint is simply that a human viewer perceive little or no degradation in quality after compression and decompression. The limitations and strengths of human visual perception can be taken into account when designing a lossy video compression scheme—information that is not perceptually significant is discarded first.
- Compression schemes also can be classified as either “symmetric” or “asymmetric.” In a symmetric scheme, the compression and decompression processes are roughly equal in computational complexity. A symmetric scheme is appropriate when similar processing constraints are present at both compression and decompression points, such as in video-conferencing applications where both compression and decompression must be done in real time. An asymmetric scheme is used when compression and decompression have different complexity constraints. Typically, the constraint on the decompression end is greater, so computations are performed by the compressor in order to lessen the computational burden on the decompressor. An asymmetric scheme is usually used for video that will be captured once and then distributed many times, such as video clips stored and made available to many users on a computer network.
- FIG. 1 is a diagram of a typical asymmetric video compression system. Many existing video compression systems fit within this basic framework. The system consists of five main blocks—preprocessing, motion estimation, transform, quantization, and encoding—along with a feedback loop used to create decompressor reference data.
- The purpose of the preprocessing block is to prepare the video data for compression. Preprocessing functions typically convert the input video data into a format that allows for easier or more aggressive compression.
- One commonly used step of video preprocessing is subsampling. When video is subsampled, the size of the video frames (the number of pixels) is reduced. Subsampling is a simple way to create gains in video compression efficiency—by reducing the video frame size by half in each dimension, a 4:1 compression ratio has already been achieved. However, subsampling can result in distracting artifacts when the video is restored to full resolution after decompression.
- The RGB color format (discussed below in more detail) is not well suited to efficient compression, since the visually important video information is evenly distributed over the red, green, and blue color channels. For this reason, many video compression schemes include conversion to a different colorspace such as YUV (also discussed below). The YUV color format also contains three channels, but most of the visually important information is found in the Y channel, which contains pixel intensity information. The U and V channels contain all of the color information for the video data, and they can be compressed much more aggressively than the Y channel with little degradation in decompressed video quality. For example, the Y channel can be kept at full resolution while the U and V channels are subsampled by a factor of 16. This results in a compression ratio similar to the RGB subsampling by 4 (3.75:1 versus 4:1), but the quality of the resulting video is much higher because the most visually important information has been preserved.
- The preprocessing block may also include other miscellaneous functions that depend on the specific design of the video compressor, such as object identification and denoising.
- Prediction is used to exploit the redundancy between adjacent frames in typical video signals.
- Most asymmetric video compression systems contain a feedback loop including a “dummy” decompressor that mimics the state of the actual decompressor.
- The feedback loop provides the prediction block with copies of the previous video frame(s), and the prediction block then uses motion estimation to make a guess at what the next frame will look like. Then, rather than working with actual pixel values, the compressor will perform the remaining computations on the error between the actual frame and the predicted frame. Error values are generally smaller and sparser than pixel values, so the use of prediction reduces the amount of information that must be transmitted to the decompressor. The prediction block will also provide a parametric description of the estimated motion, which will be used at the decompressor to create the correct predicted frame.
- Most video compression schemes include a mathematical transformation of the video data.
- The purpose of the mathematical transform is to organize the video data into a form more suitable for effective compression. Two common transforms in video compression are the discrete cosine transform (DCT) and the wavelet transform. Each of these transforms organizes the video data into an “average” component and a “detail” component. The average component contains basic shape information for video frames, while the detail component contains edge information, which sharpens and clarifies the video frames.
- Organizing the video data into average and detail components is beneficial for compression because this organization isolates most of the energy in the video frame into a few values.
- The average component tends to contain only a few values that are very important to the accurate reconstruction of the video at the output, while the detail component will contain many values that have much less impact on the video quality. The few values in the average component can be transmitted with high accuracy, while the many values in the detail component can be compressed much more aggressively.
- Quantization is used to increase data compression. When data is quantized, the accuracy of the video data is decreased by reducing the number of bits used to store the values.
- Effective use of data quantization is enhanced by the reorganization of the video data that was accomplished in the preprocessing and transform blocks; the data that is less visually important can be quantized more aggressively.
- Data quantization is the source of most of the information loss in a typical lossy video compression system.
- The entropy encoding block in a video compressor further compresses the video data using lossless compression schemes. Common lossless compression methods for video applications are run-length encoding, Huffman encoding, and arithmetic coding, or a combination of these.
- FIG. 2 shows a typical decompressor corresponding to the compressor in FIG. 1 .
- The decompressor simply reverses the operations of the compressor. The entropy coding, quantization, and transform are all reversed to recover the motion and error data. The motion data is applied to the previous frame, producing a prediction of the upcoming frame, and the error data is applied to the predicted frame to produce the output video frame. Finally, any post-processing tasks such as colorspace conversion and upsampling are completed to convert the video into the proper format for output or display.
- The primary disadvantage of the prior art approach for wireless applications is its computational complexity. Even when an asymmetric design is used, the decompressor is typically too computationally heavy to produce acceptable video quality in real time on wireless devices that are heavily constrained in processing power and battery life.
- A preferred embodiment of the present invention eliminates computationally expensive operations to create a decompressor that is extremely light. The invention compensates for the removal of the transform and motion estimation by exploiting the limited display capabilities of many wireless devices. The invention also takes into account asymmetric display capabilities, allowing compression to be gained through aggressive quantization and subsampling. This approach results in a decompressor that is both much simpler and more effective than those in the prior art, allowing efficient computational optimizations that make the decompressor light enough to run on a low-performance wireless device.
- In one aspect, the present invention comprises a system for video compression comprising a video preprocessor; a predictor configured to receive video data from the preprocessor; and an encoder configured to communicate with the predictor. Preferably, the preprocessor comprises a colorspace converter, a frame activity detector, and a subsampler; the predictor comprises a frame differencer and a reference frame handler; and the encoder comprises an error image encoder and an image adder.
- In another aspect, the invention comprises a system for video decompression comprising a predictor and a decoder configured to communicate with the predictor. Preferably, the predictor comprises a reference frame handler, and the decoder comprises an error image decoder and a colorspace converter.
- In another aspect, the invention comprises a method for video compression, comprising: receiving color video data represented in a first colorspace representation; converting the received color video data to a second colorspace representation; identifying activity between consecutive frames of the converted color video data; subsampling the converted color video data; calculating error image data based on the subsampled and converted color video data and on the identified frame activity; encoding the error image data; and transmitting the encoded error image data to a device capable of displaying color video data. The step of identifying activity is preferably performed before the step of subsampling.
- In another aspect, the invention comprises a method for video decompression, comprising: receiving encoded color video error image data; decoding the data; combining the decoded data with previously received data to construct video frame data in a first colorspace representation; converting the color video frame data to a second colorspace representation with one pass through the data; and displaying the color video frame data. The step of converting comprises upsampling and dithering, and preferably is performed using look-up tables.
- In another aspect, the invention comprises a method for compressing and decompressing color video data, comprising: receiving color video data represented in a first colorspace representation and with a first pixel depth; converting the color video data to a second colorspace representation with a second pixel depth; compressing the converted data; and decompressing the compressed converted data, wherein the step of decompressing comprises converting the data to a third colorspace representation with a third pixel depth.
- FIGS. 1 and 2 are block diagrams of a typical prior art asymmetric video compression system.
- FIG. 3 is a block diagram of components of a preferred embodiment of the invention.
- FIG. 4 depicts activity thresholds used in a preferred embodiment.
- FIG. 5 depicts a preferred flow of operations in an Error Image Encoder.
- FIG. 6 depicts a preferred flow of operations in an Error Image Decoder.
- FIG. 7 depicts preferred data flow steps within a Video Preprocessor.
- FIG. 8 depicts preferred data flow steps within a Predictor and Encoder.
- FIG. 9 depicts preferred data flow steps within a Decompressor.
- FIG. 10 illustrates preferred Y component subsampling.
- FIG. 11 illustrates preferred S component subsampling.
- FIG. 12 illustrates one-pass color conversion, de-interlacing, and up-sampling.
- FIG. 13 depicts a color hexagon based on the HSV colorspace representation.
- FIG. 14 depicts an exemplary YST quantization pattern for 12-bit color.
- FIG. 15 depicts color histograms for video clips (a) susie.avi, (b) mummy.avi, and (c) elmo.avi.
- A preferred embodiment of the present invention comprises a color video codec. The efficient methods of a preferred embodiment of the codec allow color video at 128×117 pixel size and 10 fps to be decoded using less than 125 kB of combined program and data memory and 0.8-2.4 MIPS (depending on the video sequence) on an 8-bit color display.
- The codec generally achieves 30-50 times compression, but for some simple sequences may achieve 100 times compression or greater. The peak signal-to-noise ratio (PSNR), comparing the 24-bit input video to the 8-bit output video, is about 20 dB on the Y channel. Most of this loss in quality comes from the color quantization; color quantization alone, with no other compression applied, gives similar PSNR results.
- A video compressor 100 comprises three modules (see FIG. 3). The first module, a Video Preprocessor 120, prepares the video for compression by converting to a more compressible colorspace, detecting the amount of activity in the video, and subsampling. The second module, a Predictor 130, performs frame differencing and maintains prediction reference frames. The third module, an Encoder 140, encodes the error image to be transmitted in compressed form. The “D” blocks 125, 135, 145, and 155 indicate time delays; the delays may be one or two frame periods, depending on activity level, as discussed below.
- Colorspace Converter 122. The video compressor 100 preferably takes 24-bit RGB video input. Colorspace Converter 122 converts the input video from the input RGB colorspace to a YST colorspace. YST is a novel color format preferably used in the present invention. Conversion from RGB to YST concentrates the most important information in the video sequence into the Y component of the new colorspace. The S and T components contain color information that can be more aggressively subsampled to obtain better compression. The YST colorspace is described in greater detail below.
- Frame Activity Detector 126. A frame activity detector 126 identifies the amount of change between the current frame and the previous frame, so that a compression method can be chosen based on the amount of frame activity. An error value for each frame is computed from the pixel-wise differences between the Y components of the current and previous frames, where Y(i,j) denotes the pixel in the j-th column of the i-th row of the matrix Y containing the Y-component data for a single video frame.
- Each frame is classified on the scale depicted in FIG. 4 according to its error value.
- An error value of 0 indicates that the current frame is identical to the previous frame, so that rather than compressing and sending the current frame, the decompressor is instructed to redisplay the previous frame. If the error value indicates that the change between the current and previous frames is so small as to be unnoticeable, then a frame copy is also triggered. In the case of a frame copy, none of the further computations described are executed—the compressor simply sends a flag to the decompressor, and no video data is transmitted. Preferably, the number of consecutive frame copies is limited to a maximum that is specified as a parameter.
- If the error value indicates a small but noticeable amount of change, the current frame is identified as “low activity,” and interlacing is used to make the video resolution appear higher than it actually is and to improve the perceived quality of the video.
- Interlacing is preferably only applied to the Y video channel; the S and T channels are preferably never interlaced. For frames with a lot of activity, interlacing produces distracting artifacts in video. Therefore, if the error value is large, the current frame is identified as high activity, and interlacing is not used.
- Keyframes are typically triggered within the Frame Activity Detector 126 at scene changes in the video sequence. Keyframes can preferably also be set from outside the Frame Activity Detector 126 by triggering at regular intervals or after a certain number of consecutive non-keyframes. In prediction and encoding, keyframes are preferably treated the same as other high-activity frames, except that the reference frame is ignored (i.e., the reference frame is set to all zeros).
- The thresholds identifying the boundaries between copied frames, low activity frames, and high activity frames are parameters typically determined by trial and error, with values selected to give the best perceived results. In one embodiment, a programmer makes the trial-and-error determination and the selected values are hard-coded into the compressor. In another embodiment, the values are changed on-the-fly during video compression. The same is true of the maximum frame-copy value.
- A preferred feedback loop for maintaining reference frames in Frame Activity Detector 126 uses dual buffers that allow a delay of either one or two frame periods. When interlacing is used, even-row frames are compared with the previous even-row frame, and odd-row frames are compared with the previous odd-row frame. Since even- and odd-row frames alternate, two frames' worth of reference data is maintained. For high activity frames, which do not use interlacing, only the immediately previous frame is used. Since interlacing is only used on the Y channel, two previous frames of Y data are maintained, but only one previous frame of S and T data is maintained.
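- The exact error metric and threshold values are left to the implementer; the following is a minimal sketch of the classification logic, assuming a mean-absolute-difference metric on the Y channel and illustrative (hypothetical) threshold values, not the patent's own numbers.
```python
import numpy as np

# Illustrative thresholds; the patent determines these by trial and error.
COPY_THRESHOLD = 0.5           # below this, just redisplay the previous frame
LOW_ACTIVITY_THRESHOLD = 8.0   # below this, treat the frame as "low activity"
MAX_CONSECUTIVE_COPIES = 4     # maximum number of frame copies in a row

def classify_frame(y_cur, y_ref, copies_so_far):
    """Classify a frame as 'copy', 'low', or 'high' activity.

    y_cur and y_ref are 2-D arrays of Y-channel pixels for the current and
    reference frames. A mean-absolute-difference error metric is assumed
    here; the patent only states that the error value is derived from the
    Y(i,j) comparison between the two frames.
    """
    error = np.mean(np.abs(y_cur.astype(int) - y_ref.astype(int)))
    if error < COPY_THRESHOLD and copies_so_far < MAX_CONSECUTIVE_COPIES:
        return "copy"    # send only a frame-copy flag, no video data
    if error < LOW_ACTIVITY_THRESHOLD:
        return "low"     # interlace the Y channel for this frame
    return "high"        # no interlacing; keyframe logic may also apply
```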
- Subsampler 124. Each frame is preferably subsampled by Subsampler 124 before being compressed. The subsampled frame is enlarged back to its original size during the color conversion/dithering/upsampling process (during a table lookup) in the decompressor 150.
- The Y component is preferably subsampled by a factor of 2 in each dimension, and the S and T components are preferably subsampled by a factor of 4 in each direction. The Y component subsampling is preferably computed by applying a [½ ½] averaging filter across every other row of the Y component matrix. For non-interlaced frames, the even rows are preferably used. When interlacing is used, the rows used preferably alternate—if the even rows were used in the last frame, then the odd rows are chosen for this frame, and vice versa. Note that this subsampling is not a pixel-by-pixel two-dimensional computation; regardless of whether interlacing is used, half of the rows in the full-sized frame will be ignored.
- An example is shown in FIG. 10, where Y is the full-size 8×8 pixel Y component and Y′evens and Y′odds show the subsampled Y components for the even and odd row cases.
- The S and T component subsampling is preferably computed by segmenting the S and T component matrices into 4×4 pixel blocks and averaging the 16 pixels in each block. The S and T components are subsampled in the same way regardless of whether the frame is interlaced or not. An example is shown in FIG. 11, where S is the full-size 8×8 pixel S component, and S′ is the subsampled 2×2 S component. The T component is treated identically.
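- A short sketch of both subsampling operations, assuming numpy arrays whose dimensions are multiples of 4, is given below; the row-pair averaging for Y and the 4×4 block averaging for S and T follow the description above.
```python
import numpy as np

def subsample_y(y, use_even_rows=True):
    """Subsample the Y component by 2 in each dimension.

    Keeps every other row (even or odd, per the interlacing state) and
    applies a [1/2 1/2] averaging filter along that row; the remaining
    rows are simply ignored, as described above.
    """
    rows = y[0::2] if use_even_rows else y[1::2]
    return (rows[:, 0::2].astype(int) + rows[:, 1::2].astype(int)) // 2

def subsample_st(c):
    """Subsample an S or T component by 4 in each dimension (4x4 block mean)."""
    h, w = c.shape
    blocks = c.reshape(h // 4, 4, w // 4, 4)
    return blocks.mean(axis=(1, 3)).astype(int)

# Example with an 8x8 component, matching the sizes in FIGS. 10 and 11.
y = np.arange(64).reshape(8, 8)
print(subsample_y(y).shape)   # (4, 4)
print(subsample_st(y).shape)  # (2, 2)
```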
- Predictor 130. Since motion estimation and compensation are preferably not used, the predictor module operates in a straightforward manner.
- Reference Frame Handler 134. The two previous frames of Y channel data and one previous frame of S and T channel data are stored as reference frames for computing an error image. Two frames are needed for the Y channel because interlacing requires one frame for the even rows and one for the odd rows. When interlacing is not used, only the immediately preceding reference frame is needed. Note that these reference frames are preferably received from Image Adder 144, and have been quantized and dequantized (by Error Image Encoder 142, described below) to mimic, and preserve synchronization with, the state of the decompressor. They are not the same as the reference frames used in the Frame Activity Detector 126.
- The Reference Frame Handler 134 sends a copy of the reference frame to the Frame Differencer 132, for calculation of the error image, and sends a copy to Image Adder 144, to be added to the subsequently dequantized error image and returned as the next (i.e., updated) reference frame. Depending on whether interlacing is in use, the appropriate reference frame is sent to Frame Differencer 132.
- Frame Differencer 132. Prediction error (i.e., the error image) is computed as the difference between the current frame and the predicted frame. The predicted frame is preferably a reference frame stored by Reference Frame Handler 134—typically, a quantized and dequantized version of the previous frame.
- Error Image Encoder 142. Here the error image is compressed for transmission to the decompressor. The first step is to quantize the error image. A copy of the quantized error image is dequantized and sent to Image Adder 144, to be used in reconstructing the reference frame used by Frame Differencer 132. A second copy of the quantized error image is then compressed by Error Image Encoder 142 using runlength coding or non-zero coefficient coding (depending on the keyframe flag). Both the runlength-encoded data and the non-zero-coefficient-encoded data are Huffman encoded, and the Huffman-encoded data are transmitted to the Decompressor 150.
- The input to the Error Image Encoder 142 preferably equals the current input frame if the keyframe flag is ON, and equals the current input frame minus the reference image if the keyframe flag is OFF.
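- As a minimal sketch, assuming frames are held as numpy integer arrays, the differencing with keyframe handling amounts to:
```python
import numpy as np

def error_image(current, reference, is_keyframe):
    """Compute the error image fed to the Error Image Encoder.

    For keyframes the reference frame is treated as all zeros, so the
    "error image" is just the preprocessed frame itself; otherwise it is
    the difference between the current frame and the stored reference.
    """
    if is_keyframe:
        return current.copy()
    return current.astype(int) - reference.astype(int)
```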
- The Error Image Encoder 142 and Error Image Decoder 172 preferably use variations on standard methods to losslessly compress and decompress the video error data for transmission. FIGS. 5 and 6 show the preferred flow of data within the Error Image Encoder 142 and Decoder 172.
- The error image values are quantized to 4-bit values by truncating away all but the four most significant bits. Both keyframes and non-keyframes are quantized in the same way, although the quantized results are then encoded differently. The quantized error image is preferably dequantized by a left bitshift to replace the bits that were truncated away in quantization. This dequantized error image is then fed back within Compressor 100 for use in reference frame maintenance. Dequantization is performed in Decompressor 150 using look-up tables.
- Quantization is performed by a 4-bit bitshift, which provides a “uniform” quantization of the input image. Input images may be preprocessed by “stretching” or rescaling each pixel value according to the YST specifications. Preferably, only the two color channels are stretched. The quantization step thus comprises one stretching/scaling step plus a uniform 4-bit bitshift operation, which in effect makes it a non-uniform quantization. All non-uniformly quantized data on the compressor side (along with the non-quantized error images and reference images) contains these scaled, or stretched, images. The Y channel is not non-uniformly quantized, and is therefore not stretched. Since the dequantization of the data on the decompressor side is preferably made through a table, the non-uniform quantization is easily compensated for without any extra computational load.
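- As a rough sketch, assuming 8-bit values and ignoring the stretching of the color channels, the 4-bit quantization and the matching bitshift dequantization used for reference-frame maintenance could look like this:
```python
def quantize_4bit(value):
    """Keep the four most significant bits of an 8-bit value (0-255)."""
    return value >> 4          # result is in 0..15

def dequantize_4bit(qvalue):
    """Left bitshift to restore the original magnitude range.

    The truncated low bits are gone, so the operation is lossy; the
    compressor feeds this dequantized value back so that its reference
    frames stay synchronized with the decompressor's.
    """
    return qvalue << 4

# Example: 200 -> 12 -> 192; the lost low-order bits are the quantization error.
assert dequantize_4bit(quantize_4bit(200)) == 192
```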
- Runlength Coding of Quantized Keyframes. In keyframes, the “error image” contains the actual preprocessed video data, since the reference frame used in differencing is set to all zeros. In typical video, differences between adjacent pixels are expected to be small, suggesting that an efficient way to encode a keyframe may be to use spatial differencing.
- Encoder 140 preferably scans the image in row-major order from the top left corner to the bottom right corner, computing at each position the difference between the current pixel value and the previous pixel value. (For the first pixel in the image, the “previous” value is assumed to be 0.) As long as the difference between adjacent pixels is 0, the encoder will continue to traverse across rows, keeping a runlength count of the number of zero differences. When a non-zero difference is encountered, the runlength count is recorded along with the non-zero difference value, and then the count is reset to 0.
- The effect of this coding method is that the pixel values are represented as sets of runlength-difference pairs (r, d): a run of r identically valued pixels is followed by a pixel with a new value that differs from the previous value by d. Long runs of identical pixels are efficiently encoded using runlengths, and at runlength boundaries the values of d are expected to be close to zero, allowing for efficient Huffman encoding. The preferred decoding method for the keyframe data follows from the encoding method.
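- A simplified sketch of this spatial-differencing runlength coder, assuming the quantized keyframe arrives as a flat, row-major list of 4-bit values, is given below; the trailing flush pair is an implementation detail added here for completeness, and decoding simply replays the (r, d) pairs.
```python
def encode_keyframe(pixels):
    """Encode 4-bit keyframe pixels as (runlength, difference) pairs.

    Differences are taken modulo 16 so that every d fits in 4 bits,
    matching the fixed 16-symbol Huffman table used downstream.
    """
    pairs, prev, run = [], 0, 0    # the "previous" value before the first pixel is 0
    for p in pixels:
        d = (p - prev) % 16
        if d == 0:
            run += 1               # extend the run of unchanged pixels
        else:
            pairs.append((run, d))
            run = 0
        prev = p
    pairs.append((run, 0))         # flush any trailing run of identical pixels
    return pairs

def decode_keyframe(pairs, n_pixels):
    """Invert encode_keyframe."""
    out, prev = [], 0
    for run, d in pairs:
        out.extend([prev] * run)
        if len(out) < n_pixels:    # the trailing flush pair carries no new pixel
            prev = (prev + d) % 16
            out.append(prev)
    return out[:n_pixels]

pixels = [3, 3, 3, 5, 5, 2]
assert decode_keyframe(encode_keyframe(pixels), len(pixels)) == pixels
```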
- Non-zero Coefficient Coding of Quantized Non-Keyframes. In non-keyframes, the error image is preferably encoded using a temporal differencing approach. The error image represents the differences between corresponding pixels in the current and previous frames. In typical video, the change in most pixel positions over a single frame period is very small, so the error image is expected to be sparse—that is, mostly zeros.
- The Encoder 140 preferably scans the error image in row-major order from the top left corner to the bottom right corner. As long as the current error value is 0, the encoder will continue to traverse across rows, keeping a runlength count of the number of zeros. When a nonzero error value is encountered, the runlength count is recorded along with the nonzero value, and then the count is reset to 0.
- The effect of this coding method is that the error values are represented as sets of runlength-value pairs (r, v): a run of r zeros is followed by a pixel with the error value v. The long runs of zeros are efficiently encoded using runlengths, and the non-zero values are still expected to be close to zero, allowing for efficient Huffman encoding. Note that, as with the spatial differencing used for keyframes, the temporal differences can be expressed in 4-bit values by treating them modulo 16.
- The decoding method for the non-keyframe data follows from the encoding method. The decoder iterates through the (ri, vi) pairs, decoding ri zeros followed by a single value vi for each pair.
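- A corresponding sketch for the non-keyframe case, again assuming a flat row-major list of 4-bit error values:
```python
def encode_error_image(errors):
    """Encode a sparse 4-bit error image as (runlength, value) pairs.

    Long runs of zero error are collapsed into a single count; each run
    is terminated by the next non-zero error value (taken modulo 16).
    """
    pairs, run = [], 0
    for e in errors:
        v = e % 16
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    if run:
        pairs.append((run, 0))     # trailing zeros with no terminating value
    return pairs

def decode_error_image(pairs, n_values):
    """Invert encode_error_image: emit r zeros, then the value v, per pair."""
    out = []
    for run, v in pairs:
        out.extend([0] * run)
        if len(out) < n_values:
            out.append(v)
    return out[:n_values]

errors = [0, 0, 0, 7, 0, 15, 0, 0]
assert decode_error_image(encode_error_image(errors), len(errors)) == errors
```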
- The Huffman coder used in a preferred embodiment uses a fixed table containing 16 symbols. The use of a fixed table saves the statistical computations that are required by adaptive Huffman schemes, and the 16-symbol limitation keeps the table at a manageable size. Both the keyframe data and the non-keyframe data are Huffman encoded prior to transmission to Decompressor 150 using the same fixed table.
- The spatial and temporal differencing described above will result in difference values that are close to 0. Therefore, the fixed Huffman table is built to favor small values by assigning the shortest Huffman symbols to the smallest values. The difference values will always fall in the range [1,15] due to the modulo 16 treatment, but the runlength values may be larger than 15. Values larger than 15 are handled within the 16-symbol Huffman table by recursively dividing by sixteen until a value less than 16 is obtained. The range [1,15] uses fifteen of the sixteen Huffman symbols, and the remaining symbol is used as a flag to indicate the encoding of a large value.
- The fixed Huffman table decoding is accomplished efficiently by decoding several symbols at a time.
- Huffman-encoded data is always read in 8-bit segments to avoid expensive bitwise operations, and precalculated tables stored in program memory are used to decode the symbols. This is a standard Huffman decoding method.
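- The fixed Huffman table itself is not disclosed in detail, so the sketch below shows only one plausible reading of the escape mechanism for runlength values larger than 15, assuming symbol 0 is the reserved large-value flag; each emitted symbol would then be coded with the fixed table.
```python
FLAG = 0  # reserved symbol signalling that a large value follows (an assumption)

def symbols_for_value(n):
    """Map a positive value onto symbols of the fixed 16-symbol Huffman table.

    Values 1-15 use a single symbol. A larger value is split by repeated
    division by 16: one FLAG symbol is emitted per division, followed by
    the collected base-16 digits, most significant first. This layout is
    only an illustration; the patent does not fix the exact format.
    """
    assert n >= 1
    digits, flags = [], 0
    while n >= 16:
        digits.append(n % 16)
        n //= 16
        flags += 1
    digits.append(n)
    return [FLAG] * flags + list(reversed(digits))

# 300 = 1*256 + 2*16 + 12  ->  [FLAG, FLAG, 1, 2, 12]
print(symbols_for_value(300))
```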
- A video decompressor of a preferred embodiment comprises two major parts: a Predictor 160 and a Decoder 170. The Predictor 160 comprises a Reference Frame Handler 164 that maintains reference image information to be combined with the received and decoded error image data to create video frames. The Decoder 170 comprises an Error Image Decoder 172 that interprets the error data and applies the decompression methods required to decode the video, and a Colorspace Converter 174 that performs upsampling, de-interlacing (if necessary), and intelligent 12-to-8-bit color conversion.
- Reference Frame Handler 164. The Predictor 160 maintains two reference frames based on previously decoded video, stored by Reference Frame Handler 164. For interlaced data, the most recent even-row or odd-row frame, as appropriate, is used as the reference frame; in the interlaced case, the reference frame will be delayed by two frame periods, since even- and odd-row frames alternate. For non-interlaced data, the immediately preceding frame is used.
- Error Image Decoder 172. Here the compressed error image data is received and decoded. If a frame copy flag is received, then the previous frame is redisplayed and the error image decoder waits for the next set of frame data. For low or high activity frames, the Huffman, runlength, and non-zero coefficient coding are all reversed to recover the original error values. The decoded coefficient errors are then preferably applied directly to the reference image, thus saving the computation and memory resources that would be required to store, retrieve, and apply the error data as a separate step. Note that unlike in the compressor 100, the error and reference images are not dequantized at this point.
- The (still-quantized) error image is added to the reference frame stored by Reference Frame Handler 164 to create a video frame. One copy of that frame is sent to Reference Frame Handler 164, to be stored as the next reference frame. The other copy is then dequantized (using one or more look-up tables) and sent to Colorspace Converter 174.
- Huffman table decoding is accomplished efficiently by decoding several symbols at a time.
- Huffman-encoded data is preferably read in 8-bit segments to avoid expensive bitwise operations, and precalculated tables stored in program memory are used to decode the symbols.
- This is a standard Huffman decoding method. See, e.g., Choueka, Y., S. T. Klein, and Y. Perl, Efficient Variants of Huffman Codes in High Level Languages, Proceedings of the 8th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, June 1985, pp. 122-130.
- Colorspace Converter 174. The colorspace converter 174 receives the quantized, subsampled YST representation of the output video. The converter 174 then performs upsampling, de-interlacing (if necessary), and intelligent 12-to-8-bit color conversion, all in a single pass through the image. The upsampling and the color conversion through the table look-up implement the dithering process. The combination of all these tasks into a single pass makes this process very efficient. The intelligent color conversion performs a checkerboard mixing of colors to simulate color shades that are not realizable by an 8-bit color display. This is described in more detail below.
- Video data is processed by the system shown in FIG. 3 and described herein as follows. Data flow steps within Video Preprocessor 120 are depicted in FIG. 7. Color Space Converter 122 receives 24-bit RGB video data and converts the data to a YST colorspace format. The converted data is passed to Frame Activity Detector 126, which determines the level of frame-to-frame change in the data and stores and updates reference frames. Video data and frame activity information are passed to Subsampler 124, where each frame is subsampled, as described above. The preprocessed video frame data is then transmitted to Predictor 130.
- At this point the Y channel is preprocessed, and in a preferred embodiment has 16 levels (4 bits) but is also dequantized, and the U&V (color) channels are “stretched” 8-bit values. The data could be considered “processed 24-bit YST.”
- Data flow steps within Predictor 130 and Encoder 140 are illustrated in FIG. 8. Pre-processed video frame data is received by Predictor 130. The Reference Frame Handler 134 receives the data and stores one (or two) reference frames, as described above, and sends a reference frame to Frame Differencer 132. The Activity Detector 126 sends information as to whether interlaced or non-interlaced mode is to be used; when the Encoder 140 receives this information, it also encodes it and sends it to the decoder's Reference Frame Handler 164.
- Frame Differencer 132 receives video frame data from Video Preprocessor 120, receives reference frame data from Reference Frame Handler 134, and calculates an error image, as described above. Frame Differencer 132 sends the error image to the Error Image Encoder 142.
- Error Image Encoder 142 quantizes the error image, as described above, dequantizes one copy, and at step 830 sends the dequantized copy to Image Adder 144. Error Image Encoder 142 encodes a second copy of the quantized error image, as described above, and sends the encoded image to Decompressor 150.
- Image Adder 144 receives a reference frame from Reference Frame Handler 134. Image Adder 144 adds the dequantized error image received from Error Image Encoder 142 to the reference frame received from Reference Frame Handler 134 to create an updated reference image, and sends the updated reference image to Reference Frame Handler 134. Reference Frame Handler 134 sends a reference image to Frame Differencer 132, and step 820 is repeated.
- Data flow steps within Decompressor 150 are depicted in FIG. 9. Reference Frame Handler 164 receives control information from Reference Frame Handler 134. Error Image Decoder 172 receives encoded error image data from Error Image Encoder 142. Error Image Decoder 172 receives a reference image from Reference Frame Handler 164, decodes the received error image data, and combines that data with the local reference image to create a new frame. One copy of that frame is sent at step 940 to Reference Frame Handler 164, and another copy is dequantized and sent at step 950 to Colorspace Converter 174. Colorspace Converter 174 converts the received video data from YST data to 8-bit RGB video data, while performing the tasks described above, and sends the 8-bit data to the display device.
- The preferred video codec uses table lookups to efficiently implement color conversion and dithering in a single step. The original video stream is subsampled, quantized, and color-converted to 12-bit YST color prior to transmission. The 12-bit YST is then converted to 8-bit RGB for display on the mobile handset. During this conversion, each 12-bit YST color is matched to four 8-bit RGB pixels arranged in a 2×2 grid. The four RGB pixel values are chosen to give the best visual approximation to the original RGB color. The RGB approximations for all 4096 YST colors are stored in lookup tables so that no conversion computation needs to be done at the decoder 150—the correct RGB pixels are simply read from the table and written into the output image.
- The color conversion tables of a preferred embodiment require 16 KB of storage space, which is a sensible tradeoff to save computational complexity in most mobile environments. The dithering effect achieved by choosing four 8-bit RGB colors to correspond to each 12-bit YST color provides good color quality at low computational cost. The standard prior art methods for converting from 12-bit color to 8-bit color are (1) straight quantization, which is fast but gives poor results, and (2) dithering, which gives much better results than straight quantization but at increased computational cost. The table lookup method of a preferred embodiment of the present invention provides the color quality of dithering with the computational efficiency of straight quantization.
- The preferred video codec takes 24-bit RGB video as input and produces 8-bit (3:3:2) RGB video as output. The lookup tables could be rewritten to accommodate any 8-bit color scheme with no increase in size or complexity. The lookup table approach could also be used for conversion to output color schemes with more than 8 bits, with only a moderate increase in the size of the lookup tables. For instance, if the output format required 12-bit color, the table size would only need to be increased by 50%, to accommodate a 50% increase in output pixel size.
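- As a quick check of these storage figures, assuming one byte per 8-bit output pixel and 1.5 bytes per 12-bit output pixel, each of the 4096 possible 12-bit YST colors maps to a 2×2 block of output pixels:
```python
entries = 2 ** 12                  # 4096 possible 12-bit YST colors
pixels_per_entry = 4               # a 2x2 block of output pixels per color

table_8bit = entries * pixels_per_entry * 1     # 1 byte per 8-bit RGB pixel
table_12bit = entries * pixels_per_entry * 1.5  # 1.5 bytes per 12-bit pixel

print(table_8bit // 1024, "KB")    # 16 KB, matching the figure above
print(table_12bit / table_8bit)    # 1.5, i.e. a 50% larger table
```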
- The preferred embodiment is particularly well suited to providing high-quality video on low-quality color displays such as those found on inexpensive and moderately priced mobile devices.
- Additional speed at the decoder 150 preferably is achieved in the video codec by combining upsampling and de-interlacing with the color conversion and dithering process. This combination allows all of these functions to be completed in a single pass through the image, saving both computational time and data memory (since no intermediate buffers are needed).
- The YST video frames are subsampled compared to the output video size: the Y component is subsampled by a factor of 4 (2 in each dimension) and the S and T components are subsampled by a factor of 16 (4 in each dimension). This means that each S and T value corresponds to four Y values, and each of these Y values corresponds to 4 RGB values.
- FIG. 12 shows the subsampling relationships between the YST component blocks and the output video frame.
- The S and T values s11 and t11 correspond to the four Y values y11, y12, y21, and y22. The S and T values are used four times to create four YST colors: (y11, s11, t11), (y12, s11, t11), (y21, s11, t11), and (y22, s11, t11). Each of these colors has an entry in the lookup table. A lookup on the color (y11, s11, t11) provides the RGB values r11, r12, r21, and r22; a lookup on the color (y12, s11, t11) provides the RGB values r13, r14, r23, and r24; etc.
- De-interlacing preferably is combined into the same process by dividing the color lookup table into two tables—one for even rows and one for odd rows.
- The lookup alternates between the two tables, reading two 8-bit RGB pixels for each YST color rather than four. In the example of FIG. 12, the RGB pixels r11, r12, r13, r14, r31, r32, r33, and r34 will be retrieved from the even lookup table and written to output, and the pixels r21, r22, r23, r24, r41, r42, r43, and r44 will be retrieved from the odd lookup table and written. Both tables are used so that all four RGB values for each YST color are retrieved.
- The process preferably is made more efficient through the disclosed organization of the color tables and through the use of bit shifting and data types to reduce the number of pointer references and read/write operations. Since there are four Y values for each S and T value due to the preferred subsampling method, we have organized the table so that S and T only need to be considered one quarter as often as Y. “Bit shifting and data types” refers to the way multiple pixels are treated simultaneously: each output pixel value is an 8-bit value, but when pixel values are read from the table they are read in pairs, treating each pair as a single 16-bit value. This cuts the number of read operations in half. Similarly, four pixels at a time are written by treating them as 32-bit values, cutting the number of write operations by a factor of four.
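- A condensed sketch of the one-pass conversion is shown below; it ignores the de-interlacing split and the 16/32-bit packing tricks, and assumes a hypothetical lut dictionary that maps each quantized (y, s, t) triple to its four precomputed 8-bit RGB pixels.
```python
def yst_to_rgb_frame(y, s, t, lut):
    """Upsample, color-convert, and dither in one pass over the YST data.

    y is an (H/2 x W/2) nested list of quantized Y values; s and t are
    (H/4 x W/4) nested lists. lut[(yv, sv, tv)] returns four 8-bit RGB
    pixels (a 2x2 block) that already encode the dithering pattern.
    The result is an (H x W) output frame.
    """
    rows, cols = len(y), len(y[0])
    out = [[0] * (cols * 2) for _ in range(rows * 2)]
    for i in range(rows):
        for j in range(cols):
            sv, tv = s[i // 2][j // 2], t[i // 2][j // 2]  # each S/T covers 4 Y values
            p00, p01, p10, p11 = lut[(y[i][j], sv, tv)]    # one table read, four pixels
            out[2 * i][2 * j], out[2 * i][2 * j + 1] = p00, p01
            out[2 * i + 1][2 * j], out[2 * i + 1][2 * j + 1] = p10, p11
    return out
```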
- Update information is immediately applied directly into the reference image buffer 162 as the encoded error stream is being decoded.
- This one-pass execution makes the decoder 150 efficient in both memory usage and processing power, since (a) single-pass execution reduces programming overhead associated with multiple passes through the data; (b) no intermediate buffer is needed to hold error information; and (c) only coefficients that change need to be updated—no computations are spent copying unchanged coefficients.
- YST is a preferred colorspace designed to produce improved color quality on mobile and wireless devices with limited display and processing capabilities.
- The 12-bit YST color quantization is chosen to provide finer quantization in the color ranges that are most important in video quality perception. See below for a detailed description of the YST colorspace.
- The video codec of a preferred embodiment accepts 24-bit color source video, but the displays on most mobile and wireless devices are not capable of displaying 24-bit color. Quantizing the video color down to the display color space (8-bit RGB, for example) is efficient from a compression standpoint, but it does not allow for fast dithering and results in poor color representation on the mobile device. However, sending full 24-bit color is inefficient in bandwidth, since a lot of information is transmitted and then ignored. As a compromise, the preferred video codec quantizes the color to 12 bits at the encoder 100 and then further quantizes from 12 bits down to 8 bits at the decoder 150.
- Transmitting 12-bit color allows the codec to use methods such as efficient dithering to provide good color representation on low-quality displays without requiring excessive use of transmission bandwidth.
- The use of 12-bit color and color dithering also allows video frames to be subsampled, since dithering can mask degradation in frame quality due to subsampling.
- Activity detection and interlacing. In video sequences with a small amount of change between frames, interlacing can be used to improve the perceived quality of the video. The activity detection and interlacing process is described in detail in the “Video Preprocessor 120” section above. This interlacing method helps mask the quality degradation caused by subsampling, allowing the preferred codec to produce higher perceived quality while reaping the compression benefits of subsampling.
- The “superthin-superfast” design of the preferred codec provides a significant competitive advantage.
- Providers of prior art codecs have begun with the assumption that certain standard methods such as transform-based compression and motion compensation must be included in order to fit within the bandwidth constraints of the wireless environment.
- The present invention takes a different approach, beginning with only the barest necessities for encoding and decoding video. The present invention comprises a video codec that is computationally very simple but still provides enough compression to meet the bandwidth constraints of the wireless environment. Simplicity is a primary strength of the video codec, since low computational complexity allows the codec to run on a wide range of mobile devices, many of which lack the processing power to support prior art products.
- YST is a novel colorspace designed to produce improved color quality on mobile and wireless devices with limited display and processing capabilities.
- The 12-bit YST color quantization is chosen to provide finer quantization in the color ranges that are most important in video quality perception.
- The color hexagon shown in FIG. 13 represents all colors that are displayable on an electronic display. All of these colors can be described in terms of three-element vectors. Examples of common descriptions are the RGB and HSV triples, which describe the amount of each one of these primary colors present in a particular display color. The hexagon chart shown in FIG. 13 is based on the HSV triple.
- The H-component stands for “hue,” which indicates the color frequency (or wavelength). The hue determines the angular position of a particular color in the color hexagon, so a radial line drawn from the center to the edge of the hexagon shows a set of colors with constant hue.
- The S-component, for “saturation,” indicates the purity of the color. Colors with low saturation appear “grayer” than colors with high saturation. The saturation determines the distance a particular color lies from the center of the hexagon, so concentric hexagons show sets of colors with approximately the same saturation. The center of the hexagon is true gray, where saturation is 0. The colors on the outside edge of the hexagon have full saturation.
- The V-component of the HSV triple stands for “value.” This term indicates the intensity or brightness of a particular color. Color intensity is not shown on the color hexagon, since the addition of a third component would require a three-dimensional representation. Instead, the color hexagon is a two-dimensional slice of the colorspace at a particular intensity. To visualize the three-dimensional colorspace, recall that the center of the color hexagon is true gray. The third dimension in the HSV colorspace runs along that gray axis, where the lowest intensity gray is true black, and the highest intensity gray is true white.
- Colorspaces such as YIQ, YUV, and the novel YST colorspace used herein can also be represented on a hexagon chart. In these colorspaces, the Y-component represents the intensity, corresponding to the V-component from the HSV colorspace, and the other two components represent a coordinate mapping of the colors shown in the hexagon. The H- and S-components in HSV are radial coordinates in the hexagon, while the I-Q coordinate pair and the U-V coordinate pair are rectangular coordinates in the hexagon, linearly transformed to meet the desired characteristics of the colorspace.
- The YST colorspace is designed somewhat similarly, with quantization points chosen to produce good quality color on low-quality displays with small computational cost. The quantization pattern for the YST colorspace is chosen based on histogram characteristics of typical video clips and the color sensitivity of the human eye. The color chart in FIG. 14 shows an example of a YST quantization pattern.
- Range of Sensitivity. While the eye is more sensitive to changes in green-magenta shades than to changes in blue-red shades, the range of this sensitivity is more limited for green-magenta shades. For instance, the human eye perceives pure green at full saturation and at half saturation to be very nearly the same color. However, pure red at half saturation still appears noticeably “grayer” than full-saturation red. For this reason, the quantization points for the green-magenta colors are closer together and span a smaller range than the quantization points for the blue-red colors.
- Where colors vary gradually, coarse quantization produces visible banding, and the eye is very sensitive to these kinds of quantization artifacts. Two common situations in which these artifacts arise are video sequences containing grass and trees, where texture appears as variations in natural greens, and video sequences containing human faces, where skin tones vary gradually depending on lighting. For this reason, the YST colorspace is shifted slightly toward green and red tones so that finer quantization is available for natural greens and skin tones.
- To convert from RGB to YST, RGB values are first rescaled so that they take values in the range [0,1]. The (Y,S,T) values are then given by a fixed linear (matrix) transformation of the rescaled (R,G,B) values, and RGB can be recovered by applying the inverse of this matrix. At the decompressor, a combined inverse and dither is used to create a greater number of perceived colors than is actually supported by the bit depth of the display; the inverse is used in the generation of the color/upsample/dequantization look-up tables.
- After the transformation, Y takes values in [0,60]; S takes values in [−18,18]; and T takes values in [−36,36]. For 12-bit quantization, Y is rounded off to the nearest of the 16 numbers: 0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, and 60. S is rounded off to the nearest of the 16 numbers: −13, −9, −6, −4, −2, −1, 0, 1, 2, 3, 4, 5, 7, 9, 11, and 14. T is rounded off to the nearest of the 16 numbers: −14, −10, −7, −5, −3, −2, −1, 0, 1, 2, 3, 5, 7, 10, 14, and 18.
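- A small sketch of this nearest-level quantization follows; the levels are copied from the lists above, and the −6 entry for S is an assumption where the published list appears garbled.
```python
Y_LEVELS = [0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60]
S_LEVELS = [-13, -9, -6, -4, -2, -1, 0, 1, 2, 3, 4, 5, 7, 9, 11, 14]
T_LEVELS = [-14, -10, -7, -5, -3, -2, -1, 0, 1, 2, 3, 5, 7, 10, 14, 18]

def quantize_component(value, levels):
    """Return the index of the quantization level nearest to value.

    The 4-bit index is what gets encoded; the decompressor's look-up
    tables map indices straight back to display colors.
    """
    return min(range(len(levels)), key=lambda i: abs(levels[i] - value))

def quantize_yst(y, s, t):
    """Quantize one YST pixel to three 4-bit indices (12 bits total)."""
    return (quantize_component(y, Y_LEVELS),
            quantize_component(s, S_LEVELS),
            quantize_component(t, T_LEVELS))

print(quantize_yst(33.0, -5.5, 16.0))   # (8, 2, 14) -> levels 32, -6, 14
```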
- The quantization is preferably determined by trial and error to produce the best visual quality for the characteristics of the specific display to be used. These characteristics include, among other possible factors, bit depth, resolution, and intensity ratio (similar to gamma).
- A preferred embodiment comprises quantizing the colorspace in different regions and in different directions, in a manner matched to the information content and the human visual system; S and T establish the different directions. The S component corresponds to the direction of largest amplitude of typical image and video data, and the T component corresponds to the direction of smallest amplitude (see FIG. 15). Thus in some cases S may carry more information than T.
- S also corresponds to the direction of least sensitivity of the human visual system and T to the direction of highest sensitivity.
- As a result, both components typically carry roughly the same amount of information, as perceived by the human visual system (HVS). The HVS is more sensitive to changes in T than in S for small values of S and T. For large values of S and T the sensitivity drops, and the region of high sensitivity is smaller for T than for S. Thus the quantization levels for both S and T should be denser near 0 and less dense away from 0. However, the levels for T should be significantly more clustered towards 0 than those for S. This can be seen in FIG. 14.
Abstract
In one aspect of a preferred embodiment, the present invention comprises a system for video compression comprising a video preprocessor; a predictor configured to receive video data from the preprocessor; and an encoder configured to communicate with the predictor. Preferably, the preprocessor comprises a colorspace converter, a frame activity detector, and a subsampler, the predictor comprises a frame differencer and a reference frame handler, and the encoder comprises an error image encoder and an image adder. In another aspect, the invention comprises a system for video decompression comprising a predictor and a decoder configured to communicate with the predictor. Preferably, the predictor comprises a reference frame handler and wherein said decoder comprises an error image decoder and a colorspace converter.
Description
- This application claims priority to the following U.S. provisional patent applications: 60/289,340; 60/289,342; 60/289,086; 60/289,085; 60/289,189; and 60/289,190, all filed May 7, 2001, and all entitled “Method and System for Data Compression/Decompression.” The contents of each provisional application are incorporated herein in their entirety by reference.
- Further information on typical video compression systems can be found in ITU-T Recommendation H.263 (approved February 1998); The Data Compression Book, 2nd Edition, by Mark Nelson and Jean-loup Gailly (1995); and Video Demystified, 3rd Edition, by Keith Jack (2001) (see especially
chapter 3, on color spaces, the contents of which are incorporated herein by reference for all purposes). -
FIG. 1 is a diagram of a typical asymmetric video compression system. Many existing video compression systems fit within this basic framework. The system consists of five main blocks—preprocessing, motion estimation, transform, quantization, and encoding—along with a feedback loop used to create decompressor reference data. - The purpose of the preprocessing block is to prepare the video data for compression. Preprocessing functions typically convert the input video data into a format that allows for easier or more aggressive compression.
- One commonly used step of video preprocessing is subsampling. When video is subsampled, the size of the video frames (the number of pixels) is reduced. Subsampling is a simple way to create gains in video compression efficiency—by reducing the video frame size by half in each dimension, a 4:1 compression ratio has already been achieved. However, subsampling can result in distracting artifacts when the video is restored to full resolution after decompression.
- Another commonly used step of video preprocessing is colorspace conversion. Existing raw video data is usually stored in an RGB color format (discussed below in more detail), since RGB is a convenient format for many existing displays. However, the RGB color format is not well suited to efficient compression, since the visually important video information is evenly distributed over the red, green, and blue color channels. For this reason, many video compression schemes include conversion to a different colorspace such as YUV (also discussed below). The YUV color format also contains three channels, but most of the visually important information is found in the Y channel, which contains pixel intensity information. The U and V channels contain all of the color information for the video data. Since the human eye is less sensitive to color errors than to intensity errors in typical video, the U and V channels can be compressed much more aggressively than the Y channel, with little degradation in decompressed video quality. For instance, the Y channel can be kept at full resolution while the U and V channels are subsampled by a factor of 16. This results in a similar compression ratio to the RGB subsampling by 4 (3.75:1 versus 4:1) but the quality of the resulting video is much higher because the most visually important information has been preserved.
- The preprocessing block may also include other miscellaneous functions that depend on the specific design of the video compressor, such as object identification and denoising.
- Prediction is used to exploit the redundancy between adjacent frames in typical video signals. Most asymmetric video compression systems contain a feedback loop including a “dummy” decompressor that mimics the state of the actual decompressor. The feedback loop provides the prediction block with copies of the previous video frame(s), and the prediction block then uses motion estimation to make a guess at what the next frame will look like. Then, rather than working with actual pixel values, the compressor will perform the remaining computations on the error between the actual frame and the predicted frame. Error values are generally smaller and sparser than pixel values, so the use of prediction reduces the amount of information that must be transmitted to the decompressor.
- In addition to providing error data for further compression, the prediction block will also provide a parametric description of the estimated motion, which will be used at the decompressor to create the correct predicted frame.
- Most video compression schemes include a mathematical transformation of the video data. Like the colorspace transform described above, the purpose of the mathematical transform is to organize the video data into a form more suitable for effective compression.
- Two common transforms in video compression are the discrete cosine transform (DCT) and the wavelet transform. Each of these transforms organizes the video data into an “average” component and a “detail” component. The average component contains basic shape information for video frames. The detail component contains edge information, which sharpens and clarifies the video frames.
- Organizing the video data into average and detail components is beneficial for compression because this organization isolates most of the energy in the video frame into a few values. For natural video, the average component tends to contain only a few values that are very important to the accurate reconstruction of the video at the output. In contrast, the detail component will contain many values that have much less impact on the video quality. The few values in the average component can be transmitted with high accuracy, while the many values in the detail component can be compressed much more aggressively.
- While most transform techniques are applied to the error data as shown in
FIG. 1 , some systems apply the transform to incoming data and then perform motion estimation and all subsequent operations in the transform domain. - In most video compression schemes, quantization is used to increase data compression. In the quantization block, the accuracy of the video data is decreased by reducing the number of bits used to store the values. Effective use of data quantization is enhanced by the reorganization of the video data that was accomplished in the preprocessing and transform blocks; the data that is less visually important can be quantized more aggressively. Data quantization is the source of most of the information loss in a typical lossy video compression system.
- The entropy encoding block in a video compressor further compresses the video data using lossless compression schemes. Common lossless compression methods for video applications are run-length encoding, Huffman encoding, arithmetic coding, or a combination of these.
-
FIG. 2 shows a typical decompressor corresponding to the compressor inFIG. 1 . The decompressor simply reverses the operations of the compressor. First, the entropy coding, quantization, and transform are all reversed to recover the motion and error data. The motion data is applied to the previous frame, producing a prediction of the upcoming frame. Then, the error data is applied to the predicted frame to produce the output video frame. Finally, any post-processing tasks such as colorspace conversion and upsampling are completed to convert the video into the proper format for output or display. - The primary disadvantage of the prior art approach for wireless applications is its computational complexity. Even when an asymmetrical design is used, the decompressor is typically too heavy to produce acceptable video quality in real time on wireless devices that are heavily constrained in processing power and battery life.
- There is thus a need for a compression/decompression method that is computationally light enough to run even on low-performance mobile devices. Prior art video compression designs are based on the assumption that the compression gain and bandwidth savings obtained from complex computations such as mathematical transform and motion estimation are worth the computational cost. However, in many wireless environments this assumption does not hold true, since the cost of reversing the transform and applying the motion data, even in an asymmetric system, makes the decompressor too heavy.
- Prior art systems often attempt to produce decompressed video that is as close as possible to the original source video. However, showing well-reconstructed video on a limited display means that much of the data that is retained is not visually useful, since limitations of the display create more visual information loss than does the compression/decompression.
- A preferred embodiment of the present invention eliminates computationally expensive operations to create a decompressor that is extremely light. The invention makes up for removal of transform and motion estimation by exploiting the limited display capabilities of many wireless devices. In addition to an asymmetric computational approach, the invention also takes into account asymmetric display capabilities, allowing compression to be gained through aggressive quantization and subsampling. This approach results in a decompressor that is both much simpler and more effective than those in the prior art, allowing efficient computational optimizations that make the decompressor light enough to run on a low-performance wireless device.
- In one aspect of a preferred embodiment, the present invention comprises a system for video compression comprising a video preprocessor; a predictor configured to receive video data from the preprocessor; and an encoder configured to communicate with the predictor. Preferably, the preprocessor comprises a colorspace converter, a frame activity detector, and a subsampler, the predictor comprises a frame differencer and a reference frame handler, and the encoder comprises an error image encoder and an image adder. In another aspect, the invention comprises a system for video decompression comprising a predictor and a decoder configured to communicate with the predictor. Preferably, the predictor comprises a reference frame handler, and the decoder comprises an error image decoder and a colorspace converter.
- In another aspect, the invention comprises a method for video compression, comprising receiving color video data represented in a first colorspace representation, converting the received color video data to a second colorspace representation, identifying activity between consecutive frames of the converted color video data, subsampling the converted color video data, calculating error image data based on the subsampled and converted color video data and on the identified frame activity, encoding the error image data, and transmitting the encoded error image data to a device capable of displaying color video data, wherein the step of identifying activity is preferably performed before the step of subsampling.
- In another aspect, the invention comprises a method for video decompression comprising receiving encoded color video error image data, decoding the data, combining the decoded data with previously received data to construct video frame data in a first colorspace representation, converting the color video frame data to a second colorspace representation with one pass through the data, and displaying the color video frame data, wherein the step of converting comprises upsampling and dithering. Preferably, the step of converting is performed using look-up tables.
- In a further aspect, the invention comprises a method for compressing and decompressing color video data, comprising receiving color video data represented in a first colorspace representation and with a first pixel depth; converting the color video data to a second colorspace representation with a second pixel depth; compressing the converted data; and decompressing the compressed converted data, wherein the step of decompressing comprises converting the data to a third colorspace representation with a third pixel depth.
- The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the U.S. Patent and Trademark Office upon request and payment of the necessary fee.
-
FIGS. 1 and 2 are block diagrams of a typical prior art asymmetric video compression system. -
FIG. 3 is a block diagram of components of a preferred embodiment of the invention. -
FIG. 4 depicts activity thresholds used in a preferred embodiment. -
FIG. 5 depicts a preferred flow of operations in an Error Image Encoder. -
FIG. 6 depicts a preferred flow of operations in an Error Image Decoder. -
FIG. 7 depicts preferred data flow steps within a Video Preprocessor. -
FIG. 8 depicts preferred data flow steps within a Predictor and Encoder. -
FIG. 9 depicts preferred data flow steps within a Decompressor. -
FIG. 10 illustrates preferred Y component subsampling. -
FIG. 11 illustrates preferred S component subsampling. -
FIG. 12 illustrates one-pass color conversion, de-interlacing, and up-sampling. -
FIG. 13 depicts a color hexagon based on the HSV colorspace representation. -
FIG. 14 depicts an exemplary YST quantization pattern for 12-bit color. -
FIG. 15 depicts color histograms for video clips (a) susie.avi, (b) mummy.avi, and (c) elmo.avi. - A preferred embodiment of the present invention comprises a color video codec. The efficient methods of a preferred embodiment of the codec allow color video at 128×117 pixel size and 10 fps to be decoded using less than 125 kB of combined program and data memory and 0.8-2.4 MIPS (depending on the video sequence) on an 8-bit color display. The codec generally achieves 30-50 times compression, but for some simple sequences may achieve 100 times compression or greater. The peak signal-to-noise ratio (PSNR), comparing the 24-bit input video to the 8-bit output video, is about 20 dB on the Y-channel. Most of this loss in quality comes from the color quantization; a color quantization alone with no other compression applied gives similar PSNR results.
-
Compressor 100 - In a preferred embodiment, a
video compressor 100 comprises three modules (seeFIG. 3 ). The first module, aVideo Preprocessor 120, prepares the video for compression by converting to a more compressible colorspace, detecting the amount of activity in the video, and subsampling. The second module, aPredictor 130, computes frame differencing and maintains prediction reference frames. The third module, anEncoder 140, encodes the error image to be transmitted in compressed form. The following sections detail the operation of the three compressor modules. The “D” blocks 125, 135, 145, and 155 indicate time delays. The delays may be one or two frame periods, depending on activity level, as discussed below. -
Video Preprocessor 120 -
Color Space Converter 122—The video compressor 100 preferably takes 24-bit RGB video input. Colorspace Converter 122 converts the input video from the input RGB colorspace to a YST colorspace. YST is a novel color format preferably used in the present invention. Conversion from RGB to YST concentrates the most important information in the video sequence into the Y component of the new colorspace. The S and T components contain color information that can be more aggressively subsampled to obtain better compression. The YST colorspace is described in greater detail below. -
Frame Activity Detector 126—A frame activity detector 126 identifies the amount of change between the current frame and the previous frame, so that a compression method can be chosen based on the amount of frame activity. The amount of frame-to-frame change is commonly known as the "error value," and it is computed by summing the pixel-by-pixel differences between the Y-components of the two frames, as shown in the formula below:
error value = Σi Σj | Ycurrent(i,j) − Yprevious(i,j) |
In the formula, Y(i,j) is the pixel in the j-th column of the i-th row of the matrix Y containing the Y-component data for a single video frame. - Each frame is classified on the scale depicted in
FIG. 4 according to its error value. An error value of 0 indicates that the current frame is identical to the previous frame, so that rather than compressing and sending the current frame, the decompressor is instructed to redisplay the previous frame. If the error value indicates that the change between the current and previous frames is so small as to be unnoticeable, then a frame copy is also triggered. In the case of a frame copy, none of the further computations described are executed—the compressor simply sends a flag to the decompressor, and no video data is transmitted. Preferably, the number of consecutive frame copies is limited to a maximum that is specified as a parameter. - For low activity frames, interlacing can be used to make the video resolution appear higher than it actually is. When the error value is small, but too large for a frame copy to be used, the current frame is identified as “low activity,” and interlacing is used to improve the perceived quality of the video. Interlacing is preferably only applied to the Y video channel; the S and T channels are preferably never interlaced. For frames with a lot of activity, interlacing produces distracting artifacts in video. Therefore, if the error value is large, the current frame is identified as high activity, and interlacing is not used.
- If the activity in a frame is very high, the frame is identified as a keyframe. Keyframes are typically triggered within the
Frame Activity Detector 126 at scene changes in the video sequence. Keyframes can preferably also be set from outside theFrame Activity Detector 126 by triggering at regular intervals or after a certain number of consecutive non-keyframes. In prediction and encoding, keyframes are preferably treated the same as other high-activity frames, except that the reference frame is ignored (i.e., the reference frame is set to all zeros). - The thresholds identifying the boundaries between copied frames, low activity frames, and high activity frames are parameters typically determined by trial-and-error, with values selected to give the best perceived results. In a preferred embodiment, a programmer makes the trial-and-error determination and the selected values are hard-coded into the compressor. In an alternate embodiment, the values are changed on-the-fly during video compression. The same is true of the maximum frame-copy value.
- A preferred feedback loop for maintaining reference frames in
Frame Activity Detector 126 uses dual buffers that allow a delay of either one or two frame periods. When interlacing is used, even-row frames are compared with the previous even-row frame, and odd-row frames are compared with the previous odd-row frame. Since even- and odd-row frames alternate, this means that two frames worth of reference data is maintained. For high activity frames that do not use interlacing, only the immediately previous frame is used. Since interlacing is only used on the Y channel, two previous frames of Y data are maintained, but only one previous frame of S and T data is maintained. -
Subsampler 124—To save processing time and transmission bandwidth, each frame is preferably subsampled bysubsampler 124 before being compressed. The subsampled frame is enlarged back to its original size during the color conversion/ dithering/upsampling process (during a table lookup) in thedecompressor 150. The Y component is preferably subsampled by a factor of 2 in each dimension, and the S and T components are preferably subsampled by a factor of 4 in each direction. - The Y component subsampling is preferably computed by applying a [½ ½] averaging filter across every other row of the Y component matrix. For high activity frames, the even rows are preferably used. For low activity frames, the rows used preferably alternate—if the even rows were used in the last frame, then the odd rows are chosen for this frame, and vice versa. Note that this subsampling is not a pixel-by-pixel two-dimensional computation; regardless of whether interlacing is used, half of the rows in the full-sized frame will be ignored. An example is shown in
FIG. 10 , where Y is the full-size 8×8 pixel Y component and Y′evens and Y′odds show the subsampled Y components for the even and odd row cases. - The S and T component subsampling is preferably computed by segmenting the S and T component matrices into 4×4 pixel blocks and averaging the 16 pixels in each block. The S and T components are subsampled in the same way regardless of whether the frame is interlaced or not. An example is shown in
FIG. 11 , where S is the full-size 8×8 pixel S component, and S′ is the subsampled 2×2 S component. The T component is treated identically. -
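A minimal sketch of the two subsampling steps, assuming the components are stored as plain lists of rows with even dimensions; this is illustrative only and uses integer averaging.

    def subsample_y(y, use_even_rows=True):
        # [1/2 1/2] averaging filter applied across every other row; the
        # remaining rows of the full-sized frame are ignored entirely.
        rows = range(0, len(y), 2) if use_even_rows else range(1, len(y), 2)
        return [[(y[i][j] + y[i][j + 1]) // 2 for j in range(0, len(y[i]), 2)]
                for i in rows]

    def subsample_chroma(c):
        # Average each 4x4 block of the S or T component matrix.
        out = []
        for i in range(0, len(c), 4):
            row = []
            for j in range(0, len(c[0]), 4):
                block = [c[i + di][j + dj] for di in range(4) for dj in range(4)]
                row.append(sum(block) // 16)
            out.append(row)
        return out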
Predictor 130 Since motion estimation and compensation are preferably not used, the predictor module operates in a straightforward manner. -
Reference Frame Handler 134 The two previous frames of Y channel data and one previous frame of S and T channel data are stored as reference frames for computing an error image. Two frames are needed for the Y channel because interlacing requires one frame for the even rows and one for the odd rows. When interlacing is not used, only the immediately preceding reference frame is needed. Note that these reference frames are preferably received from Image Adder 144, and have been quantized and dequantized (by Error Image Encoder 142, described below) to mimic, and preserve synchronization with, the state of the decompressor. They are not the same as the reference frames used in the Frame Activity Detector 126. The Reference Frame Handler 134 sends a copy of the reference frame to the Frame Differencer 132, for calculation of the error image, and sends a copy to Image Adder 144, to be added to the subsequently dequantized error image and returned as the next (i.e., updated) reference frame. Depending on whether interlace mode is used, the appropriate reference frame (the one containing even or odd rows) is sent to Frame Differencer 132. -
Frame Differencer 132 Prediction error (i.e., the error image) is preferably found by computing the difference between the current frame and the predicted frame, although other error image calculation methods could be used. The predicted frame is preferably a reference frame stored by Reference Frame Handler 134—typically, a quantized and dequantized version of the previous frame. -
Encoder 140 -
Error Image Encoder 142 Here the error image is compressed for transmission to the decompressor. The first step is to quantize the error image. A copy of the quantized error image is dequantized and sent toImage Adder 144, to be used in reconstructing the reference frame used byFrame Differencer 132. A second copy of the quantized error image is then compressed byError Image Encoder 142 using runlength coding or non-zero coefficient coding (depending on the keyframe flag). Finally, both the runlength-encoded data and the non-zero-coefficient encoded data are Huffman encoded, and the Huffman-encoded data are transmitted to theDecompressor 150. The input to theError Image Encoder 142 preferably equals the current input frame, if the keyframe flag is ON, and it equals the current input frame minus the reference image, if the keyframe flag is OFF. - The
Error Image Encoder 142 andError Image Decoder 172 preferably use variations on standard methods to losslessly compress and decompress the video error data for transmission.FIGS. 5 and 6 show preferred flow of data within theError Image Encoder 142 andDecoder 172. - Quantization: In a preferred embodiment, the error image values are quantized to 4-bit values by truncating away all but the four most significant bits. Both keyframes and non-keyframes are quantized in the same way, although the quantized results are then encoded differently.
- In
Compressor 100, the quantized error image is preferably dequantized by a left bitshift to replace the bits that were truncated away in quantization. This dequantized error image is then fed back withinCompressor 100 for use in reference frame maintenance. Dequantization is performed inDecompressor 150 using look-up tables. - In one embodiment, quantization is performed by 4-bit bitshifting. This provides a “uniform” quantization of the input image. However, non-uniform quantization is the preferred method for the color channels, with more narrow quantization bins around the center (value=128), and wider bins at the extreme values (0, and 255). Input images may be preprocessed by “stretching” or rescaling each pixel value according to the YST specifications. Preferably, only the 2 color channels are stretched.
- Thus, in one embodiment, the quantization step comprises one stretching/scaling step plus a uniform 4-bit bitshift operation, which in effect makes it a non-uniform quantization. All non-uniformly-quantized data on the compressor side (along with the non-quantized error images, and reference images) contains these scaled, or stretched images. The Y-channel is not non-uniformly quantized, and is therefore not stretched. Since the dequantization of the data on the decompressor side is preferably made through a table, the non-uniform quantization is easily compensated for, without any extra computational load.
- Runlength Coding of Quantized Keyframes: In keyframes, the “error image” contains the actual preprocessed video data, since the reference frame used in differencing is set to all zeros. In typical video, differences between adjacent pixels are expected to be small, suggesting that an efficient way to encode a keyframe may be to use spatial differencing.
-
Encoder 140 preferably scans the image in row-major order from the top left corner to the bottom right corner, computing at each position the difference between the current pixel value and the previous pixel value. (For the first pixel in the image, the “previous” value is assumed to be 0.) As long as the difference between adjacent pixels is 0, the encoder will continue to traverse across rows, keeping a runlength count of the number of zero differences. When a non-zero difference is encountered, the runlength count is recorded along with the non-zero difference value, and then the count is reset to 0. - The effect of this coding method is that the pixel values are represented as sets of runlength- difference pairs (r, d): a run of r identically valued pixels is followed by a pixel with a new value that differs from the previous value by d. Long runs of identical pixels are efficiently encoded using runlengths, and at runlength boundaries the values of d are expected to be close to zero, allowing for efficient Huffinan encoding. Since pixel values in the keyframe range from 0 to 15, the difference between two adjacent pixels can range from −15 to 15. However, the difference can still be expressed in a 4-bit value, since the color differences can be treated modulo 16: −1=+15, −2=+14, etc.
- The preferred decoding method for the keyframe data follows from the encoding method. The first pair (r0, d0) in the image will indicate the value of the first pixel. (Since the initial value of the “previous” pixel was assumed to be 0, a nonzero value of r0 will indicate that the first pixel value is 0, and a value r0=0 will indicate that the first pixel value is do.) From the first pixel, the decoder iterates through the (ri, di) pairs, repeating the previous value ri−1 times and then applying the difference di to find the next value.
- Non-zero Coefficient Coding of Quantized Non-Keyframes: In non-keyframes, the error image is preferably encoded using a temporal differencing approach. The error image represents the differences between corresponding pixels in the current and previous frames. In typical video, the change in most pixel positions over a single frame period is very small, so the error image is expected to be sparse—that is, mostly zeros.
- The
Encoder 140 preferably scans the error image in row-major order from the top left corner to the bottom right corner. As long as the current error value is 0, the encoder will continue to traverse across rows, keeping a runlength count of the number of zeros. When a nonzero error value is encountered, the runlength count is recorded along with the nonzero value, and then the count is reset to 0. - The effect of this coding method is that the error values are represented as sets of runlength-value pairs (r, v): a run of r zeros is followed by a pixel with the error value v. The long runs of zeros are efficiently encoded using runlengths, and the non-zero values are still expected be close to zero, allowing for efficient Huffman encoding. Note that, as with the spatial differencing using for keyframes, the temporal differences can be expressed in 4-bit values by treated them modulo 16.
- The decoding method for the non-keyframe data follows from the encoding method. The decoder iterates through the (ri, vi) pairs, decoding ri zeros followed by a single value vi for each pair.
- Huffman Coding: The Huffman coder used in a preferred embodiment uses a fixed table containing 16 symbols. The use of a fixed table saves the statistical computations that are required by adaptive Huffman schemes, and the 16-symbol limitation keeps the table at a manageable size. Both the keyframe data and the non-keyframe data are Huffman encoded prior to transmission to
Decompressor 150 using the same fixed table. The spatial and temporal differencing described above will result in difference values that are close to 0. Therefore, the fixed Huffman table is built to favor small values by assigning the shortest Huffman symbols to the smallest values. - The difference values will always fall in the range [1,15] due to the modulo 16 treatment, but the runlength values may be larger than 15. Values larger than 15 are handled within the 16-symbol Huffinan table by recursively dividing by sixteen until a value less than 16 is obtained. The range [1,15] uses fifteen of the sixteen Huffman symbols, and the remaining symbol is used as a flag to indicate the encoding of a large value.
- At the
decoder 150, the fixed Huffman table decoding is accomplished efficiently by decoding several symbols at a time. Huffman-encoded data is always read in 8-bit segments to avoid expensive bitwise operations, and precalculated tables stored in program memory are used to decode the symbols. This is a standard Huffman decoding method. -
Image Adder 144 Here the dequantized error image from theError Image Encoder 142 is added to the predicted image (the stored reference frame, received from Reference Frame Handler 134) to construct a new reference frame. The updated reference frame is then sent toReference Frame Handler 134. -
Decompressor 150 - A video decompressor of a preferred embodiment comprises two major parts: a
Predictor 160 and aDecoder 170. ThePredictor 160 comprises aReference Frame Handler 164 that maintains reference image information to be combined with the received and decoded error image data to create video frames. TheDecoder 170 comprises anError Image Decoder 172 that interprets the error data and applies the decompression methods required to decode the video, and aColorspace Converter 174 that performs upsampling, de-interlacing (if necessary), and intelligent 12-to-8 bit color conversion. -
Predictor 160 - Like the
Encoder 140, thePredictor 160 maintains two reference frames based on previously decoded video, stored byReference Frame Handler 164. For interlaced data, the most recent even-row or odd-row frame, as appropriate, is used as the reference frame. In the interlaced case, the reference frame will be delayed by two frame periods, since even- and odd-row frames alternate. For non-interlaced data, the immediately preceding frame is used. -
Decoder 170 -
Error Image Decoder 172 Here the compressed error image data is received and decoded. If a frame copy flag is received, then the previous frame is redisplayed and the error image decoder waits for the next set of frame data. For low or high activity frames, the Huffman, runlength, and non-zero coefficient coding are all reversed to recover the original error values. The decoded coefficient errors are then preferably applied directly to the reference image, thus saving the computation and memory resources that would be required to store, retrieve, and apply the error data as a separate step. Note that unlike in thecompressor 100, the error and reference images are not dequantized at this point. That is, after Huffman, runlength, and non-zero-coefficient decoding occurs, the (still-quantized) error image is added to the reference frame stored byReference Frame Handler 164 to create a video frame. One copy of that frame is sent toReference Frame Handler 164, to be stored as the next reference frame. The other copy is then dequantized (using one or more look-up tables) and sent toColorspace Converter 174. - Fixed Huffman table decoding is accomplished efficiently by decoding several symbols at a time. Huffman-encoded data is preferably read in 8-bit segments to avoid expensive bitwise operations, and precalculated tables stored in program memory are used to decode the symbols. This is a standard Huffman decoding method. See, e.g., Choueka, Y., S. T. Klein, and Y. Perl, Efficient Variants of Huffman Codes in High Level Languages, Proceedings of the 8th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, June 1985, pp. 122-130.
-
Colorspace Converter 174 The colorspace converter 174 receives the quantized, subsampled YST representation of the output video. The converter 174 then performs upsampling, de-interlacing (if necessary), and intelligent 12-to-8 bit color conversion, all in a single pass through the image. The upsampling and the color conversion through the table look-up implement the dithering process. The combination of all these tasks into a single pass makes this process very efficient. The intelligent color conversion performs a checkerboard mixing of colors to simulate color shades that are not realizable by an 8-bit color display. This is described in more detail below.
- In a preferred embodiment, video data is processed by the system shown in
FIG. 3 and described herein as follows. - Data flow steps within the
Video Preprocessor 120 are illustrated inFIG. 7 . Atstep 710Color Space Converter 122 receives 24-bit RGB video data and converts the data to a YST colorspace format. Atstep 720 the converted data is passed toFrame Activity Detector 126, which determines the level of frame-to-frame change in the data and stores and updates reference frames. Atstep 730, video data and frame activity information is passed toSubsampler 124, where each frame is subsampled, as described above. Atstep 740, the preprocessed video frame data is transmitted toPredictor 130. The Y channel is preprocessed, and in a preferred embodiment has 16 levels (4 bit), but is also dequantized, and the U&V are “stretched” 8 bit values. The data could be considered “processed 24-bit YST.” - Data flow steps within
Predictor 130 andEncoder 140 are illustrated inFIG. 8 . Atstep 810 pre-processed video frame data is received byPredictor 130. TheReference Frame Handler 134 receives the data and stores one (or two) reference frames, as described above, and send a reference frame toFrame Differencer 132. TheActivity Detector 126 sends information as to whether interlaced/ non-interlaced mode is to be used. When theencoder 140 receives this information, it also encodes it, and sends it to the decoder'sReference Frame Handler 164. Atstep 820,Frame Differencer 132 receives video frame data fromVideo Preprocessor 120, receives reference frame data fromReference Frame Handler 134, and calculates an error image, as described above. Atstep 825Frame Differencer 132 sends the error image to theError Image Encoder 142.Error Image Encoder 142 quantizes the error image, as described above, dequantizes one copy, and atstep 830 sends the dequantized copy toImage Adder 144. Atstep 840,Error Image Encoder 142 encodes a second copy of the quantized error image, as described above, and sends the encoded image toDecompressor 150. - At
step 850,Image Adder 144 receives a reference frame fromReference Frame Handler 134. Atstep 860,Image Adder 144 adds the dequantized error image received fromError Image Encoder 142 to the reference frame received fromReference Frame Handler 134 to create an updated reference image, and sends the updated reference image toReference Frame Handler 134. Atstep 870,Reference Frame Handler 134 sends a reference image toFrame Differencer 132, and step 820 is repeated. - Data flow steps within
Decompressor 150 are illustrated inFIG. 9 . Atstep 910Reference Frame Handler 164 receives control information fromReference Frame Handler 134. Atstep 920Error Image Decoder 172 receives encoded error image data fromError Image Encoder 142. Atstep 930Error Image Decoder 172 receives a reference image fromReference Image Handler 164, decodes the received error image data, and combines that data with the local reference image to create a new frame. One copy of that frame is sent atstep 940 toReference Frame Handler 164, and another copy is dequantized and sent atstep 950 toColorspace Converter 174. Atstep 960Colorspace Converter 174 converts the received video data from YST data to 8-bit RGB video data, while performing the tasks described above, and sends the 8-bit data to the display device. - Many of the methods used in the preferred embodiment provide a significant advantage in the wireless and mobile marketplace. This section describes the methods that provide this advantage.
- To accomplish efficient, intelligent conversion to 8-bit color for display on mobile handsets, the preferred video codec uses table lookups to efficiently implement color conversion and dithering in a single step. The original video stream is subsampled, quantized, and color-converted to 12-bit YST color prior to transmission. At the
decoder 150, the 12-bit YST is then converted to 8-bit RGB for display on the mobile handset. - To create the color conversion tables, each 12-bit YST color is matched to four 8-bit RGB pixels arranged in a 2×2 grid. The four RGB pixel values are chosen to give the best visual approximation to the original RGB color. The RGB approximations for all 4096 YST colors are stored in lookup tables so that no conversion computation needs to be done at the
decoder 150—the correct RGB pixels are simply read from the table and written into the output image. - The color conversion tables of a preferred embodiment require 16 KB of storage space, which is a sensible tradeoff to save computational complexity in most mobile environments. In addition, the dithering effect achieved by choosing four 8-bit RGB colors to correspond to each 12-bit YST color provides good color quality at low computational cost. The standard prior art methods for converting from 12-bit color to 8-bit color are (1) straight quantization, which is fast but gives poor results, and (2) dithering, which gives much better results than straight quantization but at increased computational cost. The table lookup method of a preferred embodiment of the present invention provides the color quality of dithering with the computational efficiency of straight quantization.
- The preferred video codec takes 24-bit RGB video as input and produces 8-bit (3:3:2) RGB video as output. However, it is important to note that the present invention encompasses and enables similar color conversion methods that could be applied to other input and output formats. For instance, the lookup tables could be rewritten to accommodate any 8-bit color scheme with no increase in size or complexity. The lookup table approach could also be used for conversion to output color schemes with more than 8 bits, with only moderate increase in the size of the lookup tables. For instance, if the output format required a 12-bit color, the table size would only need to be increased by 50%, to accommodate a 50% increase in output pixel size. However, if the number of bits in the intermediate 12-bit colorspace increases, the tables will double in size for every increase in the number of bits by one. For this reason, the preferred embodiment is preferred indeed for providing high-quality video on low-quality color displays such as those found on inexpensive and moderately-priced mobile devices.
- Additional speed at the
decoder 150 preferably is achieved in the video codec by combining upsampling and de-interlacing with the color conversion and dithering process. This combination allows all of these functions to be completed in a single pass through the image, saving both computational time and data memory (since no intermediate buffers are needed). - The YST video frames are subsampled compared to the output video size: the Y component is subsampled by a factor of 4 (2 in each dimension) and the S and T components are subsampled by a factor of 16 (4 in each dimension). This means that each S and T value corresponds to four Y values, and each of these Y values corresponds to 4 RGB values.
- This 4:1 correspondence between the Y component values and the output pixels makes the combination of upsampling with dithering straightforward.
FIG. 12 shows the subsampling relationships between the YST component blocks and the output video frame. The S and T values s11 and t11 correspond to the four Y values y11, y12, y21, and y22. The S and T values are used four times to create four YST colors: (y11, s11, t11), (y12, s11, t11), (y21, s11, t11), and (y22, s11, t11). Each of these colors has an entry in the lookup table. A lookup on the color (y11, s11, t11) provides the RGB values r11, r12, r21, and r22; a lookup on the color (y12, s11, t11) provides the RGB values r13, r14, r23, and r24; etc. - De-interlacing preferably is combined into the same process by dividing the color lookup table into two tables—one for even rows and one for odd rows. When a video frame is interlaced, the lookup alternates between the two tables, reading two 8-bit RGB pixels for each YST color rather than four. On an even iteration in the example above, the RGB pixels r11, r12, r13, r14, r31, r32, r33, and r34 will be retrieved from the even lookup table and written to output. On an odd iteration, the pixels r21, r22, r23, r24, r41, r42, r43, and r44 will be retrieved from the odd lookup table and written. On non-interlaced frames, both tables are used so that all four RGB values for each YST color are retrieved.
- In addition to the efficiency achieved by combining color conversion, upsampling, and de-interlacing into a single set of operations, the process preferably is made more efficient through the disclosed organization of the color tables and through the use of bit shifting and data types to reduce the number of pointer references and read/write operations. Since there are four Y values for each S and T value due to the preferred subsampling method, we have organized the table so that S and T only need to be considered ¼ as often as Y. “Bit-shifting and data types” refers to the way multiple pixels are treated simultaneously. Each output pixel value is an 8-bit value, but when read pixel values are read from the table they are read in pairs, treating each pair as a single 16-bit value. This cuts the number of read operations in half. Similarly, four pixels at a time are written by treating them as 32-bit values, cutting the number of write operations by ¼.
- Those skilled in the art will recognize that some parts of the invention are not specific to use of the YST colorspace, and would enhance methods based the YUV colorspace or other colorspaces.
- In the
preferred video decoder 150, update information is immediately applied directly into the reference image buffer 162 as the encoded error stream is being decoded. This one-pass execution makes thedecoder 150 efficient in both memory usage and processing power, since (a) single-pass execution reduces programming overhead associated with multiple passes through the data; (b) no intermediate buffer is needed to hold error information; and (c) only coefficients that change need to be updated—no computations are spent copying unchanged coefficients. - YST is a preferred colorspace designed to produce improved color quality on mobile and wireless devices with limited display and processing capabilities. By taking into account the color histogram properties of typical video clips and the color sensitivity of the human eye, the 12-bit YST color quantization is chosen to provide finer quantization in the color ranges that are most important in video quality perception. See below for a detailed description of the YST colorspace.
- The video codec of a preferred embodiment accepts 24-bit color source video, but the displays on most mobile and wireless devices are not capable of displaying 24-bit color. Quantizing the video color down to the display color space (8-bit RGB, for example), is efficient from a compression standpoint but does not allow for fast dithering and results in poor color representation on the mobile device. However, sending full 24-bit color is inefficient in bandwidth, since a lot of information is transmitted and then ignored.
- To balance the concerns of compression and video quality, the preferred video codec quantizes the color to 12 bits at the
encoder 100 and then further quantizes from 12 bits down to 8 bits at thedecoder 150. Transmitting 12-bit color allows the codec to use methods such as efficient dithering to provide good color representation on low-quality displays without requiring excessive use of transmission bandwidth. The use of 12-bit color and color dithering also allows video frames to be subsampled, since dithering can mask degradation in frame quality due to subsampling. - Activity detection and interlacing: In video sequences with a small amount of change between frames, interlacing can be used to improve the perceived quality of the video. The activity detection and interlacing process is described in detail in the “
Video Preprocessor 120” section above. This interlacing method helps mask the quality degradation caused by subsampling, allowing the preferred codec to produce higher perceived quality while reaping the compression benefits of subsampling. - The “superthin-superfast” design of the preferred codec provides a significant competitive advantage. Providers of prior art codecs have begun with the assumption that certain standard methods such as transform-based compression and motion compensation must be included in order to fit within the bandwidth constraints of the wireless environment. However, the present invention takes a different approach, beginning with only the barest necessities for encoding and decoding video. By using intelligent subsampling, color quantization and conversion, and dithering methods, the present invention comprises a video codec that is computationally very simple but still provides enough compression to meet the bandwidth constraints of the wireless environment. Simplicity is a primary strength of the video codec, since low computational complexity allows the codec to run on a wide range of mobile devices, many of which lack the processing power to support prior art products.
- The YST Colorspace
- YST is a novel colorspace designed to produce improved color quality on mobile and wireless devices with limited display and processing capabilities. By taking into account the color histogram properties of typical video clips and the color sensitivity of the human eye, the 12-bit YST color quantization is chosen to provide finer quantization in the color ranges that are most important in video quality perception.
- The color hexagon shown in
FIG. 13 represents all colors that are displayable on an electronic display. All of these colors can be described in terms of three-element vectors. Examples of common descriptions are the RGB and HSV triples, which describe the amount of each one of these primary colors present in a particular display color. The hexagon chart shown inFIG. 13 is based on the HSV triple. - The H-component stands for “hue,” which indicates the color frequency (or wavelength). The hue determines the angular position of a particular color in the color hexagon, so a radial line drawn from the center to the edge of the hexagon shows a set of colors with constant hue.
- The S-component, for “saturation,” indicates the purity of the color. Colors with low saturation appear “grayer” than colors with high saturation. The saturation determines the distance a particular color lies from the center of the hexagon, so concentric hexagons show sets of colors with approximately the same saturation. The center of the hexagon is true gray, where saturation is 0. The colors on the outside edge of the hexagon have full saturation.
- The V-component of the HSV triple stands for “value.” This term indicates the intensity or brightness of a particular color. Color intensity is not shown on the color hexagon, since the addition of a third component would require a three-dimensional representation. Instead, the color hexagon is a two-dimensional slice of the colorspace at a particular intensity. To visualize the three-dimensional colorspace, recall that the center of the color hexagon is true gray. The third dimension in the HSV colorspace runs along that gray axis, where the lowest intensity gray is true black, and the highest intensity gray is true white.
- Other colorspaces, such as YIQ, YUV, and the novel YST colorspace used herein, can also be represented on a hexagon chart. In each of these colorspaces, the Y-component represents the intensity, corresponding to the V-component from the HSV colorspace. The other two components represent a coordinate mapping of the colors shown in the hexagon. The H- and S-components in HSV are radial coordinates in the hexagon. In YIQ and YUV, the I-Q coordinate pair and the U-V coordinate pair are rectangular coordinates in the hexagon, linearly transformed to meet the desired characteristics of the colorspace. The YST colorspace is designed somewhat similarly, with quantization points chosen to produce good quality color on low-quality displays with small computational cost.
- The quantization pattern for the YST colorspace is chosen based on histogram characteristics of typical video clips and the color sensitivity of the human eye. The color chart in
FIG. 14 shows an example of a YST quantization pattern. - Following is a discussion of the goals and considerations resulting in the quantization characteristics shown in the pattern depicted in
FIG. 14 . - Bandwidth Considerations: Color histograms for three different video clips are shown in
FIG. 15 . The histograms were drawn by choosing 10,000 pixels at random from the clips and mapping those pixels in the color hexagon. These examples show that there tends to be more variation in the blue-red direction than in the green-magenta direction for typical video clips. This histogram data could indicate that more bandwidth should be applied to the blue-red color information than to the green-magenta color information. - However, information on color perception of the human eye indicates that the eye is more sensitive to changes in green-magenta color information than in blue-red information. This sensitivity difference means that accuracy in the representation of green-magenta color components is more visually important than blue-red color accuracy. The need for accurate representation of green-magenta color information could indicate that more bandwidth should be applied to the green-magenta color information, contradicting the conclusion drawn from the histogram data.
- The effect of the histogram data and perception information tend to cancel each other out, so that in designing the YST colorspace, the same bandwidth was allotted to the green-magenta and the blue-red color components. This translates into using the same number of quantization points in the green-magenta direction and the blue-red direction.
- Sensitivity in Gray Color Ranges: The eye is more sensitive to color differences in the gray colors near the center of the color hexagon than to changes in the more saturated colors. For this reason, the quantization points in the YST colorspace are more closely spaced in the gray regions in the center of the color hexagon and more spread apart on the outer edges of the colorspace.
- Range of Sensitivity: While the eye is more sensitive to changes in green-magenta shades than to changes in blue-red shades, the range of this sensitivity is more limited for green-magenta shades. For instance, the human eye perceives pure green at full saturation and at half saturation to be very nearly the same color. However, pure red at half saturation still appears noticeably “grayer” than full-saturation red. For this reason, the quantization points for the green-magenta colors are closer together and span a smaller range than the quantization points for the blue-red colors.
- Shift to Emphasize Important Colors: A common artifact of color quantization is the loss of texture information. When colors are quantized, texture information resulting from small variations in color may be lost. In addition, gradual color changes may be replaced with bands of quantized color.
- The eye is very sensitive to these kinds of quantization artifacts. Two common situations in which these artifacts arise are video sequences containing grass and trees, where texture appears as variations in natural greens, and video sequences containing human faces, where skin tones vary gradually depending on lighting. To improve color representation in these two common cases, the YST colorspace is shifted slightly toward green and red tones so that finer quantization is available for natural greens and skin tones.
- In a preferred embodiment, RGB values are rescaled so that they take values in the range [0,1]. Then (Y,S,T) values are given by
- Y=18R+36G+6B
- S=18R−18B
- T=−18R+36G−18B.
- However, those skilled in the art will recognize that the specific coefficients used in the transformation do not have to be identical to those described above in order to be within the scope of the present invention. The invention encompasses the methods used to arrive at the transformation. Consequently, any transformation found using the above methods is part of the invention.
- RGB can be recovered by inverting this transformation. However, in the preferred embodiment, a combined inverse and dither is used to create a greater number of perceived colors than is actually supported by the bit depth of the display. The inverse is used in the generation of color/upsample/dequantization look-up tables.
- Y takes values in [0,60]; S takes values in [−18,18]; and T takes values in [−36,36]. Y is rounded off to the nearest of the 16 numbers: 0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, and 60. Then S is rounded off to the nearest of the 16 numbers: −13, −9, −6, −4, −2, −1, 0, 1, 2, 3, 4, 5, 7, 9, 11, and 14. Then T is rounded off to the nearest of the 16 numbers: −14, −10, −7, −5, −3, −2, −1, 0, 1, 2, 3, 5, 7, 10, 14, and 18.
- The quantization (rounding bins) is preferably determined by trial-and-error to produce the best visual quality for the characteristics of the specific display to be used. These characteristics include, among other possible factors, bit depth, resolution, and intensity ratio (similar to gamma).
- In summary, a preferred embodiment comprises quantizing the colorspace in different regions and in different directions, in a manner matched to the information content and the human visual system; S and T establish the different directions.
- The S component corresponds to the direction of largest amplitude of typical image and video data. The T component corresponds to the direction of smallest amplitude. See
FIG. 15. Thus in some cases S may carry more information than T.
- However, S also corresponds to the direction of least sensitivity of the human visual system, and T to the direction of highest sensitivity. Both components therefore typically carry roughly the same amount of information as perceived by the human visual system (HVS). Accordingly, roughly the same number of quantization levels can be used for S and T, but the quantization levels for S and T should be selected in different ways.
- The HVS is more sensitive to T than to S for small values of S and T. For large values of S and T the sensitivity drops. The region of high sensitivity is smaller for T than for S. Thus the quantization levels for both S and T should be denser near 0 and less dense away from 0. However, the levels for T should be clustered significantly more toward 0 than those for S. This can be seen in
FIG. 14.
- The above statements regarding amplitude can be demonstrated from the histograms. Subjective evaluations have shown that when the bandwidth of either direction is reduced, the perceived quality degrades, supporting the claimed benefit of optimizing for sensitivity and amplitude. Thus, the methods described herein provide a better set of directions than other popular color transforms, such as YUV and RGB.
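- As a rough illustration of how the amplitude comparison could be checked, the sketch below estimates the spread of S and T over randomly sampled pixels; the frame-access conventions, sample size, and use of standard deviation as the spread measure are assumptions for this example, and rgb_to_yst is the forward-transform sketch given earlier.
```python
import random
import statistics

def component_spread(frames, n_samples=10000):
    """Estimate the spread of S and T over randomly sampled pixels.

    `frames` is assumed to be a sequence of frames, each a sequence of
    (r8, g8, b8) pixel tuples; rgb_to_yst is the earlier forward-transform sketch.
    """
    s_vals, t_vals = [], []
    for _ in range(n_samples):
        frame = random.choice(frames)
        r8, g8, b8 = random.choice(frame)
        _, s, t = rgb_to_yst(r8, g8, b8)
        s_vals.append(s)
        t_vals.append(t)
    return statistics.pstdev(s_vals), statistics.pstdev(t_vals)

# For typical clips the first value (blue-red spread) would be expected to exceed
# the second (green-magenta spread), matching the histogram observation above.
```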
- It is believed that in the creation of other colorspaces these questions concerning the human visual system have not been asked, or have not been asked properly. Further, even where they have been asked properly, they have not been asked under the constraints of computational, memory, and bandwidth efficiency.
- It will be appreciated by those skilled in the art having the benefit of this disclosure that numerous variations from the foregoing preferred embodiments will be possible without departing from the inventive concept described herein. Accordingly, it is the claims set forth below, and not merely the foregoing illustrations, which are intended to define the exclusive rights of the invention.
Claims (19)
1. A system for video compression, comprising:
a video preprocessor;
a predictor configured to receive video data from said preprocessor; and
an encoder configured to communicate with said predictor;
wherein said preprocessor comprises a colorspace converter, a frame activity detector, and a subsampler;
wherein said predictor comprises a frame differencer and a reference frame handler; and
wherein said encoder comprises an error image encoder and an image adder.
2. A system as in claim 1, wherein said colorspace converter converts from RGB colorspace to YST colorspace.
3-4. (canceled)
5. A method for video compression, comprising:
receiving color video data represented in a first colorspace representation;
converting said received color video data to a second colorspace representation;
identifying activity between consecutive frames of said converted color video data;
subsampling said converted color video data;
calculating error image data based on said subsampled and converted color video data and on said identified frame activity;
encoding said error image data; and
transmitting said encoded error image data to a device capable of displaying color video data;
wherein said step of identifying activity is performed before said step of subsampling.
6. A method for video decompression, comprising:
receiving encoded color video error image data;
decoding said data;
combining said decoded data with previously received data to construct video frame data in a first colorspace representation;
converting said color video frame data to a second colorspace representation with one pass through the data; and
displaying said color video frame data;
wherein said step of converting comprises upsampling and dithering.
7. A method as in claim 6, wherein said step of converting is performed using look-up tables.
8. A method for representing color video information, comprising:
receiving 24-bit RGB color video data; and
transforming said RGB data according to the linear transformation:
Y=18R+36G+6B; S=18R−18B; and T=−18R+36G−18B.
9. A method for compressing and decompressing color video data, comprising:
receiving color video data represented in a first colorspace representation and with a first pixel depth;
converting said color video data to a second colorspace representation with a second pixel depth;
compressing said converted data; and
decompressing said compressed converted data;
wherein said step of decompressing comprises converting said data to a third colorspace representation with a third pixel depth.
10. A method as in claim 9, wherein said second colorspace representation and second pixel depth are selected so as to optimize compression and decompression computational efficiency.
11. A method as in claim 9, wherein said third colorspace representation with said third pixel depth is selected to comply with format requirements of a display device.
12. A method as in claim 9, wherein said first and third colorspace representations are the same format.
13. A method as in claim 9, wherein said first colorspace representation is RGB.
14. A method as in claim 9, wherein said second colorspace representation is YST.
15. A method as in claim 9, wherein said first pixel depth is 24-bit.
16. A method as in claim 9, wherein said second pixel depth is 12-bit.
17. A method as in claim 16, wherein said third pixel depth is 8-bit, 12-bit, 16-bit, or 24-bit.
18. A method as in claim 9, wherein said third pixel depth is 8-bit, 12-bit, 16-bit, or 24-bit.
19. A method as in claim 18, wherein said third pixel depth is 8-bit.
20. A method as in claim 9, wherein said step of converting said data to a third colorspace representation with a third pixel depth is performed with one pass through the data and comprises upsampling and dithering.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/594,144 US20070053429A1 (en) | 2001-05-07 | 2006-11-08 | Color video codec method and system |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US28908501P | 2001-05-07 | 2001-05-07 | |
US28934201P | 2001-05-07 | 2001-05-07 | |
US28918901P | 2001-05-07 | 2001-05-07 | |
US28919001P | 2001-05-07 | 2001-05-07 | |
US28908601P | 2001-05-07 | 2001-05-07 | |
US28934001P | 2001-05-07 | 2001-05-07 | |
US10/141,100 US7149249B2 (en) | 2001-05-07 | 2002-05-07 | Color video codec method and system |
US11/594,144 US20070053429A1 (en) | 2001-05-07 | 2006-11-08 | Color video codec method and system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/141,100 Division US7149249B2 (en) | 2001-05-07 | 2002-05-07 | Color video codec method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20070053429A1 true US20070053429A1 (en) | 2007-03-08 |
Family
ID=27559597
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/141,100 Expired - Fee Related US7149249B2 (en) | 2001-05-07 | 2002-05-07 | Color video codec method and system |
US11/594,144 Abandoned US20070053429A1 (en) | 2001-05-07 | 2006-11-08 | Color video codec method and system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/141,100 Expired - Fee Related US7149249B2 (en) | 2001-05-07 | 2002-05-07 | Color video codec method and system |
Country Status (3)
Country | Link |
---|---|
US (2) | US7149249B2 (en) |
AU (1) | AU2002256477A1 (en) |
WO (1) | WO2002091282A2 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080140380A1 (en) * | 2006-12-07 | 2008-06-12 | David John Marsyla | Unified mobile display emulator |
US20120207449A1 (en) * | 2011-01-28 | 2012-08-16 | Nils Angquist | Efficient Media Import |
US20150139319A1 (en) * | 2011-04-21 | 2015-05-21 | Intellectual Discovery Co., Ltd. | Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering |
US9997196B2 (en) | 2011-02-16 | 2018-06-12 | Apple Inc. | Retiming media presentations |
US10324605B2 (en) | 2011-02-16 | 2019-06-18 | Apple Inc. | Media-editing application with novel editing tools |
US11747972B2 (en) | 2011-02-16 | 2023-09-05 | Apple Inc. | Media-editing application with novel editing tools |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100922941B1 (en) * | 2002-11-04 | 2009-10-22 | 삼성전자주식회사 | Appratus and method of an energy-based adaptive DCT/IDCT |
US7684096B2 (en) * | 2003-04-01 | 2010-03-23 | Avid Technology, Inc. | Automatic color correction for sequences of images |
FR2872317A1 (en) * | 2004-06-08 | 2005-12-30 | Do Labs Sa | METHOD FOR IMPROVING THE QUALITY OF USE OF A SERVICE RELATING TO AT LEAST ONE MULTIMEDIA DATA |
JP2006047993A (en) | 2004-07-08 | 2006-02-16 | Sharp Corp | Data conversion device |
FR2880718A1 (en) * | 2005-01-10 | 2006-07-14 | St Microelectronics Sa | METHOD AND DEVICE FOR REDUCING THE ARTIFACTS OF A DIGITAL IMAGE |
EP1696384A1 (en) * | 2005-02-23 | 2006-08-30 | SONY DEUTSCHLAND GmbH | Method for processing digital image data |
TWI309948B (en) * | 2006-06-27 | 2009-05-11 | Realtek Semiconductor Corp | Method of generating video driving signal and apparatus thereof |
US20120230395A1 (en) * | 2011-03-11 | 2012-09-13 | Louis Joseph Kerofsky | Video decoder with reduced dynamic range transform with quantization matricies |
US20150264368A1 (en) * | 2014-03-14 | 2015-09-17 | Sony Corporation | Method to bypass re-sampling process in shvc with bit-depth and 1x scalability |
US11528314B2 (en) | 2020-03-26 | 2022-12-13 | Honeywell International Inc. | WebAssembly module with multiple decoders |
CN112364761B (en) * | 2020-11-10 | 2024-06-14 | 广东电网有限责任公司清远供电局 | Testing method and device based on video image recognition algorithm and testing terminal |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5301032A (en) * | 1992-04-07 | 1994-04-05 | Samsung Electronics Co., Ltd. | Digital image compression and decompression method and apparatus using variable-length coding |
US5317397A (en) * | 1991-05-31 | 1994-05-31 | Kabushiki Kaisha Toshiba | Predictive coding using spatial-temporal filtering and plural motion vectors |
US5703697A (en) * | 1996-03-20 | 1997-12-30 | Lg Electronics, Inc. | Method of lossy decoding of bitstream data |
US5758092A (en) * | 1995-11-14 | 1998-05-26 | Intel Corporation | Interleaved bitrate control for heterogeneous data streams |
US5854858A (en) * | 1995-06-07 | 1998-12-29 | Girod; Bernd | Image signal coder operating at reduced spatial resolution |
US5878166A (en) * | 1995-12-26 | 1999-03-02 | C-Cube Microsystems | Field frame macroblock encoding decision |
US6157740A (en) * | 1997-11-17 | 2000-12-05 | International Business Machines Corporation | Compression/decompression engine for enhanced memory storage in MPEG decoder |
US6928648B2 (en) * | 2001-04-20 | 2005-08-09 | Sun Microsystems, Inc. | Method and apparatus for a mobile multimedia java framework |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6031937A (en) * | 1994-05-19 | 2000-02-29 | Next Software, Inc. | Method and apparatus for video compression using block and wavelet techniques |
US5740278A (en) * | 1996-02-16 | 1998-04-14 | Cornell Research Foundation, Inc. | Facsimile-based video compression method and system |
US5973626A (en) * | 1998-03-17 | 1999-10-26 | Cornell Research Foundation, Inc. | Byte-based prefix encoding |
US6353634B1 (en) * | 1998-12-17 | 2002-03-05 | The United States Of America As Represented By The Secretary Of The Navy | Video decoder using bi-orthogonal wavelet coding |
ATE236489T1 (en) | 2000-09-11 | 2003-04-15 | Mediabricks Ab | METHOD FOR PROVIDING MEDIA CONTENT VIA A DIGITAL NETWORK |
EP1187481B1 (en) | 2000-09-11 | 2008-04-02 | Handmark Europe AB | A method for dynamic caching |
- 2002
- 2002-05-07 WO PCT/US2002/014360 patent/WO2002091282A2/en not_active Application Discontinuation
- 2002-05-07 AU AU2002256477A patent/AU2002256477A1/en not_active Abandoned
- 2002-05-07 US US10/141,100 patent/US7149249B2/en not_active Expired - Fee Related
- 2006
- 2006-11-08 US US11/594,144 patent/US20070053429A1/en not_active Abandoned
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5317397A (en) * | 1991-05-31 | 1994-05-31 | Kabushiki Kaisha Toshiba | Predictive coding using spatial-temporal filtering and plural motion vectors |
US5301032A (en) * | 1992-04-07 | 1994-04-05 | Samsung Electronics Co., Ltd. | Digital image compression and decompression method and apparatus using variable-length coding |
US5854858A (en) * | 1995-06-07 | 1998-12-29 | Girod; Bernd | Image signal coder operating at reduced spatial resolution |
US5758092A (en) * | 1995-11-14 | 1998-05-26 | Intel Corporation | Interleaved bitrate control for heterogeneous data streams |
US5878166A (en) * | 1995-12-26 | 1999-03-02 | C-Cube Microsystems | Field frame macroblock encoding decision |
US5703697A (en) * | 1996-03-20 | 1997-12-30 | Lg Electronics, Inc. | Method of lossy decoding of bitstream data |
US6157740A (en) * | 1997-11-17 | 2000-12-05 | International Business Machines Corporation | Compression/decompression engine for enhanced memory storage in MPEG decoder |
US6928648B2 (en) * | 2001-04-20 | 2005-08-09 | Sun Microsystems, Inc. | Method and apparatus for a mobile multimedia java framework |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7545386B2 (en) * | 2006-12-07 | 2009-06-09 | Mobile Complete, Inc. | Unified mobile display emulator |
US20080140380A1 (en) * | 2006-12-07 | 2008-06-12 | David John Marsyla | Unified mobile display emulator |
US20120207449A1 (en) * | 2011-01-28 | 2012-08-16 | Nils Angquist | Efficient Media Import |
US8886015B2 (en) * | 2011-01-28 | 2014-11-11 | Apple Inc. | Efficient media import |
US8954477B2 (en) | 2011-01-28 | 2015-02-10 | Apple Inc. | Data structures for a media-editing application |
US9099161B2 (en) | 2011-01-28 | 2015-08-04 | Apple Inc. | Media-editing application with multiple resolution modes |
US9251855B2 (en) | 2011-01-28 | 2016-02-02 | Apple Inc. | Efficient media processing |
US9870802B2 (en) | 2011-01-28 | 2018-01-16 | Apple Inc. | Media clip management |
US10324605B2 (en) | 2011-02-16 | 2019-06-18 | Apple Inc. | Media-editing application with novel editing tools |
US11747972B2 (en) | 2011-02-16 | 2023-09-05 | Apple Inc. | Media-editing application with novel editing tools |
US9997196B2 (en) | 2011-02-16 | 2018-06-12 | Apple Inc. | Retiming media presentations |
US11157154B2 (en) | 2011-02-16 | 2021-10-26 | Apple Inc. | Media-editing application with novel editing tools |
US9420312B2 (en) * | 2011-04-21 | 2016-08-16 | Intellectual Discovery Co., Ltd. | Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering |
US10237577B2 (en) | 2011-04-21 | 2019-03-19 | Intellectual Discovery Co., Ltd. | Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering |
US10129567B2 (en) | 2011-04-21 | 2018-11-13 | Intellectual Discovery Co., Ltd. | Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering |
US20150139319A1 (en) * | 2011-04-21 | 2015-05-21 | Intellectual Discovery Co., Ltd. | Method and apparatus for encoding/decoding images using a prediction method adopting in-loop filtering |
Also Published As
Publication number | Publication date |
---|---|
AU2002256477A1 (en) | 2002-11-18 |
WO2002091282A2 (en) | 2002-11-14 |
WO2002091282A3 (en) | 2003-01-09 |
US7149249B2 (en) | 2006-12-12 |
US20020186770A1 (en) | 2002-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20070053429A1 (en) | Color video codec method and system | |
US7460723B2 (en) | Quality based image compression | |
US4663660A (en) | Compressed quantized image-data transmission technique suitable for use in teleconferencing | |
US6501793B2 (en) | Quantization matrix for still and moving picture coding | |
US7050642B2 (en) | Method and apparatus for video compression using microwavelets | |
US6529634B1 (en) | Contrast sensitive variance based adaptive block size DCT image compression | |
US9232226B2 (en) | Systems and methods for perceptually lossless video compression | |
US6526174B1 (en) | Method and apparatus for video compression using block and wavelet techniques | |
US7483581B2 (en) | Apparatus and method for encoding digital image data in a lossless manner | |
US6870963B2 (en) | Configurable pattern optimizer | |
US7664184B2 (en) | Interpolation image compression | |
US20070237222A1 (en) | Adaptive B-picture quantization control | |
US7403561B2 (en) | Fixed bit rate, intraframe compression and decompression of video | |
US7149350B2 (en) | Image compression apparatus, image depression apparatus and method thereof | |
US20020191695A1 (en) | Interframe encoding method and apparatus | |
US20030012431A1 (en) | Hybrid lossy and lossless compression method and apparatus | |
EP1324618A2 (en) | Encoding method and arrangement | |
EP1629675B1 (en) | Fixed bit rate, intraframe compression and decompression of video | |
JPH0686258A (en) | Orthogonal transform encoder and decoder |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: AP OASYS HOLDINGS, LLC, PENNSYLVANIA Free format text: SECURITY AGREEMENT;ASSIGNOR:OASYS MOBILE, INC.;REEL/FRAME:020035/0737 Effective date: 20071011
Owner name: RHP MASTER FUND, LTD, AS AGENT, PENNSYLVANIA Free format text: SECURITY AGREEMENT;ASSIGNOR:OASYS MOBILE, INC.;REEL/FRAME:020035/0737 Effective date: 20071011 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |