CN1196340C - Efficient, flexible motion estimation architecture for real time MPEG2 compliant encoding - Google Patents
- Publication number
- CN1196340C CN1196340C CNB971200823A CN97120082A CN1196340C CN 1196340 C CN1196340 C CN 1196340C CN B971200823 A CNB971200823 A CN B971200823A CN 97120082 A CN97120082 A CN 97120082A CN 1196340 C CN1196340 C CN 1196340C
- Authority
- CN
- China
- Prior art keywords
- search
- unit
- data
- optimum match
- hierarchical
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/53—Multi-resolution motion estimation; Hierarchical motion estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
Abstract
Temporal compression of a digital video data stream is achieved by hierarchically searching pixels in a reference picture, in at least one search unit, to find a best match for the current macroblock. This is followed by constructing a motion vector between the current macroblock and the best match macroblock in the reference picture.
Description
Technical field
The present invention relates to real-time motion estimation for MPEG2-compliant digital video encoding.
Background art
Motion estimation provides inter-picture compression through the use of motion vectors. According to the present invention, temporal compression of a digital video data stream is achieved by hierarchically searching pixels in at least one search unit of a reference picture to find the best match macroblock for the current macroblock. A motion vector is then constructed between the current macroblock and the best match macroblock in the reference picture.
Over the past decade, the emergence of worldwide electronic communication systems has improved the way people send and receive information. In particular, the capabilities of real-time video and audio systems have improved greatly in recent years. Providing services such as video-on-demand and video conferencing to subscribers requires an enormous amount of network bandwidth. In fact, network bandwidth is usually the main obstacle to the effectiveness of such systems.
Compression systems have emerged to overcome the limitations imposed by networks. These systems reduce the amount of video and audio data that must be transmitted by removing redundancy in the picture sequence. At the receiving end, the picture sequence is decompressed and can be displayed in real time.
One example of an existing video compression standard is the MPEG standard. In the MPEG standard, video compression is defined both within a given picture and between pictures. Compression within a picture is accomplished by processes such as the discrete cosine transform, quantization, and run-length coding. Compression between pictures is accomplished by a process referred to as motion estimation, in which a motion vector is used to describe the translation of a set of picture elements (pels, or pixels) from one picture to another. The motion vectors themselves are then encoded.
Motion estimation algorithms are repetitive operations and, to be implemented effectively, require great computational power. This is especially true if motion estimation is to be performed in a real-time video transmission environment. In addition, two important constraints imposed by system designers are the card or board area that the motion estimation function may occupy and the cost of the components, in particular the amount of DRAM and/or SRAM required to store the reference picture data. A well-designed motion estimation data flow must maximize computational capability so as to satisfy the requirements of real-time encoding, while minimizing the chip area required to implement the function. A further clear requirement is that the motion estimation data flow be flexible enough to support a range of system cost points.
Summary of the invention
One object of the present invention is to provide a well-designed motion estimation data flow that maximizes computational capability so as to satisfy the requirements of real-time encoding, while at the same time essentially minimizing the chip area occupied.
Another object of the present invention is to provide a flexible motion estimation data flow that can support a range of system cost points.
A further object of the present invention is to provide a hierarchical motion estimation method and apparatus.
Another object of the present invention is to provide a hierarchical motion estimation method and apparatus in which the hierarchical motion estimation search is carried out using downsampled full-pel values.
Another object of the present invention is to provide a hierarchical motion estimation method and apparatus in which the hierarchical motion estimation search is a field search.
These and other objects are achieved by the method and apparatus described herein.
According to the present invention, a search method for digital video motion estimation comprises the following steps: hierarchically searching pixels in at least one hierarchical search unit of a reference picture to find the best match macroblock corresponding to the current macroblock; constructing a motion vector for the offset between the best match macroblock and the current macroblock; sending the motion vector from the at least one hierarchical search unit to a fine search unit; and performing a fine search around the offset of the best match block, the fine search comprising fine searches carried out in a full-resolution unit, a half-resolution unit, and a dual-prime unit.
According to the present invention, a search method for digital video motion estimation comprises: searching pixels in a reference picture using downsampled full-pel values to find the best match macroblock corresponding to the current macroblock, and constructing a motion vector for the offset between the best match macroblock and the current macroblock; reconstructing fine search data around the offset of the best downsampled match; and thereafter performing a non-downsampled full-pel search, using the reconstructed fine search data, around the offset of the best match macroblock.
According to the present invention, a search processor for digital video motion estimation comprises: a. a hierarchical search unit; and b. a fine search unit connected to the hierarchical search unit by a best match difference/offset bus, the fine search unit comprising a full-pel searcher, a half-pel searcher, and a dual-prime searcher, the full-pel searcher being connected to the half-pel searcher and to the dual-prime searcher, and the half-pel searcher being connected to the dual-prime searcher.
According to the present invention, a method is provided for temporal compression of a digital video data stream. The method begins by hierarchically searching pixels in at least one search unit of a reference picture to find the best match macroblock corresponding to the current macroblock. The next step is to construct a motion vector between the best match macroblock and the current macroblock.
According to another embodiment, a method is provided for temporal compression of a digital video data stream. The method comprises searching pixels in a reference picture using downsampled full-pel values to find the best match macroblock, that is, the macroblock in the reference picture that most closely resembles the current macroblock. The next step is to construct a motion vector between the best match macroblock and the current macroblock.
According to yet another embodiment of the invention, a method is provided for temporal compression of a digital video data stream that includes a field search with even/even, odd/odd, even/odd, and odd/even search unit inputs. The search is performed over pixels in a reference picture field with the goal of finding the best match macroblock corresponding to the current macroblock. As before, a motion vector is constructed between the best match macroblock and the current macroblock.
The invention may be understood with reference to the accompanying figures.
Fig. 1 is a flow diagram of a generalized MPEG2-compliant encoder 11, including a discrete cosine transformer 21, a quantizer 23, a variable length coder 25, an inverse quantizer 29, an inverse discrete cosine transformer 31, motion compensation 41, frame memory 42, and motion estimation 43. The data paths include the i-th picture input 111, difference data 112, motion vectors 113, the picture output 121, the feedback picture 131 for motion estimation and compensation, and the motion compensated picture 101. This figure assumes that the i-th picture is already present in frame memory (frame store) 42 and that the (i+1)-th picture is being encoded with motion estimation.
Fig. 2 illustrates the I, P, and B pictures, with examples of their display order, transmission order, and forward and backward motion prediction.
Fig. 3 illustrates the search from the motion estimation block in the current frame or picture to the best match block in a subsequent or previous frame or picture. Elements 211 and 211' represent the same location in the two pictures.
Fig. 4 illustrates the movement of picture blocks, in accordance with motion vectors, from their positions in a previous picture to their positions in a new picture, together with the picture blocks of the previous picture after adjustment using the motion vectors.
Fig. 5 illustrates the general structure of a search unit having a hierarchical search unit 201 and a fine search unit 221. The hierarchical search unit 201 contains a downsampled full-pel search unit 203. The fine search unit contains a full-pel fine search unit 223, which provides input to a half-pel search unit 225 and a dual-prime search unit 227. The dual-prime search unit 227 also receives input from the half-pel search unit 225.
Fig. 6 shows a data flow diagram of hierarchical motion estimation. The hierarchical search unit 201 receives best match difference/offset data from a previous hierarchical search unit (not shown) and current macroblock data from the current macroblock (CMB) data bus 205, and sends its output to the fine search/reconstruction unit 221 and the hierarchical search memory 211. The fine search/reconstruction unit 221 receives data from the current macroblock data bus 205, and sends data to and receives data from the Diff/Qxfrm bus 231 and the fine search memory 229. The output of the fine search/reconstruction unit 221 is sent to the motion vector bus 241.
Fig. 7 shows the data flow of the hierarchical search unit. It receives data from the current macroblock data bus (luminance only) 205 through a luma buffer 207, and both receives data from and sends data to the search data bus 207. Four field searches are shown: f1/f1 301, f2/f2 303, f1/f2 305, and f2/f1 307. They produce the f1/f1, f2/f2, f1/f2, and f2/f1 differences, respectively. These differences are fed to the best match result selection unit 311, which outputs the best match difference/offset 313.
Fig. 8 shows a data flow diagram of the fine search/reconstruction unit 221. Chrominance and luminance data enter the unit from the CMB data bus 205 through the luma/chroma buffer 207 under the control of the memory controller 301. Data passes through the full-resolution (FR) unit 321 and the half-resolution (HR) unit 323 to the dual-prime (DP) unit 325, from there to the FD unit 327, and then from the FD unit 327 to the motion adjust (MA) unit 329. The motion estimation processor (MEPROC) unit 331 controls these units and sends control signals to the motion vector bus (MV bus). The output of the FD unit 327 goes to the Diff/QXFRM data bus 332, and from there to the inverse quantizer (IQ) 333 and the inverse discrete cosine transform (ID) unit 335, and finally back to the motion adjust (MA) unit 329.
Table 1 shows the motion estimation strategy, including the search mode (hierarchical or non-hierarchical), the picture structure (interlaced or progressive), the picture type (intra, predicted, bidirectional), the motion estimation options (dual-prime or no dual-prime), the number of searches, the search types, and the fine search window sizes.
Disclosed herein is a motion estimation architecture that is flexible and efficient and that can satisfy the strict demands of a real-time encoding environment.
The invention relates to MPEG- and HDTV-compliant encoders and encoding processes. The encoding functions performed by the encoder include data input, motion estimation, macroblock mode generation, data reconstruction, entropy coding, and data output. Motion estimation and motion compensation provide the temporal compression function. They are repetitive operations requiring very high computational capability, and they encompass intensive reconstruction processing such as inverse discrete cosine transformation, inverse quantization, and motion compensation.
More particularly, the invention relates to motion estimation, compensation, and prediction, and more especially to the calculation of motion vectors. Motion compensation exploits temporal redundancy by dividing the current picture into blocks, for example macroblocks, and then searching near the same location in a previously transmitted picture for a block with similar content. Only the difference between the current block pels and the prediction block pels taken from the reference picture is actually compressed for transmission and subsequently transmitted.
The simplest method of motion prediction and compensation is to record the luminance and chrominance, i.e. the intensity and color, of every pixel in an "I" picture, and then to record the changes in luminance and chrominance, i.e. the changes in intensity and color, of each particular pixel in subsequent pictures. However, this is uneconomical in terms of transmission medium bandwidth, memory, processor capacity, and processing time, because objects move between pictures; that is, pixel contents move from one position in one picture to a different position in a subsequent picture. A more advanced idea is to use the pixels of a previous picture to predict where a block of pixels will be in a subsequent picture or pictures, for example by means of motion vectors, and to write the result as a "predicted picture" or "P" picture. More particularly, this involves making a best prediction or estimate of where a pixel or macroblock of the (i+1)-th picture will be relative to its position in the i-th picture. A further step is to use both a later picture and an earlier picture to predict where a block of pixels will be in an intermediate or "B" picture.
It should be noted that the order in which pictures are encoded and the order in which they are transmitted are not necessarily the same as the picture display order. See Fig. 2. For I-P-B systems, the input picture transmission order differs from the encoding order, and the input pictures must be stored temporarily until they are used for encoding. A buffer holds these inputs until they are needed.
For purposes of illustration, Fig. 1 shows a generalized flow diagram of MPEG-compliant encoding. In the flow diagram, the i-th picture and the (i+1)-th picture are processed to generate motion vectors. The motion vectors predict where a macroblock of pixels will be in a prior and/or subsequent picture. The use of motion vectors instead of full pictures is a key factor of temporal compression in the MPEG and HDTV standards. As shown in Fig. 1, once the motion vectors have been generated, they are used to translate the macroblocks of pixels from the i-th picture to the (i+1)-th picture.
As shown in Fig. 1, in the encoding process the i-th picture and the (i+1)-th picture are processed in the encoder 11 to generate motion vectors, and subsequent pictures, such as the (i+1)-th and later pictures, are encoded and transmitted in the same way. An input picture 111X of a subsequent picture is fed to the motion estimation unit 43 of the encoder. Motion vectors 113 are formed as the output of the motion estimation unit 43. These vectors are used by the motion compensation unit 41 to retrieve macroblock data, referred to as reference data, from previous and/or future pictures, and this reference data forms the output of that unit. One output of the motion compensation unit 41 is negatively summed with the output of the motion estimation unit 43, and the result goes to the input of the discrete cosine transformer 21. The output of the discrete cosine transformer 21 is quantized in the quantizer 23. The output of the quantizer 23 is split into two outputs, 121 and 131: one output 121 goes on for further compression and processing before transmission, for example to a run-length encoder; the other output 131 passes through reconstruction of the encoded macroblock of pixels and is stored in frame memory 42. In the encoder shown for purposes of illustration, this second output 131 passes through inverse quantization 29 and inverse discrete cosine transformation 31 to yield a lossy version of the difference macroblock. This data is summed with an output of the motion compensation unit 41, and the resulting lossy version of the original picture is fed into frame memory 42.
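To make the loop around elements 21, 23, 29, 31, and 41 concrete, the following is a minimal sketch of one pass for a single block. It assumes a simple uniform quantizer and uses SciPy's 2-D DCT as a stand-in for the 8x8 block transform; the function and parameter names (`encode_macroblock`, `qstep`) are illustrative and not taken from the patent.

```python
import numpy as np
from scipy.fft import dctn, idctn  # 2-D DCT/IDCT as a stand-in for the 8x8 block transform

def encode_macroblock(current, prediction, qstep=16):
    """One pass around the Fig. 1 loop for a single block.

    Returns the quantized coefficients (the entropy-coding path 121) and the
    reconstructed block (the frame-memory path 131)."""
    residual = current.astype(np.float64) - prediction.astype(np.float64)  # subtractor output
    coeffs = np.round(dctn(residual, norm="ortho") / qstep)                # DCT 21 + quantizer 23
    # Path 131: inverse quantization 29 and inverse DCT 31 give a lossy residual,
    # which is added back to the motion-compensated prediction and stored in
    # frame memory 42 so that encoder and decoder work from the same reference.
    lossy_residual = idctn(coeffs * qstep, norm="ortho")
    reconstructed = np.clip(lossy_residual + prediction, 0, 255)
    return coeffs, reconstructed
```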
As shown in Fig. 2, there are three types of pictures. "Intra pictures" or "I" pictures are encoded and transmitted whole and do not require motion vectors to be defined. These "I" pictures serve as a source of motion vectors. "Predicted pictures" or "P" pictures are formed from motion vectors relative to a previous picture and can serve as a source of motion vectors for further pictures. Finally, "bidirectional pictures" or "B" pictures are formed from motion vectors relative to two other pictures, one earlier and one later, and cannot serve as a source of motion vectors. Motion vectors are derived from "I" and "P" pictures and are used to form "P" and "B" pictures.
Fig. 3 shows one method of performing motion estimation, which begins with a full search of a region of the previous picture for the macroblock 211 of the (i+1)-th picture, to find the best match macroblock 213 (211' and 211 are at the same position, but 211' is in the previous picture). Translating the macroblocks in this way yields the macroblock layout of the (i+1)-th picture, as shown in Fig. 4. In this way the i-th picture needs to be changed only slightly, for example by means of motion vectors and difference data, to generate the (i+1)-th picture. What is encoded is the motion vectors and the difference data, not the (i+1)-th picture itself. The motion vectors translate the picture blocks from picture to picture, while the difference data carries the changes in chrominance, luminance, and saturation, that is, the changes in color and brightness.
Returning to Fig. 3, we start looking for a match at position 211' in the i-th picture, which is the same position that macroblock 211 occupies in the (i+1)-th picture. A search window is created in the i-th picture, and the best match is sought within this search window. Once found, the best motion vector for the macroblock is encoded. Encoding the best match macroblock includes the motion vector, that is, how many pixels in the x direction and how many pixels in the y direction the best match is displaced in the next picture. Also encoded is the difference data, also referred to as the "prediction error", which is the difference in chrominance and luminance between the current macroblock and the best match macroblock.
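In its simplest form, the block matching just described reduces to an exhaustive sum-of-absolute-differences (SAD) search over the search window. The sketch below is a minimal illustration of that idea rather than the architecture of the patent; the 16x16 block size matches an MPEG2 macroblock, while the window size and the function name are assumptions.

```python
import numpy as np

def full_search(cmb, ref, cx, cy, search=16):
    """Exhaustive full-pel block match of the current macroblock `cmb` against
    the reference picture `ref`. (cx, cy) is the co-located top-left corner,
    and the window extends +/-`search` pels in each direction.
    Returns ((dx, dy), best SAD)."""
    bh, bw = cmb.shape
    h, w = ref.shape
    c = cmb.astype(int)
    best = (0, 0, np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = cx + dx, cy + dy
            if x < 0 or y < 0 or x + bw > w or y + bh > h:
                continue                                  # candidate outside the picture
            sad = np.abs(c - ref[y:y + bh, x:x + bw].astype(int)).sum()
            if sad < best[2]:
                best = (dx, dy, sad)
    return (best[0], best[1]), best[2]
```

The returned (dx, dy) plays the role of the motion vector; subtracting the matched reference block from `cmb` gives the prediction error that would then be transformed and quantized.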
Fig. 4 shows the movement of picture blocks, in accordance with the motion vectors, from their positions in the previous picture to their positions in the new picture, together with the picture blocks of the previous picture after adjustment using the motion vectors.
The general structure of the invention is shown in Figs. 5 and 6. As shown in Fig. 5, a two-stage hierarchical processor structure is employed, and as shown in Fig. 6, a two-stage hierarchical search method is employed.
The current macroblock data bus (CMB data bus) 205 is used to feed the luminance data of the current macroblock (CMB) to the hierarchical search unit 201 and the fine search/reconstruction unit 221. This bus also provides the luminance and chrominance data of the CMB to the fine search/reconstruction unit.
The hierarchical search unit 201 shown in the figure normally performs its search operation using downsampled CMB data. The user can select the degree of downsampling of the data, from a maximum of 4:1 to a minimum of 1:1 (that is, no downsampling) in the horizontal direction. The number of such units used can be varied (1, 2, or 4) according to the required search range. The hierarchical search unit 201 stores and fetches the luminance data of I-frames and P-frames in the hierarchical search memory. The size of the hierarchical search memory depends on the degree to which the picture data is downsampled. If the user selects downsampling, the stored luminance search data corresponds in volume to the downsampled current macroblock (CMB) data at the input. When the search finishes, the hierarchical search unit outputs, over the best match difference/offset bus, the best match search result for the given current macroblock (CMB) based on the minimum absolute difference, together with its offset relative to the current macroblock (CMB) position. The above description is in terms of luminance, but chrominance and/or combined luminance and chrominance data can also be used.
The fine search/reconstruction unit 221 shown in Figs. 5, 6, and 8 can operate either standalone (that is, with no additional hierarchical search unit) for IP encoding, or with additional hierarchical search units for IPB encoding. This unit 221 uses the non-downsampled luminance data of the current macroblock (CMB) to perform its search operation against reconstructed previous and/or future I-frame and P-frame data stored in the fine search memory. When the search finishes, the fine search/reconstruction unit outputs on the DIFF/QXFRM data bus 231 either the luminance and chrominance pel values of the current macroblock (CMB), for intra coding, or, for non-intra coding, the difference data obtained by subtracting the luminance and chrominance pels of the best match refined macroblock (RMB) from those of the current macroblock (CMB). In addition, when non-intra difference data is output, a motion vector corresponding to the position of the best match reference macroblock (RMB) relative to the current macroblock (CMB) is output on the motion vector bus (MV bus) 241.
After the output intra data or non-intra difference data has been discrete cosine transformed (DCT) and quantized, the transformed luminance and chrominance blocks are fed back into the fine search/reconstruction unit over the DIFF/QXFRM data bus 231, so that the fine search/reconstruction unit 221 can correctly reconstruct the I- and P-frame data that is output to the fine search memory. An extended pipeline is used within each unit in order to meet the performance requirements of the real-time encoding environment.
The overall search strategy adopted by the motion estimation architecture disclosed herein is divided into the pipelined stages shown in Figs. 6 through 8.
As shown in Figs. 6 and 8, a typical search is performed in the hierarchical search unit 201 using downsampled (averaged) full-pel values. After the best downsampled match has been determined using non-reconstructed current macroblock (CMB) data against the previous and/or future I- and P-frames, a non-downsampled full-pel search is performed in the fine search unit 221 using reconstructed fine search data around the offset of the best downsampled match. After the non-downsampled full-pel match has been determined, half-pel and, where available, dual-prime (DP) fine searches are performed using reconstructed fine data around the position of the best non-downsampled full-pel match. Based on the best match estimation result, determined by the minimum absolute difference, either the luminance and chrominance data of the original current macroblock (CMB) or the best match difference macroblock is output, according to whether the macroblock is intra or non-intra coded. In the non-intra case there are three different possible outcomes (a sketch of this coarse-to-fine hand-off follows the list):
CMB-RMB full-pel best match
CMB-RMB half-pel best match
CMB-RMB dual-prime best match
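The coarse-to-fine hand-off can be pictured with the following sketch, which reuses the illustrative `full_search` helper from the earlier example; the 4:1 ratio and the small +/-`ratio` refinement window are assumptions chosen for the example, not figures taken from the patent, and the inputs are assumed to be NumPy arrays with widths that are multiples of `ratio`.

```python
def hierarchical_then_fine(cmb, ref, cx, cy, ratio=4):
    """Coarse search on horizontally downsampled data over a wide window, then
    a non-downsampled full-pel refinement in a small window around the
    scaled-up coarse offset (assumes cx and the picture width are multiples
    of `ratio`; window sizes are illustrative)."""
    ds = lambda a: a.reshape(a.shape[0], -1, ratio).mean(axis=2)   # horizontal averaging
    # Stage 1: wide search on downsampled data.
    (dx, dy), _ = full_search(ds(cmb), ds(ref), cx // ratio, cy, search=16)
    # Stage 2: scale the horizontal offset back to full resolution and refine
    # with non-downsampled data in a small window around that offset.
    coarse_x, coarse_y = cx + dx * ratio, cy + dy
    (rx, ry), sad = full_search(cmb, ref, coarse_x, coarse_y, search=ratio)
    return (coarse_x + rx - cx, coarse_y + ry - cy), sad
```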
The hierarchical search unit is shown in Figs. 5 and 6, and its data flow is given in Fig. 7. As shown in the figure, the luminance data of the current macroblock (CMB) is held in the luma buffer 207, and it is here that any downsampling of the data is performed. To provide as much flexibility as possible to the user, the following downsampling options are available, depending on the required search range and the size of the search memory (a sketch of the horizontal averaging follows this list):
4:1 - four pels are stored for each pel row of the macroblock, each pel being the average of four consecutive pel values in the row. Each unit can then provide the largest search window (horizontal +/-64, vertical +/-56) while requiring the least search memory (0.25 MB for two search reference frames).
2:1 - eight pels are stored for each pel row of the macroblock, each pel being the average of two consecutive pel values in the row. Each unit can then provide the next largest search window (horizontal +/-32, vertical +/-32) while requiring the next largest search memory (0.5 MB for two search reference frames).
1:1 - sixteen pels are stored for each pel row (no downsampling). Each unit can then provide the smallest search window (horizontal +/-16, vertical +/-16) while requiring the most search memory (1 MB for two search reference frames).
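As an illustration of the horizontal averaging implied by these options, the following is a minimal sketch assuming simple arithmetic averaging of adjacent pels; the helper name is not from the patent.

```python
import numpy as np

def downsample_row(row16, ratio):
    """Average `ratio` horizontally adjacent pels of a 16-pel macroblock row:
    ratio=4 keeps 4 pels, ratio=2 keeps 8, ratio=1 keeps all 16."""
    return np.asarray(row16, dtype=float).reshape(-1, ratio).mean(axis=1)

# downsample_row(range(16), 4) -> array([ 1.5,  5.5,  9.5, 13.5])
```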
The downsampled or non-downsampled CMB data is output by the luma buffer 207 to the four field search units 301, 303, 305, and 307, as shown in Fig. 7. For I- and P-pictures, the current macroblock (CMB) data is also output over the search data bus to the hierarchical search memory. Note that current macroblock (CMB) data of B-pictures is never output to the hierarchical search memory, since the MPEG2 standard does not allow B-pictures to be used as reference frames. The search memory data for all the macroblocks contained in the search window is also fed to the four field search units. When only one hierarchical search unit is used, the search data is fetched such that the search macroblock (SMB) located at the center of the search window is at the same position as the macroblock being searched. When two or four hierarchical search units are used, the search data is fetched such that the offset between the search macroblock located at the center of the combined search window of all the units and the current macroblock (CMB) position equals the average motion vector of the previous picture.
As shown in Fig. 7, the hierarchical search unit performs field searches. The f1/f1 field search unit 301 searches the odd lines of the search data with the odd lines of the current macroblock (CMB). The f2/f2 field search unit 303 searches the even lines of the search data with the even lines of the current macroblock (CMB). The f1/f2 field search unit 305 searches the even lines of the search data with the odd lines of the current macroblock (CMB). The f2/f1 field search unit 307 searches the odd lines of the search data with the even lines of the current macroblock (CMB).
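A sketch of how the four field-parity differences might be formed from the interleaved lines follows; row indexing is 0-based, so "field 1" here means lines 0, 2, 4, ... and "field 2" means lines 1, 3, 5, ..., and the function name is an assumption for illustration.

```python
import numpy as np

def field_sads(cmb, smb):
    """Four field-parity SADs between the current macroblock (cmb) and one
    search macroblock (smb): field 1 = lines 0, 2, 4, ...; field 2 = lines
    1, 3, 5, ..."""
    c, s = cmb.astype(int), smb.astype(int)
    f1c, f2c, f1s, f2s = c[0::2], c[1::2], s[0::2], s[1::2]
    sad = lambda a, b: int(np.abs(a - b).sum())
    return {"f1/f1": sad(f1c, f1s),   # same-parity matches
            "f2/f2": sad(f2c, f2s),
            "f1/f2": sad(f1c, f2s),   # cross-parity matches
            "f2/f1": sad(f2c, f1s)}
```

The two frame results described next are then obtained by summing f1/f1 with f2/f2, and f1/f2 with f2/f1.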
For each of the differences output by these units, two additional frame search results are obtained by combining the results of the f1/f1 and f2/f2 field searches and the results of the f1/f2 and f2/f1 field searches, and every result is fed to the best match result selection unit 311. The first operation performed by unit 311 is to add a weighting factor, referred to as a base weighting, to each result. The base weighting value varies with the offset of the search macroblock (SMB) position relative to the average motion of the previous picture. The further a given search macroblock is displaced from the current macroblock (CMB) position relative to the average motion vector of the previous picture, the larger the base weighting value added to the result for that search position. In this way the search tends to select the SMB position that most closely follows the average motion trajectory of the previous picture.
The number of results output by the unit on the best match difference/offset bus depends on the format of the picture being searched. For a frame (progressive) format search, five results are output: the four best match field search results (f1/f1, f2/f2, f1/f2, f2/f1) and the best frame search result for the current macroblock (the minimum of the f1/f1+f2/f2 difference and the f1/f2+f2/f1 difference). For a field (interlaced) format search, two results are output: the best result with the same parity as the current macroblock (CMB) (minimum f1/f1+f2/f2 difference) and the best result with the opposite parity to the current macroblock (minimum f1/f2+f2/f1 difference).
In addition, when a search operation is performed for a B-picture, two such sets of results are produced (one set for the search against the earlier reference picture and one set for the search against the later reference picture). Besides the minimum absolute difference, the offset of the SMB that produced the minimum is also output.
As noted above, multiple hierarchical search units can be used to enlarge the search window. When two search units are used, the maximum search window available with 0.5 MB of search memory is horizontal +/-128, vertical +/-56, or horizontal +/-64, vertical +/-112. When the maximum of four units is used, the maximum search window available with 1 MB of search memory is horizontal +/-128, vertical +/-112. When multiple hierarchical search units are used, the best match difference/offset results are passed from unit to unit in daisy-chain fashion. In this arrangement, the first sending unit at the head of the daisy chain passes its absolute difference and offset results to the first receiving unit. The first receiving unit compares its own search results with the results received from the first sending unit and passes the minimum absolute difference and offset results on to the second receiving unit. The process continues until the last unit in the chain passes the final minimum absolute difference and offset results to the fine search/reconstruction unit.
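The daisy-chain comparison amounts to propagating a running minimum of (absolute difference, offset) pairs along the chain. A minimal sketch, assuming each unit's local best result is already computed and the function name is illustrative:

```python
def daisy_chain_best(unit_results):
    """Propagate the running minimum (absolute difference, offset) pair along
    a daisy chain of hierarchical search units; each unit compares what it
    received with its own local best and forwards the smaller."""
    best = None
    for local in unit_results:        # in chain order: first sender ... last unit
        if best is None or local[0] < best[0]:
            best = local
    return best                        # handed on to the fine search/reconstruction unit

# Example, (SAD, (dx, dy)) per unit:
# daisy_chain_best([(5120, (12, -4)), (4875, (48, 8)), (6010, (-60, 0))])
# -> (4875, (48, 8))
```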
The fine search/reconstruction unit is shown in Figs. 5, 6, and 8, and its data flow is given specifically in Fig. 8. As shown in the figure, the luminance and chrominance data of the current macroblock (CMB) is received from the CMB data bus 205 and stored in the luma/chroma buffer 207. The luminance data received is identical to that received by the hierarchical search unit described above. For the most efficient pipelining of the motion estimation process, the buffer is designed to hold the luminance data of two macroblocks and the chrominance data of one macroblock.
The first step of the fine motion estimation stage is performed in the full-resolution (FR) unit 321. This unit fetches the luminance data of the current macroblock (CMB) from the luma/chroma buffer 207 and, at the same time, fetches from the fine search memory, via the MC (memory controller) unit 301, the luminance data of the reference macroblocks (RMB) belonging to the full-pel refined search window. The control information that the full-resolution (FR) unit 321 needs to perform the fine data fetch (the address and the fetch size) is determined by the motion estimation processor (MEPROC) unit 331 according to whether the operation in progress is hierarchical or non-hierarchical (that is, with no hierarchical search unit). When operating in non-hierarchical mode, the motion estimation processor (MEPROC) 331 centers the full-pel fine search around the current macroblock (CMB) position. When operating in hierarchical search mode, the motion estimation processor (MEPROC) 331 uses the hierarchical search unit results received over the best match difference/offset bus 330 to center the full-pel fine search around the offset position. To satisfy real-time performance requirements, the number of searches performed and the types and sizes of the search windows vary according to the search mode (hierarchical or non-hierarchical), the picture structure and type, and the motion estimation options selected by the user. Table 1 summarizes this information. Note that a motion estimation search is also performed for I-pictures, to produce error concealment motion vectors which the user can choose to insert into the compressed bit stream.
In Table 1, Hier denotes hierarchical search mode; Non-Hier denotes non-hierarchical search mode; DP denotes dual-prime motion estimation; xRef specifies whether 1 (opposite parity) or 2 (same and opposite parity) reference fields are searched; OP denotes the field data of the reference macroblock (RMB) with the parity opposite to that of the current macroblock (CMB); SP denotes the field data of the reference macroblock (RMB) with the same parity as the current macroblock (CMB); (PR) denotes the fine search data of the previous picture stored in the fine search memory; (FR) denotes the fine search data of the future picture stored in the fine search memory; (BR) denotes the bidirectional interpolation (average) of the previous and future fine search data stored in the fine search memory; f1/f1 denotes the odd fine data lines used to search the odd field lines of the current macroblock (CMB); f1/f2 denotes the even fine data lines used to search the odd field lines of the current macroblock (CMB); f2/f1 denotes the odd fine data lines used to search the even field lines of the current macroblock (CMB); f2/f2 denotes the even fine data lines used to search the even field lines of the current macroblock (CMB); f1/fx denotes the odd or even fine data lines used to search the odd field lines of the current macroblock (CMB), according to whether the f1/f1 or the f1/f2 hierarchical search unit result produced the better match; and f2/fx denotes the odd or even fine data lines used to search the even field lines of the current macroblock (CMB), according to whether the f2/f1 or the f2/f2 hierarchical search unit result produced the better match. As the absolute difference is determined for each search position, a base weighting factor is added to each result, in the same way as described above for the hierarchical search unit weighting. The final best match result for each type of search is determined by the minimum absolute difference plus the base weighting.
When the search operation finishes, the FR unit outputs the CMB data together with enough fine data around each best match RMB to support the eight half-pel macroblock searches. For interlaced pictures, one (OP field) or two (SP field, OP field) best reference macroblock (RMB) search regions are output; for progressive pictures, two best match field reference macroblock (RMB) search regions (best CMB f1 match, best CMB f2 match) and one best reference macroblock (RMB) frame search region are output. Note that a 44-bit bus is used to transfer the data of the best match reference macroblock (RMB) search regions, because when bidirectional reference macroblock (RMB) data produces the best match in a B-picture, each reference macroblock (RMB) best match pel value is represented by 11 bits (see U.S. Patent Application Serial No. 08/411,100 and U.S. Patent Application Serial No. 08/602,472, incorporated herein by reference). In addition, the best match absolute difference and offset results of each best match RMB search region are output to the MEPROC unit.
The second fine step of motion estimation is performed in the half-resolution (HR) unit 323. This unit performs a fine search over up to eight half-pel reference macroblocks (RMBs) around the best match full-pel reference macroblock (RMB) determined by the full-resolution (FR) unit 321. When the best match half-pel reference macroblock (RMB) position (that is, the position producing the minimum absolute difference) has been determined for a given search operation, the best match absolute difference and its corresponding half-pel offset are output to the motion estimation processor (MEPROC) unit 331. The motion estimation processor (MEPROC) unit 331 then compares the best match absolute differences received from the full-resolution (FR) unit 321 and the half-resolution (HR) unit 323, and instructs the half-resolution (HR) unit 323 to output the full-pel or half-pel reference macroblock (RMB) luminance data that produced the minimum absolute difference for each search operation performed. The half-resolution (HR) unit outputs this data, together with the corresponding current macroblock (CMB) data, to the dual-prime (DP) unit.
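Half-pel candidates around a full-pel best match are conventionally formed by averaging the bracketing full-pel values. The sketch below illustrates that construction and the eight-neighbour comparison; it is not taken from the patent and assumes the full-pel position lies at least one pel inside the picture border.

```python
import numpy as np

def half_pel_block(ref, x, y, hx, hy, bh=16, bw=16):
    """Block at half-pel offset (hx, hy), each in {-1, 0, +1} half-pel units,
    around the full-pel position (x, y), formed by averaging the bracketing
    full-pel blocks (the usual bilinear half-pel construction)."""
    xlo, ylo = x + (hx // 2), y + (hy // 2)             # floor of the half-pel position
    xs = [xlo] if hx == 0 else [xlo, xlo + 1]
    ys = [ylo] if hy == 0 else [ylo, ylo + 1]
    blocks = [ref[yy:yy + bh, xx:xx + bw].astype(float) for yy in ys for xx in xs]
    return sum(blocks) / len(blocks)

def refine_half_pel(cmb, ref, x, y):
    """Return the (hx, hy) of the best of the eight half-pel neighbours of the
    full-pel match at (x, y), judged by SAD."""
    c = cmb.astype(float)
    offsets = [(hx, hy) for hy in (-1, 0, 1) for hx in (-1, 0, 1) if (hx, hy) != (0, 0)]
    return min(offsets, key=lambda o: np.abs(c - half_pel_block(ref, x, y, *o)).sum())
```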
A further fine motion estimation step is performed in the dual-prime (DP) unit 325. This unit can be configured to perform dual-prime fine processing using the current macroblock (CMB) and reference macroblock (RMB) data from either the full-resolution (FR) unit 321 or the half-resolution (HR) unit 323. In addition, for interlaced (field) pictures, the unit can be further configured to use the reference macroblock (RMB) of the same or opposite parity when two reference fields are provided. The default mode is to perform dual-prime motion estimation using the current macroblock (CMB) and reference macroblock (RMB) data of the full-resolution (FR) unit 321; this default has two advantages:
First, performance is optimized, because the search operations of the half-resolution (HR) unit 323 and the dual-prime (DP) unit 325 can proceed in parallel.
Second, for progressive (frame) pictures, an invalid case is eliminated in which the frame best match of the half-resolution (HR) reference macroblock (RMB) would involve vertical interpolation between fields of opposite parity; this raises the probability of performing valid dual-prime fine processing for a given current macroblock (CMB) from 33% to 100%.
Based on the offset information received from the hierarchical search unit, the full-resolution (FR) unit 321, and the half-resolution (HR) unit 323 (whichever has been selected to provide the data for dual-prime fine processing to the DP unit 325), the motion estimation processor (MEPROC) 331 constructs the motion vector pointing to the dual-prime reference macroblock (RMB). The motion estimation processor (MEPROC) 331 then scales the motion vector, and the scaled motion vector is translated into the corresponding fine search memory locations, from which the additional luminance fine search data needed for dual-prime motion estimation can be fetched. Once the dual-prime best match has been found, the corresponding absolute difference and offset are output to the motion estimation processor (MEPROC) unit 331. The motion estimation processor (MEPROC) unit then decides, according to the picture structure, which of the three retained results produces the overall best match:
Progressive - the best match frame reference macroblock (RMB), the best match combined f1 and f2 field reference macroblocks (RMB), or the best match dual-prime reference macroblock (RMB).
Interlaced - the best match opposite-parity field reference macroblock (RMB), the best match same-parity field reference macroblock (RMB), or the best match dual-prime reference macroblock (RMB).
The motion estimation processor (MEPROC) 331 notifies the dual-prime (DP) unit 325 which reference macroblock (RMB) result to output to the FD unit 327. At this point the fine motion estimation process is complete.
The next unit, which begins the macroblock (MB) reconstruction process, is the FD unit 327. This unit collects the luminance data of the current macroblock (CMB) and the best match reference macroblock (RMB) from the dual-prime (DP) unit 325 and, at the same time, obtains the corresponding current macroblock (CMB) chrominance data from the luma/chroma buffer 207; for non-intra coded macroblocks the reference macroblock (RMB) chrominance data is also obtained from the fine search memory. Depending on the indication received from the motion estimation processor (MEPROC) 331 as to whether the current macroblock (CMB) is intra or non-intra coded, the luminance and chrominance data are handled differently in this unit. If the decision is intra (no motion), the FD unit outputs the luminance and chrominance data of the current macroblock (CMB) directly to the DIFF/QXFRM data bus 332 and sends all-zero ("00") reference macroblock (RMB) luminance and chrominance data to the MA (motion adjust) unit 329. If the decision is non-intra (motion), the FD unit 327 outputs the CMB-RMB luminance and chrominance differences to the DIFF/QXFRM data bus and, at the same time, sends the luminance and chrominance data of the selected reference macroblock (RMB) to the motion adjust (MA) unit 329. In the non-intra case, the motion estimation processor (MEPROC) unit 331 initializes the fine memory pointers in the FD unit 327 so that the required reference macroblock (RMB) chrominance data can be fetched and the CMB-RMB chrominance difference can be calculated. Note that the FD unit is responsible for the correct arbitration of the DIFF/QXFRM data bus 332.
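The intra/non-intra branching of the FD unit can be summarized by the following sketch; the function name and return convention are assumptions made for illustration only.

```python
import numpy as np

def fd_output(cmb, rmb, intra):
    """FD-unit style branching: intra macroblocks pass the CMB through and hand
    an all-zero reference block to the motion adjust stage; non-intra
    macroblocks output the CMB-RMB difference and hand on the selected RMB."""
    if intra:
        return cmb.astype(int), np.zeros_like(cmb, dtype=int)   # to DIFF/QXFRM, to MA
    return cmb.astype(int) - rmb.astype(int), rmb.astype(int)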
This is done by ensuring that the luminance (or chrominance) data sent by this unit has been returned by the IQ (inverse quantization) unit 333 before the next chrominance (or luminance) data is sent. The data output by the FD unit 327 for a non-intra macroblock is followed by the motion vectors output by the motion estimation processor (MEPROC) unit 331. The motion estimation processor (MEPROC) unit outputs the motion vector data to the motion vector bus (MV bus).
After the data output by the FD unit has been discrete cosine transformed (DCT) and quantized, the data returns, in block form, to the IQ (inverse quantization) unit 333 for reconstruction (decoding) of the transformed and quantized data. IQ 333 and ID (inverse DCT) 335 perform the inverse quantization and inverse discrete cosine transformation defined by the MPEG-2 standard. This yields a lossy version of the original luminance and chrominance MB data output by the FD unit, which is exactly what an external MPEG-2 decoder would obtain by decompressing this macroblock. The lossy luminance and chrominance macroblock data is sent to the MA (motion adjust) unit, which adds this input to the reference macroblock (RMB) data previously received from the FD unit. For all I- and P-pictures processed, the summed luminance and chrominance macroblock data is output to the fine search memory through the MC unit.
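A minimal sketch of the motion adjust add-back, assuming 8-bit sample data; the helper name is illustrative rather than taken from the patent.

```python
import numpy as np

def motion_adjust(lossy_residual, rmb_from_fd):
    """MA-unit style add-back: the lossy residual returned through IQ/IDCT is
    added to the reference data previously supplied by the FD unit, giving the
    same reconstruction an MPEG-2 decoder would produce; for I- and P-pictures
    the result is written back to the fine search memory."""
    return np.clip(lossy_residual + rmb_from_fd, 0, 255).astype(np.uint8)
```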
Although the invention has been described with reference to certain preferred embodiments and examples, this is not intended to limit the scope of the invention, which is defined by the appended claims.
Claims (14)
1. A search method for digital video motion estimation, comprising the steps of: hierarchically searching pixels in at least one hierarchical search unit of a reference picture to find the best match macroblock corresponding to the current macroblock; constructing a motion vector for the offset between the best match macroblock and the current macroblock;
sending the motion vector from the at least one hierarchical search unit to a fine search unit; and performing a fine search around the offset of the best match block, the fine search comprising fine searches carried out in a full-resolution unit, a half-resolution unit, and a dual-prime unit.
2. The search method of claim 1, comprising performing a plurality of hierarchical searches in a plurality of hierarchical search units so as to increase the size of the search window.
3. The search method of claim 2, comprising passing the difference and offset of the best match macroblock from one search unit to another in daisy-chain fashion.
4. A search method for digital video motion estimation, comprising:
searching pixels in a reference picture using downsampled full-pel values to find the best match macroblock corresponding to the current macroblock, and constructing a motion vector for the offset between the best match macroblock and the current macroblock;
reconstructing fine search data around the offset of the best downsampled match;
thereafter performing a non-downsampled full-pel search, using the reconstructed fine search data, around the offset of the best match macroblock.
5. The search method of claim 4, comprising searching using 2:1 or 4:1 downsampled pel values.
6. The search method of claim 4, wherein the next picture is intra coded and the output is the original current macroblock.
7. The search method of claim 4, wherein the next picture is bidirectionally or predictively coded and the output is the best match difference macroblock.
8. The search method of claim 4, comprising searching for the best match macroblock using non-reconstructed reference macroblock data.
9. The search method of claim 8, comprising thereafter performing a half-pel search using fine data reconstructed around the offset of the best match non-downsampled full-pel macroblock.
10. The search method of claim 9, comprising performing a dual-prime search.
11. The search method of claim 1, wherein the hierarchical search comprises searching pixels in a reference picture field using a field search with even/even, odd/odd, even/odd, and odd/even search unit inputs.
12. The search method of claim 11, comprising searching using an interpolation of the best searches.
13. A search processor for digital video motion estimation, the search processor comprising:
a. a hierarchical search unit; and
b. a fine search unit connected to the hierarchical search unit by a best match difference/offset bus, comprising a full-pel searcher, a half-pel searcher, and a dual-prime searcher, the full-pel searcher being connected to the half-pel searcher and to the dual-prime searcher, and the half-pel searcher being connected to the dual-prime searcher.
14. The search processor of claim 13, wherein the hierarchical search unit comprises a downsampled full-pel searcher.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US745,584 | 1996-11-07 | ||
US745584 | 1996-11-07 | ||
US08/745,584 US6549575B1 (en) | 1996-11-07 | 1996-11-07 | Efficient, flexible motion estimation architecture for real time MPEG2 compliant encoding |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1182335A CN1182335A (en) | 1998-05-20 |
CN1196340C true CN1196340C (en) | 2005-04-06 |
Family
ID=24997330
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB971200823A Expired - Fee Related CN1196340C (en) | 1996-11-07 | 1997-10-06 | Efficient, flexible motion estimation architecture for real time MPEG2 compliant encoding |
Country Status (5)
Country | Link |
---|---|
US (1) | US6549575B1 (en) |
JP (1) | JPH10150666A (en) |
KR (1) | KR100294999B1 (en) |
CN (1) | CN1196340C (en) |
TW (1) | TW339495B (en) |
Families Citing this family (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6786420B1 (en) | 1997-07-15 | 2004-09-07 | Silverbrook Research Pty. Ltd. | Data distribution mechanism in the form of ink dots on cards |
US6618117B2 (en) | 1997-07-12 | 2003-09-09 | Silverbrook Research Pty Ltd | Image sensing apparatus including a microcontroller |
US6690419B1 (en) | 1997-07-15 | 2004-02-10 | Silverbrook Research Pty Ltd | Utilising eye detection methods for image processing in a digital image camera |
US7110024B1 (en) | 1997-07-15 | 2006-09-19 | Silverbrook Research Pty Ltd | Digital camera system having motion deblurring means |
US6879341B1 (en) | 1997-07-15 | 2005-04-12 | Silverbrook Research Pty Ltd | Digital camera system containing a VLIW vector processor |
US6948794B2 (en) | 1997-07-15 | 2005-09-27 | Silverbrook Reserach Pty Ltd | Printhead re-capping assembly for a print and demand digital camera system |
US6624848B1 (en) | 1997-07-15 | 2003-09-23 | Silverbrook Research Pty Ltd | Cascading image modification using multiple digital cameras incorporating image processing |
IL122299A (en) * | 1997-11-25 | 2003-11-23 | Broadcom Corp | Video encoding device |
KR20000014769A (en) * | 1998-08-22 | 2000-03-15 | 구자홍 | Movement presuming method of an image encoding device |
AUPP702098A0 (en) | 1998-11-09 | 1998-12-03 | Silverbrook Research Pty Ltd | Image creation method and apparatus (ART73) |
AUPQ056099A0 (en) | 1999-05-25 | 1999-06-17 | Silverbrook Research Pty Ltd | A method and apparatus (pprint01) |
US6968009B1 (en) * | 1999-11-12 | 2005-11-22 | Stmicroelectronics, Inc. | System and method of finding motion vectors in MPEG-2 video using motion estimation algorithm which employs scaled frames |
KR100677082B1 (en) * | 2000-01-27 | 2007-02-01 | 삼성전자주식회사 | Motion estimator |
KR100727910B1 (en) * | 2000-10-11 | 2007-06-13 | 삼성전자주식회사 | Method and apparatus for motion estimation of hybrid type |
KR100407691B1 (en) * | 2000-12-21 | 2003-12-01 | 한국전자통신연구원 | Effective Motion Estimation for hierarchical Search |
US20020136302A1 (en) * | 2001-03-21 | 2002-09-26 | Naiqian Lu | Cascade window searching method and apparatus |
US7694224B2 (en) * | 2001-05-31 | 2010-04-06 | International Business Machines Corporation | Location predicative restoration of compressed images stored on a hard disk drive with soft and hard errors |
US20030059089A1 (en) * | 2001-09-25 | 2003-03-27 | Quinlan James E. | Block matching at the fractional pixel level for motion estimation |
JP4015934B2 (en) | 2002-04-18 | 2007-11-28 | 株式会社東芝 | Video coding method and apparatus |
KR100699821B1 (en) * | 2002-07-22 | 2007-03-27 | 삼성전자주식회사 | Method for high speed motion estimation using variable search window |
WO2004012460A1 (en) * | 2002-07-29 | 2004-02-05 | Matsushita Electric Industrial Co., Ltd. | Motion vector detection device and motion vector detection method |
JP4841101B2 (en) * | 2002-12-02 | 2011-12-21 | ソニー株式会社 | Motion prediction compensation method and motion prediction compensation device |
US8761252B2 (en) * | 2003-03-27 | 2014-06-24 | Lg Electronics Inc. | Method and apparatus for scalably encoding and decoding video signal |
KR20060109247A (en) | 2005-04-13 | 2006-10-19 | 엘지전자 주식회사 | Method and apparatus for encoding/decoding a video signal using pictures of base layer |
KR20060105409A (en) * | 2005-04-01 | 2006-10-11 | 엘지전자 주식회사 | Method for scalably encoding and decoding video signal |
FR2866737B1 (en) * | 2004-02-25 | 2006-11-17 | Nextream France | DEVICE AND METHOD FOR PRE-PROCESSING BEFORE ENCODING AN IMAGE SEQUENCE |
EP1763252B1 (en) * | 2004-06-29 | 2012-08-08 | Sony Corporation | Motion prediction compensation method and motion prediction compensation device |
US8462850B2 (en) * | 2004-07-02 | 2013-06-11 | Qualcomm Incorporated | Motion estimation in video compression systems |
US20060165162A1 (en) * | 2005-01-24 | 2006-07-27 | Ren-Wei Chiang | Method and system for reducing the bandwidth access in video encoding |
US20060215755A1 (en) * | 2005-03-24 | 2006-09-28 | Mediatek Incorporation | Video encoding methods and systems for battery-powered apparatus |
US8660180B2 (en) * | 2005-04-01 | 2014-02-25 | Lg Electronics Inc. | Method and apparatus for scalably encoding and decoding video signal |
EP1880553A4 (en) * | 2005-04-13 | 2011-03-02 | Lg Electronics Inc | Method and apparatus for decoding video signal using reference pictures |
US20060256864A1 (en) * | 2005-05-13 | 2006-11-16 | Mediatek Incorporation | Motion estimation methods and systems in video encoding for battery-powered appliances |
US8755434B2 (en) * | 2005-07-22 | 2014-06-17 | Lg Electronics Inc. | Method and apparatus for scalably encoding and decoding video signal |
US20090119454A1 (en) * | 2005-07-28 | 2009-05-07 | Stephen John Brooks | Method and Apparatus for Video Motion Process Optimization Using a Hierarchical Cache |
US8116371B2 (en) * | 2006-03-08 | 2012-02-14 | Texas Instruments Incorporated | VLC technique for layered video coding using distinct element grouping |
WO2009032255A2 (en) * | 2007-09-04 | 2009-03-12 | The Regents Of The University Of California | Hierarchical motion vector processing method, software and devices |
US8565310B2 (en) * | 2008-01-08 | 2013-10-22 | Broadcom Corporation | Hybrid memory compression scheme for decoder bandwidth reduction |
US20090207915A1 (en) * | 2008-02-15 | 2009-08-20 | Freescale Semiconductor, Inc. | Scalable motion search ranges in multiple resolution motion estimation for video compression |
CN101272498B (en) * | 2008-05-14 | 2010-06-16 | 杭州华三通信技术有限公司 | Video encoding method and device |
KR101390620B1 (en) * | 2010-03-31 | 2014-04-30 | 인텔 코포레이션 | Power efficient motion estimation techniques for video encoding |
CN110741637B (en) * | 2017-04-28 | 2023-10-03 | 阿斯卡瓦公司 | Method for simplifying video data, computer readable storage medium and electronic device |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5414469A (en) * | 1991-10-31 | 1995-05-09 | International Business Machines Corporation | Motion video compression system with multiresolution features |
US5412435A (en) * | 1992-07-03 | 1995-05-02 | Kokusai Denshin Denwa Kabushiki Kaisha | Interlaced video signal motion compensation prediction system |
US5448310A (en) * | 1993-04-27 | 1995-09-05 | Array Microsystems, Inc. | Motion estimation coprocessor |
JPH0746457A (en) * | 1993-07-31 | 1995-02-14 | Sony Corp | Motion quantity detector and motion quantity detection method |
JPH07154801A (en) * | 1993-11-29 | 1995-06-16 | Ricoh Co Ltd | Hierarchical motion vector detection method |
US5500678A (en) * | 1994-03-18 | 1996-03-19 | At&T Corp. | Optimized scanning of transform coefficients in video coding |
US5526054A (en) * | 1995-03-27 | 1996-06-11 | International Business Machines Corporation | Apparatus for header generation |
US5694170A (en) * | 1995-04-06 | 1997-12-02 | International Business Machines Corporation | Video compression using multiple computing agents |
US5761398A (en) | 1995-12-26 | 1998-06-02 | C-Cube Microsystems Inc. | Three stage hierarchal motion vector determination |
US5719632A (en) * | 1996-01-25 | 1998-02-17 | Ibm Corporation | Motion video compression system with buffer empty/fill look-ahead bit allocation |
JP3297293B2 (en) * | 1996-03-07 | 2002-07-02 | 三菱電機株式会社 | Video decoding method and video decoding device |
-
1996
- 1996-11-07 US US08/745,584 patent/US6549575B1/en not_active Expired - Lifetime
-
1997
- 1997-07-14 TW TW086109922A patent/TW339495B/en not_active IP Right Cessation
- 1997-09-25 KR KR1019970048684A patent/KR100294999B1/en not_active IP Right Cessation
- 1997-10-06 CN CNB971200823A patent/CN1196340C/en not_active Expired - Fee Related
- 1997-10-13 JP JP9278522A patent/JPH10150666A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US6549575B1 (en) | 2003-04-15 |
KR100294999B1 (en) | 2001-11-14 |
TW339495B (en) | 1998-09-01 |
CN1182335A (en) | 1998-05-20 |
JPH10150666A (en) | 1998-06-02 |
KR19980041898A (en) | 1998-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN1196340C (en) | Efficient, flexible motion estimation architecture for real time MPEG2 compliant encoding | |
CN1640145B (en) | Video frequency coding method and device, data stream decoding method and device | |
US5844613A (en) | Global motion estimator for motion video signal encoding | |
US5768537A (en) | Scalable MPEG2 compliant video encoder | |
JP3072035B2 (en) | Two-stage video film compression method and system | |
KR100703760B1 (en) | Video encoding/decoding method using motion prediction between temporal levels and apparatus thereof | |
JP4429968B2 (en) | System and method for increasing SVC compression ratio | |
Chan et al. | Experiments on block-matching techniques for video coding | |
US20120076207A1 (en) | Multiple-candidate motion estimation with advanced spatial filtering of differential motion vectors | |
WO2004015998A1 (en) | System and method for rate-distortion optimized data partitioning for video coding using backward adaptation | |
CN1457605A (en) | Improved prediction structure of enhancement layer in fine granular scalability video coding technique | |
CN1636407A (en) | Totally embedded FGS video coding with motion compensation | |
CN1232125C (en) | Method for motion estimation (me) through discrete cosine transform (dct) and an apparatus therefor | |
CN1650634A (en) | Scalable wavelet based coding using motion compensated temporal filtering based on multiple reference frames | |
US20060250520A1 (en) | Video coding method and apparatus for reducing mismatch between encoder and decoder | |
CN1813479A (en) | Video coding in an overcomplete wavelet domain | |
KR20050061483A (en) | Scalable video encoding | |
US20050141616A1 (en) | Video encoding and decoding methods and apparatuses using mesh-based motion compensation | |
CN1792097A (en) | Video processing device with low memory bandwidth requirements | |
CN1656816A (en) | Improved efficiency fgst framework employing higher quality reference frames | |
CN1633814A (en) | Memory-bandwidth efficient FGS encoder | |
CN1848960A (en) | Residual coding in compliance with a video standard using non-standardized vector quantization coder | |
KR100566290B1 (en) | Image Scanning Method By Using Scan Table and Discrete Cosine Transform Apparatus adapted it | |
JP2004511978A (en) | Motion vector compression | |
CN1650633A (en) | Motion compensated temporal filtering based on multiple reference frames for wavelet based coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C06 | Publication | ||
PB01 | Publication | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20050406 Termination date: 20161006 |
|
CF01 | Termination of patent right due to non-payment of annual fee |