WO2021146933A1 - Next-generation loop filter implementations for adaptive resolution video coding - Google Patents


Info

Publication number
WO2021146933A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame, boundary, block, boundary strength, value
Application number
PCT/CN2020/073563
Other languages
French (fr)
Inventor
Tsuishan CHANG
Yuchen SUN
Jian Lou
Original Assignee
Alibaba Group Holding Limited
Application filed by Alibaba Group Holding Limited filed Critical Alibaba Group Holding Limited
Priority to CN202080083267.5A (patent CN114762326B)
Priority to PCT/CN2020/073563 (publication WO2021146933A1)
Publication of WO2021146933A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: ... using adaptive coding
    • H04N19/134: ... characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/102: ... characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117: Filters, e.g. for pre-processing or post-processing
    • H04N19/169: ... characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: ... the unit being an image region, e.g. an object
    • H04N19/176: ... the region being a block, e.g. a macroblock
    • H04N19/80: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82: ... involving filtering within a prediction loop

Definitions

  • one or more in-loop filters are conventionally applied to a frame after it has been reconstructed by an in-loop encoder and/or after it has been decoded by an in-loop decoder.
  • a deblocking filter is applied to a reconstructed frame output by an in-loop encoder.
  • Block-based coding algorithms as established by these standards tend to produce artifacts known as “blocking,” which may be ameliorated by a deblocking filter.
  • a sample adaptive offset (SAO) filter may be applied to the reconstructed frame output by the deblocking filter.
  • FIGS. 1A and 1B illustrate an example block diagram of a video encoding process and a video decoding process according to example embodiments of the present disclosure.
  • FIGS. 2A through 2D illustrate coding loop flows including different arrangements of an up-sampler and multiple in-loop filters according to example embodiments of the present disclosure.
  • FIGS. 3A and 3B illustrate deblocking methods performed by a deblocking filter according to the HEVC and VVC specifications and to example embodiments of the present disclosure.
  • FIGS. 4A and 4B illustrate flowcharts of deblocking filter logic according to example embodiments of the present disclosure.
  • FIG. 5 illustrates a deblocking filter computing bS values by reference.
  • FIGS. 6A and 6B illustrate determining whether the deblocking filter is active for a first four horizontal lines and a second four horizontal lines in an 8x8 pixel boundary according to example embodiments of the present disclosure.
  • FIG. 7 illustrates an example flowchart of a sample adaptive offset (SAO) filter method according to example embodiments of the present disclosure.
  • FIG. 8A illustrates an edge pattern made up of pixels including the current pixel p and two neighbor pixels at a 0-degree angle.
  • FIG. 8B illustrates an edge pattern made up of pixels including the current pixel p and two neighbor pixels at a 90-degree angle.
  • FIG. 8C illustrates an edge pattern made up of pixels including the current pixel p and two neighbor pixels at a 135-degree angle.
  • FIG. 8D illustrates an edge pattern made up of pixels including the current pixel p and two neighbor pixels at a 45-degree angle.
  • FIG. 9A illustrates an example flowchart of an adaptive loop filter (ALF) method according to example embodiments of the present disclosure.
  • FIG. 9B illustrates shapes of ALF filters.
  • FIGS. 9C through 9F illustrate subsampling over a sub-block for calculating vertical, horizontal, and two diagonal gradient values of a sub-block of the luma coding tree block (CTB) .
  • FIG. 10 illustrates an example system for implementing the processes and methods described above for implementing resolution-adaptive video coding in deblocking filters.
  • FIG. 11 illustrates an example system for implementing the processes and methods described above for implementing resolution-adaptive video coding in SAO filters.
  • FIG. 12 illustrates an example system for implementing the processes and methods described above for implementing resolution-adaptive video coding in ALF.
  • Systems and methods discussed herein are directed to integrating inter-frame adaptive resolution change with a video coding loop, and more specifically to in-loop filter methods which improve inter-frame adaptive resolution change and process reconstructed frames output by inter-frame adaptive resolution change.
  • a frame may be subdivided into macroblocks (MBs) each having dimensions of 16x16 pixels, which may be further subdivided into partitions.
  • a frame may be subdivided into coding tree units (CTUs) , the luma and chroma components of which may be further subdivided into coding tree blocks (CTBs) which are further subdivided into coding units (CUs) .
  • a frame may be subdivided into units of NxN pixels, which may then be further subdivided into subunits.
  • Each of these largest subdivided units of a frame may generally be referred to as a “block” for the purpose of this disclosure.
  • a block may be subdivided into partitions having dimensions in multiples of 4x4 pixels.
  • a partition of a block may have dimensions of 8x4 pixels, 4x8 pixels, 8x8 pixels, 16x8 pixels, or 8x16 pixels.
  • motion prediction coding formats may refer to data formats wherein frames are encoded with motion vector information and prediction information of a frame by the inclusion of one or more references to motion information and prediction units (PUs) of one or more other frames.
  • Motion information may refer to data describing motion of a block structure of a frame or a unit or subunit thereof, such as motion vectors and references to blocks of a current frame or of another frame.
  • PUs may refer to a unit or multiple subunits corresponding to a block structure among multiple block structures of a frame, such as an MB or a CTU, wherein blocks are partitioned based on the frame data and are coded according to established video codecs.
  • Motion information corresponding to a PU may describe motion prediction as encoded by any motion vector coding tool, including, but not limited to, those described herein.
  • frames may be encoded with transform information by the inclusion of one or more transformation units (TUs) .
  • Transform information may refer to coefficients representing one of several spatial transformations, such as a diagonal flip, a vertical flip, or a rotation, which may be applied to a sub-block.
  • Sub-blocks of CUs such as PUs and TUs may be arranged in any combination of sub-block dimensions as described above.
  • a CU may be subdivided into a residual quadtree (RQT) , a hierarchical structure of TUs.
  • the RQT provides an order for motion prediction and residual coding over sub-blocks of each level and recursively down each level of the RQT.
  • An encoder may obtain a current frame of a bitstream and derive a reconstructed frame.
  • Blocks of a reconstructed frame may be intra-coded or inter-coded.
  • a CTU may include as components a luma CTB and a chroma CTB.
  • a luma CTB of the CTU may be divided into luma sub-blocks.
  • a chroma CTB of the CTU may be divided into chroma sub-blocks, wherein each chroma sub-block may have four neighboring luma sub-blocks.
  • a neighboring luma sub-block may be a luma sub-block below, left of, right of, or above the chroma sub-block.
  • Luma and chroma sub-blocks may be partitioned in accordance with PUs and TUs as described above, that is, partitioned into sub-blocks having dimensions in multiples of 4x4 pixels.
  • FIGS. 1A and 1B illustrate an example block diagram of a video encoding process 100 and a video decoding process 118 according to an example embodiment of the present disclosure.
  • a picture from a video source 102 may be encoded to generate a reconstructed frame, and output the reconstructed frame at a destination such as a reference frame buffer 104 or a transmission buffer 116.
  • the picture may be input into a coding loop, which may include the steps of: inputting the picture into a first in-loop up-sampler or down-sampler 106; generating an up-sampled or down-sampled picture; inputting the up-sampled or down-sampled picture into a video encoder 108; generating a reconstructed frame based on a previous reconstructed frame of the reference frame buffer 104; inputting the reconstructed frame into one or more in-loop filters 110; and outputting the reconstructed frame from the loop, which may or may not include inputting the reconstructed frame into a second up-sampler or down-sampler 114, generating an up-sampled or down-sampled reconstructed frame, and outputting the up-sampled or down-sampled reconstructed frame into the reference frame buffer 104.
  • a coded frame is obtained from a source such as a bitstream 120.
  • a previous frame having position N–1 in the bitstream 120 may have a resolution larger than or smaller than a resolution of the current frame.
  • a next frame having position N+1 in the bitstream 120 may have a resolution larger than or smaller than the resolution of the current frame.
  • the current frame may be input into a coding loop, which may include the steps of inputting the current frame into a video decoder 122, inputting the current frame into one or more in-loop filters 124, inputting the current frame into a third in-loop up-sampler or down-sampler 128, generating an up-sampled or down-sampled reconstructed frame, and outputting the up-sampled or down-sampled reconstructed frame into the reference frame buffer 104.
  • the current frame may be output from the loop, which may include outputting the up-sampled or down-sampled reconstructed frame into a display buffer (not illustrated) .
  • the video encoder 108 and the video decoder 122 may each implement a motion prediction coding format, including, but not limited to, those coding formats described herein.
  • Generating a reconstructed frame based on a previous reconstructed frame of the reference frame buffer 104 may include inter-coded motion prediction as described herein, wherein the previous reconstructed frame may be an up-sampled or down-sampled reconstructed frame output by the in-loop up-sampler or down-sampler 114/128, and the previous reconstructed frame serves as a reference picture in inter-coded motion prediction as described herein.
  • a first up-sampler or down-sampler 106, a second up-sampler or down-sampler 114, and a third up-sampler or down-sampler 128 may each implement an up-sampling or down-sampling algorithm suitable at least for up-sampling or down-sampling, respectively, coded pixel information of a frame coded in a motion prediction coding format.
  • a first up-sampler or down-sampler 106, a second up-sampler or down-sampler 114, and a third up-sampler or down-sampler 128 may each implement an up-sampling or down-sampling algorithm further suitable for upscaling or downscaling, respectively, motion information such as motion vectors.
  • a frame serving as a reference picture in generating a reconstructed frame for the current frame may therefore be up-sampled or down-sampled in accordance with the resolution of the current frame relative to the resolutions of the previous frame and of the next frame.
  • the frame serving as the reference picture may be up-sampled in the case that the current frame has a resolution larger than the resolutions of either or both the previous frame and the next frame.
  • the frame serving as the reference picture may be down-sampled in the case that the current frame has a resolution smaller than the resolutions of either or both the previous frame and the next frame.
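The resampling rule above can be sketched in a few lines. This is a toy nearest-neighbour resampler standing in for the codec's actual interpolation filters; the function name `resample` and the frame representation (a list of pixel rows) are illustrative assumptions, not part of the disclosure.

```python
def resample(frame, out_h, out_w):
    """Nearest-neighbour resampling of a frame (list of pixel rows).

    Up-samples the reference when the current frame's resolution is larger,
    down-samples it when smaller, so the reference picture always matches
    the current frame's resolution before motion prediction.
    """
    in_h, in_w = len(frame), len(frame[0])
    return [[frame[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]
```

For example, a 2x2 reference up-sampled to 4x4 repeats each pixel in a 2x2 block; down-sampling the result recovers the original 2x2 grid.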
  • a frame may be, for example, down-sampled in the encoding process of the coding loop by a down-sampler 106, and then the frame may be up-sampled by an up-sampler 114 or an up-sampler 128 and output into a reference frame buffer 104.
  • a frame being down-sampled causes quality loss to the content of the frame, particularly high-frequency loss, for example, loss of sharp edges or fine patterns in the frame. This loss is not restored upon the subsequent up-sampling of the frame, resulting in a frame at its own original frame resolution but lacking high-frequency detail.
  • the use of a frame suffering from high-frequency loss as a reference frame in a reference frame buffer as described above leads to poor results for motion prediction of subsequent frames referencing the reference frame.
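The irreversible high-frequency loss described above can be demonstrated on a toy one-dimensional signal. The helpers `down2` and `up2` are hypothetical stand-ins (averaging decimation and sample repetition) for the codec's real resampling filters.

```python
def down2(sig):
    # Down-sample by a factor of 2: average neighbouring pairs
    # (a crude low-pass filter followed by decimation).
    return [(sig[2 * i] + sig[2 * i + 1]) / 2 for i in range(len(sig) // 2)]

def up2(sig):
    # Up-sample by a factor of 2 via sample repetition.
    return [s for v in sig for s in (v, v)]

sharp = [0, 0, 0, 10, 0, 0, 0, 0]   # an isolated fine detail (high frequency)
round_trip = up2(down2(sharp))       # -> [0, 0, 5, 5, 0, 0, 0, 0]
```

The isolated spike of amplitude 10 comes back smeared into two samples of amplitude 5: the round trip preserves resolution but not the sharp detail, which is exactly why such a frame makes a poor motion-prediction reference.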
  • in-loop filters may be conventionally applied to a reconstructed frame output from an encoder or a decoder.
  • in-loop filters may, at a further level of granularity, be applied before or applied after an up-sampler.
  • FIGS. 2A through 2D illustrate coding loop flows including different arrangements of an up-sampler and multiple in-loop filters, including, for example, a deblocking filter, a SAO filter, and ALF.
  • FIGS. 2A through 2D should be understood as illustrating receiving a frame that has been down-sampled during a video encoding process, as illustrated by FIG. 1A, and is to be up-sampled and output into a reference frame buffer.
  • the frame may be received during the video encoding process 100 from a video encoder 108 as illustrated by FIG. 1A or may be received during the video decoding process 118 from a video decoder 122 as illustrated by FIG. 1B.
  • the up-sampler 211 may receive a down-sampled frame output by a video encoder 108 in the case of a video encoding process 100 as illustrated by FIG. 1A, or a down-sampled frame output by a video decoder 122 in the case of a video decoding process 118 as illustrated by FIG. 1B.
  • the up-sampler 211 may up-sample the down-sampled frame and output the frame to the filters, including the deblocking filter 212, the SAO filter 213, and the ALF 214.
  • the encoder can analyze gradient, activity, and similar information of the frame, and utilize in-loop filter tools and parameters and coefficients thereof to evaluate image quality; furthermore, the encoder can transmit optimized parameters and/or coefficients of a deblocking filter, a SAO filter, and ALF to the decoder to enhance accuracy, objective quality, and subjective quality of reconstructed signals.
  • a deblocking filter 212 may filter a frame on a per-boundary per-CU basis in a coding order among CUs of the frame, such as a raster scan order wherein a first-coded CU is an uppermost and leftmost CU of the frame, according to video encoding standards. Within a frame, the deblocking filter 212 may filter both CUs of a luma CTB and a chroma CTB of the frame.
  • FIGS. 3A and 3B illustrate a deblocking method 300 performed by a deblocking filter 212.
  • the deblocking method 300 may be performed according to the HEVC specification or according to the VVC specification. Certain differences between the HEVC specification implementation and the VVC specification implementation thereof shall be noted herein for ease of understanding with reference to FIG. 3A and FIG. 3B, respectively, though this shall not be understood as being a comprehensive accounting of all such differences.
  • the deblocking filter 212 determines block and sub-block boundaries to filter.
  • the deblocking filter 212 may filter NxN pixel boundaries of sub-blocks of the CU.
  • the deblocking filter 212 may filter PU boundaries based on differences between motion vectors and reference frames of neighboring prediction sub-blocks. That is, the deblocking filter 212 may filter PU boundaries in the event that the difference in at least one motion vector component between blocks on different sides of the boundary is greater than or equal to a threshold of one sampled pixel.
  • the deblocking filter 212 may filter TU boundaries in the event that coefficients sampled from pixels of a transform sub-block on either side of the boundary are non-zero.
  • the deblocking filter 212 also filters those boundaries of the CU itself which coincide with outer PU and TU boundaries.
  • sub-block boundaries filtered by the deblocking filter may be at least 8x8 pixels in dimensions, such that boundaries of 4x4 sub-blocks, such as TU sub-blocks representing 4x4 transforms, are not filtered by the deblocking filter 212, reducing filter complexity.
  • the deblocking filter 212 may filter PU boundaries between PUs in addition to outer PU boundaries and may filter TU boundaries at an 8x8 pixel grid in the RQT level.
  • sub-block boundaries filtered by the deblocking filter 212 may also be 4x4 pixels in dimensions where a boundary to filter is a boundary of a luma CTB, including CU boundaries and transform sub-block boundaries.
  • Transform sub-block boundaries may include, for example, transform unit boundaries included by sub-block transform (“SBT”) and intra sub-partitioning (“ISP”) modes, and transforms due to implicit split of large CUs.
  • Sub-block boundaries filtered by the deblocking filter 212 may still be 8x8 pixels in dimensions where a boundary to filter is a prediction sub-block boundary.
  • Prediction sub-block boundaries may include, for example, prediction unit boundaries introduced by spatio-temporal motion vector prediction (“STMVP”), sub-block temporal motion vector prediction (“SbTMVP”), and affine motion prediction modes.
  • the deblocking filter 212 may filter SBT and ISP boundaries in the event that coefficients sampled from pixels of a transform sub-block on either side of the boundary are non-zero. Moreover, to facilitate concurrent computation of the deblocking filter 212 by parallel computing threads, in the event that the filtered boundary is also part of a STMVP, SbTMVP, or affine motion prediction sub-block, the deblocking filter 212 filters at most five samples on one side of the filtered boundary.
  • the deblocking filter 212 may filter STMVP, SbTMVP, and affine motion prediction boundaries based on differences between motion vectors and reference frames of neighboring prediction sub-blocks. That is, the deblocking filter 212 may filter PU boundaries in the event that the difference in at least one motion vector component between blocks on different sides of the boundary is greater than or equal to a threshold of half a sampled luma pixel. Thus, blocking artifacts originating from boundaries between inter prediction blocks having only a small difference in motion vectors are filtered.
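The half-luma-sample threshold above (8 in units of 1/16 luma samples) reduces to a simple component-wise comparison. The function name and tuple representation of motion vectors are illustrative assumptions.

```python
MV_UNITS_PER_LUMA_SAMPLE = 16  # motion vectors stored in 1/16 luma-sample units

def mv_diff_triggers_filtering(mv_p, mv_q, threshold_sixteenths=8):
    """True if any motion-vector component of the blocks on either side of
    a prediction sub-block boundary differs by at least half a luma sample
    (8/16), the condition described above for filtering the boundary.

    mv_p and mv_q are (mvx, mvy) tuples in 1/16 luma-sample units.
    """
    return any(abs(a - b) >= threshold_sixteenths for a, b in zip(mv_p, mv_q))
```

So a horizontal difference of exactly 8 sixteenths (half a sample) triggers filtering, while differences of 7 sixteenths in both components do not.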
  • in the case that four pixels are sampled between the filtered boundary and a transform block boundary, the filtered boundary is filtered by at most one sampled pixel on each side; in the case that eight pixels are sampled between the filtered boundary and a transform block boundary, the filtered boundary is filtered by at most two sampled pixels on each side; and in the case that any other number of pixels are sampled between the filtered boundary and a transform block boundary, the filtered boundary is filtered by at most three sampled pixels on each side.
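A minimal sketch of this sample-limit rule, assuming the pattern stated above: a distance of eight samples to the nearest transform block boundary limits filtering to two samples per side, any other distance to three, and (by the VVC-style convention this passage appears to follow) a distance of four limits it to one. The function name is hypothetical.

```python
def max_filter_len(dist_to_transform_edge):
    """Maximum number of samples the deblocking filter may modify on one
    side of a filtered boundary, limited by the distance (in samples) to
    the nearest transform block boundary."""
    if dist_to_transform_edge == 4:
        return 1
    if dist_to_transform_edge == 8:
        return 2
    return 3
```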
  • a deblocking filter 212 may first filter vertical edges of a CU to perform horizontal filtering, and then filter horizontal edges of a CU to perform vertical filtering.
  • a deblocking filter 212 filtering a boundary of a current CU of a frame may reference a block P and a block Q adjacent to the boundary on both sides. While the deblocking filter 212 performs horizontal filtering, a block P may be a block left of the boundary and a block Q may be a block right of the boundary. While the deblocking filter 212 performs vertical filtering, a block P may be a block above the boundary and a block Q may be a block below the boundary.
  • the deblocking filter 212 may identify a frame as having been down-sampled by determining that an inter-coded block P or block Q has a reference frame having a resolution different from a resolution of the current frame.
  • the deblocking filter 212 determines a boundary strength (bS) of a boundary being filtered.
  • a bS value may determine a strength of deblocking filtering to be applied by the deblocking filter to a boundary.
  • a bS value may be 0, 1, or 2, where 0 indicates no filtering to be applied; 1 indicates weak filtering to be applied; and 2 indicates strong filtering to be applied.
  • a bS value may be determined for a boundary having dimensions of 4x4 pixels, but mapped to a boundary having dimensions of 8x8 pixels. For an 8-pixel segment on a boundary in an 8x8 pixel grid, a bS value of the entire segment may be set to the larger of two bS values for two 4-pixel segments making up the 8-pixel segment.
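The 4-pixel-to-8-pixel mapping above is a pairwise maximum. This sketch assumes the per-4-pixel bS values along one boundary are collected in a flat list; the function name is illustrative.

```python
def map_bs_to_8px(bs_per_4px):
    """Map bS values computed on 4-pixel segments onto 8-pixel segments:
    each 8-pixel segment takes the larger bS of its two 4-pixel halves."""
    return [max(bs_per_4px[2 * i], bs_per_4px[2 * i + 1])
            for i in range(len(bs_per_4px) // 2)]
```

For example, 4-pixel segments with bS values [0, 2, 1, 1] yield 8-pixel segments with bS values [2, 1].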
  • the deblocking filter 212 may only determine a bS value of 2 in the case that block P or block Q is an intra-coded block.
  • the deblocking filter 212 may also determine a bS value of 2 in the case that block P or block Q is an inter-coded block rather than an intra-coded block, and has a resolution different from a resolution of the current frame. According to another example embodiment of the present disclosure, the deblocking filter 212 may determine a bS value of 1 in the case that block P or block Q is an inter-coded block rather than an intra-coded block, and has a resolution different from a resolution of the current frame.
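The bS decision of these embodiments can be sketched as follows. The function signature, the `None`-for-intra convention, and the `strong_on_res_change` flag (selecting between the bS = 2 and bS = 1 variants described above) are illustrative assumptions, not the disclosed implementation.

```python
def boundary_strength(p_intra, q_intra, p_ref_res, q_ref_res, cur_res,
                      strong_on_res_change=True):
    """Illustrative bS decision for a boundary between blocks P and Q.

    p_ref_res / q_ref_res are (width, height) resolutions of the reference
    frames of inter-coded blocks, or None for intra-coded blocks.
    """
    # Conventional rule: an intra-coded neighbour forces strong filtering.
    if p_intra or q_intra:
        return 2
    # Embodiment rule: an inter-coded neighbour whose reference frame has a
    # resolution different from the current frame raises bS to 2 (or 1).
    if (p_ref_res is not None and p_ref_res != cur_res) or \
       (q_ref_res is not None and q_ref_res != cur_res):
        return 2 if strong_on_res_change else 1
    return 0
```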
  • FIGS. 4A and 4B illustrate flowcharts of deblocking filter logic according to example embodiments of the present disclosure.
  • deblocking filter logic according to example embodiments of the present disclosure with reference to Tables 4, 5, 6, 7, 8, 9, and 10.
  • these example embodiments are based on deblocking filter logic according to the VVC specification.
  • the deblocking filter 212 receives at least the following inputs: a coordinate (xCb, yCb) locating an upper-left sample pixel of a current coding block relative to an upper-left sample of a current frame; a variable nCbW specifying the width of the current coding block; a variable nCbH specifying the height of the current coding block; a variable edgeType specifying whether a vertical edge (denoted by, for example, the value EDGE_VER) or a horizontal edge (denoted by, for example, the value EDGE_HOR) is filtered; a variable cIdx specifying the color component of the current coding block; and a two-dimensional array edgeFlags having dimensions (nCbW) x (nCbH).
  • the deblocking filter 212 sets a variable gridSize:
  • edgeType has value EDGE_VER:
  • xN is set equal to Max (0, (nCbW / gridSize) - 1)
  • edgeType has value EDGE_HOR:
  • bS values bS[xDi][yDj] are set as follows:
  • edgeType has value EDGE_VER
  • p0 is set to recPicture[xCb + xDi - 1][yCb + yDj]
  • q0 is set to recPicture[xCb + xDi][yCb + yDj].
  • edgeType has value EDGE_HOR
  • p0 is set to recPicture[xCb + xDi][yCb + yDj - 1]
  • q0 is set to recPicture[xCb + xDi][yCb + yDj]
  • bS values bS[xDi][yDj] are set as follows:
  • bS[xDi][yDj] is set to 2.
  • the block edge is also a transform block edge and a sampled pixel p0 or q0 is in a coding block with ciip_flag having value 1
  • bS[xDi][yDj] is set to 2.
  • bS[xDi][yDj] is set equal to 1.
  • bS[xDi][yDj] is set to 1.
  • edgeFlags[xDi][yDj] has value 2
  • bS[xDi][yDj] is set to 1:
  • a first coding sub-block containing the sampled pixel p0 and a second coding sub-block containing the sampled pixel q0 are both coded by IBC prediction mode, and an absolute difference between the horizontal or vertical component of the block vectors used in motion prediction of the two coding sub-blocks is greater than or equal to 8 in units of 1/16 luma sampled pixels; and/or:
  • determination of whether the reference pictures used for the two coding sub-blocks are same or different may be based only on which pictures are referenced, without regard to whether a prediction is formed using an index into reference picture list 0 or an index into reference picture list 1, and also without regard to whether the index position within a reference picture list is different;
  • a first motion vector is used in motion prediction of a first coding sub-block containing the sample p0 and a second motion vector is used in motion prediction of a second coding sub-block containing the sample q0, and an absolute difference between the horizontal component or vertical component of the first and the second motion vectors is greater than or equal to 8 in units of 1/16 luma sampled pixels; and/or:
  • a first and a second motion vector and a first and a second reference picture are used in motion prediction of a first coding sub-block containing the sample p0
  • a third and a fourth motion vector for the first and the second reference pictures are used in motion prediction of a second coding sub-block containing the sample q0
  • an absolute difference between the horizontal or vertical component of two respective motion vectors used in motion prediction of the two coding sub-blocks for either reference picture is greater than or equal to 8 in units of 1/16 luma sampled pixels; and/or:
  • a first and a second motion vector for a first reference picture are used in motion prediction of a first coding sub-block containing the sampled pixel p0
  • a third and a fourth motion vector for a second reference picture are used in motion prediction of a second coding sub-block containing the sampled pixel q0, and both following conditions are true:
  • An absolute difference between the horizontal or vertical component of a list 0 motion vector used in motion prediction of the two coding sub-blocks is greater than or equal to 8 in units of 1/16 luma sampled pixels, or an absolute difference between the horizontal or vertical component of a list 1 motion vector used in motion prediction of the two coding sub-blocks is greater than or equal to 8 in units of 1/16 luma sampled pixels;
  • An absolute difference between the horizontal or vertical component of a list 0 motion vector used in motion prediction of a first coding sub-block containing the sample p0 and a list 1 motion vector used in motion prediction of a second coding sub-block containing the sample q0 is greater than or equal to 8 in units of 1/16 luma sampled pixels, or an absolute difference between the horizontal or vertical component of a list 1 motion vector used in motion prediction of the first coding sub-block containing the sample p0 and a list 0 motion vector used in motion prediction of the second coding sub-block containing the sample q0 is greater than or equal to 8 in units of 1/16 luma sampled pixels.
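The bi-prediction case in the last two bullets (sub-blocks P and Q each using two motion vectors for different reference pictures) can be sketched as below. The function and argument names are illustrative; motion vectors are (mvx, mvy) tuples in 1/16 luma-sample units.

```python
def bidir_mv_bs_is_1(p_l0, p_l1, q_l0, q_l1, thr=8):
    """True when bS should be 1 per the two bulleted conditions: both the
    same-list differences (p_l0 vs q_l0, p_l1 vs q_l1) and the cross-list
    differences (p_l0 vs q_l1, p_l1 vs q_l0) reach the threshold of 8
    sixteenths (half a luma sample) in some component."""
    def exceeds(a, b):
        return any(abs(x - y) >= thr for x, y in zip(a, b))
    same_list = exceeds(p_l0, q_l0) or exceeds(p_l1, q_l1)
    cross_list = exceeds(p_l0, q_l1) or exceeds(p_l1, q_l0)
    return same_list and cross_list
```

Requiring both conditions avoids flagging a boundary merely because the two sub-blocks happen to store equivalent vectors in swapped reference lists.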
  • the deblocking filter 212 may determine a bS value of a block by referencing blocks left and above the block within the two pairs of blocks, thus reducing memory requirements for computation.
  • bS values determined in step 304 may subsequently be referenced by the deblocking filter 212 in step 312B to determine whether the deblocking filter 212 should apply strong filtering.
  • edgeFlags [xD i ] [yD j ] has value 2
  • a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sampled pixel p 0 or q 0 is different from a resolution of the current picture
  • bS [xD i ] [yD j ] is set to 2
  • Table 3 subsequently described with reference to step 312B is further modified as shown by Table 4.
  • the one or more of the following conditions wherein, if true, bS [xD i ] [yD j ] is set to 1 further includes: a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sample p 0 or q 0 being different than a resolution of the current picture, and Table 3 subsequently described with reference to step 312B is further modified as shown by Table 5.
  • edgeFlags [xD i ] [yD j ] has value 2
  • a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sampled pixel p 0 is different from a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sampled pixel q 0
  • bS [xD i ] [yD j ] is set to 2
  • Table 3 subsequently described with reference to step 312B is further modified as shown by Table 6.
  • a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sampled pixel p 0 or q 0 is lower than a resolution of the current picture, or; a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sample p 0 is different from a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sampled pixel q 0 .
  • the one or more of the following conditions wherein, if true, bS [xD i ] [yD j ] is set to 1 further includes: a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sample p 0 or q 0 being lower than a resolution of the current picture, and Table 3 subsequently described with reference to step 312B is further modified as shown by Table 8.
  • a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sampled pixel p 0 or q 0 is higher than a resolution of the current picture; or a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sampled pixel p 0 is different from a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sampled pixel q 0 .
  • the one or more of the following conditions wherein, if true, bS [xD i ] [yD j ] is set to 1 further includes: a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sample p 0 or q 0 being higher than a resolution of the current picture, and Table 3 subsequently described with reference to step 312B is further modified as shown by Table 10.
  • those example embodiments wherein bS [xD i ] [yD j ] is set to 2 may describe conditions wherein blocking artifacts are expected to be severe as a result of resolution differences, and thus the deblocking filter 212 should apply a strong filter.
  • Those example embodiments wherein bS [xD i ] [yD j ] is set to 1 may describe conditions wherein blocking artifacts are expected to be moderate as a result of resolution differences, and thus the deblocking filter 212 should not apply a strong filter.
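The resolution-aware boundary-strength rules above can be sketched as a small decision function. The block representation (dicts with `intra` and `ref_res` fields), the tuple comparison of resolutions, and the particular combination of conditions (one embodiment each for bS = 2 and bS = 1, ordered so every branch is reachable) are illustrative assumptions, not the full specification logic:

```python
def boundary_strength(p_blk, q_blk, cur_res):
    """Simplified bS decision for one boundary between blocks P and Q.

    Each block is a dict with keys 'intra' (bool) and 'ref_res' (a
    (width, height) tuple) -- the resolution of a reference picture
    used for its motion prediction. cur_res is the current picture's
    resolution. This sketches the resolution-aware conditions of the
    embodiments above, not the complete derivation.
    """
    # Intra-coded neighbors always receive the strongest filtering.
    if p_blk['intra'] or q_blk['intra']:
        return 2
    p_res, q_res = p_blk['ref_res'], q_blk['ref_res']
    # Embodiment: a reference resolution *lower* than the current
    # picture implies severe up-sampling artifacts, so bS is set to 2.
    # (Tuple '<' compares widths first; adequate for this sketch.)
    if p_res < cur_res or q_res < cur_res:
        return 2
    # Embodiment: P and Q predicted from references of different
    # resolutions -> moderate artifacts, bS = 1 (no strong filter).
    if p_res != q_res:
        return 1
    return 0
```

The ordering mirrors the embodiments' intent: severe-artifact conditions (bS = 2) are tested before moderate-artifact conditions (bS = 1).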
  • the deblocking filter 212 determines threshold values β and t C .
  • the threshold values β and t C may be utilized in the subsequent steps 308, 310, and 312 to control the strength of the deblocking filter 212.
  • the threshold values β and t C may be determined by lookup of corresponding values β′ and t C ′ from a table such as Table 1 below according to the HEVC specification, based on a value of a luma quantization parameter Q (also referred to as qP L ) .
  • Values of t C may be determined from values of t C ’ by the following equation:
  • Table 1 may be modified by extending Q values through to a maximum of 63, by extending β′ values as follows, and by replacing t C ′ values with the following (corresponding to Q values 0 through 63, in order) :
  • Q may be determined based on pixel samples from reconstructed luma sub-blocks of the current CU.
  • a value of β may be derived from β′ as follows:
  • a value of t C may be derived from t C ’ as follows:
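A minimal sketch of the threshold derivation described above, assuming the lookup-table structure it references. The `BETA_PRIME`/`TC_PRIME` entries below are placeholder values standing in for a fragment of Table 1 (they are not the actual table entries); the bit-depth scaling is the standard HEVC-style derivation of β from β′ and t C from t C ′:

```python
# Hypothetical excerpt standing in for Table 1; the real table covers
# the full Q range and its entries differ from these placeholders.
BETA_PRIME = {35: 34, 36: 36, 37: 38, 38: 40, 39: 42, 40: 44}
TC_PRIME   = {35: 4,  36: 4,  37: 5,  38: 5,  39: 6,  40: 6}

def derive_thresholds(q, bit_depth=8):
    """Look up beta' and tC' for luma QP q, then scale for bit depth.

    At 8-bit the scale factor is 1, so beta == beta' and tC == tC'.
    """
    scale = 1 << (bit_depth - 8)
    beta = BETA_PRIME[q] * scale
    tc = TC_PRIME[q] * scale
    return beta, tc
```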
  • the deblocking filter 212 applies an offset to the luma quantization parameter qP L . According to the VVC specification, this offset may take precedence over the effects of β and t C in controlling the strength of the deblocking filter 212.
  • An offset qpOffset may be derived from luma levels ( “LL” ) of pixel samples from luma sub-blocks of the current CU as follows:
  • FIGS. 6A and 6B, illustrated subsequently, show the coordinates of the particular p and q pixels from which luma levels are sampled.
  • a transfer function may be applied to derive the offset qpOffset.
  • a base value of qpOffset may be derived from a flag sps_ladf_lowest_interval_qp_offset in a slice header of the frame.
  • the slice header further contains flags specifying lower bounds of multiple luma intensity level intervals. For each of these intervals, if LL exceeds the lower bound set for the interval, the base value of qpOffset is offset by an offset value in the range of -63 to 63, inclusive, preset for the interval in an offset array recorded in the slice header.
  • qP L may be derived as follows:
  • Qp Q is a quantization parameter of a coding block containing the pixel q 0,0 and Qp P is a quantization parameter of a coding block containing the pixel p 0,0 , as FIGS. 6A and 6B illustrate subsequently.
  • qP L may be derived as follows:
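Assuming `qp_offset` has already been derived from the luma levels as described above, the qP L derivation can be sketched as the rounded average of the two blocks' quantization parameters plus the offset (a common HEVC/VVC-style formulation; the exact equation bodies are given in the specification):

```python
def derive_qpl(qp_p, qp_q, qp_offset=0):
    """Rounded average of the P-side and Q-side block QPs, plus the
    luma-level-dependent offset described above (assumed precomputed)."""
    return ((qp_p + qp_q + 1) >> 1) + qp_offset
```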
  • the deblocking filter 212 determines whether the deblocking filter 212 is active for a first four vertical lines of pixels or horizontal lines of pixels running across an 8x8 boundary, and whether the deblocking filter is active for a second four vertical lines of pixels or horizontal lines of pixels running across the 8x8 boundary.
  • the deblocking filter 212 determines whether the deblocking filter 212 is active for a first four vertical lines of pixels or horizontal lines of pixels running across an 8x8 or 4x4 boundary, and whether the deblocking filter is active for a second four vertical lines of pixels or horizontal lines of pixels running across the 8x8 or 4x4 boundary.
  • FIGS. 6A and 6B illustrate determining whether the deblocking filter 212 is active for a first four horizontal lines and a second four horizontal lines running across a vertical 8x8 boundary according to example embodiments of the present disclosure, the lines being numbered from 0 to 7; for vertical lines, whether the deblocking filter 212 is active may be derived similarly. For each line, six pixels, three on either side of the boundary, are sampled, as illustrated by FIGS. 6A and 6B.
  • the deblocking filter 212 samples the pixels p2 0 , p1 0 , p0 0 , q0 0 , q1 0 , and q2 0 of the first line and the pixels p2 3 , p1 3 , p0 3 , q0 3 , q1 3 , and q2 3 in the fourth line among the first four lines to determine whether the deblocking filter is active for the first four lines.
  • the deblocking filter 212 will be active for the first four lines, and furthermore, the following variables are also set as inputs for filters.
  • variable dE is set equal to 1.
  • the deblocking filter 212 will not be active for the first four lines.
  • the deblocking filter 212 also samples the pixels p2 4 , p1 4 , p0 4 , q0 4 , q1 4 , and q2 4 in the first line and the pixels p2 7 , p1 7 , p0 7 , q0 7 , q1 7 , and q2 7 in the fourth line among the second four lines to determine whether the deblocking filter is active for the second four horizontal lines. This is performed in a manner similar to the above-mentioned process for the first four lines.
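The activity decision for a group of four lines can be sketched with the HEVC-style second-difference test on the sampled first and fourth lines. The tuple layout of the sampled pixels and the function names are assumptions for illustration:

```python
def second_diff(a, b, c):
    # |a - 2b + c|: local curvature across three adjacent pixels.
    return abs(a - 2 * b + c)

def filter_active(line0, line3, beta):
    """Decide whether deblocking is active for a group of four lines.

    line0 and line3 hold the sampled pixels of the first and fourth
    line of the group, ordered (p2, p1, p0, q0, q1, q2). The filter is
    active when the summed second differences on both sides of the
    boundary fall below the threshold beta (HEVC-style test).
    """
    p2, p1, p0, q0, q1, q2 = line0
    P2, P1, P0, Q0, Q1, Q2 = line3
    dp0 = second_diff(p2, p1, p0)
    dq0 = second_diff(q2, q1, q0)
    dp3 = second_diff(P2, P1, P0)
    dq3 = second_diff(Q2, Q1, Q0)
    return dp0 + dq0 + dp3 + dq3 < beta
```

A smooth region passes the test (small second differences), while lines already containing strong texture or a natural edge fail it and are left unfiltered.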
  • the deblocking filter 212 determines whether strong or weak filtering is applied for the first four vertical lines or horizontal lines in the 8x8 boundary in the case that the deblocking filter 212 is active for those lines, and whether strong or weak filtering is applied for the second four vertical lines or horizontal lines in the 8x8 boundary in the case that the deblocking filter 212 is active for those lines.
  • the deblocking filter 212 determines whether strong or weak filtering is applied for the first four vertical lines or horizontal lines in the 8x8 or 4x4 boundary in the case that the deblocking filter 212 is active for those lines, and whether strong or weak filtering is applied for the second four vertical lines or horizontal lines in the 8x8 or 4x4 boundary in the case that the deblocking filter 212 is active for those lines.
  • the deblocking filter 212 applies strong filtering to the first four lines if the following two sets of conditions are met, and weak filtering otherwise.
  • the deblocking filter 212 determines whether to apply strong or weak filtering to the second four lines in a manner similar to the above-mentioned process for the first four lines.
  • the deblocking filter 212 applies a strong filter to vertical lines or horizontal lines wherein the deblocking filter 212 determined to apply a strong filter.
  • the strong filter is applied to three pixels p 0 , p 1 , and p 2 of the block P side of the boundary with four pixels total as input, outputting pixels p 0 ’ , p 1 ’ , and p 2 ’ , respectively; and three pixels q 0 , q 1 , and q 2 of the block Q side of the boundary with four pixels total as input, outputting pixels q 0 ’ , q 1 ’ , and q 2 ’ , respectively.
  • the outputs are derived as below.
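The strong-filter outputs referenced above can be sketched with the HEVC-style equations, taking four pixels per side as input and producing three outputs per side. Clipping of each output to within ±2·t C of its input is omitted here for brevity:

```python
def strong_filter_line(p3, p2, p1, p0, q0, q1, q2, q3):
    """HEVC-style strong luma filter for one line across the boundary.

    Returns ((p0', p1', p2'), (q0', q1', q2')). The +4 / +2 terms and
    the shifts implement rounding division by 8 and 4 respectively.
    """
    p0n = (p2 + 2 * p1 + 2 * p0 + 2 * q0 + q1 + 4) >> 3
    p1n = (p2 + p1 + p0 + q0 + 2) >> 2
    p2n = (2 * p3 + 3 * p2 + p1 + p0 + q0 + 4) >> 3
    q0n = (q2 + 2 * q1 + 2 * q0 + 2 * p0 + p1 + 4) >> 3
    q1n = (q2 + q1 + q0 + p0 + 2) >> 2
    q2n = (2 * q3 + 3 * q2 + q1 + q0 + p0 + 4) >> 3
    return (p0n, p1n, p2n), (q0n, q1n, q2n)
```

A hard step of 100 → 60 across the boundary is smoothed into a ramp (95, 90, 85 | 75, 70, 65), which is exactly the blocking-artifact suppression the strong filter targets.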
  • the deblocking filter 212 applies a strong filter to vertical lines or horizontal lines wherein the deblocking filter 212 determined to apply a strong filter.
  • the deblocking filter 212 may apply a filter according to the VVC specification, for luma CTBs in particular, to a sub-block boundary 4x4 in dimensions rather than a sub-block boundary 8x8 in dimensions, as described above with reference to step 302.
  • the filter may be applied to one pixel each of the respective blocks to each side of the boundary, where a block to one side of the boundary has a width of 4 pixels or less in the event that the boundary is vertical, or a block to one side of the boundary has a height of 4 pixels or less in the event that the boundary is horizontal.
  • Such implementations may handle blocking artifacts from rectangular transform shapes, and may facilitate concurrent computation of the deblocking filter 212 by parallel computing threads.
  • the deblocking filter 212 may apply a stronger deblocking filter (for example, a bilinear filter) according to the VVC specification, for luma CTBs in particular, in the event that sampled pixels on either the P side or the Q side of the boundary belong to a large block and in the event that two further conditions are also satisfied.
  • Large blocks may be those blocks where width of a horizontal edge is greater than or equal to 32 pixels, or those blocks where height of a vertical edge is greater than or equal to 32 pixels.
  • p i ’ and q j ’ are outputs for those respective inputs:
  • tcPD i and tcPD j are position-dependent clipping parameters, and g j , f i , Middle s,t , P s , and Q s are derived based on Table 2 below.
  • the deblocking filter 212 may apply a stronger deblocking filter according to the VVC specification, for chroma CTBs in particular, to a sub-block boundary 8x8 in dimensions as described above with reference to step 302, in the event that both the P side and the Q side of the chroma CTB boundary have dimensions greater than or equal to 8 pixels of chroma sample and in the event that three further conditions are also satisfied.
  • the first condition is satisfied by a determination to apply strong filtering as described below with reference to Table 3, and the deblocking filter 212 determining in step 312B as described above that sampled pixels on both the P side and the Q side of the chroma CTB boundary belong to large blocks.
  • Table 3 below describes a decision-making process wherein the deblocking filter 212 may determine to apply strong filtering, or may not.
  • “Adjacent blocks” may refer to the block on the P side and the block on the Q side of the filtered boundary. Where any of the Y, U, or V bS values in the rightmost three columns is determined as 2, the first condition may be satisfied. Where any of the Y, U, or V bS values in the rightmost three columns is determined as 1, and sampled pixels on both the P side and the Q side of the chroma boundary are determined to belong to large blocks, the first condition may also be satisfied.
  • Table 4 describes a decision-making process according to another example embodiment of the present disclosure, as referenced above.
  • Table 5 describes a decision-making process according to another example embodiment of the present disclosure, as referenced above.
  • Table 6 describes a decision-making process according to another example embodiment of the present disclosure, as referenced above.
  • Table 7 describes a decision-making process according to another example embodiment of the present disclosure, as referenced above.
  • Table 8 describes a decision-making process according to another example embodiment of the present disclosure, as referenced above.
  • Table 9 describes a decision-making process according to another example embodiment of the present disclosure, as referenced above.
  • Table 10 below describes a decision-making process according to another example embodiment of the present disclosure, as referenced above.
  • the second condition is satisfied by the deblocking filter 212 determining in step 308 as described above to be active across a boundary.
  • the third condition is satisfied by the deblocking filter 212 determining in step 310 as described above to apply strong filtering over the boundary.
  • the deblocking filter 212 applies a weak filter to vertical lines or horizontal lines wherein the deblocking filter 212 determined to apply a weak filter.
  • the deblocking filter 212 determines a value Δ.
  • the weak filter is applied to pixels p 0 and q 0 on either side of the boundary, outputting pixels p 0 ′ and q 0 ′ , respectively.
  • the weak filter may be applied to either or both of pixels p 1 and q 1 on either side of the boundary, each with three pixels total as input, outputting either or both of pixels p 1 ′ and q 1 ′ , respectively.
  • Δp = Clip3 (- (t C >>1) , t C >>1, ( ( (p 2 +p 0 +1) >>1) -p 1 +Δ) >>1)
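Putting the weak-filter steps together, one line can be sketched as below. The Δ derivation and the Δq counterpart of the Δp equation follow the HEVC-style formulation; the side filtering of p 1 and q 1 is applied unconditionally here, although the specification gates it on per-side activity tests:

```python
def clip3(lo, hi, x):
    return max(lo, min(hi, x))

def weak_filter_line(p2, p1, p0, q0, q1, q2, tc):
    """HEVC-style weak filter sketch for one line across the boundary.

    Returns (p0', q0', p1', q1'). When the boundary step is too large
    (likely a natural edge), the pixels are returned unmodified.
    """
    delta = (9 * (q0 - p0) - 3 * (q1 - p1) + 8) >> 4
    if abs(delta) >= tc * 10:
        return p0, q0, p1, q1
    delta = clip3(-tc, tc, delta)
    p0n, q0n = p0 + delta, q0 - delta
    # Side filters for p1 and q1, per the Delta-p equation above and
    # its mirrored Delta-q counterpart.
    dp = clip3(-(tc >> 1), tc >> 1, (((p2 + p0 + 1) >> 1) - p1 + delta) >> 1)
    dq = clip3(-(tc >> 1), tc >> 1, (((q2 + q0 + 1) >> 1) - q1 - delta) >> 1)
    return p0n, q0n, p1 + dp, q1 + dq
```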
  • the above-described method 300 may be largely performed in a similar manner, except that the filter strength of a deblocking filter 212 may be further dependent upon averaged luma level of pixel samples of the reconstructed frame; the t C ’ lookup table may be further extended; and stronger deblocking filters may be applied for both the luma and chroma CTBs. Further details of these processes need not be described for understanding of example embodiments of the present disclosure, and shall not be reiterated herein.
  • a SAO filter may filter a CTB on a per-pixel basis by applying an offset to each pixel based on determining a SAO type of each pixel.
  • FIG. 7 illustrates an example flowchart of a SAO filter method 700 according to example embodiments of the present disclosure.
  • a SAO filter 213 receives a frame and decides to apply SAO to a CTB of the frame.
  • a frame may store a flag sao_type_idx in a slice header of the frame, the value thereof indicating whether SAO is to be applied to the CTB, and, if so, which type of SAO is to be applied.
  • a sao_type_idx value of 0 may indicate that SAO is not to be applied to a CTB of the frame;
  • a sao_type_idx value of 1 may indicate that an edge offset filter, as described below, is to be applied to a CTB of the frame;
  • a sao_type_idx value of 2 may indicate that a band offset filter, as described below, is to be applied to a CTB of the frame.
  • a sao_type_idx value of 3 may indicate that both edge offset and band offset are to be applied to a CTB of the frame.
  • each applicable CTB may have further SAO parameters stored, including sao_merge_left_flag, sao_merge_up_flag, SAO type, and four offsets.
  • a sao_merge_left_flag value of 1 for a CTB may denote that the SAO filter 213 should apply SAO type and offsets of a CTB left of the current CTB to the current CTB.
  • a sao_merge_up_flag value of 1 for a CTB may indicate that the SAO filter 213 should apply SAO type and offsets of the CTB above the current CTB to the current CTB.
  • the SAO filter 213 classifies a CTB as one of several SAO types.
  • each CTB of a frame may be classified as type 0, in which case no SAO will be applied to the CTB, or may be classified as types 1 through 5, where in each case a different SAO will be applied to the CTB. Furthermore, for types 1 through 5, pixels of the CTB will be categorized into one of multiple categories.
  • Types 1 through 4 of CTBs are identified by an angle of an edge pattern of pixels including the current pixel p and two neighbor pixels.
  • FIGS. 8A through 8D illustrate possible edge patterns that include the current pixel p and two neighbor pixels.
  • FIG. 8A illustrates an edge pattern made up of pixels including the current pixel p and two neighbor pixels at a 0-degree angle.
  • FIG. 8B illustrates an edge pattern made up of pixels including the current pixel p and two neighbor pixels at a 90-degree angle.
  • FIG. 8C illustrates an edge pattern made up of pixels including the current pixel p and two neighbor pixels at a 135-degree angle.
  • FIG. 8D illustrates an edge pattern made up of pixels including the current pixel p and two neighbor pixels at a 45-degree angle.
  • the SAO filter 213 classifies a pixel of a CTB according to edge properties.
  • Each pixel has an 8-bit intensity value ranging from 0 through 255.
  • the current pixel p may be classified by a comparison of its intensity with the two neighbor pixels (in either order) in accordance with Table 12 below.
  • the current pixel p may be classified by a comparison of its intensity with the two neighbor pixels (in either order, which determine cases 1 through 5 below) , as well as with neighbor pixels in general in two opposing directions (which determines case 0 below) in accordance with Table 13 below.
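The edge-offset classification can be sketched as a comparison of the current pixel against its two neighbors along the chosen edge direction. The category numbering below follows the HEVC-style scheme (Table 12 is assumed to follow the same scheme); the extra opposing-direction test of Table 13 is not modeled:

```python
def eo_category(p, n0, n1):
    """Edge-offset category of pixel p versus its two neighbors n0, n1
    along one of the four edge directions of FIGS. 8A-8D."""
    if p < n0 and p < n1:
        return 1          # local minimum (valley)
    if (p < n0 and p == n1) or (p == n0 and p < n1):
        return 2          # concave corner
    if (p > n0 and p == n1) or (p == n0 and p > n1):
        return 3          # convex corner
    if p > n0 and p > n1:
        return 4          # local maximum (peak)
    return 0              # monotonic or flat: no offset applied
```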
  • the SAO filter 213 applies an offset to the current pixel based on an offset value.
  • the offset value of the current pixel may be determined based on the classification of the current pixel.
  • the SAO filter 213 may determine pixels on the strong edge that are likely to be smoothed during up-sampling, and apply an offset value to compensate for this behavior.
  • the SAO filter 213 classifies a pixel of a CTB into a band.
  • a pixel index over the entire range of pixel intensity values may be established by reducing all 8-bit pixel intensity values to their five most significant bits, thus equalizing all pixel intensity values within each of 32 bands, each covering a same-sized segment of the original range of pixel intensity values. Thus, each pixel lies within one of these 32 bands based on its pixel intensity value. Furthermore, each set of four adjacent bands may be grouped together, with each group being identified by its starting position counting from low to high values over the 32 bands.
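The banding described above, and the per-band offset application of the following step, can be sketched as below; the four-offset group layout follows the description, while the function names are illustrative:

```python
def band_index(intensity):
    """Band of an 8-bit sample: its five most significant bits,
    giving 32 equal bands of width 8 over the range 0..255."""
    return intensity >> 3

def apply_band_offset(intensity, start_band, offsets):
    """Add the matching offset when the sample falls inside the group
    of four adjacent bands starting at start_band; offsets holds one
    value per band of the group."""
    band = band_index(intensity)
    if start_band <= band < start_band + 4:
        return intensity + offsets[band - start_band]
    return intensity
```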
  • the SAO filter 213 applies an offset to each band based on an offset value.
  • the offset value may be determined by the intensity value of the band.
  • the offset value may reduce distortion of the band.
  • an ALF 214 may filter a frame per 4x4 pixel sub-block of a luma CTB and a chroma CTB of a frame.
  • FIG. 9A illustrates an example flowchart of an ALF method 900 according to example embodiments of the present disclosure.
  • an ALF 214 receives a frame and decides to apply ALF to a luma CTB and/or a chroma CTB of the frame.
  • a luma CTB has a flag to indicate whether ALF should be applied to the luma CTB.
  • a chroma CTB may have a flag to indicate whether ALF should be applied to the chroma CTB.
  • the ALF 214 may decide to apply ALF based on values of these flags.
  • a frame may store ALF filter parameters in a slice header of the frame.
  • ALF filter parameters may include 25 sets of luma filter coefficients, which may be accordingly applied to luma CTBs based on classification thereof.
  • ALF filter parameters may include more than 25 sets of luma filter coefficients to accommodate more types of classification, such as 35 sets of luma filter coefficients derived from a classification scheme as described below.
  • Filter coefficients may be mapped to the pixels that make up the shape of the filter.
  • a chroma filter 912 may have a 5x5 pixel diamond shape
  • a luma filter 914 may have a 7x7 pixel diamond shape, with each pixel showing an assigned filter coefficient value.
  • filter coefficients of different classifications may be merged to some extent.
  • Filter coefficients may be quantized with norm equal to 128.
  • a bitstream conformance constraint may be applied, wherein a coefficient value of a central position of a filter may fall within a range of 0 through 2^8 , and coefficient values of all other positions of the filter may fall within a range of -2^7 to 2^7 -1, inclusive.
  • the ALF 214 calculates gradient values of a sub-block of the luma CTB in multiple directions by obtaining reconstructed samples.
  • a 1-D Laplacian calculation may be performed in four different directions by obtaining reconstructed samples R (x, y) at intervals from pixels (x, y) of the reconstructed frame. Based on 1-D Laplacian calculations, a horizontal gradient of the sub-block may be calculated as follows:
  • a vertical gradient of the sub-block may be calculated as follows:
  • a gradient of the sub-block in a first diagonal direction may be calculated as follows:
  • a gradient of the sub-block in a second diagonal direction may be calculated as follows:
  • each of the above calculations may be performed as a subsampled 1-D Laplacian calculation, which is performed by subsampling over only the shaded portions of the sub-block as illustrated by FIG. 9C with regard to a vertical direction, FIG. 9D with regard to a horizontal direction, and FIGS. 9E and 9F with regard to diagonal directions.
  • the subsampled pixel positions may be in common for each of the four calculations.
  • maximum and minimum values are determined among the horizontal and vertical gradients g h and g v , and maximum and minimum values are determined among the two diagonal gradients g d0 and g d1 .
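The four 1-D Laplacian gradient sums can be sketched as below. For brevity this version sums over every pixel of the sub-block rather than only the subsampled positions of FIGS. 9C through 9F, and the reconstructed frame is modeled as a plain 2-D list:

```python
def alf_gradients(R, x0, y0, size=4):
    """1-D Laplacian gradients of a size x size sub-block in four
    directions: horizontal, vertical, and the two diagonals.

    R is a 2-D list of reconstructed samples indexed R[y][x]; the
    sub-block's top-left pixel is (x0, y0), and a one-pixel border
    around it must exist in R.
    """
    gv = gh = gd0 = gd1 = 0
    for y in range(y0, y0 + size):
        for x in range(x0, x0 + size):
            c = 2 * R[y][x]
            gv  += abs(c - R[y - 1][x] - R[y + 1][x])       # vertical
            gh  += abs(c - R[y][x - 1] - R[y][x + 1])       # horizontal
            gd0 += abs(c - R[y - 1][x - 1] - R[y + 1][x + 1])  # diag 1
            gd1 += abs(c - R[y - 1][x + 1] - R[y + 1][x - 1])  # diag 2
    return gh, gv, gd0, gd1
```

On a horizontally striped region the vertical and diagonal gradients are large while the horizontal gradient is zero, which is what drives the directionality classification of the next step.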
  • the ALF 214 classifies a sub-block of a luma CTB.
  • an ALF 214 classifies the sub-block into one of multiple classes based on a classification index C which is derived from a directionality D and a quantized value Â of activity of the sub-block.
  • the value of D represents a direction of local gradients in the sub-block, and the value of Â represents activity of local gradients in the sub-block.
  • C may be derived as follows.
  • Sub-blocks of a chroma CTB are not classified.
  • From the four gradient values, directionality D is set according to the following steps, comparing the gradient values to each other and to two threshold values t 1 and t 2 , providing D with a range of values from 0 through 4.
  • Because D has this range of possible values, 25 possible values may be derived for C from the above equation, corresponding to 25 different filters that may be applied to the sub-block.
  • If the ratio of the maximum to the minimum of the horizontal and vertical gradients is greater than the ratio of the maximum to the minimum of the two diagonal gradients, continue to step 3 below; otherwise continue to step 4 below.
  • directionality D may be set according to the following steps instead, comparing the gradient values to each other and to three threshold values t 1 , t 2 , and t 3 , providing D with a range of values from 0 through 6.
  • Because D has a greater range of possible values than 0 through 4, more than 25 possible values may be derived for C from the above equation.
  • 35 possible values may be derived for C from the above equation, corresponding to 35 different filters that may be applied to the sub-block.
  • If the applicable gradient comparison is satisfied, continue to step 3 below; otherwise continue to step 5 below.
  • directionality D may be set according to the following steps instead, comparing the gradient values to each other, to the maximum gradient among the gradient values, and to two threshold values t 1 and t 2 , providing D with a range of values from 0 through 6.
  • D is set to 2; otherwise continue to step 5 below.
  • D is set to 3; otherwise, D is set to 4.
  • activity A is calculated by the following variation of the 1-D Laplacian calculation.
  • the value of activity A is quantized to a value over a range of 0 through 4.
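A sketch of the classification, assuming the VVC-style step comparisons for D (range 0 through 4) and the formula C = 5D + Â; the threshold values t 1 = 2 and t 2 = 4.5 are illustrative, and the ratio comparison is done by cross-multiplication to avoid division by zero:

```python
def directionality(gh, gv, gd0, gd1, t1=2, t2=4.5):
    """Directionality D per the step comparisons above (range 0..4).

    Thresholds t1 and t2 are illustrative placeholders.
    """
    hv_max, hv_min = max(gh, gv), min(gh, gv)
    d_max, d_min = max(gd0, gd1), min(gd0, gd1)
    # Step 1: no dominant direction anywhere -> D = 0.
    if hv_max <= t1 * hv_min and d_max <= t1 * d_min:
        return 0
    # Step 2: is horizontal/vertical dominance stronger than diagonal?
    if hv_max * d_min > d_max * hv_min:       # cross-multiplied ratios
        return 2 if hv_max > t2 * hv_min else 1   # step 3
    return 4 if d_max > t2 * d_min else 3         # step 4

def classification_index(D, A_hat):
    # C = 5 * D + A_hat: 25 classes for D in 0..4 and A_hat in 0..4;
    # a wider D range (0..6) yields 35 classes as described above.
    return 5 * D + A_hat
```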
  • the ALF 214 applies one of several geometric transformations to filter coefficients of the filter.
  • a geometric transformation may be chosen based on comparisons between gradient values according to Table 14 below.
  • K is the size of the filter
  • 0 ≤ k, l ≤ K-1 are coefficient coordinates, such that coordinate (0, 0) is at an upper left corner of the filter and coordinate (K–1, K–1) is at a lower right corner of the filter.
  • Each transformation is applied to the filter coefficients f (k, l) according to gradient values calculated as described above.
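The three geometric transformations can be sketched as index remappings of the coefficient grid. The row-major `f[k][l]` layout with (0, 0) at the upper left matches the coordinate convention above; the transformation names and definitions (diagonal f(l, k), vertical flip f(k, K−l−1), rotation f(K−l−1, k)) follow the VVC-style scheme:

```python
def transform_coeffs(f, kind):
    """Apply one geometric transformation to a K x K coefficient grid
    f (list of rows). Returns a new grid; unknown kinds pass through."""
    K = len(f)
    if kind == 'diagonal':       # f_D(k, l) = f(l, k)
        return [[f[l][k] for l in range(K)] for k in range(K)]
    if kind == 'vflip':          # f_V(k, l) = f(k, K - l - 1)
        return [[f[k][K - 1 - l] for l in range(K)] for k in range(K)]
    if kind == 'rotate':         # f_R(k, l) = f(K - l - 1, k)
        return [[f[K - 1 - l][k] for l in range(K)] for k in range(K)]
    return f                     # no transformation
```

Because only the coefficients are remapped, one trained filter serves four gradient orientations, which is the point of choosing the transformation from the gradient comparisons of Table 14.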
  • the ALF 214 applies a filter having a filter coefficient f (k, l) over each sub-block.
  • the filter coefficients to be applied may depend on the filter to be applied among all available filters, according to the classification index C.
  • the filter coefficients to be applied may be constant.
  • the filter may act upon a sample value R (i, j) of a reconstructed frame, outputting a sample value R’ (i, j) as below.
  • L is a filter length
  • f m denotes a filter coefficient
  • f (k, l) denotes a decoded filter coefficient
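A sketch of applying the filter to one sample, assuming coefficients quantized to a norm of 128 as described earlier (the +64 and >>7 implement rounding by that norm); the difference form `R(i+k, j+l) − R(i, j)` folds the implicit center tap into the sum:

```python
def alf_filter_sample(R, i, j, coeffs):
    """Apply an ALF to one reconstructed sample R[i][j].

    coeffs maps tap offsets (k, l) to decoded coefficients f(k, l);
    only the non-center taps need to be listed, since the filter is
    expressed in difference form around the center sample.
    """
    acc = 0
    for (k, l), c in coeffs.items():
        acc += c * (R[i + k][j + l] - R[i][j])
    return R[i][j] + ((acc + 64) >> 7)
```

On a flat region the differences vanish and the sample passes through unchanged; a neighboring step nudges the output toward the neighbors in proportion to the tap weights.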
  • FIG. 10 illustrates an example system 1000 for implementing the processes and methods described above for implementing resolution-adaptive video coding in deblocking filters.
  • the techniques and mechanisms described herein may be implemented by multiple instances of the system 1000 as well as by any other computing device, system, and/or environment.
  • the system 1000 shown in FIG. 10 is only one example of a system and is not intended to suggest any limitation as to the scope of use or functionality of any computing device utilized to perform the processes and/or procedures described above.
  • the system 1000 may include one or more processors 1002 and system memory 1004 communicatively coupled to the processor (s) 1002.
  • the processor (s) 1002 may execute one or more modules and/or processes to cause the processor (s) 1002 to perform a variety of functions.
  • the processor (s) 1002 may include a central processing unit (CPU) , a graphics processing unit (GPU) , both CPU and GPU, or other processing units or components known in the art. Additionally, each of the processor (s) 1002 may possess its own local memory, which also may store program modules, program data, and/or one or more operating systems.
  • system memory 1004 may be volatile, such as RAM, non-volatile, such as ROM, flash memory, miniature hard drive, memory card, and the like, or some combination thereof.
  • the system memory 1004 may include one or more computer-executable modules 1006 that are executable by the processor (s) 1002.
  • the modules 1006 may include, but are not limited to, a deblocking filter module 1008, which includes a boundary determining module 1010, a boundary strength determining module 1012, a threshold determining module 1014, an offset applying module 1016, a filter activity determining module 1018, a filter strength determining module 1020, a strong filter applying module 1022, and a weak filter applying module 1024.
  • the boundary determining module 1010 may be configured to determine block and sub-block boundaries to filter as abovementioned with reference to FIGS. 3A and 3B.
  • the boundary strength determining module 1012 may be configured to determine a bS value of a boundary being filtered as abovementioned with reference to FIGS. 3A and 3B.
  • the threshold determining module 1014 may be configured to determine threshold values, as abovementioned with reference to FIG. 3A.
  • the offset applying module 1016 may be configured to apply an offset to a luma quantization parameter, as abovementioned with reference to FIG. 3B.
  • the filter activity determining module 1018 may be configured to determine whether the deblocking filter module 1008 is active for a first four vertical lines of pixels or horizontal lines of pixels running across an 8x8 boundary, and whether the deblocking filter is active for a second four vertical lines of pixels or horizontal lines of pixels running across the 8x8 boundary, as abovementioned with reference to FIG. 3A, or may be configured to determine whether the deblocking filter module 1008 is active for a first four vertical lines of pixels or horizontal lines of pixels running across an 8x8 or 4x4 boundary, and whether the deblocking filter is active for a second four vertical lines of pixels or horizontal lines of pixels running across the 8x8 or 4x4 boundary, as abovementioned with reference to FIG. 3B.
  • the filter strength determining module 1020 may be configured to determine whether strong or weak filtering is applied for the first four vertical lines or horizontal lines in the 8x8 boundary in the case that the deblocking filter module 1008 is active for those lines, and whether strong or weak filtering is applied for the second four vertical lines or horizontal lines in the 8x8 boundary in the case that the deblocking filter module 1008 is active for those lines, as abovementioned with reference to FIG. 3A; or may be configured to determine whether strong or weak filtering is applied for the first four vertical lines or horizontal lines in the 8x8 or 4x4 boundary in the case that the deblocking filter module 1008 is active for those lines, and whether strong or weak filtering is applied for the second four vertical lines or horizontal lines in the 8x8 or 4x4 boundary in the case that the deblocking filter module 1008 is active for those lines, as abovementioned with reference to FIG. 3B.
  • the strong filter applying module 1022 may be configured to apply a strong filter to vertical lines or horizontal lines wherein the deblocking filter module 1008 determined to apply a strong filter, as abovementioned with reference to FIGS. 3A and 3B.
  • the weak filter applying module 1024 may be configured to apply a weak filter to vertical lines or horizontal lines wherein the deblocking filter module 1008 determined to apply a weak filter, as abovementioned with reference to FIGS. 3A and 3B.
  • the system 1000 may additionally include an input/output (I/O) interface 1040 for receiving video source data and bitstream data, and for outputting reconstructed frames into a reference frame buffer, a transmission buffer, and/or a display buffer.
  • the system 1000 may also include a communication module 1050 allowing the system 1000 to communicate with other devices (not shown) over a network (not shown) .
  • the network may include the Internet, wired media such as a wired network or direct-wired connections, and wireless media such as acoustic, radio frequency (RF) , infrared, and other wireless media.
  • FIG. 11 illustrates an example system 1100 for implementing the processes and methods described above for implementing resolution-adaptive video coding in SAO filters.
  • the techniques and mechanisms described herein may be implemented by multiple instances of the system 1100 as well as by any other computing device, system, and/or environment.
  • the system 1100 shown in FIG. 11 is only one example of a system and is not intended to suggest any limitation as to the scope of use or functionality of any computing device utilized to perform the processes and/or procedures described above.
  • the system 1100 may include one or more processors 1102 and system memory 1104 communicatively coupled to the processor (s) 1102.
  • the processor (s) 1102 may execute one or more modules and/or processes to cause the processor (s) 1102 to perform a variety of functions.
  • the processor (s) 1102 may include a central processing unit (CPU) , a graphics processing unit (GPU) , both CPU and GPU, or other processing units or components known in the art. Additionally, each of the processor (s) 1102 may possess its own local memory, which also may store program modules, program data, and/or one or more operating systems.
  • system memory 1104 may be volatile, such as RAM, non-volatile, such as ROM, flash memory, miniature hard drive, memory card, and the like, or some combination thereof.
  • the system memory 1104 may include one or more computer-executable modules 1106 that are executable by the processor (s) 1102.
  • the modules 1106 may include, but are not limited to, a SAO filter module 1108.
  • the SAO filter module 1108 may include a filter application deciding module 1110, a CTB classifying module 1112, a pixel classifying module 1114, an edge offset applying module 1116, a band classifying module 1118, and a band offset applying module 1120.
  • the filter application deciding module 1110 may be configured to receive a frame and decide to apply SAO to a CTB of the frame, as abovementioned with reference to FIG. 7.
  • the CTB classifying module 1112 may be configured to classify a CTB as one of several SAO types, as abovementioned with reference to FIG. 7.
  • the pixel classifying module 1114 may be configured to classify a pixel of a CTB according to edge properties in the case that a CTB is classified as a type for applying edge offset as abovementioned with reference to FIG. 7.
  • the edge offset applying module 1116 may be configured to apply an offset to the current pixel based on pixel classification and based on an offset value, as abovementioned with reference to FIG. 7.
  • the band classifying module 1118 may be configured to classify a pixel of a CTB into a band in the case that a CTB is classified as a type for applying band offset, as abovementioned with reference to FIG. 7.
  • the band offset applying module 1120 may be configured to apply an offset to each band based on an offset value, as abovementioned with reference to FIG. 7.
  • the system 1100 may additionally include an input/output (I/O) interface 1140 for receiving video source data and bitstream data, and for outputting reconstructed frames into a reference frame buffer, a transmission buffer, and/or a display buffer.
  • the system 1100 may also include a communication module 1150 allowing the system 1100 to communicate with other devices (not shown) over a network (not shown) .
  • the network may include the Internet, wired media such as a wired network or direct-wired connections, and wireless media such as acoustic, radio frequency (RF) , infrared, and other wireless media.
  • FIG. 12 illustrates an example system 1200 for implementing the processes and methods described above for implementing resolution-adaptive video coding in ALF.
  • the techniques and mechanisms described herein may be implemented by multiple instances of the system 1200 as well as by any other computing device, system, and/or environment.
  • the system 1200 shown in FIG. 12 is only one example of a system and is not intended to suggest any limitation as to the scope of use or functionality of any computing device utilized to perform the processes and/or procedures described above.
  • the system 1200 may include one or more processors 1202 and system memory 1204 communicatively coupled to the processor (s) 1202.
  • the processor (s) 1202 may execute one or more modules and/or processes to cause the processor (s) 1202 to perform a variety of functions.
  • the processor (s) 1202 may include a central processing unit (CPU) , a graphics processing unit (GPU) , both CPU and GPU, or other processing units or components known in the art. Additionally, each of the processor (s) 1202 may possess its own local memory, which also may store program modules, program data, and/or one or more operating systems.
  • system memory 1204 may be volatile, such as RAM, non-volatile, such as ROM, flash memory, miniature hard drive, memory card, and the like, or some combination thereof.
  • the system memory 1204 may include one or more computer-executable modules 1206 that are executable by the processor (s) 1202.
  • the modules 1206 may include, but are not limited to, an ALF module 1208.
  • the ALF module 1208 may include a filter application deciding module 1210, a gradient value calculating module 1212, a block classifying module 1214, a transformation applying module 1216, and a filter applying module 1218.
  • the filter application deciding module 1210 may be configured to receive a frame and decide to apply ALF to a luma CTB and/or a chroma CTB of the frame, as abovementioned with reference to FIG. 9.
  • the gradient value calculating module 1212 may be configured to calculate gradient values of the sub-block in multiple directions by obtaining reconstructed samples, as abovementioned with reference to FIG. 9.
  • the block classifying module 1214 may be configured to classify a sub-block of a luma CTB, as abovementioned with reference to FIG. 9.
  • the transformation applying module 1216 may be configured to apply one of several geometric transformations to filter coefficients of the filter, as abovementioned with reference to FIG. 9.
  • the filter applying module 1218 may be configured to apply a filter having a filter coefficient f (k, l) over each sub-block, as abovementioned with reference to FIG. 9.
  • the system 1200 may additionally include an input/output (I/O) interface 1240 for receiving video source data and bitstream data, and for outputting reconstructed frames into a reference frame buffer, a transmission buffer, and/or a display buffer.
  • the system 1200 may also include a communication module 1250 allowing the system 1200 to communicate with other devices (not shown) over a network (not shown) .
  • the network may include the Internet, wired media such as a wired network or direct-wired connections, and wireless media such as acoustic, radio frequency (RF) , infrared, and other wireless media.
  • Computer-readable instructions include routines, applications, application modules, program modules, programs, components, data structures, algorithms, and the like.
  • Computer-readable instructions can be implemented on various system configurations, including single-processor or multiprocessor systems, minicomputers, mainframe computers, personal computers, hand-held computing devices, microprocessor-based, programmable consumer electronics, combinations thereof, and the like.
  • the computer-readable storage media may include volatile memory (such as random-access memory (RAM) ) and/or non-volatile memory (such as read-only memory (ROM) , flash memory, etc. ) .
  • the computer-readable storage media may also include additional removable storage and/or non-removable storage including, but not limited to, flash memory, magnetic storage, optical storage, and/or tape storage that may provide non-volatile storage of computer-readable instructions, data structures, program modules, and the like.
  • a non-transient computer-readable storage medium is an example of computer-readable media.
  • Computer-readable media includes at least two types of computer-readable media, namely computer-readable storage media and communications media.
  • Computer-readable storage media includes volatile and non-volatile, removable and non-removable media implemented in any process or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data.
  • Computer-readable storage media includes, but is not limited to, phase change memory (PRAM) , static random-access memory (SRAM) , dynamic random-access memory (DRAM) , other types of random-access memory (RAM) , read-only memory (ROM) , electrically erasable programmable read-only memory (EEPROM) , flash memory or other memory technology, compact disk read-only memory (CD-ROM) , digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device.
  • communication media may embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism.
  • a computer-readable storage medium employed herein shall not be interpreted as a transitory signal itself, such as a radio wave or other free-propagating electromagnetic wave, electromagnetic waves propagating through a waveguide or other transmission medium (such as light pulses through a fiber optic cable) , or electrical signals propagating through a wire.
  • the computer-readable instructions stored on one or more non-transitory computer-readable storage media that, when executed by one or more processors, may perform operations described above with reference to FIGS. 1A-12.
  • computer-readable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types.
  • the order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes.
  • the present disclosure provides inter-coded resolution-adaptive video coding supported by multiple in-loop filters, restoring high-frequency loss that occurs when a picture is down-sampled and subsequently up-sampled, and improving image quality during a resolution-adaptive video coding process.
  • the methods and systems described herein provide a deblocking filter which takes resolution differences between frames undergoing motion prediction into account in determining filter strength, with further modifications for the next-generation video codec specification VVC.
  • the deblocking filter may apply a strong filter or a weak filter in cases where a first reference frame referenced in motion prediction of a block adjacent to the block boundary has a resolution different from that of a second reference frame referenced in motion prediction of another block adjacent to the block boundary, that of a reference frame referenced in motion prediction of the current frame, or that of the current frame.
  • a method comprising: receiving a current frame; determining a block boundary to be filtered within the current frame; determining a boundary strength of the block boundary based on a difference in resolution between a first reference frame referenced in motion prediction of a block adjacent to the block boundary and another frame; and applying a deblocking filter to the block boundary based on the boundary strength.
  • a method comprising: receiving a frame and deciding to apply SAO to a CTB of the frame; classifying the CTB as one of a plurality of SAO types; classifying a pixel of a CTB according to edge properties by at least comparing difference sums of neighbor pixels in two opposing directions; and applying an edge offset to the pixel based on an offset value.
  • deciding to apply SAO to a CTB of the frame comprises deciding to apply at least edge offset to the CTB based on a value of a flag stored in a slice header of the frame.
  • the method as paragraph J recites, further comprising classifying a pixel of a CTB into a band and applying an offset to the band based on an offset value.
  • a method comprising: receiving a frame and deciding to apply ALF to a CTB of the frame; calculating a plurality of gradient values of a block of the CTB; determining a classification of the block based on computing a directionality value of at least six possible directionality values based on the plurality of gradient values; and applying a filter to the block, the filter comprising a set of filter coefficients determined by classification of the block.
  • the method as paragraph M recites, wherein the directionality value is computed by comparing the plurality of gradient values with at least two threshold values and a maximum among the plurality of gradient values.
  • a set of filter coefficients comprises a plurality of values arranged among 7x7 pixels.
  • a system comprising: one or more processors; and memory communicatively coupled to the one or more processors, the memory storing computer-executable modules executable by the one or more processors that, when executed by the one or more processors, perform associated operations, the computer-executable modules including: a deblocking filter module configured to receive a current frame in a coding loop, the deblocking filter module further comprising a boundary determining module configured to determine a block boundary to be filtered within the current frame; a boundary strength determining module configured to determine a boundary strength of a block boundary to be filtered based on a difference in resolution between a first reference frame referenced in motion prediction of a block adjacent to the block boundary and a second frame; and a strong filter applying module and a weak filter applying module each configured to apply a deblocking filter to the block boundary based on the boundary strength.
  • the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a lower resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of the current frame.
  • the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a lower resolution than the second frame, and the second frame being the current frame.
  • the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a different resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of another block adjacent to the block boundary.
  • the boundary strength determining module is configured to determine the boundary strength as having a value of 1 based on the first reference frame having a lower resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of the current frame.
  • the boundary strength determining module is configured to determine the boundary strength as having a value of 1 based on the first reference frame having a lower resolution than the second frame, and the second frame being the current frame.
  • the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a higher resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of the current frame.
  • the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a higher resolution than the second frame, and the second frame being the current frame.
  • the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a different resolution than the second frame, and the second frame being the current frame.
  • the boundary strength determining module is configured to determine the boundary strength as having a value of 1 based on the first reference frame having a higher resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of the current frame.
  • the boundary strength determining module is configured to determine the boundary strength as having a value of 1 based on the first reference frame having a higher resolution than the second frame, and the second frame being the current frame.
  • a system comprising: a SAO filter module configured to receive a frame, the SAO filter module further comprising a filter application deciding module configured to decide to apply SAO to a CTB of the frame; a CTB classifying module configured to classify the CTB as one of a plurality of SAO types; a pixel classifying module configured to classify a pixel of a CTB according to edge properties by at least comparing difference sums of neighbor pixels in two opposing directions; and an edge offset applying module configured to apply an edge offset to the pixel based on an offset value.
  • the system as paragraph BB recites, wherein the filter application deciding module is further configured to decide to apply at least edge offset to the CTB based on a value of a flag stored in a slice header of the frame.
  • the system as paragraph CC recites, wherein the filter application deciding module is further configured to decide to apply a band offset to the CTB based on the value of a flag stored in a slice header of the frame.
  • the system as paragraph DD recites, further comprising a band classifying module configured to classify a pixel of a CTB into a band and a band offset applying module configured to apply an offset to the band based on an offset value.
  • a system comprising: an ALF module configured to receive a frame, the ALF module further comprising a filter application deciding module configured to decide to apply ALF to a CTB of the frame; a gradient value calculating module configured to calculate a plurality of gradient values of a block of the CTB; a block classifying module configured to determine a classification of the block based on computing a directionality value of at least six possible directionality values based on the plurality of gradient values; and a filter applying module configured to apply a filter to the block, the filter comprising a set of filter coefficients determined by classification of the block.
  • the system as paragraph GG recites, wherein the CTB is a luma CTB and the block is a luma block of the luma CTB.
  • the system as paragraph GG recites, wherein the block classifying module is configured to compute a directionality value by comparing the plurality of gradient values with at least three threshold values.
  • the system as paragraph GG recites, wherein the block classifying module is configured to compute a directionality value by comparing the plurality of gradient values with at least two threshold values and a maximum among the plurality of gradient values.
  • a set of filter coefficients comprises a plurality of values arranged among 7x7 pixels.
  • the system as paragraph GG recites, wherein the ALF module is configured to receive the frame from an up-sampler.
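The resolution-based boundary strength rules recited in the paragraphs above can be sketched in code. This is a hedged illustration only: the function and parameter names are assumptions, resolutions are simplified to single values, and whether a given resolution difference maps to a strong (bS 2) or weak (bS 1) filter is an embodiment choice rather than fixed behavior.

```python
def boundary_strength(ref_p, ref_q, current, current_ref, strong=True):
    """Sketch of resolution-aware boundary strength (bS) selection.

    ref_p, ref_q: resolutions of the reference frames used in motion
        prediction of the two blocks adjacent to the block boundary.
    current: resolution of the current frame.
    current_ref: resolution of a reference frame of the current frame.
    strong: embodiment choice mapping a resolution difference to bS 2
        (strong filter) or bS 1 (weak filter).
    Returns 2, 1, or 0 (resolution-based rules do not apply; fall back
    to the conventional bS derivation).
    """
    # The two adjacent blocks predict from references of different
    # resolutions: strong filtering.
    if ref_p != ref_q:
        return 2
    # A block's reference differs in resolution from the current frame
    # or from the current frame's own reference.
    if ref_p != current or ref_p != current_ref:
        return 2 if strong else 1
    return 0
```

For example, under this sketch `boundary_strength(540, 1080, 1080, 1080)` returns 2, because the two adjacent blocks reference frames of different resolutions.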

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Systems and methods are provided for implementing inter-coded resolution-adaptive video coding supported by multiple in-loop filters, restoring high-frequency loss that occurs when a picture is down-sampled and subsequently up-sampled, and improving image quality during a resolution-adaptive video coding process. The methods and systems described herein provide a deblocking filter which takes resolution differences between frames undergoing motion prediction into account in determining filter strength, with further modifications for the next-generation video codec specification VVC. The deblocking filter may apply a strong filter or a weak filter in cases where a first reference frame referenced in motion prediction of a block adjacent to the block boundary has a resolution different from that of a second reference frame referenced in motion prediction of another block adjacent to the block boundary, that of a reference frame referenced in motion prediction of the current frame, or that of the current frame.

Description

NEXT-GENERATION LOOP FILTER IMPLEMENTATIONS FOR ADAPTIVE RESOLUTION VIDEO CODING

BACKGROUND
In conventional video coding formats, such as the H.264/AVC (Advanced Video Coding) and H.265/HEVC (High Efficiency Video Coding) standards, video frames in a sequence have their size and resolution recorded at the sequence level in a header. Thus, in order to change frame resolution, a new video sequence must be generated, starting with an intra-coded frame, which carries significantly larger bandwidth costs to transmit than inter-coded frames. Consequently, although it is desirable to adaptively transmit a down-sampled, low-resolution video over a network when network bandwidth becomes low or throttled, it is difficult to realize bandwidth savings while using conventional video coding formats, because the bandwidth costs of adaptively down-sampling offset the bandwidth gains.
Research has been conducted into supporting resolution changes while transmitting inter-coded frames. However, such developments impose new requirements not just on the coding and decoding parts of the coding loop, but also on further interdependent processing thereafter. For example, one or more in-loop filters are conventionally applied to a frame after it has been reconstructed by an in-loop encoder and/or after it has been decoded by an in-loop decoder.
According to both the H.264/AVC and H.265/HEVC standards, a deblocking filter is applied to a reconstructed frame output by an in-loop encoder. Block-based coding algorithms as established by these standards tend to produce artifacts known as “blocking,” which may be ameliorated by a deblocking filter. Furthermore, according to the H.265/HEVC standard, a sample adaptive offset (SAO) filter may be applied to the reconstructed frame output by the deblocking filter. In the development of the next-generation video codec specification, VVC, a third in-loop filter, the adaptive loop filter (ALF), is applied to the reconstructed frame output by the SAO filter.
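The in-loop filter chain described above applies each filter to the output of the previous one: deblocking, then SAO (under HEVC), then ALF (under VVC). A minimal sketch of that composition follows; the filter functions here are placeholders used only to show the ordering, not the normative algorithms.

```python
def apply_in_loop_filters(reconstructed_frame, filters):
    """Apply in-loop filters in sequence; each filter consumes the
    output of the previous one, as in the HEVC/VVC filter chain."""
    frame = reconstructed_frame
    for apply_filter in filters:
        frame = apply_filter(frame)
    return frame

# Placeholder filters for illustration only; each tags the frame so
# the ordering is visible.
deblocking = lambda frame: frame + ["deblocked"]
sao = lambda frame: frame + ["sao"]
alf = lambda frame: frame + ["alf"]

result = apply_in_loop_filters([], [deblocking, sao, alf])
# result == ["deblocked", "sao", "alf"]
```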
The current implementations of these filters do not take resolution changes into account, and so new techniques are required in order to cause each of these filters to behave correctly when the reconstructed frame has a resolution different from other frames in a buffer. It is also desirable to implement these techniques based on practical expectations regarding the differing severity of blocking artifacts in different scenarios.
BRIEF DESCRIPTION OF THE DRAWINGS
The detailed description is set forth with reference to the accompanying figures. In the figures, the left-most digit (s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items or features.
FIGS. 1A and 1B illustrate an example block diagram of a video encoding process and a video decoding process according to example embodiments of the present disclosure.
FIGS. 2A through 2D illustrate coding loop flows including different arrangements of an up-sampler and multiple in-loop filters according to example embodiments of the present disclosure.
FIGS. 3A and 3B illustrate deblocking methods performed by a deblocking filter according to the HEVC and VVC specifications and to example embodiments of the present disclosure.
FIGS. 4A and 4B illustrate flowcharts of deblocking filter logic according to example embodiments of the present disclosure.
FIG. 5 illustrates a deblocking filter computing bS values by reference.
FIGS. 6A and 6B illustrate determining whether the deblocking filter is active for a first four horizontal lines and a second four horizontal lines in an 8x8 pixel boundary according to example embodiments of the present disclosure.
FIG. 7 illustrates an example flowchart of a sample adaptive offset (SAO) filter method according to example embodiments of the present disclosure.
FIG. 8A illustrates an edge pattern made up of pixels including the current pixel p and two neighbor pixels at a 0-degree angle. FIG. 8B illustrates an edge pattern made up of pixels including the current pixel p and two neighbor pixels at a 90-degree angle. FIG. 8C illustrates an edge pattern made up of pixels including the current pixel p and two neighbor pixels at a 135-degree angle. FIG. 8D illustrates an edge pattern made up of pixels including the current pixel p and two neighbor pixels at a 45-degree angle.
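The four edge patterns of FIGS. 8A through 8D each compare the current pixel against two neighbors along one direction. The sketch below shows the neighbor offsets for each direction and a five-category edge classification; the categories are assumed here to follow the HEVC SAO edge-offset design, and the names are illustrative.

```python
# (dy, dx) offsets of the two neighbors for each SAO edge-offset
# direction, matching the patterns of FIGS. 8A through 8D.
EO_NEIGHBORS = {
    0:   ((0, -1), (0, 1)),    # FIG. 8A: horizontal, 0 degrees
    90:  ((-1, 0), (1, 0)),    # FIG. 8B: vertical, 90 degrees
    135: ((-1, -1), (1, 1)),   # FIG. 8C: 135-degree diagonal
    45:  ((-1, 1), (1, -1)),   # FIG. 8D: 45-degree diagonal
}

def edge_category(p, n0, n1):
    """Classify the current pixel p against its two neighbors n0, n1
    (HEVC-style categories: 1 = local minimum, 2/3 = edge corners,
    4 = local maximum, 0 = none; no offset applied for category 0)."""
    if p < n0 and p < n1:
        return 1  # local minimum
    if (p < n0 and p == n1) or (p == n0 and p < n1):
        return 2  # concave corner
    if (p > n0 and p == n1) or (p == n0 and p > n1):
        return 3  # convex corner
    if p > n0 and p > n1:
        return 4  # local maximum
    return 0
```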
FIG. 9A illustrates an example flowchart of an adaptive loop filter (ALF) method according to example embodiments of the present disclosure.
FIG. 9B illustrates shapes of ALF filters.
FIGS. 9C through 9F illustrate subsampling over a sub-block for calculating vertical, horizontal, and two diagonal gradient values of a sub-block of the luma coding tree block (CTB) .
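The vertical, horizontal, and two diagonal gradient values of FIGS. 9C through 9F are commonly computed as 1-D Laplacians at subsampled positions. A per-sample sketch follows; border handling and the subsampling pattern are omitted, and the names are illustrative rather than normative.

```python
def alf_gradients(s, x, y):
    """Sketch of the per-sample 1-D Laplacian gradients used in ALF
    block classification. `s` is a 2-D list of reconstructed samples
    indexed as s[row][column]; interior positions only."""
    v  = abs(2 * s[y][x] - s[y - 1][x] - s[y + 1][x])          # vertical
    h  = abs(2 * s[y][x] - s[y][x - 1] - s[y][x + 1])          # horizontal
    d1 = abs(2 * s[y][x] - s[y - 1][x - 1] - s[y + 1][x + 1])  # diagonal 1
    d2 = abs(2 * s[y][x] - s[y - 1][x + 1] - s[y + 1][x - 1])  # diagonal 2
    return v, h, d1, d2
```

A flat region yields all-zero gradients, while a horizontal stripe yields a large vertical gradient and a zero horizontal gradient, which is what the block classification uses to infer directionality.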
FIG. 10 illustrates an example system for implementing the processes and methods described above for implementing resolution-adaptive video coding in deblocking filters.
FIG. 11 illustrates an example system for implementing the processes and methods described above for implementing resolution-adaptive video coding in SAO filters.
FIG. 12 illustrates an example system for implementing the processes and methods described above for implementing resolution-adaptive video coding in ALF.
DETAILED DESCRIPTION
Systems and methods discussed herein are directed to integrating inter-frame adaptive resolution change with a video coding loop, and more specifically to in-loop filter methods which improve inter-frame adaptive resolution change and process reconstructed frames output by inter-frame adaptive resolution change.
According to example embodiments of the present disclosure implemented to be compatible with AVC standards, a frame may be subdivided into macroblocks (MBs) each having dimensions of 16x16 pixels, which may be further subdivided into partitions. According to example embodiments of the present disclosure implemented to be compatible with the HEVC standard, a frame may be subdivided into coding tree units (CTUs), the luma and chroma components of which may be further subdivided into coding tree blocks (CTBs), which are further subdivided into coding units (CUs). According to example embodiments of the present disclosure implemented according to other standards, a frame may be subdivided into units of NxN pixels, which may then be further subdivided into subunits. Each of these largest subdivided units of a frame may generally be referred to as a “block” for the purpose of this disclosure.
According to example embodiments of the present disclosure, a block may be subdivided into partitions having dimensions in multiples of 4x4 pixels. For example, a partition of a block may have dimensions of 8x4 pixels, 4x8 pixels, 8x8 pixels, 16x8 pixels, or 8x16 pixels.
According to example embodiments of the present disclosure, motion prediction coding formats may refer to data formats wherein frames are encoded with motion vector information and prediction information of a frame by the inclusion of one or more references to motion information and prediction units (PUs) of one or more other frames. Motion information may refer to data describing motion of a block structure of a frame or a unit or subunit thereof, such as motion vectors and references to blocks of a current frame or of another frame. PUs may refer to a unit or multiple subunits corresponding to a block structure among multiple block structures of a frame, such as an MB or a CTU, wherein blocks are partitioned based on the frame data and are coded according to established video codecs. Motion information corresponding to a PU may describe motion prediction as encoded by any motion vector coding tool, including, but not limited to, those described herein.
Likewise, frames may be encoded with transform information by the inclusion of one or more transformation units (TUs) . Transform information may refer to coefficients representing one of several spatial transformations, such as a diagonal flip, a vertical flip, or a rotation, which may be applied to a sub-block.
Sub-blocks of CUs such as PUs and TUs may be arranged in any combination of sub-block dimensions as described above. A CU may be subdivided into a residual quadtree (RQT) , a hierarchical structure of TUs. The RQT provides an order for motion prediction and residual coding over sub-blocks of each level and recursively down each level of the RQT.
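The recursive ordering over the RQT described above can be sketched as a quadtree traversal that yields leaf TUs in coding order. The `split` predicate, the block representation `(x, y, size)`, and all names here are assumptions for illustration, not the normative derivation.

```python
def rqt_order(block, depth, max_depth, split):
    """Yield leaf TUs of a residual quadtree (RQT) in coding order:
    each level is visited recursively, top-left quadrant first.
    `split(block, depth)` decides whether to subdivide further."""
    x, y, size = block
    if depth == max_depth or not split(block, depth):
        yield block  # leaf: a TU to be transform-coded
        return
    half = size // 2
    # Quadrant order: top-left, top-right, bottom-left, bottom-right.
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        yield from rqt_order((x + dx, y + dy, half), depth + 1,
                             max_depth, split)
```

For example, a 16x16 block split once yields four 8x8 TUs in raster order of quadrants.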
An encoder according to motion prediction coding may obtain a current frame of a bitstream and derive a reconstructed frame. Blocks of the reconstructed frame may be intra-coded or inter-coded.
A CTU may include as components a luma CTB and a chroma CTB. A luma CTB of the CTU may be divided into luma sub-blocks. A chroma CTB of the CTU may be divided into chroma sub-blocks, wherein each chroma sub-block may have four neighboring luma sub-blocks. For example, a neighboring luma sub-block may be a luma sub-block below, left of, right of, or above the chroma sub-block.
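One plausible reading of the four-neighbor relationship above, assuming 4:2:0 subsampling (where the chroma planes are half resolution in each dimension), is that each chroma sub-block is associated with a 2x2 group of luma sub-blocks. The following sketch encodes that assumed mapping; the function name and grid representation are illustrative only.

```python
def associated_luma_blocks(cx, cy):
    """For a chroma sub-block at grid position (cx, cy), return the
    grid positions of its four associated luma sub-blocks, assuming
    4:2:0 subsampling so the luma grid is twice as dense."""
    lx, ly = 2 * cx, 2 * cy
    return [(lx, ly), (lx + 1, ly), (lx, ly + 1), (lx + 1, ly + 1)]
```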
Luma and chroma sub-blocks may be partitioned in accordance with PUs and TUs as described above –that is, partitioned into sub-blocks having dimensions in multiples of 4x4 pixels.
FIGS. 1A and 1B illustrate an example block diagram of a video encoding process 100 and a video decoding process 118 according to an example embodiment of the present disclosure.
In a video encoding process 100, a picture from a video source 102 may be encoded to generate a reconstructed frame, and the reconstructed frame may be output at a destination such as a reference frame buffer 104 or a transmission buffer 116. The picture may be input into a coding loop, which may include the steps of inputting the picture into a first in-loop up-sampler or down-sampler 106, generating an up-sampled or down-sampled picture, inputting the up-sampled or down-sampled picture into a video encoder 108, generating a reconstructed frame based on a previous reconstructed frame of the reference frame buffer 104, inputting the reconstructed frame into one or more in-loop filters 110, and outputting the reconstructed frame from the loop. Outputting the reconstructed frame from the loop may, but need not, include inputting the reconstructed frame into a second up-sampler or down-sampler 114, generating an up-sampled or down-sampled reconstructed frame, and outputting the up-sampled or down-sampled reconstructed frame into the reference frame buffer 104 or into a transmission buffer 116 to be transmitted to a bitstream.
In a video decoding process 118, a coded frame is obtained from a source such as a bitstream 120. According to example embodiments of the present disclosure, given a current frame having position N in the bitstream 120, a previous frame having position N–1 in the bitstream 120 may have a resolution larger than or smaller than a resolution of the current frame, and a next frame having position N+1 in the bitstream 120 may have a resolution larger than or smaller than the resolution of the current frame. The current frame may be input into a coding loop, which may include the steps of inputting the current frame into a video decoder 122, inputting the current frame into one or more in-loop filters 124, inputting the current frame into a third in-loop up-sampler or down-sampler 128, generating an up-sampled or down-sampled reconstructed frame, and outputting the up-sampled or down-sampled reconstructed frame into the reference frame buffer 104. Alternatively, the current frame may be output from the loop, which may include outputting the up-sampled or down-sampled reconstructed frame into a display buffer (not illustrated).
According to example embodiments of the present disclosure, the video encoder 108 and the video decoder 122 may each implement a motion prediction coding format, including, but not limited to, those coding formats described herein. Generating a reconstructed frame based on a previous reconstructed frame of the reference frame buffer 104 may include inter-coded motion prediction as described herein, wherein the previous reconstructed frame may be an up-sampled or down-sampled reconstructed frame output by the in-loop up-sampler or down-sampler 114/128, and the previous reconstructed frame serves as a reference picture in inter-coded motion prediction as described herein.
According to example embodiments of the present disclosure, a first up-sampler or down-sampler 106, a second up-sampler or down-sampler 114, and a third up-sampler or down-sampler 128 may each implement an up-sampling or down-sampling algorithm suitable for respectively at least up-sampling or down-sampling coded pixel information of a frame coded in a motion prediction coding format. A first up-sampler or down-sampler 106, a second up-sampler or down-sampler 114, and a third up-sampler or down-sampler 128 may each implement an up-sampling or down-sampling algorithm further suitable for respectively upscaling and downscaling motion information such as motion vectors.
A frame serving as a reference picture in generating a reconstructed frame for the current frame, such as the previous reconstructed frame, may therefore be up-sampled or down-sampled in accordance with the resolution of the current frame  relative to the resolutions of the previous frame and of the next frame. For example, the frame serving as the reference picture may be up-sampled in the case that the current frame has a resolution larger than the resolutions of either or both the previous frame and the next frame. The frame serving as the reference picture may be down-sampled in the case that the current frame has a resolution smaller than either or both the previous frame and the next frame.
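The resampling decision described above can be sketched as a small helper; the function name and the use of a single scalar resolution value are illustrative assumptions, not taken from the disclosure:

```python
def resample_reference(ref_res, cur_res):
    """Decide how a reference frame must be resampled to match the
    current frame: up-sample when the reference is smaller, down-sample
    when it is larger, and no resampling when the resolutions match."""
    if ref_res < cur_res:
        return "up-sample"
    if ref_res > cur_res:
        return "down-sample"
    return None
```

For example, a 960-wide reference used by a 1920-wide current frame would be up-sampled before serving as a reference picture.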
In light of the video coding process 100 as described above, a frame may be, for example, down-sampled in the encoding process of the coding loop by a down-sampler 106, and then the frame may be up-sampled by an up-sampler 114 or an up-sampler 128 and output into a reference frame buffer 104. A frame being down-sampled causes quality loss to the content of the frame, particularly high-frequency loss –for example, loss of sharp edges or fine patterns in the frame. This loss is not restored upon the subsequent up-sampling of the frame, resulting in a frame at its own original frame resolution but lacking high frequency detail. The use of a frame suffering from high-frequency loss as a reference frame in a reference frame buffer as described above leads to poor results for motion prediction of subsequent frames referencing the reference frame.
According to video coding standards such as H. 264/AVC, H. 265/HEVC, VVC, and the like, several types of in-loop filters may be conventionally applied to a reconstructed frame output from an encoder or a decoder. Given the implementation of an up-sampler or down-sampler in a coding loop, in-loop filters may, at a further level of granularity, be applied before or applied after an up-sampler.
FIGS. 2A through 2D illustrate coding loop flows including different arrangements of an up-sampler and multiple in-loop filters, including, for example, a deblocking filter, a SAO filter, and ALF.
According to example embodiments of the present disclosure, FIGS. 2A through 2D should be understood as illustrating receiving a frame that has been down-sampled during a video encoding process, as illustrated by FIG. 1A, and is to be up-sampled and output into a reference frame buffer. However, the frame may be received during the video encoding process 100 from a video encoder 108 as illustrated by FIG.  1A or may be received during the video decoding process 118 from a video decoder 122 as illustrated by FIG. 1B.
As illustrated by FIG. 2A, the up-sampler 211 may receive a down-sampled frame output by a video encoder 108 in the case of a video encoding process 100 as illustrated by FIG. 1A, or a down-sampled frame output by a video decoder 122 in the case of a video decoding process 118 as illustrated by FIG. 1B. The up-sampler 211 may up-sample the down-sampled frame and output the frame to the filters, including the deblocking filter 212, the SAO filter 213, and the ALF 214. This enables the encoder to analyze gradient, activity, and similar information of the frame, and utilize in-loop filter tools and parameters and coefficients thereof to evaluate image quality; furthermore, the encoder can transmit optimized parameters and/or coefficients of a deblocking filter, a SAO filter, and ALF to the decoder to enhance accuracy, objective quality, and subjective quality of reconstructed signals.
A deblocking filter 212 may filter a frame on a per-boundary, per-CU basis in a coding order among CUs of the frame, such as a raster scan order wherein a first-coded CU is an uppermost and leftmost CU of the frame, according to video encoding standards. Within a frame, the deblocking filter 212 may filter CUs of both a luma CTB and a chroma CTB of the frame.
FIGS. 3A and 3B illustrate a deblocking method 300 performed by a deblocking filter 212. For illustrative purposes, and without limitation thereto, the deblocking method 300 may be performed according to the HEVC specification or according to the VVC specification. Certain differences between the HEVC specification implementation and the VVC specification implementation thereof shall be noted herein for ease of understanding with reference to FIG. 3A and FIG. 3B, respectively, though this shall not be understood as being a comprehensive accounting of all such differences.
At step 302, the deblocking filter 212 determines block and sub-block boundaries to filter. Within a block, according to the HEVC specification, the deblocking filter 212 may filter NxN pixel boundaries of sub-blocks of the CU. For example, the deblocking filter 212 may filter PU boundaries based on differences  between motion vectors and reference frames of neighboring prediction sub-blocks. That is, the deblocking filter 212 may filter PU boundaries in the event that the difference in at least one motion vector component between blocks on different sides of the boundary is greater than or equal to a threshold of one sampled pixel. Additionally, the deblocking filter 212 may filter TU boundaries in the event that coefficients sampled from pixels of a transform sub-block on either side of the boundary are non-zero. As a consequence, the deblocking filter 212 also filters those boundaries of the CU itself which coincide with outer PU and TU boundaries. However, according to the HEVC specification, sub-block boundaries filtered by the deblocking filter may be at least 8x8 pixels in dimensions, such that boundaries of 4x4 sub-blocks, such as TU sub-blocks representing 4x4 transforms, are not filtered by the deblocking filter 212, reducing filter complexity. Particularly, when a PU has dimensions of 2NxN pixels and N is greater than 4 pixels and the PU is at RQT depth 1 (that is, the first level of the RQT having the largest sub-block dimensions among levels) , the deblocking filter 212 may filter PU boundaries between PUs in addition to outer PU boundaries and may filter TU boundaries at an 8x8 pixel grid in the RQT level.
Alternately, according to VVC specifications, sub-block boundaries filtered by the deblocking filter 212 may also be 4x4 pixels in dimensions where a boundary to filter is a boundary of a luma CTB, including CU boundaries and transform sub-block boundaries. Transform sub-block boundaries may include, for example, transform unit boundaries included by sub-block transform ( “SBT” ) and intra sub-partitioning ( “ISP” ) modes, and transforms due to implicit split of large CUs. Sub-block boundaries filtered by the deblocking filter 212 may still be 8x8 pixels in dimensions where a boundary to filter is a prediction sub-block boundary. Prediction sub-block boundaries may include, for example, prediction unit boundaries introduced by spatio-temporal motion vector prediction ( “STMVP” ) , sub-block temporal motion vector prediction ( “SbTMVP” ) , and affine motion prediction modes.
As in the implementations pertaining to TU boundaries according to HEVC specifications, the deblocking filter 212 may filter SBT and ISP boundaries in the event that coefficients sampled from pixels of a transform sub-block on either side of the  boundary are non-zero. Moreover, to facilitate concurrent computation of the deblocking filter 212 by parallel computing threads, in the event that the filtered boundary is also part of a STMVP, SbTMVP, or affine motion prediction sub-block, the deblocking filter 212 filters at most five samples on one side of the filtered boundary.
As in the implementations pertaining to PU boundaries according to HEVC specifications, the deblocking filter 212 may filter STMVP, SbTMVP, and affine motion prediction boundaries based on differences between motion vectors and reference frames of neighboring prediction sub-blocks. That is, the deblocking filter 212 may filter PU boundaries in the event that the difference in at least one motion vector component between blocks on different sides of the boundary is greater than or equal to a threshold of half a sampled luma pixel. Thus, blocking artifacts originating from boundaries between inter prediction blocks having only a small difference in motion vectors are filtered. Moreover, to facilitate concurrent computation of the deblocking filter 212 by parallel computing threads, in the case that four pixels are sampled between the filtered boundary and a transform block boundary, the filtered boundary is filtered by at most one sampled pixel on each side; in the case that eight pixels are sampled between the filtered boundary and a transform block boundary, the filtered boundary is filtered by at most two sampled pixels on each side; and in the case that any other number of pixels are sampled between the filtered boundary and a transform block boundary, the filtered boundary is filtered by at most three sampled pixels on each side.
Differences between the HEVC and VVC implementations are further described subsequently with reference to at least the alternate steps 312A and 312B.
Within an NxN pixel grid, a deblocking filter 212 may first filter vertical edges of a CU to perform horizontal filtering, and then filter horizontal edges of a CU to perform vertical filtering.
A deblocking filter 212 filtering a boundary of a current CU of a frame may reference a block P and a block Q adjacent to the boundary on either side. While the deblocking filter 212 performs horizontal filtering, a block P may be a block left of the boundary and a block Q may be a block right of the boundary. While the deblocking filter 212 performs vertical filtering, a block P may be a block above the boundary and a block Q may be a block below the boundary.
The deblocking filter 212 may identify a frame as having been down-sampled by determining that an inter-coded block P or block Q has a reference frame having a resolution different from a resolution of the current frame.
At step 304, the deblocking filter 212 determines a boundary strength (bS) of a boundary being filtered. A bS value may determine a strength of deblocking filtering to be applied by the deblocking filter to a boundary. A bS value may be 0, 1, or 2, where 0 indicates no filtering to be applied; 1 indicates weak filtering to be applied; and 2 indicates strong filtering to be applied. A bS value may be determined for a boundary having dimensions of 4x4 pixels, but mapped to a boundary having dimensions of 8x8 pixels. For an 8-pixel segment on a boundary in an 8x8 pixel grid, a bS value of the entire segment may be set to the larger of two bS values for two 4-pixel segments making up the 8-pixel segment.
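The mapping of two 4-pixel-segment bS values onto an 8-pixel segment can be sketched as follows; the function name is hypothetical:

```python
def segment_bs(bs_4px_a, bs_4px_b):
    """Boundary strength of an 8-pixel segment, taken as the larger of
    the bS values of its two constituent 4-pixel segments."""
    assert bs_4px_a in (0, 1, 2) and bs_4px_b in (0, 1, 2)
    return max(bs_4px_a, bs_4px_b)
```

Thus a segment whose halves were assigned bS values 0 and 2 is filtered with strong filtering over its whole length.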
Conventionally, the deblocking filter 212 may only determine a bS value of 2 in the case that block P or block Q is an intra-coded block.
According to an example embodiment of the present disclosure, the deblocking filter 212 may also determine a bS value of 2 in the case that block P or block Q is an inter-coded block rather than an intra-coded block, and has a resolution different from a resolution of the current frame. According to another example embodiment of the present disclosure, the deblocking filter 212 may determine a bS value of 1 in the case that block P or block Q is an inter-coded block rather than an intra-coded block, and has a resolution different from a resolution of the current frame. FIGS. 4A and 4B illustrate flowcharts of deblocking filter logic according to example embodiments of the present disclosure.
Further examples of deblocking filter logic according to example embodiments of the present disclosure are described with reference to Tables 4, 5, 6, 7, 8, 9, and 10. For the purpose of illustration, and without limitation to the context thereof, these example embodiments are based on deblocking filter logic according to the VVC specification.
Based on the VVC specification, the deblocking filter 212 receives at least the following inputs: a coordinate (xCb, yCb) locating an upper-left sample pixel of a current coding block relative to an upper-left sample of a current frame; a variable nCbW specifying width of the current coding block; a variable nCbH specifying height of the current coding block; a variable edgeType specifying whether a vertical edge (denoted by, for example, the value EDGE_VER) or a horizontal edge (denoted by, for example, the value EDGE_HOR) is filtered; a variable cIdx specifying the color component of the current coding block; and a two-dimensional array edgeFlags having dimensions (nCbW) x (nCbH) .
The deblocking filter 212 may then determine bS values bS [xD i] [yD j] for xD i coordinates ranging over i = 0 . . . xN and yD j coordinates ranging over j = 0 . . . yN, where xD i, yD j, xN, and yN are determined as follows:
The deblocking filter 212 sets a variable gridSize:
gridSize = cIdx == 0 ? 4 : 8
In the event that edgeType has value EDGE_VER:
xD i = ( i * gridSize )
yD j = cIdx == 0 ? ( j << 2 ) : ( j << 1 )
xN is set equal to Max ( 0, ( nCbW / gridSize ) - 1 )
yN = cIdx == 0 ? ( nCbH / 4 ) - 1 : ( nCbH / 2 ) - 1
Otherwise, edgeType has value EDGE_HOR:
xD i = cIdx == 0 ? ( i << 2 ) : ( i << 1 )
yD j = ( j * gridSize )
xN = cIdx == 0 ? ( nCbW / 4 ) - 1 : ( nCbW / 2 ) - 1
yN = Max ( 0, ( nCbH / gridSize ) - 1 )
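The grid derivation above can be sketched in Python; the function name `deblock_grid` and the string values for edgeType are illustrative stand-ins for the specification's variables:

```python
def deblock_grid(nCbW, nCbH, edgeType, cIdx):
    """Derive the xD/yD coordinate grids over which bS values are set.
    edgeType: 'EDGE_VER' or 'EDGE_HOR'; cIdx 0 = luma, >0 = chroma."""
    gridSize = 4 if cIdx == 0 else 8
    if edgeType == 'EDGE_VER':
        xN = max(0, nCbW // gridSize - 1)
        yN = (nCbH // 4 - 1) if cIdx == 0 else (nCbH // 2 - 1)
        xD = [i * gridSize for i in range(xN + 1)]
        yD = [(j << 2) if cIdx == 0 else (j << 1) for j in range(yN + 1)]
    else:  # EDGE_HOR
        xN = (nCbW // 4 - 1) if cIdx == 0 else (nCbW // 2 - 1)
        yN = max(0, nCbH // gridSize - 1)
        xD = [(i << 2) if cIdx == 0 else (i << 1) for i in range(xN + 1)]
        yD = [j * gridSize for j in range(yN + 1)]
    return xD, yD
```

For a 16x16 luma block with vertical edges, this yields candidate edge positions every 4 samples in both directions.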
Thus, for xD i coordinates ranging over i = 0 . . . xN and yD j coordinates ranging over j = 0 . . . yN, bS values bS [xD i] [yD j] are set as follows:
For xD i and yD j where edgeFlags [xD i] [yD j] is 0, bS [xD i] [yD j] is 0.
For xD i and yD j where edgeFlags [xD i] [yD j] is not 0, first sample values p 0 and q 0 are derived. In the event that edgeType has value EDGE_VER, p 0 is set to recPicture [xCb + xD i -1] [yCb + yD j] and q 0 is set to recPicture [xCb + xD i] [yCb + yD j] . Otherwise, edgeType has value EDGE_HOR, and p 0 is set to recPicture [xCb + xD i] [yCb + yD j -1] and q 0 is set to recPicture [xCb + xD i] [yCb + yD j] . Next, bS values bS [xD i] [yD j] are set as follows:
If cIdx has value 0 and both samples p 0 and q 0 are in respective coding blocks with intra_bdpcm_luma_flag having value 1, bS [xD i] [yD j] is set to 0.
Otherwise, in the event that cIdx has value greater than 0 and both sampled pixels p 0 and q 0 are in respective coding blocks with intra_bdpcm_chroma_flag having value 1, bS [xD i] [yD j] is set to 0.
Otherwise, in the event that a sampled pixel p 0 or q 0 is in a coding block of a coding unit coded by intra prediction mode, bS [xD i] [yD j] is set to 2.
Otherwise, in the event that the block edge is also a transform block edge and a sampled pixel p 0 or q 0 is in a coding block with ciip_flag having value 1, bS[xD i] [yD j] is set to 2.
Otherwise, in the event that the block edge is also a transform block edge and the sampled pixel p 0 or q 0 is in a transform block which contains one or more non-zero transform coefficient levels, bS [xD i] [yD j] is set equal to 1.
Otherwise, in the event that a prediction mode of a first coding sub-block containing the sampled pixel p 0 is different from a prediction mode of a second coding sub-block containing the sampled pixel q 0 (i.e., one of these coding sub-blocks is coded in IBC prediction mode and the other of the coding sub-blocks is coded in inter prediction mode) , bS [xD i] [yD j] is set to 1.
Otherwise, in the event that cIdx has value 0, edgeFlags [xD i] [yD j] has value 2, and one or more of the following conditions are true, bS [xD i] [yD j] is set to 1:
A first coding sub-block containing the sampled pixel p 0 and a second coding sub-block containing the sampled pixel q 0 are both coded by IBC prediction mode, and an absolute difference between the horizontal or vertical component of the block vectors used in motion prediction of the two coding sub-blocks is greater than or equal to 8 in units of 1/16 luma sampled pixels; and/or:
In motion prediction of a first coding sub-block containing the sampled pixel p 0, different reference pictures or a different number of motion vectors are used than in motion prediction of a second coding sub-block containing the sampled pixel q 0;and/or:
(Herein, determination of whether the reference pictures used for the two coding sub-blocks are same or different may be based only on which pictures are referenced, without regard to whether a prediction is formed using an index into reference picture list 0 or an index into reference picture list 1, and also without regard to whether the index position within a reference picture list is different; and
Herein, the number of motion vectors that are used in motion prediction of a coding sub-block with upper-left sample covering (xSb, ySb) is equal to PredFlagL0 [xSb] [ySb] + PredFlagL1 [xSb] [ySb] . ) 
A first motion vector is used in motion prediction of a first coding sub-block containing the sample p 0 and a second motion vector is used in motion prediction of a second coding sub-block containing the sample q 0, and an absolute difference between the horizontal component or vertical component of the first and the second motion vectors is greater than or equal to 8 in units of 1/16 luma sampled pixels; and/or:
A first and a second motion vector and a first and a second reference picture are used in motion prediction of a first coding sub-block containing the sample p 0, a third and a fourth motion vector for the first and the second reference pictures are used in motion prediction of a second coding sub-block containing the sample q 0, and an absolute difference between the horizontal or vertical component of two respective motion vectors used in motion prediction of the two coding sub-blocks for either  reference picture is greater than or equal to 8 in units of 1/16 luma sampled pixels; and/or:
A first and a second motion vector for a first reference picture are used in motion prediction of a first coding sub-block containing the sampled pixel p 0, a third and a fourth motion vector for a second reference picture are used in motion prediction of a second coding sub-block containing the sampled pixel q 0, and both following conditions are true:
An absolute difference between the horizontal or vertical component of a list 0 motion vector used in motion prediction of the two coding sub-blocks is greater than or equal to 8 in 1/16 luma sampled pixels, or an absolute difference between the horizontal or vertical component of a list 1 motion vector used in motion prediction of the two coding sub-blocks is greater than or equal to 8 in units of 1/16 luma sampled pixels; and:
An absolute difference between the horizontal or vertical component of a list 0 motion vector used in motion prediction of a first coding sub-block containing the sample p 0 and a list 1 motion vector used in motion prediction of a second coding sub-block containing the sample q 0 is greater than or equal to 8 in units of 1/16 luma sampled pixels, or an absolute difference between the horizontal or vertical component of a list 1 motion vector used in motion prediction of the first coding sub-block containing the sample p 0 and a list 0 motion vector used in motion prediction of the second coding sub-block containing the sample q0 is greater than or equal to 8 in units of 1/16 luma sampled pixels.
In the event that none of the above conditions applies, bS [xD i] [yD j] is set to 0.
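As a rough illustration of the cascade above, a simplified Python sketch follows. The dictionary fields are assumptions for illustration, the motion-vector check is collapsed to a single reference list, and several conditions from the full derivation (BDPCM, CIIP, and the bidirectional-prediction cases) are omitted:

```python
MV_THRESH = 8  # 8 units of 1/16 luma sample, i.e. half a luma sample

def boundary_strength(p, q, is_transform_edge):
    """Simplified bS decision for the blocks on either side of an edge.
    p and q describe the two sides; field names are illustrative."""
    # Intra-coded neighbor: strong filtering.
    if p['intra'] or q['intra']:
        return 2
    # Non-zero transform coefficients across a transform edge.
    if is_transform_edge and (p['nonzero_coeffs'] or q['nonzero_coeffs']):
        return 1
    # Different prediction modes (e.g. IBC versus inter).
    if p['pred_mode'] != q['pred_mode']:
        return 1
    # Different reference pictures or different motion-vector counts.
    if p['refs'] != q['refs'] or len(p['mvs']) != len(q['mvs']):
        return 1
    # Motion-vector difference of at least half a luma sample.
    for mv_p, mv_q in zip(p['mvs'], q['mvs']):
        if (abs(mv_p[0] - mv_q[0]) >= MV_THRESH
                or abs(mv_p[1] - mv_q[1]) >= MV_THRESH):
            return 1
    return 0
```

The cascade is evaluated top to bottom, so the strongest applicable condition wins.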
Furthermore, as illustrated by FIG. 5, within every two pairs of P and Q blocks across a CTU boundary, the deblocking filter 212 may determine a bS value of a block by referencing blocks left and above the block within the two pairs of blocks, thus reducing memory requirements for computation.
bS values determined in step 304 may subsequently be referenced by the deblocking filter 212 in step 312B to determine whether the deblocking filter 212 should apply strong filtering, or not.
According to example embodiments of the present disclosure, further modifications are made to the above process of setting values of bS [xD i] [yD j] :
According to an example embodiment of the present disclosure, in the event that edgeFlags [xD i] [yD j] has value 2, and a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sampled pixel p 0 or q 0 is different from a resolution of the current picture, bS [xD i] [yD j] is set to 2, and Table 3 subsequently described with reference to step 312B is further modified as shown by Table 4.
According to another example embodiment of the present disclosure, the one or more of the following conditions wherein, if true, bS [xD i] [yD j] is set to 1 further includes: a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sample p 0 or q 0 being different than a resolution of the current picture, and Table 3 subsequently described with reference to step 312B is further modified as shown by Table 5.
According to another example embodiment of the present disclosure, in the event that edgeFlags [xD i] [yD j] has value 2, and a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sampled pixel p 0 is different from a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sampled pixel q 0, bS [xD i] [yD j] is set to 2, and Table 3 subsequently described with reference to step 312B is further modified as shown by Table 6.
According to another example embodiment of the present disclosure, in the event that edgeFlags [xD i] [yD j] has value 2, and in the event that at least one of the following conditions is true, bS [xD i] [yD j] is set to 2, and Table 3 subsequently described with reference to step 312B is further modified as shown by Table 7: a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sampled pixel p 0 or q 0 is lower than a resolution of the current picture; or a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sample p 0 is different from a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sampled pixel q 0.
According to another example embodiment of the present disclosure, the one or more of the following conditions wherein, if true, bS [xD i] [yD j] is set to 1 further includes: a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sample p 0 or q 0 being lower than a resolution of the current picture, and Table 3 subsequently described with reference to step 312B is further modified as shown by Table 8.
According to another example embodiment of the present disclosure, in the event that edgeFlags [xD i] [yD j] has value 2, and in the event that at least one of the following conditions is true, bS [xD i] [yD j] is set to 2, and Table 3 subsequently described with reference to step 312B is further modified as shown by Table 9: a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sampled pixel p 0 or q 0 is higher than a resolution of the current picture; or a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sample p 0 is different from a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sampled pixel q 0.
According to another example embodiment of the present disclosure, the one or more of the following conditions wherein, if true, bS [xD i] [yD j] is set to 1 further includes: a resolution of one of the reference pictures used in motion prediction of a coding sub-block containing the sample p 0 or q 0 being higher than a resolution of the current picture, and Table 3 subsequently described with reference to step 312B is further modified as shown by Table 10.
With regard to the example embodiments as described above, generally, those example embodiments wherein bS [xD i] [yD j] is set to 2 may describe conditions wherein blocking artifacts are expected to be severe as a result of resolution differences, and thus the deblocking filter 212 should apply a strong filter. Those example  embodiments wherein bS [xD i] [yD j] is set to 1 may describe conditions wherein blocking artifacts are expected to be moderate as a result of resolution differences, and thus the deblocking filter 212 should not apply a strong filter.
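A minimal sketch of the resolution-aware modification described by these embodiments, assuming scalar resolutions and a baseline bS computed by the conventional cascade (all names are hypothetical):

```python
def resolution_aware_bs(base_bs, ref_res_p, ref_res_q, cur_res, strong=True):
    """Raise the boundary strength when either side's reference picture
    resolution differs from the current picture: to 2 when blocking
    artifacts are expected to be severe (strong=True), or to at least 1
    when they are expected to be moderate (strong=False)."""
    if ref_res_p != cur_res or ref_res_q != cur_res:
        return 2 if strong else max(base_bs, 1)
    return base_bs
```

The `strong` flag distinguishes the bS=2 embodiments from the bS=1 embodiments discussed above.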
At step 306A, performed according to the HEVC specification, the deblocking filter 212 determines threshold values β and t C. The threshold values β and t C may be utilized in the subsequent steps 308, 310, and 312 to control strength of the deblocking filter 212. The threshold values β and t C may be determined by lookup of corresponding values β′ and t C′ from a table such as Table 1 below according to the HEVC specification, based on a value of a luma quantization parameter Q (also referred to as qP L) .
Q    0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18
β′   0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  6  7  8
t C′ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  1
Q    19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37
β′   9  10 11 12 13 14 15 16 17 18 20 22 24 26 28 30 32 34 36
t C′ 1  1  1  1  1  1  1  1  2  2  2  2  3  3  3  3  4  4  4
Q    38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
β′   38 40 42 44 46 48 50 52 54 56 58 60 62 64 -  -
t C′ 5  5  6  6  7  8  9  10 11 13 14 16 18 20 22 24
Values of t C may be determined from values of t C′ by the following equation:
t C = BitDepth < 10 ? ( t C′ + 2 ) >> ( 10 - BitDepth ) : t C′ * ( 1 << ( BitDepth - 10 ) )
According to the VVC specification, Table 1 may be modified by extending Q values through to a maximum of 63, by extending β’ values as follows, and by  replacing t C’ values with the following (corresponding to Q values 0 through 63, in order) :
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 5, 5, 5, 5, 7, 7, 8, 9, 10, 10, 11, 13, 14, 15, 17, 19, 21, 24, 25, 29, 33, 36, 41, 45, 51, 57, 64, 71, 80, 89, 100, 112, 125, 141, 157, 177, 198, 222, 250, 280, 314, 352, 395]
Q    0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16
β′   0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  6
t C′ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
Q    17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33
β′   7  8  9  10 11 12 13 14 15 16 17 18 20 22 24 26 28
t C′ 0  3  4  4  4  4  5  5  5  5  7  7  8  9  10 10 11
Q    34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50
β′   30 32 34 36 38 40 42 44 46 48 50 52 54 56 58 60 62
t C′ 13 14 15 17 19 21 24 25 29 33 36 41 45 51 57 64 71
Q    51 52 53  54  55  56  57  58  59  60  61  62  63  64  65
β′   64 66 68  70  72  74  76  78  80  82  84  86  88  -   -
t C′ 80 89 100 112 125 141 157 177 198 222 250 280 314 352 395
Q may be determined based on pixel samples from reconstructed luma sub-blocks of the current CU.
Q = Clip3 ( 0, 65, qP + 2 * ( bS - 1 ) + ( slice_tc_offset_div2 << 1 ) )
A value of β may be derived from β’ as follows:
β = β′ * ( 1 << ( BitDepth Y - 8 ) )
A value of t C may be derived from t C’ as follows:
t C = BitDepth < 10 ? ( t C′ + 2 ) >> ( 10 - BitDepth ) : t C′ * ( 1 << ( BitDepth - 10 ) )
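The Q clipping and bit-depth scaling steps above can be sketched together in Python; `beta_table` and `tc_table` stand in for the Table 1 lookup values and are assumed inputs:

```python
def clip3(lo, hi, x):
    """Clip x into the inclusive range [lo, hi]."""
    return max(lo, min(hi, x))

def derive_thresholds(qp, bs, slice_tc_offset_div2, bit_depth,
                      beta_table, tc_table):
    """Derive beta and tC: clip Q, look up the table values beta'/tC',
    then scale them for the operating bit depth."""
    q = clip3(0, 65, qp + 2 * (bs - 1) + (slice_tc_offset_div2 << 1))
    beta_p = beta_table[q]
    tc_p = tc_table[q]
    beta = beta_p * (1 << (bit_depth - 8))
    if bit_depth < 10:
        tc = (tc_p + 2) >> (10 - bit_depth)
    else:
        tc = tc_p * (1 << (bit_depth - 10))
    return beta, tc
```

At 8-bit depth, for example, a looked-up tC′ of 30 becomes (30 + 2) >> 2 = 8.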
At step 306B, performed according to the VVC specification, in addition to determining the threshold values β and t C, the deblocking filter 212 applies an offset to the luma quantization parameter qP L. According to the VVC specification, this offset may take precedence over the effects of β and t C in controlling strength of the deblocking filter 212.
An offset qpOffset may be derived from luma levels ( “LL” ) of pixel samples from luma sub-blocks of the current CU as follows:
LL = ( ( p 0,0 + p 0,3 + q 0,0 + q 0,3 ) >> 2 ) / ( 1 << bitDepth ) 
FIGS. 6A and 6B, described subsequently, illustrate the coordinates of those particular p and q pixels from which luma levels are sampled.
From the value of LL as derived above, a transfer function may be applied to derive the offset qpOffset. A base value of qpOffset may be derived from a flag sps_ladf_lowest_interval_qp_offset in a slice header of the frame. The slice header further contains flags specifying lower bounds of multiple luma intensity level intervals. For each of these intervals, if LL exceeds the lower bound set for the interval, the base value of qpOffset is offset by an offset value in the range of -63 to 63, inclusive, preset for the interval in an offset array recorded in the slice header.
According to the VVC specification, qP L may be derived as follows:
qP L = ( (Qp Q+Qp P+1) >> 1)
Where Qp Q is a quantization parameter of a coding block containing the pixel q 0,0 and Qp P is a quantization parameter of a coding block containing the pixel p 0, 0, as FIGS. 6A and 6B illustrate subsequently.
According to example embodiments of the present disclosure, qP L may be derived as follows:
qP L = ((Qp Q+Qp P+1)>>1) +qpOffset
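The luma-level computation and the modified qP L derivation above can be sketched as follows; the function names are illustrative:

```python
def luma_level(p00, p03, q00, q03, bit_depth):
    """LL = ((p0,0 + p0,3 + q0,0 + q0,3) >> 2) / (1 << bitDepth)."""
    return ((p00 + p03 + q00 + q03) >> 2) / (1 << bit_depth)

def luma_qp(qp_q, qp_p, qp_offset=0):
    """Average the two block QPs with rounding, then add the
    luma-level-dependent offset per the modified derivation."""
    return ((qp_q + qp_p + 1) >> 1) + qp_offset
```

With qp_offset = 0 this reduces to the unmodified VVC derivation of qP L.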
At step 308A, the deblocking filter 212 determines whether the deblocking filter 212 is active for a first four vertical lines of pixels or horizontal lines of pixels running across an 8x8 boundary, and whether the deblocking filter is active for a second four vertical lines of pixels or horizontal lines of pixels running across the 8x8 boundary. 
At step 308B, the deblocking filter 212 determines whether the deblocking filter 212 is active for a first four vertical lines of pixels or horizontal lines of pixels running across an 8x8 or 4x4 boundary, and whether the deblocking filter is active for a second four vertical lines of pixels or horizontal lines of pixels running across the 8x8 or 4x4 boundary.
FIGS. 6A and 6B illustrate determining whether the deblocking filter 212 is active for a first four horizontal lines and a second four horizontal lines running across a vertical 8x8 boundary according to example embodiments of the present disclosure, the lines being numbered from 0 to 7; for vertical lines, whether the deblocking filter 212 is active may be derived similarly. For each line, six pixels of the line on either side of the boundary are sampled. As illustrated by FIGS. 6A and 6B, the deblocking filter 212 samples the pixels p2 0, p1 0, p0 0, q0 0, q1 0, and q2 0 of the first line and the pixels p2 3, p1 3, p0 3, q0 3, q1 3, and q2 3 in the fourth line among the first four lines to determine whether the deblocking filter is active for the first four lines.
From the intensity values of these pixels, the following calculations are performed with regard to the first four lines.
dp0 = |p2 0-2*p1 0+p0 0|
dp3 = |p2 3-2*p1 3+p0 3|
dq0 = |q2 0-2*q1 0+q0 0|
dq3 = |q2 3-2*q1 3+q0 3|
In the case the sum of dp0, dp3, dq0, and dq3 is less than the value of β, the deblocking filter 212 will be active for the first four lines, and furthermore, the following variables are also set as inputs for filters.
The variable dE is set equal to 1.
If dp0 + dp3 < (β+ (β>> 1) ) >> 3, the variable dEp1 is set equal to 1.
If dq0 + dq3 < (β+ (β>> 1) ) >> 3, the variable dEq1 is set equal to 1.
In the case the sum of dp0, dp3, dq0, and dq3 is not less than the value of β, the deblocking filter 212 will not be active for the first four lines.
The deblocking filter 212 also samples the pixels p2 4, p1 4, p0 4, q0 4, q1 4, and q2 4 in the first line and the pixels p2 7, p1 7, p0 7, q0 7, q1 7, and q2 7 in the fourth line among  the second four lines to determine whether the deblocking filter is active for the second four horizontal lines. This is performed in a manner similar to the above-mentioned process for the first four lines.
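The per-group activity decision above can be sketched as follows; the nested-list pixel layout, where index k of a line is the pixel k samples from the boundary, is an assumption for illustration:

```python
def filter_active(p, q, beta):
    """Decide whether deblocking is active for a group of four lines,
    sampling lines 0 and 3. Returns (active, dEp1, dEq1)."""
    # Second-difference activity measures on each side of the boundary.
    dp0 = abs(p[0][2] - 2 * p[0][1] + p[0][0])
    dp3 = abs(p[3][2] - 2 * p[3][1] + p[3][0])
    dq0 = abs(q[0][2] - 2 * q[0][1] + q[0][0])
    dq3 = abs(q[3][2] - 2 * q[3][1] + q[3][0])
    active = dp0 + dp3 + dq0 + dq3 < beta
    dEp1 = dp0 + dp3 < (beta + (beta >> 1)) >> 3
    dEq1 = dq0 + dq3 < (beta + (beta >> 1)) >> 3
    return active, dEp1, dEq1
```

A perfectly flat region has zero activity on both sides, so the filter is active for it whenever beta is positive.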
At step 310A, performed according to the HEVC specification, the deblocking filter 212 determines whether strong or weak filtering is applied for the first four vertical lines or horizontal lines in the 8x8 boundary in the case that the deblocking filter 212 is active for those lines, and whether strong or weak filtering is applied for the second four vertical lines or horizontal lines in the 8x8 boundary in the case that the deblocking filter 212 is active for those lines.
At step 310B, performed according to the VVC specification, the deblocking filter 212 determines whether strong or weak filtering is applied for the first four vertical lines or horizontal lines in the 8x8 or 4x4 boundary in the case that the deblocking filter 212 is active for those lines, and whether strong or weak filtering is applied for the second four vertical lines or horizontal lines in the 8x8 or 4x4 boundary in the case that the deblocking filter 212 is active for those lines.
The deblocking filter 212 applies strong filtering to the first four lines if the following two sets of conditions are met, and weak filtering otherwise.
2*(dp0 + dq0) < (β >> 2), |p3,0 - p0,0| + |q0,0 - q3,0| < (β >> 3), and |p0,0 - q0,0| < (5*tC + 1) >> 1
2*(dp3 + dq3) < (β >> 2), |p3,3 - p0,3| + |q0,3 - q3,3| < (β >> 3), and |p0,3 - q0,3| < (5*tC + 1) >> 1
The deblocking filter 212 determines whether to apply strong or weak filtering to the second four lines in a manner similar to the above-mentioned process for the first four lines.
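The two condition sets above can be sketched as follows; again, the p[line][k] indexing (pixel k samples away from the boundary on a given line, four pixels per side) is a hypothetical convention for illustration.

```python
def use_strong_filter(p, q, beta, tC):
    # p[line][k]: pixel k samples from the boundary on the P side of the
    # line (q likewise); lines 0 and 3 of the segment are checked.
    def line_ok(pl, ql):
        dp = abs(pl[2] - 2 * pl[1] + pl[0])   # second difference, P side
        dq = abs(ql[2] - 2 * ql[1] + ql[0])   # second difference, Q side
        return (2 * (dp + dq) < (beta >> 2)
                and abs(pl[3] - pl[0]) + abs(ql[0] - ql[3]) < (beta >> 3)
                and abs(pl[0] - ql[0]) < ((5 * tC + 1) >> 1))
    # Strong filtering only if both sampled lines satisfy all conditions.
    return line_ok(p[0], q[0]) and line_ok(p[3], q[3])
```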
At step 312A, performed according to the HEVC specification, the deblocking filter 212 applies a strong filter to vertical lines or horizontal lines wherein the deblocking filter 212 determined to apply a strong filter.
The strong filter is applied to three pixels p0, p1, and p2 on the block P side of the boundary, with four pixels total as input, outputting pixels p0′, p1′, and p2′, respectively; and to three pixels q0, q1, and q2 on the block Q side of the boundary, with four pixels total as input, outputting pixels q0′, q1′, and q2′, respectively. The outputs are derived as below.
p0′ = (p2 + 2*p1 + 2*p0 + 2*q0 + q1 + 4) >> 3
q0′ = (p1 + 2*p0 + 2*q0 + 2*q1 + q2 + 4) >> 3
p1′ = (p2 + p1 + p0 + q0 + 2) >> 2
q1′ = (p0 + q0 + q1 + q2 + 2) >> 2
p2′ = (2*p3 + 3*p2 + p1 + p0 + q0 + 4) >> 3
q2′ = (p0 + q0 + q1 + 3*q2 + 2*q3 + 4) >> 3
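The six output equations can be written directly in Python as a sketch; the function and variable names are illustrative, and the HEVC clipping of each output to within ±2*tC of its input is omitted for brevity.

```python
def strong_filter_line(p, q):
    # p = [p0, p1, p2, p3] and q = [q0, q1, q2, q3] are the four pixels on
    # each side of the boundary along one line, ordered outward from the
    # boundary; returns the three filtered pixels per side.
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    p0n = (p2 + 2 * p1 + 2 * p0 + 2 * q0 + q1 + 4) >> 3
    q0n = (p1 + 2 * p0 + 2 * q0 + 2 * q1 + q2 + 4) >> 3
    p1n = (p2 + p1 + p0 + q0 + 2) >> 2
    q1n = (p0 + q0 + q1 + q2 + 2) >> 2
    p2n = (2 * p3 + 3 * p2 + p1 + p0 + q0 + 4) >> 3
    q2n = (p0 + q0 + q1 + 3 * q2 + 2 * q3 + 4) >> 3
    return [p0n, p1n, p2n], [q0n, q1n, q2n]
```

Across a step edge from 100 to 108 the filter produces a smooth ramp (103, 102, 101 on the P side and 105, 106, 107 on the Q side), while a flat region passes through unchanged.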
At step 312B, performed according to the VVC specification, the deblocking filter 212 applies a strong filter to vertical lines or horizontal lines wherein the deblocking filter 212 determined to apply a strong filter.
In step 312B, in addition to the strong filter as described above with reference to step 312A, the deblocking filter 212 may apply a filter according to the VVC specification, for luma CTBs in particular, to a sub-block boundary that is 4x4 rather than 8x8 in dimensions, as described above with reference to step 302. Rather than being applied to three pixels of each of the respective blocks on either side of the boundary, such a filter may be applied to one pixel of each of the respective blocks on either side of the boundary, where a block to one side of the boundary has a width of 4 pixels or less in the event that the boundary is vertical, or a height of 4 pixels or less in the event that the boundary is horizontal. Such implementations may handle blocking artifacts from rectangular transform shapes, and may facilitate concurrent computation of the deblocking filter 212 by parallel computing threads.
Additionally, in step 312B, the deblocking filter 212 may apply a stronger deblocking filter (for example, a bilinear filter) according to the VVC specification, for luma CTBs in particular, in the event that sampled pixels on either the P side or the Q side of the boundary belong to a large block and in the event that two further conditions are also satisfied. Large blocks may be those blocks where width of a horizontal edge is greater than or equal to 32 pixels, or those blocks where height of a vertical edge is greater than or equal to 32 pixels.
The two further conditions are determined as follows:
Condition2 = (d < β) ? TRUE : FALSE
Condition3 = StrongFilterCondition = (dpq is less than (β >> 2), sp3 + sq3 is less than (3*β >> 5), and Abs(p0 - q0) is less than (5*tC + 1) >> 1) ? TRUE : FALSE
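A minimal sketch of these two conditions, taking the gradient sums d, dpq, sp3, and sq3 as precomputed inputs (their derivations belong to the VVC specification and are not reproduced here):

```python
def long_block_conditions(d, dpq, sp3, sq3, p0, q0, beta, tC):
    # Condition2: overall activity across the boundary is below beta.
    condition2 = d < beta
    # Condition3 (StrongFilterCondition): low local activity, flat outer
    # samples, and a small step across the boundary relative to tC.
    condition3 = (dpq < (beta >> 2)
                  and sp3 + sq3 < ((3 * beta) >> 5)
                  and abs(p0 - q0) < ((5 * tC + 1) >> 1))
    return condition2, condition3
```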
The outputs are then derived as follows, where values pi are block boundary samples for i = 0 to Sp - 1 on the P side of the boundary, values qj are block boundary samples for j = 0 to Sq - 1 on the Q side of the boundary, and pi′ and qj′ are outputs for those respective inputs:
pi′ = (fi*Middles,t + (64 - fi)*Ps + 32) >> 6, clipped to pi ± tcPDi     (3-1)
qj′ = (gj*Middles,t + (64 - gj)*Qs + 32) >> 6, clipped to qj ± tcPDj     (3-2)
wherein tcPDi and tcPDj are position-dependent clippings, and gj, fi, Middles,t, Ps, and Qs are derived based on Table 2 below.
Figure PCTCN2020073563-appb-000001
Figure PCTCN2020073563-appb-000002
Additionally, in step 312B, the deblocking filter 212 may apply a stronger deblocking filter according to the VVC specification, for chroma CTBs in particular, to a sub-block boundary 8x8 in dimensions as described above with reference to step 302, in the event that both the P side and the Q side of the chroma CTB boundary have dimensions greater than or equal to 8 pixels of chroma sample and in the event that three further conditions are also satisfied.
The first condition is satisfied by a determination to apply strong filtering as described below with reference to Table 3, and the deblocking filter 212 determining in step 312B as described above that sampled pixels on both the P side and the Q side of the chroma CTB boundary belong to large blocks.
Table 3 below describes a decision-making process wherein the deblocking filter 212 may determine to apply strong filtering, or may not. "Adjacent blocks" may refer to the block on the P side and the block on the Q side of the filtered boundary. Where any of the Y, U, or V bS values in the rightmost three columns is determined as 2, the first condition may be satisfied. Where any of the Y, U, or V bS values in the rightmost three columns is determined as 1, and sampled pixels on both the P side and the Q side of the chroma boundary are determined to belong to large blocks, the first condition may also be satisfied.
Figure PCTCN2020073563-appb-000003
Table 4 below describes a decision-making process according to another example embodiment of the present disclosure, as referenced above.
Figure PCTCN2020073563-appb-000004
Figure PCTCN2020073563-appb-000005
Table 5 below describes a decision-making process according to another example embodiment of the present disclosure, as referenced above.
Figure PCTCN2020073563-appb-000006
Figure PCTCN2020073563-appb-000007
Table 6 below describes a decision-making process according to another example embodiment of the present disclosure, as referenced above.
Figure PCTCN2020073563-appb-000008
Table 7 below describes a decision-making process according to another example embodiment of the present disclosure, as referenced above.
Figure PCTCN2020073563-appb-000009
Table 8 below describes a decision-making process according to another example embodiment of the present disclosure, as referenced above.
Figure PCTCN2020073563-appb-000010
Figure PCTCN2020073563-appb-000011
Table 9 below describes a decision-making process according to another example embodiment of the present disclosure, as referenced above.
Figure PCTCN2020073563-appb-000012
Figure PCTCN2020073563-appb-000013
Table 10 below describes a decision-making process according to another example embodiment of the present disclosure, as referenced above.
Figure PCTCN2020073563-appb-000014
The second condition is satisfied by the deblocking filter 212 determining, in step 308 as described above, that the deblocking filter 212 is active across the boundary.
The third condition is satisfied by the deblocking filter 212 determining, in step 310 as described above, to apply strong filtering over the boundary.
At step 314, the deblocking filter 212 applies a weak filter to vertical lines or horizontal lines wherein the deblocking filter 212 determined to apply a weak filter.
To apply a weak filter, the deblocking filter 212 determines a value Δ.
Δ = (9*(q0 - p0) - 3*(q1 - p1) + 8) >> 4
Then, when the absolute value of Δ is less than tC*10, the weak filter is applied to pixels p0 and q0 on either side of the boundary, outputting pixels p0′ and q0′, respectively.
Δ = Clip3(-tC, tC, Δ)
p0′ = Clip1Y(p0 + Δ)
q0′ = Clip1Y(q0 - Δ)
Furthermore, depending on the previously calculated values of dEp1 and dEq1, the weak filter may be applied to either or both of pixels p1 and q1 on either side of the boundary, each with three pixels total as input, outputting either or both of pixels p1′ and q1′, respectively.
If dEp1 is equal to 1:
Δp = Clip3(-(tC >> 1), tC >> 1, (((p2 + p0 + 1) >> 1) - p1 + Δ) >> 1)
p1′ = Clip1Y(p1 + Δp)
If dEq1 is equal to 1:
Δq = Clip3(-(tC >> 1), tC >> 1, (((q2 + q0 + 1) >> 1) - q1 + Δ) >> 1)
q1′ = Clip1Y(q1 + Δq)
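The weak filter steps above can be sketched as follows, assuming 8-bit samples so that Clip1Y clips to [0, 255]; the function names are illustrative.

```python
def clip3(lo, hi, x):
    # Clip3(lo, hi, x): constrain x to the inclusive range [lo, hi].
    return max(lo, min(hi, x))

def weak_filter_line(p, q, tC, dEp1, dEq1, max_val=255):
    # p = [p0, p1, p2], q = [q0, q1, q2] along one line across the boundary.
    p0, p1, p2 = p
    q0, q1, q2 = q
    delta = (9 * (q0 - p0) - 3 * (q1 - p1) + 8) >> 4
    if abs(delta) >= tC * 10:
        return p[:], q[:]          # filter not applied on this line
    delta = clip3(-tC, tC, delta)
    clip1 = lambda v: clip3(0, max_val, v)   # Clip1_Y for 8-bit samples
    p0n, q0n = clip1(p0 + delta), clip1(q0 - delta)
    p1n, q1n = p1, q1
    if dEp1:
        dp = clip3(-(tC >> 1), tC >> 1, (((p2 + p0 + 1) >> 1) - p1 + delta) >> 1)
        p1n = clip1(p1 + dp)
    if dEq1:
        dq = clip3(-(tC >> 1), tC >> 1, (((q2 + q0 + 1) >> 1) - q1 + delta) >> 1)
        q1n = clip1(q1 + dq)
    return [p0n, p1n, p2], [q0n, q1n, q2]
```

For a small step from 100 to 108 with tC = 4, Δ evaluates to 3, pulling p0 up to 103 and q0 down to 105; with tC = 0 the magnitude test fails and the line is left untouched.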
According to example embodiments of the present disclosure implementing VVC, the above-described method 300 may be largely performed in a similar manner, except that the filter strength of a deblocking filter 212 may be further dependent upon the averaged luma level of pixel samples of the reconstructed frame; the tC′ lookup table may be further extended; and stronger deblocking filters may be applied for both the luma and chroma CTBs. Further details of these processes need not be described for understanding of example embodiments of the present disclosure, and shall not be reiterated herein.
Next, a SAO filter may filter a CTB on a per-pixel basis by applying an offset to each pixel based on determining a SAO type of each pixel.
FIG. 7 illustrates an example flowchart of a SAO filter method 700 according to example embodiments of the present disclosure.
At step 702, a SAO filter 213 receives a frame and decides to apply SAO to a CTB of the frame.
A frame may store a flag sao_type_idx in a slice header of the frame, the value thereof indicating whether SAO is to be applied to the CTB, and, if so, which type of SAO is to be applied. A sao_type_idx value of 0 may indicate that SAO is not to be applied to a CTB of the frame; a sao_type_idx value of 1 may indicate that an edge offset filter, as described below, is to be applied to a CTB of the frame; and a sao_type_idx value of 2 may indicate that a band offset filter, as described below, is to be applied to a CTB of the frame.
According to example embodiments of the present disclosure, a sao_type_idx value of 3 may indicate that both edge offset and band offset are to be applied to a CTB of the frame.
Furthermore, each applicable CTB may have further SAO parameters stored, including sao_merge_left_flag, sao_merge_up_flag, SAO type, and four offsets. A sao_merge_left_flag value of 1 for a CTB may indicate that the SAO filter 213 should apply the SAO type and offsets of the CTB left of the current CTB to the current CTB. A sao_merge_up_flag value of 1 for a CTB may indicate that the SAO filter 213 should apply the SAO type and offsets of the CTB above the current CTB to the current CTB.
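A sketch of how these syntax elements might steer SAO; the dictionary-based parameter records are a hypothetical representation for illustration, and the sao_type_idx value 3 (both edge and band offset) is the extension introduced by this disclosure rather than standard HEVC behavior.

```python
def sao_modes(sao_type_idx):
    # Map the sao_type_idx values described above to the filters applied;
    # value 3 (edge and band together) is specific to this disclosure.
    return {0: (), 1: ("edge",), 2: ("band",), 3: ("edge", "band")}[sao_type_idx]

def effective_sao_params(current, left, up):
    # sao_merge_left_flag / sao_merge_up_flag: reuse the SAO type and
    # offsets of the left or above CTB, as described above.
    if current.get("sao_merge_left_flag"):
        return left
    if current.get("sao_merge_up_flag"):
        return up
    return current
```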
At step 704, the SAO filter 213 classifies a CTB as one of several SAO types.
Table 11 below illustrates that each CTB of a frame may be classified as type 0, in which case no SAO will be applied to the CTB, or may be classified as types 1 through 5, where in each case a different SAO will be applied to the CTB. Furthermore, for types 1 through 5, pixels of the CTB will be categorized into one of multiple categories.
Figure PCTCN2020073563-appb-000015
Types 1 through 4 of CTBs are identified by an angle of an edge pattern of pixels including the current pixel p and two neighbor pixels. FIGS. 8A through 8D illustrate possible edge patterns that include the current pixel p and two neighbor pixels. FIG. 8A illustrates an edge pattern made up of pixels including the current pixel p and two neighbor pixels at a 0-degree angle. FIG. 8B illustrates an edge pattern made up of pixels including the current pixel p and two neighbor pixels at a 90-degree angle. FIG. 8C illustrates an edge pattern made up of pixels including the current pixel p and two neighbor pixels at a 135-degree angle. FIG. 8D illustrates an edge pattern made up of pixels including the current pixel p and two neighbor pixels at a 45-degree angle.
At step 706, in the case that a CTB is classified as a type for applying edge offset, the SAO filter 213 classifies a pixel of a CTB according to edge properties.
Each pixel has an 8-bit intensity value ranging from 0 through 255. The current pixel p may be classified by a comparison of its intensity with the two neighbor pixels (in either order) in accordance with Table 12 below.
Figure PCTCN2020073563-appb-000016
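A hedged sketch of the standard HEVC edge-offset categorization that Table 12 describes, comparing the current pixel c against its two neighbors a and b along the pattern direction chosen by the SAO type; the category numbering follows the conventional HEVC layout and is an assumption about the table's content.

```python
def edge_category(c, a, b):
    # c: current pixel intensity; a, b: its two neighbors along the
    # 0/45/90/135-degree pattern (order does not matter).
    if c < a and c < b:
        return 1        # local minimum
    if (c < a and c == b) or (c == a and c < b):
        return 2        # concave corner
    if (c > a and c == b) or (c == a and c > b):
        return 3        # convex corner
    if c > a and c > b:
        return 4        # local maximum
    return 0            # monotonic or flat: no edge offset applied
```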
According to an example embodiment of the present disclosure, the current pixel p may be classified by a comparison of its intensity with the two neighbor pixels (in either order, which determines cases 1 through 5 below), as well as with neighbor pixels in general in two opposing directions (which determines case 0 below), in accordance with Table 13 below.
Figure PCTCN2020073563-appb-000017
Figure PCTCN2020073563-appb-000018
At step 708, based on pixel classification, the SAO filter 213 applies an offset to the current pixel based on an offset value. The offset value of the current pixel may be determined based on the classification of the current pixel. Furthermore, by classifying a strong edge (also referred to as a real edge) based on significant differences in pixels in two opposing directions, the SAO filter 213 may determine pixels on the strong edge that are likely to be smoothed during up-sampling, and apply an offset value to compensate for this behavior.
At step 710, in the case that a CTB is classified as a type for applying band offset, the SAO filter 213 classifies a pixel of a CTB into a band.
A pixel index over the entire range of pixel intensity values may be established by reducing all 8-bit pixel intensity values to their five most significant bits, thus equalizing all pixel intensity values within each of 32 bands, each covering a same-sized segment of the original range of pixel intensity values. Thus, each pixel lies within one of these 32 bands based on its pixel intensity value. Furthermore, each set of four adjacent bands may be grouped together, with each group being identified by its starting position counting from low to high values over the 32 bands.
At step 712, the SAO filter 213 applies an offset to each band based on an offset value. The offset value may be determined by the intensity value of the band. The offset value may reduce distortion of the band.
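The band classification and offset application above can be sketched as follows, assuming 8-bit intensities and a four-band group identified by its starting band; the function names are illustrative.

```python
def band_index(intensity):
    # Reducing an 8-bit intensity to its five most significant bits yields
    # one of 32 equal-width bands (256 / 32 = 8 intensities per band).
    return intensity >> 3

def apply_band_offset(intensity, band_start, offsets, max_val=255):
    # offsets covers the four consecutive bands beginning at band_start;
    # pixels whose band lies outside that group are left unchanged.
    band = band_index(intensity)
    if band_start <= band < band_start + 4:
        intensity = max(0, min(max_val, intensity + offsets[band - band_start]))
    return intensity
```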
Next, an ALF 214 may filter a frame per 4x4 pixel sub-block of a luma CTB and a chroma CTB of a frame.
FIG. 9A illustrates an example flowchart of an ALF method 900 according to example embodiments of the present disclosure.
At step 902, an ALF 214 receives a frame and decides to apply ALF to a luma CTB and/or a chroma CTB of the frame.
A luma CTB has a flag to indicate whether ALF should be applied to the luma CTB. A chroma CTB may have a flag to indicate whether ALF should be applied to the chroma CTB. The ALF 214 may decide to apply ALF based on values of these flags.
A frame may store ALF filter parameters in a slice header of the frame. ALF filter parameters may include 25 sets of luma filter coefficients, which may be accordingly applied to luma CTBs based on classification thereof. According to example embodiments of the present disclosure, ALF filter parameters may include more than 25 sets of luma filter coefficients to accommodate more types of classification, such as 35 sets of luma filter coefficients derived from a classification scheme as described below.
Filter coefficients may be mapped to the pixels that make up the shape of the filter. As illustrated by FIG. 9B, a chroma filter 912 may have a 5x5 pixel diamond shape, and a luma filter 914 may have a 7x7 pixel diamond shape, with each pixel showing an assigned filter coefficient value.
To reduce bit overhead, filter coefficients of different classifications may be merged to some extent. Filter coefficients may be quantized with norm equal to 128. To further reduce multiplication complexity, a bitstream conformance may be applied, wherein a coefficient value of a central position of a filter may fall within a range of 0 through 2^8, and coefficient values of all other positions of the filter may fall within a range of -2^7 through 2^7-1, inclusive.
At step 904, the ALF 214 calculates gradient values of a sub-block of the luma CTB in multiple directions by obtaining reconstructed samples.
Starting from an upper left pixel (i, j) of the sub-block in the frame, a 1-D Laplacian calculation may be performed in four different directions by obtaining reconstructed samples R (x, y) at intervals from pixels (x, y) of the reconstructed frame. Based on 1-D Laplacian calculations, a horizontal gradient of the sub-block may be calculated as follows:
Figure PCTCN2020073563-appb-000019
A vertical gradient of the sub-block may be calculated as follows:
Figure PCTCN2020073563-appb-000020
A gradient of the sub-block in a first diagonal direction may be calculated as follows:
Figure PCTCN2020073563-appb-000021
A gradient of the sub-block in a second diagonal direction may be calculated as follows:
Figure PCTCN2020073563-appb-000022
Rather than sample over the entire 4x4 pixel sub-block, each of the above calculations may be performed as a subsampled 1-D Laplacian calculation, which is performed by subsampling over only the shaded portions of the sub-block as illustrated by FIG. 9C with regard to a vertical direction, FIG. 9D with regard to a horizontal direction, and FIGS. 9E and 9F with regard to diagonal directions. The subsampled pixel positions may be in common for each of the four calculations.
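A simplified sketch of the four 1-D Laplacian gradient sums; for clarity it samples every pixel of the 4x4 sub-block rather than the subsampled positions of FIGS. 9C through 9F, and it assumes the reconstructed frame R is indexed R[y][x] with valid neighbors around the sub-block.

```python
def sub_block_gradients(R, y0, x0):
    # 1-D Laplacian |2*R(c) - R(n1) - R(n2)| accumulated over the 4x4
    # sub-block whose top-left pixel is (y0, x0), in four directions.
    gh = gv = gd0 = gd1 = 0
    for y in range(y0, y0 + 4):
        for x in range(x0, x0 + 4):
            c = 2 * R[y][x]
            gh  += abs(c - R[y][x - 1] - R[y][x + 1])          # horizontal
            gv  += abs(c - R[y - 1][x] - R[y + 1][x])          # vertical
            gd0 += abs(c - R[y - 1][x - 1] - R[y + 1][x + 1])  # 135 degrees
            gd1 += abs(c - R[y - 1][x + 1] - R[y + 1][x - 1])  # 45 degrees
    return gh, gv, gd0, gd1
```

On a frame of alternating vertical stripes, the vertical gradient is zero (columns are constant) while the horizontal gradient accumulates 20 per pixel over the 16 pixels of the sub-block.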
The maximum and minimum values among the horizontal and vertical gradients gh and gv are determined as gh,v^max = max(gh, gv) and gh,v^min = min(gh, gv), and the maximum and minimum values among the two diagonal gradients gd0 and gd1 are determined as gd0,d1^max = max(gd0, gd1) and gd0,d1^min = min(gd0, gd1).
At step 906, the ALF 214 classifies a sub-block of a luma CTB.
For each sub-block of a luma CTB, an ALF 214 classifies the sub-block into one of multiple classes based on a classification index C, which is derived from a directionality D and a quantized value of activity Â of the sub-block. The value of D represents a direction of local gradients in the sub-block, and the value of Â represents activity of local gradients in the sub-block. C may be derived as follows.
C = 5D + Â
Sub-blocks of a chroma CTB are not classified.
From the four values gh,v^max, gh,v^min, gd0,d1^max, and gd0,d1^min, directionality D is set according to the following steps comparing the gradient values to each other and to two threshold values t1 and t2, providing D with a range of values from 0 through 4. When D has this range of possible values, 25 possible values may be derived for C from the above equation, corresponding to 25 different filters that may be applied to the sub-block.
1. If gh,v^max ≤ t1*gh,v^min and gd0,d1^max ≤ t1*gd0,d1^min, D is set to 0.
2. If gh,v^max/gh,v^min > gd0,d1^max/gd0,d1^min, continue to step 3 below; otherwise continue to step 4 below.
3. If gh,v^max > t2*gh,v^min, D is set to 2; otherwise, D is set to 1.
4. If gd0,d1^max > t2*gd0,d1^min, D is set to 4; otherwise, D is set to 3.
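A sketch of this five-value directionality decision; the threshold values t1 = 2 and t2 = 4.5 used as defaults here are assumptions for illustration, not values taken from this disclosure, and the ratio test is rearranged as a cross-multiplication to avoid division by zero.

```python
def directionality(gh, gv, gd0, gd1, t1=2, t2=4.5):
    # Five-value D decision from the four gradient sums of a sub-block.
    hv_max, hv_min = max(gh, gv), min(gh, gv)
    d_max, d_min = max(gd0, gd1), min(gd0, gd1)
    # Step 1: neither direction pair dominates its own minimum.
    if hv_max <= t1 * hv_min and d_max <= t1 * d_min:
        return 0
    # Step 2 (cross-multiplied ratio comparison): which pair dominates?
    if hv_max * d_min > d_max * hv_min:
        # Steps 3: horizontal/vertical dominant; strength picks 1 vs 2.
        return 2 if hv_max > t2 * hv_min else 1
    # Step 4: diagonal dominant; strength picks 3 vs 4.
    return 4 if d_max > t2 * d_min else 3
```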
According to example embodiments of the present disclosure, directionality D may be set according to the following steps instead comparing the gradient values to each other and to three threshold values t 1, t 2, and t 3, providing D with a range of values from 0 through 6. When D has a greater range of possible values than 0 to 4, more possible values than 25 may be derived for C from the above equation. For example, when D has a range of possible values from 0 to 6, 35 possible values may be derived for C from the above equation, corresponding to 35 different filters that may be applied to the sub-block.
1. If
Figure PCTCN2020073563-appb-000037
and
Figure PCTCN2020073563-appb-000038
D is set to 0.
2. If
Figure PCTCN2020073563-appb-000039
continue to step 3 below; otherwise continue to step 5 below.
3. If
Figure PCTCN2020073563-appb-000040
D is set to 1; otherwise continue to step 4 below.
4. If
Figure PCTCN2020073563-appb-000041
and
Figure PCTCN2020073563-appb-000042
D is set to 2; otherwise, D is set to 3.
5. If
Figure PCTCN2020073563-appb-000043
D is set to 4; otherwise continue to step 6 below.
6. If
Figure PCTCN2020073563-appb-000044
and
Figure PCTCN2020073563-appb-000045
D is set to 5; otherwise, D is set to 6.
According to other example embodiments of the present disclosure, directionality D may be set according to the following steps instead comparing the gradient values to each other, the maximum gradient among the gradient values, and to two threshold values t 1 and t 2, providing D with a range of values from 0 through 6.
1. If
Figure PCTCN2020073563-appb-000046
and
Figure PCTCN2020073563-appb-000047
D is set to 0.
2. If
Figure PCTCN2020073563-appb-000048
continue to step 3 below; otherwise continue to step 6 below.
3. If
Figure PCTCN2020073563-appb-000049
and the maximum gradient is horizontal, D is set to 1; otherwise continue to step 4 below.
4. If
Figure PCTCN2020073563-appb-000050
and the maximum gradient is vertical, D is set to 2; otherwise continue to step 5 below.
5. If
Figure PCTCN2020073563-appb-000051
and the maximum gradient is horizontal, D is set to 3; otherwise, D is set to 4.
6. If
Figure PCTCN2020073563-appb-000052
D is set to 5; otherwise, D is set to 6.
Furthermore, activity A is calculated by the following variation of the 1-D Laplacian calculation.
Figure PCTCN2020073563-appb-000053
The value of activity A is quantized to a value Â over a range of 0 through 4.
At step 908, prior to applying a filter to a sub-block, the ALF 214 applies one of several geometric transformations to filter coefficients of the filter. A geometric transformation may be chosen based on comparisons between gradient values according to Table 14 below.
Gradient value comparisons          Transformation
gd2 < gd1 and gh < gv               None
gd2 < gd1 and gv < gh               Diagonal
gd1 < gd2 and gh < gv               Vertical flip
gd1 < gd2 and gv < gh               Rotation
The above geometric transformations may be defined as the following functions:
Diagonal: fD(k, l) = f(l, k)
Vertical flip: fV(k, l) = f(k, K-l-1)
Rotation: fR(k, l) = f(K-l-1, k)
K is the size of the filter, and 0 ≤ k, l ≤ K-1 are coefficient coordinates, such that coordinate (0, 0) is at an upper left corner of the filter and coordinate (K-1, K-1) is at a lower right corner of the filter. Each transformation is applied to the filter coefficients f(k, l) according to gradient values calculated as described above.
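The three transformations can be sketched on a K x K coefficient grid f[k][l], following the coordinate convention above; the nested-list representation and function name are illustrative choices.

```python
def transform_coeffs(f, kind):
    # f[k][l]: coefficient at coordinate (k, l) of a K x K filter grid.
    K = len(f)
    if kind == "diagonal":      # f_D(k, l) = f(l, k)
        return [[f[l][k] for l in range(K)] for k in range(K)]
    if kind == "vflip":         # f_V(k, l) = f(k, K - l - 1)
        return [[f[k][K - 1 - l] for l in range(K)] for k in range(K)]
    if kind == "rotation":      # f_R(k, l) = f(K - l - 1, k)
        return [[f[K - 1 - l][k] for l in range(K)] for k in range(K)]
    return [row[:] for row in f]  # no transformation
```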
At step 910, the ALF 214 applies a filter having a filter coefficient f (k, l) over each sub-block. For a luma sub-block, the filter coefficients to be applied may depend on the filter to be applied among all available filters, according to the classification index C. For chroma sub-blocks, the filter coefficients to be applied may be constant. 
The filter may act upon a sample value R (i, j) of a reconstructed frame, outputting a sample value R’ (i, j) as below.
Figure PCTCN2020073563-appb-000055
L is the filter length, f_{m,n} denotes a filter coefficient, and f(k, l) denotes a decoded filter coefficient.
FIG. 10 illustrates an example system 1000 for implementing the processes and methods described above for implementing resolution-adaptive video coding in deblocking filters.
The techniques and mechanisms described herein may be implemented by multiple instances of the system 1000 as well as by any other computing device, system, and/or environment. The system 1000 shown in FIG. 10 is only one example of a system and is not intended to suggest any limitation as to the scope of use or functionality of any computing device utilized to perform the processes and/or procedures described above. Other well-known computing devices, systems, environments and/or configurations that may be suitable for use with the embodiments include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, game consoles, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, implementations using field programmable gate arrays (“FPGAs” ) and application specific integrated circuits ( “ASICs” ) , and/or the like.
The system 1000 may include one or more processors 1002 and system memory 1004 communicatively coupled to the processor (s) 1002. The processor (s) 1002 may execute one or more modules and/or processes to cause the processor (s) 1002 to perform a variety of functions. In some embodiments, the processor (s) 1002 may include a central processing unit (CPU) , a graphics processing unit (GPU) , both CPU and GPU, or other processing units or components known in the art. Additionally, each of the processor (s) 1002 may possess its own local memory, which also may store program modules, program data, and/or one or more operating systems.
Depending on the exact configuration and type of the system 1000, the system memory 1004 may be volatile, such as RAM, non-volatile, such as ROM, flash memory, miniature hard drive, memory card, and the like, or some combination thereof.  The system memory 1004 may include one or more computer-executable modules 1006 that are executable by the processor (s) 1002.
The modules 1006 may include, but are not limited to, a deblocking filter module 1008, which includes a boundary determining module 1010, a boundary strength determining module 1012, a threshold determining module 1014, an offset applying module 1016, a filter activity determining module 1018, a filter strength determining module 1020, a strong filter applying module 1022, and a weak filter applying module 1024.
The boundary determining module 1010 may be configured to determine block and sub-block boundaries to filter as abovementioned with reference to FIGS. 3A and 3B.
The boundary strength determining module 1012 may be configured to determine a bS value of a boundary being filtered, as abovementioned with reference to FIGS. 3A and 3B.
The threshold determining module 1014 may be configured to determine threshold values, as abovementioned with reference to FIG. 3A.
The offset applying module 1016 may be configured to apply an offset to a luma quantization parameter, as abovementioned with reference to FIG. 3B.
The filter activity determining module 1018 may be configured to determine whether the deblocking filter module 1008 is active for a first four vertical lines of pixels or horizontal lines of pixels running across an 8x8 boundary, and whether the deblocking filter is active for a second four vertical lines of pixels or horizontal lines of pixels running across the 8x8 boundary, as abovementioned with reference to FIG. 3A, or may be configured to determine whether the deblocking filter module 1008 is active for a first four vertical lines of pixels or horizontal lines of pixels running across an 8x8 or 4x4 boundary, and whether the deblocking filter is active for a second four vertical lines of pixels or horizontal lines of pixels running across the 8x8 or 4x4 boundary, as abovementioned with reference to FIG. 3B.
The filter strength determining module 1020 may be configured to determine whether strong or weak filtering is applied for the first four vertical lines or horizontal  lines in the 8x8 boundary in the case that the deblocking filter module 1008 is active for those lines, and whether strong or weak filtering is applied for the second four vertical lines or horizontal lines in the 8x8 boundary in the case that the deblocking filter module 1008 is active for those lines, as abovementioned with reference to FIG. 3A, or may be configured to determine whether strong or weak filtering is applied for the first four vertical lines or horizontal lines in the 8x8 or 4x4 boundary in the case that the deblocking filter module 1008 is active for those lines, and whether strong or weak filtering is applied for the second four vertical lines or horizontal lines in the 8x8 or 4x4 boundary in the case that the deblocking filter module 1008 is active for those lines, as abovementioned with reference to FIG. 3B.
The strong filter applying module 1022 may be configured to apply a strong filter to vertical lines or horizontal lines wherein the deblocking filter module 1008 determined to apply a strong filter, as abovementioned with reference to FIGS. 3A and 3B.
The weak filter applying module 1024 may be configured to apply a weak filter to vertical lines or horizontal lines wherein the deblocking filter module 1008 determined to apply a weak filter, as abovementioned with reference to FIGS. 3A and 3B.
The system 1000 may additionally include an input/output (I/O) interface 1040 for receiving video source data and bitstream data, and for outputting reconstructed frames into a reference frame buffer, a transmission buffer, and/or a display buffer. The system 1000 may also include a communication module 1050 allowing the system 1000 to communicate with other devices (not shown) over a network (not shown) . The network may include the Internet, wired media such as a wired network or direct-wired connections, and wireless media such as acoustic, radio frequency (RF) , infrared, and other wireless media.
FIG. 11 illustrates an example system 1100 for implementing the processes and methods described above for implementing resolution-adaptive video coding in SAO filters.
The techniques and mechanisms described herein may be implemented by multiple instances of the system 1100 as well as by any other computing device, system, and/or environment. The system 1100 shown in FIG. 11 is only one example of a system and is not intended to suggest any limitation as to the scope of use or functionality of any computing device utilized to perform the processes and/or procedures described above. Other well-known computing devices, systems, environments and/or configurations that may be suitable for use with the embodiments include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, game consoles, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, implementations using field programmable gate arrays (“FPGAs” ) and application specific integrated circuits ( “ASICs” ) , and/or the like.
The system 1100 may include one or more processors 1102 and system memory 1104 communicatively coupled to the processor (s) 1102. The processor (s) 1102 may execute one or more modules and/or processes to cause the processor (s) 1102 to perform a variety of functions. In some embodiments, the processor (s) 1102 may include a central processing unit (CPU) , a graphics processing unit (GPU) , both CPU and GPU, or other processing units or components known in the art. Additionally, each of the processor (s) 1102 may possess its own local memory, which also may store program modules, program data, and/or one or more operating systems.
Depending on the exact configuration and type of the system 1100, the system memory 1104 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, miniature hard drive, memory card, and the like), or some combination thereof. The system memory 1104 may include one or more computer-executable modules 1106 that are executable by the processor(s) 1102.
The modules 1106 may include, but are not limited to, a SAO filter module 1108. The SAO filter module 1108 may include a filter application deciding module 1110, a CTB classifying module 1112, a pixel classifying module 1114, an edge offset applying module 1116, a band classifying module 1118, and a band offset applying module 1120.
The filter application deciding module 1110 may be configured to receive a frame and decide to apply SAO to a CTB of the frame, as abovementioned with reference to FIG. 7.
The CTB classifying module 1112 may be configured to classify a CTB as one of several SAO types, as abovementioned with reference to FIG. 7.
The pixel classifying module 1114 may be configured to classify a pixel of a CTB according to edge properties in the case that a CTB is classified as a type for applying edge offset, as abovementioned with reference to FIG. 7.
The edge offset applying module 1116 may be configured to apply an offset to the current pixel based on pixel classification and based on an offset value, as abovementioned with reference to FIG. 7.
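The edge-offset path performed by the pixel classifying module 1114 and the edge offset applying module 1116 can be sketched as follows. This is a simplified, HEVC-style model of comparing a pixel against its two neighbors along the selected direction; the function names, the category numbering, and the offsets dictionary are illustrative assumptions, not the disclosed implementation.

```python
def eo_category(a, p, b):
    """Classify pixel p against its two neighbors a and b taken along
    one of the four edge-offset directions (0, 90, 135, or 45 degrees)."""
    if p < a and p < b:
        return 1  # local valley
    if (p < a and p == b) or (p == a and p < b):
        return 2  # concave corner
    if (p > a and p == b) or (p == a and p > b):
        return 3  # convex corner
    if p > a and p > b:
        return 4  # local peak
    return 0      # monotonic area: no offset applied

def apply_edge_offset(a, p, b, offsets):
    """offsets maps categories 1..4 to signed offset values; category 0
    receives no offset."""
    return p + offsets.get(eo_category(a, p, b), 0)
```

For example, a pixel of value 5 sitting between two neighbors of value 10 is a local valley (category 1) and is raised by the signaled offset for that category.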
The band classifying module 1118 may be configured to classify a pixel of a CTB into a band in the case that a CTB is classified as a type for applying band offset, as abovementioned with reference to FIG. 7.
The band offset applying module 1120 may be configured to apply an offset to each band based on an offset value, as abovementioned with reference to FIG. 7.
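The band path performed by the band classifying module 1118 and the band offset applying module 1120 can be sketched similarly. The 32-band split of the sample range and the four consecutive signaled offsets follow HEVC convention and are assumptions here rather than the exact disclosed behavior.

```python
def band_index(pixel, bit_depth=8):
    """Map a sample value to one of 32 equal-width bands."""
    return pixel >> (bit_depth - 5)

def apply_band_offset(pixel, start_band, offsets, bit_depth=8):
    """offsets holds the signed offsets for the consecutive bands
    beginning at start_band; samples in other bands pass unchanged."""
    band = band_index(pixel, bit_depth)
    if start_band <= band < start_band + len(offsets):
        return pixel + offsets[band - start_band]
    return pixel
```

With 8-bit samples each band covers 8 values, so a sample of 130 falls in band 16 and picks up that band's offset if band 16 is within the signaled range.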
The system 1100 may additionally include an input/output (I/O) interface 1140 for receiving video source data and bitstream data, and for outputting reconstructed frames into a reference frame buffer, a transmission buffer, and/or a display buffer. The system 1100 may also include a communication module 1150 allowing the system 1100 to communicate with other devices (not shown) over a network (not shown). The network may include the Internet, wired media such as a wired network or direct-wired connections, and wireless media such as acoustic, radio frequency (RF), infrared, and other wireless media.
FIG. 12 illustrates an example system 1200 for implementing the processes and methods described above for implementing resolution-adaptive video coding in ALF.
The techniques and mechanisms described herein may be implemented by multiple instances of the system 1200 as well as by any other computing device, system, and/or environment. The system 1200 shown in FIG. 12 is only one example of a system and is not intended to suggest any limitation as to the scope of use or functionality of any computing device utilized to perform the processes and/or procedures described above. Other well-known computing devices, systems, environments, and/or configurations that may be suitable for use with the embodiments include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set-top boxes, game consoles, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, implementations using field programmable gate arrays (“FPGAs”) and application-specific integrated circuits (“ASICs”), and/or the like.
The system 1200 may include one or more processors 1202 and system memory 1204 communicatively coupled to the processor(s) 1202. The processor(s) 1202 may execute one or more modules and/or processes to cause the processor(s) 1202 to perform a variety of functions. In some embodiments, the processor(s) 1202 may include a central processing unit (CPU), a graphics processing unit (GPU), both CPU and GPU, or other processing units or components known in the art. Additionally, each of the processor(s) 1202 may possess its own local memory, which also may store program modules, program data, and/or one or more operating systems.
Depending on the exact configuration and type of the system 1200, the system memory 1204 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, miniature hard drive, memory card, and the like), or some combination thereof. The system memory 1204 may include one or more computer-executable modules 1206 that are executable by the processor(s) 1202.
The modules 1206 may include, but are not limited to, an ALF module 1208. The ALF module 1208 may include a filter application deciding module 1210, a gradient value calculating module 1212, a block classifying module 1214, a transformation applying module 1216, and a filter applying module 1218.
The filter application deciding module 1210 may be configured to receive a frame and decide to apply ALF to a luma CTB and/or a chroma CTB of the frame, as abovementioned with reference to FIG. 9.
The gradient value calculating module 1212 may be configured to calculate gradient values of the sub-block in multiple directions by obtaining reconstructed samples, as abovementioned with reference to FIG. 9.
The block classifying module 1214 may be configured to classify a sub-block of a luma CTB, as abovementioned with reference to FIG. 9.
The transformation applying module 1216 may be configured to apply one of several geometric transformations to filter coefficients of the filter, as abovementioned with reference to FIG. 9.
The filter applying module 1218 may be configured to apply a filter having a filter coefficient f (k, l) over each sub-block, as abovementioned with reference to FIG. 9.
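The gradient-based classification carried out by the gradient value calculating module 1212 and the block classifying module 1214 can be sketched as follows. The sketch follows the five-class directionality test used in VVC-style ALF (the example clauses contemplate at least six classes); the threshold values t1 and t2 and the function name are illustrative assumptions, and the four gradient sums are assumed to be precomputed from 1-D Laplacians over the sub-block.

```python
def directionality(g_h, g_v, g_d0, g_d1, t1=2.0, t2=4.5):
    """Derive a directionality class (0..4) from horizontal, vertical,
    and two diagonal gradient sums by comparing the ratios of the
    stronger to the weaker gradient in each pair against thresholds."""
    hv_max, hv_min = max(g_h, g_v), min(g_h, g_v)
    d_max, d_min = max(g_d0, g_d1), min(g_d0, g_d1)
    if hv_max <= t1 * hv_min and d_max <= t1 * d_min:
        return 0  # no dominant direction (texture)
    # cross-multiplied comparison of hv_max/hv_min vs d_max/d_min,
    # avoiding division by zero
    if hv_max * d_min > d_max * hv_min:
        return 1 if hv_max <= t2 * hv_min else 2  # weak/strong HV direction
    return 3 if d_max <= t2 * d_min else 4        # weak/strong diagonal
```

The resulting class, combined with an activity measure, selects which set of signaled filter coefficients the filter applying module 1218 uses for the sub-block.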
The system 1200 may additionally include an input/output (I/O) interface 1240 for receiving video source data and bitstream data, and for outputting reconstructed frames into a reference frame buffer, a transmission buffer, and/or a display buffer. The system 1200 may also include a communication module 1250 allowing the system 1200 to communicate with other devices (not shown) over a network (not shown). The network may include the Internet, wired media such as a wired network or direct-wired connections, and wireless media such as acoustic, radio frequency (RF), infrared, and other wireless media.
Some or all operations of the methods described above can be performed by execution of computer-readable instructions stored on a computer-readable storage medium, as defined below. The term “computer-readable instructions” as used in the description and claims includes routines, applications, application modules, program modules, programs, components, data structures, algorithms, and the like. Computer-readable instructions can be implemented on various system configurations, including single-processor or multiprocessor systems, minicomputers, mainframe computers, personal computers, hand-held computing devices, microprocessor-based programmable consumer electronics, combinations thereof, and the like.
The computer-readable storage media may include volatile memory (such as random-access memory (RAM)) and/or non-volatile memory (such as read-only memory (ROM), flash memory, etc.). The computer-readable storage media may also include additional removable storage and/or non-removable storage including, but not limited to, flash memory, magnetic storage, optical storage, and/or tape storage that may provide non-volatile storage of computer-readable instructions, data structures, program modules, and the like.
A non-transitory computer-readable storage medium is an example of computer-readable media. Computer-readable media includes at least two types of computer-readable media, namely computer-readable storage media and communications media. Computer-readable storage media includes volatile and non-volatile, removable and non-removable media implemented in any process or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer-readable storage media includes, but is not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disk read-only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device. In contrast, communication media may embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism. A computer-readable storage medium employed herein shall not be interpreted as a transitory signal itself, such as a radio wave or other free-propagating electromagnetic wave, electromagnetic waves propagating through a waveguide or other transmission medium (such as light pulses through a fiber optic cable), or electrical signals propagating through a wire.
Computer-readable instructions stored on one or more non-transitory computer-readable storage media may, when executed by one or more processors, perform the operations described above with reference to FIGS. 1A-12. Generally, computer-readable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes.
By the abovementioned technical solutions, the present disclosure provides inter-coded resolution-adaptive video coding supported by multiple in-loop filters, restoring high-frequency detail lost when a picture is down-sampled and subsequently up-sampled, and improving image quality during a resolution-adaptive video coding process. The methods and systems described herein provide a deblocking filter which takes resolution differences between frames undergoing motion prediction into account in determining filter strength, with further modifications for the next-generation video codec specification VVC. The deblocking filter may apply a strong filter or a weak filter in cases where a first reference frame referenced in motion prediction of a block adjacent to the block boundary has a resolution different from that of a second reference frame referenced in motion prediction of another block adjacent to the block boundary, that of a reference frame referenced in motion prediction of the current frame, or that of the current frame itself.
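The resolution-aware boundary-strength behavior summarized above can be sketched as follows. This is a simplified illustration assuming frame resolutions are compared as (width, height) pairs; the particular strength value chosen for each mismatch case is only one of the variants enumerated in the example clauses below, not the single disclosed mapping.

```python
def boundary_strength(ref_p, ref_q, current, intra=False):
    """Illustrative boundary-strength derivation for one block boundary.
    ref_p / ref_q are the (width, height) resolutions of the reference
    frames used in motion prediction of the two blocks adjacent to the
    boundary; current is the current frame's resolution."""
    if intra:
        return 2  # intra-coded boundaries get the strongest filtering
    if ref_p != ref_q:
        return 2  # the two blocks' references differ in resolution
    if ref_p != current:
        return 1  # both references were resampled relative to the current frame
    return 0      # no resolution mismatch: fall back to the normal rules
```

Following HEVC/VVC conventions, a boundary strength of 2 enables the strong deblocking filter, 1 the weak filter, and 0 disables filtering for that edge.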
EXAMPLE CLAUSES
A. A method comprising: receiving a current frame; determining a block boundary to be filtered within the current frame; determining a boundary strength of the block boundary based on a difference in resolution between a first reference frame referenced in motion prediction of a block adjacent to the block boundary and a second frame; and applying a deblocking filter to the block boundary based on the boundary strength.
B. The method as paragraph A recites, wherein the second frame is a reference frame referenced in motion prediction of another block adjacent to the block boundary.
C. The method as paragraph A recites, wherein the second frame is a reference frame referenced in motion prediction of the current frame.
D. The method as paragraph A recites, wherein the second frame is the current frame.
E. The method as paragraph A recites, wherein the first reference frame has a different resolution than the second frame.
F. The method as paragraph A recites, wherein the first reference frame has a lower resolution than the second frame.
G. The method as paragraph A recites, wherein the first reference frame has a higher resolution than the second frame.
H. A method comprising: receiving a frame and deciding to apply SAO to a CTB of the frame; classifying the CTB as one of a plurality of SAO types; classifying a pixel of a CTB according to edge properties by at least comparing difference sums of neighbor pixels in two opposing directions; and applying an edge offset to the pixel based on an offset value.
I. The method as paragraph H recites, wherein deciding to apply SAO to a CTB of the frame comprises deciding to apply at least edge offset to the CTB based on a value of a flag stored in a slice header of the frame.
J. The method as paragraph I recites, wherein deciding to apply SAO to a CTB of the frame further comprises deciding to apply a band offset to the CTB based on the value of a flag stored in a slice header of the frame.
K. The method as paragraph J recites, further comprising classifying a pixel of a CTB into a band and applying an offset to the band based on an offset value.
L. The method as paragraph H recites, wherein the frame is received from an up-sampler.
M. A method comprising: receiving a frame and deciding to apply ALF to a CTB of the frame; calculating a plurality of gradient values of a block of the CTB; determining a classification of the block based on computing a directionality value of at least six possible directionality values based on the plurality of gradient values; and applying a filter to the block, the filter comprising a set of filter coefficients determined by classification of the block.
N. The method as paragraph M recites, wherein the CTB is a luma CTB and the block is a luma block of the luma CTB.
O. The method as paragraph M recites, wherein the directionality value is computed by comparing the plurality of gradient values with at least three threshold values.
P. The method as paragraph M recites, wherein the directionality value is computed by comparing the plurality of gradient values with at least two threshold values and a maximum among the plurality of gradient values.
Q. The method as paragraph M recites, wherein a set of filter coefficients comprises a plurality of values arranged among 7x7 pixels.
R. The method as paragraph Q recites, wherein the set of filter coefficients is stored in a header of the frame.
S. The method as paragraph R recites, wherein the header stores more than 25 sets of filter coefficients and each set of filter coefficients corresponds to a classification of the block.
T. The method as paragraph M recites, wherein the frame is received from an up-sampler.
U. A system comprising: one or more processors; and memory communicatively coupled to the one or more processors, the memory storing computer-executable modules executable by the one or more processors that, when executed by the one or more processors, perform associated operations, the computer-executable modules including: a deblocking filter module configured to receive a current frame in a coding loop, the deblocking filter module further comprising a boundary determining module configured to determine a block boundary to be filtered within the current frame; a boundary strength determining module configured to determine a boundary strength of a block boundary to be filtered based on a difference in resolution between a first reference frame referenced in motion prediction of a block adjacent to the block boundary and a second frame; and a strong filter applying module and a weak filter applying module each configured to apply a deblocking filter to the block boundary based on the boundary strength.
V. The system as paragraph U recites, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a lower resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of the current frame.
W. The system as paragraph U recites, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a lower resolution than the second frame, and the second frame being the current frame.
X. The system as paragraph U recites, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a different resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of another block adjacent to the block boundary.
Y. The system as paragraph U recites, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 1 based on the first reference frame having a lower resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of the current frame.
Z. The system as paragraph U recites, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 1 based on the first reference frame having a lower resolution than the second frame, and the second frame being the current frame.
AA. The system as paragraph U recites, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a higher resolution than the second  frame, and the second frame being a reference frame referenced in motion prediction of the current frame.
BB. The system as paragraph U recites, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a higher resolution than the second frame, and the second frame being the current frame.
CC. The system as paragraph U recites, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a different resolution than the second frame, and the second frame being the current frame.
DD. The system as paragraph U recites, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 1 based on the first reference frame having a higher resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of the current frame.
EE. The system as paragraph U recites, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 1 based on the first reference frame having a higher resolution than the second frame, and the second frame being the current frame.
FF. A system comprising: a SAO filter module configured to receive a frame, the SAO filter module further comprising a filter application deciding module configured to decide to apply SAO to a CTB of the frame; a CTB classifying module configured to classify the CTB as one of a plurality of SAO types; a pixel classifying module configured to classify a pixel of a CTB according to edge properties by at least comparing difference sums of neighbor pixels in two opposing directions; and an edge offset applying module configured to apply an edge offset to the pixel based on an offset value.
GG. The system as paragraph FF recites, wherein the filter application deciding module is further configured to decide to apply at least edge offset to the CTB based on a value of a flag stored in a slice header of the frame.
HH. The system as paragraph GG recites, wherein the filter application deciding module is further configured to decide to apply a band offset to the CTB based on the value of a flag stored in a slice header of the frame.
II. The system as paragraph HH recites, further comprising a band classifying module configured to classify a pixel of a CTB into a band and a band offset applying module configured to apply an offset to the band based on an offset value.
JJ. The system as paragraph II recites, wherein the SAO filter module is configured to receive the frame from an up-sampler.
KK. A system comprising: an ALF module configured to receive a frame, the ALF module further comprising a filter application deciding module configured to decide to apply ALF to a CTB of the frame; a gradient value calculating module configured to calculate a plurality of gradient values of a block of the CTB; a block classifying module configured to determine a classification of the block based on computing a directionality value of at least six possible directionality values based on the plurality of gradient values; and a filter applying module configured to apply a filter to the block, the filter comprising a set of filter coefficients determined by classification of the block.
LL. The system as paragraph KK recites, wherein the CTB is a luma CTB and the block is a luma block of the luma CTB.
MM. The system as paragraph KK recites, wherein the block classifying module is configured to compute a directionality value by comparing the plurality of gradient values with at least three threshold values.
NN. The system as paragraph KK recites, wherein the block classifying module is configured to compute a directionality value by comparing the plurality of gradient values with at least two threshold values and a maximum among the plurality of gradient values.
OO. The system as paragraph KK recites, wherein a set of filter coefficients comprises a plurality of values arranged among 7x7 pixels.
PP. The system as paragraph OO recites, wherein the set of filter coefficients is stored in a header of the frame.
QQ. The system as paragraph PP recites, wherein the header stores more than 25 sets of filter coefficients and each set of filter coefficients corresponds to a classification of the block.
RR. The system as paragraph KK recites, wherein the ALF module is configured to receive the frame from an up-sampler.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claims.

Claims (29)

  1. A method comprising:
    receiving a current frame;
    determining block boundaries to be filtered within the current frame;
    determining a boundary strength of a block boundary to be filtered based on a difference in resolution between a first reference frame referenced in motion prediction of a block adjacent to the block boundary and a second frame; and
    applying a deblocking filter to the block boundary based on the boundary strength.
  2. The method of claim 1, wherein the second frame is a reference frame referenced in motion prediction of another block adjacent to the block boundary.
  3. The method of claim 1, wherein the second frame is a reference frame referenced in motion prediction of the current frame.
  4. The method of claim 1, wherein the second frame is the current frame.
  5. The method of claim 1, wherein the first reference frame has a different resolution than the second frame.
  6. The method of claim 1, wherein the first reference frame has a lower resolution than the second frame.
  7. The method of claim 1, wherein the first reference frame has a higher resolution than the second frame.
  8. A system comprising:
    one or more processors; and
    memory communicatively coupled to the one or more processors, the memory storing computer-executable modules executable by the one or more processors that, when executed by the one or more processors, perform associated operations, the computer-executable modules including:
    a deblocking filter module configured to receive a current frame in a coding loop, the deblocking filter module further comprising a boundary determining module configured to determine a block boundary to be filtered within the current frame;
    a boundary strength determining module configured to determine a boundary strength of a block boundary to be filtered based on a difference in resolution between a first reference frame referenced in motion prediction of a block adjacent to the block boundary and a second frame; and
    a strong filter applying module and a weak filter applying module each configured to apply a deblocking filter to the block boundary based on the boundary strength.
  9. The system of claim 8, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a lower resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of the current frame.
  10. The system of claim 8, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a lower resolution than the second frame, and the second frame being the current frame.
  11. The system of claim 8, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a different resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of another block adjacent to the block boundary.
  12. The system of claim 8, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 1 based on the first reference frame having a lower resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of the current frame.
  13. The system of claim 8, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 1 based on the first reference frame having a lower resolution than the second frame, and the second frame being the current frame.
  14. The system of claim 8, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a higher resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of the current frame.
  15. The system of claim 8, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a higher resolution than the second frame, and the second frame being the current frame.
  16. The system of claim 8, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 2 based on the first reference frame having a different resolution than the second frame, and the second frame being the current frame.
  17. The system of claim 8, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 1 based on the first reference frame having a higher resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of the current frame.
  18. The system of claim 8, wherein the boundary strength determining module is configured to determine the boundary strength as having a value of 1 based on the first reference frame having a higher resolution than the second frame, and the second frame being the current frame.
  19. A computer-readable storage medium storing computer-readable instructions executable by one or more processors, that when executed by the one or more processors, cause the one or more processors to perform operations comprising:
    receiving a current frame;
    determining block boundaries to be filtered within the current frame;
    determining a boundary strength of a block boundary to be filtered based on a difference in resolution between a first reference frame referenced in motion prediction of a block adjacent to the block boundary and a second frame; and
    applying a deblocking filter to the block boundary based on the boundary strength.
  20. The computer-readable storage medium of claim 19, wherein the operations further comprise determining the boundary strength as having a value of 2 based on the first reference frame having a lower resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of the current frame.
  21. The computer-readable storage medium of claim 19, wherein the operations further comprise determining the boundary strength as having a value of 2  based on the first reference frame having a lower resolution than the second frame, and the second frame being the current frame.
  22. The computer-readable storage medium of claim 19, wherein the operations further comprise determining the boundary strength as having a value of 2 based on the first reference frame having a different resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of another block adjacent to the block boundary.
  23. The computer-readable storage medium of claim 19, wherein the operations further comprise determining the boundary strength as having a value of 1 based on the first reference frame having a lower resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of the current frame.
  24. The computer-readable storage medium of claim 19, wherein the operations further comprise determining the boundary strength as having a value of 1 based on the first reference frame having a lower resolution than the second frame, and the second frame being the current frame.
  25. The computer-readable storage medium of claim 19, wherein the operations further comprise determining the boundary strength as having a value of 2 based on the first reference frame having a higher resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of the current frame.
  26. The computer-readable storage medium of claim 19, wherein the operations further comprise determining the boundary strength as having a value of 2 based on the first reference frame having a higher resolution than the second frame, and the second frame being the current frame.
  27. The computer-readable storage medium of claim 19, wherein the operations further comprise determining the boundary strength as having a value of 2 based on the first reference frame having a different resolution than the second frame, and the second frame being the current frame.
  28. The computer-readable storage medium of claim 19, wherein the operations further comprise determining the boundary strength as having a value of 1 based on the first reference frame having a higher resolution than the second frame, and the second frame being a reference frame referenced in motion prediction of the current frame.
  29. The computer-readable storage medium of claim 19, wherein the operations further comprise determining the boundary strength as having a value of 1 based on the first reference frame having a higher resolution than the second frame, and the second frame being the current frame.
International application PCT/CN2020/073563, filed 2020-01-21: Next-generation loop filter implementations for adaptive resolution video coding (WO2021146933A1).


Publications (1)

Publication Number Publication Date
WO2021146933A1 true WO2021146933A1 (en) 2021-07-29

Family

ID=76992773

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/073563 WO2021146933A1 (en) 2020-01-21 2020-01-21 Next-generation loop filter implementations for adaptive resolution video coding

Country Status (2)

Country Link
CN (1) CN114762326B (en)
WO (1) WO2021146933A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023044251A1 (en) * 2021-09-15 2023-03-23 Tencent America LLC On propagating intra prediction mode information of ibc block by using block vector

Citations (6)

Publication number Priority date Publication date Assignee Title
CN101267560A (en) * 2008-03-19 2008-09-17 Zhejiang University Block-removal filtering method and device
US20100322304A1 (en) * 2009-06-17 2010-12-23 Novatek Microelectronics Corp. Multi-source filter and filtering method based on h.264 de-blocking
CN103931185A (en) * 2011-10-25 2014-07-16 Qualcomm Incorporated Determining boundary strength values for deblocking filtering for video coding
US20150365666A1 (en) * 2013-01-07 2015-12-17 Vid Scale, Inc. Enhanced deblocking filters for video coding
CN109479152A (en) * 2016-05-13 2019-03-15 InterDigital VC Holdings, Inc. Method and apparatus for decoding an intra-prediction block of a picture, and corresponding encoding method and apparatus
WO2019121164A1 (en) * 2017-12-18 2019-06-27 Telefonaktiebolaget Lm Ericsson (Publ) De-blocking for video coding

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN101371584B (en) * 2006-01-09 2011-12-14 Thomson Licensing Method and apparatus for providing reduced resolution update mode for multi-view video coding
US9350972B2 (en) * 2011-04-28 2016-05-24 Sony Corporation Encoding device and encoding method, and decoding device and decoding method
CN108141593B (en) * 2015-07-31 2022-05-03 Versitech Limited Depth discontinuity-based method for efficient intra coding for depth video
AU2015410097B2 (en) * 2015-09-25 2020-01-23 Huawei Technologies Co., Ltd. Apparatus and method for video motion compensation with selectable interpolation filter

Non-Patent Citations (2)

Title
HENDRY (HUAWEI), S. HONG (HUAWEI), Y.-K. WANG (HUAWEI), J. CHEN (HUAWEI), Y.-C SUN (ALIBABA-INC), T.-S CHANG (ALIBABA-INC), J. LOU: "AHG19: Adaptive resolution change (ARC) support in VVC", 14. JVET MEETING; 20190319 - 20190327; GENEVA; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), no. JVET-N0118 PD, 12 March 2019 (2019-03-12), XP030202642 *
HENDRY (HUAWEI), Y.-K WANG (HUAWEI), J. CHEN (HUAWEI), T. DAVIES (CISCO), A. FULDSETH (CISCO), Y.-C SUN (ALIBABA-INC), T.-S CHANG : "On adaptive resolution change (ARC) for VVC", 125. MPEG MEETING; 20190114 - 20190118; MARRAKECH; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11), no. m45397, 2 January 2019 (2019-01-02), XP030197810 *

Also Published As

Publication number Publication date
CN114762326B (en) 2024-03-22
CN114762326A (en) 2022-07-15

Similar Documents

Publication Publication Date Title
RU2696552C1 (en) Method and device for video coding
JP5792305B2 (en) Method and apparatus for adaptive loop filtering
US9363515B2 (en) Image processing method, image processing apparatus, video encoding/decoding methods, video encoding/decoding apparatuses, and non-transitory computer-readable media therefor that perform denoising by means of template matching using search shape that is set in accordance with edge direction of image
US10448015B2 (en) Method and device for performing adaptive filtering according to block boundary
EP3709644A1 (en) Method for image processing and apparatus for implementing the same
WO2020249123A1 (en) Handling video unit boundaries and virtual boundaries
WO2020249124A1 (en) Handling video unit boundaries and virtual boundaries based on color format
US20120183078A1 (en) Filter adaptation with directional features for video/image coding
US20230421777A1 (en) Video coding method and device which use sub-block unit intra prediction
US20230276076A1 (en) Apparatus and method for deblocking filter in video coding
CN114885159B (en) Method and apparatus for mode dependent and size dependent block level restriction of position dependent prediction combinations
US12015771B2 (en) Apparatus and method for performing deblocking
WO2020252745A1 (en) Loop filter design for adaptive resolution video coding
JP7393550B2 (en) Sample padding for cross-component adaptive loop filtering
WO2022002007A1 (en) Boundary location for adaptive loop filtering
WO2021146933A1 (en) Next-generation loop filter implementations for adaptive resolution video coding
EP2735144B1 (en) Adaptive filtering based on pattern information
EP3525461A1 (en) Adaptive loop filtering
US12047567B2 (en) System and method for applying adaptive loop filter in video coding
US11044472B2 (en) Method and apparatus for performing adaptive filtering on reference pixels based on size relationship of current block and reference block
WO2023193551A9 - Method and apparatus for DIMD edge detection adjustment, and encoder/decoder including the same
TW202406336A (en) Method and apparatus for adaptive loop filter processing of reconstructed video
CN117478893A (en) Image encoding method, image encoding device, electronic device and storage medium
CN115176468A (en) Cross-component adaptive loop filter

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20914854

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20914854

Country of ref document: EP

Kind code of ref document: A1