CN111083486A - Method and device for determining chrominance information of coding unit - Google Patents

Method and device for determining chrominance information of coding unit

Info

Publication number: CN111083486A
Application number: CN201911378799.7A
Authority: CN (China)
Prior art keywords: brightness, target, luminance, sampling, video block
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 陈漪纹 (Chen Yiwen), 王祥林 (Wang Xianglin)
Current Assignee: Reach Best Technology Co Ltd
Original Assignee: Reach Best Technology Co Ltd
Application filed by Reach Best Technology Co Ltd

Classifications

    • H04N19/176: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/186: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding


Abstract

The present disclosure relates to a method and an apparatus for determining chroma information of a coding unit. A reconstructed reference video block is selected at the periphery of the coding unit, the reference video block including a target video block adjacent to the horizontal boundary of the coding unit. The luma samples in the reference video block are downsampled to obtain luma reference points corresponding to the chroma samples in the reference video block. Parameters of a linear model between the chroma and the luma of the coding unit are determined according to the luma reference point with the maximum luma value, the luma reference point with the minimum luma value, and the chroma values of the chroma samples corresponding to these two luma reference points, and the chroma values of the chroma samples in the coding unit are then determined according to these parameters and the luma values of the luma samples in the coding unit. The target chroma sample is the chroma sample in the target video block closest to the first row of luma samples in the coding unit, and the luma reference point corresponding to it is obtained by downsampling the luma samples closest to the target chroma sample.

Description

Method and device for determining chrominance information of coding unit
This application claims priority to U.S. provisional patent application No. 62/788,124, entitled "Simplifications of cross-component linear model", filed with the United States Patent and Trademark Office on January 3, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of video coding and compression technologies, and in particular, to a method and an apparatus for determining chroma information of a coding unit.
Background
Video coding typically utilizes prediction methods such as inter prediction and intra prediction to remove redundancy present in a video frame or video sequence. Currently, video coding is performed according to one or more video coding standards. For example, video coding standards include Versatile Video Coding (VVC), the Joint Exploration Model (JEM), High-Efficiency Video Coding (HEVC, H.265/HEVC), Advanced Video Coding (AVC, H.264/AVC), the Moving Picture Experts Group (MPEG) standards, and the like.
Conceptually, the video coding standards described above are similar: they all operate on a block basis and share a similar block diagram for achieving video compression. Fig. 1 is a block diagram of a typical encoder for these video coding standards, according to an exemplary embodiment.
In an encoder, a video frame is divided into a plurality of video blocks, and a prediction value of each video block is then formed based on inter prediction or intra prediction. Inter prediction determines the prediction value of a video block by performing motion estimation and motion compensation on the pixels of previously reconstructed frames; intra prediction determines it from reconstructed pixels in the current frame. Typically, multiple candidate prediction values exist for a video block, so the video block may be predicted by selecting the best prediction value through a mode decision.
Further, the prediction residual of the best predictor (i.e., the pixel difference information between the current block and its predictor) is sent to the transform module; the resulting transform coefficients are sent to the quantization module; and the quantized coefficients are sent to the entropy coding module to generate a compressed video bitstream. As shown in fig. 1, prediction-related information (e.g., block partition information, motion vectors, reference picture indices, and intra prediction modes) from the inter and/or intra prediction modules also passes through the entropy coding module and is stored in the bitstream.
In order for the decoder to reconstruct the video block correctly, the encoder must also account for the information that the corresponding modules at the decoder side need when reconstructing the video block. To this end, the encoder likewise reconstructs the prediction residual of the video block by inverse quantization and inverse transformation, and then combines the reconstructed prediction residual with the prediction value of the video block to generate the unfiltered reconstructed pixels of the video block.
In order to improve coding efficiency and video quality, loop filters are generally used. For example, deblocking filters are used in AVC, HEVC, and the current VVC. An additional loop filter called Sample Adaptive Offset (SAO) is defined in HEVC to further improve coding efficiency. In the recent VVC drafts, a loop filter called the Adaptive Loop Filter (ALF) is being actively studied and is likely to be incorporated into the final standard.
In particular implementations, the loop filters are optional. Turning them on usually helps to improve coding efficiency and video quality, but they may also be turned off, based on encoder decisions, to save computational complexity. It should be noted that intra prediction is typically based on unfiltered reconstructed pixels, whereas inter prediction is based on filtered reconstructed pixels if these filter options are turned on by the encoder.
Fig. 2 is a block diagram illustrating a typical decoder for the above video coding standards, according to an exemplary embodiment; it is almost identical to the reconstruction-related part residing in the encoder.
In the decoder, the received bitstream is first decoded by an entropy decoding module to derive quantized coefficient levels and prediction related information. Then, the quantized coefficient levels are processed by an inverse quantization module and an inverse transformation module to obtain reconstructed prediction residuals, prediction values are formed by intra prediction or motion compensation processing based on the decoded prediction information, and non-filtered reconstructed pixels are obtained by summing the reconstructed prediction residuals and the prediction values. In addition, in the case where the loop filter is turned on, a filtering operation is performed on the above-described reconstructed pixels to obtain a final reconstructed video.
In practical applications, YUV is a commonly used color coding method, where Y represents the luminance (luma) component and U and V represent the two chrominance (chroma) components. YUV has multiple sampling formats, such as YUV 4:2:0, YUV 4:2:2, and YUV 4:4:4.
During VVC development, YUV 4:2:0 is a common test condition. The sampling grid of luma samples and chroma samples when encoding the colors of a video in this format is shown in fig. 3, where × represents a luma sample and ○ represents a chroma sample; each ○ also marks the downsampling position of the luma samples (i.e., the luma reference point position). As can be seen from fig. 3, the chroma samples and the luma reference points are in one-to-one correspondence.
Currently, in order to reduce redundancy between the different color components of a video, a Cross-Component Linear Model (CCLM) prediction mode is used in the VVC reference software VTM-3.0. In this prediction mode, the luma of a Coding Unit (CU) is used to predict the chroma of the same CU through the following linear model:
predC(i, j) = α · rec′L(i, j) + β    (1)
where predC(i, j) denotes the chroma value of the chroma sample in row i, column j of the CU; rec′L(i, j) denotes the luma value of the luma reference point in row i, column j, the luma reference points being obtained by downsampling the luma samples in the CU and corresponding one-to-one to the chroma samples in the CU; and α, β are parameters to be determined, which can be determined by fitting a straight line through two points (referred to as the min-Max method in the following sections).
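To make equation (1) concrete, the following is a minimal sketch of applying the linear model to a block of downsampled luma values. The fixed-point representation of α (scaled by 2^k) and the clipping step are assumptions made for illustration; the document states the model only in its real-valued form.

```cpp
#include <cstddef>
#include <vector>

// Sketch of equation (1): predC(i, j) = alpha * recL'(i, j) + beta,
// applied over a flattened array of downsampled luma values (one per
// chroma sample). alphaFix = alpha * 2^k is an assumed fixed-point form.
std::vector<int> predictChroma(const std::vector<int>& recLumaDs,
                               int alphaFix, int k, int beta, int bitDepth) {
    const int maxVal = (1 << bitDepth) - 1;
    std::vector<int> predC(recLumaDs.size());
    for (std::size_t i = 0; i < recLumaDs.size(); ++i) {
        int v = ((alphaFix * recLumaDs[i]) >> k) + beta;
        predC[i] = v < 0 ? 0 : (v > maxVal ? maxVal : v); // clip to valid range
    }
    return predC;
}
```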
In particular, a reconstructed reference video block may be selected from the periphery of the CU and used to determine α and β. However, when YUV 4:2:0 color coding is used, the number of chroma samples and the number of luma samples are not the same (see fig. 3); therefore, the luma samples in the reference video block need to be downsampled to obtain luma reference points (also called reconstructed luma samples) that correspond one-to-one to the chroma samples in the reference video block.
Referring to fig. 4, fig. 4 shows a schematic diagram of a reference video block. The left and right figures each show a square drawn with a thick solid line: the square in the left figure represents the chroma block corresponding to a CU, and the gray dots in the left figure represent chroma samples in the reference video block; the square in the right figure represents the luma block corresponding to the CU, and the gray dots in the right figure represent luma reference points obtained by downsampling the luma samples in the reference video block (each small square around the right-hand square can be regarded as a luma sample). The luma reference points correspond one-to-one to the chroma samples in the reference video block. Rec′L[x, y] denotes the set of luma reference points adjacent to the top and left sides of the luma block, and RecC[x, y] denotes the set of chroma samples adjacent to the top and left sides of the chroma block. The value of N equals twice the minimum of the width and height of the current chroma block. For a square CU, the min-Max method can be applied directly; for a non-square CU, the set of neighboring samples on the longer boundary may first be subsampled to have the same number of samples as the shorter boundary.
In the related art, when the reference video block includes a target video block adjacent to the horizontal boundary of a CU (i.e., the video block above the luma block in fig. 4), determining the luma reference point for the target chroma sample (the chroma sample in the target video block closest to the luma sample in the first row and first column of the CU) requires considering both whether the CU is located on a boundary of the video frame and whether the CU and the target video block belong to different Coding Tree Units (CTUs); the downsampling of the luma samples around the target chroma sample is therefore split into four cases.
Disclosure of Invention
The present disclosure provides a method and an apparatus for determining chroma information of a coding unit, so as to solve at least the problem that the CCLM prediction mode in the related art is complicated. The technical scheme of the disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided a method of determining chroma information of a coding unit, including:
selecting a reconstructed reference video block at the periphery of a coding unit to be processed in a video frame, wherein the reference video block at least comprises a target video block, the target video block being adjacent to the horizontal boundary of the coding unit;
downsampling the luma samples in the reference video block to obtain luma reference points corresponding to the chroma samples in the reference video block, wherein the target chroma sample is the chroma sample in the target video block closest to the first row of luma samples in the coding unit, the target luma samples are the luma samples in the target video block closest to the target chroma sample, and the luma reference point corresponding to the target chroma sample is obtained by downsampling the target luma samples;
determining parameters of a linear model between the chroma and the luma of the coding unit according to the luma reference point with the maximum luma value, the luma reference point with the minimum luma value, and the chroma values of the chroma samples corresponding to these two luma reference points;
and determining the chroma values of the chroma samples in the coding unit according to the parameters of the linear model and the luma values of the luma samples in the coding unit.
In one possible implementation, the target luma samples are downsampled as follows:
if the coding unit and the target video block belong to the same coding tree unit, downsampling the target luma samples;
and if the coding unit and the target video block belong to different coding tree units, downsampling the luma sample closest to the coding unit among the target luma samples.
In one possible implementation, the target luma samples are downsampled as follows:
directly downsampling the luma sample closest to the coding unit among the target luma samples.
In one possible implementation, downsampling the target luma samples includes:
calculating the average of the luma values of the target luma samples;
and determining the luma value of the corresponding luma reference point according to the average luma value.
In one possible implementation, downsampling the luma sample closest to the coding unit among the target luma samples includes:
taking the luma value of the luma sample closest to the coding unit among the target luma samples as the luma value of the corresponding luma reference point.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for determining chroma information of a coding unit, including:
a selecting module configured to select a reconstructed reference video block at the periphery of a coding unit to be processed in a video frame, wherein the reference video block at least comprises a target video block, the target video block being adjacent to the horizontal boundary of the coding unit;
a sampling module configured to downsample the luma samples in the reference video block to obtain luma reference points corresponding to the chroma samples in the reference video block, wherein the target chroma sample is the chroma sample in the target video block closest to the first row of luma samples in the coding unit, the target luma samples are the luma samples in the target video block closest to the target chroma sample, and the luma reference point corresponding to the target chroma sample is obtained by downsampling the target luma samples;
a parameter determination module configured to determine parameters of a linear model between the chroma and the luma of the coding unit according to the luma reference point with the maximum luma value, the luma reference point with the minimum luma value, and the chroma values of the chroma samples corresponding to these two luma reference points;
a chroma determination module configured to determine the chroma values of the chroma samples in the coding unit according to the parameters of the linear model and the luma values of the luma samples in the coding unit.
In one possible implementation, the sampling module is specifically configured to downsample the target luma samples as follows:
if the coding unit and the target video block belong to the same coding tree unit, downsampling the target luma samples;
and if the coding unit and the target video block belong to different coding tree units, downsampling the luma sample closest to the coding unit among the target luma samples.
In one possible implementation, the sampling module is specifically configured to downsample the target luma samples as follows:
directly downsampling the luma sample closest to the coding unit among the target luma samples.
In one possible implementation, the sampling module is specifically configured to:
calculate the average of the luma values of the target luma samples;
and determine the luma value of the corresponding luma reference point according to the average luma value.
In one possible implementation, the sampling module is specifically configured to:
take the luma value of the luma sample closest to the coding unit among the target luma samples as the luma value of the corresponding luma reference point.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the above methods of determining chroma information for a coding unit.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium, wherein when instructions of the storage medium are executed by a processor of an electronic device, the electronic device is capable of executing any one of the above methods for determining chroma information of an encoding unit.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product, which when invoked by a computer, can cause the computer to perform any of the above-mentioned methods of determining chroma information of a coding unit.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
A reconstructed reference video block is selected at the periphery of a coding unit to be processed in a video frame, the reference video block at least comprising a target video block adjacent to the horizontal boundary of the coding unit. The luma samples in the reference video block are then downsampled to obtain luma reference points corresponding to the chroma samples in the reference video block. Parameters of a linear model between the chroma and the luma of the coding unit are determined according to the luma reference point with the maximum luma value, the luma reference point with the minimum luma value, and the chroma values of the chroma samples corresponding to these two luma reference points, and the chroma values of the chroma samples in the coding unit are further determined according to these parameters and the luma values of the luma samples in the coding unit. Here, the target chroma sample is the chroma sample in the target video block closest to the first row of luma samples in the coding unit, the target luma samples are the luma samples in the target video block closest to the target chroma sample, and the luma reference point corresponding to the target chroma sample is obtained by downsampling the target luma samples. Consequently, whether the coding unit is located at the left boundary of the video frame need not be considered when determining the luma reference point corresponding to the target chroma sample; the downsampling of the luma samples around the target chroma sample becomes simpler, and the CCLM prediction mode can thereby be simplified.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a block diagram illustrating an exemplary encoder in accordance with an exemplary embodiment;
FIG. 2 is a block diagram illustrating an exemplary decoder in accordance with an exemplary embodiment;
FIG. 3 is a diagram illustrating sample positions of luma and chroma sample points in a video frame according to an example embodiment;
FIG. 4 is a schematic diagram illustrating a reference video block in accordance with an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating yet another reference video block in accordance with an example embodiment;
FIG. 6 is a schematic diagram illustrating yet another reference video block in accordance with an example embodiment;
FIG. 7 is a schematic diagram illustrating yet another reference video block in accordance with an example embodiment;
FIG. 8 is a diagram illustrating a linear relationship between chroma and luma for the same CU, according to an example embodiment;
FIG. 9 is a schematic diagram illustrating yet another reference video block in accordance with an example embodiment;
FIG. 10 is a schematic diagram illustrating yet another reference video block in accordance with an example embodiment;
FIG. 11 is a schematic diagram illustrating yet another reference video block in accordance with an example embodiment;
FIG. 12 is a schematic diagram illustrating a multi-component linear model in accordance with an exemplary embodiment;
FIG. 13 is a diagram illustrating LM prediction using a single column of adjacent luma samples, according to an illustrative embodiment;
FIG. 14 is a diagram illustrating a classification-based CCLM coefficient derivation in accordance with an exemplary embodiment;
FIG. 15 is a diagram illustrating an index numbering, according to an exemplary embodiment;
FIG. 16 is a diagram illustrating another index numbering, according to an exemplary embodiment;
FIG. 17 is a diagram illustrating yet another index numbering, according to an exemplary embodiment;
FIG. 18 is a diagram illustrating yet another index numbering, according to an exemplary embodiment;
FIG. 19 is a diagram illustrating yet another index numbering, according to an exemplary embodiment;
fig. 20 is a flowchart illustrating a method of determining chroma information for a coding unit in accordance with an exemplary embodiment;
FIG. 21 is a schematic diagram illustrating a downsampling process of luminance sample points in accordance with an exemplary embodiment;
fig. 22 is a block diagram illustrating an apparatus for determining chrominance information of a coding unit in accordance with an exemplary embodiment;
fig. 23 is a schematic structural diagram of an electronic device for implementing a method of determining chroma information of a coding unit according to an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Referring to FIG. 5, FIG. 5 illustrates the luma samples in the luma block of FIG. 4, where each small square may represent a luma sample, and the luma sample in the first row and first column of the luma block is the coordinate origin pY[0][0]. pTopDsY[x] denotes the luma reference points above the luma block; for example, pTopDsY[0] denotes the first luma reference point above the luma block (i.e., the luma reference point corresponding to the target chroma sample) and pTopDsY[1] denotes the second. pLeftDsY[y] denotes the luma reference points to the left of the luma block; for example, pLeftDsY[0] denotes the first luma reference point to the left of the luma block and pLeftDsY[1] denotes the second. The luma value of each reference point in pTopDsY[x] and pLeftDsY[y] may be determined from the luma values of the luma samples around it.
Assume that numSampL and numSampT denote the numbers of luma reference points on the left and upper sides of the luma block in fig. 5, respectively. In the related art, when the CCLM prediction mode is adopted, bCTUBoundary is set to true when the upper luma samples are located in the upper CTU, and to false otherwise; availTL is set to true when the upper-left luma sample is available (i.e., the CU is not located at the boundary of the video frame), and to false otherwise.
In a specific implementation, for the left luma reference point pLeftDsY[y], the luma value of pLeftDsY[y] may be determined from the 2 rows and 3 columns of luma samples around it.
For example, the following formula is used to determine the luma value of pLeftDsY[y]:
pLeftDsY[y]=(pY[-1][2*y]+pY[-1][2*y+1]+2*pY[-2][2*y]+2*pY[-2][2*y+1]+pY[-3][2*y]+pY[-3][2*y+1]+4)>>3;
where y = 0, …, numSampL - 1.
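Transcribed directly into code, the left-side derivation above might look like the following sketch; getY is a hypothetical accessor for pY[x][y], with negative x reaching into the reconstructed left-neighbor columns.

```cpp
// Sketch of the left-side downsampling formula for pLeftDsY[y].
// getY(x, y) is a hypothetical accessor returning luma sample pY[x][y].
int leftDsY(int y, int (*getY)(int x, int y)) {
    return (getY(-1, 2 * y)     + getY(-1, 2 * y + 1) +
            2 * getY(-2, 2 * y) + 2 * getY(-2, 2 * y + 1) +
            getY(-3, 2 * y)     + getY(-3, 2 * y + 1) + 4) >> 3;
}
```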
For the upper luma reference points pTopDsY[x], the case x = 1, …, numSampT - 1 and the case x = 0 are handled differently, as described below.
When x = 1, …, numSampT - 1, the following two cases are distinguished:
if bCTUBoundary is false, the luma value of pTopDsY[x] is determined using the following formula:
pTopDsY[x]=(pY[2*x-1][-2]+pY[2*x-1][-1]+2*pY[2*x][-2]+2*pY[2*x][-1]+pY[2*x+1][-2]+pY[2*x+1][-1]+4)>>3;
if bCTUBoundary is true, the luma value of pTopDsY[x] is determined using the following formula:
pTopDsY[x]=(pY[2*x-1][-1]+2*pY[2*x][-1]+pY[2*x+1][-1]+2)>>2;
When x = 0, the following four cases are distinguished:
if availTL is true and bCTUBoundary is false, the luma value of pTopDsY[0] is determined using the following formula:
pTopDsY[0]=(pY[-1][-2]+pY[-1][-1]+2*pY[0][-2]+2*pY[0][-1]+pY[1][-2]+pY[1][-1]+4)>>3;
if availTL is true and bCTUBoundary is true, the luma value of pTopDsY[0] is determined using the following formula:
pTopDsY[0]=(pY[-1][-1]+2*pY[0][-1]+pY[1][-1]+2)>>2;
if availTL is false and bCTUBoundary is false, the luma value of pTopDsY[0] is determined using the following formula:
pTopDsY[0]=(pY[0][-2]+pY[0][-1]+1)>>1;
if availTL is false and bCTUBoundary is true, the luma value of pTopDsY[0] is determined using the following formula:
pTopDsY[0] = pY[0][-1].
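Pulling the four cases together, the related-art derivation of pTopDsY[0] can be sketched as follows, reusing the hypothetical getY accessor from above.

```cpp
// Sketch of the related-art four-case derivation of pTopDsY[0].
// availTL: the upper-left luma sample is available (the CU is not at the
// boundary of the video frame); bCTUBoundary: the row above lies in a
// different (upper) CTU.
int topDsY0RelatedArt(bool availTL, bool bCTUBoundary, int (*getY)(int x, int y)) {
    if (availTL && !bCTUBoundary)
        return (getY(-1, -2) + getY(-1, -1) + 2 * getY(0, -2) + 2 * getY(0, -1) +
                getY(1, -2) + getY(1, -1) + 4) >> 3;
    if (availTL && bCTUBoundary)
        return (getY(-1, -1) + 2 * getY(0, -1) + getY(1, -1) + 2) >> 2;
    if (!availTL && !bCTUBoundary)
        return (getY(0, -2) + getY(0, -1) + 1) >> 1;
    return getY(0, -1); // !availTL && bCTUBoundary
}
```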
In the above scheme, when the CCLM prediction mode is adopted, the derivation of pTopDsY[0] must consider both whether the CU is located at a boundary of the video frame and whether the CU and the target video block belong to different CTUs, which complicates the CCLM prediction mode.
In addition, in the related art, when obtaining the luma reference points, downsampling is performed on the neighboring luma samples in the 2 rows above and the 3 columns to the left of the luma block, which increases the line buffer and also increases the complexity of the model coefficient derivation, as shown in fig. 6.
To address these problems, it has been proposed in the related art to require only the luma samples in the 1 row above and the 1 column to the left of the luma block, meaning that only the row immediately above and a single left column are used for downsampling to obtain the luma reference points, as shown in fig. 7. This scheme reduces the number of luma samples used in the downsampling, so the accuracy of the chroma prediction is lower and the final video compression ratio suffers. Moreover, for the leftmost luma reference point above the luma block (i.e., the luma reference point corresponding to the target chroma sample), whether the luma sample to its left is available (i.e., whether the CU is located at the left boundary of the video frame) still needs to be considered when determining this reference point, so the CCLM prediction mode remains complex.
To simplify the CCLM prediction mode, the embodiments of the present disclosure propose removing some of the conditions used in existing schemes to determine the luma reference points, and in particular simplifying the derivation of pTopDsY[0].
With continued reference to fig. 5, in the embodiment of the present disclosure, when x = 1, …, numSampT - 1, the luma value of pTopDsY[x] is determined considering a single condition: whether the target video block above the luma block in the reference video block and the CU belong to the same CTU.
If it is determined that the target video block above the luma block and the CU belong to the same CTU, so that bCTUBoundary is false, the luma value of pTopDsY[x] can be determined from the luma values of the 6 luma samples around pTopDsY[x].
For example, the luma value of pTopDsY[x] is determined using the following formula:
pTopDsY[x]=(pY[2*x-1][-2]+pY[2*x-1][-1]+2*pY[2*x][-2]+2*pY[2*x][-1]+pY[2*x+1][-2]+pY[2*x+1][-1]+4)>>3;
If it is determined that the target video block above the luma block and the CU belong to different CTUs, so that bCTUBoundary is true, the luma value of pTopDsY[x] can be determined from the luma values of the 3 luma samples around pTopDsY[x].
For example, the luma value of pTopDsY[x] is determined using the following formula:
pTopDsY[x]=(pY[2*x-1][-1]+2*pY[2*x][-1]+pY[2*x+1][-1]+2)>>2;
When x = 0, one scheme may determine the luma value of pTopDsY[0] based only on whether the CU and the target video block belong to the same CTU, regardless of whether the CU is located at the left boundary of the video frame.
If it is determined that the CU and the target video block belong to the same CTU, so that bCTUBoundary is false, the luma value of pTopDsY[0] can be determined from the luma values of the luma samples closest to pTopDsY[0] (see the two × marks in column 2 of the dashed box in fig. 3, corresponding to pY[0][-2] and pY[0][-1] in fig. 5).
For example, the luma value of pTopDsY[0] is determined using the following formula:
pTopDsY[0]=(pY[0][-2]+pY[0][-1]+1)>>1;
If it is determined that the CU and the target video block do not belong to the same CTU, so that bCTUBoundary is true, the luma value of pTopDsY[0] can be determined from the luma value of the luma sample closest to pTopDsY[0] (see the × in the middle of row 2 in the dashed box in fig. 3, corresponding to pY[0][-1] in fig. 5).
For example, the luma value of pTopDsY[0] is determined using the following formula:
pTopDsY[0] = pY[0][-1].
When x = 0, another scheme may determine the luma value of pTopDsY[0] directly from the luma value of the luma sample closest to the CU among the luma samples closest to pTopDsY[0] (see the × in the middle of row 2 in the dashed box in fig. 3, corresponding to pY[0][-1] in fig. 5), regardless of whether the CU is located at the left boundary of the video frame or whether the target video block and the CU belong to the same CTU.
For example, the luma value of pTopDsY[0] can then be determined directly using the following formula:
pTopDsY[0] = pY[0][-1].
Whichever of the above schemes is employed, the conditions to be considered when determining the luma value of pTopDsY[0] are simpler than in the related-art approach; therefore, the CCLM prediction mode can be simplified.
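For contrast with the four-case function above, the two simplified schemes reduce to the following sketches; note that neither one tests availTL.

```cpp
// Scheme 1: only the CTU-boundary condition is checked.
int topDsY0Scheme1(bool bCTUBoundary, int (*getY)(int x, int y)) {
    if (!bCTUBoundary)
        return (getY(0, -2) + getY(0, -1) + 1) >> 1; // two nearest luma samples
    return getY(0, -1);                              // single nearest luma sample
}

// Scheme 2: no condition at all; always take the nearest luma sample.
int topDsY0Scheme2(int (*getY)(int x, int y)) {
    return getY(0, -1);
}
```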
Referring to fig. 8, fig. 8 is a schematic diagram of the linear relationship between the chroma and the luma of the same CU, where point A has the minimum luma value and point B has the maximum luma value. After the chroma values of these two points are determined, α and β can be determined according to the following formulas:
α = (yB - yA) / (xB - xA)    (2)

β = yA - α · xA
where xA denotes the luma value of point A, yA denotes the chroma value of point A, xB denotes the luma value of point B, and yB denotes the chroma value of point B.
In a specific implementation, after the luma samples in the reference video block are downsampled to obtain the luma values of the luma reference points (i.e., the reconstructed luma samples), the luma reference point with the maximum luma value and the luma reference point with the minimum luma value may be selected from among the luma reference points, and the parameters of the linear model between the chroma and the luma of the CU (i.e., the α and β above) may be determined from the luma reference point with the maximum luma value and its corresponding chroma sample (point B in fig. 8) and the luma reference point with the minimum luma value and its corresponding chroma sample (point A in fig. 8).
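A minimal sketch of this min-Max fit is shown below, in floating point for clarity; a real decoder would replace the division with an integer approximation, which the text does not specify.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Min-Max fit sketch: lumaRef and chroma are the luma reference points and
// the chroma samples they correspond to, one-to-one. The line is fitted
// through the points of minimum and maximum luma (points A and B in fig. 8).
std::pair<double, double> minMaxFit(const std::vector<int>& lumaRef,
                                    const std::vector<int>& chroma) {
    std::size_t iMin = 0, iMax = 0;
    for (std::size_t i = 1; i < lumaRef.size(); ++i) {
        if (lumaRef[i] < lumaRef[iMin]) iMin = i;
        if (lumaRef[i] > lumaRef[iMax]) iMax = i;
    }
    const int xA = lumaRef[iMin], yA = chroma[iMin]; // point A: minimum luma
    const int xB = lumaRef[iMax], yB = chroma[iMax]; // point B: maximum luma
    const double alpha = (xB == xA) ? 0.0 : double(yB - yA) / double(xB - xA);
    const double beta = yA - alpha * xA;
    return {alpha, beta};
}
```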
Further, downsampling may be performed on the luminance sampling points in the CU to obtain luminance reference points corresponding to the chrominance sampling points in the CU, and then, the chrominance value of each chrominance sampling point in the CU is determined according to the following formula:
predC(i, j) = α · rec′L(i, j) + β    (1)
where predC(i, j) denotes the chroma value of the chroma sample in row i, column j of the CU, and rec′L(i, j) denotes the luma value of the luma reference point in row i, column j, obtained by downsampling the luma samples in the CU.
In practice, the min-Max method calculations are performed as part of the decoding process, not merely as an encoder search operation; therefore, no syntax is needed to convey α and β to the decoder. At present, the luma reference points can be obtained by downsampling the luma samples using any of equations (3) through (19):
Rec′L(x, y) = (2·RecL(2x, 2y) + RecL(2x+1, 2y) + RecL(2x-1, 2y) + 2·RecL(2x, 2y+1) + RecL(2x+1, 2y+1) + RecL(2x-1, 2y+1) + 4) >> 3    (3)

Rec′L(x, y) = (RecL(2x, 2y) + RecL(2x, 2y+1) + RecL(2x+1, 2y) + RecL(2x+1, 2y+1) + 2) >> 2    (4)

Rec′L(x, y) = RecL(2x, 2y)    (5)

Rec′L(x, y) = RecL(2x+1, 2y)    (6)

Rec′L(x, y) = RecL(2x-1, 2y)    (7)

Rec′L(x, y) = RecL(2x-1, 2y+1)    (8)

Rec′L(x, y) = RecL(2x, 2y+1)    (9)

Rec′L(x, y) = RecL(2x+1, 2y+1)    (10)

Rec′L(x, y) = (RecL(2x, 2y) + RecL(2x, 2y+1) + 1) >> 1    (11)

Rec′L(x, y) = (RecL(2x, 2y) + RecL(2x+1, 2y) + 1) >> 1    (12)

Rec′L(x, y) = (RecL(2x+1, 2y) + RecL(2x+1, 2y+1) + 1) >> 1    (13)

Rec′L(x, y) = (RecL(2x, 2y+1) + RecL(2x+1, 2y+1) + 1) >> 1    (14)

Rec′L(x, y) = (2·RecL(2x, 2y+1) + RecL(2x-1, 2y+1) + RecL(2x+1, 2y+1) + 2) >> 2    (15)

Rec′L(x, y) = (RecL(2x+1, 2y) + RecL(2x+1, 2y+1) + 1) >> 1    (16)

Rec′L(x, y) = (RecL(2x-1, 2y) + 3·RecL(2x, 2y) + 3·RecL(2x+1, 2y) + RecL(2x+2, 2y) + RecL(2x-1, 2y+1) + 3·RecL(2x, 2y+1) + 3·RecL(2x+1, 2y+1) + RecL(2x+2, 2y+1) + 8) >> 4    (17)

Rec′L(x, y) = (RecL(2x-1, 2y-1) + 2·RecL(2x, 2y-1) + RecL(2x+1, 2y-1) + 2·RecL(2x-1, 2y) + 4·RecL(2x, 2y) + 2·RecL(2x+1, 2y) + RecL(2x-1, 2y+1) + 2·RecL(2x, 2y+1) + RecL(2x+1, 2y+1) + 8) >> 4    (18)

Rec′L(x, y) = (RecL(2x, 2y-1) + RecL(2x+1, 2y-1) + 2·RecL(2x, 2y) + 2·RecL(2x+1, 2y) + RecL(2x, 2y+1) + RecL(2x+1, 2y+1) + 4) >> 3    (19)
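To make the filter family concrete, the sketch below implements two representatives: the six-tap filter of equation (3) and the two-tap vertical average of equation (11). rec is a hypothetical accessor over the reconstructed luma plane.

```cpp
// Equation (3): six-tap downsampling filter centered on luma column 2x.
int dsEq3(int x, int y, int (*rec)(int x, int y)) {
    return (2 * rec(2 * x, 2 * y) + rec(2 * x + 1, 2 * y) + rec(2 * x - 1, 2 * y) +
            2 * rec(2 * x, 2 * y + 1) + rec(2 * x + 1, 2 * y + 1) + rec(2 * x - 1, 2 * y + 1) +
            4) >> 3;
}

// Equation (11): two-tap vertical average of the two samples in column 2x.
int dsEq11(int x, int y, int (*rec)(int x, int y)) {
    return (rec(2 * x, 2 * y) + rec(2 * x, 2 * y + 1) + 1) >> 1;
}
```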
The following describes a process for selecting a reference video block in the embodiments of the present disclosure.
One possible case is that the 2 rows of luma samples immediately above the luma block and the 3 columns of luma samples immediately to its left are selected, as shown in fig. 5 (only 2 of the 3 left columns are shown in fig. 5; one column is omitted). In this case, the number of columns in the 2 rows of luma samples may be the same as the number of columns of luma samples in the CU, and the number of rows in the 3 columns of luma samples may be the same as the number of rows of luma samples in the CU.
In a specific implementation, in addition to using the upper and left sample sets together to compute the parameters α, β, the parameters may also be computed using the LM_A mode or the LM_L mode, where:
In the LM_A mode, the parameters α, β are computed using only the upper sample set; to obtain more luma samples, the upper sample set may be expanded to (W + H) samples, as shown in fig. 9, where W is the width of the CU and H is its height.
In the LM_L mode, the parameters α, β are computed using only the left sample set; to obtain more luma samples, the left sample set may be expanded to (H + W) samples, as shown in fig. 10.
For non-square video blocks, the upper sample set may be expanded to W + W and the left sample set may be expanded to H + H.
In a specific implementation, if the upper/left sample set is not available, the LM_A or LM_L mode may be neither checked nor signaled; if the upper/left sample set is available but does not contain enough luma samples, the required number of samples can be reached by copying the rightmost sample (for the upper sample set) or the bottommost sample (for the left sample set) into the missing positions.
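The padding rule amounts to repeating the last available sample until the set reaches the required length, e.g.:

```cpp
#include <cstddef>
#include <vector>

// Sketch of the padding rule: extend a sample set to the required length by
// repeating its last available sample (the rightmost sample for an upper
// set, the bottommost for a left set). Assumes the set is non-empty.
void padSampleSet(std::vector<int>& samples, std::size_t required) {
    while (samples.size() < required)
        samples.push_back(samples.back());
}
```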
The CCLM prediction mode in the embodiments of the present disclosure may also have a variety of extension cases, which are described below.
In one case, the LM mode is extended to a multiple-model LM (MMLM) mode.
In this mode, the luma reference points are divided into two classes, a separate linear model is used for each class, and the parameters α and β are determined individually for each model.
The multi-filter LM mode (MFLM) refers to downsampling the luma samples with different filters in the prediction model, e.g., using four filters, with the filter used indicated in the bitstream or signaled directly.
The LM-angle prediction mode is a combination of the MMLM mode and a non-LM mode; it averages the chroma prediction values determined by the two modes.
In another case, a plurality of adjacent linear models (MNLM) are used.
This scheme uses multiple neighbor sets in the MMLM derivation to cover various linear relationships between the chroma and luma of a CU. As shown in fig. 11, three MMLMs with different neighbor sets are proposed in the MNLM, where A, B, C, D, E, F, G, H denote different reconstructed video blocks around the CU:
MMLM: A, B, C, D (including both the upper and left neighbor sets);
Upper-MMLM: C, D, F, H (including only the upper neighbor set);
Left-MMLM: (including only the left neighbor set).
the CCLM prediction modes in MNLM are listed in table 1 below:
TABLE 1
(Table 1 is not reproduced in this text; it lists the CCLM prediction modes in MNLM together with their downsampling filters and neighbor sets.)
Modes 0, 1, 2, and 3 use the same downsampling filter, but the LM and MMLM derivations use different neighbor sets.
In another case, an adaptive multi-component linear model is used.
In this scheme, a single set or multiple sets for linear model prediction are generated. First, the samples from the upper or left neighboring video blocks of the CU are grouped according to their luma and chroma values: the neighboring samples are scanned, and whether each can be added to an existing group is determined according to criteria one and two below; if a neighboring sample cannot satisfy the criteria of any existing group, a new group is created for it. In this way, all the neighboring samples are divided into a number of groups, as shown in fig. 12.
Criterion one: Ygroup_min - Ymargin < Yp < Ygroup_max + Ymargin;
Criterion two: Cgroup_min - Cmargin < Cp < Cgroup_max + Cmargin;
where Yp and Cp denote the luma and chroma values of the neighboring sample, respectively; Ygroup_max and Cgroup_max denote the maximum luma and chroma values within a group; Ygroup_min and Cgroup_min denote the minimum luma and chroma values within the group; and Ymargin and Cmargin denote the margins of the luma and chroma groups, respectively.
Thereafter, the parameters αg and βg of the LM in each group are determined by minimizing the regression error of the equation defined in CCLM. When predicting the chroma value of any chroma sample, the nearest group is determined according to the average luma value, and that group's parameters αg and βg are selected to calculate the chroma value of the chroma sample by the following formula:
PredC(x, y) = αg · Rec′L(x, y) + βg
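A sketch of the per-group prediction step follows; the Group layout and the nearest-group selection by mean luma are modeled on the description above, with the details assumed for illustration.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Each group carries its fitted (alpha_g, beta_g) and a representative mean
// luma value (an assumed summary of the group's luma range).
struct Group { double meanLuma, alpha, beta; };

// Predict one chroma value: pick the group whose mean luma is nearest to the
// downsampled luma value, then apply that group's linear model.
double predictChromaGrouped(double lumaDs, const std::vector<Group>& groups) {
    std::size_t best = 0;
    for (std::size_t g = 1; g < groups.size(); ++g)
        if (std::abs(groups[g].meanLuma - lumaDs) <
            std::abs(groups[best].meanLuma - lumaDs))
            best = g;
    return groups[best].alpha * lumaDs + groups[best].beta;
}
```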
alternatively, CCLM prediction is performed using a single column of adjacent luminance samples.
To limit the adjacent luma samples required for the training process to a single column, a downsampling filter with fewer taps is applied, as shown in fig. 13 (a).
Equation (15) is used for the upper adjacent luma sample set;
Equation (16) is used for the left adjacent luma sample set;
the luma samples within the block are still downsampled using a six-tap filter.
Two solutions are provided in the related art, using different sets of adjacent luma samples. Let the width and height of the CU be W and H, respectively. In solution #1, W upper neighboring samples and H left neighboring samples participate in the training process, as in fig. 13(a). In solution #2, 2W upper neighboring samples and 2H left neighboring samples participate in the training process, as in fig. 13(b). It should be noted that the extended neighboring samples in solution #2 are already used by wide-angle intra prediction.
Further, the related art also proposes solutions #1A and #2A, which apply the same methods as solutions #1 and #2, respectively, but only to the upper adjacent samples. In solutions #1A and #2A, the left adjacent samples are downsampled as in VTM-2.0, i.e., the H left adjacent luma samples are downsampled through the 6-tap filter.
In another case, the CCLM coefficients are derived based on classified averages.
As shown in FIG. 14, Lmean is the average of the luma sample set. The luma sample set is split into two classes by Lmean, and the averages LLmean and LRmean of the two classes are determined. Correspondingly, CLmean and CRmean are determined, and the parameters of the CCLM are then derived according to the following formulas:
α = (CRmean - CLmean) / (LRmean - LLmean)

β = CLmean - α · LLmean
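This derivation translates directly into code; the sketch below assumes both classes are non-empty and uses floating point for clarity.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Classification-based coefficient derivation: split the training samples at
// the overall luma mean, average luma and chroma within each class, and fit
// the line through the two class means.
std::pair<double, double> classifiedFit(const std::vector<int>& luma,
                                        const std::vector<int>& chroma) {
    double lMean = 0;
    for (int v : luma) lMean += v;
    lMean /= luma.size();

    double lL = 0, cL = 0, lR = 0, cR = 0;
    int nL = 0, nR = 0;
    for (std::size_t i = 0; i < luma.size(); ++i) {
        if (luma[i] <= lMean) { lL += luma[i]; cL += chroma[i]; ++nL; }
        else                  { lR += luma[i]; cR += chroma[i]; ++nR; }
    }
    lL /= nL; cL /= nL; lR /= nR; cR /= nR; // LLmean, CLmean, LRmean, CRmean
    const double alpha = (cR - cL) / (lR - lL);
    const double beta = cL - alpha * lL;    // beta = CLmean - alpha * LLmean
    return {alpha, beta};
}
```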
the following describes a sampling check sequence of the min-Max method in the embodiment of the present disclosure.
In particular implementation, when searching for the respective minimum and maximum luminance value for each chrominance block, the K sample points are examined according to a predefined examination order, where K may be implicitly determined or explicitly signaled. Further, K may vary with the size of the chroma block.
Checking order #1: as shown in figs. 15 and 16, the checking order is arranged in ascending order of index number. Conceptually, the samples are checked from the top-right and bottom-left corners, interleaved toward the top-left corner. Note that when the width of the chroma block is not equal to its height, after all samples on the short side have been checked, the samples on the long side are simply checked in ascending order of index number.
Checking order #2: as shown in fig. 17, the checking order of the LM_A and LM_L modes is arranged in ascending order of index number. Conceptually, the samples are checked from the top-right or bottom-left corner toward the top-left corner.
Checking order #3: as shown in fig. 18, the checking order is arranged in ascending order of index number. Conceptually, the samples are checked from the two ends of each side toward its center. Note that when the width of the chroma block is not equal to its height, after all samples on the short side have been checked, the samples on the long side are simply checked in ascending order of index number.
Checking order #4: as shown in fig. 19, the checking order of the LM_A and LM_L modes is arranged in ascending order of index number. Conceptually, the samples are checked from the two ends of each side toward its center.
It should be noted that the checking orders listed here are only examples and do not limit the checking order in the embodiments of the present disclosure. Furthermore, only a subset of the sample set may be checked. It is worth mentioning that any type of filter (e.g., equations (4) through (9)) may be used to downsample the luma samples.
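Without the figures, the exact interleavings cannot be reproduced here, but a hypothetical realization of the "ends toward the center" idea of checking order #3, for one side of n samples, could look like this:

```cpp
#include <vector>

// Hypothetical index order for one side of n samples, alternating between
// the two ends and moving toward the center. The actual order in FIG. 18
// may differ; this is only an illustration of the described idea.
std::vector<int> endsTowardCenterOrder(int n) {
    std::vector<int> order;
    for (int lo = 0, hi = n - 1; lo <= hi; ++lo, --hi) {
        order.push_back(lo);
        if (hi != lo) order.push_back(hi);
    }
    return order;
}
```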
The process of selecting sample points to search for the minimum and maximum values in the embodiments of the present disclosure is described below.
In a specific implementation, rather than searching among filtered luma values, the minimum and maximum luma values are first found using an unfiltered method such as equations (5) through (10); after the luma samples having the minimum and maximum values have been determined, the minimum and maximum luma values are regenerated for those luma samples using a filtering method such as equations (3) through (19). Note that the upper samples and the left samples of the current block should use different filters; the sample located at the upper-left corner of the current block may use the same filter as the upper sample set or the left sample set, or another filter.
It should be noted that when equation (1) is used to generate the final LM predictor, the samples can still use the conventional filter (e.g., equation (4)).
In one particular example, the min-Max method is modified to examine the minimum and maximum luminance values generated using the unfiltered method (equation (5)), and after determining the luminance samples having the minimum and maximum values, the minimum and maximum luminance values are regenerated for the associated luminance samples using the filtered method (equation (3)).
In one particular example, the min-Max method is modified to examine the minimum and maximum luminance values generated using the unfiltered method (equation (9)), and after determining the luminance samples having the minimum and maximum values, the minimum and maximum luminance values are regenerated for the associated luminance samples using the filtered method (equation (3)).
In another particular example, the min-Max method is modified to check the minimum and maximum luminance values generated using the unfiltered method (equation (5)). After determining the luminance samples having the minimum and maximum values, the minimum and maximum luminance values are regenerated for the associated luminance samples using a filtering method (equation (15)).
In another particular example, the min-Max method is modified to check the minimum and maximum luminance values generated using the unfiltered method (equation (5)). After determining the luminance samples having the minimum and maximum values, the minimum and maximum luminance values are regenerated for the associated luminance samples using a filtering method (equation (16)).
In another particular example, the min-Max method is modified to examine the minimum and maximum luminance values generated using the unfiltered method (equation (5)), and after determining the luminance samples having the minimum and maximum values, the minimum and maximum luminance values are regenerated for the associated luminance samples using the following conditional filtering.
Equation (15) is used for the upper adjacent luma samples;
Equation (16) is used for the left adjacent luma samples.
In another particular example, the min-Max method is modified to check the minimum and maximum luminance values generated using the conditional unfiltered method as follows.
Equation (9) is used for the upper adjacent luma samples;
Equation (6) is used for the left adjacent luma samples;
after determining the luminance samples having the minimum and maximum values, the minimum and maximum luminance values are regenerated for the associated luminance samples using conditional filtering as follows.
Equation (15) is used for the upper adjacent luma samples;
Equation (16) is used for the left adjacent luma samples.
In another particular example, the min-Max method is modified to check the minimum and maximum luminance values generated using the conditional unfiltered method as follows.
Equation (9) is used for the upper adjacent luma samples;
Equation (10) is used for the left adjacent luma samples;
after determining the luminance samples having the minimum and maximum values, the minimum and maximum luminance values are regenerated for the associated luminance samples using conditional filtering as follows.
Equation (15) is used for the upper adjacent luma samples;
Equation (16) is used for the left adjacent luma samples.
In another particular example, the min-Max method is modified to check the minimum and maximum luminance values generated using the unfiltered method (equation (9)). After determining the luminance samples having the minimum and maximum values, the minimum and maximum luminance values are regenerated for the associated luminance samples using conditional filtering as follows.
Equation (15) is used for the upper adjacent luma samples;
Equation (11) is used for the left adjacent luma samples.
In another particular example, the min-Max method is modified to examine the minimum and maximum luminance values generated using the following conditional unfiltered method:
equation (9) for the upper neighboring luminance samples;
equation (5) for the left neighboring luminance samples.
After determining the luminance samples having the minimum and maximum values, the minimum and maximum luminance values are regenerated for the associated luminance samples using the following conditional filtering:
equation (15) for the upper neighboring luminance samples;
equation (11) for the left neighboring luminance samples.
In one possible implementation, when there are multiple luminance samples having the same luminance value, their associated chrominance values may be weight-averaged, and the weighted average used as the chrominance value paired with the final minimum or maximum luminance value when determining α and β.
In another possible implementation, when there are multiple luminance samples with similar luminance values (luminance values all within a predetermined range), both the luminance values and the associated chrominance values may be weight-averaged, and the weighted averages used as the final minimum and maximum luminance and chrominance values for determining α and β.
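To make the order of operations concrete, the following Python sketch combines the modified min-Max selection with the weighted-averaging variants above. All names are hypothetical; `unfiltered` and `filtered` stand in for whichever pair of equations a given example chooses, and equal weights are an assumption (the text only requires some weighted average).

```python
def modified_min_max(neighbors, unfiltered, filtered, sim_range=0):
    """Sketch of the modified min-Max selection with weighted averaging.

    neighbors: list of (luma_samples, chroma_value) pairs, one per
    reconstructed neighboring position.
    unfiltered / filtered: callables standing in for the unfiltered and
    filtered down-sampling rules cited above (e.g. equations (9)/(3)).
    sim_range: luma values within this distance of an extreme count as
    "similar"; 0 reproduces the equal-value case.
    """
    coarse = [unfiltered(lu) for lu, _ in neighbors]

    def refine(extreme):
        # Gather all samples whose coarse luma value matches the extreme,
        # regenerate their luma with the filtered rule, and average.
        idx = [i for i, v in enumerate(coarse) if abs(v - extreme) <= sim_range]
        x = sum(filtered(neighbors[i][0]) for i in idx) / len(idx)
        y = sum(neighbors[i][1] for i in idx) / len(idx)
        return x, y

    point_a = refine(min(coarse))  # (luma, chroma) at the minimum
    point_b = refine(max(coarse))  # (luma, chroma) at the maximum
    return point_a, point_b
```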
Fig. 20 is a flowchart illustrating a method of determining chrominance information of a coding unit according to an exemplary embodiment. The method may be performed at the encoding side or the decoding side and includes the following steps.
S2001: select a reconstructed reference video block on the periphery of the CU to be processed in a video frame, where the reconstructed reference video block includes at least one target video block adjacent to the lateral boundary of the CU.
The color sampling format of the video frame is YUV4:2:0, and a reconstructed video block refers to a video block in the video frame for which luminance prediction and chrominance prediction have been completed.
In particular, the reconstructed video blocks around the CU are generally located on the upper side and the left side of the CU, so the reference video block can be selected from the left side and/or the upper side of the CU. Since the target video block in the reference video block is adjacent to the lateral boundary of the CU, the target video block is selected from the upper side of the CU.
In a possible implementation, the reconstructed reference video blocks are selected from both the left side and the upper side of the CU. In this case, the video block located on the left side of the CU may include the same number of rows of luminance sampling points as the CU and 3M columns of luminance sampling points, where M is a positive integer; the video block located on the upper side of the CU (i.e., the target video block) may include the same number of columns of luminance sampling points as the CU and 2M rows of luminance sampling points, see fig. 4.
In another possible implementation, the reconstructed reference video block is selected only from the upper side of the CU. In this case, the video block located on the upper side of the CU (i.e., the target video block) may include twice as many columns of luminance sampling points as the CU, and the number of rows of luminance sampling points included in the target video block may be equal to 2M, as shown in fig. 9.
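A minimal sketch of the two reference-region geometries above, under the stated assumptions (in particular the 3M-column reading of the left-side block); all names are hypothetical:

```python
def reference_region_dims(cu_rows, cu_cols, m, top_only):
    """Luma-sample dimensions (rows, cols) of the reference blocks.

    Returns (left_block_dims, top_block_dims); left_block_dims is None
    when only the upper side of the CU is used.
    """
    if top_only:
        return None, (2 * m, 2 * cu_cols)   # fig. 9 layout
    return (cu_rows, 3 * m), (2 * m, cu_cols)  # fig. 4 layout
```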
S2002: downsample the luminance sampling points in the reference video block to obtain luminance reference points corresponding to the chrominance sampling points in the reference video block. Here, the target chrominance sampling point is the chrominance sampling point in the target video block closest to the first row of luminance sampling points in the CU, the target luminance sampling points are the luminance sampling points in the target video block closest to the target chrominance sampling point, and the luminance reference point corresponding to the target chrominance sampling point is obtained by downsampling the target luminance sampling points.
Referring to fig. 3, since the color sampling format of the video frame is YUV4:2:0, the sampling frequency of the luminance component is greater than that of the chrominance component, so the number of luminance sampling points in the reference video block is greater than the number of chrominance sampling points. To determine the linear relationship between the chrominance and the luminance of the CU using the reference video block, the luminance sampling points in the reference video block may be downsampled to obtain luminance reference points (i.e., reconstructed luminance sampling points) in one-to-one correspondence with the chrominance sampling points in the reference video block.
Moreover, in YUV4:2:0, any chrominance sampling point in the target video block generally has 2 nearest luminance sampling points; referring to fig. 3, the 2 luminance sampling points in the middle column of the dashed box are nearest to the chrominance sampling point at the middle position. That is, in the embodiment of the present disclosure, there are 2 target luminance sampling points.
In one possible implementation, when determining the luminance reference point corresponding to the target chrominance sampling point, the luminance sampling points around the target chrominance sampling point may be downsampled without considering whether the CU is located at the left boundary of the video frame; only whether the CU and the target video block belong to the same CTU is considered.
In specific implementation, if the CU and the target video block belong to the same CTU, downsampling may be performed on the target luminance sampling points.
Specifically, the average luminance of the target luminance sampling points may be calculated, and the luminance value of the corresponding luminance reference point determined from this average.
For example, the average may be used directly as the luminance value of the corresponding luminance reference point, or the value obtained by rounding the average may be used as the luminance value of the corresponding luminance reference point.
In specific implementation, if the CU and the target video block belong to different CTUs, downsampling may be performed on the luminance sampling point closest to the CU among the target luminance sampling points.
For example, the luminance value of the luminance sampling point closest to the CU among the target luminance sampling points is taken as the luminance value of the corresponding luminance reference point.
In another possible implementation, when determining the luminance reference point corresponding to the target chrominance sampling point, the luminance sampling point closest to the CU among the target luminance sampling points may be downsampled directly, regardless of whether the CU is located at the left boundary of the video frame and regardless of whether the CU and the target video block belong to the same CTU.
For example, the luminance value of the luminance sampling point closest to the CU among the target luminance sampling points is taken as the luminance value of the corresponding luminance reference point.
In specific implementation, for each chrominance sampling point in the target video block other than the target chrominance sampling point, whether the CU and the target video block belong to the same CTU may be considered when determining the corresponding luminance reference point. Taking the 2 rows and 3 columns of luminance sampling points around such a chrominance sampling point as an example: if the CU and the target video block belong to the same CTU, the 2 rows and 3 columns of luminance sampling points are downsampled to obtain the corresponding luminance reference point; if the CU and the target video block belong to different CTUs, the 1 row and 3 columns of luminance sampling points closest to the CU among them may be downsampled to obtain the corresponding luminance reference point.
Moreover, if the reference video block further includes video blocks other than the target video block, the luminance reference points corresponding to the chrominance sampling points in those video blocks may also be determined in this manner, which is not repeated here.
The above process is described below with reference to specific embodiments.
Referring to fig. 21, assume that the reconstructed reference video block is selected only from the upper side of the CU, in which case the reference video block is the target video block. Assume further that, for each chrominance sampling point in the target video block, the 2 rows and 3 columns of luminance sampling points around it are considered when determining its luminance reference point, where the 3 luminance sampling points in row 1 are numbered 1, 2, and 3 from left to right, and the 3 luminance sampling points in row 2 are numbered 4, 5, and 6 from left to right.
In specific implementation, for each chrominance sampling point in the target video block other than the target chrominance sampling point, only whether the CU and the target video block belong to the same CTU is considered when downsampling the luminance sampling points around that chrominance sampling point.
For example, when the CU and the target video block belong to the same CTU, the luminance sampling points at positions 1 to 6 are downsampled to obtain the luminance value of the luminance reference point corresponding to the chrominance sampling point; when the CU and the target video block belong to different CTUs, the luminance sampling points at positions 4 to 6 are downsampled to obtain the luminance value of the luminance reference point corresponding to the chrominance sampling point. A sketch of this rule is given below.
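In this sketch, the indexing and the filter weights are assumptions: a plain average stands in for the actual filter taps, which are given by the equations cited earlier.

```python
def luma_reference_nontarget(luma, r, c, same_ctu):
    """Down-sample the 2x3 luma window around a non-target chroma sample.

    luma: 2-D list of reconstructed luma samples; (r, c) indexes
    position 2 (the centre of row 1 in fig. 21), so row r + 1 holds
    positions 4-6, the row nearer the CU.
    """
    row1 = [luma[r][c - 1], luma[r][c], luma[r][c + 1]]              # positions 1-3
    row2 = [luma[r + 1][c - 1], luma[r + 1][c], luma[r + 1][c + 1]]  # positions 4-6
    window = row1 + row2 if same_ctu else row2
    return round(sum(window) / len(window))
```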
In one possible implementation, for the target chrominance sampling point in the target video block, the luminance sampling points around it may be downsampled without considering whether the CU is located at the left boundary of the video frame; only whether the CU and the target video block belong to the same CTU is considered.
In specific implementation, when the CU and the target video block belong to the same CTU, the average luminance of the luminance sampling points at positions 2 and 5 may be determined as the luminance of the luminance reference point corresponding to the target chrominance sampling point; when the CU and the target video block belong to different CTUs, the luminance of the luminance sampling point at position 5 may be determined as the luminance of the luminance reference point corresponding to the target chrominance sampling point.
In another possible implementation, for the target chrominance sampling point in the target video block, the luminance of the luminance sampling point at position 5 may be determined directly as the luminance of the luminance reference point corresponding to the target chrominance sampling point, regardless of whether the CU is located at the boundary of the video frame and regardless of whether the CU and the target video block belong to the same CTU.
In both modes, fewer conditions need to be considered when determining the luminance reference point corresponding to the target chrominance sampling point, so the CCLM prediction mode is simplified. Both modes are sketched below.
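In this sketch, the indexing follows fig. 21, and the rounded average (a + b + 1) >> 1 is an assumption consistent with the rounded-average option described for S2002:

```python
def luma_reference_target(luma, r, c, same_ctu, simplified=False):
    """Down-sampling for the target chroma sample (positions 2 and 5).

    simplified=True is the second mode above: always take position 5,
    regardless of frame boundary or CTU membership.
    """
    if simplified or not same_ctu:
        return luma[r + 1][c]                      # position 5 only
    return (luma[r][c] + luma[r + 1][c] + 1) >> 1  # rounded average of 2 and 5
```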
S2003: determine the parameters of the linear model between the chrominance and the luminance of the CU according to the luminance reference point with the maximum luminance value, the luminance reference point with the minimum luminance value, and the chrominance values of the chrominance sampling points respectively corresponding to these two luminance reference points.
In specific implementation, the luminance reference point with the maximum luminance value and its corresponding chrominance sampling point may be taken as point B, and the luminance reference point with the minimum luminance value and its corresponding chrominance sampling point as point A. The parameters of the linear model between the chrominance and the luminance of the CU are then determined according to the following formula:
α=(yB-yA)/(xB-xA), β=yA-α·xA (1);
where xA denotes the luminance value of point A, yA denotes the chrominance value of point A, xB denotes the luminance value of point B, and yB denotes the chrominance value of point B.
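A floating-point sketch of equation (1) follows; the function name is hypothetical, and a production codec would replace the division with an integer approximation:

```python
def linear_model_params(x_a, y_a, x_b, y_b):
    """Compute alpha and beta of equation (1) from points A and B."""
    if x_b == x_a:  # degenerate case: all luma reference values equal
        return 0.0, float(y_a)
    alpha = (y_b - y_a) / (x_b - x_a)
    beta = y_a - alpha * x_a
    return alpha, beta
```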
S2004: determine the chrominance values of the chrominance sampling points in the CU according to the parameters of the linear model and the luminance values of the luminance sampling points in the CU.
In specific implementation, the luminance sampling points in the CU may be downsampled to obtain luminance reference points in one-to-one correspondence with the chrominance sampling points in the CU, and the chrominance value of each chrominance sampling point in the CU is then determined according to the following formula:
predC(i,j)=α·recL'(i,j)+β (2);
where predC(i,j) denotes the chrominance value of the chrominance sampling point in row i, column j of the CU, and recL'(i,j) denotes the luminance value of the luminance reference point in row i, column j.
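Equation (2) then applies per position of the downsampled luma block; a minimal sketch (names hypothetical):

```python
def predict_chroma(rec_luma_ds, alpha, beta):
    """Apply equation (2): one predicted chroma value per down-sampled
    luma reference value of the CU."""
    return [[alpha * v + beta for v in row] for row in rec_luma_ds]
```

For example, feeding the two points returned by the min-Max selection into linear_model_params and then into predict_chroma reproduces steps S2003 and S2004.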
When the method provided in the embodiments of the present disclosure is implemented in software, hardware, or a combination of software and hardware, the electronic device may include a plurality of functional modules, and each functional module may include software, hardware, or a combination thereof.
Specifically, fig. 22 is a block diagram illustrating an apparatus for determining chrominance information of a coding unit according to an exemplary embodiment; the apparatus includes a selecting module 2201, a sampling module 2202, a parameter determining module 2203, and a chrominance determining module 2204.
A selecting module 2201 configured to select, on the periphery of a coding unit to be processed in a video frame, a reference video block that has been reconstructed, where the reconstructed reference video block includes at least one target video block, the target video block being adjacent to the lateral boundary of the coding unit;
a sampling module 2202 configured to downsample the luminance sampling points in the reference video block to obtain luminance reference points corresponding to the chrominance sampling points in the reference video block, where the target chrominance sampling point is the chrominance sampling point in the target video block closest to the first row of luminance sampling points in the coding unit, the target luminance sampling points are the luminance sampling points in the target video block closest to the target chrominance sampling point, and the luminance reference point corresponding to the target chrominance sampling point is obtained by downsampling the target luminance sampling points;
a parameter determining module 2203 configured to determine the parameters of the linear model between the chrominance and the luminance of the coding unit according to the luminance reference point with the maximum luminance value, the luminance reference point with the minimum luminance value, and the chrominance values of the chrominance sampling points respectively corresponding to these two luminance reference points;
a chrominance determining module 2204 configured to determine the chrominance values of the chrominance sampling points in the coding unit according to the parameters of the linear model and the luminance values of the luminance sampling points in the coding unit.
In one possible implementation, the sampling module 2202 is specifically configured to downsample the target luminance sampling points in the following manner:
if the coding unit and the target video block belong to the same coding tree unit, downsampling the target luminance sampling points;
if the coding unit and the target video block belong to different coding tree units, downsampling the luminance sampling point closest to the coding unit among the target luminance sampling points.
In another possible implementation, the sampling module 2202 is specifically configured to downsample the target luminance sampling points in the following manner:
directly downsampling the luminance sampling point closest to the coding unit among the target luminance sampling points.
In a possible implementation, the sampling module 2202 is specifically configured to:
calculate the average luminance of the target luminance sampling points;
determine the luminance value of the corresponding luminance reference point according to the average.
In a possible implementation, the sampling module 2202 is specifically configured to:
take the luminance value of the luminance sampling point closest to the coding unit among the target luminance sampling points as the luminance value of the corresponding luminance reference point.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The division of the modules in the embodiments of the present disclosure is illustrative and is only a logical function division; there may be other division manners in actual implementation. In addition, the functional modules in the embodiments of the present disclosure may each be integrated in one processor, may exist alone physically, or two or more modules may be integrated into one module. The modules may be coupled to each other through interfaces that are typically electrical communication interfaces, although mechanical or other forms of interface are not excluded. Thus, modules described as separate components may or may not be physically separate, and may be located in one place or distributed in different locations on the same or different devices. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Fig. 23 is a schematic diagram of an electronic device according to an exemplary embodiment. The electronic device includes a transceiver 2301 and a processor 2302, where the processor 2302 may be a Central Processing Unit (CPU), a microprocessor, an application-specific integrated circuit, a programmable logic circuit, a large-scale integrated circuit, a digital processing unit, or the like. The transceiver 2301 is used for data transmission and reception between the electronic device and other devices.
The electronic device may further include a memory 2303 for storing the software instructions executed by the processor 2302; the memory 2303 may also store other data required by the electronic device, such as identification information of the electronic device, encryption information of the electronic device, and user data. The memory 2303 may be a volatile memory, such as a random-access memory (RAM); the memory 2303 may also be a non-volatile memory, such as, but not limited to, a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or the memory 2303 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 2303 may also be a combination of the above.
The specific connection medium among the processor 2302, the memory 2303, and the transceiver 2301 is not limited in the embodiments of the present disclosure. In fig. 23, the memory 2303, the processor 2302, and the transceiver 2301 are shown connected by a bus 2304 (drawn as a thick line) merely as an example; the connection manner between other components is likewise illustrative and not limiting. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 23, but this does not mean that there is only one bus or one type of bus.
The processor 2302 may be dedicated hardware or a processor running software. When the processor 2302 runs software, it reads the software instructions stored in the memory 2303 and, driven by those instructions, executes the method for determining the chrominance information of the coding unit involved in the foregoing embodiments.
An embodiment of the present disclosure further provides a storage medium. When the instructions in the storage medium are executed by a processor of an electronic device, the electronic device is able to execute the method for determining chrominance information of a coding unit in the foregoing embodiments.
In some possible embodiments, the aspects of the method for determining chrominance information of a coding unit provided by the present disclosure may also be implemented in the form of a program product including program code; when the program product runs on an electronic device, the program code causes the electronic device to perform the method for determining chrominance information of a coding unit described in the foregoing embodiments.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A program product for determining chrominance information of a coding unit provided by embodiments of the present disclosure may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a computing device. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, in accordance with embodiments of the present disclosure. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the disclosed methods are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present disclosure have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the disclosure.
It will be apparent to those skilled in the art that various changes and modifications can be made in the present disclosure without departing from the spirit and scope of the disclosure. Thus, if such modifications and variations of the present disclosure fall within the scope of the claims of the present disclosure and their equivalents, the present disclosure is intended to include such modifications and variations as well.

Claims (10)

1. A method for determining chrominance information of a coding unit, comprising:
selecting a reconstructed reference video block on the periphery of a coding unit to be processed in a video frame, wherein the reconstructed reference video block comprises at least one target video block, and the target video block is adjacent to the lateral boundary of the coding unit;
downsampling the luminance sampling points in the reference video block to obtain luminance reference points corresponding to the chrominance sampling points in the reference video block, wherein a target chrominance sampling point is the chrominance sampling point in the target video block closest to the first row of luminance sampling points in the coding unit, target luminance sampling points are the luminance sampling points in the target video block closest to the target chrominance sampling point, and the luminance reference point corresponding to the target chrominance sampling point is obtained by downsampling the target luminance sampling points;
determining parameters of a linear model between the chrominance and the luminance of the coding unit according to the luminance reference point with the maximum luminance value, the luminance reference point with the minimum luminance value, and the chrominance values of the chrominance sampling points respectively corresponding to these two luminance reference points; and
determining the chrominance values of the chrominance sampling points in the coding unit according to the parameters of the linear model and the luminance values of the luminance sampling points in the coding unit.
2. The method of claim 1, wherein the downsampling of the target luminance sampling points is performed as follows:
if the coding unit and the target video block belong to the same coding tree unit, downsampling the target luminance sampling points; and
if the coding unit and the target video block belong to different coding tree units, downsampling the luminance sampling point closest to the coding unit among the target luminance sampling points.
3. The method of claim 1, wherein the downsampling of the target luminance sampling points is performed as follows:
directly downsampling the luminance sampling point closest to the coding unit among the target luminance sampling points.
4. The method of claim 2, wherein downsampling the target luminance sampling points comprises:
calculating the average luminance of the target luminance sampling points; and
determining the luminance value of the corresponding luminance reference point according to the average.
5. The method according to claim 2 or 3, wherein downsampling the luminance sampling point closest to the coding unit among the target luminance sampling points comprises:
taking the luminance value of the luminance sampling point closest to the coding unit among the target luminance sampling points as the luminance value of the corresponding luminance reference point.
6. An apparatus for determining chrominance information of a coding unit, comprising:
a selecting module configured to select a reconstructed reference video block on the periphery of a coding unit to be processed in a video frame, wherein the reconstructed reference video block comprises at least one target video block, and the target video block is adjacent to the lateral boundary of the coding unit;
a sampling module configured to downsample the luminance sampling points in the reference video block to obtain luminance reference points corresponding to the chrominance sampling points in the reference video block, wherein a target chrominance sampling point is the chrominance sampling point in the target video block closest to the first row of luminance sampling points in the coding unit, target luminance sampling points are the luminance sampling points in the target video block closest to the target chrominance sampling point, and the luminance reference point corresponding to the target chrominance sampling point is obtained by downsampling the target luminance sampling points;
a parameter determining module configured to determine parameters of a linear model between the chrominance and the luminance of the coding unit according to the luminance reference point with the maximum luminance value, the luminance reference point with the minimum luminance value, and the chrominance values of the chrominance sampling points respectively corresponding to these two luminance reference points; and
a chrominance determining module configured to determine the chrominance values of the chrominance sampling points in the coding unit according to the parameters of the linear model and the luminance values of the luminance sampling points in the coding unit.
7. The apparatus of claim 6, wherein the sampling module is specifically configured to downsample the target luminance sampling points in the following manner:
if the coding unit and the target video block belong to the same coding tree unit, downsampling the target luminance sampling points; and
if the coding unit and the target video block belong to different coding tree units, downsampling the luminance sampling point closest to the coding unit among the target luminance sampling points.
8. The apparatus of claim 6, wherein the sampling module is specifically configured to downsample the target luminance sampling points in the following manner:
directly downsampling the luminance sampling point closest to the coding unit among the target luminance sampling points.
9. An electronic device, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores instructions executable by the at least one processor, and when executed by the at least one processor, the instructions enable the at least one processor to perform the method of any one of claims 1-5.
10. A storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of claims 1-5.
CN201911378799.7A 2019-01-03 2019-12-27 Method and device for determining chrominance information of coding unit Pending CN111083486A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962788124P 2019-01-03 2019-01-03
US62/788,124 2019-01-03

Publications (1)

Publication Number Publication Date
CN111083486A true CN111083486A (en) 2020-04-28

Family

ID=70318643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911378799.7A Pending CN111083486A (en) 2019-01-03 2019-12-27 Method and device for determining chrominance information of coding unit

Country Status (1)

Country Link
CN (1) CN111083486A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114503358A (en) * 2020-06-08 2022-05-13 腾讯美国有限责任公司 String matching with monochrome values
CN114598880A (en) * 2022-05-07 2022-06-07 深圳传音控股股份有限公司 Image processing method, intelligent terminal and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103200401A (en) * 2012-01-06 2013-07-10 索尼公司 Image processing device and image processing method
CN103688533A (en) * 2011-06-20 2014-03-26 联发科技(新加坡)私人有限公司 Method and apparatus of chroma intra prediction with reduced line memory
CN103918269A (en) * 2012-01-04 2014-07-09 联发科技(新加坡)私人有限公司 Method and apparatus of luma-based chroma intra prediction
CN107409209A (en) * 2015-03-20 2017-11-28 高通股份有限公司 Down-sampled for Linear Model for Prediction pattern is handled
US20180063527A1 (en) * 2016-08-31 2018-03-01 Qualcomm Incorporated Cross-component filter

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
G. LAROCHE, J. TAQUET, C. GISQUET, AND P. ONNO: "JVET-L0191: On cross-component linear model simplification", 《JOINT VIDEO EXPERTS TEAM (JVET) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, 12TH MEETING: MACAO, CN, OCT. 2018》 *
MEI GUO, XUN GUO, YU-WEN HUANG, SHAWMIN LEI: "JCTVC-F121: Intra Chroma LM Mode with Reduced Line Buffer", 《JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, 6TH MEETING: TORINO, ITALY, JULY 2011》 *

Similar Documents

Publication Publication Date Title
CN105474639B (en) Video coding apparatus, video decoder, video system, method for video coding, video encoding/decoding method and program
CN105794206B (en) For rebuilding the adaptive loop filter method of video
KR20220154068A (en) Method and apparatus for intra picture coding based on template matching
KR20200040773A (en) Method and apparatus for filtering with mode-aware deep learning
KR102632259B1 (en) Video coding using cross-component linear model
US11936890B2 (en) Video coding using intra sub-partition coding mode
WO2021110116A1 (en) Prediction from multiple cross-components
CN115104301A (en) Neural network based intra prediction for video encoding or decoding
CN115176474A (en) Cross-component prediction for multi-parameter models
CN112703732A (en) Local illumination compensation for video encoding and decoding using stored parameters
WO2021115235A1 (en) Cross-component prediction using multiple components
EP4224842A1 (en) Image prediction method, encoder, decoder, and computer storage medium
WO2020106668A1 (en) Quantization for video encoding and decoding
TW202325024A (en) Image processing device and image processing method
CN111083486A (en) Method and device for determining chrominance information of coding unit
CN111225212A (en) Method and device for determining chrominance information of video block
JP2023113882A (en) Predictive decoding method, device, and computer storage medium
CN110944175B (en) Video coding and decoding method and device
JP2024531428A (en) Method and device for decoder-side intra mode derivation - Patents.com
EP3991417A1 (en) Motion vector prediction in video encoding and decoding
CN113615202A (en) Method and apparatus for intra prediction for screen content encoding and decoding
CN113132724A (en) Encoding and decoding method, device and equipment thereof
WO2023193551A9 (en) Method and apparatus for dimd edge detection adjustment, and encoder/decoder including the same
TW202209893A (en) Inter-frame prediction method, coder, decoder and computer storage medium characterized by reducing the prediction errors and promoting the coding performance to increase the coding/decoding efficiency
CN118202651A (en) Prediction method and device based on cross component linear model in video coding and decoding system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200428