US20200204796A1 - Image processing device and image processing method - Google Patents

Image processing device and image processing method

Info

Publication number
US20200204796A1
Authority
US
United States
Prior art keywords
block
prediction
section
pixels
color difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/805,500
Inventor
Kazushi Sato
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Priority to US16/805,500 priority Critical patent/US20200204796A1/en
Assigned to SONY CORPORATION reassignment SONY CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SATO, KAZUSHI
Publication of US20200204796A1 publication Critical patent/US20200204796A1/en
Abandoned legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • H04N19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder

Definitions

  • the present disclosure relates to an image processing device and an image processing method.
  • compression technologies that aim to transmit or accumulate digital images efficiently are widespread; they compress the amount of information of an image by motion compensation and orthogonal transform, such as the discrete cosine transform, by exploiting redundancy unique to the image.
  • an image encoding device and an image decoding device conforming to a standard technology such as H.26x standards developed by ITU-T or MPEG-y standards developed by MPEG (Moving Picture Experts Group) are widely used in various scenes, such as accumulation and distribution of images by a broadcaster and reception and accumulation of images by a general user.
  • the H.26x standards (ITU-T Q6/16 VCEG) are standards developed initially with the aim of performing encoding that is suitable for communications such as video telephones and video conferences.
  • the H.26x standards are known to require a large computation amount for encoding and decoding, but to be capable of realizing a higher compression ratio, compared with the MPEG-y standards.
  • as part of the activities of MPEG-4, a standard called Joint Model of Enhanced-Compression Video Coding was developed to realize a higher compression ratio by adopting new functions while being based on the H.26x standards. This standard was made an international standard under the names of H.264 and MPEG-4 Part 10 (Advanced Video Coding; AVC) in March 2003.
  • Intra prediction is a technique that uses the correlation between adjacent blocks in an image to predict the pixel value of a certain block from the pixel value of another, adjacent block, thereby reducing the amount of information to be encoded.
  • intra prediction is possible for all the pixel values.
  • the intra prediction can be made using a block of, for example, 4 ⁇ 4 pixels, 8 ⁇ 8 pixels, or 16 ⁇ 16 pixels as a processing unit (that is, a prediction unit (PU)).
  • in HEVC, whose standardization is underway as a next-generation image encoding scheme, the size of the prediction unit is to be extended to 32×32 pixels and 64×64 pixels (see Non-Patent Literature 1).
  • the optimum prediction mode to predict a pixel value of a block to be predicted is normally selected from a plurality of prediction modes.
  • the prediction mode is typically distinguished by the prediction direction from a reference pixel to a pixel to be predicted.
  • four prediction modes (average value prediction, horizontal prediction, vertical prediction, and plane prediction) can be selected.
  • an additional prediction mode called a linear model (LM) mode is proposed that predicts the pixel value of a color difference component using a dynamically built linear function of the value of the corresponding luminance component as a prediction function (see Non-Patent Literature 2).
  • in the technique of Non-Patent Literature 2, memory resources needed to build a prediction function in LM mode increase with an increasing number of reference pixels.
  • in HEVC, in which the size of the prediction unit is extended up to 64×64 pixels, a large memory becomes necessary to adopt the LM mode, which could present an obstacle to miniaturization or cost reduction of hardware.
  • an image processing apparatus including a decoding section that decodes a luminance component and a color difference component of a block inside a coding unit in the order of the luminance component and the color difference component in each block.
  • the image processing device mentioned above may be typically realized as an image decoding device that decodes an image.
  • an image processing method including decoding a luminance component and a color difference component of a block inside a coding unit in the order of the luminance component and the color difference component in each block.
  • an image processing apparatus including an encoding section that encodes a luminance component and a color difference component of a block inside a coding unit in the order of the luminance component and the color difference component in each block.
  • the image processing device mentioned above may be typically realized as an image encoding device that encodes an image.
  • an image processing method including encoding a luminance component and a color difference component of a block inside a coding unit in the order of the luminance component and the color difference component in each block.
  • the amount of memory resources needed when an intra prediction is made based on a dynamically built prediction function can be reduced.
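  • as a rough illustration of the two processing orders discussed later (the existing "order by component" and the "order by PU" of the present embodiment), the following Python sketch may help; the names cu.luma_blocks, cu.cb_blocks, cu.cr_blocks and encode are hypothetical placeholders, not part of the disclosure.

```python
# Hypothetical sketch contrasting the two traversal orders discussed in this
# disclosure. The attribute and function names are illustrative only.

def encode_order_by_component(cu, encode):
    # Existing order: all Y blocks of the coding unit, then all Cb, then all Cr.
    for block in cu.luma_blocks:
        encode("Y", block)
    for block in cu.cb_blocks:
        encode("Cb", block)
    for block in cu.cr_blocks:
        encode("Cr", block)

def encode_order_by_pu(cu, encode):
    # Order by PU: for each block position, Y first, then Cb, then Cr.
    # The luminance samples needed by the LM mode are therefore already
    # reconstructed when the co-located color difference block is encoded,
    # and only one block's worth of luminance values needs to be buffered.
    # (A 1:1 correspondence between blocks is assumed here for simplicity.)
    for y_blk, cb_blk, cr_blk in zip(cu.luma_blocks, cu.cb_blocks, cu.cr_blocks):
        encode("Y", y_blk)
        encode("Cb", cb_blk)
        encode("Cr", cr_blk)
```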
  • FIG. 1 is a block diagram showing an example of a configuration of an image encoding device according to an embodiment.
  • FIG. 2 is a block diagram showing an example of a detailed configuration of an intra prediction section of the image encoding device of the embodiment.
  • FIG. 3 is an explanatory view illustrating examples of prediction mode candidates for a luminance component of a prediction unit of 4×4 pixels.
  • FIG. 4 is an explanatory view illustrating prediction directions related to the examples in FIG. 3 .
  • FIG. 5 is an explanatory view illustrating reference pixels related to the examples in FIG. 3 .
  • FIG. 6 is an explanatory view illustrating examples of prediction mode candidates for a luminance component of a prediction unit of 8×8 pixels.
  • FIG. 7 is an explanatory view illustrating examples of prediction mode candidates for a luminance component of a prediction unit of 16×16 pixels.
  • FIG. 8 is an explanatory view illustrating examples of prediction mode candidates for a color difference component.
  • FIG. 9 is an explanatory view illustrating a difference between the order of processing in an existing technique and the order of processing in the present embodiment.
  • FIG. 10A is an explanatory view illustrating a first example of the order of new encoding processing.
  • FIG. 10B is an explanatory view illustrating a second example of the order of new encoding processing.
  • FIG. 11A is a first explanatory view illustrating reference pixels in LM mode.
  • FIG. 11B is a second explanatory view illustrating reference pixels in LM mode.
  • FIG. 12 is an explanatory view showing an example of the definition of a reference ratio in a first scenario.
  • FIG. 13A is an explanatory view showing a first example of the number of reference pixels controlled according to the first scenario.
  • FIG. 13B is an explanatory view showing a second example of the number of reference pixels controlled according to the first scenario.
  • FIG. 14 is an explanatory view showing an example of the definition of a reference ratio in a second scenario.
  • FIG. 15A is an explanatory view showing a first example of the number of reference pixels controlled according to the second scenario.
  • FIG. 15B is an explanatory view showing a second example of the number of reference pixels controlled according to the second scenario.
  • FIG. 15C is an explanatory view showing a third example of the number of reference pixels controlled according to the second scenario.
  • FIG. 15D is an explanatory view showing a fourth example of the number of reference pixels controlled according to the second scenario.
  • FIG. 16 is an explanatory view showing an example of the definition of a reference ratio in a third scenario.
  • FIG. 17A is an explanatory view showing a first example of the number of reference pixels controlled according to the third scenario.
  • FIG. 17B is an explanatory view showing a second example of the number of reference pixels controlled according to the third scenario.
  • FIG. 18 is an explanatory view showing an example of the definition of a reference ratio in a fourth scenario.
  • FIG. 19A is an explanatory view showing a first example of the number of reference pixels controlled according to the fourth scenario.
  • FIG. 19B is an explanatory view showing a second example of the number of reference pixels controlled according to the fourth scenario.
  • FIG. 20A is an explanatory view showing a first example of the number of reference pixels controlled according to a fifth scenario.
  • FIG. 20B is an explanatory view showing a second example of the number of reference pixels controlled according to the fifth scenario.
  • FIG. 21 is a flow chart showing an example of the intra prediction process at the time of encoding according to the embodiment.
  • FIG. 22 is a flow chart showing an example of a detailed flow of LM mode prediction processing in FIG. 21 .
  • FIG. 23 is a block diagram showing an example of a detailed configuration of the image decoding device according to the embodiment.
  • FIG. 24 is a block diagram showing an example of the detailed configuration of the intra prediction section of the image decoding device according to the embodiment.
  • FIG. 25 is a flow chart showing an example of a flow of an intra prediction process at the time of decoding according to an embodiment.
  • FIG. 26 is an explanatory view illustrating an example of thinning processing according to a modification.
  • FIG. 27A is a first explanatory view illustrating an example of thinning processing different from the example of FIG. 26 .
  • FIG. 27B is a second explanatory view illustrating an example of thinning processing different from the example of FIG. 26 .
  • FIG. 27C is a third explanatory view illustrating an example of thinning processing different from the example of FIG. 26 .
  • FIG. 28A is an explanatory view illustrating a first example of correspondence between thinning positions of reference pixels and thinning positions of luminance components.
  • FIG. 28B is an explanatory view illustrating a second example of correspondence between thinning positions of reference pixels and thinning positions of luminance components.
  • FIG. 29 is a block diagram showing an example of a schematic configuration of a television.
  • FIG. 30 is a block diagram showing an example of a schematic configuration of a mobile phone.
  • FIG. 31 is a block diagram showing an example of a schematic configuration of a recording/reproduction device.
  • FIG. 32 is a block diagram showing an example of a schematic configuration of an image capturing device.
  • FIG. 1 is a block diagram showing an example of a configuration of an image encoding device 10 according to an embodiment.
  • the image encoding device 10 includes an A/D (Analogue to Digital) conversion section 11 , a sorting buffer 12 , a subtraction section 13 , an orthogonal transform section 14 , a quantization section 15 , a lossless encoding section 16 , an accumulation buffer 17 , a rate control section 18 , an inverse quantization section 21 , an inverse orthogonal transform section 22 , an addition section 23 , a deblocking filter 24 , a frame memory 25 , selectors 26 and 27 , a motion estimation section 30 and an intra prediction section 40 .
  • the A/D conversion section 11 converts an image signal input in an analogue format into image data in a digital format, and outputs a series of digital image data to the sorting buffer 12 .
  • the sorting buffer 12 sorts the images included in the series of image data input from the A/D conversion section 11 . After sorting the images according to a GOP (Group of Pictures) structure relating to the encoding process, the sorting buffer 12 outputs the sorted image data to the subtraction section 13 , the motion estimation section 30 and the intra prediction section 40 .
  • the image data input from the sorting buffer 12 and predicted image data input from the motion estimation section 30 or the intra prediction section 40 described later are supplied to the subtraction section 13 .
  • the subtraction section 13 calculates predicted error data which is a difference between the image data input from the sorting buffer 12 and the predicted image data and outputs the calculated predicted error data to the orthogonal transform section 14 .
  • the orthogonal transform section 14 performs orthogonal transform on the predicted error data input from the subtraction section 13 .
  • the orthogonal transform to be performed by the orthogonal transform section 14 may be discrete cosine transform (DCT) or Karhunen-Loeve transform, for example.
  • the orthogonal transform section 14 outputs transform coefficient data acquired by the orthogonal transform process to the quantization section 15 .
  • the transform coefficient data input from the orthogonal transform section 14 and a rate control signal from the rate control section 18 described later are supplied to the quantization section 15 .
  • the quantization section 15 quantizes the transform coefficient data, and outputs the transform coefficient data which has been quantized (hereinafter, referred to as quantized data) to the lossless encoding section 16 and the inverse quantization section 21 . Also, the quantization section 15 switches a quantization parameter (a quantization scale) based on the rate control signal from the rate control section 18 to thereby change the bit rate of the quantized data to be input to the lossless encoding section 16 .
  • the lossless encoding section 16 generates an encoded stream by performing lossless encoding processing on quantized data input from the quantization section 15 .
  • Lossless encoding by the lossless encoding section 16 may be, for example, variable-length encoding or arithmetic encoding.
  • the lossless encoding section 16 multiplexes information on an intra prediction or information on an inter prediction input from the selector 27 into a header region of the encoded stream. Then, the lossless encoding section 16 outputs the generated encoded stream to the accumulation buffer 17 .
  • in an intra prediction, one coding unit contains one or more prediction units for the luminance component (Y) and one or more prediction units for each of the color difference components (Cb, Cr).
  • in an existing technique, quantized data of these prediction units is encoded in the order by component. That is, after data of the luminance component (Y) inside one CU is encoded, data of the color difference component (Cb) is encoded and then, data of the color difference component (Cr) is encoded.
  • in the present embodiment, by contrast, the lossless encoding section 16 encodes quantized data in the order by PU in the coding unit for which an intra prediction is made to generate an encoded stream. The order of encoding will further be described later.
  • the accumulation buffer 17 temporarily stores an encoded stream input from the lossless encoding section 16 using a storage medium such as a semiconductor memory. Then, the accumulation buffer 17 outputs accumulated encoded streams to a transmission section (not shown) (for example, a communication interface or a connection interface to a peripheral device) at a rate in accordance with the band of the transmission line.
  • the rate control section 18 monitors the free space of the accumulation buffer 17 . Then, the rate control section 18 generates a rate control signal according to the free space on the accumulation buffer 17 , and outputs the generated rate control signal to the quantization section 15 . For example, when there is not much free space on the accumulation buffer 17 , the rate control section 18 generates a rate control signal for lowering the bit rate of the quantized data. Also, for example, when the free space on the accumulation buffer 17 is sufficiently large, the rate control section 18 generates a rate control signal for increasing the bit rate of the quantized data.
  • the inverse quantization section 21 performs an inverse quantization process on the quantized data input from the quantization section 15 . Then, the inverse quantization section 21 outputs transform coefficient data acquired by the inverse quantization process to the inverse orthogonal transform section 22 .
  • the inverse orthogonal transform section 22 performs an inverse orthogonal transform process on the transform coefficient data input from the inverse quantization section 21 to thereby restore the predicted error data. Then, the inverse orthogonal transform section 22 outputs the restored predicted error data to the addition section 23 .
  • the addition section 23 adds the restored predicted error data input from the inverse orthogonal transform section 22 and the predicted image data input from the motion estimation section 30 or the intra prediction section 40 to thereby generate decoded image data. Then, the addition section 23 outputs the generated decoded image data to the deblocking filter 24 and the frame memory 25 .
  • the deblocking filter 24 performs a filtering process for reducing block distortion occurring at the time of encoding of an image.
  • the deblocking filter 24 filters the decoded image data input from the addition section 23 to remove the block distortion, and outputs the decoded image data after filtering to the frame memory 25 .
  • the frame memory 25 stores, using a storage medium, the decoded image data input from the addition section 23 and the decoded image data after filtering input from the deblocking filter 24 .
  • the selector 26 reads the decoded image data after filtering which is to be used for inter prediction from the frame memory 25 , and supplies the decoded image data which has been read to the motion estimation section 30 as reference image data. Also, the selector 26 reads the decoded image data before filtering which is to be used for intra prediction from the frame memory 25 , and supplies the decoded image data which has been read to the intra prediction section 40 as reference image data.
  • in the inter prediction mode, the selector 27 outputs predicted image data which is a result of inter prediction output from the motion estimation section 30 to the subtraction section 13 , and also outputs the information about inter prediction to the lossless encoding section 16 . Furthermore, in the intra prediction mode, the selector 27 outputs predicted image data which is a result of intra prediction output from the intra prediction section 40 to the subtraction section 13 , and also outputs the information about intra prediction to the lossless encoding section 16 . The selector 27 switches between the inter prediction mode and the intra prediction mode depending on the size of a cost function value output from the motion estimation section 30 or the intra prediction section 40 .
  • the motion estimation section 30 performs inter prediction processing (inter-frame prediction processing) based on image data (original image data) to be encoded and input from the sorting buffer 12 and decoded image data supplied via the selector 26 . For example, the motion estimation section 30 evaluates prediction results by each prediction mode using a predetermined cost function. Next, the motion estimation section 30 selects the prediction mode that produces the minimum cost function value, that is, the prediction mode that produces the highest compression ratio, as the optimum prediction mode. Also, the motion estimation section 30 generates predicted image data according to the optimum prediction mode. Then, the motion estimation section 30 outputs prediction mode information indicating the selected optimum prediction mode, information on inter predictions including motion vector information and reference image information, the cost function value, and predicted image data to the selector 27 .
  • the intra prediction section 40 performs intra prediction processing on each block set inside an image based on original image data input from the sorting buffer 12 and decoded image data as reference image data supplied from the frame memory 25 . Then, the intra prediction section 40 outputs information on intra predictions, including prediction mode information indicating the optimum prediction mode and size related information, the cost function value, and predicted image data to the selector 27 .
  • Prediction modes that can be selected by the intra prediction section 40 include, in addition to existing intra prediction modes, a linear model (LM) mode about the color difference component.
  • the LM mode in the present embodiment is characterized in that the ratio of the number of reference pixels to the block size can change. The intra prediction processing by the intra prediction section 40 will be described later in detail.
  • FIG. 2 is a block diagram showing an example of a detailed configuration of the intra prediction section 40 of an image encoding device 10 shown in FIG. 1 .
  • the intra prediction section 40 includes a prediction controller 42 , a coefficient calculation section 44 , a prediction section 46 , and a mode determination section 48 .
  • the prediction controller 42 controls intra prediction processing by the intra prediction section 40 .
  • for example, the prediction controller 42 performs intra prediction processing of the luminance component (Y) and then performs intra prediction processing of the color difference components (Cb, Cr) in a certain processing unit.
  • the prediction controller 42 causes the prediction section 46 to generate a predicted pixel value of each pixel in a plurality of prediction modes and causes the mode determination section 48 to determine the optimum prediction mode of the luminance component.
  • as a result, the arrangement of prediction units in the coding unit is also decided.
  • the prediction controller 42 causes the prediction section 46 to generate a predicted pixel value of each pixel in a plurality of prediction modes for each prediction unit and causes the mode determination section 48 to determine the optimum prediction mode of the color difference components.
  • Prediction mode candidates for the luminance component may be a prediction mode adopted by an existing image encoding scheme such as H.264/AVC or a different prediction mode.
  • Prediction mode candidates for the color difference component may also contain a prediction mode adopted by an existing image encoding scheme.
  • prediction mode candidates for the color difference component contain the above-mentioned LM mode.
  • in the present embodiment, the LM mode is added as a prediction mode candidate for the prediction unit of the color difference component (that is, to the search range of the optimum prediction mode) only when a predetermined condition is satisfied.
  • the predetermined condition may be, for example, that “the size of the prediction unit of the color difference component is equal to or less than a size determined from the size of the prediction unit of the corresponding luminance component in accordance with the chroma-format”.
  • here, the “corresponding prediction unit” is a prediction unit sharing at least some pixels.
  • the size of the prediction unit of the color difference component may be decided without being dependent on the size of the prediction unit of the luminance component. Therefore, regarding a prediction unit of a certain color difference component, one or a plurality of corresponding prediction units of the luminance component may be present.
  • “a size determined from the size of the prediction unit of the corresponding luminance component in accordance with the chroma-format” in the above condition is, when the size of the prediction unit of the luminance component is M×N pixels, (M/2)×(N/2) pixels if the chroma-format is 4:2:0, (M/2)×N pixels if the chroma-format is 4:2:2, and M×N pixels if the chroma-format is 4:4:4.
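  • the size mapping stated in the above condition can be written compactly as follows; chroma_block_size is an illustrative helper name, and the mapping itself is taken directly from the text above.

```python
# Minimal sketch of the size mapping stated above: given a luminance
# prediction unit of M x N pixels, return the size of the corresponding
# color difference block determined by the chroma-format.

def chroma_block_size(m, n, chroma_format):
    if chroma_format == "4:2:0":
        return (m // 2, n // 2)   # halved in both directions
    if chroma_format == "4:2:2":
        return (m // 2, n)        # halved in the horizontal direction only
    if chroma_format == "4:4:4":
        return (m, n)             # same resolution as the luminance component
    raise ValueError("unsupported chroma-format")

assert chroma_block_size(16, 16, "4:2:0") == (8, 8)
assert chroma_block_size(16, 16, "4:2:2") == (8, 16)
```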
  • further, in the present embodiment, the prediction controller 42 variably controls the ratio of the number of reference pixels to the block size.
  • the block size here is in principle a size of the prediction unit.
  • the above-mentioned ratio controlled by the prediction controller 42 , that is, the ratio of the number of reference pixels to the block size, will herein be called a “reference ratio”.
  • the control of the reference ratio by the prediction controller 42 is typically performed in accordance with the block size.
  • the prediction controller 42 may control the reference ratio in accordance with a chroma-format affecting the block size of the color difference component.
  • the prediction controller 42 may also control the reference ratio in accordance with parameters (for example, a profile or level) defining capabilities of a device for encoding and decoding images. A plurality of scenarios of controlling the reference ratio by the prediction controller 42 will be described later in more detail.
  • the coefficient calculation section 44 calculates coefficients of a prediction function used by the prediction section 46 in LM mode by referring to pixels around the prediction unit to which the pixel to be predicted belongs, that is, reference pixels.
  • the prediction function used by the prediction section 46 is typically a linear function of the value of the luminance component.
  • the number of reference pixels referenced by the coefficient calculation section 44 to calculate coefficients of a prediction function is controlled by, as described above, the prediction controller 42 .
  • the prediction section 46 predicts the pixel value of the luminance component and the pixel value of the color difference component of a pixel to be predicted according to various prediction mode candidates under the control of the prediction controller 42 . Examples of prediction mode candidates used by the prediction section 46 will be described later in more detail. Predicted image data generated as a result of prediction by the prediction section 46 is output to the mode determination section 48 for each prediction mode.
  • the mode determination section 48 calculates the cost function value of each prediction mode based on original image data input from the reordering buffer 12 and predicted image data input from the prediction section 46 . Then, based on the calculated cost function value, the mode determination section 48 decides the optimum prediction mode for the luminance component and the arrangement of prediction units inside the coding unit. Similarly, based on the cost function value of the color difference component, the mode determination section 48 decides the optimum prediction mode for the color difference component and the arrangement of prediction units. Then, the mode determination section 48 outputs information on intra predictions including prediction mode information indicating the decided optimum prediction mode and size related information, the cost function value, and predicted image data including predicted pixel values of the luminance component and the color difference component to the selector 27 .
  • the size related information output from the mode determination section 48 may contain, in addition to information to identify the size of each prediction unit, information specifying the chroma-format.
  • prediction mode candidates that can be used by the prediction section 46 of the intra prediction section 40 will be described.
  • Prediction mode candidates for the luminance component may be a prediction mode adopted by an existing image encoding scheme such as H.264/AVC.
  • FIGS. 3 to 5 are explanatory views illustrating such prediction mode candidates when the size of the prediction unit is 4×4 pixels.
  • in FIG. 3 , nine prediction modes (Mode 0 to Mode 8) that can be used for the prediction unit of 4×4 pixels are shown.
  • in FIG. 4 , the prediction direction corresponding to each mode number is schematically shown.
  • lower-case alphabetic characters a to p represent the pixel value of each pixel (that is, each pixel to be predicted) in the prediction unit of 4×4 pixels.
  • the prediction direction in Mode 0 is vertical, and each predicted pixel value is calculated as below:
  • the prediction direction in Mode 1 is horizontal, and each predicted pixel value is calculated as below:
  • Mode 2 represents the DC prediction (average value prediction), and each predicted pixel value is calculated according to one of four formulas depending on which reference pixels are available, as sketched below.
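  • as an illustration of the DC prediction (Mode 2) just described, the following sketch follows the familiar H.264/AVC rule for a 4×4 block; the function name and the mid-range fallback value of 128 (for 8-bit samples) are assumptions, since the four formulas themselves are not reproduced in the text above.

```python
def dc_prediction_4x4(top, left):
    """DC (average value) prediction for a 4x4 block.

    `top` is the list of up to four reference pixels above the block and
    `left` the list of up to four reference pixels to its left; either may
    be None when unavailable, which selects one of the four formulas
    mentioned above. A mid-range value (128 for 8-bit samples) is assumed
    when neither side is available.
    """
    if top is not None and left is not None:
        dc = (sum(top) + sum(left) + 4) >> 3
    elif left is not None:
        dc = (sum(left) + 2) >> 2
    elif top is not None:
        dc = (sum(top) + 2) >> 2
    else:
        dc = 128
    # Every pixel a..p of the 4x4 block receives the same DC value.
    return [[dc] * 4 for _ in range(4)]
```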
  • the prediction direction in Mode 3 is diagonal down left, and each predicted pixel value is calculated as below:
  • the prediction direction in Mode 4 is diagonal down right, and each predicted pixel value is calculated as below:
  • the prediction direction in Mode 5 is vertical right, and each predicted pixel value is calculated as below:
  • the prediction direction in Mode 6 is horizontal down, and each predicted pixel value is calculated as below:
  • n = (Rj + 2Rk + Rl + 2) >> 2
  • the prediction direction in Mode 7 is vertical left, and each predicted pixel value is calculated as below:
  • the prediction direction in Mode 8 is horizontal up, and each predicted pixel value is calculated as below:
  • the prediction direction in Mode 0 is vertical.
  • the prediction direction in Mode 1 is horizontal.
  • Mode 2 represents the DC prediction (average value prediction).
  • the prediction direction in Mode 3 is DIAGONAL_DOWN_LEFT.
  • the prediction direction in Mode 4 is DIAGONAL_DOWN_RIGHT.
  • the prediction direction in Mode 5 is VERTICAL_RIGHT.
  • the prediction direction in Mode 6 is HORIZONTAL_DOWN.
  • the prediction direction in Mode 7 is VERTICAL_LEFT.
  • the prediction direction in Mode 8 is HORIZONTAL_UP.
  • in FIG. 7 , four prediction modes (Mode 0 to Mode 3) that can be used for the prediction unit of 16×16 pixels are shown.
  • the prediction direction in Mode 0 is vertical.
  • the prediction direction in Mode 1 is horizontal.
  • Mode 2 represents the DC prediction (average value prediction).
  • Mode 3 represents the plane prediction.
  • the prediction mode for the color difference component can be selected independently of the prediction mode for the luminance component.
  • in FIG. 8 , among prediction mode candidates that can be used when the block size of the color difference component is 8×8 pixels, four prediction modes (Mode 0 to Mode 3) adopted in existing image encoding schemes such as H.264/AVC are shown.
  • Mode 0 represents the DC prediction (average value prediction).
  • the predicted pixel value of the pixel position (x, y) is represented as Pr C (x, y),
  • eight left reference pixel values are represented as Re C (−1, n), and
  • eight upper reference pixel values are represented as Re C (n, −1).
  • C as a subscript means the color difference component.
  • n is an integer equal to 0 or more and equal to 7 or less.
  • the predicted pixel value Pr C (x, y) is calculated according to one of the following three formulas depending on which reference pixels are available:
  • the prediction direction in Mode 1 is horizontal, and the predicted pixel value Pr C (x, y) is calculated as below:
  • Pr C [ x, y ] = Re C [ −1, y ] [Math 2]
  • the prediction direction in Mode 2 is vertical, and the predicted pixel value Pr C (x, y) is calculated as below:
  • Pr C [ x, y ] = Re C [ x, −1] [Math 3]
  • Mode 3 represents the plane prediction.
  • the predicted pixel value Pr C (x, y) is calculated as below:
  • the LM mode (as Mode 4, for example) that will be described in the next section can be selected.
  • the predicted pixel value for the color difference component is calculated by using a linear function of the value of the corresponding luminance component.
  • the prediction function used in LM mode may be the following linear function described in Non-Patent Literature 2:
  • Pr C [ x, y ] = α·Re L ′[ x, y ] + β (1)
  • Re L ′(x, y) represents the value of the pixel position (x, y) after resampling of the luminance components of a decoded image (so-called reconstructed image).
  • as the decoded image, a reconstructed image is used at the time of image encoding.
  • the luminance components are resampled when the resolution of the color difference component is different from the resolution of the luminance component depending on the chroma-format. If, for example, the chroma-format is 4:2:0, the luminance components are resampled according to the following formula in such a way that the number of pixels is reduced by half in both the horizontal direction and the vertical direction.
  • Re L (u, v) represents the value of the luminance component in the pixel position (u, v) before resampling.
  • if the chroma-format is 4:2:2, the luminance components are resampled in such a way that the number of pixels is reduced by half in the horizontal direction. If the chroma-format is 4:4:4, the luminance components are not resampled.
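  • the 4:2:0 resampling described above can be sketched as follows; the exact filter of the resampling formula is not reproduced in the text, so the averaging of two vertically adjacent samples in every second column, following the LM-mode proposal of Non-Patent Literature 2, should be read as an assumption.

```python
import numpy as np

def resample_luma_420(luma):
    """Resample luminance so the pixel count halves in both directions.

    A minimal sketch: output (x, y) is the average of the samples at
    (2x, 2y) and (2x, 2y + 1), i.e. two vertically adjacent samples of
    every second column. The precise filter is an assumption here.
    """
    luma = np.asarray(luma, dtype=np.int64)
    return (luma[0::2, 0::2] + luma[1::2, 0::2]) >> 1

luma = np.arange(64).reshape(8, 8)
print(resample_luma_420(luma).shape)  # (4, 4)
```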
  • the coefficient α in Formula (1) is calculated according to the following formula (3). Also, the coefficient β in Formula (1) is calculated according to the following formula (4).
  • Re L ′(x, y) on the right-hand side of Formula (1) represents the value of the image position (x, y) after resampling of luminance components of a decoded image (a reconstructed image by an encoder or a decoded image by a decoder).
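  • formulas (3) and (4) themselves are not reproduced in the text above; the closed-form least-squares fit below, which is the usual form for the LM mode of Non-Patent Literature 2, is therefore offered as a sketch under that assumption.

```python
def lm_coefficients(ref_c, ref_l):
    """Least-squares fit of Pr_C[x, y] = alpha * Re_L'[x, y] + beta.

    `ref_c` holds the I reference pixel values of the color difference
    component and `ref_l` the co-located resampled luminance values.
    This is a sketch of formulas (3) and (4), whose bodies are not
    reproduced in the text above.
    """
    i = len(ref_c)
    sum_l = sum(ref_l)
    sum_c = sum(ref_c)
    sum_ll = sum(l * l for l in ref_l)
    sum_lc = sum(l * c for l, c in zip(ref_l, ref_c))
    denom = i * sum_ll - sum_l * sum_l
    alpha = (i * sum_lc - sum_l * sum_c) / denom if denom else 0.0
    beta = (sum_c - alpha * sum_l) / i
    return alpha, beta
```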
  • FIG. 9 is an explanatory view illustrating a difference between the order of processing in an existing technique and the order of processing in the present embodiment.
  • in FIG. 9 , LCU 1 , which includes coding units CU 0 , CU 1 , CU 2 and other coding units, is shown.
  • the coding unit CU 0 is divided into four prediction units PU 00 , PU 01 , PU 02 , PU 03 .
  • the coding unit CU 1 is divided into four prediction units PU 10 , PU 11 , PU 12 , PU 13 .
  • the coding unit CU 2 is divided into four prediction units PU 20 , PU 21 , PU 22 , PU 23 .
  • prediction units of all color difference components have sizes determined from sizes of prediction units of corresponding luminance components in accordance with the chroma-format. That is, there is a one-to-one correspondence between the prediction unit of the luminance component and the prediction unit of the color difference component.
  • encoding processing of image data for the LCU is performed in the order of Y 00 → Y 01 → Y 02 → Y 03 → Cb 00 → Cb 01 → Cb 02 → Cb 03 → Cr 00 → Cr 01 → Cr 02 → Cr 03 → Y 10 → . . .
  • Y NN represents encoding processing of the luminance component of the prediction unit PU NN
  • Cb NN and Cr NN each represent encoding processing of a color difference component of the prediction unit PU NN .
  • this also applies to intra prediction processing. That is, according to the existing technique, encoding processing is performed for each component in each coding unit. Such an order of processing is herein called an “order by component”.
  • encoding processing of image data is performed for each prediction unit in each coding unit.
  • Such an order of processing is herein called an “order by PU”.
  • in the order by PU, encoding processing of the luminance component Y 00 and the two color difference components Cb 00 , Cr 00 of the prediction unit PU 00 is first performed.
  • next, encoding processing of the luminance component Y 01 and the two color difference components Cb 01 , Cr 01 of the prediction unit PU 01 is performed.
  • thereafter, encoding processing of three components is repeated in the order of the prediction units PU 02 , PU 03 , PU 10 , . . . .
  • in the order by component described above, the required amount of memory resources to hold luminance component values referenced for intra prediction in LM mode is affected by the maximum size of the coding unit. If, for example, the maximum size of the coding unit is 128×128 pixels, the chroma-format is 4:2:0, and the bit depth is 10 bits, memory resources of 64×64×10 bits may be consumed. In the order by PU described above, on the other hand, the required amount of memory resources is affected by the maximum size of the prediction unit of the color difference component. In HEVC, the maximum size of the prediction unit of the luminance component is 64×64 pixels. Thus, when the order by PU described above is adopted, the required amount of memory resources is at most 32×32×10 bits if the chroma-format is 4:2:0 and the bit depth is 10 bits.
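  • the buffer sizes quoted above can be reproduced with a little arithmetic; the helper below is illustrative and idealizes the packing of 10-bit samples.

```python
def lm_luma_buffer_bits(block_size, chroma_format, bit_depth):
    """Bits needed to hold the resampled luminance values referenced in
    LM mode for one square block of `block_size` luminance pixels per side."""
    if chroma_format == "4:2:0":
        w, h = block_size // 2, block_size // 2
    elif chroma_format == "4:2:2":
        w, h = block_size // 2, block_size
    else:  # 4:4:4
        w, h = block_size, block_size
    return w * h * bit_depth

# Order by component: governed by the maximum coding unit size.
print(lm_luma_buffer_bits(128, "4:2:0", 10))  # 64*64*10 = 40960 bits
# Order by PU: governed by the maximum prediction unit size (64x64 in HEVC).
print(lm_luma_buffer_bits(64, "4:2:0", 10))   # 32*32*10 = 10240 bits
```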
  • the search for the LM mode is limited. That is, only if the size of the prediction unit of the color difference component is equal to or less than a size determined from the size of the prediction unit of the corresponding luminance component in accordance with the chroma-format, the LM mode is added as a prediction mode candidate for the prediction unit of the color difference component.
  • when the LM mode can be selected, the prediction unit of one color difference component corresponds to the prediction unit of only one luminance component. Accordingly, when the order by PU is adopted, buffered data of the prediction unit of a first luminance component can in principle be cleared when processing moves from the prediction unit of the first luminance component to the prediction unit of a second luminance component.
  • incidentally, when a plurality of prediction units of the luminance component corresponds to the prediction unit of one color difference component, the correlation of images between the plurality of prediction units is generally low. If a prediction function is decided based on values of the luminance components extending over a plurality of prediction units that are hardly correlated to each other, the prediction function is considered to be unable to model the correlation between the luminance component and the color difference component with adequate precision.
  • if, for example, the size of the prediction unit of the luminance component is 4×4 pixels and the chroma-format is 4:2:0, the size of the prediction unit of the color difference component determined in accordance with the chroma-format is 2×2 pixels. However, the prediction unit of 2×2 pixels cannot be used in HEVC.
  • the LM mode contributes more to improvement of encoding efficiency with a decreasing size of the prediction unit.
  • the lossless encoding section 16 successively encodes the prediction units of a plurality of luminance components (for example, four prediction units of 4×4 pixels) corresponding to the prediction unit (for example, 4×4 pixels) of one color difference component and then can encode the prediction unit of the color difference component.
  • FIG. 10A is an explanatory view illustrating a first example of the order of new encoding processing adopted in the present embodiment.
  • the left side of FIG. 10A shows the arrangement of prediction units of the luminance component (hereinafter, called luminance prediction units) in the coding unit CU 1 .
  • the right side of FIG. 10A shows the arrangement of prediction units of the color difference component (hereinafter, called color difference prediction units) in the coding unit CU 1 .
  • the size of the coding unit CU 1 is 128×128 pixels and the chroma-format is 4:2:0.
  • the coding unit CU 1 contains seven luminance prediction units PU 10 to PU 16 .
  • the size of the luminance prediction units PU 10 , PU 11 , PU 16 is 64×64 pixels and the size of the luminance prediction units PU 12 to PU 15 is 32×32 pixels.
  • the coding unit CU 1 contains seven color difference prediction units PU 20 to PU 26 .
  • the size of the color difference prediction unit PU 20 is 32×32 pixels and the color difference prediction unit PU 20 corresponds to the luminance prediction unit PU 10 .
  • the size of the color difference prediction units PU 21 to PU 24 is 16×16 pixels and the color difference prediction units PU 21 to PU 24 correspond to the luminance prediction unit PU 11 .
  • the size of the color difference prediction unit PU 25 is 32×32 pixels and the color difference prediction unit PU 25 corresponds to the luminance prediction units PU 12 to PU 15 .
  • the size of the color difference prediction unit PU 26 is 32×32 pixels and the color difference prediction unit PU 26 corresponds to the luminance prediction unit PU 16 .
  • the size (32×32 pixels) of the color difference prediction unit PU 25 is larger than the size (16×16 pixels) determined from the size (32×32 pixels) of the corresponding luminance prediction units PU 12 to PU 15 in accordance with the chroma-format. Therefore, regarding the color difference prediction unit PU 25 , the LM mode is excluded from prediction mode candidates. On the other hand, the LM mode remains a prediction mode candidate for the other color difference prediction units.
  • the encoding processing is performed in the order of Y 10 → Cb 20 → Cr 20 → Y 11 → Cb 21 → Cr 21 → Cb 22 → Cr 22 → Cb 23 → Cr 23 → Cb 24 → Cr 24 → Y 12 → Y 13 → Y 14 → Y 15 → Cb 25 → Cr 25 → Y 16 → Cb 26 → Cr 26 . That is, for example, after the luminance prediction unit PU 10 is encoded, the color difference prediction unit PU 20 is encoded before the luminance prediction unit PU 11 is encoded.
  • similarly, the color difference prediction units PU 21 to PU 24 are encoded before the luminance prediction unit PU 12 is encoded. Therefore, even if the buffer size is 32×32 pixels, the value after resampling of the luminance component of the corresponding luminance prediction unit is not yet cleared at the time of intra prediction of each color difference prediction unit and thus, the LM mode can be used for each color difference prediction unit.
  • FIG. 10B is an explanatory view illustrating a second example of the order of new encoding processing adopted in the present embodiment.
  • the left side of FIG. 10B shows the arrangement of luminance prediction units in the coding unit CU 2 .
  • the right side of FIG. 10B shows the arrangement of color difference prediction units in the coding unit CU 2 .
  • the size of the coding unit CU 2 is 16×16 pixels and the chroma-format is 4:2:0.
  • the coding unit CU 2 contains 10 luminance prediction units PU 30 to PU 39 .
  • the size of the luminance prediction units PU 30 , PU 39 is 8×8 pixels and the size of the luminance prediction units PU 31 to PU 38 is 4×4 pixels.
  • the coding unit CU 2 contains four color difference prediction units PU 40 to PU 43 .
  • the size of the color difference prediction units PU 40 to PU 43 is 4×4 pixels, which is the minimum size that can be used as the color difference prediction unit.
  • the color difference prediction unit PU 40 corresponds to the luminance prediction unit PU 30
  • the color difference prediction unit PU 41 corresponds to the luminance prediction units PU 31 to PU 34
  • the color difference prediction unit PU 42 corresponds to the luminance prediction units PU 35 to PU 38
  • the color difference prediction unit PU 43 corresponds to the luminance prediction unit PU 39 .
  • the size (4×4 pixels) of the color difference prediction units PU 41 , PU 42 is larger than the size (2×2 pixels) determined from the size (4×4 pixels) of the corresponding luminance prediction units PU 31 to PU 34 , PU 35 to PU 38 in accordance with the chroma-format.
  • however, as an exception, the size of the corresponding luminance prediction units is 4×4 pixels and thus, the use of the LM mode is permitted also for the color difference prediction units PU 41 , PU 42 . Therefore, in the example of FIG. 10B , the LM mode is added as an object of search for all the color difference prediction units PU 40 to PU 43 .
  • the encoding processing is performed in the order of Y 30 → Cb 40 → Cr 40 → Y 31 → Y 32 → Y 33 → Y 34 → Cb 41 → Cr 41 → Y 35 → Y 36 → Y 37 → Y 38 → Cb 42 → Cr 42 → Y 39 → Cb 43 → Cr 43 . That is, for example, before the color difference prediction unit PU 41 is encoded, the luminance prediction units PU 31 to PU 34 are encoded. Therefore, an intra prediction can be made in LM mode for the color difference prediction unit PU 41 based on values of the luminance components of the four luminance prediction units PU 31 to PU 34 of 4×4 pixels.
  • similarly, before the color difference prediction unit PU 42 is encoded, the luminance prediction units PU 35 to PU 38 are encoded. Therefore, an intra prediction can be made in LM mode for the color difference prediction unit PU 42 based on values of the luminance components of the four luminance prediction units PU 35 to PU 38 of 4×4 pixels.
  • FIGS. 11A and 11B are explanatory views further illustrating the reference pixels in LM mode.
  • in the example of FIG. 11A , the size of the prediction unit (PU) is 16×16 pixels and the chroma-format is 4:2:0.
  • in this case, the block size of the color difference component is 8×8 pixels.
  • in the example of FIG. 11B , the size of the prediction unit (PU) is 8×8 pixels and the chroma-format is 4:2:0.
  • in this case, the block size of the color difference component is 4×4 pixels.
  • comparison of FIGS. 11A and 11B shows that the ratio of the number of reference pixels to the block size remains unchanged if other conditions such as the chroma-format are the same. That is, while the size of one side of the prediction unit in the example of FIG. 11A is 16 pixels and the number I of reference pixels is 16, the size of one side of the prediction unit in the example of FIG. 11B is eight pixels and the number I of reference pixels is eight. Thus, if the number I of reference pixels increases with an increasing block size, the processing cost needed to calculate the coefficient α and the coefficient β using Formula (3) and Formula (4) respectively also increases. As will be understood by focusing particularly on Formula (3), the number of multiplications of pixel values increases on the order of the square of the number I of reference pixels.
  • the prediction controller 42 variably controls the number of reference pixels when the coefficient calculation section 44 calculates the coefficient α and the coefficient β in LM mode.
  • the prediction controller 42 typically controls the reference ratio as a ratio of the number of reference pixels to the block size so as to decrease with an increasing block size. An increase in processing cost is thereby curbed when the block size increases. When the block size is small to the extent that the processing cost presents no problem, the prediction controller 42 may not change the reference ratio even if the block sizes are different. Five exemplary scenarios of control of the reference ratio will be described below with reference to FIGS. 12 to 20B .
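  • before turning to the scenarios, the basic mechanism can be sketched as follows; the size-to-ratio mapping in this sketch is purely illustrative, and the concrete mappings are those defined in the scenarios below.

```python
def thin_reference_pixels(refs, ratio):
    """Keep every `ratio`-th reference pixel: ratio 1 keeps them all,
    ratio 2 keeps half ("2:1"), ratio 4 keeps a quarter ("4:1")."""
    return refs[::ratio]

def reference_ratio_for_size(pu_size):
    # Illustrative mapping only: larger blocks keep a smaller share of
    # their reference pixels so the cost of computing the coefficients
    # alpha and beta stays roughly flat as the block size grows.
    if pu_size <= 8:
        return 1
    if pu_size <= 16:
        return 2
    return 4

refs = list(range(32))  # e.g. reference pixels around a 32x32 block
print(len(thin_reference_pixels(refs, reference_ratio_for_size(32))))  # 8
```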
  • FIG. 12 is an explanatory view showing an example of the definition of the reference ratio in a first scenario.
  • the reference ratio is “1:1” if the size of the prediction unit (PU) is 4×4 pixels.
  • the reference ratio “1:1” means that, as shown in FIGS. 11A and 11B , all the reference pixels are used.
  • the reference ratio is “2:1” for a larger prediction unit, for example, 16×16 pixels, regardless of whether the chroma-format is 4:2:0, 4:2:2, or 4:4:4.
  • the reference ratio “2:1” means that, as shown in FIGS. 11A and 11B , only half the reference pixels are used. That is, the coefficient calculation section 44 thins out half the reference pixels and uses only the remaining reference pixels when calculating the coefficient α and the coefficient β.
  • FIG. 13A shows an example of reference pixel settings when the PU size is 16×16 pixels and the chroma-format is 4:2:0.
  • every second reference pixel of the color difference component and every second reference pixel of the luminance component are thinned out.
  • the number I C of reference pixels of the color difference component and the number I L of reference pixels of the luminance component are both eight.
  • the reference ratio is “4:1” for a still larger prediction unit, for example, 32×32 pixels.
  • the reference ratio “4:1” means that, as shown in FIGS. 11A and 11B , only one fourth of the reference pixels is used. That is, the coefficient calculation section 44 thins out three fourths of the reference pixels and uses only the remaining reference pixels when calculating the coefficient α and the coefficient β.
  • FIG. 13B shows an example of reference pixel settings when the PU size is 32×32 pixels and the chroma-format is 4:2:0.
  • the number I C of reference pixels of the color difference component and the number I L of reference pixels of the luminance component are both eight.
  • the number I of reference pixels is constant when the block size is 8×8 pixels or more as long as the chroma-format is the same. Therefore, an increase in processing cost is curbed when the block size increases.
  • coefficient calculation processing by the coefficient calculation section 44 can be performed by using a small common circuit or logic. Accordingly, an increase of the circuit scale or logic scale can also be curbed.
  • the degradation of prediction accuracy in LM mode due to an insufficient number of reference pixels can be prevented.
  • a smaller prediction unit is more likely to be set inside an image.
  • the coefficient calculation section 44 also has the role of a thinning section that thins out reference pixels referenced when an intra prediction in LM mode is made, at the reference ratio in accordance with the block size to be predicted. This also applies to a coefficient calculation section 94 of an image decoding device 60 described later.
  • the number of reference pixels may also variably be controlled by deriving one representative value from a plurality of reference pixel values.
  • if, for example, the reference ratio is “4:1”, an average value of the pixel values of four consecutive reference pixels or a median value thereof may be used as a representative value. This also applies to other scenarios described herein. While it is quite easy to implement processing to thin out reference pixels, the prediction accuracy can be improved by using the above representative value.
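  • a sketch of the representative-value alternative just described; the integer averaging here is one possible choice, and a median could be substituted as noted above.

```python
def representative_values(refs, ratio):
    """Instead of thinning, derive one representative value from each group
    of `ratio` consecutive reference pixels; the integer average is used
    here, and a median could be substituted as the text notes."""
    return [sum(refs[i:i + ratio]) // ratio for i in range(0, len(refs), ratio)]

# Eight reference pixels at reference ratio "4:1" yield two values.
print(representative_values([10, 12, 14, 16, 20, 22, 24, 26], 4))  # [13, 23]
```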
  • FIG. 14 is an explanatory view showing an example of the definition of the reference ratio in a second scenario.
  • the prediction controller 42 controls the reference ratio in accordance with the chroma-format, in addition to the size of the prediction unit.
  • the prediction controller 42 separately controls a first reference ratio as a ratio of the number of left reference pixels to the size in the vertical direction and a second reference ratio as a ratio of the number of upper reference pixels to the size in the horizontal direction.
  • if the size of the prediction unit is 4×4 pixels and the chroma-format is 4:4:4, the reference ratio in the vertical direction and the reference ratio in the horizontal direction are both “2:1”.
  • if the size of the prediction unit is 8×8 pixels and the chroma-format is 4:4:4, the reference ratio in the vertical direction and the reference ratio in the horizontal direction are both “2:1”.
  • FIG. 15A shows an example of reference pixel settings when the PU size is 8 ⁇ 8 pixels and the chroma-format is 4:2:0.
  • the number I C of reference pixels of the color difference component and the number I L of reference pixels of the luminance component are both eight.
  • FIG. 15B shows an example of reference pixel settings when the PU size is 8×8 pixels and the chroma-format is 4:2:2.
  • every second reference pixel in the vertical direction of the color difference component and the luminance component is thinned out.
  • the number I C of reference pixels of the color difference component and the number I L of reference pixels of the luminance component are both eight.
  • FIG. 15C shows an example of reference pixel settings when the PU size is 8×8 pixels and the chroma-format is 4:4:4.
  • every second reference pixel in the vertical direction and the horizontal direction of the color difference component and the luminance component is thinned out.
  • the number I C of reference pixels of the color difference component and the number I L of reference pixels of the luminance component are both eight.
  • FIG. 15D shows an example of reference pixel settings when the PU size is 16×16 pixels and the chroma-format is 4:2:0.
  • every second reference pixel in the vertical direction and the horizontal direction of the color difference component and the luminance component is thinned out.
  • the number I C of reference pixels of the color difference component and the number I L of reference pixels of the luminance component are both eight.
  • the prediction controller 42 controls the reference ratio so that the reference ratio decreases with an increasing resolution of the color difference component represented by the chroma-format. An increase in processing cost accompanying an increasing block size of the color difference component is thereby curbed.
  • the prediction controller 42 separately controls the reference ratio in the vertical direction and the reference ratio in the horizontal direction so that the number of reference pixels on the left of the block and the number of reference pixels above the block become equal. Accordingly, the numbers of reference pixels can be made the same in a plurality of cases in which chroma-formats are mutually different.
  • coefficient calculation processing by the coefficient calculation section 44 can be performed by using a common circuit or logic regardless of the chroma-format. Therefore, according to the second scenario, efficient implementation of a circuit or logic is promoted.
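  • A minimal sketch of this equalization, assuming the per-side target reuses the first-scenario ratio (an assumption that is consistent with the examples of FIGS. 15A to 15D; the helper names are hypothetical):

```python
# Hypothetical sketch of the second scenario: per-direction ratios chosen
# so that the left and top reference pixel counts stay equal across
# chroma-formats.
BASE_RATIO = {4: 1, 8: 1, 16: 2, 32: 4}       # first-scenario "N:1" values

def per_direction_ratios(pu_size, chroma_format):
    w, h = {                                   # chroma block width, height
        "4:2:0": (pu_size // 2, pu_size // 2),
        "4:2:2": (pu_size // 2, pu_size),
        "4:4:4": (pu_size, pu_size),
    }[chroma_format]
    target = (pu_size // 2) // BASE_RATIO[pu_size]  # pixels kept per side
    return h // target, w // target            # vertical, horizontal "N:1"

assert per_direction_ratios(8, "4:2:0") == (1, 1)   # FIG. 15A
assert per_direction_ratios(8, "4:2:2") == (2, 1)   # FIG. 15B
assert per_direction_ratios(8, "4:4:4") == (2, 2)   # FIG. 15C
assert per_direction_ratios(16, "4:2:0") == (2, 2)  # FIG. 15D
```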
  • FIG. 16 is an explanatory view showing an example of the definition of the reference ratio in a third scenario.
  • the prediction controller 42 separately controls the first reference ratio as a ratio of the number of left reference pixels to the size in the vertical direction and the second reference ratio as a ratio of the number of upper reference pixels to the size in the horizontal direction.
  • the prediction controller 42 controls the reference ratios so that, for the same size, the reference ratio in the vertical direction is equal to or less than the reference ratio in the horizontal direction.
  • the reference ratios in the vertical direction and the horizontal direction are both “1:1” if the size of the prediction unit is 4×4 pixels.
  • if the size of the prediction unit is 8×8 pixels, the reference ratio in the vertical direction is “2:1” and the reference ratio in the horizontal direction is “1:1”.
  • FIG. 17A shows an example of reference pixel settings when the PU size is 8×8 pixels and the chroma-format is 4:2:0.
  • the number I C of reference pixels of the color difference component and the number I L of reference pixels of the luminance component are both six.
  • if the size of the prediction unit is 16×16 pixels, the reference ratio in the vertical direction is “4:1” and the reference ratio in the horizontal direction is “1:1”.
  • FIG. 17B shows an example of reference pixel settings when the PU size is 16×16 pixels and the chroma-format is 4:2:0.
  • the number I C of reference pixels of the color difference component and the number I L of reference pixels of the luminance component are both 10.
  • if the size of the prediction unit is 32×32 pixels, the reference ratio in the vertical direction is “8:1” and the reference ratio in the horizontal direction is “2:1”.
  • the reference pixel values are stored in a frame memory or line memory in most cases and accessed in units of lines in the horizontal direction. Therefore, if the reference ratio in the vertical direction is made smaller than the reference ratio in the horizontal direction as in the third scenario, the number of memory accesses can be reduced even if the number of reference pixels to be used is the same. Accordingly, coefficient calculation processing by the coefficient calculation section 44 can be performed at high speed. In addition, by preferentially using reference pixels in the line above the block as in the third scenario, the reference pixel values can be acquired in a short time through continuous access to the memory, as the sketch below illustrates.
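  • A toy model of that access-count argument (the one-read-per-horizontal-line memory model and all names are assumptions made for illustration):

```python
# Hypothetical sketch: the top reference row is fetched in a single line
# read, while every kept left reference pixel costs one line read of its
# own.
def line_memory_reads(n_top, n_left, vert_ratio, horiz_ratio):
    top_used = n_top // horiz_ratio   # all arrive with one line read
    left_used = n_left // vert_ratio  # one line read per remaining row
    reads = 1 + left_used
    pixels = top_used + left_used
    return reads, pixels

# 8x8 chroma block (16x16 PU at 4:2:0), 10 reference pixels either way:
print(line_memory_reads(8, 8, 4, 1))  # (3, 10)  third-scenario choice
print(line_memory_reads(8, 8, 1, 4))  # (9, 10)  reversed choice
```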
  • FIG. 18 is an explanatory view showing an example of the definition of the reference ratio in a fourth scenario.
  • the prediction controller 42 controls the reference ratio so that the reference ratio decreases with decreasing capabilities of a device that encodes and decodes images.
  • the profile, the level, or both can be used as parameters representing capabilities of a device.
  • the profile and the level can normally be specified in a sequence parameter set of an encoded stream.
  • capabilities of a device are classified into two categories of “high” and “low”.
  • for the prediction unit of 4×4 pixels, the reference ratio is “1:1” regardless of the capabilities of a device.
  • for larger prediction units, the reference ratio when capabilities are “low” is half the reference ratio when capabilities are “high”.
  • for the prediction unit of 8×8 pixels, while the reference ratio when capabilities are “high” is “1:1”, the reference ratio when capabilities are “low” is “2:1”.
  • for the prediction unit having the size of 16×16 pixels, while the reference ratio when capabilities are “high” is “2:1”, the reference ratio when capabilities are “low” is “4:1”.
  • for the prediction unit having the size of 32×32 pixels, while the reference ratio when capabilities are “high” is “4:1”, the reference ratio when capabilities are “low” is “8:1”.
  • FIG. 19A shows an example of reference pixel settings when the PU size is 16×16 pixels, the chroma-format is 4:2:0, capabilities are “high”, and the reference ratio is “2:1”.
  • half the reference pixels of reference pixels of the color difference component and reference pixels of the luminance component are thinned out.
  • the number I C of reference pixels of the color difference component and the number I L of reference pixels of the luminance component are both eight.
  • FIG. 19B shows an example of reference pixel settings when the PU size is 16×16 pixels, the chroma-format is 4:2:0, capabilities are “low”, and the reference ratio is “4:1”.
  • three fourths of the reference pixels of the color difference component and of the luminance component are thinned out.
  • the number I C of reference pixels of the color difference component and the number I L of reference pixels of the luminance component are both four.
  • the number of reference pixels can further be reduced when the use of a device of lower capabilities is assumed. Accordingly, the processing cost exceeding the processing capacity of a device can be prevented from arising in coefficient calculation processing in LM mode.
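  • A sketch of this capability-dependent control, with the table values read off FIG. 18 (the two-level capability classification and the helper names are assumptions):

```python
# Hypothetical sketch of the fourth scenario: "low" capability devices
# use half as many reference pixels as "high" capability devices.
RATIO_HIGH = {4: 1, 8: 1, 16: 2, 32: 4}      # "N:1" when capability is high

def reference_ratio(pu_size: int, capability: str) -> int:
    ratio = RATIO_HIGH[pu_size]
    if capability == "low" and pu_size > 4:  # 4x4 stays "1:1" regardless
        ratio *= 2
    return ratio

assert reference_ratio(16, "high") == 2      # FIG. 19A: "2:1"
assert reference_ratio(16, "low") == 4       # FIG. 19B: "4:1"
```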
  • Xiaoran Cao (Tsinghua University) et al. propose in “CE6.b1 Report on Short Distance Intra Prediction Method” (JCTVC-E278, March 2011) the short distance intra prediction method, which improves encoding efficiency by using small-sized non-square prediction units.
  • in the short distance intra prediction method, for example, prediction units of various sizes such as 1×4 pixels, 2×8 pixels, 4×16 pixels, 4×1 pixels, 8×2 pixels, and 16×4 pixels can be set in an image. In this case, which of the size in the vertical direction and the size in the horizontal direction of the prediction unit is larger depends on the settings of the prediction unit.
  • the prediction controller 42 therefore dynamically selects, of the reference ratio in the vertical direction and the reference ratio in the horizontal direction, the one corresponding to the direction in which the size is larger, and controls the selected reference ratio.
  • FIG. 20A shows an example of reference pixel settings when the PU size is 2×8 pixels and the chroma-format is 4:2:0.
  • the size in the horizontal direction is larger than the size in the vertical direction and thus, while the reference ratio in the vertical direction is “1:1”, the reference ratio in the horizontal direction is “2:1”.
  • FIG. 20B shows an example of reference pixel settings when the PU size is 16×4 pixels and the chroma-format is 4:2:0.
  • the size in the vertical direction is larger than the size in the horizontal direction and thus, while the reference ratio in the horizontal direction is “1:1”, the reference ratio in the vertical direction is “4:1”.
  • when, as in the fifth scenario, the short distance intra prediction method is used, dynamically selecting and controlling the reference ratio corresponding to the direction in which the size is larger prevents degradation in prediction accuracy, because the number of reference pixels is not reduced in the direction in which that number is already smaller, as sketched below.
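  • A minimal sketch for such non-square PUs, with sizes written as (height, width) to match the FIG. 20 examples (the helper name is hypothetical):

```python
# Hypothetical sketch of the fifth scenario: thin out reference pixels
# only along the longer side of a non-square block.
def non_square_ratios(height, width, ratio):
    if width > height:
        return 1, ratio      # vertical "1:1", horizontal thinned
    if height > width:
        return ratio, 1      # horizontal "1:1", vertical thinned
    return ratio, ratio      # square block: thin both directions

assert non_square_ratios(2, 8, 2) == (1, 2)    # FIG. 20A: 2x8 PU
assert non_square_ratios(16, 4, 4) == (4, 1)   # FIG. 20B: 16x4 PU
```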
  • control of the reference ratio by the prediction controller 42 may be performed according to a mapping between the block size and the reference ratio that is pre-defined in, for example, standard specifications of an image encoding scheme.
  • the bit depth of image data utilized for many uses is 8 bits.
  • a greater bit depth such as 10 bits or 12 bits may be used for image data for some uses.
  • if the bit depth of a pixel value exceeds a predetermined number of bits, the coefficient calculation section 44 may reduce the reference pixel values to the predetermined number of bits before calculating the coefficient α and the coefficient β of a prediction function using the reduced reference pixel values. Accordingly, the coefficient α and the coefficient β can be calculated using a small-sized common circuit or logic regardless of the bit depth.
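  • A sketch of this reduction (the 8-bit target and the names are assumptions for illustration):

```python
# Hypothetical sketch: shift reference pixel values down to a
# predetermined number of bits before fitting alpha and beta, so one
# small circuit or logic can serve any bit depth.
def reduce_bit_depth(ref_pixels, bit_depth, target_bits=8):
    shift = max(0, bit_depth - target_bits)
    return [p >> shift for p in ref_pixels]

# 10-bit reference samples reduced to 8 bits before the coefficient fit:
print(reduce_bit_depth([1023, 512, 4], 10))  # [255, 128, 1]
```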
  • an example in which the prediction controller 42 controls the “reference ratio”, that is, the ratio of the number of reference pixels to the block size, is mainly described here.
  • the concept substantially equivalent to the reference ratio may be expressed by another term, for example, a “reduction ratio” meaning the ratio of reference pixels to be reduced.
  • the “reference ratio” or “reduction ratio” may be expressed by, instead of the above format such as “1:1”, “2:1”, and “4:1”, the percentage format like “100% (0%)”, “50% (50%)”, or “25% (75%)” or the numeric format in the range from 0 to 1.
  • mapping between the block size and the reference ratio (or the reduction ratio) as shown in each scenario may be adaptively selected instead of being defined in advance.
  • information specifying the selected mapping may be transmitted from the encoding side to the decoding side inside the parameter set or the header area of an encoded stream.
  • FIG. 21 is a flow chart showing an example of the flow of intra prediction processing at the time of encoding by the intra prediction section 40 having the configuration as illustrated in FIG. 2 .
  • predicted image data in various prediction modes is first generated by the prediction section 46 for the luminance component of the coding unit to be processed and the optimum prediction mode and the arrangement of prediction units are decided by the mode determination section 48 (step S 100 ).
  • the prediction controller 42 focuses on one of one or more color difference prediction units set inside the coding unit to search for the optimum prediction mode of the color difference component (step S 105 )
  • the prediction controller 42 determines whether the size of the focused color difference PU satisfies the above predetermined condition (step S 110 ).
  • the predetermined condition is typically a condition that the size of the focused color difference PU is equal to or less than a size determined from the size of the corresponding luminance PU in accordance with the chroma-format. If such a condition is satisfied, there is one luminance PU corresponding to the focused color difference PU.
  • the prediction controller 42 also determines whether the size of the luminance PU corresponding to the focused color difference PU is 4×4 pixels (step S 115 ).
  • if the above condition is satisfied, or if the size of the corresponding luminance PU is 4×4 pixels, the prediction processing in LM mode for the focused color difference PU is performed by the coefficient calculation section 44 and the prediction section 46 (step S 120 ). If the above condition is not satisfied and the size of the corresponding luminance PU is 8×8 pixels or more, the prediction processing in LM mode in step S 120 is skipped.
  • intra prediction processing in non-LM mode (for example, Mode 0 to Mode 3 illustrated in FIG. 8 ) is also performed for the focused color difference PU.
  • the mode determination section 48 calculates the cost function of each prediction mode for the focused color difference PU based on original image data and predicted image data (step S 130 ).
  • the processing from step S 105 to step S 130 is repeated for each color difference PU set in the coding unit (step S 135 ). Then, the mode determination section 48 decides the optimum arrangement of color difference PUs in the coding unit and the optimum prediction mode for each color difference PU by mutually comparing cost function values (step S 140 ).
  • FIG. 22 is a flow chart showing an example of a detailed flow of LM mode prediction processing in step S 120 of FIG. 21 .
  • the prediction controller 42 first acquires the reference ratio for each prediction unit in accordance with the size of the prediction unit and other parameters (for example, the chroma-format, profile, or level) (step S 121 ).
  • the coefficient calculation section 44 sets reference pixels to be referenced by the calculation formula (for example, the above Formula (3) and Formula (4)) to calculate coefficients of a prediction function according to the reference ratio instructed by the prediction controller 42 (step S 122 ).
  • the number of reference pixels set here can be reduced in accordance with the reference ratio.
  • the luminance components of reference pixels can be resampled depending on the chroma-format.
  • the coefficient calculation section 44 calculates the coefficient α of a prediction function using the pixel values of the set reference pixels according to, for example, the above Formula (3) (step S 123 ). Further, the coefficient calculation section 44 calculates the coefficient β of a prediction function using the pixel values of the set reference pixels according to, for example, the above Formula (4) (step S 124 ).
  • the prediction section 46 calculates the predicted pixel value of each pixel to be predicted by substituting the value of the corresponding luminance component into a prediction function (for example, the above Formula (1)) built by using the coefficient α and the coefficient β (step S 125 ).
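  • The following sketch ties steps S 123 to S 125 together, assuming Formula (2) takes the common LM mode form that averages two vertically adjacent reconstructed luminance samples for 4:2:0, and Formula (1) the linear model Pr_C(x, y) = α·L′(x, y) + β; lm_coefficients is the illustrative helper sketched earlier, and the other names are equally hypothetical:

```python
# Hypothetical sketch of resampling (assumed form of Formula (2)) and
# prediction (assumed form of Formula (1)) for the 4:2:0 chroma-format.
def resample_luma_420(luma):
    return [[(luma[2 * y][2 * x] + luma[2 * y + 1][2 * x] + 1) >> 1
             for x in range(len(luma[0]) // 2)]
            for y in range(len(luma) // 2)]

def predict_chroma(resampled_luma, alpha, beta, max_val=255):
    def clip(v):
        return min(max_val, max(0, int(round(v))))
    return [[clip(alpha * l + beta) for l in row] for row in resampled_luma]
```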
  • next, an example configuration of an image decoding device according to an embodiment will be described using FIGS. 23 and 24 .
  • FIG. 23 is a block diagram showing an example of a configuration of an image decoding device 60 according to an embodiment.
  • the image decoding device 60 includes an accumulation buffer 61 , a lossless decoding section 62 , an inverse quantization section 63 , an inverse orthogonal transform section 64 , an addition section 65 , a deblocking filter 66 , a sorting buffer 67 , a D/A (Digital to Analogue) conversion section 68 , a frame memory 69 , selectors 70 and 71 , a motion compensation section 80 and an intra prediction section 90 .
  • the accumulation buffer 61 temporarily stores an encoded stream input via a transmission line using a storage medium.
  • the lossless decoding section 62 decodes an encoded stream input from the accumulation buffer 61 according to the encoding method used at the time of encoding. Also, the lossless decoding section 62 decodes information multiplexed to the header region of the encoded stream. Information that is multiplexed to the header region of the encoded stream may include the profile and the level in a sequence parameter set, and the information about inter prediction and information about intra prediction in the block header, for example. The lossless decoding section 62 outputs the information about inter prediction to the motion compensation section 80 . Also, the lossless decoding section 62 outputs the information about intra prediction to the intra prediction section 90 .
  • the lossless decoding section 62 decodes quantized data from an encoded stream in the above order by PU for the coding unit for which an intra prediction is made. That is, for example, the lossless decoding section 62 decodes, in this order: the luminance component of a first luminance prediction unit in one coding unit; the color difference component of a first color difference prediction unit corresponding to the first luminance prediction unit; and the luminance component of a second luminance prediction unit (subsequent to the first luminance prediction unit) that does not correspond to the first color difference prediction unit.
  • the inverse quantization section 63 inversely quantizes quantized data which has been decoded by the lossless decoding section 62 .
  • the inverse orthogonal transform section 64 generates predicted error data by performing inverse orthogonal transformation on transform coefficient data input from the inverse quantization section 63 according to the orthogonal transformation method used at the time of encoding. Then, the inverse orthogonal transform section 64 outputs the generated predicted error data to the addition section 65 .
  • the addition section 65 adds the predicted error data input from the inverse orthogonal transform section 64 and predicted image data input from the selector 71 to thereby generate decoded image data. Then, the addition section 65 outputs the generated decoded image data to the deblocking filter 66 and the frame memory 69 .
  • the deblocking filter 66 removes block distortion by filtering the decoded image data input from the addition section 65 , and outputs the decoded image data after filtering to the sorting buffer 67 and the frame memory 69 .
  • the sorting buffer 67 generates a series of image data in a time sequence by sorting images input from the deblocking filter 66 . Then, the sorting buffer 67 outputs the generated image data to the D/A conversion section 68 .
  • the D/A conversion section 68 converts the image data in a digital format input from the sorting buffer 67 into an image signal in an analogue format. Then, the D/A conversion section 68 causes an image to be displayed by outputting the analogue image signal to a display (not shown) connected to the image decoding device 60 , for example.
  • the frame memory 69 stores, using a storage medium, the decoded image data before filtering input from the addition section 65 , and the decoded image data after filtering input from the deblocking filter 66 .
  • the selector 70 switches the output destination of the image data from the frame memory 69 between the motion compensation section 80 and the intra prediction section 90 for each block in the image according to mode information acquired by the lossless decoding section 62 .
  • in the case the inter prediction mode is specified, the selector 70 outputs the decoded image data after filtering that is supplied from the frame memory 69 to the motion compensation section 80 as the reference image data.
  • in the case the intra prediction mode is specified, the selector 70 outputs the decoded image data before filtering that is supplied from the frame memory 69 to the intra prediction section 90 as reference image data.
  • the selector 71 switches the output source of predicted image data to be supplied to the addition section 65 between the motion compensation section 80 and the intra prediction section 90 according to the mode information acquired by the lossless decoding section 62 .
  • the selector 71 supplies to the addition section 65 the predicted image data output from the motion compensation section 80 .
  • the selector 71 supplies to the addition section 65 the predicted image data output from the intra prediction section 90 .
  • the motion compensation section 80 performs a motion compensation process based on the information about inter prediction input from the lossless decoding section 62 and the reference image data from the frame memory 69 , and generates predicted image data. Then, the motion compensation section 80 outputs the generated predicted image data to the selector 71 .
  • the intra prediction section 90 performs an intra prediction process based on the information about intra prediction input from the lossless decoding section 62 and the reference image data from the frame memory 69 , and generates predicted image data. Then, the intra prediction section 90 outputs the generated predicted image data to the selector 71 .
  • the intra prediction process of the intra prediction section 90 will be described later in detail.
  • FIG. 24 is a block diagram showing an example of a detailed configuration of an intra prediction section 90 of the image decoding device 60 shown in FIG. 23 .
  • the intra prediction section 90 includes a prediction controller 92 , a luminance component buffer 93 , a coefficient calculation section 94 , and a prediction section 96 .
  • the prediction controller 92 controls intra prediction processing by the intra prediction section 90 .
  • the prediction controller 92 sets a luminance PU inside the coding unit based on prediction mode information contained in information about an intra prediction and performs intra prediction processing for the set luminance PU.
  • the prediction controller 92 sets a color difference PU inside the coding unit based on prediction mode information and performs intra prediction processing for the set color difference PU.
  • the above intra prediction processing is performed in the above order by PU.
  • the prediction controller 92 causes the prediction section 96 to generate the predicted pixel value of the luminance component of each pixel in prediction mode specified by prediction mode information.
  • the prediction controller 92 causes the prediction section 96 to generate the predicted pixel value of the color difference component of each pixel in prediction mode specified by prediction mode information.
  • prediction mode candidates for the color difference PU contain the above-mentioned LM mode.
  • the prediction controller 92 variably controls the ratio of the number of reference pixels used when coefficients of a prediction function in LM mode are calculated to the block size, that is, the reference ratio.
  • the control of the reference ratio by the prediction controller 92 is typically performed in accordance with the block size. If, for example, the block size exceeds a predetermined size, the prediction controller 92 may control the reference ratio so that the number of reference pixels for calculating coefficients of a prediction function becomes constant. Mapping between the block size and reference ratio may be defined in advance and stored in a storage medium of the image decoding device 60 or may dynamically be specified inside the header area of an encoded stream.
  • the prediction controller 92 may control the reference ratio in accordance with the chroma-format. Also, the prediction controller 92 may control the reference ratio in accordance with the profile or level defining capabilities of a device. The control of the reference ratio by the prediction controller 92 may be performed according to one of the above five scenarios, any combination thereof, or other scenarios.
  • the luminance component buffer 93 temporarily stores the value of the luminance component used for intra prediction in LM mode for the color difference PU.
  • the maximum available size of the luminance PU and the maximum available size of the color difference PU are both 64×64 pixels.
  • the number of luminance PU corresponding to one color difference PU is limited in principle to one in LM mode.
  • if the chroma-format is 4:2:0, the LM mode will therefore not be specified for a color difference PU of 64×64 pixels.
  • in this case, the maximum size of the color difference PU for which the LM mode is specified is 32×32 pixels. Further, in the present embodiment, encoding processing is performed in the above order by PU.
  • an intra prediction may exceptionally be made in LM mode for one color difference PU based on a plurality of luminance PUs.
  • if the chroma-format is 4:2:0, values of the luminance components of four corresponding luminance PUs may be buffered by the luminance component buffer 93 to make an intra prediction in LM mode for one color difference PU of 4×4 pixels.
  • if the chroma-format is 4:2:2, values of the luminance components of two corresponding luminance PUs may be buffered by the luminance component buffer 93 to make an intra prediction in LM mode for one color difference PU of 4×4 pixels.
  • 32×32 pixels are sufficient as the buffer size of the luminance component buffer 93 .
  • the coefficient calculation section 94 calculates coefficients of a prediction function used by the prediction section 96 when the LM mode is specified for the color difference component by referring to pixels around the prediction unit to which the pixel to be predicted belongs, that is, reference pixels.
  • the prediction function used by the prediction section 96 is typically a linear function of the value of the luminance component and is represented by, for example, the above Formula (1).
  • the number of reference pixels referenced by the coefficient calculation section 94 to calculate coefficients of a prediction function is controlled by, as described above, the prediction controller 92 . If the reference ratio is not “1:1”, the coefficient calculation section 94 may calculate coefficients of a prediction function by, for example, thinning out as many reference pixels as a number in accordance with the reference ratio and then using only remaining reference pixels.
  • the coefficient calculation section 94 may calculate coefficients of a prediction function using a common circuit or logic for a plurality of block sizes exceeding a predetermined size. In addition, if the bit depth of a pixel value exceeds a predetermined number of bits, the coefficient calculation section 94 may reduce the reference pixel value to the predetermined number of bits before calculating coefficients of a prediction function using the reduced reference pixel value.
  • the prediction section 96 generates the pixel value of the luminance component and the pixel value of the color difference component of the pixel to be predicted according to the specified prediction mode using reference image data from a frame memory 69 under the control of the prediction controller 92 .
  • Prediction mode candidates used for the color difference component by the prediction section 96 may contain the above LM mode.
  • the prediction section 96 calculates the predicted pixel value of the color difference component by retrieving the value (resampled if necessary) of the corresponding luminance component from the luminance component buffer 93 and substituting the value into a prediction function built by using the coefficient α and the coefficient β calculated by the coefficient calculation section 94 .
  • the prediction section 96 outputs predicted image data generated as a result of prediction to an addition section 65 via a selector 71 .
  • FIG. 25 is a flow chart showing an example of the flow of intra prediction processing at the time of decoding by the intra prediction section 90 having the configuration as illustrated in FIG. 24 .
  • the prediction controller 92 first sets one prediction unit inside the coding unit in the order of decoding from an encoded stream (step S 200 ).
  • the intra prediction processing branches depending on whether the set prediction unit is a prediction unit of the luminance component or a prediction unit of the color difference component (step S 205 ). If the set prediction unit is a prediction unit of the luminance component, the processing proceeds to step S 210 . On the other hand, if the set prediction unit is a prediction unit of the color difference component, the processing proceeds to step S 230 .
  • in step S 210 , the prediction controller 92 recognizes the prediction mode of the luminance component specified by the prediction mode information. Then, the prediction section 96 generates the predicted pixel value of the luminance component of each pixel in the prediction unit according to the specified prediction mode using reference image data from the frame memory 69 (step S 220 ).
  • in step S 230 , the prediction controller 92 recognizes the prediction mode of the color difference component specified by the prediction mode information. Then, the prediction controller 92 determines whether the LM mode is specified (step S 240 ). If the LM mode is specified, the prediction controller 92 causes the coefficient calculation section 94 and the prediction section 96 to perform prediction processing of the color difference component in LM mode (step S 250 ). The LM mode prediction processing in step S 250 may be similar to the LM mode prediction processing described using FIG. 22 . On the other hand, if the LM mode is not specified, the prediction controller 92 causes the prediction section 96 to perform intra prediction processing of the color difference component in non-LM mode (step S 260 ).
  • if the next prediction unit is present in the same coding unit, the processing returns to step S 200 to repeat the above processing for the next prediction unit (step S 270 ). If no next prediction unit is present, the intra prediction processing in FIG. 25 terminates.
  • the prediction section 46 of the image encoding device 10 and the prediction section 96 of the image decoding device 60 thin out luminance components corresponding to each color difference component at some thinning rate.
  • the luminance component corresponding to each color difference component corresponds to each luminance component after resampling according to, for example, the above Formula (2).
  • the prediction section 46 and the prediction section 96 generate the predicted values of each color difference component corresponding to thinned luminance components by using values of luminance components that are not thinned out.
  • FIG. 26 is an explanatory view illustrating an example of thinning processing according to the present modification.
  • the prediction unit (PU) of 8×8 pixels is shown as an example. It is assumed that the chroma-format is 4:2:0 and the thinning rate is 25%.
  • the thinning rate indicates the ratio of the number of pixels after thinning to the number of pixels before thinning.
  • the number of color difference components contained in one PU is 4×4.
  • the number of luminance components corresponding to each color difference component is also 4×4 due to resampling.
  • the number of luminance components used to predict the color difference component in LM mode is 2×2. More specifically, in the example at the lower right of FIG. 26 , among the four luminance components Lu 1 to Lu 4 , the luminance components Lu 2 , Lu 3 , Lu 4 other than the luminance component Lu 1 are thinned out. Similarly, among the four luminance components Lu 5 to Lu 8 , the luminance components Lu 6 , Lu 7 , Lu 8 other than the luminance component Lu 5 are thinned out.
  • the color difference component Cu 1 at the lower left of FIG. 26 corresponds to the luminance component Lu 1 that is not thinned out.
  • the prediction section 46 and the prediction section 96 can generate the predicted values of the color difference component Cu 1 by substituting the value of the luminance component Lu 1 into the right-hand side of the above Formula (1).
  • the color difference component Cu 2 corresponds to the thinned luminance component Lu 2 .
  • the prediction section 46 and the prediction section 96 generate the predicted values of the color difference component Cu 2 using the value of any luminance component that is not thinned out.
  • the predicted value of the color difference component Cu 2 may be replication of the predicted value of the color difference component Cu 1 or a value obtained by linear interpolation of two predicted values of the color difference components Cu 1 , Cu 5 .
  • the predicted pixel value Pr C (x, y) of the color difference component when the thinning rate is 25% may be calculated by techniques represented by the following Formula (5) or Formula (6).
  • Formula (5) represents replication of a predicted value from adjacent pixels.
  • Formula (6) represents linear interpolation of a predicted value.
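  • A sketch of the two options for one row of thinned positions; the exact Formulas (5) and (6) are not reproduced here, so replication and linear interpolation are written out under that assumption, with illustrative names:

```python
# Hypothetical sketch: `predicted` holds predicted chroma values only at
# kept positions; every second position was thinned out (50% rate).
def fill_thinned_row(predicted):
    out = []
    for i, p in enumerate(predicted):
        out.append(p)                          # kept position
        nxt = predicted[i + 1] if i + 1 < len(predicted) else p
        # Formula (5) style would replicate: out.append(p)
        out.append((p + nxt + 1) >> 1)         # Formula (6) style
    return out

print(fill_thinned_row([10, 20, 30]))  # [10, 15, 20, 25, 30, 30]
```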
  • the above thinning rate affects the amount of memory resources to hold pixel values after resampling of the luminance components.
  • the amount of consumption of memory resources decreases with an increasing number of luminance components to be thinned out.
  • on the other hand, the accuracy of prediction of the color difference component may be degraded as more luminance components are thinned out.
  • the parameter to specify the thinning rate may be specified in the header (for example, the sequence parameter set, picture parameter set, or slice header) of an encoded stream.
  • the prediction section 96 of the image decoding device 60 decides the thinning rate based on the parameter acquired from the header. Accordingly, the thinning rate can flexibly be changed in accordance with requirements (for example, which of saving of memory resources and encoding efficiency should have higher priority) for each device.
  • in the examples of FIGS. 27A to 27C , the thinning rate is 50% in each case. In these examples, half the luminance components after resampling are thinned out. However, even if the thinning rate is the same, the patterns of positions of luminance components to be thinned out (hereinafter called thinning patterns) are mutually different.
  • the predicted value of the color difference component Cu 2 corresponding to the thinned luminance component Lu 2 may be replication of the predicted value of the color difference component Cu 1 or a value obtained by linear interpolation of two predicted values of the color difference components Cu 1 , Cu 5 .
  • in the thinning pattern of FIG. 27A , luminance components to be thinned out are uniformly distributed in the PU. Therefore, compared with other thinning patterns of the same thinning rate, the thinning pattern of FIG. 27A realizes higher prediction accuracy.
  • in the thinning pattern of FIG. 27B , luminance components are thinned out in every other row.
  • Such a thinning pattern is advantageous in that, for example, in a device holding pixel values in a line memory, values of many luminance components can be accessed by memory access at a time.
  • in the thinning pattern of FIG. 27C , luminance components are thinned out in every other column.
  • Such a thinning pattern is advantageous in that, for example, if the chroma-format is 4:2:2 and the number of pixels in the vertical direction is larger, more frequency components in the column direction can be maintained.
  • the parameter to specify the thinning pattern from a plurality of thinning pattern candidates may be specified in the header of an encoded stream.
  • the prediction section 96 of the image decoding device 60 decides the positions of luminance components to be thinned out based on the parameter acquired from the header. Accordingly, the thinning pattern can flexibly be changed in accordance with requirements for each device.
  • the prediction section 46 and the prediction section 96 may decide the thinning rates in accordance with the above reference ratio. If, for example, the number of reference pixels referenced when coefficients of a prediction function are calculated is smaller, more luminance components may be thinned out. At this point, the prediction section 46 and the prediction section 96 may thin out luminance components in positions corresponding to thinning positions of reference pixels.
  • FIGS. 28A and 28B each show examples of correspondence between thinning positions of reference pixels and thinning positions of luminance components.
  • in the example of FIG. 28A , the PU size is 16×16 pixels, the chroma-format is 4:2:0, and the reference ratio is 2:1. In this case, the thinning rate is decided to be 25% and a thinning pattern similar to the examples in FIG. 26 may be selected.
  • in the example of FIG. 28B , the PU size is 16×16 pixels, the chroma-format is 4:2:0, the reference ratio in the vertical direction is 2:1, and the reference ratio in the horizontal direction is 1:1. In this case, the thinning rate is decided to be 50% and a thinning pattern similar to the example in FIG. 27B may be selected.
  • in the example of FIG. 28A , all luminance components of the block to be predicted are thinned out in rows in which the reference pixel is thinned out, and all luminance components of the block to be predicted are thinned out in columns in which the reference pixel is thinned out. Also in the example of FIG. 28B , all luminance components of the block to be predicted are thinned out in rows in which the reference pixel is thinned out. By deciding thinning positions in this manner, the determination of thinning positions is simplified and the implementation of thinning processing according to the present modification can be made still easier, as the sketch below suggests.
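  • A sketch of this alignment rule (the helper is hypothetical; True marks a luminance component that is kept rather than thinned out):

```python
# Hypothetical sketch: keep a luminance component only if both its row
# and its column kept their reference pixel.
def luma_keep_mask(rows_kept, cols_kept, size):
    return [[(y in rows_kept) and (x in cols_kept) for x in range(size)]
            for y in range(size)]

kept = set(range(0, 8, 2))            # 2:1 reference ratio in both directions
mask = luma_keep_mask(kept, kept, 8)  # 25% of components remain, as in FIG. 28A
```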
  • the image encoding device 10 and the image decoding device 60 may be applied to various electronic appliances such as a transmitter and a receiver for satellite broadcasting, cable broadcasting such as cable TV, distribution on the Internet, distribution to terminals via cellular communication, and the like, a recording device that records images in a medium such as an optical disc, a magnetic disk or a flash memory, a reproduction device that reproduces images from such storage medium, and the like.
  • FIG. 29 is a block diagram showing an example of a schematic configuration of a television adopting the embodiment described above.
  • a television 900 includes an antenna 901 , a tuner 902 , a demultiplexer 903 , a decoder 904 , a video signal processing section 905 , a display section 906 , an audio signal processing section 907 , a speaker 908 , an external interface 909 , a control section 910 , a user interface 911 , and a bus 912 .
  • the tuner 902 extracts a signal of a desired channel from broadcast signals received via the antenna 901 , and demodulates the extracted signal. Then, the tuner 902 outputs an encoded bit stream obtained by demodulation to the demultiplexer 903 . That is, the tuner 902 serves as transmission means of the television 900 for receiving an encoded stream in which an image is encoded.
  • the demultiplexer 903 separates a video stream and an audio stream of a program to be viewed from the encoded bit stream, and outputs each stream which has been separated to the decoder 904 . Also, the demultiplexer 903 extracts auxiliary data such as an EPG (Electronic Program Guide) from the encoded bit stream, and supplies the extracted data to the control section 910 . Additionally, the demultiplexer 903 may perform descrambling in the case the encoded bit stream is scrambled.
  • the decoder 904 decodes the video stream and the audio stream input from the demultiplexer 903 . Then, the decoder 904 outputs video data generated by the decoding process to the video signal processing section 905 . Also, the decoder 904 outputs the audio data generated by the decoding process to the audio signal processing section 907 .
  • the video signal processing section 905 reproduces the video data input from the decoder 904 , and causes the display section 906 to display the video.
  • the video signal processing section 905 may also cause the display section 906 to display an application screen supplied via a network. Further, the video signal processing section 905 may perform an additional process such as noise removal, for example, on the video data according to the setting.
  • the video signal processing section 905 may generate an image of a GUI (Graphical User Interface) such as a menu, a button, a cursor or the like, for example, and superimpose the generated image on an output image.
  • the display section 906 is driven by a drive signal supplied by the video signal processing section 905 , and displays a video or an image on a video screen of a display device (for example, a liquid crystal display, a plasma display, an OLED, or the like).
  • the audio signal processing section 907 performs reproduction processes such as D/A conversion and amplification on the audio data input from the decoder 904 , and outputs audio from the speaker 908 . Also, the audio signal processing section 907 may perform an additional process such as noise removal on the audio data.
  • the external interface 909 is an interface for connecting the television 900 and an external appliance or a network.
  • a video stream or an audio stream received via the external interface 909 may be decoded by the decoder 904 . That is, the external interface 909 also serves as transmission means of the television 900 for receiving an encoded stream in which an image is encoded.
  • the control section 910 includes a processor such as a CPU (Central Processing Unit), and a memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), or the like.
  • the memory stores a program to be executed by the CPU, program data, EPG data, data acquired via a network, and the like.
  • the program stored in the memory is read and executed by the CPU at the time of activation of the television 900 , for example.
  • the CPU controls the operation of the television 900 according to an operation signal input from the user interface 911 , for example, by executing the program.
  • the user interface 911 is connected to the control section 910 .
  • the user interface 911 includes a button and a switch used by a user to operate the television 900 , and a receiving section for a remote control signal, for example.
  • the user interface 911 detects an operation of a user via these structural elements, generates an operation signal, and outputs the generated operation signal to the control section 910 .
  • the bus 912 interconnects the tuner 902 , the demultiplexer 903 , the decoder 904 , the video signal processing section 905 , the audio signal processing section 907 , the external interface 909 , and the control section 910 .
  • the decoder 904 has a function of the image decoding device 60 according to the embodiment described above. Accordingly, an increase in the amount of consumption of memory resources accompanying the extension of block size can be curbed when the LM mode is adopted for decoding images of the television 900 .
  • FIG. 30 is a block diagram showing an example of a schematic configuration of a mobile phone adopting the embodiment described above.
  • a mobile phone 920 includes an antenna 921 , a communication section 922 , an audio codec 923 , a speaker 924 , a microphone 925 , a camera section 926 , an image processing section 927 , a demultiplexing section 928 , a recording/reproduction section 929 , a display section 930 , a control section 931 , an operation section 932 , and a bus 933 .
  • the antenna 921 is connected to the communication section 922 .
  • the speaker 924 and the microphone 925 are connected to the audio codec 923 .
  • the operation section 932 is connected to the control section 931 .
  • the bus 933 interconnects the communication section 922 , the audio codec 923 , the camera section 926 , the image processing section 927 , the demultiplexing section 928 , the recording/reproduction section 929 , the display section 930 , and the control section 931 .
  • the mobile phone 920 performs operations such as transmission/reception of audio signals, transmission/reception of emails or image data, image capturing, recording of data, and the like, in various operation modes including an audio communication mode, a data communication mode, an image capturing mode, and a videophone mode.
  • in the audio communication mode, an analogue audio signal generated by the microphone 925 is supplied to the audio codec 923 .
  • the audio codec 923 converts the analogue audio signal into audio data, and A/D converts and compresses the converted audio data. Then, the audio codec 923 outputs the compressed audio data to the communication section 922 .
  • the communication section 922 encodes and modulates the audio data, and generates a transmission signal. Then, the communication section 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921 . Also, the communication section 922 amplifies a wireless signal received via the antenna 921 and converts the frequency of the wireless signal, and acquires a received signal.
  • the communication section 922 demodulates and decodes the received signal and generates audio data, and outputs the generated audio data to the audio codec 923 .
  • the audio codec 923 extends and D/A converts the audio data, and generates an analogue audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 and causes the audio to be output.
  • in the data communication mode, the control section 931 generates text data that makes up an email, according to an operation of a user via the operation section 932 , for example. Moreover, the control section 931 causes the text to be displayed on the display section 930 . Furthermore, the control section 931 generates email data according to a transmission instruction of the user via the operation section 932 , and outputs the generated email data to the communication section 922 . Then, the communication section 922 encodes and modulates the email data, and generates a transmission signal. Then, the communication section 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921 .
  • the communication section 922 amplifies a wireless signal received via the antenna 921 and converts the frequency of the wireless signal, and acquires a received signal. Then, the communication section 922 demodulates and decodes the received signal, restores the email data, and outputs the restored email data to the control section 931 .
  • the control section 931 causes the display section 930 to display the contents of the email, and also, causes the email data to be stored in the storage medium of the recording/reproduction section 929 .
  • the recording/reproduction section 929 includes an arbitrary readable and writable storage medium.
  • the storage medium may be a built-in storage medium such as a RAM, a flash memory or the like, or an externally mounted storage medium such as a hard disk, a magnetic disk, a magneto-optical disk, an optical disc, a USB memory, a memory card, or the like.
  • in the image capturing mode, the camera section 926 captures an image of a subject, generates image data, and outputs the generated image data to the image processing section 927 , for example.
  • the image processing section 927 encodes the image data input from the camera section 926 , and causes the encoded stream to be stored in the storage medium of the recording/reproduction section 929 .
  • in the videophone mode, the demultiplexing section 928 multiplexes a video stream encoded by the image processing section 927 and an audio stream input from the audio codec 923 , and outputs the multiplexed stream to the communication section 922 , for example.
  • the communication section 922 encodes and modulates the stream, and generates a transmission signal. Then, the communication section 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921 . Also, the communication section 922 amplifies a wireless signal received via the antenna 921 and converts the frequency of the wireless signal, and acquires a received signal.
  • the transmission signal and the received signal may include an encoded bit stream.
  • the communication section 922 demodulates and decodes the received signal, restores the stream, and outputs the restored stream to the demultiplexing section 928 .
  • the demultiplexing section 928 separates a video stream and an audio stream from the input stream, and outputs the video stream to the image processing section 927 and the audio stream to the audio codec 923 .
  • the image processing section 927 decodes the video stream, and generates video data.
  • the video data is supplied to the display section 930 , and a series of images is displayed by the display section 930 .
  • the audio codec 923 extends and D/A converts the audio stream, and generates an analogue audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 and causes the audio to be output.
  • the image processing section 927 has a function of the image encoding device 10 and the image decoding device 60 according to the embodiment described above. Accordingly, an increase in the amount of consumption of memory resources accompanying the extension of block size can be curbed when the LM mode is adopted for encoding and decoding images of the mobile phone 920 .
  • FIG. 31 is a block diagram showing an example of a schematic configuration of a recording/reproduction device adopting the embodiment described above.
  • a recording/reproduction device 940 encodes, and records in a recording medium, audio data and video data of a received broadcast program, for example.
  • the recording/reproduction device 940 may also encode, and record in the recording medium, audio data and video data acquired from another device, for example.
  • the recording/reproduction device 940 reproduces, using a monitor or a speaker, data recorded in the recording medium, according to an instruction of a user, for example. At this time, the recording/reproduction device 940 decodes the audio data and the video data.
  • the recording/reproduction device 940 includes a tuner 941 , an external interface 942 , an encoder 943 , an HDD (Hard Disk Drive) 944 , a disc drive 945 , a selector 946 , a decoder 947 , an OSD (On-Screen Display) 948 , a control section 949 , and a user interface 950 .
  • the tuner 941 extracts a signal of a desired channel from broadcast signals received via an antenna (not shown), and demodulates the extracted signal. Then, the tuner 941 outputs an encoded bit stream obtained by demodulation to the selector 946 . That is, the tuner 941 serves as transmission means of the recording/reproduction device 940 .
  • the external interface 942 is an interface for connecting the recording/reproduction device 940 and an external appliance or a network.
  • the external interface 942 may be an IEEE 1394 interface, a network interface, a USB interface, a flash memory interface, or the like.
  • video data and audio data received by the external interface 942 are input to the encoder 943 . That is, the external interface 942 serves as transmission means of the recording/reproduction device 940 .
  • the encoder 943 encodes the video data and the audio data. Then, the encoder 943 outputs the encoded bit stream to the selector 946 .
  • the HDD 944 records in an internal hard disk an encoded bit stream, which is compressed content data of a video or audio, various programs, and other pieces of data. Also, the HDD 944 reads these pieces of data from the hard disk at the time of reproducing a video or audio.
  • the disc drive 945 records or reads data in a recording medium that is mounted.
  • a recording medium that is mounted on the disc drive 945 may be a DVD disc (a DVD-Video, a DVD-RAM, a DVD-R, a DVD-RW, a DVD+R, a DVD+RW, or the like), a Blu-ray (registered trademark) disc, or the like, for example.
  • the selector 946 selects, at the time of recording a video or audio, an encoded bit stream input from the tuner 941 or the encoder 943 , and outputs the selected encoded bit stream to the HDD 944 or the disc drive 945 . Also, the selector 946 outputs, at the time of reproducing a video or audio, an encoded bit stream input from the HDD 944 or the disc drive 945 to the decoder 947 .
  • the decoder 947 decodes the encoded bit stream, and generates video data and audio data. Then, the decoder 947 outputs the generated video data to the OSD 948 . Also, the decoder 947 outputs the generated audio data to an external speaker.
  • the OSD 948 reproduces the video data input from the decoder 947 , and displays a video. Also, the OSD 948 may superimpose an image of a GUI, such as a menu, a button, a cursor or the like, for example, on a displayed video.
  • the control section 949 includes a processor such as a CPU, and a memory such as a RAM or a ROM.
  • the memory stores a program to be executed by the CPU, program data, and the like.
  • a program stored in the memory is read and executed by the CPU at the time of activation of the recording/reproduction device 940 , for example.
  • the CPU controls the operation of the recording/reproduction device 940 according to an operation signal input from the user interface 950 , for example, by executing the program.
  • the user interface 950 is connected to the control section 949 .
  • the user interface 950 includes a button and a switch used by a user to operate the recording/reproduction device 940 , and a receiving section for a remote control signal, for example.
  • the user interface 950 detects an operation of a user via these structural elements, generates an operation signal, and outputs the generated operation signal to the control section 949 .
  • the encoder 943 has a function of the image encoding device 10 according to the embodiment described above.
  • the decoder 947 has a function of the image decoding device 60 according to the embodiment described above. Accordingly, an increase in the amount of consumption of memory resources accompanying the extension of block size can be curbed when the LM mode is adopted for encoding and decoding images of the recording/reproduction device 940 .
  • FIG. 32 is a block diagram showing an example of a schematic configuration of an image capturing device adopting the embodiment described above.
  • An image capturing device 960 captures an image of a subject, generates image data, encodes the image data, and records the encoded image data in a recording medium.
  • the image capturing device 960 includes an optical block 961 , an image capturing section 962 , a signal processing section 963 , an image processing section 964 , a display section 965 , an external interface 966 , a memory 967 , a media drive 968 , an OSD 969 , a control section 970 , a user interface 971 , and a bus 972 .
  • the optical block 961 is connected to the image capturing section 962 .
  • the image capturing section 962 is connected to the signal processing section 963 .
  • the display section 965 is connected to the image processing section 964 .
  • the user interface 971 is connected to the control section 970 .
  • the bus 972 interconnects the image processing section 964 , the external interface 966 , the memory 967 , the media drive 968 , the OSD 969 , and the control section 970 .
  • the optical block 961 includes a focus lens, an aperture stop mechanism, and the like.
  • the optical block 961 forms an optical image of a subject on an image capturing surface of the image capturing section 962 .
  • the image capturing section 962 includes an image sensor such as a CCD, a CMOS or the like, and converts by photoelectric conversion the optical image formed on the image capturing surface into an image signal which is an electrical signal. Then, the image capturing section 962 outputs the image signal to the signal processing section 963 .
  • the signal processing section 963 performs various camera signal processes, such as knee correction, gamma correction, color correction and the like, on the image signal input from the image capturing section 962 .
  • the signal processing section 963 outputs the image data after the camera signal process to the image processing section 964 .
  • the image processing section 964 encodes the image data input from the signal processing section 963 , and generates encoded data. Then, the image processing section 964 outputs the generated encoded data to the external interface 966 or the media drive 968 . Also, the image processing section 964 decodes encoded data input from the external interface 966 or the media drive 968 , and generates image data. Then, the image processing section 964 outputs the generated image data to the display section 965 . Also, the image processing section 964 may output the image data input from the signal processing section 963 to the display section 965 , and cause the image to be displayed. Furthermore, the image processing section 964 may superimpose data for display acquired from the OSD 969 on an image to be output to the display section 965 .
  • the OSD 969 generates an image of a GUI, such as a menu, a button, a cursor or the like, for example, and outputs the generated image to the image processing section 964 .
  • the external interface 966 is configured as a USB input/output terminal, for example.
  • the external interface 966 connects the image capturing device 960 and a printer at the time of printing an image, for example.
  • a drive is connected to the external interface 966 as necessary.
  • a removable medium such as a magnetic disk, an optical disc or the like, for example, is mounted on the drive, and a program read from the removable medium may be installed in the image capturing device 960 .
  • the external interface 966 may be configured as a network interface to be connected to a network such as a LAN, the Internet or the like. That is, the external interface 966 serves as transmission means of the image capturing device 960 .
  • a recording medium to be mounted on the media drive 968 may be an arbitrary readable and writable removable medium, such as a magnetic disk, a magneto-optical disk, an optical disc, a semiconductor memory or the like, for example. Also, a recording medium may be fixedly mounted on the media drive 968 , configuring a non-transportable storage section such as a built-in hard disk drive or an SSD (Solid State Drive), for example.
  • the control section 970 includes a processor such as a CPU, and a memory such as a RAM or a ROM.
  • the memory stores a program to be executed by the CPU, program data, and the like.
  • a program stored in the memory is read and executed by the CPU at the time of activation of the image capturing device 960 , for example.
  • the CPU controls the operation of the image capturing device 960 according to an operation signal input from the user interface 971 , for example, by executing the program.
  • the user interface 971 is connected to the control section 970 .
  • the user interface 971 includes a button, a switch and the like used by a user to operate the image capturing device 960 , for example.
  • the user interface 971 detects an operation of a user via these structural elements, generates an operation signal, and outputs the generated operation signal to the control section 970 .
  • the image processing section 964 has a function of the image encoding device 10 and the image decoding device 60 according to the embodiment described above. Accordingly, an increase in the amount of consumption of memory resources accompanying the extension of block size can be curbed when the LM mode is adopted for encoding and decoding images of the image capturing device 960 .
  • the image encoding device 10 and the image decoding device 60 have been described using FIGS. 1 to 32 .
  • the color difference component of a first color difference prediction unit corresponding to the first luminance prediction unit is encoded before the luminance component of a second luminance prediction unit that does not correspond to the first color difference prediction unit. Therefore, when an encoded stream is decoded according to the order of the above encoding, an intra prediction can be made in LM mode for the first color difference prediction unit based on the value of the luminance component of the buffered first luminance prediction unit.
  • the buffer may be cleared to newly buffer the value of the luminance component of the second luminance prediction unit. Therefore, there is no need to provide a large memory for the adoption of the LM mode. That is, the amount of memory resources needed when an intra prediction based on a dynamically built prediction function is made can be reduced.
  • the number of luminance prediction units corresponding to one color difference prediction unit is limited in principle to one in LM mode. That is, an intra prediction is not made in LM mode for one color difference component based on values of the luminance components extending over a plurality of luminance prediction units that are hardly correlated to each other. Therefore, because prediction modes from which high prediction precision cannot be expected are excluded from objects of search, the processing cost needed to encode images can be reduced.
  • when the size of the luminance prediction unit is 4×4 pixels, the number of luminance prediction units corresponding to one color difference prediction unit may exceptionally be permitted to be plural in LM mode. According to this configuration, encoding efficiency can be enhanced because opportunities to utilize the LM mode increase, while the exception is limited to cases where the size of the prediction unit is small.
  • the ratio of the number of reference pixels referenced to calculate coefficients of the function to the block size is variably controlled. Therefore, an increase in processing cost can be avoided or mitigated by curbing an increase of the number of reference pixels accompanying the extension of the block size.
  • the ratio is controlled so that the number of reference pixels is constant when the block size exceeds a predetermined size.
  • coefficients of the function can be calculated using a common circuit or logic for a plurality of block sizes. Therefore, an increase in scale of the circuit or logic caused by the adoption of the LM mode can also be curbed.
  • reference pixels are not excessively reduced when the block size falls below a predetermined size. Therefore, the degradation of prediction accuracy in LM mode due to an insufficient number of reference pixels can be prevented.
  • a relatively large block size can normally be set when an image in the block is monotonous and a prediction can easily be made. Therefore, when the block size is still larger, the risk of extreme degradation of prediction accuracy caused by the reduction of more reference pixels is small.
  • the ratio can separately be controlled in the vertical direction and the horizontal direction of the block.
  • coefficients of the function can be calculated using a common circuit or logic without being dependent on the chroma-format.
  • reference pixels to be reduced can adaptively be changed in accordance with the shape of the block.
  • the amount of consumption of memory resources in connection with the introduction of the LM mode can be reduced much more effectively.
  • the information about intra prediction and the information about inter prediction are multiplexed into the header of the encoded stream, and the encoded stream is transmitted from the encoding side to the decoding side.
  • the method of transmitting this information is not limited to such an example.
  • this information may be transmitted or recorded as individual data that is associated with an encoded bit stream, without being multiplexed to the encoded bit stream.
  • the term “associate” here means to enable an image included in a bit stream (or a part of an image, such as a slice or a block) and information corresponding to the image to link to each other at the time of decoding.
  • this information may be transmitted on a different transmission line from the image (or the bit stream). Or, this information may be recorded on a different recording medium (or in a different recording area on the same recording medium) from the image (or the bit stream). Furthermore, this information and the image (or the bit stream) may be associated with each other on the basis of arbitrary units such as a plurality of frames, one frame, a part of a frame or the like, for example.
  • Additionally, the present technology may also be configured as below.
  • An image processing apparatus including:
  • a decoding section that decodes a luminance component and a color difference component of a block inside a coding unit in an order of the luminance component and the color difference component in each block.
  • the image processing apparatus wherein the decoding section decodes a luminance component of a first block in the coding unit, a color difference component of the first block, and a luminance component of a second block subsequent to the first block in the order of decoding in an order of the luminance component of the first block, the color difference component of the first block, and the luminance component of the second block.
  • the image processing apparatus wherein the decoding section decodes the luminance component of the first block, the color difference component of the first block, the luminance component of the second block, and a color difference component of the second block in an order of the luminance component of the first block, the color difference component of the first block, the luminance component of the second block, and the color difference component of the second block.
  • the image processing apparatus according to any one of (1) to (3), wherein a unit of decoding processing is hierarchically blocked, and wherein the block is a prediction unit.
  • the image processing apparatus according to any one of (1) to (4), further including:
  • a prediction section that when a linear model (LM) mode representing a prediction of the color difference component based on the luminance component is specified, generates a predicted value of the color difference component of the first block decoded by the decoding section by using a function based on a value of the luminance component of the first block.
  • the image processing apparatus wherein when a size of the luminance prediction unit is 4×4 pixels, the number of the luminance prediction units corresponding to the one color difference prediction unit is exceptionally permitted to be plural in the LM mode.
  • the decoding section decodes at least the one luminance component of the luminance prediction unit of 4×4 pixels including the first block and then decodes the color difference component of the first block.
  • the prediction section includes a buffer having a size equal to or smaller than a maximum size of the color difference prediction unit as the buffer of the luminance component for the LM mode.
  • An image processing method including:
  • An image processing apparatus including:
  • an encoding section that encodes a luminance component and a color difference component of a block inside a coding unit in an order of the luminance component and the color difference component in each block.
  • the image processing apparatus wherein the encoding section encodes a luminance component of a first block in the coding unit, a color difference component of the first block, and a luminance component of a second block subsequent to the first block in the order of decoding in an order of the luminance component of the first block, the color difference component of the first block, and the luminance component of the second block.
  • the image processing apparatus wherein the encoding section encodes the luminance component of the first block, the color difference component of the first block, the luminance component of the second block, and a color difference component of the second block in an order of the luminance component of the first block, the color difference component of the first block, the luminance component of the second block, and the color difference component of the second block.
  • the image processing apparatus according to any one of (11) to (13), wherein a unit of encoding processing is hierarchically blocked, and wherein the block is a prediction unit.
  • the image processing apparatus according to any one of (11) to (14), further including:
  • a prediction section that, in a linear model (LM) mode representing a prediction of the color difference component based on the luminance component, generates a predicted value of the color difference component of the first block encoded by the encoding section by using a function based on a value of the luminance component of the first block.
  • the encoding section encodes at least the one luminance component of the luminance prediction unit of 4×4 pixels including the first block and then encodes the color difference component of the first block.
  • the prediction section includes a buffer having a size equal to or smaller than a maximum size of the color difference prediction unit as the buffer of the luminance component for the LM mode.
  • An image processing method including:


Abstract

Provided is an image processing apparatus including a decoding section that decodes a luminance component and a color difference component of a block inside a coding unit in an order of the luminance component and the color difference component in each block.

Description

    CROSS REFERENCE TO PRIOR APPLICATION
  • This application is a continuation of U.S. patent application Ser. No. 16/003,624 (filed on Jun. 8, 2018), which is a continuation of U.S. patent application Ser. No. 14/004,460 (filed on Sep. 11, 2013 and issued as U.S. Pat. No. 10,063,852 on Aug. 28, 2018), which is a National Stage Patent Application of PCT International Patent Application No. PCT/JP2012/059173 (filed on Apr. 4, 2012) under 35 U.S.C. § 371, which claims priority to Japanese Patent Application Nos. 2011-210543 (filed on Sep. 27, 2011), 2011-145411 (filed on Jun. 30, 2011), and 2011-125473 (filed on Jun. 3, 2011), which are all hereby incorporated by reference in their entirety.
  • TECHNICAL FIELD
  • The present disclosure relates to an image processing device, and an image processing method.
  • BACKGROUND ART
  • Conventionally, compression technologies are widespread that aim to efficiently transmit or accumulate digital images, and that compress the amount of information of an image by motion compensation and orthogonal transform, such as discrete cosine transform, using redundancy unique to the image. For example, image encoding devices and image decoding devices conforming to a standard technology such as the H.26x standards developed by ITU-T or the MPEG-y standards developed by MPEG (Moving Picture Experts Group) are widely used in various scenes, such as accumulation and distribution of images by broadcasters and reception and accumulation of images by general users.
  • The H.26x standards (ITU-T Q6/16 VCEG) are standards developed initially with the aim of performing encoding that is suitable for communications such as video telephony and video conferencing. The H.26x standards are known to require a large computation amount for encoding and decoding, but to be capable of realizing a higher compression ratio compared with the MPEG-y standards. Furthermore, with the Joint Model of Enhanced-Compression Video Coding, which is a part of the activities of MPEG4, a standard allowing realization of a still higher compression ratio by adopting new functions while being based on the H.26x standards was developed. This standard was made an international standard under the names of H.264 and MPEG-4 Part 10 (Advanced Video Coding; AVC) in March 2003.
  • One important technique in the image encoding methods described above is in-screen prediction, that is, intra prediction. Intra prediction is a technique of using the correlation between adjacent blocks in an image and predicting the pixel value of a certain block from the pixel value of another, adjacent block, to thereby reduce the amount of information to be encoded. With image encoding methods before MPEG4, only the DC component and the low frequency components of the orthogonal transform coefficients were the targets of intra prediction, but with H.264/AVC, intra prediction is possible for all the pixel values. By using intra prediction, a significant increase in the compression ratio can be expected for an image where the change in the pixel value is gradual, such as an image of the blue sky, for example.
  • In H.264/AVC, the intra prediction can be made using a block of, for example, 4×4 pixels, 8×8 pixels, or 16×16 pixels as a processing unit (that is, a prediction unit (PU)). In HEVC (High Efficiency Video Coding) whose standardization is under way as a next-generation image encoding scheme subsequent to H.264/AVC, the size of the prediction unit is about to be extended to 32×32 pixels and 64×64 pixels (see Non-Patent Literature 1).
  • To make an intra prediction, the optimum prediction mode to predict a pixel value of a block to be predicted is normally selected from a plurality of prediction modes. The prediction mode is typically distinguished by the prediction direction from a reference pixel to a pixel to be predicted. In H.264/AVC, for example, when predicting a color difference component, four prediction modes of the average value prediction, horizontal prediction, vertical prediction, and plane prediction can be selected. Further, in HEVC, an additional prediction mode called a linear model (LM) mode that predicts the pixel value of a color difference component using a linear function of a dynamically built luminance component as a prediction function is proposed (see Non-Patent Literature 2).
  • CITATION LIST
  • Non-Patent Literature
    • Non-Patent Literature 1: Sung-Chang Lim, Hahyun Lee, Jinho Lee, Jongho Kim, Haechul Choi, Seyoon Jeong, Jin Soo Choi, “Intra coding using extended block size” (VCEG-AL28, July 2009)
    • Non-Patent Literature 2: Jianle Chen, et al. “CE6.a.4: Chroma intra prediction by reconstructed luma samples” (JCTVC-E266, March, 2011)
    SUMMARY OF INVENTION
    Technical Problem
  • However, according to the technique described in Non-Patent Literature 2 described above, memory resources needed to build a prediction function in LM mode increase with an increasing number of reference pixels. Particularly in HEVC in which the size of the prediction unit is extended up to 64×64 pixels, it becomes necessary to provide a large memory to adopt the LM mode, which could present an obstacle to miniaturization or cost reduction of hardware.
  • Therefore, it is desirable to provide a technique capable of reducing the amount of memory resources needed when an intra prediction is made based on a dynamically built prediction function like the LM mode.
  • Solution to Problem
  • According to the present disclosure, there is provided an image processing apparatus including a decoding section that decodes a luminance component and a color difference component of a block inside a coding unit in the order of the luminance component and the color difference component in each block.
  • The image processing device mentioned above may be typically realized as an image decoding device that decodes an image.
  • Also according to the present disclosure, there is provided an image processing method including decoding a luminance component and a color difference component of a block inside a coding unit in the order of the luminance component and the color difference component in each block.
  • Also according to the present disclosure, there is provided an image processing apparatus including an encoding section that encodes a luminance component and a color difference component of a block inside a coding unit in the order of the luminance component and the color difference component in each block.
  • The image processing device mentioned above may be typically realized as an image encoding device that encodes an image.
  • Also according to the present disclosure, there is provided an image processing method including encoding a luminance component and a color difference component of a block inside a coding unit in the order of the luminance component and the color difference component in each block.
  • Advantageous Effects of Invention
  • According to a technology in the present disclosure, the amount of memory resources needed when an intra prediction is made based on a dynamically built prediction function can be reduced.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 is a block diagram showing an example of a configuration of an image encoding device according to an embodiment.
  • FIG. 2 is a block diagram showing an example of a detailed configuration of an intra prediction section of the image encoding device of the embodiment.
  • FIG. 3 is an explanatory view illustrating examples of prediction mode candidates for a luminance component of a prediction unit of 4×4 pixels.
  • FIG. 4 is an explanatory view illustrating prediction directions related to the examples in FIG. 3.
  • FIG. 5 is an explanatory view illustrating reference pixels related to the examples in FIG. 3.
  • FIG. 6 is an explanatory view illustrating examples of prediction mode candidates for a luminance component of a prediction unit of 8×8 pixels.
  • FIG. 7 is an explanatory view illustrating examples of prediction mode candidates for a luminance component of a prediction unit of 16×16 pixels.
  • FIG. 8 is an explanatory view illustrating examples of prediction mode candidates for a color difference component.
  • FIG. 9 is an explanatory view illustrating a difference between the order of processing in an existing technique and the order of processing in the present embodiment.
  • FIG. 10A is an explanatory view illustrating a first example of the order of new encoding processing.
  • FIG. 10B is an explanatory view illustrating a second example of the order of new encoding processing.
  • FIG. 11A is a first explanatory view illustrating reference pixels in LM mode.
  • FIG. 11B is a second explanatory view illustrating reference pixels in LM mode.
  • FIG. 12 is an explanatory view showing an example of the definition of a reference ratio in a first scenario.
  • FIG. 13A is an explanatory view showing a first example of the number of reference pixels controlled according to the first scenario.
  • FIG. 13B is an explanatory view showing a second example of the number of reference pixels controlled according to the first scenario.
  • FIG. 14 is an explanatory view showing an example of the definition of a reference ratio in a second scenario.
  • FIG. 15A is an explanatory view showing a first example of the number of reference pixels controlled according to the second scenario.
  • FIG. 15B is an explanatory view showing a second example of the number of reference pixels controlled according to the second scenario.
  • FIG. 15C is an explanatory view showing a third example of the number of reference pixels controlled according to the second scenario.
  • FIG. 15D is an explanatory view showing a fourth example of the number of reference pixels controlled according to the second scenario.
  • FIG. 16 is an explanatory view showing an example of the definition of a reference ratio in a third scenario.
  • FIG. 17A is an explanatory view showing a first example of the number of reference pixels controlled according to the third scenario.
  • FIG. 17B is an explanatory view showing a second example of the number of reference pixels controlled according to the third scenario.
  • FIG. 18 is an explanatory view showing an example of the definition of a reference ratio in a fourth scenario.
  • FIG. 19A is an explanatory view showing a first example of the number of reference pixels controlled according to the fourth scenario.
  • FIG. 19B is an explanatory view showing a second example of the number of reference pixels controlled according to the fourth scenario.
  • FIG. 20A is an explanatory view showing a first example of the number of reference pixels controlled according to a fifth scenario.
  • FIG. 20B is an explanatory view showing a second example of the number of reference pixels controlled according to the fifth scenario.
  • FIG. 21 is a flow chart showing an example of the intra prediction process at the time of encoding according to the embodiment.
  • FIG. 22 is a flow chart showing an example of a detailed flow of LM mode prediction processing in FIG. 21.
  • FIG. 23 is a block diagram showing an example of a detailed configuration of the image decoding device according to the embodiment.
  • FIG. 24 is a block diagram showing an example of the detailed configuration of the intra prediction section of the image decoding device according to the embodiment.
  • FIG. 25 is a flow chart showing an example of a flow of an intra prediction process at the time of decoding according to an embodiment.
  • FIG. 26 is an explanatory view illustrating an example of thinning processing according to a modification.
  • FIG. 27A is a first explanatory view illustrating an example of thinning processing different from the example of FIG. 26.
  • FIG. 27B is a second explanatory view illustrating an example of thinning processing different from the example of FIG. 26.
  • FIG. 27C is a third explanatory view illustrating an example of thinning processing different from the example of FIG. 26.
  • FIG. 28A is an explanatory view illustrating a first example of correspondence between thinning positions of reference pixels and thinning positions of luminance components.
  • FIG. 28B is an explanatory view illustrating a second example of correspondence between thinning positions of reference pixels and thinning positions of luminance components.
  • FIG. 29 is a block diagram showing an example of a schematic configuration of a television.
  • FIG. 30 is a block diagram showing an example of a schematic configuration of a mobile phone.
  • FIG. 31 is a block diagram showing an example of a schematic configuration of a recording/reproduction device.
  • FIG. 32 is a block diagram showing an example of a schematic configuration of an image capturing device.
  • DESCRIPTION OF EMBODIMENTS
  • Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the appended drawings. Note that, in this specification and the drawings, elements that have substantially the same function and structure are denoted with the same reference signs, and repeated explanation is omitted.
  • Furthermore, the “Description of Embodiments” will be described in the order mentioned below.
  • 1. Example Configuration of Image Encoding Device According to an Embodiment
  • 2. Flow of Process at the Time of Encoding According to an Embodiment
  • 3. Example Configuration of Image Decoding Device According to an Embodiment
  • 4. Flow of Process at the Time of Decoding According to an Embodiment
  • 5. Modifications
  • 6. Example Application
  • 7. Summary
  • 1. Example Configuration of Image Encoding Device According to an Embodiment
  • 1-1. Example of Overall Configuration
  • FIG. 1 is a block diagram showing an example of a configuration of an image encoding device 10 according to an embodiment. Referring to FIG. 1, the image encoding device 10 includes an A/D (Analogue to Digital) conversion section 11, a sorting buffer 12, a subtraction section 13, an orthogonal transform section 14, a quantization section 15, a lossless encoding section 16, an accumulation buffer 17, a rate control section 18, an inverse quantization section 21, an inverse orthogonal transform section 22, an addition section 23, a deblocking filter 24, a frame memory 25, selectors 26 and 27, a motion estimation section 30 and an intra prediction section 40.
  • The A/D conversion section 11 converts an image signal input in an analogue format into image data in a digital format, and outputs a series of digital image data to the sorting buffer 12.
  • The sorting buffer 12 sorts the images included in the series of image data input from the A/D conversion section 11. After sorting the images according to a GOP (Group of Pictures) structure in accordance with the encoding process, the sorting buffer 12 outputs the image data which has been sorted to the subtraction section 13, the motion estimation section 30 and the intra prediction section 40.
  • The image data input from the sorting buffer 12 and predicted image data input by the motion estimation section 30 or the intra prediction section 40 described later are supplied to the subtraction section 13. The subtraction section 13 calculates predicted error data which is a difference between the image data input from the sorting buffer 12 and the predicted image data and outputs the calculated predicted error data to the orthogonal transform section 14.
  • The orthogonal transform section 14 performs orthogonal transform on the predicted error data input from the subtraction section 13. The orthogonal transform to be performed by the orthogonal transform section 14 may be discrete cosine transform (DCT) or Karhunen-Loeve transform, for example. The orthogonal transform section 14 outputs transform coefficient data acquired by the orthogonal transform process to the quantization section 15.
  • The transform coefficient data input from the orthogonal transform section 14 and a rate control signal from the rate control section 18 described later are supplied to the quantization section 15. The quantization section 15 quantizes the transform coefficient data, and outputs the transform coefficient data which has been quantized (hereinafter, referred to as quantized data) to the lossless encoding section 16 and the inverse quantization section 21. Also, the quantization section 15 switches a quantization parameter (a quantization scale) based on the rate control signal from the rate control section 18 to thereby change the bit rate of the quantized data to be input to the lossless encoding section 16.
  • The lossless encoding section 16 generates an encoded stream by performing lossless encoding processing on quantized data input from the quantization section 15. Lossless encoding by the lossless encoding section 16 may be, for example, variable-length encoding or arithmetic encoding. Also, the lossless encoding section 16 multiplexes information on an intra prediction or information on an inter prediction input from the selector 27 into a header region of the encoded stream. Then, the lossless encoding section 16 outputs the generated encoded stream to the accumulation buffer 17.
  • Normally, one coding unit (CU) includes one or more prediction units for the luminance component (Y) and one or more prediction units for each of the color difference components (Cb, Cr). According to the existing technique, quantized data of these prediction units is encoded in the order by component. That is, after data of the luminance component (Y) inside one CU is encoded, data of the color difference component (Cb) is encoded and then, data of the color difference component (Cr) is encoded. According to the present embodiment, by contrast, the lossless encoding section 16 encodes quantized data in the order by PU in the coding unit for which an intra prediction is made to generate an encoded stream. The order of encoding will further be described later.
  • The accumulation buffer 17 temporarily stores an encoded stream input from the lossless encoding section 16 using a storage medium such as a semiconductor memory. Then, the accumulation buffer 17 outputs accumulated encoded streams to a transmission section (not shown) (for example, a communication interface or a connection interface to a peripheral device) at a rate in accordance with the band of the transmission line.
  • The rate control section 18 monitors the free space of the accumulation buffer 17. Then, the rate control section 18 generates a rate control signal according to the free space on the accumulation buffer 17, and outputs the generated rate control signal to the quantization section 15. For example, when there is not much free space on the accumulation buffer 17, the rate control section 18 generates a rate control signal for lowering the bit rate of the quantized data. Also, for example, when the free space on the accumulation buffer 17 is sufficiently large, the rate control section 18 generates a rate control signal for increasing the bit rate of the quantized data.
  • The inverse quantization section 21 performs an inverse quantization process on the quantized data input from the quantization section 15. Then, the inverse quantization section 21 outputs transform coefficient data acquired by the inverse quantization process to the inverse orthogonal transform section 22.
  • The inverse orthogonal transform section 22 performs an inverse orthogonal transform process on the transform coefficient data input from the inverse quantization section 21 to thereby restore the predicted error data. Then, the inverse orthogonal transform section 22 outputs the restored predicted error data to the addition section 23.
  • The addition section 23 adds the restored predicted error data input from the inverse orthogonal transform section 22 and the predicted image data input from the motion estimation section 30 or the intra prediction section 40 to thereby generate decoded image data. Then, the addition section 23 outputs the generated decoded image data to the deblocking filter 24 and the frame memory 25.
  • The deblocking filter 24 performs a filtering process for reducing block distortion occurring at the time of encoding of an image. The deblocking filter 24 filters the decoded image data input from the addition section 23 to remove the block distortion, and outputs the decoded image data after filtering to the frame memory 25.
  • The frame memory 25 stores, using a storage medium, the decoded image data input from the addition section 23 and the decoded image data after filtering input from the deblocking filter 24.
  • The selector 26 reads the decoded image data after filtering which is to be used for inter prediction from the frame memory 25, and supplies the decoded image data which has been read to the motion estimation section 30 as reference image data. Also, the selector 26 reads the decoded image data before filtering which is to be used for intra prediction from the frame memory 25, and supplies the decoded image data which has been read to the intra prediction section 40 as reference image data.
  • In the inter prediction mode, the selector 27 outputs predicted image data which is a result of inter prediction output from the motion estimation section 30 to the subtraction section 13, and also, outputs the information about inter prediction to the lossless encoding section 16. Furthermore, in the intra prediction mode, the selector 27 outputs predicted image data which is a result of intra prediction output from the intra prediction section 40 to the subtraction section 13, and also, outputs the information about intra prediction to the lossless encoding section 16. The selector 27 switches between the inter prediction mode and the intra prediction mode depending on the size of a cost function output from the motion estimation section 30 or the intra prediction section 40.
  • The motion estimation section 30 performs inter prediction processing (inter-frame prediction processing) based on image data (original image data) to be encoded and input from the sorting buffer 12 and decoded image data supplied via the selector 26. For example, the motion estimation section 30 evaluates prediction results by each prediction mode using a predetermined cost function. Next, the motion estimation section 30 selects the prediction mode that produces the minimum cost function value, that is, the prediction mode that produces the highest compression ratio as the optimum prediction mode. Also, the motion estimation section 30 generates predicted image data according to the optimum prediction mode. Then, the motion estimation section 30 outputs prediction mode information indicating the selected optimum prediction mode, information on inter predictions including motion vector information and reference image information, the cost function value, and predicted image data to the selector 27.
  • The intra prediction section 40 performs intra prediction processing on each block set inside an image based on original image data input from the sorting buffer 12 and decoded image data as reference image data supplied from the frame memory 25. Then, the intra prediction section 40 outputs information on intra predictions, including prediction mode information indicating the optimum prediction mode and size related information, the cost function value, and predicted image data to the selector 27. Prediction modes that can be selected by the intra prediction section 40 include, in addition to existing intra prediction modes, a linear model (LM) mode for the color difference component. In contrast to the LM mode described in Non-Patent Literature 2 described above, the LM mode in the present embodiment is characterized in that the ratio of the number of reference pixels to the block size can change. The intra prediction processing by the intra prediction section 40 will be described later in detail.
  • 1-2. Configuration Example of Intra Prediction Section
  • FIG. 2 is a block diagram showing an example of a detailed configuration of the intra prediction section 40 of an image encoding device 10 shown in FIG. 1. Referring to FIG. 2, the intra prediction section 40 includes a prediction controller 42, a coefficient calculation section 44, a prediction section 46, and a mode determination section 48.
  • The prediction controller 42 controls intra prediction processing by the intra prediction section 40. For example, the prediction controller 42 performs intra prediction processing of the luminance component (Y) and then performs intra prediction processing of the color difference components (Cb, Cr) in each processing unit. In the intra prediction processing of the luminance component, the prediction controller 42 causes the prediction section 46 to generate a predicted pixel value of each pixel in a plurality of prediction modes and causes the mode determination section 48 to determine the optimum prediction mode of the luminance component. As a result, the arrangement of prediction units in the coding unit is also decided. In the intra prediction processing of the color difference components, the prediction controller 42 causes the prediction section 46 to generate a predicted pixel value of each pixel in a plurality of prediction modes for each prediction unit and causes the mode determination section 48 to determine the optimum prediction mode of the color difference components.
  • Prediction mode candidates for the luminance component may be a prediction mode adopted by an existing image encoding scheme such as H.264/AVC or a different prediction mode. Prediction mode candidates for the color difference component may also contain a prediction mode adopted by an existing image encoding scheme. Further, prediction mode candidates for the color difference component contain the above-mentioned LM mode. However, in the present embodiment, only if the size of the prediction unit of the color difference component and the size of the prediction unit of the corresponding luminance component satisfy a predetermined condition is the LM mode added as a prediction mode candidate for the prediction unit of the color difference component (that is, to the search range of the optimum prediction mode). The predetermined condition may be that "the size of the prediction unit of the color difference component is equal to or less than a size determined from the size of the prediction unit of the corresponding luminance component in accordance with the chroma-format". The "corresponding prediction unit" is a prediction unit sharing a pixel at least partially. The size of the prediction unit of the color difference component may be decided without being dependent on the size of the prediction unit of the luminance component. Therefore, regarding a prediction unit of a certain color difference component, one or a plurality of corresponding prediction units of the luminance component may be present. "A size determined from the size of the prediction unit of the corresponding luminance component in accordance with the chroma-format" in the above condition is, when the size of the prediction unit of the luminance component is M×N pixels, (M/2)×(N/2) pixels if the chroma-format is 4:2:0, (M/2)×N pixels if the chroma-format is 4:2:2, and M×N pixels if the chroma-format is 4:4:4. The meaning of limiting the search for the LM mode will be described in detail later.
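  • To make the above condition concrete, the following minimal sketch (in Python; the function names and interface are hypothetical, not part of the embodiment) derives the color difference block size determined by the chroma-format from a luminance prediction unit size and checks whether the LM mode may be added as a candidate:

```python
# Hypothetical sketch of the LM-mode search limitation described above.

def chroma_size_from_luma(luma_w, luma_h, chroma_format):
    """Size determined from an M x N luminance prediction unit in
    accordance with the chroma-format (see the condition above)."""
    if chroma_format == "4:2:0":
        return luma_w // 2, luma_h // 2
    if chroma_format == "4:2:2":
        return luma_w // 2, luma_h
    if chroma_format == "4:4:4":
        return luma_w, luma_h
    raise ValueError("unknown chroma-format: " + chroma_format)

def lm_mode_is_candidate(chroma_pu, luma_pu, chroma_format):
    """True if the chroma PU is no larger than the size determined from
    the corresponding luma PU, i.e. the LM mode may be searched."""
    max_w, max_h = chroma_size_from_luma(luma_pu[0], luma_pu[1], chroma_format)
    return chroma_pu[0] <= max_w and chroma_pu[1] <= max_h

# A 16x16 luma PU with 4:2:0 sampling admits the LM mode for chroma PUs
# of 8x8 pixels or smaller.
assert lm_mode_is_candidate((8, 8), (16, 16), "4:2:0")
assert not lm_mode_is_candidate((16, 16), (16, 16), "4:2:0")
```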
  • According to the technique described in Non-Patent Literature 2 described above, the ratio of the number of reference pixels to the block size when coefficients of a prediction function in LM mode are calculated is constant. Therefore, the number of reference pixels increases correspondingly with an increasing block size. In the present embodiment, on the other hand, the prediction controller 42 controls the above ratio variably. The block size here is in principle the size of the prediction unit. The above-mentioned ratio controlled by the prediction controller 42, that is, the ratio of the number of reference pixels to the block size, will herein be called a "reference ratio". The control of the reference ratio by the prediction controller 42 is typically performed in accordance with the block size. Further, the prediction controller 42 may control the reference ratio in accordance with a chroma-format affecting the block size of the color difference component. The prediction controller 42 may also control the reference ratio in accordance with parameters (for example, a profile or level) defining the capabilities of a device for encoding and decoding images. A plurality of scenarios of controlling the reference ratio by the prediction controller 42 will be described later in more detail.
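  • One plausible shape of such control, consistent with the scenario summarized earlier in which the number of reference pixels is held constant once the block size exceeds a predetermined size, is sketched below; the cap of 16 reference pixels per block edge is an assumption chosen purely for illustration:

```python
# Illustrative reference-ratio control: the count of reference pixels per
# block edge grows with the block size up to an assumed cap, so the
# reference ratio (count / block size) shrinks for large blocks.

def reference_pixels_per_edge(block_size, cap=16):
    return min(block_size, cap)

for n in (4, 8, 16, 32, 64):
    count = reference_pixels_per_edge(n)
    print(f"block {n}x{n}: {count} reference pixels per edge "
          f"(reference ratio {count / n:g})")
```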
  • The coefficient calculation section 44 calculates coefficients of a prediction function used by the prediction section 46 in LM mode by referring to pixels around the prediction unit to which the pixel to be predicted belongs, that is, reference pixels. The prediction function used by the prediction section 46 is typically a linear function of the value of the luminance component. The number of reference pixels referenced by the coefficient calculation section 44 to calculate coefficients of a prediction function is controlled by, as described above, the prediction controller 42.
  • The prediction section 46 predicts the pixel value of the luminance component and the pixel value of the color difference component of a pixel to be predicted according to various prediction mode candidates under the control of the prediction controller 42. Examples of prediction mode candidates used by the prediction section 46 will be described later in more detail. Predicted image data generated as a result of prediction by the prediction section 46 is output to the mode determination section 48 for each prediction mode.
  • The mode determination section 48 calculates the cost function value of each prediction mode based on original image data input from the sorting buffer 12 and predicted image data input from the prediction section 46. Then, based on the calculated cost function value, the mode determination section 48 decides the optimum prediction mode for the luminance component and the arrangement of prediction units inside the coding unit. Similarly, based on the cost function value of the color difference component, the mode determination section 48 decides the optimum prediction mode for the color difference component and the arrangement of prediction units. Then, the mode determination section 48 outputs information on intra predictions including prediction mode information indicating the decided optimum prediction mode and size related information, the cost function value, and predicted image data including predicted pixel values of the luminance component and the color difference component to the selector 27. The size related information output from the mode determination section 48 may contain, in addition to information to identify the size of each prediction unit, information specifying the chroma-format.
  • [1-3. Prediction Mode Candidates]
  • Next, prediction mode candidates that can be used by the prediction section 46 of the intra prediction section 40 will be described.
  • (1) Prediction Mode Candidates for the Luminance Component
  • Prediction mode candidates for the luminance component may be a prediction mode adopted by an existing image encoding scheme such as H.264/AVC. FIGS. 3 to 5 are explanatory views illustrating such prediction mode candidates when the size of the prediction unit is 4×4 pixels.
  • Referring to FIG. 3, nine prediction modes (Mode 0 to Mode 8) that can be used for the prediction unit of the 4×4 pixels are shown. In FIG. 4, the prediction direction corresponding to each mode number is schematically shown. In FIG. 5, lower-case alphabetic characters a to p represent the pixel value of each pixel (that is, each pixel to be predicted) in the prediction unit of 4×4 pixels. Rz (z=a, b, . . . , m) around the prediction unit represents the pixel value of an encoded reference pixel.
  • For example, the prediction direction in Mode 0 is a vertical direction, and each predicted pixel value is calculated as below:

  • a=e=i=m=Ra

  • b=f=j=n=Rb

  • c=g=k=o=Rc

  • d=h=l=p=Rd
  • The prediction direction in Mode 1 is horizontal, and each predicted pixel value is calculated as below:

  • a=b=c=d=Ri

  • e=f=g=h=Rj

  • i=j=k=l=Rk

  • m=n=o=p=Rl
  • Mode 2 represents the DC prediction (average value prediction) and each predicted pixel value is calculated according to one of the following four formulas depending on which reference pixel can be used.

  • a=b= . . . =p=(Ra+Rb+Rc+Rd+Ri+Rj+Rk+Rl+4)>>3

  • a=b= . . . =p=(Ra+Rb+Rc+Rd+2)>>2

  • a=b= . . . =p=(Ri+Rj+Rk+Rl+2)>>2

  • a=b= . . . =p=128
  • The prediction direction in Mode 3 is diagonal down left, and each predicted pixel value is calculated as below:

  • a=(Ra+2Rb+Rc+2)>>2

  • b=e=(Rb+2Rc+Rd+2)>>2

  • c=f=i=(Rc+2Rd+Re+2)>>2

  • d=g=j=m=(Rd+2Re+Rf+2)>>2

  • h=k=n=(Re+2Rf+Rg+2)>>2

  • l=o=(Rf+2Rg+Rh+2)>>2

  • p=(Rg+3Rh+2)>>2
  • The prediction direction in Mode 4 is diagonal down right, and each predicted pixel value is calculated as below:

  • m=(Rj+2Rk+Rl+2)>>2

  • i=n=(Ri+2Rj+Rk+2)>>2

  • e=j=o=(Rm+2Ri+Rj+2)>>2

  • a=f=k=p=(Ra+2Rm+Ri+2)>>2

  • b=g=l=(Rm+2Ra+Rb+2)>>2

  • c=h=(Ra+2Rb+Rc+2)>>2

  • d=(Rb+2Rc+Rd+2)>>2
  • The prediction direction in Mode 5 is vertical right, and each predicted pixel value is calculated as below:

  • a=j=(Rm+Ra+1)>>1

  • b=k=(Ra+Rb+1)>>1

  • c=l=(Rb+Rc+1)>>1

  • d=(Rc+Rd+1)>>1

  • e=n=(Ri+2Rm+Ra+2)>>2

  • f=o=(Rm+2Ra+Rb+2)>>2

  • g=p=(Ra+2Rb+Rc+2)>>2

  • h=(Rb+2Rc+Rd+2)>>2

  • i=(Rm+2Ri+Rj+2)>>2

  • m=(Ri+2Rj+Rk+2)>>2
  • The prediction direction in Mode 6 is horizontal down, and each predicted pixel value is calculated as below:

  • a=g=(Rm+Ri+1)>>1

  • b=h=(Ri+2Rm+Ra+2)>>2

  • c=(Rm+2Ra+Rb+2)>>2

  • d=(Ra+2Rb+Rc+2)>>2

  • e=k=(Ri+Rj+1)>>1

  • f=l=(Rm+2Ri+Rj+2)>>2

  • i=o=(Rj+Rk+1)>>1

  • j=p=(Ri+2Rj+Rk+2)>>2

  • m=(Rk+Rl+1)>>1

  • n=(Rj+2Rk+Rl+2)>>2
  • The prediction direction in Mode 7 is vertical left, and each predicted pixel value is calculated as below:

  • a=(Ra+Rb+1)>>1

  • b=i=(Rb+Rc+1)>>1

  • c=j=(Rc+Rd+1)>>1

  • d=k=(Rd+Re+1)>>1

  • l=(Re+Rf+1)>>1

  • e=(Ra+2Rb+Rc+2)>>2

  • f=m=(Rb+2Rc+Rd+2)>>2

  • g=n=(Rc+2Rd+Re+2)>>2

  • h=o=(Rd+2Re+Rf+2)>>2

  • p=(Re+2Rf+Rg+2)>>2
  • The prediction direction in Mode 8 is horizontal up, and each predicted pixel value is calculated as below:

  • a=(Ri+Rj+1)>>1

  • b=(Ri+2Rj+Rk+2)>>2

  • c=e=(Rj+Rk+1)>>1

  • d=f=(Rj+2Rk+Rl+2)>>2

  • g=i=(Rk+Rl+1)>>1

  • h=j=(Rk+3Rl+2)>>2

  • k=l=m=n=o=p=Rl
  • Referring to FIG. 6 , nine prediction modes (Mode 0 to Mode 8) that can be used for the prediction unit of 8×8 pixels are shown. The prediction direction in Mode 0 is vertical. The prediction direction in Mode 1 is horizontal. Mode 2 represents the DC prediction (average value prediction). The prediction direction in Mode 3 is DIAGONAL_DOWN_LEFT. The prediction direction in Mode 4 is DIAGONAL_DOWN_RIGHT. The prediction direction in Mode 5 is VERTICAL_RIGHT. The prediction direction in Mode 6 is HORIZONTAL_DOWN. The prediction direction in Mode 7 is VERTICAL_LEFT. The prediction direction in Mode 8 is HORIZONTAL_UP.
  • Referring to FIG. 7, four prediction modes (Mode 0 to Mode 3) that can be used for the prediction unit of the 16×16 pixels are shown. The prediction direction in Mode 0 is vertical. The prediction direction in Mode 1 is horizontal. Mode 2 represents the DC prediction (average value prediction). Mode 3 represents the plane prediction.
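  • For concreteness, the following sketch implements the first three of the 4×4 prediction modes listed above (vertical, horizontal, and DC prediction with its four availability-dependent variants); the array-based interface and function name are hypothetical, not the embodiment's actual implementation:

```python
# Sketch of Modes 0-2 for a 4x4 luminance prediction unit.
# top  = [Ra, Rb, Rc, Rd] (reference row above the block), or None.
# left = [Ri, Rj, Rk, Rl] (reference column left of the block), or None.

def predict_4x4(mode, top=None, left=None):
    if mode == 0:                       # vertical: a=e=i=m=Ra, etc.
        return [top[:] for _ in range(4)]
    if mode == 1:                       # horizontal: a=b=c=d=Ri, etc.
        return [[left[y]] * 4 for y in range(4)]
    if mode == 2:                       # DC (average value) prediction
        if top and left:
            dc = (sum(top) + sum(left) + 4) >> 3
        elif top:
            dc = (sum(top) + 2) >> 2
        elif left:
            dc = (sum(left) + 2) >> 2
        else:                           # no reference pixels available
            dc = 128
        return [[dc] * 4 for _ in range(4)]
    raise NotImplementedError("only Modes 0-2 are sketched here")

# Vertical prediction repeats the reference row in every row of the block.
assert predict_4x4(0, top=[10, 20, 30, 40])[3] == [10, 20, 30, 40]
```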
  • (2) Prediction Mode Candidates for the Color Difference Component
  • The prediction mode for the color difference component can be selected independently of the prediction mode for the luminance component. In FIG. 8, among prediction mode candidates that can be used when the block size of the color difference component is 8×8 pixels, four prediction modes (Mode 0 to Mode 3) adopted for existing image encoding schemes such as H.264/AVC are shown.
  • Mode 0 represents the DC prediction (average value prediction). Here, the predicted pixel value of the pixel position (x, y) is represented as PrC(x, y), eight left reference pixel values are represented as ReC(−1, n), and eight upper reference pixel values are represented as ReC(n, −1). C as a subscript means the color difference component. n is an integer equal to 0 or more and equal to 7 or less. Then, the predicted pixel value PrC(x, y) is calculated according to one of the following three formulas depending on which reference pixels are available:
  • [Math 1]
  • Pr C[x,y]=(Σn=0..7(ReC[−1,n]+ReC[n,−1])+8)>>4
  • Pr C[x,y]=(Σn=0..7 ReC[−1,n]+4)>>3
  • Pr C[x,y]=(Σn=0..7 ReC[n,−1]+4)>>3
  • The prediction direction in Mode 1 is horizontal, and the predicted pixel value PrC(x, y) is calculated as below:

  • Pr C[x,y]=ReC[−1,y]  [Math 2]
  • The prediction direction in Mode 2 is vertical, and the predicted pixel value PrC(x, y) is calculated as below:

  • Pr C[x,y]=ReC[x,−1]  [Math 3]
  • Mode 3 represents the plane prediction. The predicted pixel value PrC(x, y) is calculated as below:
  • [Math 4]
  • Pr C[x,y]=Clip1((a+b·(x−3)+c·(y−3)+16)>>5)
  • a=16·(ReC[−1,7]+ReC[7,−1])
  • b=(17·H+16)>>5
  • c=(17·V+16)>>5
  • H=Σx=1..4 x·(ReC[3+x,−1]−ReC[3−x,−1])
  • V=Σy=1..4 y·(ReC[−1,3+y]−ReC[−1,3−y])
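  • The plane prediction of Mode 3 may be sketched as below; the accessor-based interface is an assumption for illustration, and the right shift of possibly negative intermediate values follows Python's floor semantics rather than the letter of the standard text:

```python
# Sketch of the chroma plane prediction ([Math 4]) for an 8x8 block.

def clip1(v, bit_depth=8):
    """Clip to the valid sample range, as Clip1 does in [Math 4]."""
    return max(0, min(v, (1 << bit_depth) - 1))

def plane_prediction_8x8(re_c):
    """re_c(x, y): reconstructed chroma reference pixel at (x, y); x or y
    equal to -1 addresses the row above / column left of the block."""
    a = 16 * (re_c(-1, 7) + re_c(7, -1))
    h = sum(x * (re_c(3 + x, -1) - re_c(3 - x, -1)) for x in range(1, 5))
    v = sum(y * (re_c(-1, 3 + y) - re_c(-1, 3 - y)) for y in range(1, 5))
    b = (17 * h + 16) >> 5
    c = (17 * v + 16) >> 5
    return [[clip1((a + b * (x - 3) + c * (y - 3) + 16) >> 5)
             for x in range(8)] for y in range(8)]
```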
  • Further, in the present embodiment, the LM mode (as Mode 4, for example) that will be described in the next section can be selected.
  • [1-4. Details of LM Mode]
  • In LM mode, the predicted pixel value for the color difference component is calculated by using a linear function of the value of the corresponding luminance component. For example, the prediction function used in LM mode may be the following linear function described in Non-Patent Literature 2:

  • [Math 5]

  • Pr C[x,y]=α·ReL′[x,y]+β  (1)
  • In Formula (1), ReL′(x, y) represents the value of the pixel position (x, y) after resampling of the luminance components of a decoded image (so-called reconstructed image). Instead of a decoded image, an original image is used for image encoding. The luminance components are resampled when the resolution of the color difference component is different from the resolution of the luminance component depending on the chroma-format. If, for example, the chroma-format is 4:2:0, the luminance components are resampled according to the following formula in such a way that the number of pixels is reduced by half in both the horizontal direction and the vertical direction. ReL(u, v) represents the value of the luminance component in the pixel position (u, v) before resampling.
  • [Math 6]

  • ReL′[x,y]=(ReL[2x,2y]+ReL[2x,2y+1])>>1  (2)
  • If the chroma-format is 4:2:2, the luminance components are resampled in such a way that the number of pixels is reduced by half in the horizontal direction. If the chroma-format is 4:4:4, the luminance components are not resampled.
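  • A minimal sketch of the 4:2:0 resampling of Formula (2) follows; it assumes the luminance plane ReL(u, v) is stored as a row-major 2-D list with even width and height:

```python
# Formula (2): ReL'[x, y] = (ReL[2x, 2y] + ReL[2x, 2y + 1]) >> 1,
# halving the luminance plane in both directions for the 4:2:0 case.

def resample_luma_420(re_l):
    h, w = len(re_l), len(re_l[0])
    return [[(re_l[2 * y][2 * x] + re_l[2 * y + 1][2 * x]) >> 1
             for x in range(w // 2)]
            for y in range(h // 2)]

# A 2x2 plane collapses to the average of its vertically adjacent pair.
assert resample_luma_420([[100, 0], [102, 0]]) == [[101]]
```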
  • The coefficient α in Formula (1) is calculated according to the following formula (3). Also, the coefficient β in Formula (1) is calculated according to the following formula (4).
  • [Math 7]
  • α=(I·Σi=0..I ReC(i)·ReL′(i)−Σi=0..I ReC(i)·Σi=0..I ReL′(i))/(I·Σi=0..I ReL′(i)·ReL′(i)−(Σi=0..I ReL′(i))^2)  (3)
  • β=(Σi=0..I ReC(i)−α·Σi=0..I ReL′(i))/I  (4)
  • In Formulas (3) and (4), I represents the number of reference pixels. If, for example, the block size of the color difference component is 8×8 pixels and eight left reference pixels and eight upper reference pixels are both available, I is calculated as I=8+8=16.
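  • The calculation of Formulas (3) and (4) and the prediction of Formula (1) may be sketched as below; floating-point arithmetic is used for readability, whereas an actual codec would use the integer arithmetic of Non-Patent Literature 2, and the function names are hypothetical:

```python
# Least-squares fit of the linear prediction function of Formula (1)
# over the I reference pixel pairs (Formulas (3) and (4)).

def lm_coefficients(re_c, re_l):
    """re_c: chroma reference pixels ReC(i); re_l: co-located (resampled)
    luminance reference pixels ReL'(i). Returns (alpha, beta)."""
    i_count = len(re_c)                     # I in Formulas (3) and (4)
    sum_c, sum_l = sum(re_c), sum(re_l)
    sum_cl = sum(c * l for c, l in zip(re_c, re_l))
    sum_ll = sum(l * l for l in re_l)
    denom = i_count * sum_ll - sum_l * sum_l
    alpha = (i_count * sum_cl - sum_c * sum_l) / denom if denom else 0.0
    beta = (sum_c - alpha * sum_l) / i_count
    return alpha, beta

def lm_predict(alpha, beta, re_l_prime):
    """Formula (1): predicted chroma value from a resampled luma value."""
    return alpha * re_l_prime + beta
```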
  • [1-5. Order of New Encoding Processing]
  • The above Formula (1) to Formula (4) are similar to the formulas described in Non-Patent Literature 2. ReL′(x, y) on the right-hand side of Formula (1) represents the value of the pixel position (x, y) after resampling of luminance components of a decoded image (a reconstructed image by an encoder or a decoded image by a decoder). Thus, if the predicted pixel value is calculated in blocks, it is necessary to at least temporarily hold the pixel values after resampling of the luminance component of the corresponding block until the predicted pixel value is calculated for the color difference component.
  • FIG. 9 is an explanatory view illustrating a difference between the order of processing in an existing technique and the order of processing in the present embodiment. Referring to FIG. 9, as an example, LCU1 including coding units CU0, CU1, CU2 and other coding units is shown. Further, the coding unit CU0 is divided into four prediction units PU00, PU01, PU02, PU03. The coding unit CU1 is divided into four prediction units PU10, PU11, PU12, PU13. The coding unit CU2 is divided into four prediction units PU20, PU21, PU22, PU23. To simplify the description, it is assumed that prediction units of all color difference components have sizes determined from sizes of prediction units of corresponding luminance components in accordance with the chroma-format. That is, there is a one-to-one correspondence between the prediction unit of the luminance component and the prediction unit of the color difference component. According to the existing technique, encoding processing of image data for the LCU is performed in the order of Y00→Y01→Y02→Y03→Cb00→Cb01→Cb02→Cb03→Cr00→Cr01→Cr02→Cr03→Y10→ . . . . Here, YNN represents encoding processing of the luminance component of the prediction unit PUNN, and CbNN and CrNN each represent encoding processing of a color difference component of the prediction unit PUNN. This also applies to intra prediction processing. That is, according to the existing technique, encoding processing is performed for each component in each coding unit. Such an order of processing is herein called an "order by component". In the present embodiment, on the other hand, encoding processing of image data is performed for each prediction unit in each coding unit. Such an order of processing is herein called an "order by PU". When, for example, the order by PU is applied to LCU1 in FIG. 9, encoding processing of the luminance component Y00 and the two color difference components Cb00, Cr00 of the prediction unit PU00 is first performed. Next, encoding processing of the luminance component Y01 and the two color difference components Cb01, Cr01 of the prediction unit PU01 is performed. Then, encoding processing of three components is repeated in the order of the prediction units PU02, PU03, PU10, . . . .
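  • The difference between the two orders can be made concrete with the short sketch below, which simply generates both processing sequences for the four prediction units of the coding unit CU0 (a one-to-one luma/chroma correspondence is assumed, as in the example of FIG. 9):

```python
# Generate the "order by component" and "order by PU" encoding sequences.

def order_by_component(pus):
    return ([f"Y{p}" for p in pus] +
            [f"Cb{p}" for p in pus] +
            [f"Cr{p}" for p in pus])

def order_by_pu(pus):
    order = []
    for p in pus:
        order += [f"Y{p}", f"Cb{p}", f"Cr{p}"]
    return order

pus = ["00", "01", "02", "03"]
print("->".join(order_by_component(pus)))  # Y00->Y01->...->Cb00->...->Cr03
print("->".join(order_by_pu(pus)))         # Y00->Cb00->Cr00->Y01->...
```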
  • In the order by component according to the existing technique, the required amount of memory resources to hold luminance component values referenced for intra prediction in LM mode is affected by the maximum size of the coding unit. If, for example, the maximum size of the coding unit is 128×128 pixels, the chroma-format is 4:2:0, and the bit depth is 10 bits, memory resources of 64×64×10 bits may be consumed. In the order by PU described above, on the other hand, the required amount of memory resources is affected by the maximum size of the prediction unit of the color difference component. In HEVC, the maximum size of the prediction unit of the luminance component is 64×64 pixels. Thus, when the order by PU described above is adopted, the required amount of memory resources is at most 32×32×10 bits if the chroma-format is 4:2:0 and the bit depth is 10 bits.
  • Further in the present embodiment, as described above, the search for the LM mode is limited. That is, the LM mode is added as a prediction mode candidate for a prediction unit of the color difference component only if the size of that prediction unit is equal to or less than a size determined from the size of the prediction unit of the corresponding luminance component in accordance with the chroma-format. As a result, when the LM mode is selected, exactly one prediction unit of the luminance component corresponds to one prediction unit of the color difference component. Accordingly, when the order by PU is adopted, the buffered data of a first luminance prediction unit can in principle be cleared as soon as processing moves from that first luminance prediction unit to a second luminance prediction unit.
  • To see why limiting the search for the LM mode is reasonable, note that when a block is divided into a plurality of prediction units, the correlation of the image between those prediction units is generally low. If a prediction function were decided based on values of luminance components extending over a plurality of weakly correlated luminance prediction units, the prediction function could not be expected to model the correlation between the luminance component and the color difference component with adequate precision. Thus, even if the above limitation is imposed on the application of the LM mode, only prediction modes from which high prediction precision cannot be expected are excluded from the prediction mode candidates. No serious disadvantage therefore arises; rather, an advantage of reduced processing cost is obtained.
  • Incidentally, if the size of the prediction unit of the luminance component is 4×4 pixels and the chroma-format is 4:2:0, the size of the prediction unit of the color difference component determined in accordance with the chroma-format is 2×2 pixels. However, the prediction unit of 2×2 pixels cannot be used in HEVC. On the other hand, there is a report that the LM mode contributes more to improvement of encoding efficiency with a decreasing size of the prediction unit. Thus, it is also beneficial to exceptionally permit the application of the LM mode based on the prediction units of a plurality of luminance components only when the size of the prediction unit of the luminance component is small (for example, 4×4 pixels). In this case, the lossless encoding section 16 successively encodes the prediction units of a plurality of luminance components (for example, four prediction units of 4×4 pixels) corresponding to the prediction unit (for example, 4×4 pixels) of one color difference component and then can encode the prediction unit of the color difference component.
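  • The limitation on the search for the LM mode described above, together with the exception for 4×4 luminance prediction units, can be summarized by the following sketch. The function names and the chroma-format encoding ("420", "422", "444") are assumptions made for the example:

    def chroma_size_from_luma(luma_w, luma_h, chroma_format):
        """Block size of the color difference component determined from the
        size of the luminance block in accordance with the chroma-format."""
        if chroma_format == "420":
            return luma_w // 2, luma_h // 2
        if chroma_format == "422":
            return luma_w // 2, luma_h  # halved in the horizontal direction only
        return luma_w, luma_h           # 4:4:4

    def lm_mode_is_candidate(luma_pu, chroma_pu, chroma_format):
        """luma_pu, chroma_pu: (width, height) of the corresponding PUs."""
        derived = chroma_size_from_luma(luma_pu[0], luma_pu[1], chroma_format)
        if chroma_pu[0] <= derived[0] and chroma_pu[1] <= derived[1]:
            return True  # regular rule: one luma PU per chroma PU
        # Exception: a 4x4 luma PU would yield a 2x2 chroma PU, which HEVC
        # does not allow, so several 4x4 luma PUs may serve one chroma PU.
        return luma_pu == (4, 4)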
  • FIG. 10A is an explanatory view illustrating a first example of the order of new encoding processing adopted in the present embodiment. The left side of FIG. 10A shows the arrangement of prediction units of the luminance component (hereinafter called luminance prediction units) in the coding unit CU1. The right side of FIG. 10A shows the arrangement of prediction units of the color difference component (hereinafter called color difference prediction units) in the coding unit CU1. It is assumed that the size of the coding unit CU1 is 128×128 pixels and the chroma-format is 4:2:0. The coding unit CU1 contains seven luminance prediction units PU10 to PU16. The size of the luminance prediction units PU10, PU11, PU16 is 64×64 pixels and the size of the luminance prediction units PU12 to PU15 is 32×32 pixels. The coding unit CU1 contains seven color difference prediction units PU20 to PU26. The size of the color difference prediction unit PU20 is 32×32 pixels and the color difference prediction unit PU20 corresponds to the luminance prediction unit PU10. The size of the color difference prediction units PU21 to PU24 is 16×16 pixels and the color difference prediction units PU21 to PU24 correspond to the luminance prediction unit PU11. The size of the color difference prediction unit PU25 is 32×32 pixels and the color difference prediction unit PU25 corresponds to the luminance prediction units PU12 to PU15. The size of the color difference prediction unit PU26 is 32×32 pixels and the color difference prediction unit PU26 corresponds to the luminance prediction unit PU16. Among these color difference prediction units PU20 to PU26, the size (32×32 pixels) of the color difference prediction unit PU25 is larger than the size (16×16 pixels) determined in accordance with the chroma-format from the size (32×32 pixels) of the corresponding luminance prediction units PU12 to PU15. Therefore, regarding the color difference prediction unit PU25, the LM mode is excluded from the prediction mode candidates. For the other color difference prediction units, on the other hand, the LM mode remains a prediction mode candidate.
  • In the example of FIG. 10A, the encoding processing is performed in the order of Y10→Cb20→Cr20→Y11→Cb21→Cr21→Cb22→Cr22→Cb23→Cr23→Cb24→Cr24→Y12→Y13→Y14→Y15→Cb25→Cr25→Y16→Cb26→Cr26. That is, for example, after the luminance prediction unit PU10 is encoded, the color difference prediction unit PU20 is encoded before the luminance prediction unit PU11 is encoded. Also, after the luminance prediction unit PU11 is encoded, the color difference prediction units PU21 to PU24 are encoded before the luminance prediction unit PU12 is encoded. Therefore, even if the buffer size is 32×32 pixels, the value after resampling of the luminance component of the corresponding luminance prediction unit is not yet cleared at the time of intra prediction of each color difference prediction unit, and thus the LM mode can be used for each color difference prediction unit.
  • FIG. 10B is an explanatory view illustrating a second example of the order of new encoding processing adopted in the present embodiment. The left side of FIG. 10B shows the arrangement of luminance prediction units in the coding unit CU2. The right side of FIG. 10B shows the arrangement of color difference prediction units in the coding unit CU2. It is assumed that the size of the coding unit CU2 is 16×16 pixels and the chroma-format is 4:2:0. The coding unit CU2 contains 10 luminance prediction units PU30 to PU39. The size of the luminance prediction units PU30, PU39 is 8×8 pixels and the size of the luminance prediction units PU31 to PU38 is 4×4 pixels. The coding unit CU2 contains four color difference prediction units PU40 to PU43. The size of the color difference prediction units PU40 to PU43 is 4×4 pixels, which is the minimum size that can be used as a color difference prediction unit. The color difference prediction unit PU40 corresponds to the luminance prediction unit PU30, the color difference prediction unit PU41 corresponds to the luminance prediction units PU31 to PU34, the color difference prediction unit PU42 corresponds to the luminance prediction units PU35 to PU38, and the color difference prediction unit PU43 corresponds to the luminance prediction unit PU39. The size (4×4 pixels) of the color difference prediction units PU41, PU42 is larger than the size (2×2 pixels) determined in accordance with the chroma-format from the size (4×4 pixels) of the corresponding luminance prediction units PU31 to PU34 and PU35 to PU38. However, because the size of the corresponding luminance prediction units is 4×4 pixels, the use of the LM mode is exceptionally permitted for the color difference prediction units PU41, PU42. Therefore, in the example of FIG. 10B, the LM mode is added as an object of the search for all the color difference prediction units PU40 to PU43.
  • In the example of FIG. 10B, the encoding processing is performed in the order of Y30→Cb40→Cr40→Y31→Y32→Y33→Y34→Cb41→Cr41→Y35→Y36→Y37→Y38→Cb42→Cr42→Y39→Cb43→Cr43. That is, for example, before the color difference prediction unit PU41 is encoded, the luminance prediction units PU31 to PU34 are encoded. Therefore, an intra prediction can be made in LM mode for the color difference prediction unit PU41 based on values of the luminance components of the four luminance prediction units PU31 to PU34 of 4×4 pixels. Similarly, before the color difference prediction unit PU42 is encoded, the luminance prediction units PU35 to PU38 are encoded. Therefore, an intra prediction can be made in LM mode for the color difference prediction unit PU42 based on values of the luminance components of the four luminance prediction units PU35 to PU38 of 4×4 pixels.
  • [1-6. Control of the Number of Reference Pixels]
  • FIGS. 11A and 11B are explanatory views further illustrating the reference pixels in LM mode.
  • In the example of FIG. 11A, the size of the prediction unit (PU) is 16×16 pixels and the chroma-format is 4:2:0. In this case, the block size of the color difference component is 8×8 pixels. The number of reference pixels ReC(i) of the color difference component is 8+8=16 (if left and upper reference pixels are both available). The number of reference pixels ReL′(i) of the luminance component is also 8+8=16 as a result of resampling.
  • In the example of FIG. 11B, the size of the prediction unit (PU) is 8×8 pixels and the chroma-format is 4:2:0. In this case, the block size of the color difference component is 4×4 pixels. The number of reference pixels ReC(i) of the color difference component is 4+4=8. The number of reference pixels ReL′(i) of the luminance component is also 4+4=8 as a result of resampling.
  • Comparison of the two examples of FIGS. 11A and 11B shows that the ratio of the number of reference pixels to the block size remains unchanged if the other conditions, such as the chroma-format, are the same. That is, while the size of one side of the prediction unit in the example of FIG. 11A is 16 pixels and the number I of reference pixels is 16, the size of one side of the prediction unit in the example of FIG. 11B is eight pixels and the number I of reference pixels is eight. Thus, if the number I of reference pixels increases with an increasing block size, the processing cost needed to calculate the coefficient α and the coefficient β using Formula (3) and Formula (4) also increases. As will be understood by focusing particularly on Formula (3), the number of multiplications of pixel values increases with the number I of reference pixels. Therefore, if the number of reference pixels is not appropriately controlled, a large amount of processing resources is likely to be consumed in calculating the coefficient α and the coefficient β when the block size is large. Thus, the prediction controller 42 variably controls the number of reference pixels used when the coefficient calculation section 44 calculates the coefficient α and the coefficient β in LM mode.
  • The prediction controller 42 typically controls the reference ratio as a ratio of the number of reference pixels to the block size so as to decrease with an increasing block size. An increase in processing cost is thereby curbed when the block size increases. When the block size is small to the extent that the processing cost presents no problem, the prediction controller 42 may not change the reference ratio even if the block sizes are different. Five exemplary scenarios of control of the reference ratio will be described below with reference to FIGS. 12 to 20B.
  • (1) First Scenario
  • FIG. 12 is an explanatory view showing an example of the definition of the reference ratio in a first scenario.
  • In the first scenario, the reference ratio is “1:1” if the size of the prediction unit (PU) is 4×4 pixels. The reference ratio “1:1” means that all of the reference pixels shown in FIGS. 11A and 11B are used. In this case, if the chroma-format is 4:2:0, the number I of reference pixels is 2 (vertical direction Iv) + 2 (horizontal direction Ih) = 4. If the chroma-format is 4:2:2, the number I of reference pixels is 4+2=6. If the chroma-format is 4:4:4, the number I of reference pixels is 4+4=8.
  • Similarly, when the size of the prediction unit is 8×8 pixels, the reference ratio is also “1:1”. In this case, if the chroma-format is 4:2:0, the number I of reference pixels is 4+4=8. If the chroma-format is 4:2:2, the number I of reference pixels is 8+4=12. If the chroma-format is 4:4:4, the number I of reference pixels is 8+8=16.
  • When the size of the prediction unit is 16×16 pixels, by contrast, the reference ratio is “2:1”. The reference ratio “2:1” means that only half of the reference pixels shown in FIGS. 11A and 11B are used. That is, the coefficient calculation section 44 thins out half of the reference pixels and uses only the remaining reference pixels when calculating the coefficient α and the coefficient β. In this case, if the chroma-format is 4:2:0, the number I of reference pixels is (8/2)+(8/2)=8. If the chroma-format is 4:2:2, the number I of reference pixels is (16/2)+(8/2)=12. If the chroma-format is 4:4:4, the number I of reference pixels is (16/2)+(16/2)=16.
  • FIG. 13A shows an example of reference pixel settings when the PU size is 16×16 pixels and the chroma-format is 4:2:0. In the example of FIG. 13A, every second reference pixel of the color difference component and every second reference pixel of the luminance component are thinned out. As a result, the number IC of reference pixels of the color difference component and the number IL of reference pixels of the luminance component are both eight.
  • Further, when the size of the prediction unit is 32×32 pixels, the reference ratio is “4:1”. The reference ratio “4:1” means that only one fourth of the reference pixels shown in FIGS. 11A and 11B are used. That is, the coefficient calculation section 44 thins out three fourths of the reference pixels and uses only the remaining reference pixels when calculating the coefficient α and the coefficient β. In this case, if the chroma-format is 4:2:0, the number I of reference pixels is (16/4)+(16/4)=8. If the chroma-format is 4:2:2, the number I of reference pixels is (32/4)+(16/4)=12. If the chroma-format is 4:4:4, the number I of reference pixels is (32/4)+(32/4)=16.
  • FIG. 13B shows an example of reference pixel settings when the PU size is 32×32 pixels and the chroma-format is 4:2:0. In the example of FIG. 13B, three reference pixels in every four consecutive pixels of the color difference component and three reference pixels in every four consecutive pixels of the luminance component are thinned out. As a result, the number IC of reference pixels of the color difference component and the number IL of reference pixels of the luminance component are both eight.
  • According to mapping between the block size and reference ratio like in the first scenario, the number I of reference pixels is constant when the block size is 8×8 pixels or more as long as the chroma-format is the same. Therefore, an increase in processing cost is curbed when the block size increases. In addition, by controlling the reference ratio so that the number of reference pixels is constant when the block size exceeds a predetermined size, coefficient calculation processing by the coefficient calculation section 44 can be performed by using a small common circuit or logic. Accordingly, an increase of the circuit scale or logic scale can also be curbed.
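  • A minimal sketch of the mapping of the first scenario (FIG. 12) follows; the table and function names are illustrative only, and the chroma-format is encoded as in the earlier sketch:

    REFERENCE_RATIO = {4: 1, 8: 1, 16: 2, 32: 4}  # PU size -> "N:1" stored as N

    def num_reference_pixels(pu_size, chroma_format):
        """Number I of reference pixels (left plus upper) after thinning."""
        ratio = REFERENCE_RATIO[pu_size]
        if chroma_format == "420":
            vert = horiz = pu_size // 2
        elif chroma_format == "422":
            vert, horiz = pu_size, pu_size // 2
        else:  # 4:4:4
            vert = horiz = pu_size
        return vert // ratio + horiz // ratio

    assert num_reference_pixels(16, "420") == 8   # (8/2)+(8/2), FIG. 13A
    assert num_reference_pixels(32, "444") == 16  # (32/4)+(32/4)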
  • By setting the number of reference pixels to be thinned out to zero when the block size falls short of a predetermined size, degradation of prediction accuracy in LM mode due to an insufficient number of reference pixels can be prevented. Particularly when an intra prediction is comparatively difficult to make because the content of an image is complex (that is, pixel values fluctuate sharply in space), smaller prediction units are likely to be set inside the image. By securing a sufficient number of reference pixels in such a case, degradation of prediction accuracy in LM mode can be prevented.
  • As described here, the number of reference pixels thinned out when the coefficient calculation section 44 calculates the coefficients changes in accordance with the reference ratio. That is, the coefficient calculation section 44 also plays the role of a thinning section that thins out the reference pixels referenced for an intra prediction in LM mode at the reference ratio corresponding to the block size to be predicted. This also applies to a coefficient calculation section 94 of an image decoding device 60 described later. However, instead of thinning out reference pixels, the number of reference pixels may be variably controlled by deriving one representative value from a plurality of reference pixel values. If, for example, the reference ratio is “4:1”, the average or the median of the pixel values of four consecutive reference pixels may be used as a representative value. This also applies to the other scenarios described herein. While processing that thins out reference pixels is quite easy to implement, using such a representative value can improve prediction accuracy.
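  • The two reduction methods mentioned above can be contrasted in a few lines; both function names are hypothetical. With the reference ratio “4:1” (ratio = 4), thin_out() keeps every fourth reference pixel, while representatives() instead averages each run of four consecutive reference pixels:

    def thin_out(pixels, ratio):
        return pixels[::ratio]

    def representatives(pixels, ratio):
        return [sum(pixels[i:i + ratio]) / ratio
                for i in range(0, len(pixels), ratio)]

    row = [100, 104, 96, 108, 60, 64, 56, 68]
    print(thin_out(row, 4))         # [100, 60]      -> cheapest
    print(representatives(row, 4))  # [102.0, 62.0]  -> smoother estimate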
  • (2) Second Scenario
  • FIG. 14 is an explanatory view showing an example of the definition of the reference ratio in a second scenario. In the second scenario, the prediction controller 42 controls the reference ratio in accordance with the chroma-format, in addition to the size of the prediction unit. In addition, the prediction controller 42 separately controls a first reference ratio as a ratio of the number of left reference pixels to the size in the vertical direction and a second reference ratio as a ratio of the number of upper reference pixels to the size in the horizontal direction.
  • In the second scenario, if the size of the prediction unit is 4×4 pixels and the chroma-format is 4:2:0, the reference ratio in the vertical direction and the reference ratio in the horizontal direction are both “1:1”. In this case, no reference pixel is thinned out and the number I of reference pixels is 2+2=4. If the size of the prediction unit is 4×4 pixels and the chroma-format is 4:2:2, the reference ratio in the vertical direction is “2:1” and the reference ratio in the horizontal direction is “1:1”. In this case, as a result of half of the reference pixels in the vertical direction being thinned out, the number I of reference pixels is (4/2)+2=4. If the size of the prediction unit is 4×4 pixels and the chroma-format is 4:4:4, the reference ratio in the vertical direction and the reference ratio in the horizontal direction are both “2:1”. In this case, as a result of half of the reference pixels in both the vertical direction and the horizontal direction being thinned out, the number I of reference pixels is (4/2)+(4/2)=4.
  • If the size of the prediction unit is 8×8 pixels and the chroma-format is 4:2:0, the reference ratio in the vertical direction and the reference ratio in the horizontal direction are both “1:1”. In this case, no reference pixel is thinned out and the number I of reference pixels is 4+4=8. If the size of the prediction unit is 8×8 pixels and the chroma-format is 4:2:2, the reference ratio in the vertical direction is “2:1” and the reference ratio in the horizontal direction is “1:1”. In this case, as a result of half of the reference pixels in the vertical direction being thinned out, the number I of reference pixels is (8/2)+4=8. If the size of the prediction unit is 8×8 pixels and the chroma-format is 4:4:4, the reference ratio in the vertical direction and the reference ratio in the horizontal direction are both “2:1”. In this case, as a result of half of the reference pixels in both the vertical direction and the horizontal direction being thinned out, the number I of reference pixels is (8/2)+(8/2)=8.
  • FIG. 15A shows an example of reference pixel settings when the PU size is 8×8 pixels and the chroma-format is 4:2:0. In the example of FIG. 15A, neither reference pixels of the color difference component nor reference pixels of the luminance component are thinned out. As a result, the number IC of reference pixels of the color difference component and the number IL of reference pixels of the luminance component are both eight.
  • FIG. 15B shows an example of reference pixel settings when the PU size is 8×8 pixels and the chroma-format is 4:2:2. In the example of FIG. 15B, every second reference pixel in the vertical direction of the color difference component and the luminance component is thinned out. As a result, the number IC of reference pixels of the color difference component and the number IL of reference pixels of the luminance component are both eight.
  • Further, FIG. 15C shows an example of reference pixel settings when the PU size is 8×8 pixels and the chroma-format is 4:4:4. In the example of FIG. 15C, every second reference pixel in the vertical direction and the horizontal direction of the color difference component and the luminance component is thinned out. As a result, the number IC of reference pixels of the color difference component and the number IL of reference pixels of the luminance component are both eight.
  • If the size of the prediction unit is 16×16 pixels and the chroma-format is 4:2:0, the reference ratio in the vertical direction and the reference ratio in the horizontal direction are both “2:1”. In this case, the number I of reference pixels is (8/2)+(8/2)=8. If the size of the prediction unit is 16×16 pixels and the chroma-format is 4:2:2, the reference ratio in the vertical direction is “4:1” and the reference ratio in the horizontal direction is “2:1”. In this case, the number I of reference pixels is (16/4)+(8/2)=8. If the size of the prediction unit is 16×16 pixels and the chroma-format is 4:4:4, the reference ratio in the vertical direction and the reference ratio in the horizontal direction are both “4:1”. In this case, the number I of reference pixels is (16/4)+(16/4)=8.
  • FIG. 15D shows an example of reference pixel settings when the PU size is 16×16 pixels and the chroma-format is 4:2:0. In the example of FIG. 15D, every second reference pixel in the vertical direction and the horizontal direction of the color difference component and the luminance component is thinned out. As a result, the number IC of reference pixels of the color difference component and the number IL of reference pixels of the luminance component are both eight.
  • If the size of the prediction unit is 32×32 pixels and the chroma-format is 4:2:0, the reference ratio in the vertical direction and the reference ratio in the horizontal direction are both “4:1”. In this case, the number I of reference pixels is (16/4)+(16/4)=8. If the size of the prediction unit is 32×32 pixels and the chroma-format is 4:2:2, the reference ratio in the vertical direction is “8:1” and the reference ratio in the horizontal direction is “4:1”. In this case, the number I of reference pixels is (32/8)+(16/4)=8. If the size of the prediction unit is 32×32 pixels and the chroma-format is 4:4:4, the reference ratio in the vertical direction and the reference ratio in the horizontal direction are both “8:1”. In this case, the number I of reference pixels is (32/8)+(32/8)=8.
  • In the second scenario, as will be understood from the above description, the prediction controller 42 controls the reference ratio so that the reference ratio decreases with an increasing resolution of the color difference component represented by the chroma-format. An increase in processing cost accompanying an increasing block size of the color difference component is thereby curbed. Also in the second scenario, if the chroma-format is 4:2:2, the prediction controller 42 separately controls the reference ratio in the vertical direction and the reference ratio in the horizontal direction so that the number of reference pixels on the left of the block and the number of reference pixels above the block become equal. Accordingly, the numbers of reference pixels can be made the same in a plurality of cases in which chroma-formats are mutually different. As a result, coefficient calculation processing by the coefficient calculation section 44 can be performed by using a common circuit or logic regardless of the chroma-format. Therefore, according to the second scenario, efficient implementation of a circuit or logic is promoted.
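  • For reference, the table of FIG. 14 can be transcribed as follows (a sketch; the data layout and names are assumptions). Every entry of 8×8 pixels or more yields I=8, and every 4×4 entry yields I=4, which is what allows the common circuit or logic mentioned above:

    SCENARIO2 = {  # (PU size, chroma-format) -> (vertical ratio, horizontal ratio)
        (4, "420"): (1, 1), (4, "422"): (2, 1), (4, "444"): (2, 2),
        (8, "420"): (1, 1), (8, "422"): (2, 1), (8, "444"): (2, 2),
        (16, "420"): (2, 2), (16, "422"): (4, 2), (16, "444"): (4, 4),
        (32, "420"): (4, 4), (32, "422"): (8, 4), (32, "444"): (8, 8),
    }

    def num_reference_pixels2(pu_size, chroma_format):
        rv, rh = SCENARIO2[(pu_size, chroma_format)]
        if chroma_format == "420":
            vert = horiz = pu_size // 2
        elif chroma_format == "422":
            vert, horiz = pu_size, pu_size // 2
        else:  # 4:4:4
            vert = horiz = pu_size
        return vert // rv + horiz // rh

    assert num_reference_pixels2(16, "422") == (16 // 4) + (8 // 2)  # == 8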
  • (3) Third Scenario
  • FIG. 16 is an explanatory view showing an example of the definition of the reference ratio in a third scenario. Also in the third scenario, the prediction controller 42 separately controls the first reference ratio as a ratio of the number of left reference pixels to the size in the vertical direction and the second reference ratio as a ratio of the number of upper reference pixels to the size in the horizontal direction. In the third scenario, the prediction controller 42 controls the reference ratios so that, for the same size, the reference ratio in the vertical direction is equal to or less than the reference ratio in the horizontal direction.
  • In the third scenario, the reference ratios in the vertical direction and the horizontal direction are both “1:1” if the size of the prediction unit is 4×4 pixels. In this case, if the chroma-format is 4:2:0, the number I of reference pixels is 2+2=4. If the chroma-format is 4:2:2, the number I of reference pixels is 4+2=6. If the chroma-format is 4:4:4, the number I of reference pixels is 4+4=8.
  • If the size of the prediction unit is 8×8 pixels, the reference ratio in the vertical direction is “2:1” and the reference ratio in the horizontal direction is “1:1”. In this case, if the chroma-format is 4:2:0, the number I of reference pixels is (4/2)+4=6. If the chroma-format is 4:2:2, the number I of reference pixels is (8/2)+4=8. If the chroma-format is 4:4:4, the number I of reference pixels is (8/2)+8=12.
  • FIG. 17A shows an example of reference pixel settings when the PU size is 8×8 pixels and the chroma-format is 4:2:0. In the example of FIG. 17A, the lower half of the reference pixels in the vertical direction is thinned out for both the color difference component and the luminance component. As a result, the number IC of reference pixels of the color difference component and the number IL of reference pixels of the luminance component are both six.
  • If the size of the prediction unit is 16×16 pixels, the reference ratio in the vertical direction is “4:1” and the reference ratio in the horizontal direction is “1:1”. In this case, if the chroma-format is 4:2:0, the number I of reference pixels is (8/4)+8=10. If the chroma-format is 4:2:2, the number I of reference pixels is (16/4)+8=12. If the chroma-format is 4:4:4, the number I of reference pixels is (16/4)+16=20.
  • FIG. 17B shows an example of reference pixel settings when the PU size is 16×16 pixels and the chroma-format is 4:2:0. In the example of FIG. 17B, the lower three fourths of the reference pixels in the vertical direction are thinned out for both the color difference component and the luminance component. As a result, the number IC of reference pixels of the color difference component and the number IL of reference pixels of the luminance component are both 10.
  • If the size of the prediction unit is 32×32 pixels, the reference ratio in the vertical direction is “8:1” and the reference ratio in the horizontal direction is “2:1”. In this case, if the chroma-format is 4:2:0, the number I of reference pixels is (16/8)+(16/2)=10. If the chroma-format is 4:2:2, the number I of reference pixels is (32/8)+(16/2)=12. If the chroma-format is 4:4:4, the number I of reference pixels is (32/8)+(32/2)=20.
  • In a device that encodes or decodes images, reference pixel values are stored in a frame memory or line memory in most cases and accessed in units of horizontal lines. Therefore, if the reference ratio in the vertical direction is made smaller than the reference ratio in the horizontal direction as in the third scenario, the number of memory accesses can be reduced even if the number of reference pixels used is the same. Accordingly, coefficient calculation processing by the coefficient calculation section 44 can be performed at high speed. In addition, by preferentially using reference pixels in the line above the block as in the third scenario, the reference pixel values can be acquired in a short time through continuous access to the memory.
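  • The mapping of the third scenario (FIG. 16) can be sketched under the same assumptions as the earlier fragments; the vertical direction is thinned at least as aggressively as the horizontal direction so that few memory lines are touched:

    SCENARIO3 = {4: (1, 1), 8: (2, 1), 16: (4, 1), 32: (8, 2)}  # size -> (vert, horiz)

    def num_reference_pixels3(pu_size, chroma_format):
        rv, rh = SCENARIO3[pu_size]
        if chroma_format == "420":
            vert = horiz = pu_size // 2
        elif chroma_format == "422":
            vert, horiz = pu_size, pu_size // 2
        else:  # 4:4:4
            vert = horiz = pu_size
        return vert // rv + horiz // rh

    assert num_reference_pixels3(16, "420") == (8 // 4) + 8           # == 10, FIG. 17B
    assert num_reference_pixels3(32, "444") == (32 // 8) + (32 // 2)  # == 20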
  • (4) Fourth Scenario
  • FIG. 18 is an explanatory view showing an example of the definition of the reference ratio in a fourth scenario. In the fourth scenario, the prediction controller 42 controls the reference ratio so that the reference ratio decreases with decreasing capabilities of a device that encodes and decodes images. In HEVC, for example, the profile, the level, or both can be used as parameters representing capabilities of a device. The profile and the level can normally be specified in a sequence parameter set of an encoded stream.
  • Referring to FIG. 18, in the fourth scenario, capabilities of a device are classified into two categories of “high” and “low”. Regarding the prediction unit having the size of 4×4 pixels, the reference ratio is “1:1” regardless of capabilities of a device. Regarding the prediction unit having the size of 8×8 pixels or more, by contrast, the reference ratio when capabilities are “low” is half the reference ratio when capabilities are “high”.
  • For example, while the reference ratio when capabilities are “high” is “1:1” for the prediction unit of 8×8 pixels, the reference ratio when capabilities are “low” is “2:1”. Regarding the prediction unit having the size of 16×16 pixels, while the reference ratio when capabilities are “high” is “2:1”, the reference ratio when capabilities are “low” is “4:1”. Regarding the prediction unit having the size of 32×32 pixels, while the reference ratio when capabilities are “high” is “4:1”, the reference ratio when capabilities are “low” is “8:1”.
  • FIG. 19A shows an example of reference pixel settings when the PU size is 16×16 pixels, the chroma-format is 4:2:0, capabilities are “high”, and the reference ratio is “2:1”. In the example of FIG. 19A, half of the reference pixels of both the color difference component and the luminance component are thinned out. As a result, the number IC of reference pixels of the color difference component and the number IL of reference pixels of the luminance component are both eight.
  • FIG. 19B shows an example of reference pixel settings when the PU size is 16×16 pixels, the chroma-format is 4:2:0, capabilities are “low”, and the reference ratio is “4:1”. In the example of FIG. 19B, three fourths of the reference pixels of both the color difference component and the luminance component are thinned out. As a result, the number IC of reference pixels of the color difference component and the number IL of reference pixels of the luminance component are both four.
  • By controlling the reference ratio in accordance with the level of capabilities of a device (for example, the processing capacity of a decoder) like in the fourth scenario, the number of reference pixels can further be reduced when the use of a device of lower capabilities is assumed. Accordingly, the processing cost exceeding the processing capacity of a device can be prevented from arising in coefficient calculation processing in LM mode.
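  • The mapping of the fourth scenario (FIG. 18) can be sketched as below; deriving the capability class from the profile or level of the encoded stream is outside the scope of the fragment, and the names are assumptions:

    SCENARIO4 = {  # PU size -> {capability class: thinning factor}
        4:  {"high": 1, "low": 1},
        8:  {"high": 1, "low": 2},
        16: {"high": 2, "low": 4},
        32: {"high": 4, "low": 8},
    }

    def reference_ratio4(pu_size, capability):
        return SCENARIO4[pu_size][capability]

    # 16x16 PU, 4:2:0: I = (8/2)+(8/2) = 8 for "high" (FIG. 19A),
    #                  I = (8/4)+(8/4) = 4 for "low"  (FIG. 19B).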
  • (5) Fifth Scenario
  • Xiaoran Cao et al. (Tsinghua University) propose in “CE6.b1 Report on Short Distance Intra Prediction Method” (JCTVC-E278, March 2011) the short distance intra prediction method, which improves encoding efficiency by using small non-square prediction units. In the short distance intra prediction method, for example, prediction units of various sizes such as 1×4 pixels, 2×8 pixels, 4×16 pixels, 4×1 pixels, 8×2 pixels, and 16×4 pixels can be set in an image. In this case, whether the size in the vertical direction or the size in the horizontal direction of the prediction unit is larger depends on the settings of the prediction unit. Thus, in the fifth scenario, when the short distance intra prediction method is used, the prediction controller 42 dynamically selects, of the reference ratio in the vertical direction and the reference ratio in the horizontal direction, the one corresponding to the direction in which the size is larger, and controls the selected reference ratio.
  • FIG. 20A shows an example of reference pixel settings when the PU size is 2×8 pixels and the chroma-format is 4:2:0. In the example of FIG. 20A, the size in the horizontal direction is larger than the size in the vertical direction and thus, while the reference ratio in the vertical direction is “1:1”, the reference ratio in the horizontal direction is “2:1”. As a result, the number IC of reference pixels of the color difference component and the number IL of reference pixels of the luminance component are both 1+(4/2)=3.
  • FIG. 20B shows an example of reference pixel settings when the PU size is 16×4 pixels and the chroma-format is 4:2:0. In the example of FIG. 20B, the size in the vertical direction is larger than the size in the horizontal direction and thus, while the reference ratio in the horizontal direction is “1:1”, the reference ratio in the vertical direction is “4:1”. As a result, the number IC of reference pixels of the color difference component and the number IL of reference pixels of the luminance component are both (8/4)+2=4.
  • When the short distance intra prediction method is used as in the fifth scenario, dynamically selecting and controlling the reference ratio corresponding to the direction in which the size is larger avoids reducing the number of reference pixels in the direction in which that number is already smaller, and degradation of prediction accuracy can thereby be prevented.
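  • The direction-dependent selection of the fifth scenario can be sketched as below; the mapping of side length to thinning factor merely reproduces the examples of FIGS. 20A and 20B and is otherwise an assumption:

    def reference_ratios5(chroma_w, chroma_h):
        """chroma_w, chroma_h: block size of the color difference component.
        Returns (vertical ratio, horizontal ratio) as thinning factors."""
        factor = {1: 1, 2: 1, 4: 2, 8: 4}  # side length -> thinning factor
        if chroma_w >= chroma_h:
            return 1, factor[chroma_w]  # wide block: thin the upper row only
        return factor[chroma_h], 1      # tall block: thin the left column only

    assert reference_ratios5(4, 1) == (1, 2)  # FIG. 20A: I = 1 + 4/2 = 3
    assert reference_ratios5(2, 8) == (4, 1)  # FIG. 20B: I = 8/4 + 2 = 4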
  • Heretofore, five characteristic scenarios of control of the reference ratio by the prediction controller 42 have been described in detail. The control of the reference ratio by the prediction controller 42 according to these scenarios may be performed according to, for example, a mapping between the block size and the reference ratio pre-defined in the standard specifications of an image encoding scheme. By uniformly defining such a mapping in advance, the need to support many reference pixel setting patterns is eliminated, so that a circuit or logic for decoding can easily be made common.
  • While the bit depth of image data utilized for many uses is 8 bits, a greater bit depth such as 10 bits or 12 bits may be used for image data for some uses. Thus, if the bit depth exceeds a predetermined number of bits (for example, 8 bits), the coefficient calculation section 44 may reduce the reference pixel value to the predetermined number of bits before calculating the coefficient α and the coefficient β of a prediction function using the reduced reference pixel value. Accordingly, the coefficient α and the coefficient β can be calculated using a small-sized common circuit or logic regardless of the bit depth.
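  • A hedged sketch of that bit-depth reduction follows; the right shift is one natural reading of reducing the reference pixel value to the predetermined number of bits, and the function name is illustrative:

    def reduce_bit_depth(pixels, bit_depth, target_bits=8):
        shift = max(0, bit_depth - target_bits)
        return [p >> shift for p in pixels]

    print(reduce_bit_depth([512, 700, 1023], 10))  # [128, 175, 255]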
  • An example in which the prediction controller 42 controls the “reference ratio” as a ratio of the number of reference pixels to the block size is mainly described here. However, the concept substantially equivalent to the reference ratio may be expressed by another term, for example, a “reduction ratio” meaning the ratio of reference pixels to be reduced. The “reference ratio” or “reduction ratio” may be expressed by, instead of the above format such as “1:1”, “2:1”, and “4:1”, the percentage format like “100% (0%)”, “50% (50%)”, or “25% (75%)” or the numeric format in the range from 0 to 1.
  • The above five scenarios are only examples for description. For example, two or more of the above five scenarios may be combined. The mapping between the block size and the reference ratio (or the reduction ratio) shown in each scenario may be adaptively selected instead of being defined in advance. In this case, information specifying the selected mapping may be transmitted from the encoding side to the decoding side inside the parameter set or the header area of an encoded stream.
  • 2. Flow of Processing at the Time of Encoding According to an Embodiment
  • Next, the flow of processing at the time of encoding will be described using FIGS. 21 and 22. FIG. 21 is a flow chart showing an example of the flow of intra prediction processing at the time of encoding by the intra prediction section 40 having the configuration as illustrated in FIG. 2.
  • Referring to FIG. 21, predicted image data in various prediction modes is first generated by the prediction section 46 for the luminance component of the coding unit to be processed and the optimum prediction mode and the arrangement of prediction units are decided by the mode determination section 48 (step S100).
  • Next, the prediction controller 42 focuses on one of the one or more color difference prediction units set inside the coding unit to search for the optimum prediction mode of the color difference component (step S105).
  • Next, the prediction controller 42 determines whether the size of the focused color difference PU satisfies the above predetermined condition (step S110). The predetermined condition is typically a condition that the size of the focused color difference PU is equal to or less than a size determined from the size of the corresponding luminance PU in accordance with the chroma-format. If such a condition is satisfied, there is one luminance PU corresponding to the focused color difference PU. The prediction controller 42 also determines whether the size of the luminance PU corresponding to the focused color difference PU is 4×4 pixels (step S115). If the above condition is satisfied or the size of the corresponding luminance PU is 4×4 pixels, the prediction processing in LM mode for the focused color difference PU is performed by the coefficient calculation section 44 and the prediction section 46 (step S120). If the above condition is not satisfied and the size of the corresponding luminance PU is 8×8 pixels or more, the prediction processing in LM mode in step S120 is skipped.
  • Next, intra prediction processing in non-LM mode (for example, Mode 0 to Mode 3 illustrated in FIG. 8) for the focused color difference PU is performed by the prediction section 46 (step S125). Next, the mode determination section 48 calculates the cost function of each prediction mode for the focused color difference PU based on original image data and predicted image data (step S130).
  • The processing of step S105 to step S130 is repeated for each color difference PU set in the coding unit (step S135). Then, the mode determination section 48 decides the optimum arrangement of color difference PUs in the coding unit and the optimum prediction mode for each color difference PU by mutually comparing cost function values (step S140).
  • FIG. 22 is a flow chart showing an example of a detailed flow of LM mode prediction processing in step S120 of FIG. 21.
  • Referring to FIG. 22, the prediction controller 42 first acquires the reference ratio for each prediction unit in accordance with the size of the prediction unit and other parameters (for example, the chroma-format, profile, or level) (step S121).
  • Next, the coefficient calculation section 44 sets reference pixels to be referenced by the calculation formula (for example, the above Formula (3) and Formula (4)) to calculate coefficients of a prediction function according to the reference ratio instructed by the prediction controller 42 (step S122). The number of reference pixels set here can be reduced in accordance with the reference ratio. In addition, the luminance components of reference pixels can be resampled depending on the chroma-format.
  • Next, the coefficient calculation section 44 calculates the coefficient α of a prediction function using pixel values of the set reference pixels according to, for example, the above Formula (3) (step S123). Further, the coefficient calculation section 44 calculates the coefficient β of a prediction function using pixel values of the set reference pixels according to, for example, the above Formula (4) (step S124).
  • Then, the prediction section 46 calculates the predicted pixel value of each pixel to be predicted by substituting the value of the corresponding luminance component into a prediction function (for example, the above Formula (1)) built by using the coefficient α and the coefficient β (step S125).
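  • Steps S121 to S125 can be tied together in one self-contained sketch (all names illustrative, not the embodiment's); the reference ratio obtained in step S121 is passed in as a thinning factor:

    def lm_mode_predict(luma_block, ref_c, ref_l, ratio):
        # S122: set the reference pixels according to the reference ratio
        # (ref_l is assumed to be already resampled per the chroma-format).
        ref_c, ref_l = ref_c[::ratio], ref_l[::ratio]
        i_num = len(ref_c)
        # S123 and S124: the coefficient alpha and the coefficient beta,
        # per Formulas (3) and (4).
        s_c, s_l = sum(ref_c), sum(ref_l)
        s_cl = sum(c * l for c, l in zip(ref_c, ref_l))
        s_ll = sum(l * l for l in ref_l)
        denom = i_num * s_ll - s_l * s_l
        alpha = (i_num * s_cl - s_c * s_l) / denom if denom else 0.0
        beta = (s_c - alpha * s_l) / i_num
        # S125: Formula (1) applied to each resampled luminance value.
        return [[alpha * l + beta for l in row] for row in luma_block]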
  • 3. Example Configuration of Image Decoding Device According to an Embodiment
  • In this section, an example configuration of an image decoding device according to an embodiment will be described using FIGS. 23 and 24.
  • 3-1. Example of Overall Configuration
  • FIG. 23 is a block diagram showing an example of a configuration of an image decoding device 60 according to an embodiment. Referring to FIG. 23, the image decoding device 60 includes an accumulation buffer 61, a lossless decoding section 62, an inverse quantization section 63, an inverse orthogonal transform section 64, an addition section 65, a deblocking filter 66, a sorting buffer 67, a D/A (Digital to Analogue) conversion section 68, a frame memory 69, selectors 70 and 71, a motion compensation section 80 and an intra prediction section 90.
  • The accumulation buffer 61 temporarily stores an encoded stream input via a transmission line using a storage medium.
  • The lossless decoding section 62 decodes an encoded stream input from the accumulation buffer 61 according to the encoding method used at the time of encoding. Also, the lossless decoding section 62 decodes information multiplexed to the header region of the encoded stream. Information that is multiplexed to the header region of the encoded stream may include the profile and the level in a sequence parameter set, and the information about inter prediction and information about intra prediction in the block header, for example. The lossless decoding section 62 outputs the information about inter prediction to the motion compensation section 80. Also, the lossless decoding section 62 outputs the information about intra prediction to the intra prediction section 90.
  • In the present embodiment, the lossless decoding section 62 decodes quantized data from an encoded stream in the above order by PU for each coding unit for which an intra prediction is made. That is, when a first luminance prediction unit in one coding unit is followed by a first color difference prediction unit corresponding to it, and then by a second luminance prediction unit (subsequent to the first luminance prediction unit) that does not correspond to the first color difference prediction unit, the lossless decoding section 62 decodes the luminance component of the first luminance prediction unit, the color difference component of the first color difference prediction unit, and the luminance component of the second luminance prediction unit in that order.
  • The inverse quantization section 63 inversely quantizes quantized data which has been decoded by the lossless decoding section 62. The inverse orthogonal transform section 64 generates predicted error data by performing inverse orthogonal transformation on transform coefficient data input from the inverse quantization section 63 according to the orthogonal transformation method used at the time of encoding. Then, the inverse orthogonal transform section 64 outputs the generated predicted error data to the addition section 65.
  • The addition section 65 adds the predicted error data input from the inverse orthogonal transform section 64 and predicted image data input from the selector 71 to thereby generate decoded image data. Then, the addition section 65 outputs the generated decoded image data to the deblocking filter 66 and the frame memory 69.
  • The deblocking filter 66 removes block distortion by filtering the decoded image data input from the addition section 65, and outputs the decoded image data after filtering to the sorting buffer 67 and the frame memory 69.
  • The sorting buffer 67 generates a series of image data in a time sequence by sorting images input from the deblocking filter 66. Then, the sorting buffer 67 outputs the generated image data to the D/A conversion section 68.
  • The D/A conversion section 68 converts the image data in a digital format input from the sorting buffer 67 into an image signal in an analogue format. Then, the D/A conversion section 68 causes an image to be displayed by outputting the analogue image signal to a display (not shown) connected to the image decoding device 60, for example.
  • The frame memory 69 stores, using a storage medium, the decoded image data before filtering input from the addition section 65, and the decoded image data after filtering input from the deblocking filter 66.
  • The selector 70 switches the output destination of the image data from the frame memory 69 between the motion compensation section 80 and the intra prediction section 90 for each block in the image according to mode information acquired by the lossless decoding section 62. For example, in the case the inter prediction mode is specified, the selector 70 outputs the decoded image data after filtering that is supplied from the frame memory 69 to the motion compensation section 80 as reference image data. Also, in the case the intra prediction mode is specified, the selector 70 outputs the decoded image data before filtering that is supplied from the frame memory 69 to the intra prediction section 90 as reference image data.
  • The selector 71 switches the output source of predicted image data to be supplied to the addition section 65 between the motion compensation section 80 and the intra prediction section 90 according to the mode information acquired by the lossless decoding section 62. For example, in the case the inter prediction mode is specified, the selector 71 supplies to the addition section 65 the predicted image data output from the motion compensation section 80. Also, in the case the intra prediction mode is specified, the selector 71 supplies to the addition section 65 the predicted image data output from the intra prediction section 90.
  • The motion compensation section 80 performs a motion compensation process based on the information about inter prediction input from the lossless decoding section 62 and the reference image data from the frame memory 69, and generates predicted image data. Then, the motion compensation section 80 outputs the generated predicted image data to the selector 71.
  • The intra prediction section 90 performs an intra prediction process based on the information about intra prediction input from the lossless decoding section 62 and the reference image data from the frame memory 69, and generates predicted image data. Then, the intra prediction section 90 outputs the generated predicted image data to the selector 71. The intra prediction process of the intra prediction section 90 will be described later in detail.
  • 3-2. Configuration Example of Intra Prediction Section
  • FIG. 24 is a block diagram showing an example of a detailed configuration of an intra prediction section 90 of the image decoding device 60 shown in FIG. 23. Referring to FIG. 24, the intra prediction section 90 includes a prediction controller 92, a luminance component buffer 93, a coefficient calculation section 94, and a prediction section 96.
  • The prediction controller 92 controls intra prediction processing by the intra prediction section 90. For example, the prediction controller 92 sets a luminance PU inside the coding unit based on prediction mode information contained in information about an intra prediction and performs intra prediction processing for the set luminance PU. Also, the prediction controller 92 sets a color difference PU inside the coding unit based on prediction mode information and performs intra prediction processing for the set color difference PU. The above intra prediction processing is performed in the above order by PU. In the intra prediction processing for the luminance PU, the prediction controller 92 causes the prediction section 96 to generate the predicted pixel value of the luminance component of each pixel in prediction mode specified by prediction mode information. Similarly, in the intra prediction processing for the color difference PU, the prediction controller 92 causes the prediction section 96 to generate the predicted pixel value of the color difference component of each pixel in prediction mode specified by prediction mode information.
  • In the present embodiment, prediction mode candidates for the color difference PU contain the above-mentioned LM mode. Then, the prediction controller 92 variably controls the ratio of the number of reference pixels used when the coefficients of the prediction function in LM mode are calculated to the block size, that is, the reference ratio. The control of the reference ratio by the prediction controller 92 is typically performed in accordance with the block size. If, for example, the block size exceeds a predetermined size, the prediction controller 92 may control the reference ratio so that the number of reference pixels for calculating the coefficients of the prediction function becomes constant. The mapping between the block size and the reference ratio may be defined in advance and stored in a storage medium of the image decoding device 60, or may be dynamically specified inside the header area of an encoded stream. Further, the prediction controller 92 may control the reference ratio in accordance with the chroma-format. Also, the prediction controller 92 may control the reference ratio in accordance with the profile or level defining capabilities of a device. The control of the reference ratio by the prediction controller 92 may be performed according to one of the above five scenarios, any combination thereof, or other scenarios.
  • The luminance component buffer 93 temporarily stores the values of the luminance component used for intra prediction in LM mode for the color difference PU. In HEVC, the maximum available size of the luminance PU and the maximum available size of the color difference PU are both 64×64 pixels. In the present embodiment, the number of luminance PUs corresponding to one color difference PU is limited in principle to one in LM mode. Thus, if the chroma-format is 4:2:0, the LM mode will not be specified for a color difference PU of 64×64 pixels. The maximum size of a color difference PU for which the LM mode is specified is 32×32 pixels. Further, in the present embodiment, encoding processing is performed in the above order by PU. Thus, 32×32 pixels, corresponding to the maximum size of the color difference PU, are sufficient as the buffer size of the luminance component buffer 93. This is one fourth of the amount of memory resources required when the order by component of the existing technique described using FIG. 9 is adopted. When an intra prediction is made in LM mode for a first color difference PU corresponding to a first luminance PU, the value of the luminance component of the first luminance PU stored in the luminance component buffer 93 may be referenced. Then, when the luminance component of a second luminance PU that does not correspond to the first color difference PU is decoded, the content of the luminance component buffer 93 is cleared and the value of the luminance component of the second luminance PU may newly be stored.
  • When, as described above, the size of the luminance PU is 4×4 pixels, an intra prediction may exceptionally be made in LM mode for one color difference PU based on a plurality of luminance PUs. When the chroma-format is 4:2:0, values of the luminance components of four corresponding luminance PUs may be buffered by the luminance component buffer 93 to make an intra prediction in LM mode for one color difference PU of 4×4 pixels. When the chroma-format is 4:2:2, values of the luminance components of two corresponding luminance PUs may be buffered by the luminance component buffer 93 to make an intra prediction in LM mode for one color difference PU of 4×4 pixels. In all cases, 32×32 pixels are sufficient as the buffer size of the luminance component buffer 93.
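  • The buffering rule described above can be pictured with a toy class (hypothetical, not the embodiment's structure); the point is only that the buffer never needs to hold more than the luminance values serving the current color difference PU:

    class LuminanceComponentBuffer:
        """Holds at most 32x32 resampled luminance values (4:2:0, LM mode)."""
        def __init__(self):
            self.blocks = {}  # luma PU identifier -> resampled luminance values

        def store(self, luma_pu_id, resampled_values):
            self.blocks[luma_pu_id] = resampled_values

        def fetch(self, luma_pu_ids):
            # One luma PU in the regular case; up to four 4x4 PUs in the exception.
            return [self.blocks[i] for i in luma_pu_ids]

        def clear(self):
            # Called when decoding moves on to a luminance PU that no pending
            # color difference PU refers to.
            self.blocks.clear()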
  • The coefficient calculation section 94 calculates coefficients of a prediction function used by the prediction section 96 when the LM mode is specified for the color difference component by referring to pixels around the prediction unit to which the pixel to be predicted belongs, that is, reference pixels. The prediction function used by the prediction section 96 is typically a linear function of the value of the luminance component and is represented by, for example, the above Formula (1). The number of reference pixels referenced by the coefficient calculation section 94 to calculate coefficients of a prediction function is controlled by, as described above, the prediction controller 92. If the reference ratio is not “1:1”, the coefficient calculation section 94 may calculate coefficients of a prediction function by, for example, thinning out as many reference pixels as a number in accordance with the reference ratio and then using only remaining reference pixels. The coefficient calculation section 94 may calculate coefficients of a prediction function using a common circuit or logic for a plurality of block sizes exceeding a predetermined size. In addition, if the bit depth of a pixel value exceeds a predetermined number of bits, the coefficient calculation section 94 may reduce the reference pixel value to the predetermined number of bits before calculating coefficients of a prediction function using the reduced reference pixel value.
  • The prediction section 96 generates the pixel value of the luminance component and the pixel value of the color difference component of the pixel to be predicted according to the specified prediction mode using reference image data from a frame memory 69 under the control of the prediction controller 92. Prediction mode candidates used for the color difference component by the prediction section 96 may contain the above LM mode. When the LM mode is specified, the prediction section 96 calculates the predicted pixel value of the color difference component by retrieving the value (resampled if necessary) of the corresponding luminance component from the luminance component buffer 93 and substituting the value into a prediction function built by using the coefficient α and the coefficient β calculated by the coefficient calculation section 94. The prediction section 96 outputs predicted image data generated as a result of prediction to an addition section 65 via a selector 71.
  • 4. Flow of Processing at the Time of Decoding According to an Embodiment
  • Next, the flow of processing at the time of decoding will be described using FIG. 25. FIG. 25 is a flow chart showing an example of the flow of intra prediction processing at the time of decoding by the intra prediction section 90 having the configuration as illustrated in FIG. 24.
  • Referring to FIG. 25, the prediction controller 92 first sets one prediction unit inside the coding unit in the order of decoding from an encoded stream (step S200). Next, the intra prediction processing branches depending on whether the set prediction unit is a prediction unit of the luminance component or a prediction unit of the color difference component (step S205). If the set prediction unit is a prediction unit of the luminance component, the processing proceeds to step S210. On the other hand, if the set prediction unit is a prediction unit of the color difference component, the processing proceeds to step S230.
  • In step S210, the prediction controller 92 recognizes the prediction mode of the luminance component specified by the prediction mode information (step S210). Then, the prediction section 96 generates the predicted pixel value of the luminance component of each pixel in the prediction unit according to the specified prediction mode using reference image data from the frame memory 69 (step S220).
  • In step S230, the prediction controller 92 recognizes the prediction mode of the color difference component specified by the prediction mode information (step S230). Then, the prediction controller 92 determines whether the LM mode is specified (step S240). If the LM mode is specified, the prediction controller 92 causes the coefficient calculation section 94 and the prediction section 96 to perform prediction processing of the color difference component in LM mode (step S250). The LM mode prediction processing in step S250 may be similar to the LM mode prediction processing described using FIG. 22. On the other hand, if the LM mode is not specified, the prediction controller 92 causes the prediction section 96 to perform intra prediction processing of the color difference component in non-LM mode (step S260).
  • Then, if the next prediction unit is present in the same coding unit, the processing returns to step S200 to repeat the above processing for the next prediction unit (step S270). If the next prediction unit is not present, the intra prediction processing in FIG. 25 terminates.
  • 5. Modifications
  • In this section, two modifications to reduce the amount of consumption of memory resources in connection with the introduction of the LM mode will be described. In the modifications described below, the prediction section 46 of the image encoding device 10 and the prediction section 96 of the image decoding device 60 thin out luminance components corresponding to each color difference component at some thinning rate. The luminance component corresponding to each color difference component is the luminance component after resampling according to, for example, the above Formula (2). Then, the prediction section 46 and the prediction section 96 generate the predicted value of each color difference component corresponding to a thinned luminance component by using values of luminance components that are not thinned out.
  • FIG. 26 is an explanatory view illustrating an example of thinning processing according to the present modification. Referring to FIG. 26, the prediction unit (PU) of 8×8 pixels is shown as an example. It is assumed that the chroma-format is 4:2:0 and the thinning rate is 25%. The thinning rate indicates the ratio of the number of pixels after thinning to the number of pixels before thinning. In the example of FIG. 26, the number of color difference components contained in one PU is 4×4. The number of luminance components corresponding to each color difference component is also 4×4 due to resampling. As a result of thinning luminance components after resampling at the thinning rate of 25%, the number of luminance components used to predict the color difference component in LM mode is 2×2. More specifically, in the example at the lower right of FIG. 26, among the four luminance components Lu1 to Lu4, the luminance components Lu2, Lu3, Lu4 other than the luminance component Lu1 are thinned out. Similarly, among the four luminance components Lu5 to Lu8, the luminance components Lu6, Lu7, Lu8 other than the luminance component Lu5 are thinned out. The color difference component Cu1 at the lower left of FIG. 26 corresponds to the luminance component Lu1 that is not thinned out. Therefore, the prediction section 46 and the prediction section 96 can generate the predicted value of the color difference component Cu1 by substituting the value of the luminance component Lu1 into the right-hand side of the above Formula (1). On the other hand, for example, the color difference component Cu2 corresponds to the thinned luminance component Lu2. In this case, the prediction section 46 and the prediction section 96 generate the predicted value of the color difference component Cu2 using the value of a luminance component that is not thinned out. For example, the predicted value of the color difference component Cu2 may be a replication of the predicted value of the color difference component Cu1 or a value obtained by linear interpolation of the two predicted values of the color difference components Cu1, Cu5.
  • More generally, for example, the predicted pixel value PrC(x, y) of the color difference component when the thinning rate is 25% may be calculated by techniques represented by the following Formula (5) or Formula (6). Formula (5) represents replication of a predicted value from adjacent pixels.
  • [Math 8]

$$
\mathrm{Pr}_C[x,y]=
\begin{cases}
\alpha\cdot\mathrm{Re}_L[x,y]+\beta & (x \bmod 2 = 0 \;\&\&\; y \bmod 2 = 0)\\
\mathrm{Pr}_C[x-1,y] & (x \bmod 2 = 1 \;\&\&\; y \bmod 2 = 0)\\
\mathrm{Pr}_C[x,y-1] & (x \bmod 2 = 0 \;\&\&\; y \bmod 2 = 1)\\
\mathrm{Pr}_C[x-1,y-1] & (x \bmod 2 = 1 \;\&\&\; y \bmod 2 = 1)
\end{cases}
\tag{5}
$$
  • Formula (6) represents linear interpolation of a predicted value.
  • [Math 9]

$$
\mathrm{Pr}_C[x,y]=
\begin{cases}
\alpha\cdot\mathrm{Re}_L[x,y]+\beta & (x \bmod 2 = 0 \;\&\&\; y \bmod 2 = 0)\\
\dfrac{(\alpha\cdot\mathrm{Re}_L[x-1,y]+\beta)+(\alpha\cdot\mathrm{Re}_L[x+1,y]+\beta)}{2} & (x \bmod 2 = 1 \;\&\&\; y \bmod 2 = 0)\\
\dfrac{(\alpha\cdot\mathrm{Re}_L[x,y-1]+\beta)+(\alpha\cdot\mathrm{Re}_L[x,y+1]+\beta)}{2} & (x \bmod 2 = 0 \;\&\&\; y \bmod 2 = 1)\\
\dfrac{(\alpha\cdot\mathrm{Re}_L[x-1,y-1]+\beta)+(\alpha\cdot\mathrm{Re}_L[x+1,y+1]+\beta)}{2} & (x \bmod 2 = 1 \;\&\&\; y \bmod 2 = 1)
\end{cases}
\tag{6}
$$
  • Incidentally, Formula (5) and Formula (6) are only examples and other formulas may also be used.
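  • As one way to read the two formulas, the following Python sketch computes the predicted chroma value at a position (x, y) under a 25% thinning rate, by replication (Formula (5)) or by linear interpolation (Formula (6)); re_l is a hypothetical accessor for resampled luminance values, and edge clipping is omitted.

```python
def predict_replicate(re_l, alpha, beta, x, y):
    """Formula (5): compute the LM prediction only at positions where
    both coordinates are even, and replicate it to the neighbors."""
    if x % 2 == 0 and y % 2 == 0:
        return alpha * re_l(x, y) + beta
    if x % 2 == 1 and y % 2 == 0:
        return predict_replicate(re_l, alpha, beta, x - 1, y)
    if x % 2 == 0 and y % 2 == 1:
        return predict_replicate(re_l, alpha, beta, x, y - 1)
    return predict_replicate(re_l, alpha, beta, x - 1, y - 1)


def predict_interpolate(re_l, alpha, beta, x, y):
    """Formula (6): at thinned positions, average the LM predictions of
    the two surrounding non-thinned positions (diagonal neighbors when
    both coordinates are odd)."""
    lm = lambda u, v: alpha * re_l(u, v) + beta
    if x % 2 == 0 and y % 2 == 0:
        return lm(x, y)
    if x % 2 == 1 and y % 2 == 0:
        return (lm(x - 1, y) + lm(x + 1, y)) / 2
    if x % 2 == 0 and y % 2 == 1:
        return (lm(x, y - 1) + lm(x, y + 1)) / 2
    return (lm(x - 1, y - 1) + lm(x + 1, y + 1)) / 2
```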
  • The above thinning rate affects the amount of memory resources needed to hold pixel values after resampling of the luminance components. The amount of consumption of memory resources decreases with an increasing number of luminance components to be thinned out. However, if the number of luminance components to be thinned out is large, the accuracy of prediction of the color difference component may be degraded. Thus, a parameter to specify the thinning rate may be specified in the header (for example, the sequence parameter set, picture parameter set, or slice header) of an encoded stream. In this case, the prediction section 96 of the image decoding device 60 decides the thinning rate based on the parameter acquired from the header. Accordingly, the thinning rate can flexibly be changed in accordance with the requirements for each device (for example, whether saving memory resources or encoding efficiency should have higher priority).
  • Referring to FIGS. 27A to 27C, in contrast to the example in FIG. 26, the thinning rate is 50% in each case. In these examples, half of the luminance components after resampling are thinned out. However, even if the thinning rate is the same, the patterns of positions of luminance components to be thinned out (hereinafter called thinning patterns) are mutually different.
  • In the example of FIG. 27A, among four luminance components Lu1 to Lu4, the luminance components Lu2, Lu4 are thinned out. Similarly, among four luminance components Lu5 to Lu8, the luminance components Lu6, Lu8 are thinned out. Also in this case, for example, the predicted value of the color difference component Cu2 corresponding to the thinned luminance component Lu2 may be replication of the predicted value of the color difference component Cu1 or a value obtained by linear interpolation of two predicted values of the color difference components Cu1, Cu5. In the thinning pattern of FIG. 27A, luminance components to be thinned out are uniformly distributed in the PU. Therefore, compared with other thinning patterns of the same thinning rate, the thinning pattern in FIG. 27A realizes higher prediction accuracy.
  • In the example of FIG. 27B, luminance components are thinned out in every other row. Such a thinning pattern is advantageous in that, for example, in a device holding pixel values in a line memory, values of many luminance components can be accessed by memory access at a time. In the example of FIG. 27C, on the other hand, luminance components are thinned out in every other column. Such a thinning pattern is advantageous in that, for example, if the chroma-format is 4:2:2 and the number of pixels in the vertical direction is larger, more frequency components in the column direction can be maintained.
  • The parameter to specify the thinning pattern from a plurality of thinning pattern candidates may be specified in the header of an encoded stream. In this case, the prediction section 96 of the image decoding device 60 decides the positions of luminance components to be thinned out based on the parameter acquired from the header. Accordingly, the thinning pattern can flexibly be changed in accordance with requirements for each device.
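  • To illustrate how thinning pattern candidates could be represented, the sketch below builds keep-masks for the three 50% patterns of FIGS. 27A to 27C; the checkerboard used for the FIG. 27A case is one plausible reading of a uniformly distributed pattern, and the pattern names are hypothetical.

```python
def thinning_mask(n, pattern):
    """Return an n x n mask; True means the resampled luminance
    component at that position is kept (False: thinned out)."""
    if pattern == "uniform":   # cf. FIG. 27A: thinned positions spread evenly
        return [[(x + y) % 2 == 0 for x in range(n)] for y in range(n)]
    if pattern == "rows":      # cf. FIG. 27B: thin every other row
        return [[y % 2 == 0 for x in range(n)] for y in range(n)]
    if pattern == "columns":   # cf. FIG. 27C: thin every other column
        return [[x % 2 == 0 for x in range(n)] for y in range(n)]
    raise ValueError("unknown thinning pattern: " + pattern)
```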
  • In addition, the prediction section 46 and the prediction section 96 may decide the thinning rates in accordance with the above reference ratio. If, for example, the number of reference pixels referenced when coefficients of a prediction function are calculated is smaller, more luminance components may be thinned out. At this point, the prediction section 46 and the prediction section 96 may thin out luminance components in positions corresponding to thinning positions of reference pixels.
  • FIGS. 28A and 28B each show examples of correspondence between thinning positions of reference pixels and thinning positions of luminance components. In the example of FIG. 28A, the PU size is 16×16 pixels, the chroma-format is 4:2:0, and the reference ratio is 2:1. In this case, for example, a thinning rate of 25% is decided on, and a thinning pattern similar to the example in FIG. 26 may be selected. In the example of FIG. 28B, on the other hand, the PU size is 16×16 pixels, the chroma-format is 4:2:0, the reference ratio in the vertical direction is 2:1, and the reference ratio in the horizontal direction is 1:1. In this case, for example, a thinning rate of 50% is decided on, and a thinning pattern similar to the example in FIG. 27B may be selected.
  • In the example of FIG. 28A, all luminance components of the block to be predicted are thinned out in rows in which the reference pixel is thinned out. All luminance components of the block to be predicted are thinned out in columns in which the reference pixel is thinned out. By deciding thinning positions in this manner, the determination of thinning positions is simplified and the implementation of thinning processing according to the present modification can be made still easier. Also in the example of FIG. 28B, all luminance components of the block to be predicted are thinned out in rows in which the reference pixel is thinned out. By deciding thinning positions in this manner, access to luminance components can completely be skipped in rows corresponding to the thinning positions regardless of reference pixels or pixels to be predicted. Accordingly, the implementation of thinning processing is made still easier and also the processing speed can be increased by reducing the number of times of memory access.
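  • A minimal sketch of this correspondence rule follows, assuming that a reference ratio of k:1 keeps every k-th reference row or column and that luminance components are thinned out wherever their row or column is a thinning position of the reference pixels; the function name and parameters are illustrative.

```python
def luma_keep_mask(n, vertical_ratio, horizontal_ratio):
    """Keep the luminance component at (x, y) only if neither its row
    nor its column corresponds to a thinned reference pixel.

    vertical_ratio / horizontal_ratio: k models a reference ratio of
    k:1 in that direction.  E.g. (2, 2) reproduces the 25% rate of
    FIG. 28A and (2, 1) the every-other-row 50% pattern of FIG. 28B.
    """
    keep_row = [y % vertical_ratio == 0 for y in range(n)]
    keep_col = [x % horizontal_ratio == 0 for x in range(n)]
    return [[keep_row[y] and keep_col[x] for x in range(n)] for y in range(n)]
```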
  • 6. Example Application
  • The image encoding device 10 and the image decoding device 60 according to the embodiment described above may be applied to various electronic appliances such as a transmitter and a receiver for satellite broadcasting, cable broadcasting such as cable TV, distribution on the Internet, distribution to terminals via cellular communication, and the like, a recording device that records images in a medium such as an optical disc, a magnetic disk or a flash memory, a reproduction device that reproduces images from such storage medium, and the like. Four example applications will be described below.
  • 6-1. First Example Application
  • FIG. 29 is a block diagram showing an example of a schematic configuration of a television adopting the embodiment described above. A television 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing section 905, a display section 906, an audio signal processing section 907, a speaker 908, an external interface 909, a control section 910, a user interface 911, and a bus 912.
  • The tuner 902 extracts a signal of a desired channel from broadcast signals received via the antenna 901, and demodulates the extracted signal. Then, the tuner 902 outputs an encoded bit stream obtained by demodulation to the demultiplexer 903. That is, the tuner 902 serves as transmission means of the television 900 for receiving an encoded stream in which an image is encoded.
  • The demultiplexer 903 separates a video stream and an audio stream of a program to be viewed from the encoded bit stream, and outputs each stream which has been separated to the decoder 904. Also, the demultiplexer 903 extracts auxiliary data such as an EPG (Electronic Program Guide) from the encoded bit stream, and supplies the extracted data to the control section 910. Additionally, the demultiplexer 903 may perform descrambling in the case the encoded bit stream is scrambled.
  • The decoder 904 decodes the video stream and the audio stream input from the demultiplexer 903. Then, the decoder 904 outputs video data generated by the decoding process to the video signal processing section 905. Also, the decoder 904 outputs the audio data generated by the decoding process to the audio signal processing section 907.
  • The video signal processing section 905 reproduces the video data input from the decoder 904, and causes the display section 906 to display the video. The video signal processing section 905 may also cause the display section 906 to display an application screen supplied via a network. Further, the video signal processing section 905 may perform an additional process such as noise removal, for example, on the video data according to the setting. Furthermore, the video signal processing section 905 may generate an image of a GUI (Graphical User Interface) such as a menu, a button, a cursor or the like, for example, and superimpose the generated image on an output image.
  • The display section 906 is driven by a drive signal supplied by the video signal processing section 905, and displays a video or an image on a video screen of a display device (for example, a liquid crystal display, a plasma display, an OLED, or the like).
  • The audio signal processing section 907 performs reproduction processes such as D/A conversion and amplification on the audio data input from the decoder 904, and outputs audio from the speaker 908. Also, the audio signal processing section 907 may perform an additional process such as noise removal on the audio data.
  • The external interface 909 is an interface for connecting the television 900 and an external appliance or a network. For example, a video stream or an audio stream received via the external interface 909 may be decoded by the decoder 904. That is, the external interface 909 also serves as transmission means of the television 900 for receiving an encoded stream in which an image is encoded.
  • The control section 910 includes a processor such as a CPU (Central Processing Unit), and a memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), or the like. The memory stores a program to be executed by the CPU, program data, EPG data, data acquired via a network, and the like. The program stored in the memory is read and executed by the CPU at the time of activation of the television 900, for example. The CPU controls the operation of the television 900 according to an operation signal input from the user interface 911, for example, by executing the program.
  • The user interface 911 is connected to the control section 910. The user interface 911 includes a button and a switch used by a user to operate the television 900, and a receiving section for a remote control signal, for example. The user interface 911 detects an operation of a user via these structural elements, generates an operation signal, and outputs the generated operation signal to the control section 910.
  • The bus 912 interconnects the tuner 902, the demultiplexer 903, the decoder 904, the video signal processing section 905, the audio signal processing section 907, the external interface 909, and the control section 910.
  • In the television 900 configured as described above, the decoder 904 has a function of the image decoding device 60 according to the embodiment described above. Accordingly, an increase in the amount of consumption of memory resources accompanying the extension of block size can be curbed when the LM mode is adopted for decoding images of the television 900.
  • 6-2. Second Example Application
  • FIG. 30 is a block diagram showing an example of a schematic configuration of a mobile phone adopting the embodiment described above. A mobile phone 920 includes an antenna 921, a communication section 922, an audio codec 923, a speaker 924, a microphone 925, a camera section 926, an image processing section 927, a demultiplexing section 928, a recording/reproduction section 929, a display section 930, a control section 931, an operation section 932, and a bus 933.
  • The antenna 921 is connected to the communication section 922. The speaker 924 and the microphone 925 are connected to the audio codec 923. The operation section 932 is connected to the control section 931. The bus 933 interconnects the communication section 922, the audio codec 923, the camera section 926, the image processing section 927, the demultiplexing section 928, the recording/reproduction section 929, the display section 930, and the control section 931.
  • The mobile phone 920 performs operations such as transmission/reception of audio signals, transmission/reception of emails or image data, image capturing, recording of data, and the like, in various operation modes including an audio communication mode, a data communication mode, an image capturing mode, and a videophone mode.
  • In the audio communication mode, an analogue audio signal generated by the microphone 925 is supplied to the audio codec 923. The audio codec 923 converts the analogue audio signal into audio data, and A/D converts and compresses the converted audio data. Then, the audio codec 923 outputs the compressed audio data to the communication section 922. The communication section 922 encodes and modulates the audio data, and generates a transmission signal. Then, the communication section 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921. Also, the communication section 922 amplifies a wireless signal received via the antenna 921 and converts the frequency of the wireless signal, and acquires a received signal. Then, the communication section 922 demodulates and decodes the received signal and generates audio data, and outputs the generated audio data to the audio codec 923. The audio codec 923 extends and D/A converts the audio data, and generates an analogue audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 and causes the audio to be output.
  • Also, in the data communication mode, the control section 931 generates text data that makes up an email, according to an operation of a user via the operation section 932, for example. Moreover, the control section 931 causes the text to be displayed on the display section 930. Furthermore, the control section 931 generates email data according to a transmission instruction of the user via the operation section 932, and outputs the generated email data to the communication section 922. Then, the communication section 922 encodes and modulates the email data, and generates a transmission signal. Then, the communication section 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921. Also, the communication section 922 amplifies a wireless signal received via the antenna 921 and converts the frequency of the wireless signal, and acquires a received signal. Then, the communication section 922 demodulates and decodes the received signal, restores the email data, and outputs the restored email data to the control section 931. The control section 931 causes the display section 930 to display the contents of the email, and also, causes the email data to be stored in the storage medium of the recording/reproduction section 929.
  • The recording/reproduction section 929 includes an arbitrary readable and writable storage medium. For example, the storage medium may be a built-in storage medium such as a RAM, a flash memory or the like, or an externally mounted storage medium such as a hard disk, a magnetic disk, a magneto-optical disk, an optical disc, a USB memory, a memory card, or the like.
  • Furthermore, in the image capturing mode, the camera section 926 captures an image of a subject, generates image data, and outputs the generated image data to the image processing section 927, for example. The image processing section 927 encodes the image data input from the camera section 926, and causes the encoded stream to be stored in the storage medium of the recording/reproduction section 929.
  • Furthermore, in the videophone mode, the demultiplexing section 928 multiplexes a video stream encoded by the image processing section 927 and an audio stream input from the audio codec 923, and outputs the multiplexed stream to the communication section 922, for example. The communication section 922 encodes and modulates the stream, and generates a transmission signal. Then, the communication section 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921. Also, the communication section 922 amplifies a wireless signal received via the antenna 921 and converts the frequency of the wireless signal, and acquires a received signal. The transmission signal and the received signal may include an encoded bit stream. Then, the communication section 922 demodulates and decodes the received signal, restores the stream, and outputs the restored stream to the demultiplexing section 928. The demultiplexing section 928 separates a video stream and an audio stream from the input stream, and outputs the video stream to the image processing section 927 and the audio stream to the audio codec 923. The image processing section 927 decodes the video stream, and generates video data. The video data is supplied to the display section 930, and a series of images is displayed by the display section 930. The audio codec 923 extends and D/A converts the audio stream, and generates an analogue audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 and causes the audio to be output.
  • In the mobile phone 920 configured in this manner, the image processing section 927 has a function of the image encoding device 10 and the image decoding device 60 according to the embodiment described above. Accordingly, an increase in the amount of consumption of memory resources accompanying the extension of block size can be curbed when the LM mode is adopted for encoding and decoding images of the mobile phone 920.
  • 6-3. Third Example Application
  • FIG. 31 is a block diagram showing an example of a schematic configuration of a recording/reproduction device adopting the embodiment described above. A recording/reproduction device 940 encodes, and records in a recording medium, audio data and video data of a received broadcast program, for example. The recording/reproduction device 940 may also encode, and record in the recording medium, audio data and video data acquired from another device, for example. Furthermore, the recording/reproduction device 940 reproduces, using a monitor or a speaker, data recorded in the recording medium, according to an instruction of a user, for example. At this time, the recording/reproduction device 940 decodes the audio data and the video data.
  • The recording/reproduction device 940 includes a tuner 941, an external interface 942, an encoder 943, an HDD (Hard Disk Drive) 944, a disc drive 945, a selector 946, a decoder 947, an OSD (On-Screen Display) 948, a control section 949, and a user interface 950.
  • The tuner 941 extracts a signal of a desired channel from broadcast signals received via an antenna (not shown), and demodulates the extracted signal. Then, the tuner 941 outputs an encoded bit stream obtained by demodulation to the selector 946. That is, the tuner 941 serves as transmission means of the recording/reproduction device 940.
  • The external interface 942 is an interface for connecting the recording/reproduction device 940 and an external appliance or a network. For example, the external interface 942 may be an IEEE 1394 interface, a network interface, a USB interface, a flash memory interface, or the like. For example, video data and audio data received by the external interface 942 are input to the encoder 943. That is, the external interface 942 serves as transmission means of the recording/reproduction device 940.
  • In the case the video data and the audio data input from the external interface 942 are not encoded, the encoder 943 encodes the video data and the audio data. Then, the encoder 943 outputs the encoded bit stream to the selector 946.
  • The HDD 944 records in an internal hard disk an encoded bit stream, which is compressed content data of a video or audio, various programs, and other pieces of data. Also, the HDD 944 reads these pieces of data from the hard disk at the time of reproducing a video or audio.
  • The disc drive 945 records or reads data in a recording medium that is mounted. A recording medium that is mounted on the disc drive 945 may be a DVD disc (a DVD-Video, a DVD-RAM, a DVD-R, a DVD-RW, a DVD+R, a DVD+RW, or the like), a Blu-ray (registered trademark) disc, or the like, for example.
  • The selector 946 selects, at the time of recording a video or audio, an encoded bit stream input from the tuner 941 or the encoder 943, and outputs the selected encoded bit stream to the HDD 944 or the disc drive 945. Also, the selector 946 outputs, at the time of reproducing a video or audio, an encoded bit stream input from the HDD 944 or the disc drive 945 to the decoder 947.
  • The decoder 947 decodes the encoded bit stream, and generates video data and audio data. Then, the decoder 947 outputs the generated video data to the OSD 948. Also, the decoder 947 outputs the generated audio data to an external speaker.
  • The OSD 948 reproduces the video data input from the decoder 947, and displays a video. Also, the OSD 948 may superimpose an image of a GUI, such as a menu, a button, a cursor or the like, for example, on a displayed video.
  • The control section 949 includes a processor such as a CPU, and a memory such as a RAM or a ROM. The memory stores a program to be executed by the CPU, program data, and the like. A program stored in the memory is read and executed by the CPU at the time of activation of the recording/reproduction device 940, for example. The CPU controls the operation of the recording/reproduction device 940 according to an operation signal input from the user interface 950, for example, by executing the program.
  • The user interface 950 is connected to the control section 949. The user interface 950 includes a button and a switch used by a user to operate the recording/reproduction device 940, and a receiving section for a remote control signal, for example. The user interface 950 detects an operation of a user via these structural elements, generates an operation signal, and outputs the generated operation signal to the control section 949.
  • In the recording/reproduction device 940 configured in this manner, the encoder 943 has a function of the image encoding device 10 according to the embodiment described above. Also, the decoder 947 has a function of the image decoding device 60 according to the embodiment described above. Accordingly, an increase in the amount of consumption of memory resources accompanying the extension of block size can be curbed when the LM mode is adopted for encoding and decoding images of the recording/reproduction device 940.
  • 6-4. Fourth Example Application
  • FIG. 32 is a block diagram showing an example of a schematic configuration of an image capturing device adopting the embodiment described above. An image capturing device 960 captures an image of a subject, generates image data, encodes the image data, and records the encoded data in a recording medium.
  • The image capturing device 960 includes an optical block 961, an image capturing section 962, a signal processing section 963, an image processing section 964, a display section 965, an external interface 966, a memory 967, a media drive 968, an OSD 969, a control section 970, a user interface 971, and a bus 972.
  • The optical block 961 is connected to the image capturing section 962. The image capturing section 962 is connected to the signal processing section 963. The display section 965 is connected to the image processing section 964. The user interface 971 is connected to the control section 970. The bus 972 interconnects the image processing section 964, the external interface 966, the memory 967, the media drive 968, the OSD 969, and the control section 970.
  • The optical block 961 includes a focus lens, an aperture stop mechanism, and the like. The optical block 961 forms an optical image of a subject on an image capturing surface of the image capturing section 962. The image capturing section 962 includes an image sensor such as a CCD, a CMOS or the like, and converts by photoelectric conversion the optical image formed on the image capturing surface into an image signal which is an electrical signal. Then, the image capturing section 962 outputs the image signal to the signal processing section 963.
  • The signal processing section 963 performs various camera signal processes, such as knee correction, gamma correction, color correction and the like, on the image signal input from the image capturing section 962. The signal processing section 963 outputs the image data after the camera signal process to the image processing section 964.
  • The image processing section 964 encodes the image data input from the signal processing section 963, and generates encoded data. Then, the image processing section 964 outputs the generated encoded data to the external interface 966 or the media drive 968. Also, the image processing section 964 decodes encoded data input from the external interface 966 or the media drive 968, and generates image data. Then, the image processing section 964 outputs the generated image data to the display section 965. Also, the image processing section 964 may output the image data input from the signal processing section 963 to the display section 965, and cause the image to be displayed. Furthermore, the image processing section 964 may superimpose data for display acquired from the OSD 969 on an image to be output to the display section 965.
  • The OSD 969 generates an image of a GUI, such as a menu, a button, a cursor or the like, for example, and outputs the generated image to the image processing section 964.
  • The external interface 966 is configured as a USB input/output terminal, for example. The external interface 966 connects the image capturing device 960 and a printer at the time of printing an image, for example. Also, a drive is connected to the external interface 966 as necessary. A removable medium, such as a magnetic disk, an optical disc or the like, for example, is mounted on the drive, and a program read from the removable medium may be installed in the image capturing device 960. Furthermore, the external interface 966 may be configured as a network interface to be connected to a network such as a LAN, the Internet or the like. That is, the external interface 966 serves as transmission means of the image capturing device 960.
  • A recording medium to be mounted on the media drive 968 may be an arbitrary readable and writable removable medium, such as a magnetic disk, a magneto-optical disk, an optical disc, a semiconductor memory or the like, for example. Also, a recording medium may be fixedly mounted on the media drive 968, configuring a non-transportable storage section such as a built-in hard disk drive or an SSD (Solid State Drive), for example.
  • The control section 970 includes a processor such as a CPU, and a memory such as a RAM or a ROM. The memory stores a program to be executed by the CPU, program data, and the like. A program stored in the memory is read and executed by the CPU at the time of activation of the image capturing device 960, for example. The CPU controls the operation of the image capturing device 960 according to an operation signal input from the user interface 971, for example, by executing the program.
  • The user interface 971 is connected to the control section 970. The user interface 971 includes a button, a switch and the like used by a user to operate the image capturing device 960, for example. The user interface 971 detects an operation of a user via these structural elements, generates an operation signal, and outputs the generated operation signal to the control section 970.
  • In the image capturing device 960 configured in this manner, the image processing section 964 has a function of the image encoding device 10 and the image decoding device 60 according to the embodiment described above. Accordingly, an increase in the amount of consumption of memory resources accompanying the extension of block size can be curbed when the LM mode is adopted for encoding and decoding images of the image capturing device 960.
  • 7. Summary
  • Heretofore, the image encoding device 10 and the image decoding device 60 according to an embodiment have been described using FIGS. 1 to 32. According to the present embodiment, after the luminance component of a first luminance prediction unit in the coding unit is encoded, the color difference component of a first color difference prediction unit corresponding to the first luminance prediction unit is encoded before the luminance component of a second luminance prediction unit that does not correspond to the first color difference prediction unit. Therefore, when an encoded stream is decoded according to this encoding order, an intra prediction can be made in LM mode for the first color difference prediction unit based on the value of the luminance component of the buffered first luminance prediction unit. When the intra prediction for the first color difference prediction unit is completed, the buffer may be cleared to newly buffer the value of the luminance component of the second luminance prediction unit. Therefore, there is no need to provide a large memory for the adoption of the LM mode. That is, the amount of memory resources needed when an intra prediction based on a dynamically built prediction function is made can be reduced.
  • Also according to the present embodiment, the number of luminance prediction units corresponding to one color difference prediction unit is limited in principle to one in LM mode. That is, an intra prediction is not made in LM mode for one color difference component based on values of luminance components extending over a plurality of luminance prediction units that are hardly correlated with each other. Therefore, because prediction modes from which high prediction accuracy cannot be expected are excluded from the objects of search, the processing cost needed to encode images can be reduced. However, when the size of the luminance prediction unit is 4×4 pixels, the number of luminance prediction units corresponding to one color difference prediction unit may exceptionally be permitted to be plural in LM mode. According to this configuration, encoding efficiency can be enhanced by increasing the opportunities to utilize the LM mode while limiting the exception to cases in which the size of the prediction unit is small.
  • According to the present embodiment, when the LM mode using a function of the value of the corresponding luminance component is adopted for intra prediction of the color difference component for encoding or decoding images, the ratio of the number of reference pixels referenced to calculate coefficients of the function to the block size is variably controlled. Therefore, an increase in processing cost can be avoided or mitigated by curbing an increase of the number of reference pixels accompanying the extension of the block size.
  • Also according to the present embodiment, the ratio is controlled so that the number of reference pixels is constant when the block size exceeds a predetermined size. According to such a configuration, coefficients of the function can be calculated using a common circuit or logic for a plurality of block sizes. Therefore, an increase in scale of the circuit or logic caused by the adoption of the LM mode can also be curbed.
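  • Under the assumption that the reference ratio doubles each time the block size doubles beyond the predetermined size, this control rule could look like the following sketch; base_size is an assumed predetermined size, not a value from the specification.

```python
def reference_ratio(block_size, base_size=8):
    """Reference ratio (k in k:1) that keeps the number of reference
    pixels per block edge constant above the predetermined size.

    With the assumed base_size of 8, blocks of 8, 16 and 32 pixels use
    ratios 1:1, 2:1 and 4:1, so each references 8 pixels per edge;
    blocks smaller than base_size are not reduced at all.
    """
    return max(1, block_size // base_size)
```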
  • Also according to the present embodiment, reference pixels are not excessively reduced when the block size falls below a predetermined size. Therefore, the degradation of prediction accuracy in LM mode due to an insufficient number of reference pixels can be prevented. A relatively large block size is normally set when the image in the block is monotonous and a prediction can easily be made. Therefore, even when the block size is larger, the risk of extreme degradation of prediction accuracy caused by reducing more reference pixels is small.
  • Also according to the present embodiment, the ratio can separately be controlled in the vertical direction and the horizontal direction of the block. According to such a configuration, coefficients of the function can be calculated using a common circuit or logic without being dependent on the chroma-format. In addition, it becomes possible to leave more of the reference pixels arranged along the horizontal direction, which can be accessed with a smaller amount of memory access, and to reduce more of the reference pixels arranged along the vertical direction. Further, when the short distance intra prediction method is used, the reference pixels to be reduced can adaptively be changed in accordance with the shape of the block.
  • According to the two modifications described above, the amount of consumption of memory resources in connection with the introduction of the LM mode can be reduced much more effectively.
  • Additionally, in the present specification, an example has been mainly described where the information about intra prediction and the information about inter prediction is multiplexed to the header of the encoded stream, and the encoded stream is transmitted from the encoding side to the decoding side. However, the method of transmitting this information is not limited to such an example. For example, this information may be transmitted or recorded as individual data that is associated with an encoded bit stream, without being multiplexed to the encoded bit stream. The term “associate” here means to enable an image included in a bit stream (or a part of an image, such as a slice or a block) and information corresponding to the image to link to each other at the time of decoding. That is, this information may be transmitted on a different transmission line from the image (or the bit stream). Or, this information may be recorded on a different recording medium (or in a different recording area on the same recording medium) from the image (or the bit stream). Furthermore, this information and the image (or the bit stream) may be associated with each other on the basis of arbitrary units such as a plurality of frames, one frame, a part of a frame or the like, for example.
  • Heretofore, a preferred embodiment of the present disclosure has been described in detail while referring to the appended drawings, but the technical scope of the present disclosure is not limited to such an example. It is apparent that a person having an ordinary skill in the art of the technology of the present disclosure may make various alterations or modifications within the scope of the technical ideas described in the claims, and these are, of course, understood to be within the technical scope of the present disclosure.
  • Additionally, the present technology may also be configured as below.
  • (1)
  • An image processing apparatus including:
  • a decoding section that decodes a luminance component and a color difference component of a block inside a coding unit in an order of the luminance component and the color difference component in each block.
  • (2)
  • The image processing apparatus according to (1), wherein the decoding section decodes a luminance component of a first block in the coding unit, a color difference component of the first block, and a luminance component of a second block subsequent to the first block in the order of decoding in an order of the luminance component of the first block, the color difference component of the first block, and the luminance component of the second block.
  • (3)
  • The image processing apparatus according to (2), wherein the decoding section decodes the luminance component of the first block, the color difference component of the first block, the luminance component of the second block, and a color difference component of the second block in an order of the luminance component of the first block, the color difference component of the first block, the luminance component of the second block, and the color difference component of the second block.
  • (4)
  • The image processing apparatus according to any one of (1) to (3), wherein a unit of decoding processing is hierarchically blocked, and wherein the block is a prediction unit.
  • (5)
  • The image processing apparatus according to any one of (1) to (4), further including:
  • a prediction section that when a linear model (LM) mode representing a prediction of the color difference component based on the luminance component is specified, generates a predicted value of the color difference component of the first block decoded by the decoding section by using a function based on a value of the luminance component of the first block.
  • (6)
  • The image processing apparatus according to (5), wherein a number of luminance prediction units corresponding to one color difference prediction unit is limited to one in the LM mode.
  • (7)
  • The image processing apparatus according to (6), wherein when a size of the luminance prediction unit is 4×4 pixels, the number of the luminance prediction units corresponding to the one color difference prediction unit is exceptionally permitted to be plural in the LM mode.
  • (8)
  • The image processing apparatus according to (7), wherein when a size of the first block is 4×4 pixels, the decoding section decodes at least the one luminance component of the luminance prediction unit of 4×4 pixels including the first block and then decodes the color difference component of the first block.
  • (9)
  • The image processing apparatus according to any one of (5) to (8), wherein the prediction section includes a buffer having a size equal to or smaller than a maximum size of the color difference prediction unit as the buffer of the luminance component for the LM mode.
  • (10)
  • An image processing method including:
  • decoding a luminance component and a color difference component of a block inside a coding unit in an order of the luminance component and the color difference component in each block.
  • (11)
  • An image processing apparatus including:
  • an encoding section that encodes a luminance component and a color difference component of a block inside a coding unit in an order of the luminance component and the color difference component in each block.
  • (12)
  • The image processing apparatus according to (11), wherein the encoding section encodes a luminance component of a first block in the coding unit, a color difference component of the first block, and a luminance component of a second block subsequent to the first block in the order of decoding in an order of the luminance component of the first block, the color difference component of the first block, and the luminance component of the second block.
  • (13)
  • The image processing apparatus according to (12), wherein the encoding section encodes the luminance component of the first block, the color difference component of the first block, the luminance component of the second block, and a color difference component of the second block in an order of the luminance component of the first block, the color difference component of the first block, the luminance component of the second block, and the color difference component of the second block.
  • (14)
  • The image processing apparatus according to any one of (11) to (13), wherein a unit of encoding processing is hierarchically blocked, and wherein the block is a prediction unit.
  • (15)
  • The image processing apparatus according to any one of (11) to (14), further including:
  • a prediction section that, in a linear model (LM) mode representing a prediction of the color difference component based on the luminance component, generates a predicted value of the color difference component of the first block encoded by the encoding section by using a function based on a value of the luminance component of the first block.
  • (16)
  • The image processing apparatus according to (15), wherein a number of luminance prediction units corresponding to one color difference prediction unit is limited to one in the LM mode.
  • (17)
  • The image processing apparatus according to (16), wherein when a size of the luminance prediction unit is 4×4 pixels, the number of the luminance prediction units corresponding to the one color difference prediction unit is exceptionally permitted to be plural in the LM mode.
  • (18)
  • The image processing apparatus according to (17), wherein when a size of the first block is 4×4 pixels, the encoding section encodes at least the one luminance component of the luminance prediction unit of 4×4 pixels including the first block and then encodes the color difference component of the first block.
  • (19)
  • The image processing apparatus according to any one of (15) to (18), wherein the prediction section includes a buffer having a size equal to or smaller than a maximum size of the color difference prediction unit as the buffer of the luminance component for the LM mode.
  • (20)
  • An image processing method including:
  • encoding a luminance component and a color difference component of a block inside a coding unit in an order of the luminance component and the color difference component in each block.
  • REFERENCE SIGNS LIST
    • 10 image encoding device (image processing apparatus)
    • 16 encoding section
    • 40 intra prediction section
    • 60 image decoding device (image processing apparatus)
    • 62 decoding section
    • 90 intra prediction section

Claims (10)

1. An image processing device comprising:
circuitry configured to
encode a plurality of coding blocks sequentially according to a processing order assigned to a current coding block, of the plurality of coding blocks, including first block, second block, third block, and fourth block as a first luma block in the first block, a first Cb block in the first block, a first Cr block in the first block, a second luma block in the second block, a second Cb block in the second block, a second Cr block in the second block, a third luma block in the third block, a third Cb block in the third block, a third Cr block in the third block, a fourth luma block in the fourth block, a fourth Cb block in the fourth block, a fourth Cr block in the fourth block.
2. The image processing device according to claim 1, wherein
the processing order for the current coding block is assigned, in order from (1) to (12), of:
(1) encoding the first luma block in the first block; then
(2) encoding the first Cb block in the first block; then
(3) encoding the first Cr block in the first block; then
(4) encoding the second luma block in the second block; then
(5) encoding the second Cb block in the second block; then
(6) encoding the second Cr block in the second block; then
(7) encoding the third luma block in the third block; then
(8) encoding the third Cb block in the third block; then
(9) encoding the third Cr block in the third block; then
(10) encoding the fourth luma block in the fourth block; then
(11) encoding the fourth Cb block in the fourth block; and then
(12) encoding the fourth Cr block in the fourth block.
3. The image processing device according to claim 1, wherein
the current coding block and the plurality of coding blocks are included in a Largest Coding Unit (LCU).
4. The image processing device according to claim 3, wherein the circuitry is further configured to
encode the current coding block in a format in which the number of chroma pixels is vertically and horizontally different from the number of luma pixels.
5. The image processing device according to claim 4, wherein
the format is 4:2:0.
6. An image processing method comprising:
encoding a plurality of coding blocks sequentially according to a processing order assigned to a current coding block, of the plurality of coding blocks, including first block, second block, third block, and fourth block as a first luma block in the first block, a first Cb block in the first block, a first Cr block in the first block, a second luma block in the second block, a second Cb block in the second block, a second Cr block in the second block, a third luma block in the third block, a third Cb block in the third block, a third Cr block in the third block, a fourth luma block in the fourth block, a fourth Cb block in the fourth block, a fourth Cr block in the fourth block.
7. The image processing method according to claim 6, wherein
the processing order for the current coding block is assigned, in order from (1) to (12), of:
(1) encoding the first luma block in the first block; then
(2) encoding the first Cb block in the first block; then
(3) encoding the first Cr block in the first block; then
(4) encoding the second luma block in the second block; then
(5) encoding the second Cb block in the second block; then
(6) encoding the second Cr block in the second block; then
(7) encoding the third luma block in the third block; then
(8) encoding the third Cb block in the third block; then
(9) encoding the third Cr block in the third block; then
(10) encoding the fourth luma block in the fourth block; then
(11) encoding the fourth Cb block in the fourth block; and then
(12) encoding the fourth Cr block in the fourth block.
8. The image processing method according to claim 6, wherein the current coding block and the plurality of coding blocks are included in a Largest Coding Unit (LCU).
9. The image processing method according to claim 8, wherein
the current coding block is encoded in a format in which the number of chroma pixels is vertically and horizontally different from the number of luma pixels.
10. The image processing method according to claim 9, wherein
the format is 4:2:0.
US16/805,500 2011-06-03 2020-02-28 Image processing device and image processing method Abandoned US20200204796A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/805,500 US20200204796A1 (en) 2011-06-03 2020-02-28 Image processing device and image processing method

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
JP2011-125473 2011-06-03
JP2011125473 2011-06-03
JP2011145411 2011-06-30
JP2011-145411 2011-06-30
JP2011210543A JP2013034163A (en) 2011-06-03 2011-09-27 Image processing device and image processing method
JP2011-210543 2011-09-27
PCT/JP2012/059173 WO2012165040A1 (en) 2011-06-03 2012-04-04 Image processing device and image processing method
US201314004460A 2013-09-11 2013-09-11
US16/003,624 US10972722B2 (en) 2011-06-03 2018-06-08 Image processing device and image processing method
US16/805,500 US20200204796A1 (en) 2011-06-03 2020-02-28 Image processing device and image processing method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/003,624 Continuation US10972722B2 (en) 2011-06-03 2018-06-08 Image processing device and image processing method

Publications (1)

Publication Number Publication Date
US20200204796A1 true US20200204796A1 (en) 2020-06-25

Family

ID=47258905

Family Applications (3)

Application Number Title Priority Date Filing Date
US14/004,460 Active 2035-02-19 US10063852B2 (en) 2011-06-03 2012-04-04 Image processing device and image processing method
US16/003,624 Active US10972722B2 (en) 2011-06-03 2018-06-08 Image processing device and image processing method
US16/805,500 Abandoned US20200204796A1 (en) 2011-06-03 2020-02-28 Image processing device and image processing method

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US14/004,460 Active 2035-02-19 US10063852B2 (en) 2011-06-03 2012-04-04 Image processing device and image processing method
US16/003,624 Active US10972722B2 (en) 2011-06-03 2018-06-08 Image processing device and image processing method

Country Status (4)

Country Link
US (3) US10063852B2 (en)
JP (1) JP2013034163A (en)
CN (1) CN103583045A (en)
WO (1) WO2012165040A1 (en)

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013034162A (en) 2011-06-03 2013-02-14 Sony Corp Image processing device and image processing method
WO2012176405A1 (en) * 2011-06-20 2012-12-27 株式会社Jvcケンウッド Image encoding device, image encoding method and image encoding program, and image decoding device, image decoding method and image decoding program
JPWO2013150838A1 (en) * 2012-04-05 2015-12-17 ソニー株式会社 Image processing apparatus and image processing method
JPWO2013164922A1 (en) * 2012-05-02 2015-12-24 ソニー株式会社 Image processing apparatus and image processing method
JP6005572B2 (en) 2013-03-28 2016-10-12 Kddi株式会社 Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, moving picture decoding method, and program
BR112015025113B1 (en) * 2013-04-05 2023-03-21 Mitsubishi Electric Corporation COLOR IMAGE DECODING DEVICE, AND COLOR IMAGE DECODING METHOD
US9894359B2 (en) * 2013-06-18 2018-02-13 Sharp Kabushiki Kaisha Illumination compensation device, LM prediction device, image decoding device, image coding device
EP3021578B1 (en) * 2013-07-10 2019-01-02 KDDI Corporation Sub-sampling of reference pixels for chroma prediction based on luma intra prediction mode
CN104918050B (en) * 2014-03-16 2019-11-08 上海天荷电子信息有限公司 Use the image coding/decoding method for the reference pixel sample value collection that dynamic arrangement recombinates
JP6352141B2 (en) 2014-09-30 2018-07-04 Kddi株式会社 Moving picture encoding apparatus, moving picture decoding apparatus, moving picture compression transmission system, moving picture encoding method, moving picture decoding method, and program
CN107409208B (en) * 2015-03-27 2021-04-20 索尼公司 Image processing apparatus, image processing method, and computer-readable storage medium
WO2018056603A1 (en) * 2016-09-22 2018-03-29 엘지전자 주식회사 Illumination compensation-based inter-prediction method and apparatus in image coding system
CN118678062A (en) * 2016-10-04 2024-09-20 Lx 半导体科技有限公司 Encoding/decoding apparatus and apparatus for transmitting image data
CN117041564A (en) * 2016-11-29 2023-11-10 成均馆大学校产学协力团 Video encoding/decoding method, apparatus, and recording medium storing bit stream
TWI618214B (en) * 2017-04-13 2018-03-11 力成科技股份有限公司 Chip structure have redistribution layer
JP6680260B2 (en) * 2017-04-28 2020-04-15 株式会社Jvcケンウッド IMAGE ENCODING DEVICE, IMAGE ENCODING METHOD, IMAGE ENCODING PROGRAM, IMAGE DECODING DEVICE, IMAGE DECODING METHOD, AND IMAGE DECODING PROGRAM
JP7150810B2 (en) 2017-07-06 2022-10-11 エレクトロニクス アンド テレコミュニケーションズ リサーチ インスチチュート Image decoding method and image encoding method
WO2019059107A1 (en) * 2017-09-20 2019-03-28 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Encoding device, decoding device, encoding method and decoding method
CN111344787B (en) * 2017-11-17 2021-12-17 索尼公司 Information processing apparatus and method, storage medium, playback apparatus, and playback method
US10819965B2 (en) * 2018-01-26 2020-10-27 Samsung Electronics Co., Ltd. Image processing device and method for operating image processing device
CN118158403A (en) 2018-03-25 2024-06-07 B1 Institute of Image Technology, Inc. Image encoding/decoding method, recording medium, and method of transmitting bit stream
KR20230088840A (en) 2018-09-20 2023-06-20 LG Electronics Inc. Method and apparatus for image decoding on basis of CCLM prediction in image coding system
CN111083489B (en) 2018-10-22 2024-05-14 Beijing Bytedance Network Technology Co., Ltd. Multiple iteration motion vector refinement
JP2022506283A (en) * 2018-11-06 2022-01-17 Beijing Bytedance Network Technology Co., Ltd. Reduced complexity in deriving parameters for intra-prediction
JP7146086B2 (en) 2018-11-12 2022-10-03 Beijing Bytedance Network Technology Co., Ltd. Bandwidth control method for inter-prediction
WO2020103852A1 (en) 2018-11-20 2020-05-28 Beijing Bytedance Network Technology Co., Ltd. Difference calculation based on partial position
KR102415322B1 (en) * 2018-11-23 2022-06-30 LG Electronics Inc. CCLM prediction-based video decoding method and apparatus in video coding system
CN113170122B (en) 2018-12-01 2023-06-27 Beijing Bytedance Network Technology Co., Ltd. Parameter derivation for intra prediction
CA3121671C (en) * 2018-12-07 2024-06-18 Beijing Bytedance Network Technology Co., Ltd. Context-based intra prediction
CN115967810A (en) 2019-02-24 2023-04-14 Douyin Vision Co., Ltd. Method, apparatus and computer readable medium for encoding and decoding video data
KR102635518B1 (en) 2019-03-06 2024-02-07 Beijing Bytedance Network Technology Co., Ltd. Use of converted single prediction candidates
WO2020184918A1 (en) * 2019-03-08 2020-09-17 Electronics and Telecommunications Research Institute Image encoding/decoding method and device, and recording medium storing bitstream
WO2020192642A1 (en) 2019-03-24 2020-10-01 Beijing Bytedance Network Technology Co., Ltd. Conditions in parameter derivation for intra prediction
MX2021012503A (en) 2019-06-21 2021-11-12 Panasonic IP Corp America System and method for video coding
JP6879401B2 (en) * 2020-02-27 2021-06-02 JVC Kenwood Corporation Image coding device, image coding method and image coding program, and image decoding device, image decoding method and image decoding program
JP7550373B2 (en) 2021-08-31 2024-09-13 Godo Kaisha IP Bridge 1 Image encoding device, image encoding method, image encoding program, image decoding device, image decoding method, and image decoding program

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3575508B2 (en) 1996-03-04 2004-10-13 KDDI Corporation Encoded video playback device
JP2003289544A (en) 2002-03-27 2003-10-10 Sony Corp Equipment and method for coding image information, equipment and method for decoding image information, and program
CN1615019A (en) * 2003-11-05 2005-05-11 Huawei Technologies Co., Ltd. Video macroblock coding method
US7809186B2 (en) * 2004-04-27 2010-10-05 Canon Kabushiki Kaisha Image processing apparatus, image processing method, program thereof, and recording medium
KR101323732B1 (en) * 2005-07-15 2013-10-31 Samsung Electronics Co., Ltd. Apparatus and method of encoding video and apparatus and method of decoding encoded video
US7747096B2 (en) * 2005-07-15 2010-06-29 Samsung Electronics Co., Ltd. Method, medium, and system encoding/decoding image data
JP4347289B2 (en) * 2005-11-14 2009-10-21 Sharp Corporation Information processing apparatus, program, and recording medium
JP2007214641A (en) 2006-02-07 2007-08-23 Seiko Epson Corp Coder, decoder, image processing apparatus, and program for allowing computer to execute image processing method
US8265345B2 (en) * 2006-11-20 2012-09-11 Sharp Kabushiki Kaisha Image processing method, image processing apparatus, image forming apparatus, and image reading apparatus
CN101193305B (en) 2006-11-21 2010-05-12 Anyka (Guangzhou) Microelectronics Technology Co., Ltd. Inter-frame prediction data storage and exchange method for video coding and decoding chip
CN101198051B (en) 2006-12-07 2011-10-05 Shenzhen Aike Chuangxin Microelectronics Co., Ltd. Method and device for implementing entropy decoder based on H.264
US8265152B2 (en) 2008-10-10 2012-09-11 Arecont Vision, LLC System and method for low-latency processing of intra-frame video pixel block prediction
JP4697557B2 (en) * 2009-01-07 2011-06-08 Sony Corporation Encoding apparatus, encoding method, recording medium, and image processing apparatus
JP5441670B2 (en) * 2009-12-22 2014-03-12 Canon Inc. Image processing apparatus and control method thereof
JP5544996B2 (en) 2010-04-09 2014-07-09 ソニー株式会社 Image processing apparatus and method
CN107197258B (en) 2011-03-30 2020-04-28 LG Electronics Inc. Video decoding device and video encoding device
US9288500B2 (en) * 2011-05-12 2016-03-15 Texas Instruments Incorporated Luma-based chroma intra-prediction for video coding

Also Published As

Publication number Publication date
US10063852B2 (en) 2018-08-28
US10972722B2 (en) 2021-04-06
US20140003512A1 (en) 2014-01-02
JP2013034163A (en) 2013-02-14
CN103583045A (en) 2014-02-12
WO2012165040A1 (en) 2012-12-06
US20180295358A1 (en) 2018-10-11

Similar Documents

Publication Publication Date Title
US10972722B2 (en) Image processing device and image processing method
US10652546B2 (en) Image processing device and image processing method
US10931955B2 (en) 2021-02-23 Image processing device and image processing method that perform horizontal filtering on pixel blocks
US11196995B2 (en) Image processing device and image processing method
US10785504B2 (en) Image processing device and image processing method
US9749625B2 (en) Image processing apparatus and image processing method utilizing a correlation of motion between layers for encoding an image
US20150036758A1 (en) Image processing apparatus and image processing method
WO2012098790A1 (en) Image processing device and image processing method
JP6217826B2 (en) Image processing apparatus and image processing method
US20130182967A1 (en) Image processing device and image processing method
JP2012195815A (en) Image processing apparatus and image processing method

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONY CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SATO, KAZUSHI;REEL/FRAME:051968/0289

Effective date: 20180604

STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION