CN114598880B - Image processing method, intelligent terminal and storage medium
- Publication number: CN114598880B (application CN202210491875.0A)
- Authority: CN (China)
- Prior art keywords: partition, image block, determining, target image, prediction mode
- Legal status: Active
Classifications
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/103—Selection of coding mode or of prediction mode
Abstract
The application discloses an image processing method, an intelligent terminal, and a storage medium. The image processing method includes: determining a prediction result set of a corresponding partition in a target image block according to a preset prediction mode, wherein the prediction result set is used to determine the prediction result of the target image block. According to the embodiments of the application, the preset prediction mode can be used flexibly for prediction, and the accuracy of the prediction result is improved.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an intelligent terminal, and a storage medium.
Background
In video coding and decoding, predicting an image block with a prediction mode is a very important step. Predicting image blocks with their corresponding prediction modes effectively removes temporal or spatial redundancy from the video, so that the video can be compressed for better transmission.
In the course of conceiving and implementing the present application, the inventors found at least the following problem: existing approaches do not strike a good balance between the flexibility of using prediction modes for corresponding partitions in an image block and the accuracy of the prediction results.
The foregoing description is provided for general background information and is not admitted to be prior art.
Disclosure of Invention
In view of the above technical problems, the present application provides an image processing method, an intelligent terminal, and a storage medium, which can flexibly use a preset prediction mode to perform prediction and improve the accuracy of a prediction result.
In order to solve the above technical problem, the present application provides an image processing method applicable to an intelligent terminal, including:
determining a prediction result set of the target image block according to a preset prediction mode, wherein the prediction result set is used to determine the prediction result of the target image block.
Optionally, the target image block includes a first partition and/or a second partition, and the first partition and/or the second partition are image areas divided by a dividing line; the preset prediction mode comprises a prediction mode used by partitions divided by a dividing line in the target image block; the set of predictors includes a first set of predictors and/or a second set of predictors.
Optionally, the method further comprises: determining a target division mode parameter of a target image block according to the first division mode set; the target division mode parameter includes prediction mode indication information for indicating a prediction mode used by a corresponding partition in the target image block.
Optionally, the determining a prediction result set of a corresponding partition in the target image block according to a preset prediction mode includes at least one of: if the prediction mode is the first prediction mode, determining a prediction result set of a corresponding partition in the target image block according to the motion vector of the corresponding partition in the target image block; and if the prediction mode is a second prediction mode, determining a prediction result set of a corresponding partition in the target image block according to the target reference sampling points of the corresponding partition in the target image block.
Optionally, the determining a prediction result set of a corresponding partition in the target image block according to the motion vector of the corresponding partition in the target image block includes: determining a first motion vector and/or a second motion vector of a corresponding partition in the target image block according to the merging candidate list of the target image block; and determining a prediction result set of a corresponding partition in the target image block according to the first motion vector and/or the second motion vector.
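As a non-normative illustration of this step, the following sketch derives a first and/or second motion vector from a merge candidate list and builds the per-partition prediction result set by simple integer-pel motion compensation; the list layout, index arguments, and helper names are assumptions, not terms from the application.

```python
# A minimal sketch, not the application's normative process. ref_frame is a
# 2D numpy array; the displaced region is assumed to stay inside the frame.

def motion_compensate(ref_frame, x, y, w, h, mv):
    dx, dy = mv  # integer-pel displacement only, for simplicity
    return ref_frame[y + dy : y + dy + h, x + dx : x + dx + w]

def predict_partitions(merge_list, ref_frame, x, y, w, h, idx_first, idx_second=None):
    mv_first = merge_list[idx_first]               # first motion vector
    pred_first = motion_compensate(ref_frame, x, y, w, h, mv_first)
    pred_second = None
    if idx_second is not None:                     # second partition, if present
        mv_second = merge_list[idx_second]         # second motion vector
        pred_second = motion_compensate(ref_frame, x, y, w, h, mv_second)
    return pred_first, pred_second
```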
Optionally, the target reference sampling points comprise first reference sampling points and/or second reference sampling points.
Optionally, the method further includes at least one of the following (a non-normative sketch follows this list):
the first reference sampling points and the second reference sampling points are different;
a first reference sampling point is adjacent to the corresponding partition and not adjacent to the other partition;
a second reference sampling point is not adjacent to the corresponding partition and is adjacent to the other partition;
the positional relationship of the first reference sampling points relative to the corresponding partition differs from that of the second reference sampling points relative to the corresponding partition;
a first reference sampling point or a second reference sampling point may be a pixel that is adjacent to the coding tree unit containing the target image block but not adjacent to the target image block itself.
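The sketch below illustrates one way such a split could look, under assumed coordinates (block top-left sample at (0, 0), reference samples in the row above and the column to the left): neighboring reference samples are split into first and second sets by which side of the dividing line they fall on. The text's actual classification uses boundary sampling points and coordinate ranges; the sign test on `line_f` (any function returning the signed side of the dividing line) is a stand-in.

```python
# A minimal sketch under the assumptions stated above.

def classify_reference_samples(line_f, width, height):
    first, second = [], []
    top_row = [(x, -1) for x in range(2 * width)]    # samples above the block
    left_col = [(-1, y) for y in range(2 * height)]  # samples left of the block
    for (x, y) in top_row + left_col:
        if line_f(x + 0.5, y + 0.5) < 0:
            first.append((x, y))
        else:
            second.append((x, y))
    return first, second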
Optionally, the second prediction mode includes at least one type of second prediction mode, and if the prediction mode is the second prediction mode, the method further includes: and determining a second prediction mode of the target type used by the corresponding partition of the target image block from the second prediction modes of the at least one type, wherein the second prediction mode of the target type is used for determining a prediction result set of the target image block.
Optionally, the determining a prediction result set of a corresponding partition in the target image block according to the target reference sampling points of the corresponding partition in the target image block includes the following steps: s21: determining first reference sampling points and/or second reference sampling points of corresponding partitions in the target image block; s22: and determining a prediction result set of a corresponding partition in the target image block according to the first reference sampling point and/or the second reference sampling point.
Optionally, the step S21 includes: and determining first reference sampling points and/or second reference sampling points of corresponding partitions in the target image block according to the position relation between boundary sampling points of the corresponding partitions in the target image block and the target image block.
Optionally, the method further comprises: determining a boundary through which a dividing line used by the target image block passes according to a boundary mapping table; and determining the boundary sampling points according to the boundary passed by the dividing line.
Optionally, the determining the boundary sampling point according to the boundary passed by the dividing line includes at least one of: determining that the boundary sampling points comprise first boundary sampling points when a division line used by the target image block passes through a first boundary of the target image block; and when the division line used by the target image block passes through a second boundary of the target image block, determining that the boundary sampling points comprise second boundary sampling points.
Optionally, the boundary sampling points may be determined as follows: determining a dividing line equation according to the target division mode parameters of the target image block; and determining at least one boundary sampling point according to a boundary reference point and the dividing line equation.
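A minimal sketch of this step, assuming a GPM-style parameterization in which angleIdx selects an angle φ and distanceIdx an offset ρ, so the line is x·cos(φ) + y·sin(φ) − ρ = 0. The two small mapping dicts are illustrative placeholders, not the normative tables.

```python
import math

ANGLE_OF = {0: 0.0, 2: math.atan2(1, 4), 8: math.pi / 4}   # angleIdx -> phi (partial)
RHO_OF = {0: 0.0, 1: 1.5, 2: 3.0, 3: 4.5}                  # distanceIdx -> rho (partial)

def dividing_line(angle_idx, distance_idx):
    phi, rho = ANGLE_OF[angle_idx], RHO_OF[distance_idx]
    # f(x, y) is 0 on the line; its sign tells which side a point lies on.
    return lambda x, y: x * math.cos(phi) + y * math.sin(phi) - rho

def boundary_samples(line_f, width, height):
    # Walk the boundary row/column from a boundary reference point and keep
    # the positions where the line crosses, i.e. the boundary sampling points.
    points = []
    for x in range(width):                     # top boundary row
        if abs(line_f(x + 0.5, 0.5)) < 0.5:
            points.append((x, 0))
    for y in range(height):                    # left boundary column
        if abs(line_f(0.5, y + 0.5)) < 0.5:
            points.append((0, y))
    return points
```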
Optionally, the determining first reference sampling points and/or second reference sampling points of corresponding partitions in the target image block, according to the positional relationship between the boundary sampling points of the dividing line used by the target image block and the target image block, includes: determining a first coordinate range and/or a second coordinate range according to the at least one boundary sampling point; and determining the first reference sampling points and/or the second reference sampling points according to the first coordinate range and/or the second coordinate range.
Optionally, the step S21 includes: determining the partition range of the corresponding partition in the target image block according to the distance between the sampling point of the corresponding partition in the target image block and the partition line; determining first reference sampling points and/or second reference sampling points of corresponding partitions in the target image block based on the coordinate ranges of the first sampling points and the partition ranges; or determining the first sampling point as a first reference sampling point or a second reference sampling point of a corresponding partition in the target image block based on distance information between the first sampling point adjacent to the target image block and the dividing line.
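A minimal sketch of the distance-based partition-range determination, reusing a line function f(x, y) as above: each sample of the target image block is assigned to the first or second partition range by the sign of f, which is proportional to the signed distance to the dividing line for a normalized line equation.

```python
def partition_ranges(line_f, width, height):
    first, second = [], []
    for y in range(height):
        for x in range(width):
            if line_f(x + 0.5, y + 0.5) < 0:
                first.append((x, y))
            else:
                second.append((x, y))
    return first, second
```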
Optionally, the step S22 includes: padding the second reference sampling points according to the first reference sampling points to obtain padded reference sampling points; and determining a prediction result set of the corresponding partition of the target image block according to the first reference sampling points and the padded reference sampling points.
Optionally, the padding the second reference sampling points according to the first reference sampling points to obtain padded reference sampling points includes: determining at least one padding reference sampling point among the first reference sampling points; determining a padding value for the second reference sampling point based on the sample values of the at least one padding reference sampling point; and padding the second reference sampling point based on the padding value to obtain a padded reference sampling point.
Optionally, the determining a padding value for the second reference sampling point based on the sample values of the at least one padding reference sampling point includes: determining a padding weight for each padding reference sampling point based on the positional relationship between that padding reference sampling point and the second reference sampling point; and determining the padding value of the second reference sampling point based on the padding weights and the sample values of the respective padding reference sampling points.
Optionally, at least one of the following holds: the positional relationship comprises the distance between the respective padding reference sampling point and the second reference sampling point; the first reference sampling points include first reference sampling points adjacent to a first boundary of the target image block and/or first reference sampling points adjacent to a second boundary of the target image block; the at least one padding reference sampling point comprises a first padding reference sampling point among the first reference sampling points adjacent to the first boundary and/or a second padding reference sampling point among the first reference sampling points adjacent to the second boundary.
Optionally, the determining the padding value of the second reference sampling point based on the padding weights and the sample values of the respective padding reference sampling points includes: determining the padding value of a second reference sampling point adjacent to the corresponding boundary based on the sample value and padding weight of the first padding reference sampling point and/or the sample value and padding weight of the second padding reference sampling point.
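A minimal sketch, under an assumed inverse-distance weighting, of padding one second reference sample from two padding reference samples taken from the first reference samples. The text only says the weights depend on the positional relationship (e.g. distance); the exact weighting here is an illustrative choice.

```python
def pad_second_sample(pos, pad_a, pad_b):
    """pos: (x, y) of the second reference sample; pad_*: ((x, y), sample_value)."""
    x, y = pos
    (xa, ya), va = pad_a
    (xb, yb), vb = pad_b
    da = abs(x - xa) + abs(y - ya)              # L1 distance to padding sample a
    db = abs(x - xb) + abs(y - yb)              # L1 distance to padding sample b
    wa, wb = db / (da + db), da / (da + db)     # closer sample gets more weight
    return round(wa * va + wb * vb)             # padding value
```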
The application provides another image processing method which can be applied to an intelligent terminal and comprises the following steps:
s1: determining a target reference sampling point through a preset strategy;
s2: and determining a prediction result set according to the target reference sampling point and a preset prediction mode.
Optionally, the target reference sampling point is determined by a positional relationship between the reference sampling point and the sampling points in the image block partition.
Optionally, the step S1 includes: and determining a target reference sampling point according to the position relation between the reference sampling point and the sampling point in the image block partition.
Optionally, the determining a target reference sampling point by a position relationship between the reference sampling point and the sampling point in the image block partition includes: determining at least one boundary sample point from the sample points of the image block partition; and determining a target reference sampling point according to the position relation between the reference sampling point and the at least one boundary sampling point.
Optionally, the step S2 includes: determining a preset prediction mode according to the prediction mode indication information of the image block; and determining a prediction result set according to the preset prediction mode and the target reference sampling point.
Optionally, the preset prediction mode is at least one of: a prediction mode used by a partition divided by a partition line in an image block and/or a prediction mode used by an adjacent image block.
Optionally, the preset prediction mode is a prediction mode used by partitions divided by a partition line in the image block; determining a prediction result set according to the preset prediction mode and the target reference sampling point, wherein the determining the prediction result set comprises the following steps: if the prediction mode comprises a first prediction mode and/or a second prediction mode, determining a prediction result set of a partition corresponding to the image block according to the motion vector and/or the target reference sampling point; and determining a prediction result set of the image block according to the prediction result set of the partition corresponding to the image block.
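A minimal sketch, under an assumed GPM-like composition, of the last step above: combining the two partitions' prediction result sets into a prediction for the whole image block. The hard 0/1 split (no blending band along the dividing line) is a simplification of what a real codec does.

```python
import numpy as np

def combine_partition_predictions(pred_first, pred_second, line_f):
    h, w = pred_first.shape
    out = np.empty((h, w), dtype=pred_first.dtype)
    for y in range(h):
        for x in range(w):
            on_first_side = line_f(x + 0.5, y + 0.5) < 0
            out[y, x] = pred_first[y, x] if on_first_side else pred_second[y, x]
    return out
```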
Optionally, if the prediction mode includes a second prediction mode, the determining a prediction result set of the partition corresponding to the image block according to the target reference sampling points includes:
if the prediction mode includes a second prediction mode, padding the second reference sampling points according to the first reference sampling points to obtain padded reference sampling points;
and determining a prediction result set of the partition corresponding to the image block according to the first reference sampling points and the padded reference sampling points.
The application also provides an intelligent terminal, including a memory and a processor, wherein the memory stores an image processing program, and the image processing program, when executed by the processor, implements the steps of any of the image processing methods above.
The present application also provides a computer storage medium storing a computer program which, when executed by a processor, implements the steps of any of the image processing methods described above.
As described above, the image processing method of the present application, applicable to an intelligent terminal, includes: determining a prediction result set of the target image block according to a preset prediction mode, wherein the prediction result set is used to determine the prediction result of the target image block. With this technical solution, the preset prediction mode can be any preset type of prediction mode; its selection range is not limited, so it can be used with great flexibility. An accurate prediction result set of the corresponding partition in the target image block can be determined according to the preset prediction mode, and a prediction result of the target image block with small deviation can then be determined from that set, improving the accuracy of the prediction result. The application can therefore use a preset prediction mode flexibly while improving the accuracy of the prediction result, solving the problem of imbalance between the flexibility of prediction mode usage and the accuracy of the prediction results.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed for the description of the embodiments are briefly introduced below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic diagram of a hardware structure of an intelligent terminal implementing various embodiments of the present application;
fig. 2 is a communication network system architecture diagram according to an embodiment of the present application;
fig. 3 is a flowchart illustrating an image processing method according to the first embodiment;
FIG. 4a is a diagram illustrating an effect of dividing an image block by a dividing line according to the first embodiment;
FIG. 4b is a schematic diagram illustrating one angular division according to the first embodiment;
FIG. 4c is a schematic diagram showing a plurality of offsets corresponding to an angle φ_i according to the first embodiment;
FIG. 4d is a schematic diagram of a partitioning scheme shown according to the first embodiment;
fig. 5 is a flowchart illustrating an image processing method according to a second embodiment;
FIG. 6a is a diagram illustrating neighboring block locations for an exemplary spatial merge candidate list according to a second embodiment;
FIG. 6b is a diagram illustrating an exemplary merge candidate list according to the second embodiment;
FIG. 6c is a schematic diagram of an exemplary target reference sample point shown in accordance with the second embodiment;
FIG. 6d is a schematic diagram illustrating encoding of a target image block according to a second embodiment;
FIG. 6e is a diagram illustrating a distance analysis between a pixel and a partition line according to a second embodiment;
fig. 7 is a flowchart illustrating an image processing method according to a third embodiment;
fig. 8a is a diagram showing a case where a dividing line in a target image block passes through a boundary according to the third embodiment;
fig. 8b is a schematic diagram showing the positional relationship between the boundary reference points and the boundary sample points according to the third embodiment;
FIG. 8c is a schematic diagram illustrating a target reference sample point partition in accordance with a third embodiment;
FIG. 8d is a schematic diagram illustrating another division of target reference sampling points according to the third embodiment;
FIGS. 9a and 9b are schematic diagrams of some of the filled reference sample points shown according to a third embodiment;
FIG. 9c is a schematic diagram showing a method of using a filled reference sample point according to a third embodiment;
FIG. 9d is a schematic diagram showing another use of a padding reference sample point according to the third embodiment;
FIG. 9e is a schematic diagram showing yet another use of padding reference sampling points in accordance with the third embodiment;
fig. 9f is a schematic diagram of a wide-angle intra prediction mode according to the third embodiment;
fig. 10 is a flowchart illustrating an image processing method according to a fourth embodiment;
fig. 11 is a flowchart illustrating an image processing method according to a fifth embodiment;
fig. 12 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings. With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, reciting an element with the phrase "comprising a/an ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. Further, where similarly named elements, features, or components appear in different embodiments of the disclosure, they may have the same meaning or different meanings; the particular meaning should be determined by their interpretation in, or the context of, the specific embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "when," "upon," or "in response to a determination," depending on the context. Also, as used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or," "and/or," and "including at least one of the following" as used herein are to be construed as inclusive, meaning any one or any combination. For example, "includes at least one of: A, B, C" means "any of the following: A; B; C; A and B; A and C; B and C; A and B and C," and likewise "A, B or C" or "A, B and/or C" means the same. An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages that are not necessarily performed at the same time; they may be performed at different times and in different orders, alternately or in turns with other steps or with sub-steps or stages of other steps.
The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It should be noted that step numbers such as S501 and S502 are used herein to describe the corresponding content more clearly and briefly and do not constitute a substantive limitation on the sequence; those skilled in the art may perform S502 before S501 in a specific implementation, and such variations remain within the scope of the present application.
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In the following description, suffixes such as "module," "component," or "unit" used to denote elements are used only for convenience of description and have no specific meaning in themselves. Thus, "module," "component," and "unit" may be used interchangeably.
The smart terminal may be implemented in various forms. For example, the smart terminal described in the present application may include smart terminals such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and fixed terminals such as a Digital TV, a desktop computer, and the like.
The following description will be given taking a mobile terminal as an example, and it will be understood by those skilled in the art that the configuration according to the embodiment of the present application can be applied to a fixed type terminal in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present application, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000 ), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex-Long Term Evolution), TDD-LTE (Time Division duplex-Long Term Evolution, Time Division Long Term Evolution), 5G, and so on.
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user receive and send e-mails, browse web pages, access streaming media, and the like, providing wireless broadband Internet access. Although fig. 1 shows the WiFi module 102, it is understood that the module is not an essential part of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode, and the processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sound (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 for output. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Optionally, the light sensor includes an ambient light sensor that may adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing gestures of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometers and taps), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Alternatively, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Optionally, the touch detection device detects a touch orientation of a user, detects a signal caused by a touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Optionally, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited thereto.
Alternatively, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, and optionally, the program storage area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, etc. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, optionally, the application processor mainly handles operating systems, user interfaces, application programs, etc., and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the mobile terminal of the present application is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication Network system provided in an embodiment of the present application, where the communication Network system is an LTE system of a universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an IP service 204 of an operator, which are in communication connection in sequence.
Optionally, the UE201 may be the mobile terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Alternatively, the eNodeB2021 may be connected with other enodebs 2022 through a backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving gateway) 2034, a PGW (PDN gateway) 2035, and a PCRF (Policy and Charging Rules Function) 2036, and the like. Optionally, the MME2031 is a control node that handles signaling between the UE201 and the EPC203, providing bearer and connection management. HSS2032 is used to provide registers to manage functions such as home location register (not shown) and holds subscriber specific information about service characteristics, data rates, etc. All user data may be sent through SGW2034, PGW2035 may provide IP address assignment for UE201 and other functions, and PCRF2036 is a policy and charging control policy decision point for traffic data flow and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present application is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems (e.g. 5G), and the like.
Based on the above mobile terminal hardware structure and communication network system, various embodiments of the present application are provided.
For the sake of understanding, the following first explains the terms of art to which the embodiments of the present application may be related.
One, prediction mode
In the process of coding an image in a video, predicting an image block is an indispensable step: predicting the image block yields a prediction block, from which a residual block with smaller energy is constructed, so that fewer bits need to be transmitted. The prediction of the image block may be implemented by certain preset prediction modes, which may include inter prediction modes and intra prediction modes.
(1) Inter prediction mode: inter prediction uses the correlation between pixels of different images to remove temporal redundancy and generally achieves higher coding efficiency than intra prediction.
(2) Intra prediction mode: the intra prediction mode uses the correlation in the video spatial domain to predict the current pixel using neighboring encoded pixels in the same frame. For example, a current Coding Unit (CU) may use neighboring reconstructed pixels to predict pixels in the current CU. The intra prediction mode may be: direct Current (DC) mode, or PLANAR (PLANAR) mode, or angular mode.
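As a concrete illustration of the simplest of these, a minimal sketch of DC intra prediction follows: every pixel of the current block is predicted as the mean of the reconstructed neighbors above and to the left. This is simplified relative to any codec's normative DC mode.

```python
import numpy as np

def dc_predict(top_row, left_col, width, height):
    # Mean of the reconstructed neighboring samples above and to the left.
    neighbors = np.concatenate([np.asarray(top_row), np.asarray(left_col)])
    dc = int(round(neighbors.mean()))
    return np.full((height, width), dc, dtype=np.int32)
```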
Two, partition mode
In order to better encode the boundary parts of moving objects in an image, the image can be divided into different areas for prediction. For this purpose, the next-generation video compression standard VVC introduces a Geometric Partitioning Mode (GPM). GPM can partition the boundary of a moving object more finely: a dividing line is fitted to the object boundary, so that an edge Coding Unit of the moving object is divided into rectangular or non-rectangular sub-coding units for prediction, and the prediction value of the whole Coding Unit is obtained.
First embodiment
Referring to fig. 3, fig. 3 is a schematic flowchart of an image processing method according to the first embodiment. The execution subject in this embodiment may be a computer device or a cluster formed by a plurality of computer devices; the computer device may be an intelligent terminal (such as the aforementioned mobile terminal 100) or a server. Here, the description takes an intelligent terminal as the execution subject as an example.
S301, determining a prediction result set of a corresponding partition in the target image block according to a preset prediction mode.
The target image block refers to an image block currently being encoded in an input video image (i.e., a video frame), and may be referred to simply as a current block or a current image block or a current encoding block. The target image block may be a Coding Tree Unit (CTU) in the input video image, or a Coding Unit (CU), or a Transform Unit (TU), and so on. This is not a limitation. The target image block may be a square block (i.e., the size of the image block is square) or a non-square block. A non-square block may be a rectangular sized image block including horizontal blocks (width greater than height) and/or vertical blocks (height greater than width), for example, when the target image block is a CU, the CU may be a square block or a non-square block. And are not intended to be limiting herein.
Optionally, the target image block includes a first partition and/or a second partition, the first partition and/or the second partition being image areas divided by a dividing line. The corresponding partition in the target image block may be the first partition or the second partition. The first partition and/or the second partition are rectangular or non-rectangular areas in the target image block, and the designations "first" and "second" are relative. In an embodiment, the first partition and the second partition are rectangular, triangular, or trapezoidal regions of the target image block obtained via the GPM mode. For example, as shown in (1) in fig. 4a, the target image block is divided by a horizontal dividing line; the image area above the line may be called the first partition and the image area below it the second partition, or vice versa, and both partitions are rectangular regions. As shown in (2) of fig. 4a, of the two partitions obtained by dividing the target image block with a dividing line, one is a triangular region and the other is a non-rectangular region. The corresponding partition in the target image block refers to a partition included in the target image block, i.e., an image area within it.
Optionally, the preset prediction mode includes a prediction mode used by a partition divided by a partition line in the target image block. The partitions divided by the dividing lines in the target image block comprise a first partition and/or a second partition, and correspondingly, the preset prediction mode comprises a prediction mode used by the first partition and/or a prediction mode used by the second partition in the target image block.
In an embodiment, the prediction mode used by a neighboring image block may be used as the prediction mode for the partitions divided by the dividing line in the target image block. Alternatively, among the prediction modes used by at least one neighboring image block, the most frequently used prediction mode may serve as the prediction mode for those partitions. In another embodiment, the prediction mode used by the partitions divided by the dividing line in the target image block may be obtained by other means. A neighboring image block is an encoded image block adjacent to the target image block, and there may be one or more neighboring image blocks. When there is one neighboring image block, its prediction mode may be directly determined as the preset prediction mode; when there are several, the number of times each prediction mode is used may be counted, and the most-used mode determined as the preset prediction mode. In the above embodiments, the adopted preset prediction mode does not need to be determined by calculating a rate-distortion cost, although it may also be determined that way. Owing to the correlation between neighboring image blocks and the target image block, referring to the prediction modes of the neighboring image blocks allows the preset prediction mode used by the current coding block to maintain coding quality while improving coding efficiency. In other embodiments, the prediction mode used by the partitions divided by the dividing line in the target image block may be determined by rate-distortion cost. In this way, the preset prediction mode matches the corresponding partition in the target image block more closely, which effectively ensures the accuracy of the prediction result and improves accuracy compared with the former two approaches. Different partitions may correspond to different preset prediction modes; for example, the first partition and the second partition in the target image block may use different prediction modes.
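A minimal sketch of the neighbor-based selection described above: if one coded neighboring block exists, take its mode directly; if several exist, take the most frequently used mode, with no rate-distortion computation. The fallback value is an illustrative assumption.

```python
from collections import Counter

def preset_mode_from_neighbors(neighbor_modes):
    if not neighbor_modes:
        return None                       # no coded neighbors: decide elsewhere
    if len(neighbor_modes) == 1:
        return neighbor_modes[0]          # single neighbor: use its mode directly
    return Counter(neighbor_modes).most_common(1)[0][0]
```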
In one implementation, when the preset prediction mode includes a prediction mode used by a partition corresponding to the target image block, the determination process for the preset prediction mode includes the following 1) and 2):
1) Determining the target division mode of the target image block according to the rate-distortion cost of applying each division mode included in the first division mode to the target image block.
For this step, the rate-distortion cost of applying each division mode included in the first division mode to the target image block may first be determined, and the division mode with the smallest rate-distortion cost is then determined as the target division mode used for the partitions divided by the dividing line in the target image block. That is, all division modes included in the first division mode may be traversed and the rate-distortion cost of each determined. To achieve optimal coding performance, one seeks to reduce video distortion as much as possible at a given code rate, or to compress the video as much as possible within an allowed distortion range; the rate-distortion costs of the respective division modes are therefore compared, the division mode with the minimum cost is selected as the target division mode of the target image block, and that mode is used to divide the target image block into different partitions so as to code it optimally (a code sketch of this traversal is given after step 2) below).
2) Determining a target division mode parameter corresponding to the target division mode according to the first division mode parameter set. Optionally, the mode parameter corresponding to the target division mode may be looked up in the first division mode parameter set and used as the target division mode parameter of the target image block; the target image block may then be divided and predicted according to the target division mode parameter to obtain its prediction result.
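A minimal sketch of step 1)'s traversal: try every division mode in the first division mode set on the target image block and keep the one with the smallest rate-distortion cost. `rd_cost` is a hypothetical callable (e.g. distortion + lambda * rate), not an API from the application.

```python
def select_target_division_mode(block, division_modes, rd_cost):
    best_mode, best_cost = None, float("inf")
    for mode in division_modes:           # e.g. the 64 GPM division modes
        cost = rd_cost(block, mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```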
For ease of understanding, the following describes the above process taking the first division mode as the GPM mode, the division mode as any of the 64 division modes of GPM, the first division mode parameter set as the GPM mapping table, and the target division mode parameter as the GPM parameters. First, the encoder determines a color component (a luminance component and/or a chrominance component) of the current block. It then performs predictive coding on the color component with a plurality of prediction modes (intra and/or inter prediction modes) based on the parameters of the current block, calculates the rate-distortion cost of each prediction mode, and determines the minimum rate-distortion cost among them. Finally, the prediction mode with the minimum rate-distortion cost is determined as the prediction mode parameter of the current block; when that mode is the GPM mode, the GPM mode is determined as the prediction mode parameter of the current block. The prediction mode parameters corresponding to the GPM mode are binarized and packed into a bitstream for transmission.
By traversing the 64 division modes corresponding to the GPM mode, the division mode with the minimum rate-distortion cost can be determined and used as the target division mode of the current block. Optionally, according to the target division mode, the target GPM partition index merge_gpm_partition_idxT, the target angle index angleIdxT, and the target distance index distanceIdxT corresponding to it are determined via the parameter mapping table of the GPM partition index gpm_partition_idx, the angle index angleIdx, and the distance index distanceIdx. The GPM mapping table is shown in Table 1 below.
Table 1 GPM mapping table

| gpm_partition_idx | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
| angleIdx | 0 | 0 | 2 | 2 | 2 | 2 | 3 | 3 | 3 | 3 | 4 | 4 | 4 | 4 | 5 | 5 |
| distanceIdx | 1 | 3 | 0 | 1 | 2 | 3 | 0 | 1 | 2 | 3 | 0 | 1 | 2 | 3 | 0 | 1 |

| gpm_partition_idx | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 |
| angleIdx | 5 | 5 | 8 | 8 | 11 | 11 | 11 | 11 | 12 | 12 | 12 | 12 | 13 | 13 | 13 | 13 |
| distanceIdx | 2 | 3 | 1 | 3 | 0 | 1 | 2 | 3 | 0 | 1 | 2 | 3 | 0 | 1 | 2 | 3 |

| gpm_partition_idx | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 |
| angleIdx | 14 | 14 | 14 | 14 | 16 | 16 | 18 | 18 | 18 | 19 | 19 | 19 | 20 | 20 | 20 | 21 |
| distanceIdx | 0 | 1 | 2 | 3 | 1 | 3 | 1 | 2 | 3 | 1 | 2 | 3 | 1 | 2 | 3 | 1 |

| gpm_partition_idx | 48 | 49 | 50 | 51 | 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | 62 | 63 |
| angleIdx | 21 | 21 | 24 | 24 | 27 | 27 | 27 | 28 | 28 | 28 | 29 | 29 | 29 | 30 | 30 | 30 |
| distanceIdx | 2 | 3 | 1 | 3 | 1 | 2 | 3 | 1 | 2 | 3 | 1 | 2 | 3 | 1 | 2 | 3 |
As shown in Table 1, the GPM mode includes 64 division modes, each corresponding to one dividing line. The GPM parameters include the partition index gpm_partition_idx, the angle index angleIdx, and the distance index distanceIdx. Different values of the angle index angleIdx correspond to the different angles φ_i shown in FIG. 4b (optionally, i ranges from 1 to 24), and different values of the distance index distanceIdx correspond to the offsets ρ_j in FIG. 4c (optionally, j ranges from 0 to 3). Different combinations of angleIdx and distanceIdx values thus form different division modes, for example as shown in FIG. 4d.
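A minimal sketch of the Table 1 lookup: map gpm_partition_idx to the (angleIdx, distanceIdx) pair that parameterizes the dividing line. Only the first eight entries of Table 1 are transcribed here; the rest follow the table.

```python
GPM_TABLE = {
    0: (0, 1), 1: (0, 3), 2: (2, 0), 3: (2, 1),
    4: (2, 2), 5: (2, 3), 6: (3, 0), 7: (3, 1),
    # ... entries 8-63 as in Table 1
}

def gpm_params(gpm_partition_idx):
    return GPM_TABLE[gpm_partition_idx]   # -> (angleIdx, distanceIdx)
```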
In one embodiment, a lookup table may be used to represent the relationship between the partition index gpm_partition_idx and the angle index angleIdx and distance index distanceIdx, for example as shown in Table 2 below.
TABLE 2 Look-up table

| gpm_partition_idx | 0 | 1 | 2 | 3 | 4 | ... | 59 | 60 | 61 | 62 | 63 |
| angleIdx | a1 | a2 | a3 | a4 | a5 | ... | a59 | a60 | a61 | a62 | a63 |
| distanceIdx | ρ0 | ρ1 | ρ2 | ρ3 | ρ0 | ... | ρ1 | ρ2 | ρ3 | ρ0 | ρ1 |
Optionally, the angle index angleIdx corresponds to a trigonometric value of the angle, and the distance index distanceIdx corresponds to ρ_j. In one embodiment, a mapping table between angleIdx and cos(φ) may be set, as shown in Table 3 below. In another embodiment, a mapping table between angleIdx and an intermediate variable of cos(φ) may also be set.
TABLE 3 Angle mapping table

| angleIdx | 0 | 1 | 2 | ... | n |
| cos(φ) | a1 | a2 | a3 | ... | an |
In another embodiment, a mapping table between the angle index angleIdx and the slope may also be provided. For example, a set of fixed slopes {slope0, slope1, slope2, ...} is used to construct an angle table with unequal spacing.
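As an illustration of this slope-based variant, the sketch below derives an unequally spaced angle table from a few fixed slopes; the slope values themselves are hypothetical.

```python
import math

# Hypothetical fixed slopes; mapping each through atan yields angle
# entries whose spacing is non-uniform, as described above.
SLOPES = [0.0, 0.25, 0.5, 1.0, 2.0, 4.0]
ANGLE_TABLE = [math.atan(s) for s in SLOPES]  # radians, unequal spacing
```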
Since the prediction mode parameters can be packed into a bitstream for transmission, the prediction mode parameter of the target image block, which is parameter information indicating the prediction mode used by the target image block, can be determined at the decoding end by parsing the bitstream. If the prediction mode parameter is a target partition mode parameter indicating that the first partition mode is used, the specific type of the first partition mode can be determined from the target partition mode parameter. Optionally, the target partition mode parameter includes at least one of a partition index, an angle index, and a distance index, and further includes prediction mode indication information for indicating the prediction mode used by the corresponding partition in the target image block. In other words, at the decoding end, the prediction mode used by a partition included in the target image block can be determined according to the prediction mode indication information included in the target partition mode parameter.
Optionally, the prediction mode indication information includes indication information of the prediction mode used by the first partition and/or indication information of the prediction mode used by the second partition. Each piece of indication information may be a flag or an index of the prediction mode type used at the encoding end by the corresponding partition of the target image block.
In one implementation, the prediction modes include intra-prediction modes and/or inter-prediction modes. If the predetermined prediction mode is the intra prediction mode, the pixels in the current image block may be predicted by using the encoded pixels of the corresponding partition in the target image block (e.g., the angular prediction mode), or the color component of the current image block may be predicted by using the encoded luminance component, for example, the pixels of the chrominance component are predicted by using the pixels of the Y component in the CCLM (cross component linear model) mode. If the preset prediction mode is an inter-frame prediction mode, the motion vector can be determined by using a reference image of the image where the target image block is located, and prediction is performed based on the motion vector to obtain a prediction result.
Optionally, a motion vector index indicating the motion vector used by the encoding end is encoded in the bitstream. The motion vectors include a first motion vector and/or a second motion vector, and the motion vector indices are, for example, gpm_idx0[x0][y0] and gpm_idx1[x0][y0]. Note that gpm_idx0[x0][y0] and gpm_idx1[x0][y0] may also be signaled in the merge data merge_data(). gpm_idx0[x0][y0] represents the position of the first motion vector in the merge candidate list, and gpm_idx1[x0][y0] represents the position of the second motion vector in the merge candidate list. Thus, when the prediction mode parameter is the target partition mode parameter, the target partition mode parameter further includes the motion vector index.
Optionally, the motion vector index includes a first motion vector index and/or a second motion vector index, corresponding to a first motion vector and/or a second motion vector, which may be required when the inter prediction mode is used for the corresponding partition of the target image block. Illustratively, when the prediction mode indication information indicates that the first partition adopts the inter prediction mode and the second partition adopts the intra prediction mode, the first motion vector and/or the second motion vector are the motion vectors of the first partition; when the prediction mode indication information indicates that the second partition adopts the inter prediction mode and the first partition adopts the intra prediction mode, the first motion vector and/or the second motion vector are the motion vectors of the second partition.
As for how the prediction result set of the corresponding partition in the target image block is determined, it may be determined using the prediction mode of the corresponding partition of the target image block.
Optionally, the prediction result set includes a first prediction result set of the first partition and/or a second prediction result set of the second partition. Specifically, one may: determine a first prediction result set according to the prediction mode used by the first partition in the target image block, and/or determine a second prediction result set according to the prediction mode used by the second partition in the target image block. When the target image block includes the first partition and the second partition, the prediction result set of the target image block includes the first prediction result set and the second prediction result set. It should be noted that the first prediction result set of the first partition (or the second prediction result set of the second partition) is obtained by predicting the data in the first partition (or the second partition) according to the preset prediction mode; in the subsequent determination of the prediction result of the target image block, the first and second prediction result sets may be shared by the first and second partitions.
In one embodiment, the prediction result set of the corresponding partition of the target image block is used to determine the prediction result of the target image block. Optionally, the prediction result of the target image block may be determined according to the first prediction result set of the first partition and/or the second prediction result set of the second partition. For the specific determination of the prediction result, reference may be made to the following description of the embodiments, which will not be described in detail herein.
In summary, the image processing scheme provided in the embodiment of the present application predicts the target image block through the preset prediction mode and determines the prediction result set of the corresponding partition of the target image block, so as to obtain the prediction result. In selecting the preset prediction mode, image blocks adjacent to the target image block can be referred to, or rate-distortion cost calculation can be adopted, and any prediction mode meeting the conditions can serve as the preset prediction mode; the selection range of the preset prediction mode is therefore large, and the flexibility with which the corresponding partitions of the target image block can be matched is high. Moreover, since the prediction result set of the corresponding partition in the target image block is determined according to the preset prediction mode, integrating the prediction result sets corresponding to different partitions ensures the accuracy of the prediction result of the target image block. Therefore, the flexibility of the preset prediction mode and the accuracy of the prediction result can be well balanced.
Second embodiment
Referring to fig. 5, fig. 5 is a flowchart illustrating an image processing method according to the second embodiment. The execution subject in this embodiment may be a computer device or a cluster formed by a plurality of computer devices, and the computer device may be an intelligent terminal (such as the aforementioned mobile terminal 100) or a server; here, the execution subject is described by taking an intelligent terminal as an example.
When the preset prediction mode is a prediction mode used by partitions divided by a division line in the target image block, the prediction result set of the corresponding partition in the target image block may be determined according to the contents described in S501 and S502 below. Optionally, the prediction modes comprise a first prediction mode and/or a second prediction mode. Since the prediction mode used by the corresponding partition may be the first prediction mode or the second prediction mode, the preset prediction mode may be the first prediction mode or the second prediction mode.
S501, if the prediction mode is the first prediction mode, determining a prediction result set of a corresponding partition in the target image block according to the motion vector of the corresponding partition in the target image block.
Optionally, the first prediction mode is an inter prediction mode. In this case, the prediction mode used by the partitions divided by the dividing line in the target image block is an inter prediction mode, and the partitions into which the target image block is divided by the dividing line include the first partition and/or the second partition. The corresponding partition of the target image block is a partition using the inter prediction mode, and may include the first partition and/or the second partition; in other words, the first partition in the target image block uses the inter prediction mode, or the second partition uses the inter prediction mode, or both the first partition and the second partition use the inter prediction mode.
In the inter prediction mode, a set of predictors of corresponding partitions in the target image block may be determined based on the motion vectors. Optionally, when the inter prediction mode is used for a first partition in the target image block, the first set of predictors is determined from motion vectors of the first partition, and/or, when the inter prediction mode is used for a second partition in the target image block, the second set of predictors is determined from motion vectors of the second partition. In one embodiment, there may be the steps of: determining a first motion vector and/or a second motion vector of a corresponding partition in the target image block according to the merging candidate list of the target image block; and determining a prediction result set of a corresponding partition in the target image block according to the first motion vector and/or the second motion vector.
Before determining the motion vector, there may be the following steps: and constructing a merging candidate list of the target image block. Optionally, the merge candidate list is derived based on a spatial merge candidate list. The merge candidate list is an inter prediction candidate list that may be used to determine the first motion vector and/or the second motion vector in the case of a uni-directional prediction candidate or a bi-directional prediction candidate.
The following describes a process of constructing a merging candidate list of target image blocks.
Referring to fig. 6a, fig. 6a is a schematic diagram illustrating the neighboring block positions of an exemplary spatial merge candidate list according to an embodiment of the present disclosure. At most 4 candidate motion vectors can be selected from the spatial merge candidate list, and the construction order is: the motion vector information of the upper neighboring block B1, the left neighboring block A1, the upper-right neighboring block B0, the lower-left neighboring block A0, and the upper-left neighboring block B2, in sequence. It should be noted that B2 is considered only when one of the other positions is unavailable. After adding the lower-left neighboring block A0, redundancy detection is needed to ensure that candidates in the list do not carry the same motion information. In addition, the historical reference block (his), the average motion vector (avg) of the first and second candidate motion vectors, and the zero motion vector (0) may be added to the merge candidate list. Referring to fig. 6b, fig. 6b is a schematic diagram illustrating an exemplary merge candidate list according to an embodiment of the present disclosure. The merge candidate list includes the motion information of the 5 neighboring blocks shown in fig. 6a, with sequence numbers 0, 1, 2, 3, and 4, respectively, and each entry includes bi-directional predicted motion vector information, i.e., motion vector information corresponding to list 0 and list 1, respectively.
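A simplified sketch of this construction follows; each candidate is reduced to a single (mvx, mvy) tuple for readability, whereas real candidates carry the bi-directional list 0/list 1 information described above.

```python
# Simplified merge-candidate-list construction; a real derivation keeps
# list0/list1 motion info per candidate and applies more pruning rules.
def build_merge_list(b1, a1, b0, a0, b2, history, max_size=5):
    """Inputs are (mvx, mvy) tuples, or None when a position is unavailable."""
    spatial = [b1, a1, b0, a0]
    if any(c is None for c in spatial):       # B2 only if another is missing
        spatial.append(b2)
    candidates = []
    for mv in spatial + list(history):
        if mv is not None and mv not in candidates:   # redundancy check
            candidates.append(mv)
    if len(candidates) >= 2:                  # average of the first two
        avg = tuple((a + b) / 2 for a, b in zip(candidates[0], candidates[1]))
        if avg not in candidates:
            candidates.append(avg)
    while len(candidates) < max_size:         # pad with zero motion vectors
        candidates.append((0, 0))
    return candidates[:max_size]
```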
In one embodiment, the inter prediction modes include a unidirectional prediction mode and a bidirectional prediction mode. A partition in the target image block that adopts the inter prediction mode may use either, and which one is used may be determined by indication information (e.g., a flag or an index) of unidirectional or bidirectional prediction for that partition. Thus, it may be determined from the indication information whether a partition adopts the unidirectional or the bidirectional prediction mode, and then a unidirectional motion vector (the first motion vector or the second motion vector) or a bidirectional motion vector (the first motion vector and the second motion vector) of the partition is determined from the merge candidate list: when the first partition (or the second partition) in the target image block carries unidirectional prediction indication information, it is determined that the first partition (or the second partition) uses the unidirectional prediction mode, and a first motion vector or a second motion vector of that partition is determined from the merge candidate list; when the first partition (or the second partition) carries bidirectional prediction indication information, it is determined that it uses the bidirectional prediction mode, and a first motion vector and a second motion vector of that partition are determined from the merge candidate list.
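The sketch below shows how such indication information could steer the lookup; the flag and index parameters are hypothetical names, not bitstream syntax.

```python
# Hypothetical resolution of a partition's motion vector(s) from the
# merge candidate list according to uni/bi-directional indication info.
def partition_motion_vectors(merge_list, is_bidirectional, idx0, idx1=None):
    if is_bidirectional:                  # first and second motion vectors
        return merge_list[idx0], merge_list[idx1]
    return (merge_list[idx0],)            # a single unidirectional vector
```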
The first motion vector and/or the second motion vector are inter prediction motion vectors. After the first motion vector and/or the second motion vector of the corresponding partition in the target image block are obtained from the merge candidate list, motion compensation can be performed according to them, and the inter prediction value of the target image block with respect to the inter prediction motion vector is determined, thereby obtaining a prediction result set that includes the inter prediction value.
When the target image block includes the first partition and the second partition and both use the inter prediction mode, the prediction result set of the target image block may be determined for each partition in the above manner. When the prediction result set includes the first prediction result set and the second prediction result set, that is: the first prediction result set of the target image block is determined according to the motion vector of the first partition, and the second prediction result set according to the motion vector of the second partition. In this case, both prediction result sets are obtained by inter prediction.
S502, if the prediction mode is the second prediction mode, determining a prediction result set of a corresponding partition in the target image block according to the target reference sampling points of the corresponding partition in the target image block.
Optionally, the second prediction mode is an intra prediction mode. At the decoding end, whether the prediction mode used by the corresponding partition of the target image block is the first prediction mode or the second prediction mode is determined based on the prediction mode indication information.
Optionally, the second prediction mode includes at least one type of second prediction mode. The second prediction mode is an intra prediction mode and includes a plurality of types of intra prediction modes. For example, in VVC, in order to capture any edge direction present in natural video, more prediction directions were added to the intra prediction modes so as to improve the accuracy of intra prediction. Compared with the intra prediction modes in HEVC, the angular prediction modes in VVC are extended to 65, while the Planar mode and the DC mode are retained, for a total of 67 types of intra prediction modes, as detailed in Table 4 below.
TABLE 4 Intra prediction modes

| Intra prediction mode | Associated name |
| 0 | Planar mode (INTRA_PLANAR) |
| 1 | DC mode (INTRA_DC) |
| 2..66 | Angular modes (INTRA_ANGULAR2..INTRA_ANGULAR66) |
The at least one type of intra prediction mode includes the Planar mode, the DC mode, and the angular prediction modes. The Planar mode handles gradually changing, smooth texture areas; the DC mode is suitable for large flat areas; the angular prediction modes range from angular prediction mode 2 to angular prediction mode 66, with different prediction directions so as to better adapt to textures in different directions in the video content.
In one implementation, when the prediction mode is the second prediction mode, the method may further include: and determining a second prediction mode of the target type used by the corresponding partition of the target image block from the second prediction modes of the at least one type.
The partitions of the target image block include the first partition and/or the second partition. The second prediction mode may be indicated by the prediction mode indication information; when the prediction mode indication information of the first partition indicates that the prediction mode it uses is the intra prediction mode, the specific target type of intra prediction mode can then be determined. Optionally, when the second prediction mode is an intra prediction mode, the second prediction mode of the target type may be any one of the Planar mode, the DC mode, and the 65 angular prediction modes.
If the prediction mode of the corresponding partition of the target image block is the intra prediction mode, the intra prediction mode of the target type used by that partition is determined based on the intra most probable mode index. The intra most probable mode index is the index of the target type of intra prediction mode that is most likely to be used, and may be determined by constructing a Most Probable Mode (MPM) list. The MPM list in VVC contains 6 prediction modes; if the prediction mode of the current block is in the MPM list, only its index in the list, i.e., the intra most probable mode index, needs to be encoded. Constructing the most probable mode list can effectively reduce the data volume and improve the coding efficiency.
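The sketch below illustrates this signalling idea under the assumption of a 6-entry MPM list; the example list contents are made up, not derived from real neighboring blocks.

```python
# Sketch of MPM-based intra mode signalling with an assumed MPM list.
def encode_intra_mode(mode, mpm_list):
    """Return (in_mpm, payload): the intra most probable mode index when
    the mode is in the MPM list, otherwise the mode value itself."""
    if mode in mpm_list:
        return True, mpm_list.index(mode)
    return False, mode

mpm = [0, 1, 50, 18, 46, 54]          # hypothetical 6-mode MPM list
print(encode_intra_mode(50, mpm))     # -> (True, 2): a short index to code
```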
Optionally, the second prediction mode of the target type is used to determine the prediction result set of the target image block. Under the second prediction mode of the target type, the prediction result set of the target image block is determined from the target reference sampling points using the processing defined for that mode, and the prediction result of the target image block is then determined. For example, if the second prediction mode of the target type is the Planar mode, a horizontal linear filter and a vertical linear filter may be used to average the respectively adjacent pixel points in the horizontal and vertical directions (i.e., the target reference sampling points) to obtain the prediction result set.
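As an illustration of the Planar case, the floating-point sketch below averages a horizontal and a vertical linear interpolation over the neighboring encoded samples; it is a simplification of the integer arithmetic a real codec would use.

```python
import numpy as np

# Simplified Planar prediction: each sample averages a horizontal and a
# vertical linear interpolation between neighboring encoded samples.
def planar_predict(top, left, top_right, bottom_left):
    """top: W samples above the block, left: H samples to its left;
    returns an H x W prediction block (floats)."""
    h, w = len(left), len(top)
    pred = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            horiz = ((w - 1 - x) * left[y] + (x + 1) * top_right) / w
            vert = ((h - 1 - y) * top[x] + (y + 1) * bottom_left) / h
            pred[y, x] = (horiz + vert) / 2
    return pred
```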
In the intra-frame prediction mode, the current pixel can be predicted according to the adjacent coded pixels, so that the target reference sampling point refers to the coded pixel point adjacent to at least one partition in the target image block, and the sampling value of the reference sampling point is the pixel value.
Optionally, the target reference sampling points comprise first reference sampling points and/or second reference sampling points, the first and second reference sampling points being different. The selection of the first and/or second reference sampling points differs for different partitions in the target image block. Optionally, the first reference sampling points and the second reference sampling points differ in one of the following ways:
(1) The first reference sampling point is adjacent to the corresponding partition and not adjacent to the other partition, while the second reference sampling point is not adjacent to the corresponding partition and is adjacent to the other partition.
When the corresponding partition in the target image block is the first partition, the first reference sampling point is adjacent to the first partition and not adjacent to the second partition, and the second reference sampling point is adjacent to the second partition and not adjacent to the first partition; when the corresponding partition is the second partition, the first reference sampling point is adjacent to the second partition and not adjacent to the first partition, and the second reference sampling point is adjacent to the first partition and not adjacent to the second partition. Please refer to fig. 6c, which illustrates exemplary target reference sampling points. As shown in (1) of fig. 6c, the first reference sampling points (or the second reference sampling points) differ for different partitions: the first reference sampling points may be reference sampling points adjacent to the corresponding partition, and the second reference sampling points may be reference sampling points adjacent to the other partition of the target image block rather than to the corresponding partition.
Optionally, the first or second reference sample points further include reference sample points that are not adjacent to the corresponding partition in addition to the reference sample points that are adjacent to the corresponding partition. As shown in (2) of fig. 6c, when the corresponding partition in the target image block is a first partition, the first reference sample points include reference sample points that are adjacent to the first partition and are not adjacent to a second partition, the second reference sample points include reference sample points that are adjacent to the second partition and are not adjacent to the first partition, and reference sample points that are not adjacent to both the first partition and the second partition; when the corresponding partition in the target image block is the second partition, the first reference sampling points comprise reference sampling points adjacent to the second partition and not adjacent to the first partition, and reference sampling points not adjacent to the first partition and the second partition, and the second reference sampling points comprise reference sampling points adjacent to the first partition and not adjacent to the second partition.
(2) The positional relationship of the first reference sampling point with respect to the corresponding partition is different from the positional relationship of the second reference sampling point with respect to the corresponding partition.
The positional relationship of a target reference sampling point with respect to the corresponding partition includes the adjacency relationship and/or the distance between the target reference sampling point and the corresponding partition. When the corresponding partition in the target image block is the first partition, the positional relationship of the first reference sampling point with respect to the first partition differs from that of the second reference sampling point; optionally, the positional relationship may be the content described in item (1), i.e., whether the sampling point is adjacent to the partition. The positional relationship can also be expressed by the distance between the target reference sampling point and the corresponding partition: when the corresponding partition is the first partition, the shortest distance between the first reference sampling point and the first partition is smaller than the shortest distance between the second reference sampling point and the first partition.
(3) The first reference sampling point or the second reference sampling point is a pixel point that is adjacent to the coding tree unit where the target image block is located but not adjacent to the target image block itself.
Here, the target image block is a coding unit, which may be any coding unit currently being coded within a coding tree unit. In VVC encoding, a frame of image may be divided into a plurality of coding tree units that are coded in sequence, and each coding tree unit may be divided into a plurality of coding units that are coded in sequence. When the coding of the coding units in one coding tree unit ends, the coding units in the next coding tree unit are then coded, as shown in fig. 6d. When the target image block is the first coding unit coded in the coding tree unit, the target reference sampling points include first reference sampling points and second reference sampling points adjacent to the coding tree unit; when the target image block is not the first coding unit coded in the coding tree unit, the target reference sampling points include reference sampling points adjacent to the coding tree unit and/or reference sampling points adjacent to the coding unit, and, as shown in (1) of fig. 6c, the first and second reference sampling points included in the target reference sampling points are both adjacent to the coding unit. The reference sampling points are all encoded pixel points; since the luminance and chrominance values of two adjacent pixels are often close, i.e., colors change gradually, this correlation can be exploited for compression, effectively removing spatial redundancy in the video. For the specific determination of the target reference sampling points, reference may be made to the following embodiments, which will not be detailed here.
In another embodiment, if the first partition of the target image block uses the first prediction mode and the second partition uses the second prediction mode, the operations described in S501 and S502 may be performed simultaneously, with the corresponding partition in S501 being the first partition and the corresponding partition in S502 being the second partition. Similarly, if the second partition uses the first prediction mode and the first partition uses the second prediction mode, S501 and S502 may also be performed simultaneously, with the corresponding partition in S501 being the second partition and the corresponding partition in S502 being the first partition. Optionally, the first prediction mode is an inter prediction mode and may include a bidirectional or a unidirectional prediction mode; a partition using inter prediction may perform unidirectional or bidirectional motion compensation to obtain its prediction value. The second prediction mode is an intra prediction mode, which includes multiple angular prediction modes, the Planar mode, and the DC mode; when the partition mode is used for the target image block, the partitions it produces may use combinations of these prediction modes. The above are only reference examples; to avoid redundancy they are not listed one by one, and in actual development or application they can be flexibly combined according to actual needs, any such combination belonging to the technical solution of the present application and falling within its protection scope.
In one embodiment, the prediction result set of the corresponding partition of the target image block is used to determine the prediction result of the target image block. Optionally, this involves the following steps 1) and 2):
1) Determine the partition weight based on the target partition mode parameter of the target image block.
Optionally, the partition weight is determined based on the angle index and the distance index included in the target partition mode parameter. The partition weight refers to the weights corresponding to the two partitions included in the image block; it is used to weight each pixel point in the corresponding partition to obtain a weighted prediction pixel value, denoted predSamples, and the weighted prediction pixel value of the corresponding partition can serve as the prediction result.
The linear equation of the dividing line can be obtained from the angle index angleIdx and the distance index distanceIdx: optionally, cos(φ) and sin(φ) in the equation are determined from the angle index angleIdx, and ρ is determined from the distance index distanceIdx. Reconstructed from these definitions in Hesse normal form, the dividing line equation is

$$x \cos(\varphi) + y \sin(\varphi) - \rho = 0$$

Optionally, (x_c, y_c) denotes the coordinates of any sampling point in the target image block.

As shown in fig. 6e, which is a schematic diagram of the distance analysis between a pixel point and the dividing line according to an embodiment of the present disclosure: if a pixel point (x_c, y_c) is a pixel in the current block, then based on the above linear equation its distance to the dividing line is

$$d(x_c, y_c) = x_c \cos(\varphi) + y_c \sin(\varphi) - \rho$$

When ρ is 0, the dividing line is as shown in fig. 4b described above.
Therefore, the distance between each pixel point in the target image block and the dividing line can be determined from the angle index and the distance index included in the target partition mode parameter, and the weight corresponding to each pixel value of the target image block can be determined according to that distance.
Optionally, different weights are set according to the distance of a pixel point (x_c, y_c) from the dividing line. For example, if the distance from the pixel point (x_c, y_c) to the dividing line is greater than or equal to a set distance threshold, the weight corresponding to that pixel point is set to K1; otherwise it is set to K2. Obviously, the pixel points below the distance threshold all lie near the dividing line, while those above it are far from the dividing line. Applying this rule to the pixel points of the first partition and the second partition in the target image block yields the partition weights. In addition, different fixed weights can be set for the pixel points on the two sides of the dividing line, i.e., the pixel points belonging to different partitions, so that the weight of each pixel value is related not only to its distance from the dividing line but also to the partition where the pixel lies.
Setting the partition weights in this way gives different degrees of attention to the pixel points in the two image areas bounded by the dividing line. The weights of pixel points close to and far from the dividing line can also differ, with pixel points closer to the dividing line in the partitions on both sides receiving larger weights, so that the two partitions of the target image block can be better fused along the edge of the dividing line to obtain the prediction result.
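A minimal sketch of this threshold rule follows; K1, K2 and the distance threshold are hypothetical constants, not values mandated by any standard.

```python
# Hypothetical threshold-based weight rule: a sample at or beyond the
# threshold distance from the dividing line gets K1, a nearer one K2.
K1, K2, DIST_THRESHOLD = 0.875, 0.5, 2.0

def sample_weight(distance_to_line):
    return K1 if abs(distance_to_line) >= DIST_THRESHOLD else K2
```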
2) Determine the prediction result of the target image block based on at least one of the partition weight, the first prediction result set, and the second prediction result set.
In one embodiment, the first prediction result set is obtained using the first prediction mode (e.g., an inter prediction mode) or the second prediction mode (e.g., an intra prediction mode), and the second prediction result set is obtained similarly.
An optional implementation of this step is: determining the prediction result of a sampling point based on at least one of the first weight, the second weight, the first prediction result of the sampling point in the first prediction result set of the target image block, and the second prediction result of the sampling point in the second prediction result set.
For the first partition, it is possible to: and determining the prediction result of the sampling point in the first partition included in the target image block based on at least one of the first weight, the first prediction result of the sampling point in the first prediction result set and the second prediction result of the sampling point in the second prediction result set.
The first weight may be a set of weights corresponding to the first partition, which may be called the first weight set, including {w_11, w_12}; based on the rule that a weight is determined by the distance between the sampling point and the dividing line, these represent respectively the weight of the area close to the dividing line and the weight of the area far from it within the first partition, the near and far areas being divided by the distance threshold. Using the first weight, weighted summation is performed on the first prediction result corresponding to the sampling points of the first partition in the first prediction result set and the second prediction result corresponding to those sampling points in the second prediction result set, giving a fused prediction value, i.e., the prediction result of the sampling points in the first partition. The specific expression is:

$$P(x_{c1}, y_{c1}) = w_{11} \cdot P_{11}(x_{c1}, y_{c1}) + w_{12} \cdot P_{12}(x_{c1}, y_{c1})$$

Optionally, (x_{c1}, y_{c1}) represents a sampling point in the first partition, P_{11} represents the first prediction result of that sampling point in the first prediction result set, P_{12} represents its second prediction result in the second prediction result set, and w_{11} and w_{12} sum to 1. According to the weight setting rule, w_{11} can be set to K1 with w_{12} correspondingly (1 − K1), or w_{11} can be set to K2 with w_{12} correspondingly (1 − K2).
For the second partition, it may be: and determining the prediction result of the sampling point in the second partition included in the target image block based on at least one of the second weight, the first prediction result of the sampling point in the first prediction result set and the second prediction result of the sampling point in the second prediction result set.
Similarly, the second weight is a set of weights corresponding to the second partition, which may be called the second weight set, including {w_21, w_22}, representing respectively the weights of the areas close to and far from the dividing line within the second partition; the near and far areas are divided by a distance threshold, which may be the same as or different from the one used by the first partition. Optionally, the sampling points of the second partition have corresponding prediction results in both the first and the second prediction result set, so weighted summation can be performed with the second weight on the first prediction result corresponding to the sampling points of the second partition in the first prediction result set and the second prediction result corresponding to those sampling points in the second prediction result set, giving a fused prediction value, i.e., the prediction result of the sampling points in the second partition. The specific expression is:

$$P(x_{c2}, y_{c2}) = w_{21} \cdot P_{21}(x_{c2}, y_{c2}) + w_{22} \cdot P_{22}(x_{c2}, y_{c2})$$

Optionally, (x_{c2}, y_{c2}) represents a sampling point in the second partition, P_{21} represents the first prediction result, and P_{22} the second prediction result. According to the weight setting rule, w_{21} can be set to K1 with w_{22} correspondingly (1 − K1); or w_{21} set to K2 with w_{22} correspondingly (1 − K2); or w_{21} may be set to a weight different from K1 or K2, as long as w_{21} and w_{22} sum to 1.
It should be noted that, for each sampling point in the first partition and the second partition, the corresponding prediction result may be determined according to the above manner, so as to obtain the prediction result of the first partition and the prediction result of the second partition, where the prediction result of the first partition includes the prediction results of all sampling points in the first partition, and the prediction result of the second partition includes the prediction results of all sampling points in the second partition. The prediction result of the first partition and the prediction result of the second partition may be combined, and prediction results of sample points of all the partitions may be obtained, where the prediction result is the prediction result of the target image block. Or performing edge fusion on the prediction results of the two partitions to obtain the prediction result of the target image block.
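Putting the two steps together, the sketch below blends the two prediction result sets with a per-sample weight map; the array shapes and the convention that each sample's two weights sum to 1 follow the expressions above.

```python
import numpy as np

# Sketch of the final fusion: W holds each sample's weight for the first
# prediction result set, so (1 - W) is its weight for the second set.
def blend_predictions(pred1, pred2, W):
    pred1, pred2, W = np.asarray(pred1), np.asarray(pred2), np.asarray(W)
    return W * pred1 + (1.0 - W) * pred2

print(blend_predictions([[100, 100]], [[50, 50]], [[0.875, 0.5]]))
# -> [[93.75 75.  ]]
```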
In summary, the image processing scheme provided in the embodiment of the present application can determine the prediction result set from different information of the corresponding partition of the target image block under different prediction modes: when the prediction mode is the first prediction mode, the first prediction result set is determined from the motion vector of the partition using the first prediction mode; when the prediction mode is the second prediction mode, the second prediction result set is determined from the target reference sampling points of the partition using the second prediction mode. The first and second prediction modes each include multiple types; the second prediction mode in particular has a wide selectable range and many possible combinations with the partition modes used by the target image block, which improves the accuracy of the prediction results.
Third embodiment
Referring to fig. 7, fig. 7 is a flowchart illustrating an image processing method according to the third embodiment. The execution subject in this embodiment may be a computer device or a cluster formed by a plurality of computer devices, and the computer device may be an intelligent terminal (such as the aforementioned mobile terminal 100) or a server; here, the execution subject is described by taking an intelligent terminal as an example.
In an alternative embodiment, when the preset prediction mode is the prediction mode used by the corresponding partition in the target image block and that prediction mode is the second prediction mode, the prediction result set of the corresponding partition is determined according to the target reference sampling points of that partition, as described in S701 and S702 below.
S701, determining first reference sampling points and/or second reference sampling points of corresponding partitions in the target image block.
Optionally, the target image block includes a first partition and/or a second partition, and the corresponding partition in the target image block is the first partition or the second partition. When the target image block includes a first partition and a second partition, the first and/or second reference sampling points of the first partition and those of the second partition may both be determined; the logic for determining the reference sampling points of different partitions is similar, only the content of the first and/or second reference sampling points differs between partitions. Therefore, this embodiment is explained by taking the determination of the first and/or second reference sampling points of one partition (for example, the first partition) as an example.
In one embodiment, the implementation of S701 may be: and determining first reference sampling points and/or second reference sampling points of corresponding partitions in the target image block according to the position relation between boundary sampling points of the corresponding partitions in the target image block and the target image block.
The corresponding partition of the target image block here refers to the partition in the target image block that uses the second prediction mode; optionally, the second prediction mode is an intra prediction mode. The dividing line used by the target image block passes through different boundaries of the target image block, dividing it into different image areas, referred to here as the first partition and the second partition. The positional relationship between the corresponding partition and the boundary sampling points refers to the relative position between the sampling points of the partition adopting the second prediction mode and the boundary sampling points; the reference sampling points corresponding to the first or the second partition can be determined from this positional relationship. The target image block includes at least one sampling point; optionally, the sampling points are pixel points. Because the dividing line intersects the boundary of the target image block, a boundary sampling point of the dividing line in the target image block refers to a sampling point through which the dividing line passes among the N sampling points adjacent to the encoded pixel points of the target image block, where N is a positive integer. Exemplarily, fig. 8a is a schematic diagram of the boundary sampling points on the dividing line: the dividing line divides the target image block into partition A and partition B and passes through the upper and left boundaries of the target image block, so the boundary sampling points include boundary sampling points 1 and 2.
In one implementation, the following steps may be included: determining a boundary through which a dividing line used by the target image block passes according to a boundary mapping table; and determining the boundary sampling points according to the boundary passed by the dividing line.
Optionally, the boundary mapping table includes a mapping relationship that each partition line passes through a specified boundary in the target image block, where the specified boundary includes a first boundary and/or a second boundary. Optionally, the first boundary is an upper boundary and the second boundary is a left boundary. The condition that the division line used by the target image block passes through the upper boundary and the left boundary of the target image block can be determined through the boundary mapping table. The division line used in the target image block as shown in fig. 8a passes through both the upper boundary and the left boundary.
The mapping relationship of each partition line passing through the specified boundary in the target image block may be used to indicate the partition index of the partition line and whether the partition line passes through the upper boundary and the left boundary of the target image block. Illustratively, the boundary map is shown in table 5 below.
Table 5 Boundary mapping table

| Partition index gpm_partition_idx | 0 | 1 | 2 | 3 | 4 | ... | 59 | 60 | 61 | 62 | 63 |
| Whether the dividing line passes through the left boundary | Y | Y | Y | N | N | ... | Y | Y | Y | Y | Y |
| Whether the dividing line passes through the upper boundary | Y | Y | Y | Y | Y | ... | N | N | Y | Y | Y |
The boundary mapping table covers the dividing line indices corresponding to the 64 partition modes of the geometric partition mode and records, for each dividing line index, whether the dividing line passes through the left boundary or the upper boundary. With the boundary mapping table, the boundary crossed by the dividing line used by the target image block can therefore be determined quickly, and thence the boundary sampling points.
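For illustration, the sketch below encodes the visible columns of Table 5 as two dictionaries; entries 5 through 58 are omitted exactly as in the table.

```python
# Partial transcription of Table 5; indices 5..58 omitted as in the table.
PASSES_LEFT  = {0: True, 1: True, 2: True, 3: False, 4: False,
                59: True, 60: True, 61: True, 62: True, 63: True}
PASSES_UPPER = {0: True, 1: True, 2: True, 3: True, 4: True,
                59: False, 60: False, 61: True, 62: True, 63: True}

def crossed_boundaries(gpm_partition_idx):
    """Return (passes_left, passes_upper) for a dividing line index."""
    return PASSES_LEFT[gpm_partition_idx], PASSES_UPPER[gpm_partition_idx]
```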
Optionally, when the division line used by the target image block passes through a first boundary of the target image block, determining that the boundary sampling points comprise first boundary sampling points; and/or when the division line used by the target image block passes through a second boundary of the target image block, determining that the boundary sampling points comprise second boundary sampling points.
Whether the dividing line passes through a specified boundary can be determined through the boundary mapping table, which further determines what the boundary sampling points include. Optionally, the first boundary is the upper boundary and the second boundary is the left boundary: when the dividing line passes through the upper boundary of the target image block, the boundary sampling points include upper boundary sampling points, and/or, when the dividing line passes through the left boundary, the boundary sampling points include left boundary sampling points. The dividing line may pass through both the upper and the left boundary, or through only one of them, so the boundary sampling points include upper boundary sampling points and/or left boundary sampling points; when the boundary crossed by the dividing line satisfies the condition (i.e., it is at least one of the first boundary and the second boundary), there is at least one boundary sampling point. When the dividing line passes through the upper boundary, it may simultaneously pass through any one of the left, right, and lower boundaries; when it passes through the left boundary, it may simultaneously pass through any one of the upper, right, and lower boundaries. Exemplarily, see the division of the image block by different dividing lines shown in the aforementioned fig. 4c: some dividing lines do not pass through the left boundary of the image block, some do not pass through the upper boundary, and some pass through neither, for example the dividing line in the 4th image block of the first row.
In one embodiment, the boundary sampling points are determined as follows: a dividing line equation is determined according to the target partition mode parameter of the target image block, and at least one boundary sampling point is determined according to a boundary reference point and the dividing line equation.
Optionally, the target partition mode parameter of the target image block includes at least one of: an angle index, a distance index, and a partition index. The target partition mode parameter may be a geometric partition mode parameter. In one implementation, the dividing line equation is determined from the angle index and the distance index included in the target partition mode parameter. The dividing line equation is the straight-line equation of the dividing line: cos(φ) and sin(φ) are determined from the angle index angleIdx via the angle mapping table, and ρ is determined from the distance index distanceIdx. As reconstructed above, the dividing line equation is

$$x \cos(\varphi) + y \sin(\varphi) - \rho = 0$$
After the dividing line equation is determined, at least one boundary sampling point may be determined from the boundary reference point and the equation. Optionally, the boundary reference point is the sampling point that comes first in coding order in the target image block; positionally, it is the sampling point at the upper-left position of the target image block, and its position is denoted here as (xCb, yCb). Optionally, the position coordinates of at least one boundary sampling point are determined from the boundary reference point.
The at least one boundary sampling point includes a first boundary sampling point and/or a second boundary sampling point. If the dividing line passes through the first boundary (the upper boundary), the boundary sampling points include a first boundary sampling point with position coordinates (x_gpm_above, y_gpm_above); if the dividing line passes through the second boundary (the left boundary), the boundary sampling points include a second boundary sampling point with position coordinates (x_gpm_left, y_gpm_left). Since the position of the boundary reference point is (xCb, yCb) and the boundary sampling points share the same coordinate axes with it, x_gpm_left = xCb and y_gpm_above = yCb; that is, the first boundary sampling point lies on the same x axis as the boundary reference point and the second boundary sampling point on the same y axis, so y_gpm_left of the second boundary sampling point and x_gpm_above of the first boundary sampling point can be determined from the dividing line equation. If the value of x_gpm_above exceeds a predetermined range (defined by the x value of the last critical sampling point arranged on the first boundary), the dividing line does not pass through the first boundary (e.g., the upper boundary) of the target image block and the first boundary sampling point does not exist; similarly, if the value of y_gpm_left exceeds a predetermined range (defined by the y value of the last critical sampling point arranged on the second boundary), the dividing line does not pass through the second boundary (e.g., the left boundary) and the second boundary sampling point does not exist. Illustratively, see the schematic diagram of the positional relationship between the boundary reference point and the boundary sampling points shown in fig. 8b: y1 of critical sampling point 1 can be used as the criterion for the presence of the second boundary sampling point, i.e., the y_gpm_left found from the dividing line is compared with y1; if y_gpm_left is greater than y1, the second boundary sampling point does not exist, otherwise it exists. For critical sampling point 2, x2 can likewise be used as the criterion for whether x_gpm_above is outside the predetermined range, to determine whether the first boundary sampling point exists.
It should be noted that, if the boundary through which the dividing line passes is determined according to the boundary mapping table, it may be determined whether a qualified boundary sampling point exists first, and then the position coordinates of the corresponding boundary sampling point are obtained according to the above manner. For example, if the boundary mapping table is queried according to the partition line index, and it is determined that the partition line used by the target image block passes through the upper boundary and the left boundary, the specific positions of the upper boundary sampling point and the left boundary sampling point may be determined according to the partition line equation and the boundary reference point.
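The sketch below locates the boundary sampling points by intersecting the reconstructed line equation x·cos(φ) + y·sin(φ) = ρ with the upper and left boundaries; the coordinate conventions (y growing downward from (xCb, yCb)) and the in-range test are assumptions for illustration.

```python
import math

# Hypothetical boundary-sampling-point derivation from the line equation;
# a point is kept only if it falls within the predetermined range of its
# boundary, mirroring the critical-sampling-point check described above.
def boundary_sample_points(phi, rho, xCb, yCb, width, height):
    points = {}
    if abs(math.cos(phi)) > 1e-9:             # intersection with the upper row
        x_above = (rho - yCb * math.sin(phi)) / math.cos(phi)
        if xCb <= x_above <= xCb + width - 1:
            points["above"] = (x_above, yCb)  # (x_gpm_above, y_gpm_above)
    if abs(math.sin(phi)) > 1e-9:             # intersection with the left column
        y_left = (rho - xCb * math.cos(phi)) / math.sin(phi)
        if yCb <= y_left <= yCb + height - 1:
            points["left"] = (xCb, y_left)    # (x_gpm_left, y_gpm_left)
    return points
```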
After the boundary sampling points are determined, first reference sampling points and/or second reference sampling points can be determined according to the position relation between the boundary sampling points and the corresponding partitions of the target image block. In one embodiment, the method may include the steps of: determining a first coordinate range and/or a second coordinate range according to the at least one boundary sampling point; and determining a first reference sampling point and/or a second reference sampling point according to the first coordinate range and/or the second coordinate range.
Optionally, the at least one boundary sampling point includes a first boundary sampling point and/or a second boundary sampling point. In one embodiment, at least one of the following holds: a first coordinate range is determined from the first and/or second boundary sampling points, and a second coordinate range is determined from the first and/or second boundary sampling points. The first and second coordinate ranges are illustrated for the case where the at least one boundary sampling point includes a first boundary sampling point with position coordinates (x_gpm_above, y_gpm_above) and a second boundary sampling point with position coordinates (x_gpm_left, y_gpm_left). The first coordinate range includes the range where y = y_gpm_above + 1 and xCb ≤ x ≤ x_gpm_above, and the range where x = x_gpm_left − 1 and y_gpm_left ≤ y ≤ yCb; the second coordinate range includes the range where y = y_gpm_above + 1 and x > x_gpm_above, and the range where x = x_gpm_left − 1 and y < y_gpm_left. From the positional relationship, the reference sampling points in the first coordinate range comprise the reference sampling points located above and to the left of the second boundary sampling point and the reference sampling points located above and to the left of the first boundary sampling point; the reference sampling points in the second coordinate range comprise the reference sampling points located above and to the right of the second boundary sampling point and the reference sampling points located below and to the left of the first boundary sampling point.
When the target image block corresponding partition is a first partition using a second prediction mode (e.g., intra prediction mode), the reference sample points located within the first coordinate range may be determined as first reference sample points, and/or the reference sample points located within the second coordinate range may be determined as second reference sample points. When the target image block corresponding partition is a second partition using a second prediction mode (e.g., intra-frame prediction mode), and the second partition is adjacent to the second coordinate range, the reference sample points located in the second coordinate range may be determined as the first reference sample points, and/or the reference sample points located in the first coordinate range may be determined as the second reference sample points.
There are cases where the first reference sample points and/or the second reference sample points are available or unavailable for the corresponding partition of the target image block. In the embodiment of the present application, the first reference sample points refer to the reference sample points available for the corresponding partition of the target image block, and the second reference sample points refer to the reference sample points unavailable for the corresponding partition. For example, as shown in fig. 8c, the intra prediction mode is adopted for the A partition. For the upper reference sampling points, the difference between the sampling values of the upper reference sampling points located on the right side of gpm_above and the original sampling values in the A partition is large; if these sampling points were used to predict the sampling values in the A partition, the difference between the obtained predicted values and the original sampling values would also be large (i.e., the residual would be large), and the coding quality would be low. Therefore, the upper reference sampling points located on the right side of gpm_above are all reference sampling points unavailable for the A partition. Similarly, for the left reference sampling points, those located below gpm_left are all reference sampling points unavailable for the A partition. It should be noted that, for the A partition, the available reference sampling points are the reference sampling points in the immediate vicinity of the A partition. Similarly, when the intra prediction mode is adopted for the B partition, the upper reference sampling points located on the left side of gpm_above are all reference sampling points unavailable for the B partition, and the left reference sampling points located above gpm_left are all reference sampling points unavailable for the B partition. For the B partition, the available reference sampling points are the reference sampling points in the immediate vicinity of the B partition.
For the determination manner of the first reference sampling point and the second reference sampling point, in another embodiment, the implementation manner of S701 may also be: determining the partition range of the corresponding partition in the target image block according to the distance between the sampling point of the corresponding partition in the target image block and the partition line; determining first reference sampling points and/or second reference sampling points of corresponding partitions in the target image block based on the coordinate ranges of the first sampling points and the partition ranges; or determining the first sampling point as a first reference sampling point or a second reference sampling point of a corresponding partition in the target image block based on distance information between the first sampling point adjacent to the target image block and the dividing line.
In this way, the equation of the dividing line may be determined first; for the determination process of the equation of the dividing line, reference may be made to the foregoing description, which is not repeated here. The distance between a sampling point of the corresponding partition in the target image block and the dividing line is then determined according to the dividing line equation. Suppose that the sampling point of the corresponding partition in the target image block is (x_c, y_c) and that the dividing line equation is written in the general form a*x + b*y + c = 0; the distance from the sampling point to the dividing line is then given by the following formula:

d(x_c, y_c) = (a*x_c + b*y_c + c) / sqrt(a^2 + b^2)

where d(x_c, y_c) is the distance between the sampling point in the target image block and the dividing line. The sampling points may refer to pixel points.
The corresponding partition of the target image block is a partition adopting the second prediction mode (e.g., the intra prediction mode). Whether a sampling point belongs to the first partition or the second partition can be determined according to the sign of its distance to the dividing line. For example, suppose the distance between a sampling point belonging to the first partition and the dividing line is set to a positive value, and the distance between a sampling point belonging to the second partition and the dividing line is set to a negative value. When the distance between a sampling point S and the dividing line is determined to be positive, the sampling point S is determined to belong to the first partition; when the distance is determined to be negative, the sampling point S is determined to belong to the second partition. It should be noted that the present invention is not limited thereto, and other ways of determining whether a sampling point belongs to the first partition or the second partition may be employed.
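The sign test described above can be sketched as follows. The general line form a*x + b*y + c = 0, the tie-breaking of a zero distance into the first partition, and all names are assumptions for illustration, not the normative process.

```python
import math

# A minimal sketch of assigning a sampling point to a partition by the sign
# of its distance to the dividing line a*x + b*y + c = 0.

def signed_distance(a, b, c, x_c, y_c):
    # Standard point-to-line distance, kept signed on purpose.
    return (a * x_c + b * y_c + c) / math.hypot(a, b)

def partition_of(a, b, c, x_c, y_c):
    # Positive distance -> first partition, negative -> second partition
    # (a zero distance is arbitrarily assigned to the first partition here).
    return "first" if signed_distance(a, b, c, x_c, y_c) >= 0 else "second"

if __name__ == "__main__":
    # Hypothetical 45-degree dividing line x - y = 0 through the block.
    print(partition_of(1.0, -1.0, 0.0, 5, 2))  # first
    print(partition_of(1.0, -1.0, 0.0, 2, 5))  # second
```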
In an embodiment, a partition having a distance to a specific position of a target image block or a position of an adjacent image block of the target image block smaller than a preset distance may be determined as a first partition, and a partition having a distance to the specific position of the target image block larger than the preset distance may be determined as a second partition.
In yet another embodiment, a partition having a distance from a position of an adjacent image block of the target image block smaller than a preset distance may be determined as the first partition, and a partition having a distance from a position of an adjacent image block of the target image block larger than the preset distance may be determined as the second partition.
Optionally, the specific position of the target image block corresponds to the coordinates of the sampling point at the upper left corner of the target image block, and the position of the adjacent image block of the target image block corresponds to the coordinates of the spatial adjacent block of the target image block in the constructed spatial merging candidate list. Referring to fig. 6a, the spatial neighboring blocks are an upper neighboring block B1, a left neighboring block A1, an upper right neighboring block B0, a lower left neighboring block A0, and an upper left neighboring block B2.
The first sampling points are reference sampling points adjacent to the target image block; there is at least one first sampling point, and the first sampling points are encoded sampling points. Whether a first sampling point serves as a first reference sampling point or a second reference sampling point may be determined in either of the following two ways:
In the first way, reference sampling points adjacent or not adjacent to the corresponding partition of the target image block are determined according to the coordinate range of the first sampling points and the partition range. A first sampling point adjacent to the corresponding partition may be determined as a first reference sampling point of the corresponding partition, and a first sampling point not adjacent to the corresponding partition may be determined as a second reference sampling point of the corresponding partition. The first reference sampling points (or the second reference sampling points) are thus the available (or unavailable) reference sampling points for the different partitions. For example, please refer to fig. 8d, which is a schematic diagram illustrating a division of target reference sampling points according to an embodiment of the present application. As shown in fig. 8d, the image block includes an A partition and a B partition, and the reference sampling points include reference sampling points adjacent to the A partition, reference sampling points adjacent to the B partition, and reference sampling points adjacent to neither the A partition nor the B partition. Optionally, the reference sampling points adjacent to neither partition are the reference sampling points adjacent to the coding tree unit where the target image block is located. If the partition using the intra prediction mode in the target image block is the A partition, the first reference sampling points corresponding to the A partition are determined as the reference sampling points adjacent to the A partition, and the second reference sampling points corresponding to the A partition are determined as the reference sampling points adjacent to the B partition together with the reference sampling points adjacent to neither partition. If the partition using the intra prediction mode in the target image block is the B partition, the first reference sampling points corresponding to the B partition are determined as the reference sampling points adjacent to the B partition together with the reference sampling points adjacent to neither partition, and the second reference sampling points corresponding to the B partition are determined as the reference sampling points adjacent to the A partition. It can be seen that the first reference sampling points may also comprise reference sampling points that are not adjacent to the corresponding partition.
In the second way, whether a first sampling point is a first reference sampling point or a second reference sampling point of the corresponding partition is determined according to the distance information between the first sampling point adjacent to the target image block and the dividing line. Optionally, the distance information comprises the sign of the distance between the first sampling point and the dividing line. The first sampling point is a reference sampling point adjacent to the target image block. Whether a reference sampling point is adjacent to the first partition or the second partition may be determined by calculating the sign of the distance from the reference sampling point to the dividing line. For example, if the sign of the distance from the reference sampling point to the dividing line is the same as the sign of the distance from a sampling point in the first partition to the dividing line, the reference sampling point is adjacent to the first partition; if the signs differ, the reference sampling point is not adjacent to the first partition. The sign of the distance here may be the positive or negative sign used to describe the distance.
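A sketch of this second way, under the same assumed line form, compares the distance sign of each neighbouring reference sampling point with the sign of an interior sampling point of the partition; the helper names are illustrative.

```python
import math

def signed_distance(line, p):
    a, b, c = line
    x, y = p
    return (a * x + b * y + c) / math.hypot(a, b)

def classify_reference_samples(line, ref_points, interior_point):
    """Split reference sampling points adjacent to the block into first
    (available) and second (unavailable) reference sampling points for the
    partition containing interior_point: a matching distance sign means the
    reference sampling point is adjacent to that partition."""
    part_sign = signed_distance(line, interior_point) >= 0
    first, second = [], []
    for p in ref_points:
        same = (signed_distance(line, p) >= 0) == part_sign
        (first if same else second).append(p)
    return first, second

if __name__ == "__main__":
    line = (1.0, -1.0, 0.0)                      # hypothetical line x - y = 0
    refs = [(-1, 0), (-1, 3), (0, -1), (3, -1)]  # left/above neighbours
    print(classify_reference_samples(line, refs, (4, 1)))
```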
Optionally, the first reference sample and the second reference sample corresponding to the partition are determined according to the partition corresponding to the target image block in the second prediction mode. The second prediction mode is an intra prediction mode. When the first partition adopts the intra-frame prediction mode, the reference sampling points adjacent to the first partition are available reference sampling points of the first partition, and the reference sampling points not adjacent to the first partition are unavailable reference sampling points of the first partition. When the second partition adopts the intra-frame prediction mode, the reference sampling points which are not adjacent to the first partition are available reference sampling points of the second partition, and the reference sampling points which are adjacent to the first partition are unavailable reference sampling points of the second partition.
S702, determining a prediction result set of a corresponding partition in the target image block according to the first reference sampling points and/or the second reference sampling points.
If the first reference sampling points and the second reference sampling points are determined in the above manner, intra prediction can then be performed using both kinds of reference sampling points to obtain the prediction result set of the target image block. Although a reference sampling point may be unavailable for a certain partition, it can still be applied in the actual intra prediction process through corresponding processing while the coding quality is preserved. Therefore, when the target image block is divided into different image areas for prediction, the second prediction mode can be flexibly selected from different types; when the second prediction mode is the intra prediction mode, any one of a plurality of intra prediction mode candidates can be selected. This fully expands the range of intra prediction mode types available to the division modes, improving the flexibility and range of selection while ensuring the coding quality.
In one embodiment, step S702 includes the following: (1) filling the second reference sampling point according to the first reference sampling point to obtain a filled reference sampling point; (2) and determining a prediction result set of the corresponding partition of the target image block according to the first reference sampling point and the filled reference sampling point.
Because the first reference sampling points are available and the second reference sampling points are unavailable for the corresponding partition of the target image block, the unavailable reference sampling points can be filled with the available reference sampling points, i.e., replaced by other pixel values, to obtain the filled reference sampling points. The second reference sampling points can thereby be used as available reference sampling points in the actual processing: prediction is performed with the first reference sampling points and the filled reference sampling points as the final reference sampling points to obtain the prediction result set. The determination of the prediction result set of the corresponding partition of the target image block is therefore jointly driven by the first reference sampling points and the filled reference sampling points.
The filling pattern of the second reference sampling point will be described in detail below. In one embodiment, the method includes the following steps S7021 to S7023:
S7021: Determining at least one padding reference sample point among the first reference sample points.
To fill the second reference sample points, it is first necessary to determine reliable padding reference sample points. Since the first reference sample points are available reference sample points for the corresponding partition of the target image block, using them as padding reference sample points can effectively reduce the difference between the second reference sample points and the sample points in the corresponding partition. The at least one padding reference sample point may be one or more first reference sample points adjacent to the second reference sample point, for example the first reference sample point closest to the second reference sample point, or at least one first reference sample point in a sequential arrangement adjacent to the second reference sample point. The at least one padding reference sample point may also be any one or more of the first reference sample points, for example randomly located ones of the first reference sample points.
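For S7021, a sketch of the nearest-available selection might look as follows; the Euclidean distance between sample positions and the function name are assumptions.

```python
import math

# A minimal sketch: pick the k available (first) reference sample positions
# closest to one unavailable (second) reference sample position.

def nearest_padding_samples(second_point, first_points, k=1):
    def dist(p):
        return math.hypot(p[0] - second_point[0], p[1] - second_point[1])
    return sorted(first_points, key=dist)[:k]

if __name__ == "__main__":
    available = [(15, 15), (16, 15), (17, 15)]
    print(nearest_padding_samples((20, 15), available, k=2))
    # -> [(17, 15), (16, 15)], the two nearest available sample positions
```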
It should be noted that, since the first reference sampling point and the second reference sampling point comprise different sampling points for different partitions of the target image block, the at least one filled reference sampling point is also different for different partitions of the target image block.
Illustratively, please refer to fig. 9a and 9b, which are schematic diagrams of some padding reference sample points provided by the embodiments of the present application. As shown in fig. 9a, the A partition is the partition in the target image block using the intra prediction mode; among the reference sample points in the shaded area, the reference sample points adjacent to the A partition are available reference sample points of the A partition, and the reference sample points not adjacent to the A partition are unavailable reference sample points of the A partition. The at least one padding reference sample point comprises the two nearest neighboring available reference sample points among the available reference sample points of the A partition. As shown in fig. 9b, the B partition is the partition in the target image block using the intra prediction mode; among the reference sample points in the shaded area, the reference sample points not adjacent to the A partition are available reference sample points of the B partition, and the reference sample points adjacent to the A partition are unavailable reference sample points of the B partition. The at least one padding reference sample point comprises the two nearest neighboring available reference sample points among the available reference sample points of the B partition.
S7022: determining a filling value for the second reference sample point based on the sample value of the at least one filling reference sample point.
When the at least one padding reference sample point is the nearest neighboring first reference sample point, the sample value of that nearest neighboring first reference sample point may be taken as the filling value of the second reference sample point; when the at least one padding reference sample point includes multiple neighboring first reference sample points, the filling value of the second reference sample point may be determined based on them, for example by taking the average value of the adjacent at least one first reference sample point as the filling value of the second reference sample point.
In an embodiment, at least one of the filling reference sample points may be given different weights, which may take into account the influence of filling reference sample points in different positions on the filling value of the second reference sample point. Optionally: determining a filling weight of each filling reference sampling point based on a position relation between each filling reference sampling point and the second reference sampling point; determining a padding value for the second reference sample point based on the padding weight and the sample value of the respective padded reference sample point.
The respective padding reference sampling point refers to each of the at least one padding reference sampling point. Each filled reference sample point corresponds to a filling weight. For example, 3 filling reference sampling points correspond to 3 filling weights respectively, the sampling values of the filling reference sampling points can be weighted and summed based on the filling weights, and the obtained weighted and summed value can be used as the filling value of the second reference sampling point.
Optionally, the positional relationship comprises the distance between each padding reference sample point and the second reference sample point. The distance here may refer to the straight-line distance between two reference sample points, and the magnitude of the filling weight may be determined based on the distance between the padding reference sample point and the second reference sample point. Since the correlation between the second reference sample point and a more distant padding reference sample point is smaller, a smaller filling weight may be set for the more distant padding reference sample point, such that the filling weight decreases as the distance increases. When there are multiple padding reference sample points, the padding reference sample points with stronger correlation can then play a larger role in reducing the difference between the second reference sample point and the sample points in the partition, yielding better filling quality; the residual is thus better reduced and the coding quality is improved.
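A sketch of such a distance-decreasing weighting is given below. The inverse-distance weight 1 / (1 + d) is only one possible choice satisfying the requirement that the weight not increase with distance; it is an assumption, as are the names.

```python
import math

# A minimal sketch of S7022 with distance-based filling weights: the filling
# value of a second reference sample point is the normalised weighted sum of
# the sample values of its padding reference sample points.

def weighted_fill_value(second_point, padding_samples):
    """padding_samples: list of ((x, y), sample_value) pairs."""
    weights = []
    for (x, y), _ in padding_samples:
        d = math.hypot(x - second_point[0], y - second_point[1])
        weights.append(1.0 / (1.0 + d))          # closer => larger weight
    total = sum(weights)
    return sum((w / total) * v for w, (_, v) in zip(weights, padding_samples))

if __name__ == "__main__":
    samples = [((17, 15), 100), ((16, 15), 120)]
    print(round(weighted_fill_value((20, 15), samples), 2))  # ~108.89
```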
Optionally, the first reference sample points comprise first reference sample points adjacent to a first boundary of the target image block and/or first reference sample points adjacent to a second boundary of the target image block.
Optionally, the first boundary is an upper boundary and the second boundary is a left boundary. The first reference sample points may be divided into two categories according to their adjacency to the boundaries of the target image block, i.e., first reference sample points adjacent to the upper boundary and first reference sample points adjacent to the left boundary. Furthermore, first reference sample points adjacent to neither the upper boundary nor the left boundary may also be included. Illustratively, the available reference sample points (i.e., the first reference sample points) of the B partition in fig. 9b include reference sample points that are adjacent to neither the A partition nor the B partition.
Since the padding reference sample points are determined from the first reference sample points, optionally: the at least one padded reference sample point comprises a first padded reference sample point of the first reference sample points adjacent to the first boundary and/or a second padded reference sample point of the first reference sample points adjacent to the second boundary. That is, the at least one padding reference sample point includes first padding reference sample points that are one or more of the first reference sample points adjacent to the upper boundary and second padding reference sample points that are one or more of the first reference sample points adjacent to the left boundary.
According to the above division of the first reference sample points and the at least one padding reference sample point, the filling value of a second reference sample point may optionally be determined as follows: determining the filling value of a second reference sample point adjacent to the corresponding boundary based on the sample value and filling weight of the first padding reference sample point, and/or the sample value and filling weight of the second padding reference sample point.
The corresponding boundary includes a first boundary and/or a second boundary of the target image block; optionally, the first boundary is an upper boundary and the second boundary is a left boundary. That is, the determination of the filling value of a second reference sample point comprises any one or more of the following: determining the filling value of a second reference sample point adjacent to the first boundary based on the sample value and filling weight of the first padding reference sample point; determining the filling value of a second reference sample point adjacent to the second boundary based on the sample value and filling weight of the second padding reference sample point; determining the filling values of all second reference sample points based on the sample values and filling weights of the first padding reference sample points together with the sample values and filling weights of the second padding reference sample points, where all second reference sample points comprise the second reference sample points adjacent to the first boundary and/or the second reference sample points adjacent to the second boundary.
Exemplarily, taking the B partition as the partition of the target image block using the intra prediction mode, as shown in fig. 9c, the at least one padding reference sample point includes first padding reference sample points and second padding reference sample points. The first padding reference sample points include 3 adjacent available reference sample points (i.e., first reference sample points) among the available reference sample points adjacent to the upper boundary of the B partition, and the second padding reference sample points include 3 adjacent available reference sample points among the available reference sample points adjacent to the left boundary of the B partition; the padding sample points are all adjacent to the unavailable reference sample points (i.e., second reference sample points). The first padding reference sample points may be used to fill the second reference sample points adjacent to the upper boundary within the A region, and the filling values of the respective second reference sample points adjacent to the upper boundary are the same. The second padding reference sample points may be used to fill the second reference sample points adjacent to the left boundary within the A region, and the filling values of the respective second reference sample points adjacent to the left boundary are the same.
As shown in fig. 9d, the at least one padding reference sample point likewise includes first padding reference sample points and second padding reference sample points; the first padding reference sample points comprise available reference sample points that are not adjacent to each other and are adjacent to the upper boundary, and the second padding reference sample points comprise available reference sample points that are not adjacent to each other and are adjacent to the left boundary. The second reference sample points on the same boundary may be filled according to the first padding reference sample points: the sample values of the first padding reference sample points are weighted and summed according to the filling weights to obtain the filling value used to fill the second reference sample points, and the resulting sample values of the filled second reference sample points on that boundary are identical. In another implementation manner, the sample values of the first padding reference sample points may also be used directly as filling values, filled alternately along a preset direction. For example, if the first padding reference sample points include sample points 1, 2 and 3, the sample value of sample point 1 may be filled into the first second reference sample point from right to left, the sample value of sample point 2 into the second one, the sample value of sample point 3 into the third one, and the sample value of sample point 1 into the fourth one, cycling in this way until all second reference sample points adjacent to the upper boundary are filled.
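The alternating fill in the second implementation can be sketched as a simple cyclic copy; the right-to-left direction is taken from the example above, and everything else is illustrative.

```python
# A minimal sketch: copy the sample values of the first padding reference
# sample points cyclically onto the second reference sample points along one
# boundary, in a preset direction.

def cyclic_fill(padding_values, num_second_samples):
    return [padding_values[i % len(padding_values)]
            for i in range(num_second_samples)]

if __name__ == "__main__":
    # Padding sample values v1, v2, v3 filled right-to-left onto five
    # second reference sample points: v1, v2, v3, v1, v2.
    print(cyclic_fill([10, 20, 30], 5))  # [10, 20, 30, 10, 20]
```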
A schematic diagram of filling the second reference sample point based on the first padding reference sample point and the second padding reference sample point can be seen in fig. 9e. As shown in (1) of fig. 9e, since the distances between the first and second padding reference sample points and the second reference sample point to be filled are equal, the filling weights set for the two padding reference sample points are the same, and the filling value of the second reference sample point is therefore the average value of the first and second padding reference sample points. As shown in (2) of fig. 9e, the distances between the respective padding reference sample points and the second reference sample point to be filled differ, so the set filling weights also differ; the first padding reference sample point, being closer, has a larger weight when the filling value is calculated and contributes a larger share.
S7023: Filling the second reference sampling point based on the filling value to obtain the filled reference sampling point.
The filling of the second reference sample point based on the filling value may be an assignment process, for example, assigning a sample value of the nearest first reference sample point to the second reference sample point, or assigning an average sample value of at least one adjacent first reference sample point to the second reference sample point to obtain a filled reference sample point. The padding value may be used as a sampling value in the padded reference sampling point.
After the filled reference sampling points are obtained, the prediction result set of the corresponding partition of the target image block can be determined according to the first reference sampling points and the filled reference sampling points. In this way, even if unavailable reference sampling points (i.e., second reference sampling points) exist when determining the prediction result set of the partition corresponding to the target image block, processing the unavailable reference sampling points effectively improves the otherwise non-ideal prediction result compared with performing prediction using the second reference sampling points without any processing.
It should be noted that the scheme provided by the embodiment of the present application may also be applied to a non-square coding block. Because a non-square coding block may use a wide-angle intra prediction mode, when the coding block uses the GPM mode, the wide-angle intra prediction mode may also be combined with the GPM mode, thereby expanding the categories of intra prediction modes that can be combined with the GPM mode. The wide-angle intra prediction mode is shown in fig. 9f; optionally, the intra prediction modes corresponding to the intra prediction directions indicated by the dotted arrows are wide-angle intra prediction modes.
In summary, when the target image block is predicted using the second prediction mode (e.g., the intra prediction mode), the image processing scheme provided in the embodiment of the present application may determine the first reference sampling points and/or the second reference sampling points corresponding to the partition using the second prediction mode, where the reference sampling points include available and unavailable reference sampling points. The unavailable reference sampling points are filled with the available reference sampling points, reducing the difference between the unavailable reference sampling points and the original sampling points in the partition using the second prediction mode, so that the unavailable reference sampling points become available in the actual use process and can participate in the prediction of the image block; a smaller residual is thereby obtained and the encoding quality is improved. The originally poor processing effect of the second prediction mode is thus effectively improved, and no type of second prediction mode needs to be excluded because of a poor prediction effect, so that a suitable type of second prediction mode can be selected from multiple types; the selection range of prediction modes is enlarged, the flexibility is higher, and the encoding quality can be ensured. Since the coding quality is guaranteed, the intra prediction modes available to the target image block in the GPM mode can be effectively expanded, the combination of the GPM mode and the intra prediction modes becomes more flexible, and a better balance between flexibility and coding quality is achieved.
Fourth embodiment
Referring to fig. 10, fig. 10 is a flowchart illustrating an image processing method according to a fourth embodiment, where an execution main body in this embodiment may be a computer device or a cluster formed by a plurality of computer devices, and the computer device may be an intelligent terminal (such as the foregoing mobile terminal 100) or a server, and here, the execution main body in this embodiment is an intelligent terminal for example.
S1001, determining a target reference sampling point through a preset strategy.
In one embodiment, an alternative implementation of step S1001 may be: and determining a target reference sampling point according to the position relation between the reference sampling point and the sampling point in the image block partition.
An image block refers to an image block currently being encoded in an input video image (i.e., a video frame), and may be referred to as a current block or current image block or current encoding block. The image block herein corresponds to the target image block mentioned in the foregoing embodiment, and the image block may be the target image block, and related descriptions may refer to the first embodiment, which is not described herein again.
Optionally, the image block partition is an image area divided by a partition line used by the image block, the image block partition includes a first partition and/or a second partition, and the sample points in the image block partition include sample points of the first partition and/or sample points of the second partition. The reference sampling points comprise encoded sampling points adjacent to the image block partition and/or encoded sampling points adjacent to a reference image block in which the image block is located, and when the image block is an encoding unit, the reference image block is an encoding tree unit. The reference image block includes at least one image block, and when the at least one image block is sequentially encoded, the encoding of the reference image block is completed, as shown in fig. 6 d.
The position relationship between the reference sampling point and the sampling point in the image block partition can be a relative azimuth relationship, and can also be characterized by distance information. The target reference sampling point may be determined by determining whether the reference sampling point is available or unavailable for the image block partition based on a positional relationship of the reference sampling point and the sampling points of the image block partition. The target reference sample points may comprise reference sample points available for the image block partition and/or reference sample points unavailable for the image block partition.
In one implementation, determining the target reference sampling point by the position relationship of the reference sampling point and the sampling point in the image block partition may include the following steps 1) and 2):
1) at least one boundary sample point is determined from the sample points of the image block partition.
Optionally, the at least one boundary sample point includes a first boundary sample point and/or a second boundary sample point, the at least one boundary sample point includes the first boundary sample point when a division line in the image block passes through a first boundary of the image block, the at least one boundary sample point includes the second boundary sample point when the division line passes through a second boundary of the image block, and the at least one boundary sample point includes the first boundary sample point and the second boundary sample point when the division line simultaneously passes through the first boundary and the second boundary of the image block. Optionally, the first boundary is an upper boundary, the second boundary is a left boundary, the first boundary samples are upper boundary samples, and the second boundary samples are left boundary samples. For example, see fig. 8a in the foregoing embodiment for the first and second boundary sample points.
In one embodiment, the determination of the at least one boundary sample point comprises: determining a parting line equation according to the target parting mode parameter of the image block; and determining at least one boundary sampling point according to a boundary reference point and the parting line equation. The specific content of this manner can refer to the related content in the third embodiment, which is not described herein.
2) a target reference sampling point is determined according to the position relation between the reference sampling point and the at least one boundary sampling point.
In one embodiment, the at least one boundary sample point comprises a first boundary sample point and/or a second boundary sample point. Alternative implementations of step 2) include: determining a first coordinate range according to the first boundary sampling points, and/or determining a second coordinate range according to the second boundary sampling points; and determining a target reference sampling point according to the position relation between the reference sampling point and the first coordinate range and/or according to the position relation between the reference sampling point and the second coordinate range.
The first coordinate range and/or the second coordinate range are used to delimit the sampling area of the target reference sampling points. The area where the reference sampling points above and to the left of the first boundary sampling point are located and/or the area where the reference sampling points to the left of and above the second boundary sampling point are located may be determined as the first coordinate range; the area where the reference sampling points above and to the right of the first boundary sampling point are located and/or the area where the reference sampling points to the left of and below the second boundary sampling point are located may be determined as the second coordinate range.
The reference sampling points in the first coordinate range and/or the reference sampling points in the second coordinate range are determined as target reference sampling points. Optionally, the target reference sampling points comprise first reference sampling points and/or second reference sampling points, and which sampling points constitute the target reference sampling points differs for different partitions of the image block. When the image block partition is a first partition and the first partition is adjacent to the first coordinate range, the reference sampling points in the first coordinate range may be used as the first reference sampling points, and the reference sampling points in the second coordinate range may be used as the second reference sampling points.
In another implementation, determining the target reference sampling point according to the position relationship between the reference sampling point and the sampling point in the image block partition may include: determining a partition range according to sampling points in image block partitions; and determining a target reference sampling point according to the position relation between the reference sampling point and the partition range.
The partition range may be defined according to the sampling points on the boundary in the partition of the image block, for example, when the partition line used by the image block passes through a specified boundary (including the first boundary and/or the second boundary), the partition range may be determined according to the boundary sampling points of the partition line in the image block and the critical sampling points on the specified boundary, where the critical sampling points refer to the most marginal sampling points of the image block, such as the critical sampling points included in fig. 8b described above. The range value of the partition range along the specified boundary can be determined according to different position coordinates of the boundary sampling point and the critical sampling point.
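A sketch of deriving the range value along a specified boundary from the boundary sampling point and the critical sampling point is given below; representing the range as a closed coordinate interval is an assumption.

```python
# A minimal sketch: the partition's extent along one boundary is the span
# between the coordinate of the boundary sampling point (where the dividing
# line crosses) and that of the critical sampling point (the outermost
# sample of the image block on that boundary).

def partition_range_along_boundary(boundary_coord, critical_coord):
    lo, hi = sorted((boundary_coord, critical_coord))
    return lo, hi

if __name__ == "__main__":
    # Upper boundary: dividing line crosses at x = 19, rightmost upper
    # sample at x = 23 -> the partition spans x in [19, 23].
    print(partition_range_along_boundary(19, 23))
```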
The position relationship between the reference sampling point and the partition range can be used to describe whether the reference sampling point and the image block partition are adjacent or distant. Thus, determining the target reference sampling point may comprise the following: and determining a target reference sampling point according to the coordinate range of the reference sampling point and the partition range, or determining the target reference sampling point according to the distance information between the reference sampling point and the partition line used by the image block partition. The reference sample points here may correspond to the aforementioned first sample points, and include reference sample points adjacent to the image block, and optionally, may also include reference sample points not adjacent to the image block. The target reference sample points comprise first reference sample points and/or second reference sample points and may or may not be adjacent to the image block partition. For a detailed description, reference may be made to the related contents in the foregoing third embodiment, which are not described herein again.
Optionally, the target reference sampling points comprise first reference sampling points and/or second reference sampling points;
the first reference sampling point is adjacent to the corresponding partition and is not adjacent to another partition;
the second reference sampling point is not adjacent to the corresponding partition and is adjacent to the other partition;
the positional relationship of the first reference sampling point with respect to the corresponding partition is different from the positional relationship of the second reference sampling point with respect to the corresponding partition;
and the first reference sampling point or the second reference sampling point is a pixel point which is adjacent to the coding tree unit where the image block is located and is not adjacent to the image block.
For the above detailed description, reference may be made to the related contents in the foregoing third embodiment, which are not described herein again.
S1002, determining a prediction result set according to the target reference sampling point and a preset prediction mode.
After the target reference sampling points are determined, a set of prediction results for the image block may be determined based on the target reference sampling points and a preset prediction mode. The set of prediction results is used to determine a prediction result for the image block. Optionally, the set of predictors is used to determine a predictor for the image block. The set of predictors includes a first set of predictors and/or a second set of predictors. The first prediction result set and/or the second prediction result set may be obtained according to preset prediction modes used by different partitions of the image block, and the determination of the prediction result of the image block based on the first prediction result set and/or the second prediction result set may refer to the description of the foregoing embodiment, which is not described herein again. For the determination of the prediction result set, reference may be made to the following examples in more detail, which are not described in detail herein.
In the image processing scheme provided by this embodiment, the target reference sampling points are determined through a preset policy, where the preset policy may be whether the positional relationship between a reference sampling point and the sampling points of the image block partition satisfies a condition; when the condition is satisfied, for example, when the reference sampling point is adjacent to the corresponding partition of the image block, the reference sampling point may be used as a target reference sampling point. The target reference sampling points comprise reference sampling points that are available or unavailable for the image block partition, and an accurate prediction result set can be obtained from the target reference sampling points according to the preset prediction mode, thereby improving the quality of the prediction result.
Fifth embodiment
Referring to fig. 11, fig. 11 is a schematic flowchart of an image processing method according to a fifth embodiment, where an execution main body in this embodiment may be a computer device or a cluster formed by a plurality of computer devices, and the computer device may be an intelligent terminal (such as the foregoing mobile terminal 100) or a server, and here, the execution main body in this embodiment is an intelligent terminal for example.
In one embodiment, a preset prediction mode may be determined according to the prediction mode indication information of the image block, and a prediction result set may be determined according to the preset prediction mode and the target reference sampling point.
Optionally, the prediction mode indication information of the image block includes prediction mode indication information for indicating a prediction mode used by the first partition, and/or prediction mode indication information for indicating a prediction mode used by the second partition. The prediction mode indication information may be a flag or an index of a prediction mode type used by the encoding end of the partition corresponding to the target image block, and may be used at the encoding end or the decoding end.
Optionally, the preset prediction mode is: a prediction mode used by a partition divided by a partition line in an image block and/or a prediction mode used by an adjacent image block. Optionally, the prediction mode used by the partition divided by the partition line in the target image block includes any one of: the prediction modes used by the adjacent image blocks, and the prediction mode with the use times larger than or equal to a preset threshold in the prediction modes used by at least one adjacent image block.
In an embodiment, the prediction mode used by an adjacent image block may be used as the prediction mode of a partition divided by the dividing line in the target image block. Alternatively, among the prediction modes used by at least one adjacent image block, the most frequently used prediction mode may be used as the prediction mode of the partition divided by the dividing line in the target image block. In another embodiment, the prediction mode used by the partition divided by the dividing line in the target image block may be obtained by other means. In the above embodiments, the adopted preset prediction mode does not need to be determined by calculating the rate-distortion cost. However, the present invention is not limited to this; in the above embodiments, the preset prediction mode may also be determined by calculating a rate-distortion cost.
Since the partitions divided by the image block partition line include the first partition and/or the second partition, the preset prediction mode may include a prediction mode used by the first partition and/or a prediction mode used by the second partition, and the prediction mode includes the first prediction mode and/or the second prediction mode, for example, if the image block partition includes the first partition and the second partition, there is any one of the following cases: both partitions use the first prediction mode; both partitions use the second prediction mode; one partition uses a first prediction mode and the other partition uses a second prediction mode. The prediction mode used by the neighboring image block may be used as a reference for the image block currently being encoded, for example, the neighboring image block uses the first prediction mode, and the currently encoded image block may also directly use the first prediction mode.
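The three cases can be captured by a small dispatch; the mode labels, dataclass, and function are illustrative assumptions, not part of any bitstream syntax.

```python
from dataclasses import dataclass

# A minimal sketch of the per-partition mode combinations: each partition
# independently carries a first (inter) or second (intra) prediction mode.

@dataclass
class PartitionMode:
    kind: str       # "inter" (first prediction mode) or "intra" (second)
    detail: object  # e.g. a merge index for inter, an intra mode index

def combination(p1: PartitionMode, p2: PartitionMode) -> str:
    kinds = {p1.kind, p2.kind}
    if kinds == {"inter"}:
        return "both partitions use the first prediction mode"
    if kinds == {"intra"}:
        return "both partitions use the second prediction mode"
    return "one partition uses the first mode, the other the second"

if __name__ == "__main__":
    print(combination(PartitionMode("intra", 18), PartitionMode("inter", 0)))
```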
Optionally, the preset prediction mode is a prediction mode used by partitions divided by a partition line in the image block, and details of determining the prediction result set according to the preset prediction mode and the target reference sampling point may be referred to in the following descriptions of S1101 and S1102.
S1101, if the prediction mode comprises a first prediction mode and/or a second prediction mode, determining a prediction result set of a partition corresponding to the image block according to the motion vector and/or the target reference sampling point.
That is, the prediction mode used by the partition of the partition line in the image block may be a first prediction mode and/or a second prediction mode, and for the whole image block, the preset prediction mode may perform prediction according to the first prediction mode, may also perform prediction according to the second prediction mode, and may also perform prediction according to the first prediction mode and the second prediction mode used by different partitions, where the first prediction mode and the second prediction mode are used by different partitions of the image block, and one partition uses the first prediction mode and the other partition uses the second prediction mode as mentioned above.
In one embodiment, if the prediction mode includes a second prediction mode, determining a set of prediction results for a corresponding partition of the image block according to the target reference sample points. Optionally, the second prediction mode is an intra prediction mode.
Optionally, the determining the prediction result set of the partition corresponding to the image block according to the target reference sampling points includes: filling the second reference sampling point according to the first reference sampling point to obtain a filled reference sampling point; and determining a prediction result set of the partition corresponding to the image block according to the first reference sampling point and the filled reference sampling point.
Since the second reference sampling points are not available for the partition corresponding to the image block using the second prediction mode, and the first reference sampling points are available for the partition corresponding to the image block using the second prediction mode, the unavailable reference sampling points are filled with the available reference sampling points, and the prediction result set of the partition corresponding to the image block is determined based on the filled reference sampling points and the available reference sampling points, so that errors brought by the unavailable reference sampling points to the prediction results in actual use can be reduced.
In one embodiment, the filling pattern for the second reference sampling point may include: determining at least one padding reference sample point among the first reference sample points; determining a filling value of the second reference sample point based on the sample value of the at least one filling reference sample point; and filling the second reference sampling point based on the filling value to obtain the filled reference sampling point.
Optionally, padding weights may be introduced for different padding reference samples to determine the padding values, i.e.: determining a filling weight of each filling reference sampling point based on a position relation between each of the at least one filling reference sampling point and the second reference sampling point; determining a padding value for the second reference sample point based on the padding weight and the sample value of the respective padded reference sample point.
Optionally, the fill weight decreases with increasing distance. When a plurality of filling reference sampling points exist, the filling reference sampling points with stronger correlation can be enabled to play a larger role in the process of adjusting the difference value between the second reference sampling point and the sampling point in the partition through the filling weight, and better filling quality is obtained, so that the residual error is better reduced, and the coding quality is improved.
Optionally: the positional relationship comprises a distance between each of the at least one padded reference sampling point and the second reference sampling point; the first reference sample points comprise first reference sample points adjacent to a first boundary of the target image block and/or first reference sample points adjacent to a second boundary of the target image block; the at least one padded reference sample point comprises a first padded reference sample point of the first reference sample points adjacent to the first boundary and/or a second padded reference sample point of the first reference sample points adjacent to the second boundary.
The determining of the padding values of the second reference sampling points based on the padding weights and the sampling values of the respective padded reference sampling points comprises: determining a filling value of the second reference sampling point adjacent to the corresponding boundary based on the sampling value of the first filled reference sampling point and the filling weight of the first filled reference sampling point, and/or the sampling value of the second filled reference sampling point and the filling weight of the second filled reference sampling point. For the above-mentioned filling steps, reference may be made to the related contents described in the third embodiment, which are not described herein again.
In another embodiment, if the prediction mode includes the first prediction mode, the prediction result set of the partition corresponding to the image block is determined according to the motion vector. Optionally, the motion vector comprises a first motion vector and/or a second motion vector, depending on the type of the first prediction mode. The type of the first prediction mode comprises a bidirectional prediction mode and/or a unidirectional prediction mode: when the first prediction mode is the bidirectional prediction mode, the prediction result set of the partition corresponding to the image block can be determined according to the first motion vector and the second motion vector; when the type of the first prediction mode is the unidirectional prediction mode, the prediction result set of the partition corresponding to the image block can be determined according to the first motion vector or the second motion vector. For details, reference may be made to the description of the foregoing embodiments, which is not repeated here.
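The uni/bidirectional branching can be sketched as follows; motion_compensate is a stand-in for the codec's actual interpolation, the plain average of the two hypotheses is an assumed weighting, and the return values are fake one-sample blocks used only to show the control flow.

```python
# A minimal sketch of selecting motion vectors for a partition that uses the
# first (inter) prediction mode.

def motion_compensate(mv):
    """Stand-in for real motion-compensated prediction."""
    dx, dy = mv
    return [100 + dx + dy]  # fake one-sample prediction "block"

def inter_prediction_set(mode_type, mv1, mv2=None):
    if mode_type == "bidirectional":
        p1, p2 = motion_compensate(mv1), motion_compensate(mv2)
        return [(a + b) / 2 for a, b in zip(p1, p2)]  # combine both hypotheses
    return motion_compensate(mv1)  # unidirectional: a single motion vector

if __name__ == "__main__":
    print(inter_prediction_set("bidirectional", (1, 2), (3, 4)))  # [105.0]
    print(inter_prediction_set("unidirectional", (1, 2)))         # [103]
```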
In another embodiment, if the prediction modes include a first prediction mode and a second prediction mode, determining a set of prediction results for the partition corresponding to the image block according to the motion vector, and determining a set of prediction results for the partition corresponding to the image block according to the target reference sampling points. The use of motion vectors herein refers to motion vectors of image block partitions using the first prediction mode, and the target reference sample points are reference sample points corresponding to image block partitions using the second prediction mode. And the partition of the image block using the first prediction mode and the partition of the image block using the second prediction mode are different. The detailed implementation in this embodiment can be according to the foregoing descriptions, and is not described herein.
S1102, determining a prediction result set of the image block according to the prediction result set of the partition corresponding to the image block.
The prediction result set of the partition corresponding to the image block includes a first prediction result set of the first partition and/or a second prediction result set of the second partition, the first prediction result set may be obtained through the first prediction mode or the second prediction mode, and similarly, the second prediction result set may also be obtained through the first prediction mode or the second prediction mode.
For example, the image block shown in fig. 8c includes an A partition and a B partition, the first prediction mode is the inter prediction mode, and the second prediction mode is the intra prediction mode. When one of the A partition and the B partition adopts intra prediction and the other adopts inter prediction, the process of determining the first prediction result set and the second prediction result set of the current block includes the following. For the partition adopting intra prediction, after the target reference sampling points of the partition are determined, the intra prediction value of the current block is determined using the specifically selected intra prediction mode type, yielding a set of prediction values related to intra prediction; depending on which partition uses the intra prediction mode, this set can serve as the first prediction result set or the second prediction result set (for example, if the A partition uses the intra prediction mode, it serves as the first prediction result set). Note that if only one of the A partition and the B partition performs intra prediction, only one set of prediction values related to intra prediction is obtained. For the partition that uses inter prediction, the inter prediction motion vector corresponding to that partition is obtained through the spatial merging candidate list, and the inter prediction value of the current block with respect to the inter prediction motion vector is then determined, yielding a set of prediction values related to inter prediction; depending on which partition uses the inter prediction mode, this set likewise serves as the first prediction result set or the second prediction result set.
When both the A partition and the B partition perform intra prediction, the process of determining the first and second prediction value sets of the current block is as follows: the reference sampling points and the unavailable reference sampling points of the A partition and the B partition are determined respectively. After the unavailable reference sampling points are filled with the nearest available reference sampling points, the intra prediction value of the current block with respect to each partition is determined using the intra prediction mode type determined for that partition together with the final reference sampling points, thereby obtaining the first prediction result set and the second prediction result set. Optionally, the first prediction value set and the second prediction value set are both obtained by intra prediction; that is, both may be prediction value sets related to intra prediction.
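The nearest-available padding described above can be pictured with the short sketch below. Treating the reference samples as a one-dimensional list scanned in order, with None marking an unavailable sample, is an assumption made purely for illustration.

```python
# Sketch: pad unavailable reference sampling points (None) with the value of
# the nearest available reference sampling point in scan order.
def pad_reference_samples(refs):
    out = list(refs)
    last = None
    for i, v in enumerate(out):              # forward pass: copy nearest value seen so far
        if v is None:
            out[i] = last
        else:
            last = v
    for i in range(len(out) - 2, -1, -1):    # backward pass fills a leading gap
        if out[i] is None:
            out[i] = out[i + 1]
    return out

print(pad_reference_samples([None, None, 100, 104, None, 110]))
# -> [100, 100, 100, 104, 104, 110]
```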
It should be noted that when the second prediction mode is an intra prediction mode, the above padding operation allows not only the angular prediction modes included in the intra prediction modes but also the wide-angle intra prediction modes to be combined with the partition mode (for example, the GPM prediction mode). The kinds of second prediction mode that can be combined with the partition mode are thus expanded to include intra prediction modes for square image blocks as well as intra prediction modes for non-square image blocks, which also improves the applicability to image blocks of different sizes.
According to the image processing scheme provided by the embodiments of the present application, the sampling values of unavailable reference sampling points can be replaced by other pixels (padding values here). By assigning an unavailable reference sampling point a new sampling value derived from the sampling values of the available reference sampling points, the difference between its sampling value and the sampling values of the sampling points in the partition using the second prediction (for example, intra prediction) is kept small, so the resulting residual is smaller and the coding quality is higher. Meanwhile, the kinds of second prediction mode (e.g., intra prediction mode) that can be combined with the partition mode (e.g., GPM mode) of the image block are expanded, which improves both the flexibility of prediction according to the second prediction mode and the quality of the prediction result in the partition mode.
Referring to fig. 12, fig. 12 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. The image processing apparatus may be a computer program (including program code) running in a server, for example application software, and the apparatus may be used to perform the corresponding steps in the methods provided by the embodiments of the present application. The image processing apparatus 1200 includes a determining module 1201 and a filling module 1202.
The determining module 1201 is configured to determine, according to a preset prediction mode, a prediction result set of a corresponding partition in a target image block, where the prediction result set is used to determine a prediction result of the target image block.
Optionally, the target image block includes a first partition and/or a second partition, and the first partition and/or the second partition are image areas divided by a dividing line; the preset prediction mode comprises a prediction mode used by partitions divided by a dividing line in the target image block; the set of predictors includes a first set of predictors for the first partition and/or a second set of predictors for the second partition.
In one embodiment, the determining module 1201 is further configured to: determining a target division mode parameter of a target image block according to the first division mode set; the target division mode parameter includes prediction mode indication information for indicating a prediction mode used by a corresponding partition in the target image block.
In an embodiment, the determining module 1201 is specifically configured to at least one of: if the prediction mode is the first prediction mode, determining a prediction result set of a corresponding partition in the target image block according to the motion vector of the corresponding partition in the target image block; and if the prediction mode is the second prediction mode, determining a prediction result set of a corresponding partition in the target image block according to the target reference sampling points of the corresponding partition in the target image block.
In an embodiment, the determining module 1201 is specifically configured to: determining a first motion vector and/or a second motion vector of a corresponding partition in the target image block according to the merging candidate list of the target image block; and determining a prediction result set of a corresponding partition in the target image block according to the first motion vector and/or the second motion vector.
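A rough sketch of this step is given below: per-partition motion vectors are taken from a merging candidate list, and each partition's predictor is fetched by motion compensation. The candidate values, the integer-pel motion and the choice of indices are assumptions for illustration; the actual list construction and index signalling follow the codec specification.

```python
import numpy as np

def motion_compensate(ref_frame, x, y, w, h, mv):
    # Integer-pel motion compensation (an assumption; real codecs interpolate
    # fractional positions): copy the displaced w x h region from the reference.
    mvx, mvy = mv
    return ref_frame[y + mvy:y + mvy + h, x + mvx:x + mvx + w]

ref_frame = np.arange(64 * 64, dtype=np.int32).reshape(64, 64)
merge_list = [(0, 0), (-2, 1), (3, -1)]      # hypothetical merge candidates
mv_first, mv_second = merge_list[1], merge_list[2]

pred_first = motion_compensate(ref_frame, 16, 16, 8, 8, mv_first)    # partition A
pred_second = motion_compensate(ref_frame, 16, 16, 8, 8, mv_second)  # partition B
print(pred_first.shape, pred_second.shape)   # (8, 8) (8, 8)
```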
Optionally, the target reference sampling points comprise first reference sampling points and/or second reference sampling points.
Optionally, at least one of:
the first and second reference sample points are different;
the first reference sampling point is adjacent to the corresponding partition and is not adjacent to another partition;
the second reference sampling point is not adjacent to the corresponding partition and is adjacent to the other partition;
the position relation of the first reference sampling point relative to the corresponding partition is different from the position relation of the second reference sampling point relative to the corresponding partition;
and the first reference sampling point or the second reference sampling point is a pixel point which is adjacent to the coding tree unit where the target image block is located and is not adjacent to the target image block.
Optionally, the second prediction mode includes at least one type of second prediction mode, and if the prediction mode is the second prediction mode, the determining module 1201 is further configured to: and determining a second prediction mode of the target type used by the corresponding partition of the target image block from the second prediction modes of the at least one type, wherein the second prediction mode of the target type is used for determining a prediction result set of the target image block.
In an embodiment, the determining module 1201 is specifically configured to: determining first reference sampling points and/or second reference sampling points of corresponding partitions in the target image block; and determining a prediction result set of a corresponding partition in the target image block according to the first reference sampling point and/or the second reference sampling point.
In an embodiment, the determining module 1201 is specifically configured to: determine the first reference sampling points and/or the second reference sampling points of a corresponding partition in the target image block according to the positional relationship between the corresponding partition and the boundary sampling points, in the target image block, of the partition line used by the target image block.
In an embodiment, the determining module 1201 is specifically configured to: determining a boundary through which a dividing line used by the target image block passes according to a boundary mapping table; and determining the boundary sampling points according to the boundary passed by the dividing line.
In an embodiment, the determining module 1201 is specifically configured to at least one of: determining that the boundary sampling points comprise first boundary sampling points when a division line used by the target image block passes through a first boundary of the target image block; and when the division line used by the target image block passes through a second boundary of the target image block, determining that the boundary sampling points comprise second boundary sampling points.
Optionally, the determining module 1201 is further specifically configured to: determine a dividing line equation according to the target division mode parameter of the target image block; and determine at least one boundary sampling point according to a boundary reference point and the dividing line equation.
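One way to picture this is sketched below: the dividing line is parameterised by an angle and an offset taken relative to a boundary reference point at the block's top-left corner, and the sketch reports where the line crosses the top and left boundaries. This parameterisation is an assumption for illustration; the actual line equation is derived from the target division mode parameter.

```python
import math

# Sketch: boundary sampling points as the crossings of the line
# cos(phi)*x + sin(phi)*y = rho with the block's top and left boundaries.
def boundary_crossings(w, h, phi_deg, rho):
    c = math.cos(math.radians(phi_deg))
    s = math.sin(math.radians(phi_deg))
    points = []
    if abs(c) > 1e-9:
        x_top = rho / c                      # crossing of the top boundary y = 0
        if 0 <= x_top < w:
            points.append(("first boundary (top)", int(x_top)))
    if abs(s) > 1e-9:
        y_left = rho / s                     # crossing of the left boundary x = 0
        if 0 <= y_left < h:
            points.append(("second boundary (left)", int(y_left)))
    return points

print(boundary_crossings(8, 8, 45.0, 4.0))   # e.g. crossings near sample index 5
```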
In an embodiment, the determining module 1201 is specifically configured to: determining a first coordinate range and/or a second coordinate range according to the at least one boundary sampling point; and determining a first reference sampling point and/or a second reference sampling point according to the first coordinate range and/or the second coordinate range.
In an embodiment, the determining module 1201 is specifically configured to: determine the partition range of a corresponding partition in the target image block according to the distance between the sampling points of the corresponding partition and the partition line, and determine the first reference sampling points and/or the second reference sampling points of the corresponding partition based on the coordinate range of the first sampling points and the partition range; or determine a first sampling point adjacent to the target image block as a first reference sampling point or a second reference sampling point of the corresponding partition based on distance information between that first sampling point and the dividing line.
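The distance-based alternative can be sketched as follows: the sign of the line function a*x + b*y + c decides on which side of the dividing line a neighbouring sampling point falls, and hence whether it serves as a first or a second reference sampling point for the corresponding partition. The line coefficients and sample positions below are illustrative assumptions.

```python
# Sketch: classify a neighbouring sampling point by the sign of its
# (unnormalised) distance to the dividing line a*x + b*y + c = 0.
def classify_reference_sample(x, y, a, b, c):
    d = a * x + b * y + c
    return "first_reference" if d >= 0 else "second_reference"

line = (1.0, 1.0, -7.0)                            # hypothetical line x + y = 7
print(classify_reference_sample(8, -1, *line))     # top neighbour  -> first_reference
print(classify_reference_sample(-1, 3, *line))     # left neighbour -> second_reference
```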
In one embodiment, the filling module 1202 is configured to: fill the second reference sampling point according to the first reference sampling point to obtain a filled reference sampling point; and determine a prediction result set of the corresponding partition of the target image block according to the first reference sampling point and the filled reference sampling point.
In one embodiment, the filling module 1202 is specifically configured to: determining at least one padding reference sample point among the first reference sample points;
determining a filling value of the second reference sample point based on the sample value of the at least one filling reference sample point; and filling the second reference sampling point based on the filling value to obtain a filled reference sampling point.
In one embodiment, the filling module 1202 is specifically configured to: determine a filling weight of each filling reference sampling point based on the positional relationship between that filling reference sampling point and the second reference sampling point; and determine the filling value of the second reference sampling point based on the filling weights and the sampling values of the respective filling reference sampling points.
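One concrete, purely illustrative choice of position-based filling weights is inverse-distance weighting, sketched below. The embodiment only requires that the weights be derived from the positional relationship between each filling reference sampling point and the second reference sampling point; the exact weighting rule and the L1 distance are assumptions.

```python
# Sketch: the filling value of an unavailable (second) reference sampling
# point as a weighted combination of filling reference sampling points,
# with weights decreasing in L1 distance (an assumed weighting rule).
def weighted_padding_value(target_pos, fill_samples):
    """fill_samples: list of ((x, y), sample_value) filling reference points."""
    num = den = 0.0
    for (x, y), val in fill_samples:
        dist = abs(x - target_pos[0]) + abs(y - target_pos[1])
        w = 1.0 / (1.0 + dist)               # nearer samples get larger weights
        num += w * val
        den += w
    return round(num / den)

# Hypothetical positions: target above the block, one filling sample per boundary.
print(weighted_padding_value((10, -1), [((7, -1), 100), ((-1, 4), 60)]))  # -> 92
```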
Optionally, at least one of:
the positional relationship comprises a distance between the respective filling reference sample point and the second reference sample point;
the first reference sample points comprise first reference sample points adjacent to a first boundary of the target image block and/or first reference sample points adjacent to a second boundary of the target image block;
the at least one padded reference sample point comprises a first padded reference sample point of the first reference sample points adjacent to the first boundary and/or a second padded reference sample point of the first reference sample points adjacent to the second boundary;
In one embodiment, the filling module 1202 is specifically configured to: determine the filling value of a second reference sampling point adjacent to the corresponding boundary based on the sampling value and the filling weight of the first filling reference sampling point and/or the sampling value and the filling weight of the second filling reference sampling point.
In one possible embodiment, the image processing apparatus 1200 shown in fig. 12 can also be used to perform the image processing method described below.
A determining module 1201, configured to determine a target reference sampling point through a preset strategy;
the determining module 1201 is further configured to determine a prediction result set according to the target reference sampling point and a preset prediction mode.
In an embodiment, the determining module 1201 is specifically configured to: and determining a target reference sampling point according to the position relation between the reference sampling point and the sampling point in the image block partition.
In an embodiment, the determining module 1201 is specifically configured to: determining at least one boundary sample point from the sample points of the image block partition; and determining a target reference sampling point according to the position relation between the reference sampling point and the at least one boundary sampling point.
Optionally, the at least one boundary sample point includes a first boundary sample point and/or a second boundary sample point, and the determining module 1201 is specifically configured to: determining a first coordinate range according to the first boundary sampling points, and/or determining a second coordinate range according to the second boundary sampling points;
and determining a target reference sampling point according to the position relation between the reference sampling point and the first coordinate range and/or according to the position relation between the reference sampling point and the second coordinate range.
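The coordinate-range test can be pictured with the small sketch below: reference sampling points whose horizontal coordinate falls inside the first coordinate range derived from the boundary sampling points are taken as target reference sampling points of the corresponding partition. The range endpoints and the sample layout are assumptions for illustration.

```python
# Sketch: keep the top-row reference samples whose x coordinate lies inside
# the first coordinate range (endpoints assumed for illustration).
def pick_target_reference(top_refs, first_x_range):
    lo, hi = first_x_range
    return [(x, v) for (x, v) in top_refs if lo <= x <= hi]

top_row = [(x, 100 + x) for x in range(-1, 16)]   # (x, sample value) above the block
print(pick_target_reference(top_row, (0, 5)))     # e.g. line crosses the top at x = 5
```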
In an embodiment, the determining module 1201 is specifically configured to: determining a partition range according to sampling points in image block partitions; and determining a target reference sampling point according to the position relation between the reference sampling point and the partition range.
In an embodiment, the determining module 1201 is specifically configured to: determining a preset prediction mode according to the prediction mode indication information of the image block; and determining a prediction result set according to the preset prediction mode and the target reference sampling point.
Optionally, the preset prediction mode is at least one of: a prediction mode used by a partition divided by a partition line in an image block and/or a prediction mode used by an adjacent image block.
In an embodiment, the determining module 1201 is specifically configured to: if the prediction mode comprises a first prediction mode and/or a second prediction mode, determining a prediction result set of a partition corresponding to the image block according to the motion vector and/or the target reference sampling point; and determining a prediction result set of the image block according to the prediction result set of the partition corresponding to the image block.
Optionally, the target reference sampling points comprise first reference sampling points and/or second reference sampling points;
the first reference sampling point is adjacent to the corresponding partition and is not adjacent to another partition;
the second reference sampling point is not adjacent to the corresponding partition and is adjacent to the other partition;
the positional relationship of the first reference sampling point with respect to the corresponding partition is different from the positional relationship of the second reference sampling point with respect to the corresponding partition;
and the first reference sampling point or the second reference sampling point is a pixel point which is adjacent to the coding tree unit where the image block is located and is not adjacent to the image block.
In one embodiment, the filling module 1202 is specifically configured to: if the prediction mode comprises a second prediction mode, filling the second reference sampling point according to the first reference sampling point to obtain a filled reference sampling point; and determining a prediction result set of the partition corresponding to the image block according to the first reference sampling point and the filled reference sampling point.
It can be understood that the functions of the functional modules of the image processing apparatus described in this embodiment may be implemented according to the method in the foregoing method embodiments; for the specific implementation process, reference may be made to the related description of the foregoing method embodiments, which is not repeated here. In addition, the beneficial effects of the same method are not described again.
The embodiments of the present application further provide an intelligent terminal. The intelligent terminal includes a memory and a processor, the memory stores an image processing program, and the image processing program, when executed by the processor, implements the image processing method in any of the above embodiments. The intelligent terminal may be the mobile terminal 100 shown in fig. 1.
In a possible embodiment, the processor 110 of the mobile terminal 100 shown in fig. 1 may be configured to call the image processing program stored in the memory 109 to perform the following operations: and determining a prediction result set of a corresponding partition in the target image block according to a preset prediction mode, wherein the prediction result set is used for determining the prediction result of the target image block.
Optionally, the target image block includes a first partition and/or a second partition, and the first partition and/or the second partition are image areas divided by a dividing line; the preset prediction mode comprises a prediction mode used by partitions divided by a dividing line in the target image block; the set of predictors includes a first set of predictors for the first partition and/or a second set of predictors for the second partition.
In one embodiment, the processor 110 is further configured to: determining a target division mode parameter of a target image block according to the first division mode set; the target division mode parameter includes prediction mode indication information for indicating a prediction mode used by a corresponding partition in the target image block.
In one embodiment, the processor 110 is specifically configured to at least one of: if the prediction mode is the first prediction mode, determining a prediction result set of a corresponding partition in the target image block according to the motion vector of the corresponding partition in the target image block; and if the prediction mode is the second prediction mode, determining a prediction result set of a corresponding partition in the target image block according to the target reference sampling points of the corresponding partition in the target image block.
In one embodiment, the processor 110 is specifically configured to: determining a first motion vector and/or a second motion vector of a corresponding partition in the target image block according to the merging candidate list of the target image block; and determining a prediction result set of a corresponding partition in the target image block according to the first motion vector and/or the second motion vector.
Optionally, the target reference sampling points comprise first reference sampling points and/or second reference sampling points.
Optionally, at least one of:
the first and second reference sample points are different;
the first reference sampling point is adjacent to the corresponding partition and is not adjacent to another partition;
the second reference sampling point is not adjacent to the corresponding partition and is adjacent to the other partition;
the position relation of the first reference sampling point relative to the corresponding partition is different from the position relation of the second reference sampling point relative to the corresponding partition;
and the first reference sampling point or the second reference sampling point is a pixel point which is adjacent to the coding tree unit where the target image block is located and is not adjacent to the target image block.
Optionally, the second prediction mode includes at least one type of second prediction mode, and if the prediction mode is the second prediction mode, the processor 110 is further configured to: and determining a second prediction mode of the target type used by the corresponding partition of the target image block from the second prediction modes of the at least one type, wherein the second prediction mode of the target type is used for determining a prediction result set of the target image block.
In one embodiment, the processor 110 is specifically configured to: determining first reference sampling points and/or second reference sampling points of corresponding partitions in the target image block; and determining a prediction result set of a corresponding partition in the target image block according to the first reference sampling point and/or the second reference sampling point.
In one embodiment, the processor 110 is specifically configured to: determine the first reference sampling points and/or the second reference sampling points of a corresponding partition in the target image block according to the positional relationship between the corresponding partition and the boundary sampling points, in the target image block, of the partition line used by the target image block.
In one embodiment, the processor 110 is specifically configured to: determining the boundary passed by the dividing line used by the target image block according to a boundary mapping table; and determining the boundary sampling points according to the boundary passed by the dividing line.
In one embodiment, the processor 110 is specifically configured to at least one of: determining that the boundary sampling points comprise first boundary sampling points when a division line used by the target image block passes through a first boundary of the target image block; and when the division line used by the target image block passes through a second boundary of the target image block, determining that the boundary sampling points comprise second boundary sampling points.
Optionally, the processor 110 is further specifically configured to: determine a dividing line equation according to the target division mode parameter of the target image block; and determine at least one boundary sampling point according to a boundary reference point and the dividing line equation.
In one embodiment, the processor 110 is specifically configured to: determining a first coordinate range and/or a second coordinate range according to the at least one boundary sampling point; and determining a first reference sampling point and/or a second reference sampling point according to the first coordinate range and/or the second coordinate range.
In one embodiment, the processor 110 is specifically configured to: determine the partition range of a corresponding partition in the target image block according to the distance between the sampling points of the corresponding partition and the partition line, and determine the first reference sampling points and/or the second reference sampling points of the corresponding partition based on the coordinate range of the first sampling points and the partition range; or determine a first sampling point adjacent to the target image block as a first reference sampling point or a second reference sampling point of the corresponding partition based on distance information between that first sampling point and the dividing line.
In one embodiment, the processor 110 is configured to: filling the second reference sampling point according to the first reference sampling point to obtain a filled reference sampling point; and determining a prediction result set of the corresponding partition of the target image block according to the first reference sampling point and the filled reference sampling point.
In one embodiment, the processor 110 is specifically configured to: determining at least one padding reference sample point among the first reference sample points;
determining a filling value of the second reference sample point based on the sample value of the at least one filling reference sample point; and filling the second reference sampling point based on the filling value to obtain the filled reference sampling point.
In one embodiment, the processor 110 is specifically configured to: determine a filling weight of each filling reference sampling point based on the positional relationship between that filling reference sampling point and the second reference sampling point; and determine the filling value of the second reference sampling point based on the filling weights and the sampling values of the respective filling reference sampling points.
Optionally, at least one of:
the positional relationship comprises a distance between the respective filling reference sample point and the second reference sample point;
the first reference sample points comprise first reference sample points adjacent to a first boundary of the target image block and/or first reference sample points adjacent to a second boundary of the target image block;
the at least one padded reference sample point comprises a first padded reference sample point of the first reference sample points adjacent to the first boundary and/or a second padded reference sample point of the first reference sample points adjacent to the second boundary;
In one embodiment, the processor 110 is specifically configured to: determine the filling value of a second reference sampling point adjacent to the corresponding boundary based on the sampling value and the filling weight of the first filling reference sampling point and/or the sampling value and the filling weight of the second filling reference sampling point.
In one possible implementation, the processor 110 of the mobile terminal 100 shown in fig. 1 may be configured to invoke an image processing program stored in the memory 109 to perform the following operations: determining a target reference sampling point through a preset strategy; and determining a prediction result set according to the target reference sampling point and a preset prediction mode.
In one embodiment, the processor 110 is specifically configured to: and determining a target reference sampling point according to the position relation between the reference sampling point and the sampling point in the image block partition.
In one embodiment, the processor 110 is specifically configured to: determining at least one boundary sample point from the sample points of the image block partition; and determining a target reference sampling point according to the position relation between the reference sampling point and the at least one boundary sampling point.
Optionally, the at least one boundary sample point includes a first boundary sample point and/or a second boundary sample point, and the processor 110 is specifically configured to: determining a first coordinate range according to the first boundary sampling points, and/or determining a second coordinate range according to the second boundary sampling points;
and determining a target reference sampling point according to the position relation between the reference sampling point and the first coordinate range and/or according to the position relation between the reference sampling point and the second coordinate range.
In one embodiment, the processor 110 is specifically configured to: determining a partition range according to sampling points in the image block partitions; and determining target reference sampling points according to the position relation between the reference sampling points and the partition range.
In one embodiment, the processor 110 is specifically configured to: determining a preset prediction mode according to the prediction mode indication information of the image block; and determining a prediction result set according to the preset prediction mode and the target reference sampling point.
Optionally, the preset prediction mode is at least one of: a prediction mode used by a partition divided by a partition line in an image block and/or a prediction mode used by an adjacent image block.
In one embodiment, the processor 110 is specifically configured to: if the prediction mode comprises a first prediction mode and/or a second prediction mode, determining a prediction result set of a partition corresponding to the image block according to the motion vector and/or the target reference sampling point; and determining a prediction result set of the image block according to the prediction result set of the partition corresponding to the image block.
Optionally, the target reference sampling points comprise first reference sampling points and/or second reference sampling points;
the first reference sampling point is adjacent to the corresponding partition and is not adjacent to another partition;
the second reference sampling point is not adjacent to the corresponding partition and is adjacent to the other partition;
the position relation of the first reference sampling point relative to the corresponding partition is different from the position relation of the second reference sampling point relative to the corresponding partition;
and the first reference sampling point or the second reference sampling point is a pixel point which is adjacent to the coding tree unit where the image block is located and is not adjacent to the image block.
In one embodiment, the processor 110 is specifically configured to: if the prediction mode comprises a second prediction mode, filling the second reference sampling point according to the first reference sampling point to obtain a filled reference sampling point; and determining a prediction result set of the partition corresponding to the image block according to the first reference sampling point and the filled reference sampling point.
It should be understood that the mobile terminal described in the embodiments of the present application may perform the method described in any one of the above embodiments, and may also perform the functions of the image processing apparatus in the corresponding embodiment above, which are not repeated here. In addition, the beneficial effects of the same method are not described again.
The embodiment of the present application further provides a computer-readable storage medium, where an image processing program is stored on the storage medium, and when the image processing program is executed by a processor, the image processing program implements the steps of the image processing method in any of the above embodiments.
In the embodiments of the intelligent terminal and the computer-readable storage medium provided in the present application, all technical features of any one of the embodiments of the image processing method may be included, and the expanding and explaining contents of the specification are basically the same as those of the embodiments of the method, and are not described herein again.
Embodiments of the present application also provide a computer program product, which includes computer program code; when the computer program code runs on a computer, the computer is caused to execute the method in the above various possible embodiments.
Embodiments of the present application further provide a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method in the above various possible embodiments.
It is to be understood that the foregoing scenarios are only examples, and do not constitute a limitation on application scenarios of the technical solutions provided in the embodiments of the present application, and the technical solutions of the present application may also be applied to other scenarios. For example, as can be known by those skilled in the art, with the evolution of system architecture and the emergence of new service scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device in the embodiment of the application can be merged, divided and deleted according to actual needs.
In the present application, the same or similar term concepts, technical solutions and/or application scenario descriptions are generally described in detail only at their first occurrence; for brevity, they are not described in detail again when repeated later. When understanding the technical solutions of the present application, reference may be made to the earlier detailed descriptions for any same or similar term concepts, technical solutions and/or application scenario descriptions that are not detailed later.
In the present application, each embodiment is described with emphasis, and reference may be made to the description of other embodiments for parts that are not described or illustrated in any embodiment.
The technical features of the technical solutions of the present application may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features is not contradictory, it should be considered to be within the scope described in this specification.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a controlled terminal, or a network device) to execute the method of each embodiment of the present application.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.
Claims (25)
1. An image processing method, comprising:
determining a prediction result set of a corresponding partition in a target image block according to a preset prediction mode, wherein the prediction result set is used for determining the prediction result of the target image block;
the determining a prediction result set of a corresponding partition in a target image block according to a preset prediction mode includes: if the preset prediction mode is a second prediction mode, determining a prediction result set of a corresponding partition in the target image block according to target reference sampling points of the corresponding partition in the target image block;
the target reference sampling points are determined based on the position relation between corresponding partitions in the target image block and boundary sampling points in the target image block, wherein the boundary sampling points refer to sampling points through which partition lines used by the target image block pass among sampling points adjacent to encoded pixel points in the target image block.
2. The method of claim 1, comprising at least one of:
the target image block comprises a first partition and/or a second partition, and the first partition and/or the second partition are image areas obtained by dividing through a dividing line;
the preset prediction mode comprises a prediction mode used by partitions divided by a dividing line in the target image block;
the set of predictors includes a first set of predictors for the first partition and/or a second set of predictors for the second partition.
3. The method of claim 2, wherein the method further comprises:
determining a target division mode parameter of a target image block according to the first division mode set; the target division mode parameter includes prediction mode indication information for indicating a prediction mode used by a corresponding partition in the target image block.
4. The method as claimed in claim 2, wherein the determining the prediction result set of the corresponding partition in the target image block according to the preset prediction mode further comprises:
and if the prediction mode is the first prediction mode, determining a prediction result set of a corresponding partition in the target image block according to the motion vector of the corresponding partition in the target image block.
5. The method as claimed in claim 4, wherein said determining a set of prediction results for corresponding partitions in the target image block according to motion vectors of the corresponding partitions in the target image block comprises:
determining a first motion vector and/or a second motion vector of a corresponding partition in the target image block according to the merging candidate list of the target image block;
and determining a prediction result set of a corresponding partition in the target image block according to the first motion vector and/or the second motion vector.
6. The method of claim 1, further comprising at least one of:
the target reference sampling points comprise first reference sampling points and/or second reference sampling points;
the second prediction mode includes at least one type of second prediction mode.
7. The method of claim 6, further comprising at least one of:
the first and second reference sample points are different;
the first reference sampling point is adjacent to the corresponding partition and is not adjacent to another partition;
the second reference sampling point is not adjacent to the corresponding partition and is adjacent to the other partition;
the position relation of the first reference sampling point relative to the corresponding partition is different from the position relation of the second reference sampling point relative to the corresponding partition;
the first reference sampling point or the second reference sampling point is a pixel point which is adjacent to the coding tree unit where the target image block is located and is not adjacent to the target image block;
if the predetermined prediction mode is a second prediction mode, the method further comprises: and determining a second prediction mode of a target type used by the corresponding partition of the target image block from the second prediction modes of the at least one type, wherein the second prediction mode of the target type is used for determining a prediction result set of the target image block.
8. The method as claimed in claim 6, wherein said determining the set of prediction results for the corresponding partition in the target image block according to the target reference sampling points of the corresponding partition in the target image block comprises the steps of:
s21: determining first reference sampling points and/or second reference sampling points of corresponding partitions in the target image block;
s22: and determining a prediction result set of a corresponding partition in the target image block according to the first reference sampling point and/or the second reference sampling point.
9. The method of claim 8, wherein the step S21 includes at least one of:
determining a first reference sampling point and/or a second reference sampling point of a corresponding partition in the target image block according to the position relation between the corresponding partition and the boundary sampling points, in the target image block, of the partition line used by the target image block;
determining a partition range of a corresponding partition in the target image block according to a distance between a sampling point of the corresponding partition in the target image block and a partition line, and determining a first reference sampling point and/or a second reference sampling point of the corresponding partition in the target image block based on a coordinate range of the first sampling point and the partition range, or determining the first sampling point as the first reference sampling point or the second reference sampling point of the corresponding partition in the target image block based on distance information between the first sampling point adjacent to the target image block and the partition line.
10. The method of claim 9, wherein the method further comprises:
determining a boundary through which a dividing line used by the target image block passes according to a boundary mapping table;
and determining the boundary sampling points according to the boundary passed by the dividing line.
11. The method of claim 10, wherein determining the boundary sample points from the boundary traversed by the dividing line comprises at least one of:
determining that the boundary sampling points comprise first boundary sampling points when a division line used by the target image block passes through a first boundary of the target image block;
and when the division line used by the target image block passes through a second boundary of the target image block, determining that the boundary sampling points comprise second boundary sampling points.
12. The method as claimed in claim 10, wherein the determining of the boundary sample points comprises:
determining a dividing line equation according to the target dividing mode parameters of the target image block;
and determining at least one boundary sampling point according to a boundary reference point and the dividing line equation.
13. The method according to any one of claims 9 to 12, wherein the determining a first reference sampling point and/or a second reference sampling point of a corresponding partition in the target image block according to the position relation between the corresponding partition and the boundary sampling points, in the target image block, of the partition line used by the target image block comprises:
determining a first coordinate range and/or a second coordinate range according to at least one boundary sampling point;
and determining a first reference sampling point and/or a second reference sampling point according to the first coordinate range and/or the second coordinate range.
14. The method according to any one of claims 8 to 12, wherein the step S22 includes:
filling the second reference sampling point according to the first reference sampling point to obtain a filled reference sampling point;
and determining a prediction result set of the corresponding partition of the target image block according to the first reference sampling point and the filled reference sampling point.
15. The method of claim 14, wherein the padding the second reference sample points according to the first reference sample point to obtain padded reference sample points comprises:
determining at least one padding reference sample point among the first reference sample points;
determining a filling value of the second reference sample point based on the sample value of the at least one filling reference sample point;
and filling the second reference sampling point based on the filling value to obtain the filled reference sampling point.
16. The method of claim 15, wherein the determining the padding values for the second reference sample points based on the sample value of the at least one padded reference sample point comprises:
determining a filling weight of each filling reference sampling point based on a position relation between the each filling reference sampling point and the second reference sampling point;
determining a padding value for the second reference sample point based on the padding weight and the sample value of the respective padded reference sample point.
17. The method of claim 16, comprising at least one of:
the positional relationship comprises a distance between the respective filling reference sample point and the second reference sample point;
the first reference sample points comprise first reference sample points adjacent to a first boundary of the target image block and/or first reference sample points adjacent to a second boundary of the target image block;
the at least one padded reference sample point comprises a first padded reference sample point of the first reference sample points adjacent to the first boundary and/or a second padded reference sample point of the first reference sample points adjacent to the second boundary;
the determining of the padding values of the second reference sampling points based on the padding weights and the sampling values of the respective padded reference sampling points comprises:
determining a filling value of a second filled reference sample point adjacent to the corresponding boundary based on the sample value of the first filled reference sample point and the filling weight of the first filled reference sample point and/or the sample value of the second filled reference sample point and the filling weight of the second filled reference sample point.
18. An image processing method, characterized by comprising the steps of:
s1: determining a target reference sampling point through a preset strategy, wherein the target reference sampling point is determined based on the position relation between an image block partition and a boundary sampling point in an image block, and the boundary sampling point refers to a sampling point through which a partition line used by the image block passes in sampling points adjacent to a coded pixel point in the image block;
s2: determining a prediction result set according to the target reference sampling point and a preset prediction mode; the preset prediction mode includes a second prediction mode.
19. The method of claim 18, wherein the step of S1 includes at least one of:
determining at least one boundary sampling point from the sampling points of the image block partitions, and determining a target reference sampling point according to the position relation between a reference sampling point and the at least one boundary sampling point;
and determining a partition range according to the sampling points in the image block partition, and determining a target reference sampling point according to the position relation between the reference sampling point and the partition range.
20. The method as claimed in claim 19, wherein the at least one boundary sample point comprises a first boundary sample point and/or a second boundary sample point, and the determining a target reference sample point according to a positional relationship between a reference sample point and the at least one boundary sample point comprises:
determining a first coordinate range according to the first boundary sampling points, and/or determining a second coordinate range according to the second boundary sampling points;
and determining a target reference sampling point according to the position relation between the reference sampling point and the first coordinate range and/or according to the position relation between the reference sampling point and the second coordinate range.
21. The method according to any one of claims 18 to 20, wherein the step S2 includes:
determining a preset prediction mode according to the prediction mode indication information of the image block;
and determining a prediction result set according to the preset prediction mode and the target reference sampling point.
22. The method of claim 21, comprising at least one of:
the preset prediction mode is as follows: a prediction mode used by partitions divided by a division line in the image block and/or a prediction mode used by an adjacent image block;
the preset prediction mode is a prediction mode used by partitions divided by a partition line in the image block, and the determining of the prediction result set according to the preset prediction mode and the target reference sampling point comprises the following steps: if the prediction mode comprises a first prediction mode and/or a second prediction mode, determining a prediction result set of a partition corresponding to the image block according to a motion vector and/or a target reference sampling point, and determining the prediction result set of the image block according to the prediction result set of the partition corresponding to the image block;
the target reference sampling points comprise first reference sampling points and/or second reference sampling points;
the first reference sampling point is adjacent to the corresponding partition and is not adjacent to another partition;
the second reference sampling point is not adjacent to the corresponding partition and is adjacent to the other partition;
the position relation of the first reference sampling point relative to the corresponding partition is different from the position relation of the second reference sampling point relative to the corresponding partition;
and the first reference sampling point or the second reference sampling point is a pixel point which is adjacent to the coding tree unit where the image block is located and is not adjacent to the image block.
23. The method as claimed in claim 22, wherein if the prediction mode comprises a second prediction mode, determining a set of prediction results for the corresponding partition of the image block based on the target reference sample points comprises:
if the prediction mode comprises a second prediction mode, filling the second reference sampling point according to the first reference sampling point to obtain a filled reference sampling point;
and determining a prediction result set of the partition corresponding to the image block according to the first reference sampling point and the filled reference sampling point.
24. An intelligent terminal, characterized in that, intelligent terminal includes: memory, a processor, wherein the memory has stored thereon an image processing program which, when executed by the processor, implements the steps of the image processing method of any of claims 1 to 23.
25. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 23.
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210491875.0A (CN114598880B) | 2022-05-07 | 2022-05-07 | Image processing method, intelligent terminal and storage medium |
| PCT/CN2023/090367 (WO2023216866A1) | 2022-05-07 | 2023-04-24 | Image processing method, intelligent terminal and storage medium |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210491875.0A (CN114598880B) | 2022-05-07 | 2022-05-07 | Image processing method, intelligent terminal and storage medium |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN114598880A | 2022-06-07 |
| CN114598880B | 2022-09-16 |
Family

ID=81812890

Family Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210491875.0A (CN114598880B, active) | 2022-05-07 | 2022-05-07 | Image processing method, intelligent terminal and storage medium |

Country Status (2)

| Country | Link |
|---|---|
| CN (1) | CN114598880B (en) |
| WO (1) | WO2023216866A1 (en) |
Families Citing this family (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114598880B * | 2022-05-07 | 2022-09-16 | 深圳传音控股股份有限公司 | Image processing method, intelligent terminal and storage medium |
| CN115002463B * | 2022-07-15 | 2023-01-13 | 深圳传音控股股份有限公司 | Image processing method, intelligent terminal and storage medium |
| CN115379214B * | 2022-10-26 | 2023-05-23 | 深圳传音控股股份有限公司 | Image processing method, intelligent terminal and storage medium |
| CN115955565B * | 2023-03-15 | 2023-07-04 | 深圳传音控股股份有限公司 | Processing method, processing apparatus, and storage medium |
| CN116847088B * | 2023-08-24 | 2024-04-05 | 深圳传音控股股份有限公司 | Image processing method, processing apparatus, and storage medium |
Citations (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2016200235A1 * | 2015-06-11 | 2016-12-15 | 엘지전자(주) | Intra-prediction mode-based image processing method and apparatus therefor |
| WO2020086248A1 * | 2018-10-23 | 2020-04-30 | Interdigital Vc Holdings, Inc. | Method and device for picture encoding and decoding |
| WO2021015581A1 * | 2019-07-23 | 2021-01-28 | 한국전자통신연구원 | Method, apparatus, and recording medium for encoding/decoding image by using geometric partitioning |
| CN112532997A * | 2019-10-04 | 2021-03-19 | Oppo广东移动通信有限公司 | Image prediction method, encoder, decoder, and storage medium |
| WO2022019613A1 * | 2020-07-20 | 2022-01-27 | 한국전자통신연구원 | Method, apparatus, and recording medium for encoding/decoding image by using geometric partitioning |
| CN114422781A * | 2022-03-29 | 2022-04-29 | 深圳传音控股股份有限公司 | Image processing method, intelligent terminal and storage medium |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103517070B (en) * | 2013-07-19 | 2017-09-29 | Tsinghua University | Image decoding method and device |
WO2017124305A1 (en) * | 2016-01-19 | 2017-07-27 | Peking University Shenzhen Graduate School | Panoramic video coding and decoding methods and devices based on multi-mode boundary fill |
CN116506607A (en) * | 2016-08-01 | 2023-07-28 | Electronics and Telecommunications Research Institute | Image encoding/decoding method and apparatus, and recording medium storing bit stream |
CN111083486A (en) * | 2019-01-03 | 2020-04-28 | Beijing Dajia Internet Information Technology Co., Ltd. | Method and device for determining chrominance information of coding unit |
CN114586366A (en) * | 2020-04-03 | 2022-06-03 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Inter-frame prediction method, encoder, decoder, and storage medium |
MX2023000279A (en) * | 2020-10-16 | 2023-02-09 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Intra prediction method, encoder, decoder, and storage medium |
CN114598880B (en) * | 2022-05-07 | 2022-09-16 | Shenzhen Transsion Holdings Co., Ltd. | Image processing method, intelligent terminal and storage medium |
2022
- 2022-05-07: CN application CN202210491875.0A filed (granted as CN114598880B, status: Active)
2023
- 2023-04-24: PCT application PCT/CN2023/090367 filed (published as WO2023216866A1, status: unknown)
Also Published As
Publication number | Publication date |
---|---|
WO2023216866A1 (en) | 2023-11-16 |
CN114598880A (en) | 2022-06-07 |
Similar Documents
Publication | Title
---|---
CN114598880B | Image processing method, intelligent terminal and storage medium
CN115002463B | Image processing method, intelligent terminal and storage medium
CN114422781B | Image processing method, intelligent terminal and storage medium
US10986332B2 | Prediction mode selection method, video encoding device, and storage medium
CN112822491B | Image data encoding and decoding method and device
CN115834897B | Processing method, processing apparatus, and storage medium
CN115988206B | Image processing method, processing apparatus, and storage medium
CN115379214B | Image processing method, intelligent terminal and storage medium
EP3764648A1 | Motion estimation method and device for video, terminal and storage medium
CN116456102B | Image processing method, processing apparatus, and storage medium
CN116668704B | Processing method, processing apparatus, and storage medium
CN115955565B | Processing method, processing apparatus, and storage medium
WO2024212649A1 | Image processing method, processing device and storage medium
US10827198B2 | Motion estimation method, apparatus, and storage medium
CN115422986B | Processing method, processing apparatus, and storage medium
CN116847088B | Image processing method, processing apparatus, and storage medium
CN117176959B | Processing method, processing apparatus, and storage medium
CN219068846U | OLED display panel and intelligent terminal
WO2023019567A1 | Image processing method, mobile terminal and storage medium
WO2024212191A1 | Image processing method, processing device, and storage medium
CN111654708A | Motion vector obtaining method and device and electronic equipment
CN110213593A | Motion vector calculation method, code compression method, and related apparatus
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant