WO2015003383A1 - Methods for inter-view motion prediction - Google Patents
Methods for inter-view motion prediction
- Publication number
- WO2015003383A1 (PCT/CN2013/079287)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sub
- motion
- view
- parameters
- derived
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/161—Encoding, multiplexing or demultiplexing different image signal components
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/139—Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
Definitions
- IVMP: inter-view motion prediction
- NBDV: neighboring block disparity vector
- DoNBDV: depth-oriented NBDV
- SPIVMP: sub-PU level IVMP
- TIVMC: temporal inter-view merge candidate of the current Prediction Unit (PU)
- DSP: Digital Signal Processor
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Methods of inter-view motion prediction for multi-view video coding and 3D video coding are disclosed. The motion parameters of each sub-PU in a current PU are predicted according to the motion parameters of a corresponding reference block in a coded picture in the reference view.
Description
METHODS FOR INTER-VIEW MOTION PREDICTION
TECHNICAL FIELD
The invention relates generally to Three-Dimensional (3D) video processing. In particular, the present invention relates to methods for inter-view motion prediction in 3D video coding.
BACKGROUND
3D video coding has been developed for encoding or decoding video data of multiple views simultaneously captured by several cameras. Since all the cameras capture the same scene from different viewpoints, multi-view video data contains a large amount of inter-view redundancy. To exploit this inter-view redundancy, additional tools such as inter-view motion prediction (IVMP) have been integrated into conventional 3D-HEVC (High Efficiency Video Coding) and 3D-AVC (Advanced Video Coding) codecs.
The basic concept of IVMP in the current 3DV-HTM is illustrated in Fig. 1. To derive the motion parameters of the temporal inter-view merge candidate (TIVMC) for a current Prediction Unit (PU) in a dependent view, a disparity vector (DV), such as the well-known neighboring block disparity vector (NBDV) or the depth-oriented NBDV (DoNBDV), is first derived for the current PU. By adding the DV to the middle position of the current PU, a reference sample location is obtained. The prediction block in the already coded picture in the reference view that covers this sample location is used as the reference block. If this reference block is coded using motion-compensated prediction (MCP), the associated motion parameters can be used as the TIVMC for the current PU in the current view. The derived DV can also be used directly as the disparity inter-view merge candidate (DIVMC) for the current PU for disparity-compensated prediction (DCP).
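As a rough illustration (not actual HTM code), the following Python sketch shows how the reference block could be located and its motion parameters reused; the picture/block objects, the derive_dv() helper, and the attribute names are hypothetical stand-ins for the real codec data structures.

```python
def derive_ivmp_candidate(cur_pu, ref_view_picture, derive_dv):
    """Sketch of PU-level IVMP: one reference block per PU.

    cur_pu           -- block with x, y, width, height (hypothetical interface)
    ref_view_picture -- already coded picture in the reference view
    derive_dv        -- callable returning a disparity vector (e.g. NBDV/DoNBDV)
    """
    dv = derive_dv(cur_pu)
    # Reference sample location = middle position of the PU shifted by the DV.
    mid_x = cur_pu.x + cur_pu.width // 2
    mid_y = cur_pu.y + cur_pu.height // 2
    ref_block = ref_view_picture.block_covering(mid_x + dv.x, mid_y + dv.y)
    if ref_block.is_mcp_coded():
        return ref_block.motion_params   # usable as the TIVMC of the PU
    return None                          # no TIVMC; the DV itself may serve as DIVMC
```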
The corresponding area in the reference view may contain plentiful motion information; however, only the motion information at the middle position of the area is used for the current PU in the dependent view, which degrades coding performance.
SUMMARY
In light of the problems described above, a sub-PU level IVMP (SPIVMP) method is proposed, which predicts the motion parameters of the current PU at a finer granularity.
Other aspects and features of the invention will become apparent to those with ordinary skill in the art upon review of the following descriptions of specific embodiments.
BRIEF DESCRIPTION OF DRAWINGS
The invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:
Fig. 1 is a diagram illustrating the concept of IVMP in current HTM;
Fig. 2 is a diagram illustrating the concept of SPIVMP according to an embodiment of the invention.
DETAILED DESCRIPTION
The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
The proposed sub-PU level IVMP (SPIVMP) is shown in Fig. 2. The temporal inter-view merge candidate (TIVMC) of the current Prediction Unit (PU) is derived according to the proposed SPIVMP as follows:
First, the current PU is divided into multiple sub-PUs of smaller size.
Second, a derived DV is added to the middle position of each sub-PU to obtain a group of reference sample locations, and for each reference sample location, the prediction block in the already coded picture in the reference view that covers that sample location is used as the reference block. In general, this second step locates a reference block in the reference view for each sub-PU according to a derived DV. The derived DV can be different for each sub-PU, or all sub-PUs can share a unified derived DV.
Third, for each reference block, if it is coded using MCP, the associated motion parameters can be used as the TIVMC for the corresponding sub-PU of the current PU in the current view. Otherwise, the corresponding sub-PU can share the candidate motion parameters with its spatial neighbor.
Finally, the TIVMC of the current PU is composed of the TIVMCs of all the sub-PUs.
Specifically, the following pseudocode is an embodiment of the third step above. We assume that there are N sub-PUs in the current PU.
- tempMP is set equal to null
- For each sub-PU, denoted as SPi, in the current PU, with i=0..N-1:
  o The candidate motion parameters of SPi, denoted as CMP(SPi), are set equal to null.
  o If the motion parameters of the reference block of SPi, denoted as MP(RefBlk(SPi)), are available:
    ■ CMP(SPi) is set equal to MP(RefBlk(SPi))
    ■ tempMP is set equal to MP(RefBlk(SPi))
    ■ If CMP(SPi-1) is null, then for each j=0..i-1:
      • CMP(SPj) is set equal to tempMP
  o Else:
    ■ If tempMP is not null, CMP(SPi) is set equal to tempMP
- If tempMP is not null, the TIVMC of the current PU is marked as available
- Else, the TIVMC of the current PU is marked as unavailable
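Read as a single pass with backfilling, the pseudocode above could be realized as in the following Python sketch; the list-based interface (one entry of motion parameters, or None, per sub-PU) is a hypothetical simplification of MP(RefBlk(SPi)).

```python
def derive_spivmp_candidates(ref_block_motion_params):
    """One-pass candidate filling over N sub-PUs.

    ref_block_motion_params[i] holds MP(RefBlk(SPi)), or None when the
    reference block of sub-PU i has no usable (MCP) motion parameters.
    Returns (cmp_list, tivmc_available).
    """
    n = len(ref_block_motion_params)
    cmp_list = [None] * n          # CMP(SPi) for each sub-PU
    temp_mp = None                 # last available motion parameters seen
    for i, mp in enumerate(ref_block_motion_params):
        if mp is not None:
            cmp_list[i] = mp
            temp_mp = mp
            # Backfill leading sub-PUs that have no candidate yet.
            if i > 0 and cmp_list[i - 1] is None:
                for j in range(i):
                    cmp_list[j] = temp_mp
        elif temp_mp is not None:
            # Forward-fill from the nearest preceding available sub-PU.
            cmp_list[i] = temp_mp
    return cmp_list, temp_mp is not None
```

For example, an availability pattern [None, mpA, None, mpB] yields candidates [mpA, mpA, mpA, mpB]: the first available motion parameters backfill the leading sub-PU, and a later unavailable sub-PU inherits from its nearest preceding available one.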
In another example, if the reference block of the current sub-PU is coded using DCP, the associated motion (disparity) parameter can be derived as the DIVMC for the current sub-PU to perform DCP, or the derived DV of the current sub-PU can be used directly as the DIVMC for the current sub-PU to perform DCP.
In another example, if the reference block of the current sub-PU is intra coded or coded using DCP, the current sub-PU can use the motion parameter from a neighboring sub-PU, the derived DV, or a default motion parameter as its candidate motion parameter.
In another example, if the reference block is MCP coded, the associated motion parameter is used as the TIVMC for the current sub-PU. If the reference block is DCP coded, the associated motion parameter is used as the DIVMC for the current sub-PU. If the reference block is intra coded, the motion parameter from the neighboring sub-PUs or a default motion parameter can be used as the candidate motion parameter for the current sub-PU.
The block size of a sub-PU can be 4x4, 8x8, 16x16, or another size. If a PU has a block size smaller than or equal to that of a sub-PU, the PU is not divided.
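A minimal sketch of this partitioning rule is given below; the function and parameter names are hypothetical, the 8x8 default is just an example, and the usual power-of-two block sizes are assumed.

```python
def split_into_sub_pus(pu_width, pu_height, sub_pu_size=8):
    """Return (x, y, width, height) offsets of the sub-PUs inside a PU.

    A PU whose dimensions are smaller than or equal to the sub-PU size
    is not divided and is returned as a single block.
    """
    if pu_width <= sub_pu_size and pu_height <= sub_pu_size:
        return [(0, 0, pu_width, pu_height)]
    return [(x, y, sub_pu_size, sub_pu_size)
            for y in range(0, pu_height, sub_pu_size)
            for x in range(0, pu_width, sub_pu_size)]

# e.g. a 16x16 PU with an 8x8 sub-PU size yields four sub-PUs.
assert len(split_into_sub_pus(16, 16, 8)) == 4
```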
Each sub-PU can have its own associated derived DV, or all the sub-PUs in the current PU can share one derived DV.
One or more syntax elements can be used to signal whether the current PU is further divided into sub-PUs or not, or to indicate the sub-PU size.
The syntax elements can be explicitly transmitted at the sequence, view, picture, or slice level, for example in the SPS, VPS, APS, or slice header.
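Purely as an illustration of such sequence-level signalling (the element name, coding, and location are hypothetical and not an actual 3D-HEVC syntax definition), a log2-coded sub-PU size could be parsed as follows:

```python
def parse_sub_pu_size(read_bits, min_log2=2, num_bits=3):
    """Hypothetical parser: a fixed-length field carrying log2(sub-PU size).

    read_bits(n) is assumed to return the next n bits of the bitstream as an
    unsigned integer. With min_log2=2, coded values 0, 1, 2 map to 4x4, 8x8, 16x16.
    """
    log2_size = min_log2 + read_bits(num_bits)
    return 1 << log2_size
```

An encoder would write the same field in, for example, the SPS, VPS, or slice header, and a signalled size equal to or larger than the PU size would simply mean the PU is not further divided.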
The information about whether the PU is further divided, and the sub-PU size, can also be derived implicitly on the decoder side, for example according to mode selections, the motion parameters of the neighboring PUs, or the motion parameters of the reference blocks of the sub-PUs.
The SPIVMP method described above can be used in a video encoder as well as in a video decoder. Embodiments of the SPIVMP method according to the present invention as described above may be implemented in various hardware, software code, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or a field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. However, different code formats, styles, and languages of software code, and other means of configuring code to perform the tasks in accordance with the invention, will not depart from the spirit and scope of the invention.
The invention may be embodied in other specific forms without departing from
its spirit or essential characteristics. The described examples are to be considered in all respects only as illustrative and not restrictive. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
Claims
1. A method of inter-view motion prediction for multi-view video coding or 3D video coding, wherein motion or disparity parameters of each sub-Prediction Unit (PU) in a current PU are predicted according to motion or disparity parameters of a corresponding reference block in a coded picture in a reference view.
2. The method as claimed in claim 1, wherein a derived disparity vector (DV) is added to a middle position of each sub-PU to locate the corresponding reference block in the coded picture in the reference view.
3. The method as claimed in claim 1, wherein the sub-PU shares the motion or disparity parameters of neighboring sub-PUs when the motion or disparity parameters of the corresponding reference block in the coded picture in the reference view are not available.
4. The method as claimed in claim 1, wherein the sub-PU uses a default motion or disparity parameter when the motion or disparity parameters of the corresponding reference block in the coded picture in the reference view are not available.
5. The method as claimed in claim 1, wherein if the motion parameters in a temporal direction of the reference block are available, the motion parameters are used as candidate motion parameters in the temporal direction for a current sub-PU; otherwise, if the motion (disparity) parameters in an inter-view direction of the reference block are available, the motion parameters or a derived DV is used as the candidate motion parameters in the inter-view direction for the current sub-PU; otherwise, a default motion parameter or the derived DV is used as a motion parameter predictor of the current sub-PU.
6. The method as claimed in claim 1, wherein a motion vector and a Picture Order Count (POC) of a reference picture are the same for a current sub-PU and a corresponding reference block when the reference picture is in a temporal direction.
7. The method as claimed in claim 1, wherein a motion or disparity vector of a reference block is scaled to generate a motion vector of a sub-PU when the reference picture is in an inter-view direction.
8. The method as claimed in claim 1, wherein a motion vector of a reference block is scaled to generate a motion vector of a sub-PU when POCs of reference pictures are different between the sub-PU and the reference block.
9. The method as claimed in claim 1, wherein each sub-PU derives an associated
DV, or a unified derived DV is shared for all sub-PUs in the current PU.
10. The method as claimed in claim 9, wherein the derived DV is signalled explicitly to a decoder or is derived implicitly by the decoder.
11. The method as claimed in claim 9, wherein the derived DV is selected from a plurality of DV candidates and selection information is signalled explicitly to a decoder or is derived implicitly by the decoder.
12. The method as claimed in claim 1, wherein the candidate motion or disparity parameters for one PU are composed of candidate motion or disparity parameters of all the sub-PUs derived.
13. The method as claimed in claim 12, wherein the candidate motion or disparity parameters for one PU are used as a specific inter-view merge candidate of the PU in merge mode.
14. The method as claimed in claim 13, wherein the specific inter-view merge candidate is inserted in a first position of a candidate list.
15. The method as claimed in claim 13, wherein one PU has more than one specific inter-view merge candidate according to different sub-PU sizes, and each of the specific inter-view merge candidates is allowed to have a sub-PU partition or no sub-PU partition.
16. The method as claimed in claim 1, wherein the sub-PU size is selected from 4x4, 8x8, 16x16, or other sizes, or is equal to the size of the PU, or is different between sub-PUs.
17. The method as claimed in claim 16, wherein a current processed PU is not further divided when the sub-PU size is larger than or equal to the size of PU.
18. The method as claimed in claim 16, wherein a flag is signaled to indicate the sub-PU size, whether the PU is divided into sub-PUs or not, the partition level, or a quadtree/split depth for sub-PU partition.
19. The method as claimed in claim 18, wherein the flag is explicitly transmitted at the sequence, view, picture, or slice level, or in the slice header.
20. The method as claimed in claim 18, wherein the flag is implicitly derived on the decoder side.
21. The method as claimed in claim 20, wherein the flag is implicitly derived according to mode selections, motion parameters of neighboring PUs, or motion parameters of the reference blocks of the sub-PUs.
Priority Applications (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2013/079287 WO2015003383A1 (en) | 2013-07-12 | 2013-07-12 | Methods for inter-view motion prediction |
CN201480028930.6A CN105247858A (en) | 2013-07-12 | 2014-07-10 | Method of sub-prediction unit inter-view motion predition in 3d video coding |
AU2014289739A AU2014289739A1 (en) | 2013-07-12 | 2014-07-10 | Method of sub-prediction unit inter-view motion predition in 3D video coding |
EP14822485.0A EP2997730A4 (en) | 2013-07-12 | 2014-07-10 | Method of sub-prediction unit inter-view motion predition in 3d video coding |
PCT/CN2014/081931 WO2015003635A1 (en) | 2013-07-12 | 2014-07-10 | Method of sub-prediction unit inter-view motion predition in 3d video coding |
US14/891,822 US10165252B2 (en) | 2013-07-12 | 2014-07-10 | Method of sub-prediction unit inter-view motion prediction in 3D video coding |
AU2017202368A AU2017202368A1 (en) | 2013-07-12 | 2017-04-11 | Method of sub-prediction unit inter-view motion prediction in 3d video coding |
US16/171,149 US10587859B2 (en) | 2013-07-12 | 2018-10-25 | Method of sub-predication unit inter-view motion prediction in 3D video coding |
AU2019201221A AU2019201221A1 (en) | 2013-07-12 | 2019-02-21 | Method of sub-prediction unit inter-view motion prediction in 3D video coding |
AU2021201240A AU2021201240B2 (en) | 2013-07-12 | 2021-02-25 | Method of sub-prediction unit inter-view motion prediction in 3D video coding |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2013/079287 WO2015003383A1 (en) | 2013-07-12 | 2013-07-12 | Methods for inter-view motion prediction |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/891,822 Continuation US10165252B2 (en) | 2013-07-12 | 2014-07-10 | Method of sub-prediction unit inter-view motion prediction in 3D video coding |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015003383A1 true WO2015003383A1 (en) | 2015-01-15 |
Family
ID=52279329
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2013/079287 WO2015003383A1 (en) | 2013-07-12 | 2013-07-12 | Methods for inter-view motion prediction |
PCT/CN2014/081931 WO2015003635A1 (en) | 2013-07-12 | 2014-07-10 | Method of sub-prediction unit inter-view motion predition in 3d video coding |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2014/081931 WO2015003635A1 (en) | 2013-07-12 | 2014-07-10 | Method of sub-prediction unit inter-view motion predition in 3d video coding |
Country Status (4)
Country | Link |
---|---|
US (2) | US10165252B2 (en) |
EP (1) | EP2997730A4 (en) |
AU (4) | AU2014289739A1 (en) |
WO (2) | WO2015003383A1 (en) |
Families Citing this family (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10009618B2 (en) * | 2013-07-12 | 2018-06-26 | Samsung Electronics Co., Ltd. | Video encoding method and apparatus therefor using modification vector inducement, video decoding method and apparatus therefor |
KR102216128B1 (en) * | 2013-07-24 | 2021-02-16 | 삼성전자주식회사 | Method and apparatus for determining motion vector |
US9948915B2 (en) * | 2013-07-24 | 2018-04-17 | Qualcomm Incorporated | Sub-PU motion prediction for texture and depth coding |
EP3028466B1 (en) * | 2013-07-24 | 2022-01-26 | Qualcomm Incorporated | Simplified advanced motion prediction for 3d-hevc |
ES2781561T3 (en) * | 2013-10-18 | 2020-09-03 | Lg Electronics Inc | Method that predicts synthesis of views in multi-view video encoding and method of constituting a list of fusion candidates by using it |
CN110381317B (en) | 2014-01-03 | 2022-12-02 | 庆熙大学校产学协力团 | Method and apparatus for deriving motion information between time points of sub-prediction units |
CN106105212A (en) | 2014-03-07 | 2016-11-09 | 高通股份有限公司 | Sub-predicting unit (SUB PU) kinematic parameter simplifying inherits (MPI) |
JP2017520994A (en) * | 2014-06-20 | 2017-07-27 | 寰發股份有限公司 HFI Innovation Inc. | Sub-PU syntax signaling and illumination compensation method for 3D and multi-view video coding |
US10070130B2 (en) * | 2015-01-30 | 2018-09-04 | Qualcomm Incorporated | Flexible partitioning of prediction units |
KR102385396B1 (en) * | 2016-10-11 | 2022-04-11 | 엘지전자 주식회사 | Video decoding method and apparatus according to intra prediction in video coding system |
WO2019059575A2 (en) * | 2017-09-19 | 2019-03-28 | 삼성전자주식회사 | Method for encoding and decoding motion information, and apparatus for encoding and decoding motion information |
EP3725074A1 (en) | 2017-12-14 | 2020-10-21 | InterDigital VC Holdings, Inc. | Texture-based partitioning decisions for video compression |
CN111010571B (en) | 2018-10-08 | 2023-05-16 | 北京字节跳动网络技术有限公司 | Generation and use of combined affine Merge candidates |
WO2020084476A1 (en) | 2018-10-22 | 2020-04-30 | Beijing Bytedance Network Technology Co., Ltd. | Sub-block based prediction |
WO2020084554A1 (en) | 2018-10-24 | 2020-04-30 | Beijing Bytedance Network Technology Co., Ltd. | Searching based motion candidate derivation for sub-block motion vector prediction |
CN111630865B (en) | 2018-11-12 | 2023-06-27 | 北京字节跳动网络技术有限公司 | Line buffer reduction for generalized bi-prediction mode |
JP7241870B2 (en) | 2018-11-20 | 2023-03-17 | 北京字節跳動網絡技術有限公司 | Difference calculation based on partial position |
CN113170097B (en) | 2018-11-20 | 2024-04-09 | 北京字节跳动网络技术有限公司 | Encoding and decoding of video encoding and decoding modes |
JP7319365B2 (en) | 2018-11-22 | 2023-08-01 | 北京字節跳動網絡技術有限公司 | Adjustment method for inter-prediction based on sub-blocks |
KR102635518B1 (en) | 2019-03-06 | 2024-02-07 | 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 | Use of converted single prediction candidates |
CN113574880B (en) | 2019-03-13 | 2023-04-07 | 北京字节跳动网络技术有限公司 | Partitioning with respect to sub-block transform modes |
CN113647099B (en) | 2019-04-02 | 2022-10-04 | 北京字节跳动网络技术有限公司 | Decoder-side motion vector derivation |
KR102701594B1 (en) | 2019-05-21 | 2024-08-30 | 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 | Syntax signaling in subblock merge mode |
KR102627821B1 (en) | 2019-06-04 | 2024-01-23 | 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 | Construction of motion candidate list using neighboring block information |
CN114097228B (en) | 2019-06-04 | 2023-12-15 | 北京字节跳动网络技术有限公司 | Motion candidate list with geometric partition mode coding |
JP7460661B2 (en) | 2019-06-06 | 2024-04-02 | 北京字節跳動網絡技術有限公司 | Structure of motion candidate list for video encoding |
WO2021008514A1 (en) | 2019-07-14 | 2021-01-21 | Beijing Bytedance Network Technology Co., Ltd. | Indication of adaptive loop filtering in adaptation parameter set |
CN114270831B (en) | 2019-08-10 | 2024-07-30 | 北京字节跳动网络技术有限公司 | Sub-picture size definition in video processing |
CN114208184A (en) | 2019-08-13 | 2022-03-18 | 北京字节跳动网络技术有限公司 | Motion accuracy in sub-block based inter prediction |
WO2021052504A1 (en) | 2019-09-22 | 2021-03-25 | Beijing Bytedance Network Technology Co., Ltd. | Scaling method for sub-block based inter prediction |
WO2021057996A1 (en) | 2019-09-28 | 2021-04-01 | Beijing Bytedance Network Technology Co., Ltd. | Geometric partitioning mode in video coding |
EP4333431A1 (en) | 2019-10-18 | 2024-03-06 | Beijing Bytedance Network Technology Co., Ltd. | Syntax constraints in parameter set signaling of subpictures |
US12133019B2 (en) | 2020-09-17 | 2024-10-29 | Lemon Inc. | Subpicture track referencing and processing |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101222639A (en) * | 2007-01-09 | 2008-07-16 | 华为技术有限公司 | Inter-view prediction method, encoder and decoder of multi-viewpoint video technology |
WO2012171442A1 (en) * | 2011-06-15 | 2012-12-20 | Mediatek Inc. | Method and apparatus of motion and disparity vector prediction and compensation for 3d video coding |
EP2579592A1 (en) * | 2011-10-04 | 2013-04-10 | Thomson Licensing | Method and device for inter-view-predictive encoding of data of a view, device for decoding and computer-readable storage medium carrying encoded data of a view |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101090764B1 (en) * | 2008-11-20 | 2011-12-08 | 주식회사 넷앤티비 | Apparatus and method for playback of contents based on scene description |
JP2011146980A (en) | 2010-01-15 | 2011-07-28 | Sony Corp | Image processor and image processing method |
US9565449B2 (en) | 2011-03-10 | 2017-02-07 | Qualcomm Incorporated | Coding multiview video plus depth content |
KR20140057373A (en) * | 2011-08-30 | 2014-05-12 | 노키아 코포레이션 | An apparatus, a method and a computer program for video coding and decoding |
CN103907346B (en) | 2011-10-11 | 2017-05-24 | 联发科技股份有限公司 | Motion vector predictor and method and apparatus for disparity vector derivation |
WO2013068548A2 (en) * | 2011-11-11 | 2013-05-16 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Efficient multi-view coding using depth-map estimate for a dependent view |
US9525861B2 (en) * | 2012-03-14 | 2016-12-20 | Qualcomm Incorporated | Disparity vector prediction in video coding |
CN102790892B (en) * | 2012-07-05 | 2014-06-11 | 清华大学 | Depth map coding method and device |
2013
- 2013-07-12 WO PCT/CN2013/079287 patent/WO2015003383A1/en active Application Filing
2014
- 2014-07-10 WO PCT/CN2014/081931 patent/WO2015003635A1/en active Application Filing
- 2014-07-10 US US14/891,822 patent/US10165252B2/en active Active
- 2014-07-10 AU AU2014289739A patent/AU2014289739A1/en not_active Abandoned
- 2014-07-10 EP EP14822485.0A patent/EP2997730A4/en not_active Ceased
2017
- 2017-04-11 AU AU2017202368A patent/AU2017202368A1/en not_active Abandoned
2018
- 2018-10-25 US US16/171,149 patent/US10587859B2/en active Active
2019
- 2019-02-21 AU AU2019201221A patent/AU2019201221A1/en not_active Abandoned
2021
- 2021-02-25 AU AU2021201240A patent/AU2021201240B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US10587859B2 (en) | 2020-03-10 |
WO2015003635A1 (en) | 2015-01-15 |
AU2019201221A1 (en) | 2019-03-14 |
EP2997730A4 (en) | 2016-11-16 |
AU2014289739A1 (en) | 2015-11-05 |
AU2021201240A1 (en) | 2021-03-11 |
AU2021201240B2 (en) | 2022-09-08 |
EP2997730A1 (en) | 2016-03-23 |
US20160134857A1 (en) | 2016-05-12 |
AU2017202368A1 (en) | 2017-05-04 |
US10165252B2 (en) | 2018-12-25 |
US20190068948A1 (en) | 2019-02-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015003383A1 (en) | Methods for inter-view motion prediction | |
JP6472877B2 (en) | Method for 3D or multi-view video encoding including view synthesis prediction | |
KR101706309B1 (en) | Method and apparatus of inter-view candidate derivation for three-dimensional video coding | |
WO2015062002A1 (en) | Methods for sub-pu level prediction | |
US10230937B2 (en) | Method of deriving default disparity vector in 3D and multiview video coding | |
US9992494B2 (en) | Method of depth based block partitioning | |
WO2015109598A1 (en) | Methods for motion parameter hole filling | |
CA2904424C (en) | Method and apparatus of camera parameter signaling in 3d video coding | |
WO2014166068A1 (en) | Refinement of view synthesis prediction for 3-d video coding | |
WO2014005280A1 (en) | Method and apparatus to improve and simplify inter-view motion vector prediction and disparity vector prediction | |
WO2015100710A1 (en) | Existence of inter-view reference picture and availability of 3dvc coding tools | |
JP5986657B2 (en) | Simplified depth-based block division method | |
EP2984821A1 (en) | Method and apparatus of compatible depth dependent coding | |
WO2015006922A1 (en) | Methods for residual prediction | |
CN105247858A (en) | Method of sub-prediction unit inter-view motion predition in 3d video coding | |
WO2013159326A1 (en) | Inter-view motion prediction in 3d video coding | |
WO2015055143A1 (en) | Method of motion information prediction and inheritance in multi-view and three-dimensional video coding | |
WO2014106327A1 (en) | Method and apparatus for inter-view residual prediction in multiview video coding | |
WO2014023024A1 (en) | Methods for disparity vector derivation | |
WO2015103747A1 (en) | Motion parameter hole filling | |
WO2015113245A1 (en) | Methods for merging candidates list construction | |
WO2015006924A1 (en) | An additional texture merging candidate | |
WO2014166096A1 (en) | Reference view derivation for inter-view motion prediction and inter-view residual prediction | |
WO2015006900A1 (en) | A disparity derived depth coding method | |
WO2015100712A1 (en) | The method to perform the deblocking on sub-pu edge |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13889091; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 13889091; Country of ref document: EP; Kind code of ref document: A1 |