CN111339870B - Human body shape and posture estimation method for object occlusion scene - Google Patents


Info

Publication number
CN111339870B
CN111339870B (application CN202010099358.XA)
Authority
CN
China
Prior art keywords
human body
dimensional
image
network
occlusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010099358.XA
Other languages
Chinese (zh)
Other versions
CN111339870A
Inventor
王雁刚 (Wang Yangang)
黄步真 (Huang Buzhen)
张天舒 (Zhang Tianshu)
彭聪 (Peng Cong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202010099358.XA priority Critical patent/CN111339870B/en
Publication of CN111339870A publication Critical patent/CN111339870A/en
Application granted granted Critical
Publication of CN111339870B publication Critical patent/CN111339870B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a human body shape and posture estimation method for object occlusion scenes. The method converts the human body model into camera coordinates using computed weak-perspective projection parameters and obtains a UV map containing the human body shape information without occlusion; adds random object pictures to the two-dimensional human body image to create occlusion and obtains the human body mask under occlusion; trains a UV-map repair network with an encoder-decoder structure on the generated virtual occlusion data; takes real object-occluded human body color images as input and mask images as ground truth to construct a saliency detection network with an encoder-decoder structure; supervises the training of the human body encoding network with encoded latent-space features; inputs an occluded human body color image to obtain a complete UV map; and recovers the three-dimensional human body model under occlusion through the correspondence between the UV map and the vertices of the three-dimensional human body model. The invention converts occluded human shape estimation into an image inpainting problem on a two-dimensional UV map, achieving real-time, dynamic reconstruction of the human body in occluded scenes.

Description

Human body shape and posture estimation method for object occlusion scene
Technical Field
The invention belongs to the fields of computer vision and three-dimensional vision, and particularly relates to a human body shape and posture estimation method for object occlusion scenes.
Background
Estimating the shape and posture of a three-dimensional human body from a single image has been a research hotspot in the field of three-dimensional vision in recent years. It plays an important role in virtual reality applications such as human motion capture, virtual fitting, and human animation. Deep learning has simplified the recovery of the overall human body shape from a single image; in particular, after the SMPL model was proposed and widely adopted, monocular three-dimensional human shape and posture estimation developed rapidly through several stages: (1) optimizing the SMPL parameters by matching two-dimensional visual features; (2) directly regressing the SMPL parameters with a convolutional neural network (CNN); (3) representing the three-dimensional points on the SMPL surface with a two-dimensional UV map, thereby converting three-dimensional human shape estimation into a CNN-based image translation problem. Thanks to their accuracy and computational efficiency, deep neural networks have become the mainstream approach to three-dimensional human shape estimation and can produce good reconstructions in specific scenes. However, most existing methods do not consider the common phenomenon of occlusion between people and objects. Without explicitly modeling occlusion, such methods cannot be transferred directly to human body estimation in occluded scenes; they are very sensitive to occlusion, even slight object occlusion, and therefore struggle to meet real-world requirements.
Estimating the three-dimensional shape and posture of a human body in an occluded scene has long been a difficulty in the field, mainly for two reasons: (1) object occlusion introduces severe ambiguity into network training and sharply reduces the directly usable image features, degrading the overall three-dimensional human shape estimate; (2) because occluding objects are ubiquitous and random, it is hard for a network to segment human and occluder pixels in the image accurately, which disturbs the reconstruction result.
Disclosure of Invention
Objective: aiming at the problem of human shape and posture estimation in occluded scenes, the invention provides a human body shape and posture estimation method for object occlusion scenes, which converts occluded human shape estimation into an image inpainting problem on a two-dimensional UV map, so as to achieve real-time, dynamic reconstruction of the human body in occluded scenes.
Technical solution: the human body shape and posture estimation method for object occlusion scenes of the invention comprises the following steps:
(1) in the data preparation stage, computing weak-perspective projection parameters from the correspondence between the three-dimensional and two-dimensional human body joint points of a three-dimensional human body dataset;
(2) converting the three-dimensional human body model into camera coordinates through three-dimensional rotation and translation according to the computed weak-perspective projection parameters;
(3) normalizing the x, y, and z coordinates of the model vertices under camera coordinates to the range [-0.5, 0.5] and storing them in the R, G, and B channels of a UV map, obtaining a UV map containing the human body shape information without occlusion;
(4) adding random object pictures to the two-dimensional human body image to create occlusion, and obtaining the human body mask under occlusion;
(5) repeating step (3), with three-dimensional points that fall outside the mask area after weak-perspective projection treated as visually occluded points whose x, y, and z coordinates are fixed to -0.5, obtaining the corresponding occluded UV map;
(6) in the training stage, training a UV-map repair network with an encoder-decoder structure on the virtual occlusion data obtained in steps (1)-(5); the repair network is constrained by the L1 loss against the complete human body UV map, a Laplacian smoothing term between adjacent pixels, and UV junction consistency;
(7) taking real object-occluded human body color images as input and mask images as ground truth, constructing a saliency detection network with an encoder-decoder structure;
(8) concatenating the occluded human body color picture with the saliency map and feeding it into the human body encoding network, while encoding the corresponding occluded UV map with the repair network trained in step (6), and supervising the training of the human body encoding network with the encoded latent-space features;
(9) in the testing stage, inputting an occluded human body color image, passing it through the saliency detection network and the human body encoding network, and decoding the latent-space features produced by the human body encoding network with the decoder of the repair network to obtain a complete UV map;
(10) recovering the three-dimensional human body model under occlusion through the correspondence between the UV map and the vertices of the three-dimensional human body model.
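Steps (3) and (5) above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's implementation: the UV resolution, the bounding-box normalization, the per-vertex UV coordinate table, and the function name are all assumptions.

```python
import numpy as np

def vertices_to_uv_map(vertices, uv_coords, uv_size=64, visible=None):
    """Encode 3D vertex positions into the R, G, B channels of a UV map.

    vertices : (N, 3) camera-space vertex coordinates
    uv_coords: (N, 2) per-vertex UV coordinates in [0, 1] (assumed given)
    visible  : optional (N,) boolean mask; occluded vertices are written
               as -0.5, as in step (5)
    """
    # Step (3): normalize x, y, z into [-0.5, 0.5] per axis.
    vmin = vertices.min(axis=0)
    vmax = vertices.max(axis=0)
    normed = (vertices - vmin) / (vmax - vmin) - 0.5

    if visible is not None:
        normed[~visible] = -0.5  # step (5): occluded points fixed to -0.5

    # Background pixels of the UV map also hold -0.5.
    uv_map = np.full((uv_size, uv_size, 3), -0.5, dtype=np.float32)
    px = np.clip((uv_coords * (uv_size - 1)).astype(int), 0, uv_size - 1)
    uv_map[px[:, 1], px[:, 0]] = normed  # R, G, B <- x, y, z
    return uv_map
```

The map can then be treated as an ordinary 3-channel image by the repair network, which is the point of the representation.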
Further, the UV map repair network of step (6) uses ResNet as an encoder and stacked deconvolution layers as a decoder.
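As a rough illustration of why stacked deconvolution (transposed-convolution) layers form a natural decoder here: with kernel 4, stride 2, and padding 1, each layer doubles the spatial resolution, so a small latent grid grows back to UV-map resolution. The concrete sizes below are assumptions for illustration, not values from the patent.

```python
def deconv_output_size(in_size, kernel=4, stride=2, padding=1):
    """Spatial output size of one transposed-convolution layer."""
    return (in_size - 1) * stride - 2 * padding + kernel

# Hypothetical ladder: a ResNet encoder reduces a 256x256 input to an
# 8x8 latent grid; five stacked deconvolution layers double it back.
size = 8
sizes = []
for _ in range(5):
    size = deconv_output_size(size)
    sizes.append(size)
print(sizes)  # [16, 32, 64, 128, 256]
```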
Further, the step (6) is realized by the following formula:
L = L1 + λ·L_tv + μ·L_p
where λ and μ are weights, L_tv is the Laplacian smoothing term, and L_p is the UV junction consistency constraint:
[formula image not reproduced in source: definition of the UV junction consistency constraint L_p]
where V_b is the set of model vertices that correspond to multiple UV pixels, and p(v) is the UV pixel value corresponding to model vertex v.
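A toy NumPy version of this combined loss may clarify how the three terms interact. The source does not reproduce the exact form of L_p, so the junction term below, the variance of the UV pixel values that share one mesh vertex, is an assumption; the weights and pixel-group representation are likewise illustrative.

```python
import numpy as np

def repair_loss(pred, target, seam_groups, lam=1.0, mu=1.0):
    """Total repair-network loss: L1 term + smoothness (L_tv)
    + UV junction consistency (L_p, assumed form).

    pred, target: (H, W, 3) UV maps
    seam_groups : list of (rows, cols) index arrays; each pair holds the
                  positions of UV pixels mapping to the same mesh vertex
    """
    l1 = np.abs(pred - target).mean()
    # Smoothness between adjacent pixels via finite differences.
    ltv = (np.abs(np.diff(pred, axis=0)).mean()
           + np.abs(np.diff(pred, axis=1)).mean())
    # Assumed L_p: penalize disagreement among pixels of one shared vertex.
    lp = 0.0
    for rows, cols in seam_groups:
        vals = pred[rows, cols]
        lp += ((vals - vals.mean(axis=0)) ** 2).mean()
    return l1 + lam * ltv + mu * lp
```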
Further, the human body coding network of the step (8) uses a VGG-19 structure.
Further, the color image in step (9) is a preprocessed human body occlusion image obtained from a monocular color camera.
Beneficial effects: compared with the prior art, the invention has the following advantages: 1. a large amount of virtual occlusion data is used to train the image repair network, giving the whole framework better robustness to various occlusions; 2. saliency detection reduces the interference of invalid image features such as occluders and background on reconstruction, strengthens robustness at the boundary between the human body and the occluder in the image, and avoids the problem of inaccurate segmentation; 3. the latent-space consistency method converts three-dimensional human shape estimation into an image inpainting problem, reducing the complexity of the solution; 4. the proposed UV junction consistency constraint improves the smoothness of reconstruction results in UV-map-based human body reconstruction.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of human body information UV map generation;
FIG. 3 is a UV map of human body shape information;
FIG. 4 is a schematic view of a three-dimensional model of a human body;
FIG. 5 is a diagram of a saliency detection network structure;
fig. 6 is a schematic diagram of the reconstruction result of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings. As shown in fig. 1, the method for estimating human body shape and posture in an object occlusion scene according to the present invention proceeds as follows:
as shown in fig. 2, the generation method of the human body information UV map is as follows: in the data preparation stage, firstly, the projection relation between the human body three-dimensional model joint point and the two-dimensional joint point in the three-dimensional human body data set is utilized to calculate weak perspective projection parameters, the human body model is converted into camera coordinates through operations such as three-dimensional translation and rotation, the vertex x, y and z coordinates of the human body three-dimensional model in the camera coordinates are normalized to be [ -0.5 and 0.5] and are stored in three channels of R, G and B of the UV map, and therefore the UV map containing human body shape information under the condition of no shielding as shown in figure 3 is obtained. In order to obtain a UV image of a shielded human body, a random object image is added into the two-dimensional image of the human body for shielding, and a human body mask under the shielding condition is obtained. And carrying out weak perspective projection on the human body three-dimensional model to a human body mask through the projection parameters. The three-dimensional points falling outside the mask area are three-dimensional points under the visual occlusion, the x, y, z coordinates are fixed to-0.5, and the three-dimensional coordinates of the vertices still stored in the mask area, thereby obtaining the corresponding UV map under the occlusion as shown in FIG. 4. Since both the occlusion UV map and the complete UV map in this step are independent of the background of the color image, a large amount of occlusion UV data can be generated by using virtual occlusion, thereby enhancing the robustness of the network.
The obtained large set of occluded and complete UV maps is used to train an image repair network with ResNet-50 as the encoder and stacked deconvolution layers as the decoder. The network encodes an occluded UV map into high-dimensional human body features and decodes a complete human-shape UV map from those features. The network is constrained by the L1 loss against the complete human body UV map, the Laplacian smoothing term between adjacent pixels, and the consistency of the UV junctions.
The specific formula is:
L = L1 + λ·L_tv + μ·L_p
where λ and μ are weights, L_tv is the Laplacian smoothing term, and L_p is the UV junction consistency constraint:
[formula image not reproduced in source: definition of the UV junction consistency constraint L_p]
where V_b is the set of model vertices that correspond to multiple UV pixels, and p(v) is the UV pixel value corresponding to model vertex v. This constraint enables the various parts of the UV map shown in fig. 3 to join smoothly.
A real object-occluded human body color image is taken as input and the mask image as ground truth to construct a saliency detection network with an encoder-decoder structure; feeding the occluded image through the saliency detection network shown in fig. 5 yields the human body saliency map. The occluded human body color image is concatenated with its saliency map and fed into the human body encoding network, while the corresponding occluded UV map is encoded with the trained repair network, and the resulting latent-space features supervise the training of the human body encoding network. The human body encoding network uses VGG-19 as its basic structure. For each color image, the high-dimensional features that the encoder of the image repair network produces from the corresponding occluded UV map serve as supervision for the human body encoding network. Meanwhile, as shown in fig. 5, human body masks at different scales supervise the saliency network, and the two networks are trained end to end.
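The latent-space supervision above amounts to pulling the image encoder's feature vector toward the repair encoder's feature vector for the same sample. A minimal sketch follows; the patent does not state which distance is used, so the mean squared error here is an assumption.

```python
import numpy as np

def latent_consistency_loss(f_body, f_repair):
    """Supervise the human (image) encoder with the latent features of the
    already-trained UV-repair encoder: mean squared distance between the
    two feature vectors (assumed form of the consistency term).
    """
    f_body = np.asarray(f_body, dtype=np.float64)
    f_repair = np.asarray(f_repair, dtype=np.float64)
    return ((f_body - f_repair) ** 2).mean()
```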
After training is finished, a human body occlusion image is obtained directly from a monocular color camera and preprocessed by cropping and scaling. The preprocessed color image is input into the network and passes through the saliency detection network and the human body encoding network to produce high-dimensional human body features. These latent-space features are then decoded by the decoder of the image repair network to obtain the complete UV map. Through the correspondence between the UV map and the three-dimensional human body model, a three-dimensional human body model of the corresponding shape is recovered directly from the human-shape UV map. The reconstruction of an occluded human color image by this method is shown in fig. 6.
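Reading vertices back out of the completed UV map is the inverse of the encoding step. A minimal NumPy sketch, assuming the same bounding-box normalization used when the map was written; the names, resolution, and nearest-pixel sampling are illustrative.

```python
import numpy as np

def uv_map_to_vertices(uv_map, uv_coords, vmin, vmax):
    """Recover camera-space vertices from a completed UV shape map.

    uv_map   : (S, S, 3) UV map with x, y, z stored in [-0.5, 0.5]
    uv_coords: (N, 2) per-vertex UV coordinates in [0, 1]
    vmin, vmax: (3,) normalization bounds used when the map was built
    """
    size = uv_map.shape[0]
    px = np.clip((uv_coords * (size - 1)).astype(int), 0, size - 1)
    normed = uv_map[px[:, 1], px[:, 0]]           # sample R, G, B = x, y, z
    return (normed + 0.5) * (vmax - vmin) + vmin  # undo the normalization
```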

Claims (5)

1. A human body shape and posture estimation method for an object occlusion scene, characterized by comprising the following steps:
(1) in the data preparation stage, computing weak-perspective projection parameters from the correspondence between the three-dimensional and two-dimensional human body joint points of a three-dimensional human body dataset;
(2) converting the three-dimensional human body model into camera coordinates through three-dimensional rotation and translation according to the computed weak-perspective projection parameters;
(3) normalizing the x, y, and z coordinates of the model vertices under camera coordinates to the range [-0.5, 0.5] and storing them in the R, G, and B channels of a UV map, obtaining a UV map containing the human body shape information without occlusion;
(4) adding random object pictures to the two-dimensional human body image to create occlusion, and obtaining the human body mask under occlusion;
(5) repeating step (3), with three-dimensional points that fall outside the mask area after weak-perspective projection treated as visually occluded points whose x, y, and z coordinates are fixed to -0.5, obtaining the corresponding occluded UV map;
(6) in the training stage, training a UV-map repair network with an encoder-decoder structure on the virtual occlusion data obtained in steps (1)-(5); the repair network is constrained by the L1 loss against the complete human body UV map, a Laplacian smoothing term between adjacent pixels, and UV junction consistency;
(7) taking real object-occluded human body color images as input and mask images as ground truth, constructing a saliency detection network with an encoder-decoder structure;
(8) concatenating the occluded human body color picture with the saliency map and feeding it into the human body encoding network, while encoding the corresponding occluded UV map with the repair network trained in step (6), and supervising the training of the human body encoding network with the encoded latent-space features;
(9) in the testing stage, inputting an occluded human body color image, passing it through the saliency detection network and the human body encoding network, and decoding the latent-space features produced by the human body encoding network with the decoder of the repair network to obtain a complete UV map;
(10) recovering the three-dimensional human body model under occlusion through the correspondence between the UV map and the vertices of the three-dimensional human body model.
2. The method of claim 1, wherein the UV map repair network of step (6) uses ResNet as an encoder and stacked deconvolution layers as a decoder.
3. The human body shape and posture estimation method for an object occlusion scene according to claim 1, characterized in that step (6) is realized by the following formula:
L = L1 + λ·L_tv + μ·L_p
where λ and μ are weights, L_tv is the Laplacian smoothing term, and L_p is the UV junction consistency constraint:
[formula image not reproduced in source: definition of the UV junction consistency constraint L_p]
where V_b is the set of model vertices that correspond to multiple UV pixels, and p(v) is the UV pixel value corresponding to model vertex v.
4. The human body shape and posture estimation method for an object occlusion scene according to claim 1, characterized in that the human body encoding network of step (8) uses a VGG-19 structure.
5. The human body shape and posture estimation method for an object occlusion scene according to claim 1, characterized in that the color image of step (9) is a preprocessed human body occlusion image obtained from a monocular color camera.
CN202010099358.XA 2020-02-18 2020-02-18 Human body shape and posture estimation method for object occlusion scene Active CN111339870B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010099358.XA CN111339870B (en) 2020-02-18 2020-02-18 Human body shape and posture estimation method for object occlusion scene


Publications (2)

Publication Number Publication Date
CN111339870A CN111339870A (en) 2020-06-26
CN111339870B true CN111339870B (en) 2022-04-26

Family

ID=71185382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010099358.XA Active CN111339870B (en) 2020-02-18 2020-02-18 Human body shape and posture estimation method for object occlusion scene

Country Status (1)

Country Link
CN (1) CN111339870B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111739161B (en) * 2020-07-23 2020-11-20 之江实验室 Human body three-dimensional reconstruction method and device under shielding condition and electronic equipment
CN112530027A (en) * 2020-12-11 2021-03-19 北京奇艺世纪科技有限公司 Three-dimensional point cloud repairing method and device and electronic equipment
CN112785524B (en) * 2021-01-22 2024-05-24 北京百度网讯科技有限公司 Character image restoration method and device and electronic equipment
CN112785692B (en) * 2021-01-29 2022-11-18 东南大学 Single-view-angle multi-person human body reconstruction method based on depth UV prior
CN112819951A (en) * 2021-02-09 2021-05-18 北京工业大学 Three-dimensional human body reconstruction method with shielding function based on depth map restoration
CN112907736B (en) * 2021-03-11 2022-07-15 清华大学 Implicit field-based billion pixel scene crowd three-dimensional reconstruction method and device
CN113378980B (en) * 2021-07-02 2023-05-09 西安电子科技大学 Mask face shielding recovery method based on self-adaptive context attention mechanism
CN113538663B (en) * 2021-07-12 2022-04-05 华东师范大学 Controllable human body shape complementing method based on depth characteristic decoupling
CN113628342A (en) * 2021-09-18 2021-11-09 杭州电子科技大学 Three-dimensional human body posture and shape reconstruction method based on occlusion perception
WO2024055194A1 (en) * 2022-09-14 2024-03-21 维沃移动通信有限公司 Virtual object generation method, and codec training method and apparatus thereof

Citations (5)

Publication number Priority date Publication date Assignee Title
CN106780569A (en) * 2016-11-18 2017-05-31 深圳市唯特视科技有限公司 A kind of human body attitude estimates behavior analysis method
CN109242954A (en) * 2018-08-16 2019-01-18 叠境数字科技(上海)有限公司 Multi-view angle three-dimensional human body reconstruction method based on template deformation
CN110119679A (en) * 2019-04-02 2019-08-13 北京百度网讯科技有限公司 Object dimensional information estimating method and device, computer equipment, storage medium
CN110533721A (en) * 2019-08-27 2019-12-03 杭州师范大学 A kind of indoor objects object 6D Attitude estimation method based on enhancing self-encoding encoder
CN110633748A (en) * 2019-09-16 2019-12-31 电子科技大学 Robust automatic face fusion method

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN105005755B (en) * 2014-04-25 2019-03-29 北京邮电大学 Three-dimensional face identification method and system
TWI526992B (en) * 2015-01-21 2016-03-21 國立清華大學 Method for optimizing occlusion in augmented reality based on depth camera


Also Published As

Publication number Publication date
CN111339870A (en) 2020-06-26

Similar Documents

Publication Publication Date Title
CN111339870B (en) Human body shape and posture estimation method for object occlusion scene
CN113706699B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN107240129A (en) Object and indoor small scene based on RGB D camera datas recover and modeling method
Malciu et al. A robust model-based approach for 3d head tracking in video sequences
US11961266B2 (en) Multiview neural human prediction using implicit differentiable renderer for facial expression, body pose shape and clothes performance capture
CN112785692B (en) Single-view-angle multi-person human body reconstruction method based on depth UV prior
CN112907631B (en) Multi-RGB camera real-time human body motion capture system introducing feedback mechanism
CN111950477A (en) Single-image three-dimensional face reconstruction method based on video surveillance
CN112906675B (en) Method and system for detecting non-supervision human body key points in fixed scene
WO2023159517A1 (en) System and method of capturing three-dimensional human motion capture with lidar
Kang et al. Competitive learning of facial fitting and synthesis using uv energy
JP2024510230A (en) Multi-view neural human prediction using implicitly differentiable renderer for facial expression, body pose shape and clothing performance capture
CN114996814A (en) Furniture design system based on deep learning and three-dimensional reconstruction
CN115951784A (en) Dressing human body motion capture and generation method based on double nerve radiation fields
CN117994480A (en) Lightweight hand reconstruction and driving method
CN116071412A (en) Unsupervised monocular depth estimation method integrating full-scale and adjacent frame characteristic information
CN113920270A (en) Layout reconstruction method and system based on multi-view panorama
CN111899293B (en) Virtual and real shielding processing method in AR application
Cha et al. Self-supervised monocular depth estimation with isometric-self-sample-based learning
CN117711066A (en) Three-dimensional human body posture estimation method, device, equipment and medium
Zhang et al. Imaged-based 3D face modeling
KR102577135B1 (en) A skeleton-based dynamic point cloud estimation system for sequence compression
Jäger et al. A comparative Neural Radiance Field (NeRF) 3D analysis of camera poses from HoloLens trajectories and Structure from Motion
Han et al. Learning residual color for novel view synthesis
CN118134983B (en) Transparent object depth complement method based on double-intersection attention network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant