A Scalable Active Framework for Region Annotation in 3D Shape Collections
SIGGRAPH Asia 2016
Li Yi, Vladimir G. Kim, Duygu Ceylan, I-Chao Shen, Mengyuan Yan, Hao Su, Cewu Lu, Qixing Huang, Alla Sheffer, Leonidas Guibas
Figure 1: We use our method to create detailed per-point labelings of 31,963 models in 16 shape categories in ShapeNetCore.
Video
Abstract

Large repositories of 3D shapes provide valuable input for data-driven analysis and modeling tools. They are especially powerful once annotated with semantic information such as salient regions and functional parts. We propose a novel active learning method capable of enriching massive geometric datasets with accurate semantic region annotations. Given a shape collection and a user-specified region label, our goal is to correctly demarcate the corresponding regions with minimal manual work. Our active framework achieves this goal by cycling between manually annotating the regions, automatically propagating these annotations across the rest of the shapes, manually verifying both human and automatic annotations, and learning from the verification results to improve the automatic propagation algorithm. We use a unified utility function that explicitly models the time cost of human input across all steps of our method. This allows us to jointly optimize for the set of models to annotate and for the set of models to verify based on the predicted impact of these actions on human efficiency. We demonstrate that incorporating verification of all produced labelings within this unified objective improves both accuracy and efficiency of the active learning procedure. We automatically propagate human labels across a dynamic shape network using a conditional random field (CRF) framework, taking advantage of global shape-to-shape similarities, local feature similarities, and point-to-point correspondences. By combining these diverse cues we achieve higher accuracy than existing alternatives. We validate our framework on existing benchmarks, demonstrating it to be significantly more efficient at using human input compared to previous techniques. We further validate its efficiency and robustness by annotating a massive shape dataset, labeling over 93,000 shape parts across multiple model classes, and providing a labeled part collection more than one order of magnitude larger than existing ones.
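The propagation step is formulated as inference in a conditional random field over the points of the shape network. As a rough sketch only (the notation below is illustrative and not the paper's exact formulation), such an energy combines a unary term driven by local feature similarity with pairwise terms over within-shape neighbors and cross-shape point-to-point correspondences, the latter weighted by global shape-to-shape similarity:

E(x) = \sum_{p} \phi_{\mathrm{feat}}(x_p)
     + \lambda_1 \sum_{(p,q) \in \mathcal{E}_{\mathrm{intra}}} \psi(x_p, x_q)
     + \lambda_2 \sum_{(p,q') \in \mathcal{C}_{S,S'}} w(S, S')\, \psi(x_p, x_{q'})

Here x_p is the region label of point p, \mathcal{E}_{\mathrm{intra}} connects nearby points within a shape, \mathcal{C}_{S,S'} is the set of point-to-point correspondences between shapes S and S', and w(S, S') is a global shape-to-shape similarity weight; labels are obtained by approximate MAP inference over the collection.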
Paper
PDF (23.2MB) | PDF (6.8MB) | Supplemental
Pipeline
Figure 2: This figure summarizes our pipeline. Given the input dataset, we select an annotation set and use our UI to obtain human labels. We automatically propagate these labels to the rest of the shapes and then query the users to verify the most confident propagations. We then use these verifications to improve our propagation technique.
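The following is a toy, runnable sketch of this annotate-propagate-verify cycle, not the authors' implementation: shapes are random feature vectors, "propagation" is a 1-nearest-neighbor vote standing in for the CRF, and human annotation and verification are simulated by revealing ground-truth labels. All names and parameters are placeholders chosen for illustration.

import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_labels = 200, 4
true_labels = rng.integers(0, n_labels, n_shapes)
# Toy "shape descriptors": label-dependent Gaussians.
features = rng.normal(size=(n_shapes, 8)) + true_labels[:, None]

labeled = {}                                  # shape index -> trusted label

def propagate(labeled):
    """Propagate trusted labels to all shapes via 1-NN in descriptor space."""
    idx = np.array(list(labeled.keys()))
    lab = np.array(list(labeled.values()))
    d = np.linalg.norm(features[:, None, :] - features[None, idx, :], axis=2)
    nearest = d.argmin(axis=1)
    conf = 1.0 / (1.0 + d.min(axis=1))        # crude confidence proxy
    return lab[nearest], conf

for round_ in range(5):
    # 1. Annotate: "ask a human" about a few currently unlabeled shapes.
    unlabeled = [i for i in range(n_shapes) if i not in labeled]
    for i in rng.choice(unlabeled, size=5, replace=False):
        labeled[i] = true_labels[i]           # simulated human annotation

    # 2. Propagate labels to the rest of the collection.
    pred, conf = propagate(labeled)

    # 3. Verify: check the most confident propagations; accept or correct them.
    for i in np.argsort(-conf)[:10]:
        if i not in labeled:
            labeled[i] = true_labels[i]       # simulated human verification

    # 4. "Learning" here is simply trusting verified labels in the next round.
    acc = (pred == true_labels).mean()
    print(f"round {round_}: {len(labeled)} trusted labels, accuracy {acc:.2f}")

In the actual framework, steps 1 and 3 are chosen jointly by optimizing the unified utility function over predicted human time cost, and step 4 updates the CRF-based propagation model rather than merely enlarging the trusted set.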
Results
Figure 3: We use our method to get part labels for more than 30,000 models in 16 shape categories in ShapeNetCore. We denote the number of models in each category in parentheses.
Code and Data

Code: TBA

Data: An official release of our part annotation data will come with the next release of ShapeNet, but you can get a pre-release version here (1.08GB). Note that you may also want to download the corresponding 3D models from ShapeNet. Thanks to Kalogerakis et al., a per-face part labeling for ShapeNetCore meshes can be downloaded here.

News: We have re-organized our data so that it can be used more conveniently as a segmentation benchmark. Only expert-verified segmentations are included, and the official training/test split from ShapeNet is also provided. Please download the data here (635MB).
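As a convenience, a minimal loader for per-point part labels might look like the sketch below. The directory layout and file format assumed here (one whitespace-separated xyz point per line, with a parallel file holding one integer part id per line) are assumptions for illustration only; consult the documentation inside the released archive for the actual organization.

import numpy as np

def load_point_labels(points_path, labels_path):
    # ASSUMED format: text files with one point / one label per line.
    points = np.loadtxt(points_path)                 # (N, 3) xyz coordinates
    labels = np.loadtxt(labels_path, dtype=int)      # (N,) per-point part ids
    assert len(points) == len(labels), "points and labels must align"
    return points, labels

# Example usage (paths are hypothetical placeholders):
# pts, seg = load_point_labels("Airplane/points/xxxx.pts",
#                              "Airplane/points_label/xxxx.seg")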
BibTeX

@article{Yi16,
  Author  = {Li Yi and Vladimir G. Kim and Duygu Ceylan and I-Chao Shen and Mengyuan Yan and Hao Su and Cewu Lu and Qixing Huang and Alla Sheffer and Leonidas Guibas},
  Journal = {SIGGRAPH Asia},
  Title   = {A Scalable Active Framework for Region Annotation in 3D Shape Collections},
  Year    = {2016}
}