
Composite Active Learning:
Towards Multi-Domain Active Learning with Theoretical Guarantees

This repo contains the code for our AAAI 2024 paper:
Composite Active Learning: Towards Multi-Domain Active Learning with Theoretical Guarantees
Guang-Yuan Hao, Hengguan Huang, Haotian Wang, Jie Gao, Hao Wang
AAAI 2024
[Paper]

Outline for This README

- Brief Introduction for CAL
- Installation and Run
- Also Check Our Relevant Work
- Reference

Active learning (AL) aims to improve model performance within a fixed labeling budget by choosing the most informative data points to label. Existing AL methods focus on the single-domain setting, where all data come from the same domain (e.g., the same dataset). However, many real-world tasks involve multiple domains. For example, in visual recognition, it is often desirable to train an image classifier that works across different environments (e.g., different backgrounds), where the images from each environment constitute one domain. Such a multi-domain AL setting is challenging for prior methods because they (1) ignore the similarity among different domains when assigning labeling budgets and (2) fail to handle the distribution shift of data across different domains. In this paper, we propose the first general method, dubbed composite active learning (CAL), for multi-domain AL. Our approach explicitly considers both domain-level and instance-level information: CAL first assigns domain-level budgets according to domain-level importance, which is estimated by optimizing an upper error bound that we develop; given these domain-level budgets, CAL then leverages an instance-level query strategy to select samples to label from each domain. Our theoretical analysis shows that our method achieves a better error bound compared to current AL methods. Our empirical results demonstrate that our approach significantly outperforms state-of-the-art AL methods on both synthetic and real-world multi-domain datasets.
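To make the two-stage structure concrete, here is a minimal, self-contained sketch of the pipeline described above: domain-level budgets are allocated in proportion to (softmax-normalized) importance scores, and an instance-level query then picks the most uncertain samples within each domain. Note this is an illustrative toy, not the paper's actual method: in CAL the importance scores come from optimizing the derived error bound, and the instance-level query strategy is not necessarily entropy-based. The function names (`allocate_budgets`, `select_by_entropy`) are ours, not the repo's.

```python
import numpy as np

def allocate_budgets(importance, total_budget):
    """Split a labeling budget across domains in proportion to
    softmax-normalized domain importance scores (toy stand-in for
    the error-bound-based estimate used in the paper)."""
    w = np.exp(importance - np.max(importance))
    w = w / w.sum()
    budgets = np.floor(w * total_budget).astype(int)
    # Hand out any remainder, one label at a time, to the most important domains.
    for i in np.argsort(-w)[: total_budget - budgets.sum()]:
        budgets[i] += 1
    return budgets

def select_by_entropy(probs, k):
    """Instance-level query: pick the k most uncertain samples
    (highest predictive entropy) from one domain."""
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-ent)[:k]

# Toy example: 3 domains, total budget of 10 labels.
rng = np.random.default_rng(0)
importance = np.array([0.5, 2.0, 1.0])  # hypothetical domain-importance scores
budgets = allocate_budgets(importance, 10)
# Per-domain selection over mock predictive distributions (20 samples, 5 classes).
picked = [select_by_entropy(rng.dirichlet(np.ones(5), size=20), b)
          for b in budgets]
```

The key property the sketch preserves is that the per-domain budgets always sum to the global budget, with more important domains receiving larger shares.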

Installation and Run

```
conda env create -f environment.yml
```

We run experiments on four datasets: RotatingMNIST (10 classes), Office-Home (65 classes), ImageCLEF (12 classes), and Office-Caltech (10 classes). The corresponding run scripts are run_mnist.py, run_of.py, run_celf.py, and run_ofct.py, respectively. The data directory contains all four datasets. For Office-Home, ImageCLEF, and Office-Caltech, we first extract features from the images with a pre-trained ResNet-50 and then use these features to form the datasets.

Also Check Our Relevant Work

[1] Taxonomy-Structured Domain Adaptation
Tianyi Liu*, Zihao Xu*, Hao He, Guang-Yuan Hao, Guang-He Lee, Hao Wang
Fortieth International Conference on Machine Learning (ICML), 2023
[Paper] [OpenReview] [PPT] [Talk (Youtube)] [Talk (Bilibili)]
"*" indicates equal contribution.

[2] Domain-Indexing Variational Bayes: Interpretable Domain Index for Domain Adaptation
Zihao Xu*, Guang-Yuan Hao*, Hao He, Hao Wang
Eleventh International Conference on Learning Representations (ICLR), 2023
[Paper] [OpenReview] [PPT] [Talk (Youtube)] [Talk (Bilibili)]

[3] Graph-Relational Domain Adaptation
Zihao Xu, Hao He, Guang-He Lee, Yuyang Wang, Hao Wang
Tenth International Conference on Learning Representations (ICLR), 2022
[Paper] [Code] [Talk] [Slides]

[4] Continuously Indexed Domain Adaptation
Hao Wang*, Hao He*, Dina Katabi
Thirty-Seventh International Conference on Machine Learning (ICML), 2020
[Paper] [Code] [Talk] [Blog] [Slides] [Website]

Reference

Composite Active Learning: Towards Multi-Domain Active Learning with Theoretical Guarantees

@inproceedings{hao2024composite,
  title={Composite Active Learning: Towards Multi-Domain Active Learning with Theoretical Guarantees},
  author={Hao, Guang-Yuan and Huang, Hengguan and Wang, Haotian and Gao, Jie and Wang, Hao},
  booktitle={AAAI},
  year={2024}
}
