---
image: /assets/img/blog/steve-harvey.jpg
---

## Overview
{:.lead}

## Featured Research
{:.lead}

Highlight: Robust Learning and Inference under Adverse Conditions, e.g., noisy labels or observations, outliers, adversaries, sample imbalance (long-tailed distributions), etc.

Why is this important? DNNs can brute-force fit training examples with random labels, i.e., patterns that carry no meaning:
{:.lead}


Noisy training data points generally exist in large-scale training datasets. Specifically, an observation and its corresponding semantic label may not match.
{:.message}
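As a minimal sketch of this memorization effect (assuming PyTorch; the toy MLP, sizes, and training budget below are illustrative assumptions, not the setup of any specific paper), an over-parameterised network can drive its training accuracy toward 100% on uniformly random labels:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: random "observations" paired with uniformly random labels,
# i.e., observation-label pairs that carry no meaningful pattern.
num_samples, num_features, num_classes = 512, 128, 10
x = torch.randn(num_samples, num_features)
y = torch.randint(0, num_classes, (num_samples,))  # random labels

# A small over-parameterised MLP; architecture and sizes are illustrative.
model = nn.Sequential(
    nn.Linear(num_features, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, num_classes),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Full-batch training; enough steps for the model to memorise the noise.
for epoch in range(500):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

train_acc = (model(x).argmax(dim=1) == y).float().mean().item()
print(f"final loss {loss.item():.4f}, training accuracy {train_acc:.2%}")
# Training accuracy approaches 100% even though the labels are pure noise:
# the network fits them by memorisation, not by extracting meaningful patterns.
```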

Are deep models intrinsically robust to massive noise?

## Intuitive concepts to keep in mind

* The definition of abnormal examples: A training example, i.e., an observation-label pair, is abnormal when the observation and its corresponding annotated label used for learning supervision are semantically unmatched.

* Fitting of abnormal examples: When a deep model fits an abnormal example, i.e., maps an observation to a semantically unmatched label, this abnormal example can be viewed as a successful adversary, i.e., an unrestricted adversarial example.

* Learning objective: A deep model is supposed to extract/learn meaningful patterns from training data while avoiding fitting any anomaly; a filtering heuristic sketched after this list makes this concrete.
{:.message}
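One widely used heuristic in the noisy-label literature for avoiding anomalies during training is the small-loss criterion: examples whose per-example loss stays unusually large are treated as likely mismatched pairs and excluded from fitting. The sketch below is a minimal illustration under assumed names and thresholds (`flag_suspect_examples` and `keep_ratio` are hypothetical), not the method of any particular paper:

```python
import torch
import torch.nn as nn

def flag_suspect_examples(model: nn.Module,
                          x: torch.Tensor,
                          y: torch.Tensor,
                          keep_ratio: float = 0.7) -> torch.Tensor:
    """Return a boolean mask marking the (1 - keep_ratio) highest-loss
    examples as suspected abnormal observation-label pairs."""
    criterion = nn.CrossEntropyLoss(reduction="none")  # per-example losses
    with torch.no_grad():
        losses = criterion(model(x), y)
    # Keep the keep_ratio fraction of examples with the smallest losses.
    num_keep = int(keep_ratio * len(losses))
    threshold = losses.sort().values[num_keep - 1]
    return losses > threshold  # True = suspected anomaly, excluded from fitting
```

A training loop would then compute the loss only over examples where the mask is `False`, re-estimating the mask each epoch as the model's fit evolves.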

## Related papers