UVa CS 4501/6501 Interpretable Machine Learning

uvanlp/iml-2022

1. Course Information

  • Instructors: Hanjie Chen, Yangfeng Ji
  • Semester: Spring 2022
  • Location: Rice Hall 340
  • Time: Tuesday and Thursday 11:00 AM - 12:15 PM
  • TA: Wanyu Du
  • Office hours:
    • Hanjie Chen: Thursday 2:00 PM - 3:00 PM, Location: Zoom
    • Yangfeng Ji: Friday 11:00 AM - 12:00 PM, Location: Rice 510 (in person) and Zoom
    • Wanyu Du: Tuesday 3:00 PM - 4:00 PM, Location: Zoom
  • Schedule
  • Campuswire for online discussion. Students registered for this course should receive an invitation from Campuswire by the time of our first class; please let the instructors know if you haven't received one.

2. Course Description

Machine learning models have achieved remarkable performance across a wide range of AI fields, such as natural language processing and computer vision. However, their lack of interpretability raises concerns about the trustworthiness and reliability of their predictions. This problem hinders their adoption in real-world applications, especially in high-stakes domains such as healthcare, economics, and criminal justice. The goal of this course is to familiarize students with this emerging problem in machine learning and with recent advances in interpretable and explainable AI.

2.1 Topics

This course will cover, but is not limited to, the following topics:

  • Background of interpretable machine learning
    • Interpretability in machine learning
    • Brief introduction of deep learning
  • Techniques in exploring the interpretability of machine learning models
    • Different classes of interpretable models (e.g., prototype-based approaches, sparse linear models, rule-based techniques, generalized additive models)
    • Post-hoc explanations (e.g., white-box explanations, black-box explanations, saliency maps)
    • Connections between model interpretability and other properties, such as robustness, uncertainty, and fairness
  • Implementation of model interpretability in real-world applications, including natural language processing, computer vision, healthcare, etc.
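To give a concrete flavor of one topic above (post-hoc saliency explanations), the sketch below computes input-times-gradient saliency scores for a toy logistic regression in NumPy. The weights and input are hypothetical, chosen only for illustration; real coursework would use a trained model and a library such as PyTorch:

```python
import numpy as np

# Hypothetical "trained" logistic regression with fixed weights (illustrative only).
w = np.array([2.0, -1.0, 0.0, 0.5])
b = 0.1

def predict_proba(x):
    """Probability of the positive class under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def input_times_gradient(x):
    """Input-times-gradient saliency for the positive-class logit.

    For a linear logit w.x + b, the gradient w.r.t. the input is w,
    so the saliency of feature i is simply x[i] * w[i].
    """
    return x * w

x = np.array([1.0, 3.0, 2.0, 0.0])
saliency = input_times_gradient(x)            # [2.0, -3.0, 0.0, 0.0]
# The feature with the largest |saliency| is most influential for this input.
top_feature = int(np.argmax(np.abs(saliency)))  # feature 1
```

For deep networks the gradient is no longer constant in the input, which is what makes saliency maps informative per example rather than a restatement of the weights.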

2.2 Format

  • Hybrid: lectures will be given in person at Rice Hall 340 and also streamed and recorded on Zoom. Students can find the Zoom link on Collab.
  • From Week 4: one lecture + one discussion per week

2.3 Prerequisites

  • Machine Learning: Students are expected to have a machine learning background, for example, from taking one of our machine learning classes (CS 4774 or CS 6316).
  • Programming: Students are also expected to have the programming and software engineering skills to work with machine learning packages in Python (e.g., scikit-learn, PyTorch, TensorFlow).
  • Calculus and Linear Algebra: Multivariable derivatives, matrix/vector notations and operations; singular value decomposition, etc.
  • Probability and Statistics: Mean and variance, multinomial distribution, conditional dependence, maximum likelihood estimation, Bayes theorem, etc.

2.4 Textbook/Materials

3. Assignments and Evaluation Schemes

  • Application-oriented (for undergraduates)

    • 3 programming assignments (45%)
    • 1 paper presentation (15%)
    • 10 paper summaries (10%)
    • Final project (20%)
    • In-class discussion (7%) + attendance (3%)
  • Research-oriented (for graduates)

    • 2 programming assignments (30%)
    • 2 paper presentations (30%)
    • 10 paper summaries (10%)
    • Final project (20%)
    • In-class discussion (7%) + attendance (3%)
  • Rubrics

4. Additional Information

Acknowledgement

Hanjie Chen is supported by the UVa Engineering Graduate Teaching Internship Program (GTI) for designing and teaching this course.
