laurakn/debias
Debiasing Representations by Removing Unwanted Variation Due to Protected Attributes

Here we present an example using the ProPublica COMPAS data set. Using race as the protected attribute, we compare raw and debiased predictions from a simple logistic regression model. We show that debiasing the data decreases the magnitude of the difference in both false positive and false negative rates between African-Americans and Caucasians, with little loss of accuracy. While our model is simple, we also show it is comparable to the COMPAS algorithm in scoring and accuracy.
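The comparison described above can be sketched in a few lines of NumPy. The synthetic data, the linear residualization used as the debiasing step, and the hand-rolled logistic regression below are all illustrative assumptions, not the repository's exact procedure; the point is only to show the shape of the experiment: fit the same model on raw and on debiased features, then compare per-group false positive and false negative rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for COMPAS-style data (hypothetical, for illustration):
# a is a binary-coded protected attribute; x carries unwanted variation from a.
n = 2000
a = rng.integers(0, 2, n)
x = rng.normal(size=(n, 3)) + 0.8 * a[:, None]
y = (x @ np.array([1.0, -0.5, 0.3]) + rng.normal(size=n) > 0).astype(int)

# One simple debiasing step (an assumption here): regress each feature on the
# protected attribute and keep the residuals, removing linear variation due to a.
A = np.column_stack([np.ones(n), a])
beta, *_ = np.linalg.lstsq(A, x, rcond=None)
x_debiased = x - A @ beta

def fit_logistic(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression (intercept included)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(X, w):
    Xb = np.column_stack([np.ones(len(X)), X])
    return (Xb @ w > 0).astype(int)

def fpr_fnr(y_true, y_pred):
    """False positive and false negative rates."""
    fpr = np.mean(y_pred[y_true == 0] == 1)
    fnr = np.mean(y_pred[y_true == 1] == 0)
    return fpr, fnr

# Fit on raw vs. debiased features and report the between-group rate gaps.
for name, feats in [("raw", x), ("debiased", x_debiased)]:
    w = fit_logistic(feats, y)
    pred = predict(feats, w)
    (fpr0, fnr0) = fpr_fnr(y[a == 0], pred[a == 0])
    (fpr1, fnr1) = fpr_fnr(y[a == 1], pred[a == 1])
    acc = np.mean(pred == y)
    print(f"{name:9s} |FPR gap|={abs(fpr0 - fpr1):.3f} "
          f"|FNR gap|={abs(fnr0 - fnr1):.3f} accuracy={acc:.3f}")
```

Because the residuals from a least-squares fit on `[1, a]` are exactly orthogonal to `a`, the debiased features carry no linear trace of the protected attribute, which is what drives the rate gaps down for the model trained on them.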
