Implementation of debiasing algorithm in "Debiasing Representations by Removing Unwanted Variation Due to Protected Attributes" on ProPublica's COMPAS data set

Amandarg/debias

Debiasing Representations by Removing Unwanted Variation Due to Protected Attributes

Here we present an example using the ProPublica COMPAS data set. Using race as the protected attribute, we compare raw and debiased predictions from a simple logistic regression model. We show that the debiased data decreases the magnitude of the difference in both false positive and false negative rates between African-Americans and Caucasians, with little loss of accuracy. While our model is simple, we also show it is comparable to the COMPAS algorithm in scoring and accuracy.
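The evaluation pipeline described above can be sketched as follows. This is a minimal illustration on synthetic stand-in data, not the COMPAS data itself, and the debiasing step shown is a simple per-group mean-centering approximation rather than the paper's full algorithm; all variable names and data are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for COMPAS-style data: features X, binary label y,
# and a binary protected attribute z (e.g. a race-group indicator).
n = 2000
z = rng.integers(0, 2, n)
X = rng.normal(size=(n, 5)) + 1.5 * z[:, None]  # features correlated with z
y = (X[:, 0] + rng.normal(scale=2.0, size=n) > 1.0).astype(int)

def debias(X, z):
    """Crude stand-in for removing variation due to the protected attribute:
    center each group's features so z is no longer linearly recoverable from X.
    (The paper's actual method is more general; this is only illustrative.)"""
    Xd = X.astype(float).copy()
    for g in np.unique(z):
        Xd[z == g] -= X[z == g].mean(axis=0)
    return Xd

def group_rates(y_true, y_pred, z, g):
    """False positive and false negative rates within protected group g."""
    m = z == g
    yt, yp = y_true[m], y_pred[m]
    fpr = ((yp == 1) & (yt == 0)).sum() / max((yt == 0).sum(), 1)
    fnr = ((yp == 0) & (yt == 1)).sum() / max((yt == 1).sum(), 1)
    return fpr, fnr

# Compare the FPR/FNR gap between groups for raw vs. debiased features.
for name, feats in [("raw", X), ("debiased", debias(X, z))]:
    clf = LogisticRegression().fit(feats, y)
    pred = clf.predict(feats)
    fpr0, fnr0 = group_rates(y, pred, z, 0)
    fpr1, fnr1 = group_rates(y, pred, z, 1)
    acc = (pred == y).mean()
    print(f"{name}: |dFPR|={abs(fpr0 - fpr1):.3f} "
          f"|dFNR|={abs(fnr0 - fnr1):.3f} acc={acc:.3f}")
```

The key check is that the between-group rate gaps shrink after debiasing while overall accuracy stays close to the raw-feature baseline.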
