
Towards Understanding and Mitigating Social Biases in Language Models

This repo contains code and data for evaluating and mitigating social biases in language generation models.

Paper

Towards Understanding and Mitigating Social Biases in Language Models
Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, and Ruslan Salakhutdinov
ICML 2021

If you find this repository useful, please cite our paper:

@inproceedings{liang2021towards,
  title={Towards Understanding and Mitigating Social Biases in Language Models},
  author={Liang, Paul Pu and Wu, Chiyu and Morency, Louis-Philippe and Salakhutdinov, Ruslan},
  booktitle={International Conference on Machine Learning},
  pages={6565--6576},
  year={2021},
  organization={PMLR}
}

1. Identify bias-sensitive tokens, obtain the bias subspace, and create the dataset to train the bias classifier

python data_preprocess.py --embed_source glove --by_pca True --num_components 5 --save_subspace False

The GloVe and GPT-2 embeddings are large files; you can download or extract them yourself. We also provide a Google Drive link.
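
For intuition, here is a minimal sketch of one common way to estimate a bias subspace with PCA over definitional word pairs (in the style of Bolukbasi et al.), roughly what --by_pca asks data_preprocess.py to do. The embedding path and word pairs are illustrative assumptions, not the repo's exact choices:

import numpy as np
from sklearn.decomposition import PCA

def load_glove(path):
    """Parse a GloVe text file into a {word: vector} dict."""
    emb = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            emb[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return emb

# Definitional gender pairs; their centered differences span the bias direction(s).
PAIRS = [("he", "she"), ("man", "woman"), ("father", "mother"),
         ("son", "daughter"), ("male", "female")]

emb = load_glove("glove.840B.300d.txt")  # assumed local path
diffs = []
for a, b in PAIRS:
    center = (emb[a] + emb[b]) / 2.0
    diffs.append(emb[a] - center)
    diffs.append(emb[b] - center)

# The top principal components of the differences define the bias subspace
# (n_components=5 mirrors --num_components 5 above).
pca = PCA(n_components=5).fit(np.stack(diffs))
bias_subspace = pca.components_  # shape: (5, 300)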

2. Train the bias classifier and learn the projection matrix P

python context_nullspace_projection.py

The nullspace projection code is from INLP. Thanks for their great work!

To run the INLP experiments, first git clone https://github.com/shauli-ravfogel/nullspace_projection and put it under the root directory of this repo.
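
Conceptually, each INLP iteration trains a linear bias classifier on the representations and then projects them onto the classifier's nullspace, so that the bias attribute can no longer be linearly recovered. A minimal sketch of that loop, assuming features X and protected-attribute labels y (the classifier choice and iteration count are illustrative, not the repo's exact settings):

import numpy as np
from sklearn.svm import LinearSVC

def nullspace_projection(w):
    """P = I - pinv(w) w: orthogonal projection onto the nullspace of the
    classifier weight matrix w (shape: n_classes x dim)."""
    w = np.atleast_2d(w)
    row_space = np.linalg.pinv(w) @ w
    return np.eye(w.shape[1]) - row_space

def inlp(X, y, n_iters=10):
    """Iteratively remove linearly decodable bias information from X."""
    P = np.eye(X.shape[1])
    X_proj = X.copy()
    for _ in range(n_iters):
        clf = LinearSVC(max_iter=5000).fit(X_proj, y)
        P = nullspace_projection(clf.coef_) @ P  # compose the projections
        X_proj = X @ P.T                         # re-project the original features
    return P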

3. Evaluate bias in GPT-2

Local Bias

cd src/local_bias
python measure_local_bias.py

Running the evaluation script on the full data takes a long time, so for now we provide a subset of our evaluation data; the full data is available via Google Drive. Note that when evaluating on the full data you may encounter numerical problems on some sentences; you can simply discard those samples.
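
As a rough illustration of what "local bias" measures, the sketch below compares GPT-2's next-token distributions for two contexts that differ only in the bias-sensitive token, using the Hellinger distance (one of the metrics the paper reports). The prompts are illustrative; see measure_local_bias.py for the actual evaluation:

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def next_token_dist(prompt):
    """Distribution over the next token given a prompt."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    return torch.softmax(logits, dim=-1)

p = next_token_dist("The man worked as a")
q = next_token_dist("The woman worked as a")

# Hellinger distance between the two next-token distributions;
# larger values indicate more local bias at this position.
hellinger = torch.sqrt(0.5 * torch.sum((p.sqrt() - q.sqrt()) ** 2)).item()
print(f"local bias (Hellinger): {hellinger:.4f}")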

Global Bias

We use the regard score difference as the metric for global bias. The evaluation code is from https://github.com/ewsheng/nlg-bias. Thanks for their great work!

git clone https://github.com/ewsheng/nlg-bias.git
cd src/global_bias
python generate_full_sentence.py --algorithm INLP

After the full sentences are generated, use the regard classifier to measure global bias.
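
Once the regard classifier has labeled each generated sentence, global bias can be summarized as the gap in mean regard between demographic groups. A minimal sketch with made-up labels, purely to show the arithmetic:

import numpy as np

# Illustrative regard labels (-1 negative, 0 neutral, +1 positive) for
# sentences generated from "The man ..." vs. "The woman ..." prompts.
regard_male = np.array([1, 0, -1, 1, 0])
regard_female = np.array([-1, 0, -1, 0, 1])

gap = regard_male.mean() - regard_female.mean()
print(f"regard score difference (global bias): {gap:+.3f}")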

To reproduce the results in our paper, we also provide the projection matrix P for the gender bias test in data/saved_P/P_gender_test_79.npy.
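
For intuition on how such a P can be used at decoding time, here is a simplified sketch that projects GPT-2's final hidden state onto the bias nullspace before the LM head at each greedy decoding step. The paper's A-INLP additionally mixes the projected and original states with an adaptive weight, so treat this hard projection as an illustration rather than the exact logic of generate_full_sentence.py (the prompt and step count are assumptions):

import numpy as np
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

# Saved nullspace projection (expected shape: hidden_dim x hidden_dim, 768 for gpt2).
P = torch.tensor(np.load("data/saved_P/P_gender_test_79.npy"), dtype=torch.float32)

ids = tokenizer("The woman worked as a", return_tensors="pt").input_ids
for _ in range(20):  # generate 20 tokens greedily
    with torch.no_grad():
        hidden = model.transformer(ids).last_hidden_state[:, -1]  # (1, 768)
        logits = model.lm_head(hidden @ P.T)  # project out bias directions
    ids = torch.cat([ids, logits.argmax(-1, keepdim=True)], dim=1)
print(tokenizer.decode(ids[0]))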

Acknowledgements

This repo builds on INLP (https://github.com/shauli-ravfogel/nullspace_projection) for nullspace projection and nlg-bias (https://github.com/ewsheng/nlg-bias) for the regard classifier.
