
Social bias and fairness in NLP

Learned continuous representations for language units were the first trembling steps toward making neural networks useful for natural language processing (NLP), and promised a future of semantically rich representations for downstream solutions. NLP has since seen some of the progress that previously transformed image processing: increased computing power and algorithmic advances have made it possible to train larger models that perform better than ever. Such models also enable transfer learning for language tasks, leveraging large, widely available datasets.

In 2016, Bolukbasi et al. presented their paper “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings”, shedding light on some of the gender bias present in trained word embeddings at the time. Datasets inevitably encode the social bias that surrounds us, and models trained on such data may expose that bias in their decisions. It is therefore important to be aware of what information a learned system bases its predictions on. Several solutions have been proposed to limit the expression of societal bias in NLP systems, including data augmentation and representation calibration; a sketch of the latter is shown below. Similar approaches may also be relevant for privacy and disentangled representations. In this talk, we'll discuss some of these issues and go through some of the recently proposed solutions.
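
To make "representation calibration" concrete, here is a minimal sketch of projection-based debiasing in the spirit of Bolukbasi et al. (2016). It assumes a dictionary emb mapping words to unit-normalized NumPy vectors (e.g. loaded from pretrained word2vec or GloVe); the hand-picked pair list and the averaging of difference vectors are simplifications, since the original paper identifies the gender direction with PCA over definitional pairs and debiases only gender-neutral words.

    import numpy as np

    def gender_direction(emb, pairs):
        # Average the difference vectors of definitional pairs such as
        # (he, she). Bolukbasi et al. instead use PCA over these pairs.
        diffs = [emb[a] - emb[b] for a, b in pairs]
        g = np.mean(diffs, axis=0)
        return g / np.linalg.norm(g)

    def bias_score(emb, word, g):
        # Cosine similarity with the gender direction: a crude measure
        # of how gendered a (unit-normalized) word vector is.
        return float(np.dot(emb[word], g))

    def debias(v, g):
        # Remove the component of v along the bias direction, then
        # re-normalize (a simplified form of "hard debiasing").
        v = v - np.dot(v, g) * g
        return v / np.linalg.norm(v)

    # Hypothetical usage, assuming emb has been loaded:
    # pairs = [("he", "she"), ("man", "woman"), ("father", "mother")]
    # g = gender_direction(emb, pairs)
    # print(bias_score(emb, "programmer", g))  # before debiasing
    # emb["programmer"] = debias(emb["programmer"], g)
    # print(bias_score(emb, "programmer", g))  # ~0 after the projection

After the projection, the word vector is orthogonal to the estimated gender direction, so analogy queries along that direction no longer single it out; the trade-off is that any legitimate semantic information correlated with that direction is removed as well.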

References

  • Bolukbasi et al., NeurIPS 2016, Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
  • Caliskan, Bryson & Narayanan, 2017, Semantics derived automatically from language corpora contain human-like biases, Science 356(6334):183–186
  • Zhao et al., EMNLP 2018, Learning Gender-Neutral Word Embeddings
  • Sahlgren & Olsson, NoDaLiDa 2019, Gender Bias in Pretrained Swedish Embeddings
  • Kiela & Bottou, EMNLP 2014, Learning Image Embeddings using Convolutional Neural Networks for Improved Multi-Modal Semantics
  • Kågebäck, Mogren, Tahmasebi & Dubhashi, 2014, Extractive Summarization using Continuous Vector Space Models, http://mogren.one/summarization/
  • Zhao et al., NAACL 2018, Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods
  • Zhang et al., AIES 2018, Mitigating Unwanted Biases with Adversarial Learning
  • Sato et al., ACL 2019, Effective Adversarial Regularization for Neural Machine Translation
  • Wang et al., ICML 2019, Improving Neural Language Modeling via Adversarial Training
  • Martinsson, Listo Zec, Gillblad & Mogren, 2020, Adversarial representation learning for synthetic replacement of private attributes, https://arxiv.org/abs/2006.08039
  • Kai-Wei Chang's talk on bias and fairness: https://www.youtube.com/watch?v=WypSLlPaKBg, http://kwchang.net/talks/genderbias

Slides (PDF)

GAIA Conference 2020, 2020-11-27
Olof Mogren

Olof Mogren, PhD, RISE Research Institutes of Sweden.