Teaching machines to play fair
Machine learning is increasingly used to make decisions about people’s lives, such as whether to grant someone a loan or invite them to a job interview. This brings with it the risk of discrimination, particularly when the training data itself reflects existing biases.
One strategy for ensuring such systems are fair is to modify the training data they learn from. Such approaches have been successful empirically, but typically lack strong theoretical fairness guarantees.
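One well-known instance of such a data modification is reweighing: each training example is assigned a weight so that, under the weighted distribution, a protected attribute and the outcome become statistically independent. The sketch below is a minimal illustration of this idea, assuming list-valued inputs `groups` (a protected attribute) and `labels` (a binary outcome); it is not the specific method discussed in this article.

```python
from collections import Counter

def reweigh(groups, labels):
    """Assign each example the weight w(s, y) = P(S=s) * P(Y=y) / P(S=s, Y=y),
    estimated from counts, so that group and outcome are independent under
    the weighted distribution (a reweighing-style sketch)."""
    n = len(labels)
    group_counts = Counter(groups)              # counts per group
    label_counts = Counter(labels)              # counts per outcome
    joint_counts = Counter(zip(groups, labels))  # counts per (group, outcome)
    return [
        (group_counts[s] / n) * (label_counts[y] / n) / (joint_counts[(s, y)] / n)
        for s, y in zip(groups, labels)
    ]

# Toy biased data: group "a" is mostly labelled 1, group "b" mostly 0
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 0, 0, 1]
weights = reweigh(groups, labels)
# Over-represented pairs like ("a", 1) get weight < 1; rare pairs get weight > 1.
```

A downstream learner that accepts per-example weights (for example, via a `sample_weight` argument) can then be trained on the reweighted data. As the paragraph above notes, such corrections tend to help in practice without, by themselves, yielding formal fairness guarantees.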