Here's a new blog post I wrote on how my team eliminates unwanted biases (e.g., by gender or ethnicity) from algorithms we've deployed in the recruitment sector.
Devising algorithms that stamp out unwanted biases without sacrificing accuracy or performance adds time and effort to the machine learning model-design process. When algorithms can have a considerable social impact, as ours do in the human-resources space at GQR Global Markets, investing this time and effort is essential to ensuring equitable treatment of all people.
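To make "unwanted bias" concrete: a common first step is simply measuring it. The sketch below (not our actual pipeline; the data and group labels are illustrative) computes the demographic parity difference, i.e., the gap in positive-prediction rates between groups, which is one widely used fairness metric for screening models.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between groups.

    predictions: list of 0/1 model outputs (1 = e.g. "advance candidate")
    groups: list of group labels (e.g. gender category), same length
    """
    # Tally (total, positives) per group.
    rates = {}
    for pred, group in zip(predictions, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + pred)
    positive_rates = [pos / total for total, pos in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Hypothetical screening outputs for two groups of four candidates each:
preds = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value near zero means both groups are advanced at similar rates; the design challenge the post describes is driving this gap down without degrading the model's accuracy.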