The “algorithm” is now an entity, and it is a subject society has been talking about a lot lately. In 2015, a photo app automatically tagged two Black friends as gorillas. In 2016, a bot called Tay learned, in 12 hours, to be racist, to deny the Holocaust, and to say that feminists “should all die and burn in hell”; in less than 24 hours it was shut down. These incidents show how unpredictable machine learning algorithms can be when confronted with real people. How much bias can machine learning algorithms introduce? How much comes from the data used to train them, and how much from the algorithms themselves? How can we build products based on machine learning while avoiding gender, race, age, cultural, and other biases, and avoiding harm to those groups?
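One way to make the question “how much bias?” concrete is to measure it. The sketch below is a minimal, illustrative check (not from the article, and the data and function names are hypothetical): it computes the demographic parity gap, i.e., the largest difference in favorable-outcome rates that a model's predictions produce across demographic groups.

```python
# Hypothetical sketch: measuring one simple notion of bias,
# the demographic parity gap, over a model's predictions.
# The data and names below are illustrative only.

def positive_rate(predictions, groups, group):
    """Fraction of favorable (1) predictions given to one group."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_gap(predictions, groups):
    """Largest difference in favorable-prediction rates across groups."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy data: 1 = favorable outcome, two demographic groups "a" and "b".
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(demographic_parity_gap(preds, groups))  # 0.5: group "a" is favored far more often
```

A gap of 0 would mean both groups receive favorable outcomes at the same rate; a large gap is one signal, among many possible fairness metrics, that the data or the model deserves scrutiny.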
Yates (Communications of the ACM, June 2018) said that “any remedy for bias must start with awareness that bias exists.” Page (The Difference, 2007) proposed that identity diversity (our gender, race, religion, etc.) leads to cognitive diversity (the way we think and solve problems), mainly in tasks such as prediction and problem-solving. A 2014 McKinsey & Company study found that diversity fosters innovation and improves financial results. So workplace diversity can help in different ways, including helping to detect and reduce bias in algorithm design and execution.
How much can agile teams, at the beginning of the software development chain, help to minimize bias and reduce backlash for the end user? What is the role of agile when teams are built to work in a machine learning world? The Agile Manifesto values individuals and interactions over processes and tools, and agile teams are built on that. More recently, Modern Agile also set two of its four values around people: make people awesome and make safety a prerequisite. Perhaps not as causation, but at least as correlation, agile values are good evidence that we can create development environments that better support diversity. And once we have more diverse teams, we can expect better (less biased) outputs from machine learning algorithms.