How To Overcome Bias In AI And ML
Artificial intelligence (AI) and machine learning (ML) are set to shape the future of everything from financial to legal systems. Algorithms already decide who gets granted a loan and who gets denied parole.
A survey conducted jointly by MIT Technology Review and Google Cloud showed that 60 percent of respondents have already implemented an ML strategy in their organization. Deloitte also predicts that spending on ML and AI will more than quadruple, from $12 billion in 2017 to $57.6 billion in 2021.
However, there’s a growing concern that AI and ML systems are inheriting the biases of their human makers. This concern is not without reason; there are several examples of extreme prejudice to consider:
Microsoft's Tay Twitterbot had to be taken down after it learned to be racist within just 16 hours, and Google had to block gender-based pronouns from Smart Compose, one of its AI-enabled features.
It was also reported that Amazon ditched an AI recruiting tool for favoring men for technical roles after it was fed ten years of Amazon’s hiring data, in which the vast majority of hires were men.
Additionally, Carnegie Mellon research found that women were shown significantly fewer online ads for jobs paying more than $200,000 per year compared to men, and minorities have been reported to generally pay more for car insurance due to the algorithms used to determine premiums.
These are just some of the many examples showing that bias is quickly becoming an issue for developers of AI and ML.
What Is the Source Of Bias In AI And ML?
Like any other computing system, AI and ML are not immune to the oldest rule of computing: “garbage in, garbage out”. Algorithms are only as good as the mountains of data they are fed to interpret, extrapolate from and learn.
It also turns out that most of the data being used by AI/ML systems tends to come from people of the same socio-economic class. So as AI/ML has learned to interpret human language, it has also inadvertently picked up the human biases – racial, gender, economic and so on – of its teachers.
With subtle bias spread throughout our society, it’s not really surprising that the machines we have built have absorbed and magnified it. Unfortunately, unlike humans, algorithms are not (yet) sophisticated enough to recognize and unlearn the biases they pick up, despite common misconceptions about their objectivity.
How To Manage Bias In AI and ML
Some tech companies, previously accused of not paying enough attention to the problem of AI bias, are now taking steps to tackle it. Microsoft is hiring non-tech professionals to train its AI bots in nuanced language, IBM is applying independent bias ratings and Google has adopted a set of AI principles aimed at doing away with bias.
Researchers have also been looking into how to account for and reduce human biases in the AI/ML industry. Here are some good anti-bias practices to follow in developing AI/ML innovations:
1. Carefully Select the Right Learning Model
Every AI model is unique because each problem requires a unique solution and different data sets. Data scientists need to identify the best model for a given situation by carefully considering the varied strategies they can take, the different data sets they can use, and their respective implications.
For example, unsupervised models can learn bias from their data set, while supervised models carry more risk of human bias being introduced into the process. Developers should also simulate real-world applications as much as possible when building algorithms and run their statistical methods against real data whenever possible, rather than waiting to test algorithms only once they are in production.
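A minimal sketch of this kind of pre-deployment check is below. It assumes scikit-learn, a hypothetical real-world data file ("loan_applications.csv") and hypothetical column names ("approved", "gender"); the point is simply to compare candidate models on held-out real data and break their scores down by subgroup before anything ships.

```python
# Sketch: compare candidate models on held-out real-world data and check
# performance per subgroup before deployment. File and column names are
# hypothetical placeholders, not a real data set.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("loan_applications.csv")       # hypothetical real data
X = df.drop(columns=["approved", "gender"])     # keep the protected attribute out of the features
y = df["approved"]

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, df["gender"], test_size=0.3, random_state=42
)

for name, model in {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=42),
}.items():
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    print(name, "overall accuracy:", accuracy_score(y_test, preds))
    # Break the score down by subgroup to surface uneven performance early.
    for group in g_test.unique():
        mask = g_test == group
        print(f"  {group}: {accuracy_score(y_test[mask], preds[mask]):.3f}")
```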
2. Use More Varied Training Data Sets
Using more data is one of the simplest yet most effective ways to prevent AI bias. The bias often arises from a lack of training samples for one subgroup relative to other, more dominant ones. Collecting more training samples for the underrepresented group goes a long way.
However, this alone might not be enough. You might need to use weighting to increase the influence of data from the underrepresented group, but this should be done very carefully, as it can open the door to unexpected new biases and lead the system to read random noise as trends.
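As a rough illustration of the weighting idea, the sketch below uses scikit-learn's compute_sample_weight to up-weight a minority subgroup during training. The data and the "majority"/"minority" group labels are invented for the example.

```python
# Sketch of group reweighting, assuming scikit-learn is available.
# The toy data and group labels below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_sample_weight

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                   # toy features
y = rng.integers(0, 2, size=1000)                # toy binary labels
group = np.array(["majority"] * 900 + ["minority"] * 100)

# Weight each sample inversely to its group's frequency so the
# underrepresented group is not drowned out during training.
weights = compute_sample_weight(class_weight="balanced", y=group)

model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=weights)
```

Note that the weights are derived from the group labels, not the prediction target; this is exactly the kind of lever that should be applied carefully and re-validated, since over-weighting a small group can amplify noise.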
3. Embrace More Feature Engineering
Feature engineering is one of the most promising solutions to bias in AI and ML. Understanding how certain features carry different meanings for different subgroups can help do away with some subtle bias.
A common example of this is a recruitment algorithm that learns to associate long periods of no work with poor job performance. However, this would disproportionately affect women applicants, who are more likely to have gaps in their job history from taking time off to care for children or family.
A feature that differentiates between involuntary unemployment and parental/family leave is better, both for reducing the bias against women and for improving the overall performance of the model.
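A minimal sketch of what that split might look like is below, assuming a pandas DataFrame of applicants with hypothetical columns "months_not_working" and "gap_reason"; real pipelines would obviously need more careful data collection behind those fields.

```python
# Sketch: replace a single "gap length" feature with two features that
# separate involuntary unemployment from parental/family leave.
# Column names and values are hypothetical placeholders.
import pandas as pd

applicants = pd.DataFrame({
    "months_not_working": [0, 14, 6, 24],
    "gap_reason": ["none", "parental_leave", "layoff", "parental_leave"],
})

# Keep the gap length only where it reflects involuntary unemployment...
applicants["involuntary_gap_months"] = applicants["months_not_working"].where(
    applicants["gap_reason"] == "layoff", 0
)
# ...and track family leave separately so it is not treated as a negative signal.
applicants["family_leave_months"] = applicants["months_not_working"].where(
    applicants["gap_reason"] == "parental_leave", 0
)
print(applicants)
```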
4. More Diversity in the Demographics Working in AI and ML
As explained earlier, one key root of bias in AI and ML is the people creating the solutions in the first place. Data from the Bureau of Labor Statistics shows that the professionals who write AI programs are still mostly white males. A study by Wired and Element AI found that only 12 percent of leading ML researchers are women.
Companies in this sector are going to have to take more deliberate steps to introduce more diversity and inclusion on the teams working on their AI and ML innovations to curb the risk of human bias.
Is human bias creeping into your AI and ML solutions?
With AI and ML increasingly applied in critical industries such as healthcare, transport and finance, it is imperative that we keep our human biases from proliferating in these systems.
Let the experts at ASB Resources guide you in finding and deploying the right modeling principles to reduce or eliminate bias in your AI and ML solutions. Schedule a call with one of our experts today!