Algorithmic Bias - Yes, It’s Real

By Ronny Aoun
Founder & Chief Executive Officer

There is such a thing as algorithmic bias, and its implications are real and potentially serious. Despite the sense of wonder and mysticism that’s sometimes applied to the subject these days, artificial intelligence (AI) is not magic. In and of itself, AI cannot solve all the world’s challenges, nor is it the sole domain of technology geniuses who need PhDs to use and understand it.

Humans are behind the algorithms, and when it comes to machine learning (ML), one of the most talked-about subsets of AI, algorithmic bias tends to make itself known. ML is the use and development of computer systems that are able to learn and adapt without following explicit instructions, using algorithms and statistical models to analyze and draw inferences from patterns in data. The system then uses those patterns to make predictions and assumptions about new data.
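
To make that concrete, here is a minimal sketch in Python using scikit-learn. The features, labels and numbers are invented purely to illustrate the mechanics: the model is given examples rather than rules, and it generalizes from the patterns it finds.

# Toy illustration: the model is never handed explicit rules; it infers a
# decision boundary from labelled examples alone. All data here is invented.
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [years_experience, test_score]; label: hired (1) or not (0)
X_train = [[1, 55], [2, 60], [8, 85], [10, 90], [3, 62], [9, 88]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)      # "learning": statistical patterns, not hand-written rules

# The model now generalizes, scoring unseen cases by the patterns it found.
print(model.predict([[7, 80]]))  # likely [1], because similar examples were labelled 1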


No magic tricks involved

It’s all about algorithms, datasets and training - not magic. When studies continue to show that there is a diversity crisis within the AI industry, it becomes easier to understand the role unconscious bias can play in development. Human beings can transfer their own implicit biases to algorithms, leading to systematic and repeatable errors in systems and models.

At the same time, one should not underestimate the significant potential of AI to actually correct and even eliminate existing biases and act as an enabler of greater diversity. But the right parameters need to be in place for that to happen.


What’s the big deal?

Given that AI is being used to solve social, medical, educational, environmental and a whole host of complex business challenges, algorithmic bias can have real-life consequences. Take facial recognition software used by some police forces to compare suspects’ photos to mug shots and driver’s license images. Research has shown that accuracy is consistently poorest for subjects who are female, Black and aged 18-30. In healthcare, a lack of data integrity can promote health inequity.

To be fair, most people don’t deliberately set out to create biased algorithms, and those who make decisions about commercial applications of AI are often unaware of the bias embedded in their models.

None of this information is new. And though there is still so much for us to do and learn when it comes to AI, there are practical steps we must take to ensure we do better.


Here are four actions the AI industry must take to help overcome bias:

Drive awareness across the organization. Innovation is not an excuse for ignorance. Sure, we’re learning as we go, but that’s not a reason to bury our heads in the sand. Ensure that all the people who have an impact on the development of systems and models understand that algorithmic bias is real. Help them understand the role of unconscious bias and how it can creep into our work despite our good intentions.

Hire more diverse people. To solve real-world problems at scale, AI technologists need to reflect the populations they impact. If you have trouble recruiting the diverse workforce you need, then talk to the people who will be impacted by your programs and consult them to understand potential concerns. Development cannot take place in a vacuum.

Stop relying heavily on historical data. You’ve heard it before: garbage in, garbage out. There is still too much reliance on historical data, which becomes a problem when it no longer reflects current reality; a simple audit of that data, like the first sketch after this list, can show when past bias is baked into the labels.

Make AI explainable and transparent. Businesses must prioritize explainability and transparency. We will be able to build trust in AI when we build systems that are transparent in their operations, so that all humans can understand how these systems arrive at their decisions and predictions; the second sketch after this list shows one basic form that can take. Explainable AI is responsible AI.
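
One modest way to act on the historical-data point is to audit the labels before training. The sketch below uses pandas; the column names and values are hypothetical, made up purely for illustration.

# Hypothetical audit: check whether historical outcomes are skewed by group
# before using them as training labels. Column names and values are invented.
import pandas as pd

history = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1, 1, 1, 0, 0, 0, 1, 0],
})

# Positive-outcome rate per group: a large gap suggests the "ground truth"
# may encode past bias rather than current reality.
print(history.groupby("group")["hired"].mean())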
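
And as one simple illustration of explainability, a linear model lets you read off how much weight each input carries in its decisions. The feature names and data below are again invented assumptions, not a prescription for any particular system.

# Sketch: with a simple linear model, the weight of each input can be read
# directly from its coefficients. Feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["years_experience", "test_score", "referrals"]
X = np.array([[1, 55, 0], [8, 85, 2], [3, 60, 1], [10, 92, 3], [2, 58, 0], [9, 88, 1]])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# One plain-language view of how the system arrives at its predictions:
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: weight {weight:+.3f}")

More complex models take more work to explain, but the principle is the same: someone outside the team should be able to see which inputs drove a decision.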


AI can indeed play a role in correcting existing biases, but first the conditions need to be in place to prevent AI technologists from transferring their own unconscious biases into their work.

AI doesn’t run itself. AI technologists need to have proper governance in place. They need to be aware of their own biases. They need to ensure there are diverse voices around the table when it comes to decision-making. And, they need to ensure that algorithms are explainable and transparent.  

When all of these parameters are met, AI can absolutely be a powerful tool in helping to fight bias and contribute to decision-making within organizations.

