
The Biases in Artificial Intelligence

By Bipin Regmi, Anthony Haswell & Arnie Hayes

We often hear that biases in AI arise while the data is being collected, and that these biases cause ethical issues. These problems occur when the collected data reflects the preexisting discrimination and biases found in our societies. An example of deep learning algorithm bias is a facial recognition system that is given more pictures of light-skinned individuals than of dark-skinned individuals. The algorithm will then perform worse at recognizing dark-skinned faces, producing discriminatory results [3]. Choosing which attributes the deep learning process should consider drastically affects the AI’s ability to perform its task with precision, and identifying these potential biases is the first step.
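To make the facial recognition example concrete, one way a team could surface this kind of skew is to break a model’s accuracy out by demographic group. The sketch below is a minimal, hypothetical illustration; the prediction arrays and group labels are invented for this example and are not drawn from any real face recognition system.

```python
# Minimal sketch: measuring a classifier's accuracy per group.
# The labels, predictions, and group names below are hypothetical.

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} so gaps between groups become visible."""
    totals, correct = {}, {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] = totals.get(group, 0) + 1
        if truth == pred:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Toy data: the model does noticeably worse on the underrepresented group.
y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["light", "light", "light", "light", "dark", "dark", "dark", "dark"]
print(accuracy_by_group(y_true, y_pred, groups))
# {'light': 1.0, 'dark': 0.25}
```

A gap like this would not show up in a single aggregate accuracy number, which is exactly why biases of this kind often go unnoticed.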

The main idea is that developers won’t know exactly how a system will behave when trials begin, so they can’t recognize the algorithm’s problem behaviors until the system undergoes mass testing and usage. That is why it can be so difficult to track down exactly where a bias is coming from. The learning behaviors of the system are not modeled or designed to detect biases; systems are mostly tested for their performance on a specific task [3]. Before a system is released, computer scientists test it on a large random dataset, but a limited data sample cannot guarantee that every bias is exposed. AI systems learn to make decisions from training data, which can reflect biased human decisions or social inequities, even when sensitive variables such as gender, race, or sexual orientation are removed[2]. Amazon’s next-day delivery service, for example, showed this kind of bias: some areas of the US, typically lower-income areas, were not offered same-day delivery the way middle- and upper-class areas were. The system had learned from historical patterns in formerly redlined neighborhoods, and the areas that were once redlined are the same areas not getting same-day or next-day delivery. The broader issue with training data is that groups can be over- or underrepresented, resulting in flawed data sampling[2]. AI bias is the programmer’s responsibility, but clearly it is hard to address every potential reason biases occur.
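As a rough illustration of how such sampling problems could be caught before release, here is a minimal sketch that compares the share of each group in a training set against reference population shares. The group labels, counts, and population figures are all invented for illustration, not drawn from any real dataset.

```python
# Minimal sketch: auditing group representation in training data
# against assumed population shares. All numbers are hypothetical.
from collections import Counter

def representation_report(records, key, population_shares):
    """Flag groups whose share of the data falls below their population share."""
    counts = Counter(r[key] for r in records)
    n = len(records)
    for group, share in population_shares.items():
        observed = counts.get(group, 0) / n
        flag = "UNDERREPRESENTED" if observed < share else "ok"
        print(f"{group}: {observed:.0%} of data vs {share:.0%} of population -> {flag}")

training_data = ([{"zip_group": "affluent_area"}] * 70
                 + [{"zip_group": "lower_income_area"}] * 30)
representation_report(training_data, "zip_group",
                      {"affluent_area": 0.5, "lower_income_area": 0.5})
```

A report like this only catches imbalance along attributes someone thought to check, which is part of why underrepresentation so often slips through.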

AI systems are created by individuals with their own backgrounds, experiences, and even prejudices, and this often produces an unintentionally biased system[6]. It is the programmer’s responsibility to create fair and unbiased algorithms, but inadvertent consequences do occur[6]. The consequences of bias are too significant to ignore, so action and thorough consideration are necessary. As AI technology becomes more widely accepted, available, and relied on across industries, precision cannot be neglected. Consider the use of AI in health care systems or in credit checks at banking firms. In health care, the accuracy of AI can be a matter of life and death; likewise, in credit checks it determines approval or denial of an urgent loan[6]. This raises the issue of liability for the company or individuals that built the AI technology[6]. The question of who is responsible arises when these mistakes occur, and the fallout can be devastating, especially when the bias is unintentional. Determining fault can be complicated, and the only way to avoid it is to improve the training data and try to identify any biases that were missed in initial testing[6]. Addressing the issues that caused the bias is challenging, but there are ways to improve an AI system to minimize bias and the risk of discrimination. Careful analysis of the training data, identifying issues in the root algorithm, and working to remove societal human biases is how programmers can begin to create a fair and accurate system[5].
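One concrete check that fits the credit example above is the disparate-impact ratio: the approval rate for one group divided by the approval rate for a reference group. The sketch below applies it to hypothetical loan decisions; the “four-fifths” 0.8 threshold is a common rule of thumb rather than a fixed legal standard, and the data is made up.

```python
# Minimal sketch: disparate-impact ratio on hypothetical loan approvals.
# A ratio well below ~0.8 is a common red flag worth investigating.

def disparate_impact(decisions, protected_group, reference_group):
    """Approval rate of protected_group divided by that of reference_group."""
    def rate(group):
        outcomes = [approved for g, approved in decisions if g == group]
        return sum(outcomes) / len(outcomes)
    return rate(protected_group) / rate(reference_group)

decisions = ([("group_a", True)] * 8 + [("group_a", False)] * 2
             + [("group_b", True)] * 5 + [("group_b", False)] * 5)
print(disparate_impact(decisions, "group_b", "group_a"))  # 0.5 / 0.8 = 0.625
```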

Having fair and equal representation of different demographics plays the biggest role in reducing problematic biases in AI. Programmers must be mindful of the demographic representation in the training data[5]. Ensuring equal representation isn’t always obvious and is often overlooked[5], largely because the majority of programmers and computer scientists are men[5]. Diversity within the team and workplace is a first step toward ensuring equal representation of all groups: as a team grows more diverse, it gains a greater understanding of how to represent everyone. The problem is that we live in complex societies, and identifying every group and confirming that each has equal representation in the training data is impossible[5]. Some groups will always be marginalized, but as long as programmers are deliberate and considerate of these groups, we can lessen biases and build systems that are much more inclusive[5].
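One simple and widely used way to compensate for uneven representation is to reweight training examples by inverse group frequency, so that every group contributes equal total weight during training. The sketch below is a minimal illustration under that assumption; the group labels are hypothetical, and a real pipeline would pass these weights to the model’s fitting routine.

```python
# Minimal sketch: inverse-frequency sample weights so each group
# contributes equally to training. Group labels are hypothetical.
from collections import Counter

def balancing_weights(groups):
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # Weight each example so every group sums to n / k total weight.
    return [n / (k * counts[g]) for g in groups]

groups = ["majority"] * 8 + ["minority"] * 2
print(balancing_weights(groups))
# majority examples get weight 0.625, minority examples get 2.5
```

Reweighting only helps along the group attribute being balanced; it cannot fix groups that were never identified in the first place, which is the point made above.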

This understanding of underrepresentation leads directly into analysis of the system as a whole, which can turn out to be the most challenging part. Detecting causes of bias in the root algorithm requires careful consideration of how the system performs its intended task[5]. Programmers must recognize how the current algorithm allows bias to occur or go unnoticed. Many deep learning systems are initially created without bias detection in mind[3]. As a result, the first few versions of an AI are built without any consideration of bias, rooting the issue in the structure of the algorithm itself[3]. The functionality of the AI is built and tested on data that marginalized certain groups, allowing biases to go undetected and, in some cases, even be amplified[5].
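Auditing the trained model itself often means comparing error rates across groups, for example false-negative rates, since a gap there shows the algorithm failing one group more often than another. The following sketch illustrates such a check on invented data; a real audit would run it on held-out evaluation data with actual group labels.

```python
# Minimal sketch: comparing false-negative rates across groups,
# one way to surface bias baked into a trained model. Data is invented.

def false_negative_rate(y_true, y_pred):
    """Fraction of true positives the model missed."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    misses = sum(1 for t, p in positives if p == 0)
    return misses / len(positives)

def fnr_gap(y_true, y_pred, groups, group_a, group_b):
    """Difference in false-negative rate between two groups."""
    def subset(g):
        pairs = [(t, p) for t, p, gr in zip(y_true, y_pred, groups) if gr == g]
        return [t for t, _ in pairs], [p for _, p in pairs]
    return false_negative_rate(*subset(group_a)) - false_negative_rate(*subset(group_b))

y_true = [1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
print(fnr_gap(y_true, y_pred, groups, "b", "a"))  # 2/3 - 0/3, about 0.67
```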

Beyond the training data, AI systems also absorb the personal biases that each person learns and develops throughout their life. Everyone has different opinions and perceptions of the world, and those opinions and perceptions are put into AI systems unconsciously. Many people inherit their parents’ political, moral, religious, and ethical mindsets, which is part of why a completely unbiased system is so hard to build. We often see that the biases within the world, and more specifically the US, are the same or nearly the same as the biases found within AI systems. Implicit bias refers to the patterns our brains learn from the small number of examples, experiences, and stories we’ve been exposed to[7]. These habits let us forecast decisions: whom to protect, whom to trust, and how to survive. The decisions we unintentionally make therefore stem from stereotypes that have been embedded in our unconscious minds since we were born. A quote from Sukis captures why there are biases in AI. She says, “AI, most often in the form of machine learning models due to their reliance on big data, reveals our biases through the patterns of interaction and forms of discrimination that we embed into it because we are unable to consistently be conscious of or eradicate our own implicit biases as we are creating these systems.”[7] Essentially, AI systems reflect the people who designed them, including the biases and unconscious ideas within those people.

In conclusion, biases in AI come from the same biases found within society. These biases can cause ethical issues, discrimination, and inequality among different groups of people. When an AI system is developed, it is trained on sample data, so there is no telling what biases may be hiding inside it until it undergoes real-world testing. AI learns from training data, which can carry biases rooted in past demographic history, but training data isn’t the only biased input to AI systems. The programmers who develop them, who are mostly men, each carry unconscious biases and preconceived notions about the world, and these personal views tend to slip through the cracks in the code, causing even more bias and inequality. In our opinion, there will never be a completely unbiased, equal, and fair AI technology. There are too many intricate conditions within our societies, creating a never-ending bias loop, because the term “fair” has thousands of different meanings across diverse demographics. Even AI systems created with fairness in mind still carry inequalities that cannot escape human society.

References:

[1] – “AI Bias Will Explode. But Only the Unbiased AI Will Survive.” AI and Bias, IBM Research, https://www.research.ibm.com/5-in-5/ai-and-bias/.

[2] – Manyika, James, Jake Silberg, and Brittany Presten. “What Do We Do About the Biases in AI?” Harvard Business Review, 25 Oct. 2019, https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.

[3] – Hao, Karen. “This Is How AI Bias Really Happens-and Why It’s So Hard to Fix.” MIT Technology Review, 4 Feb. 2019, https://www.technologyreview.com/s/612876/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/.

[4] – “Tackling Bias in Artificial Intelligence (and in Humans).” McKinsey & Company, https://www.mckinsey.com/featured-insights/artificial-intelligence/tackling-bias-in-artificial-intelligence-and-in-humans.

[5] – Smith, Craig S. “Dealing With Bias in Artificial Intelligence.” The New York Times, 19 Nov. 2019, https://www.nytimes.com/2019/11/19/technology/artificial-intelligence-bias.html.

[6] – Taulli, Tom. “How Bias Distorts AI (Artificial Intelligence).” Forbes, 12 Aug. 2019, https://www.forbes.com/sites/tomtaulli/2019/08/04/bias-the-silent-killer-of-ai-artificial-intelligence/#229a7f187d87.

[7] – Sukis, Jennifer. “The Origins of Bias and How AI May Be the Answer to Ending Its Reign.” Medium, Design at IBM, 17 Jan. 2019, https://medium.com/design-ibm/the-origins-of-bias-and-how-ai-might-be-our-answer-to-ending-it-acc3610d6354.

“The potential benefits of artificial intelligence are huge, so are the dangers.”

Dave Waters
