
Stereotyping and Bias in AI – How it happens and 5 ways to combat it


In our last blog, we talked about societal biases and how financiers can and should be at the forefront of fighting them using AI technology. Unlike humans, machines hold no beliefs of their own, which makes them a potent weapon in the fight against systemically ingrained inequities. There are, however, two sides to this coin. While machines have no overt prejudices, they can, unfortunately, be taught them. With AI set to play a bigger role in modern credit management systems, this is a major cause for concern. Left unchecked, it can lead to technology amplifying historical prejudices instead of mitigating them.


If AI bias is left unchecked, it can transport society’s past prejudices into the future



How does AI become biased?


Recently, Apple’s credit-card scoring algorithm came under scrutiny when it was reported that the system gave women much lower credit limits than men. This was the case even when the women had better credit scores. So, how can a fact-and-logic-based system of ones and zeros become biased? The answer lies in the data running through its system.


AI models need data to learn and make decisions. The danger with data is that it oftentimes reflects society’s biases. And when you use this tainted information to train and run self-learning systems, it naturally leads to biased outputs. After all, what you put in often determines what comes out. As a computer geek would put it: garbage in, garbage out. Or, in this case, bias in, bias out.
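To make the "bias in, bias out" idea concrete, here is a minimal, purely illustrative sketch. It uses synthetic data and scikit-learn, and is not any real lender's model: a simple credit-approval model trained on historically skewed decisions learns to reproduce that skew for otherwise identical applicants.

```python
# Purely illustrative: synthetic data showing how a model trained on biased
# historical decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

group = rng.integers(0, 2, n)          # two demographic groups, 0 and 1
credit_score = rng.normal(650, 50, n)  # identical score distributions for both groups

# "Historical" approvals: past decision-makers unfairly docked group 1 by 40 points
approved = (credit_score + np.where(group == 1, -40, 0)
            + rng.normal(0, 10, n) > 650).astype(int)

# Train on the biased history, with group membership available as a feature
X = np.column_stack([credit_score, group])
model = LogisticRegression(max_iter=2000).fit(X, approved)

# Two applicants with identical credit scores, differing only by group
applicants = np.array([[660, 0], [660, 1]])
print(model.predict_proba(applicants)[:, 1])  # group 1 gets a noticeably lower approval probability
```

Nothing in the code is malicious; the model simply learned the pattern it was shown.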


A prime example of biased data producing equally biased results dates all the way back to the 1980s (sadly showing that this is not a new problem). St. George’s Hospital Medical School, a prestigious medical school in the UK, used a computer program to help pre-screen applicants for admission interviews. The system’s designers used the school’s prior admission and employment records as training data for the program. These records consisted overwhelmingly of Caucasian males – a reflection of prevalent, historical preferences in the medical profession. Unsurprisingly, the program turned out to be biased against women and ethnic minorities and often screened them out of the selection process completely. As you can see, the underlying cause of AI bias here was not technological but societal.


Another infamous example is Tay, an AI chatbot designed by Microsoft to interact with Twitter users. With an inbuilt learning function, Tay was supposed to become smarter with every human interaction. However, thanks to a coordinated attack by a group of unscrupulous Twitter users, the chatbot learned to be biased instead. It put out a barrage of racist and sexist tweets, causing Microsoft to pull the plug just 16 hours after it debuted on the platform. In less than a day, the bot went from a neutral entity to an offensively biased one.


Other reasons for bias in AI


Apart from using historically biased data, prejudices can also creep into an AI system when programmers train it on either too little data or data that over-represents some parameters. For example, using only data about male doctors to train an AI system will cause it to erroneously conclude that all doctors are male. AI bias can also stem from faulty algorithm design. Everything from the variables programmers use to the categories and thresholds they set can introduce bias into the system. For example, if programmers give too much weight to one parameter and trivialize others, it will naturally lead to skewed results.


5 Ways to combat AI bias


Because historical data often reflects ingrained societal inequalities, the problem of AI bias is a tricky one to solve. But it’s not all gloom and doom. Despite what Hollywood might say, we are not heading towards a future where biased AI machines rule the world. On the contrary, AI systems can learn to operate without prejudice. But it takes constant vigilance and a continuous cycle of testing, retraining, and parameter optimization to get them there. Here are five ways to combat bias in AI:



1. Use training data that represents reality


The world is becoming an increasingly diverse place. One way to combat AI bias is to use training data that accurately represents this diversity. Over- or under-representing a group will almost certainly lead to sample bias. Of late, facial recognition systems have come under scrutiny for this exact problem. Most of these systems have proven inept at recognizing women of color. For example, Rekognition, Amazon’s image and video analysis software, infamously misclassified Oprah Winfrey as male.


One reason for this is the lack of diverse facial recognition training datasets. An MIT researcher found that ‘Labeled Faces in the Wild’, a popular facial recognition dataset, contained training images that were around 70% male and over 75% white. When you train an AI system on such a non-diverse dataset, it will find it harder to recognize faces that deviate from its training. Having an inclusive dataset is, therefore, imperative to reducing AI bias and improving accuracy.
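As a practical starting point, a representation audit can be as simple as counting group shares in the training set. The sketch below assumes a pandas DataFrame with hypothetical 'gender' and 'skin_tone' columns; the file name, column names, and the 10% floor are placeholders for illustration, not a standard.

```python
# Illustrative representation audit; file name and column names are hypothetical.
import pandas as pd

train_df = pd.read_csv("face_training_set.csv")

floor = 0.10  # arbitrary example threshold for "under-represented"
for col in ["gender", "skin_tone"]:
    shares = train_df[col].value_counts(normalize=True)
    print(f"\n{col} distribution:\n{shares.round(3)}")
    for group, share in shares.items():
        if share < floor:
            print(f"Warning: {col}={group} makes up only {share:.1%} of the training data")
```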


2. Select the right variables


AI systems need human programmers to define the algorithm’s purpose, set its working parameters, and dictate which variables to consider while finding patterns and making decisions. These guidelines can sometimes be the difference between fair and unfair AI outputs. An infamous example of this is a healthcare algorithm used widely across the United States to assign personalized care to chronically ill patients. Researchers found that the algorithm was more likely to recommend non-minorities and people of higher socioeconomic status for programs aimed at providing extra health care.


The researchers also discovered that the system’s programmers had used healthcare costs as an important decision-making parameter, thereby training the algorithm to assume that higher healthcare costs always meant greater health needs. While this can often be the case, healthcare costs do not always reflect a person’s wellbeing. Other factors such as upbringing, religion, economic status, and race also play a part in determining how much a person is willing or able to spend on their health. According to the AARP, minorities spend less on their health than others, for reasons ranging from a general distrust of the healthcare system to economic circumstances.


As a result of using this faulty variable, only around 18% of the patients the algorithm flagged for extra care belonged to minority groups. Thankfully, when the company became aware of the bias in its system, it worked with the researchers to find variables that better reflect the ground reality. The result was a system with over 80% less bias than its previous iteration.


3. Test, validate, retrain, repeat


Bias can creep into AI systems even when programmers exclude sensitive variables such as gender, ethnicity, socioeconomic status, and race. That’s because plenty of other variables can serve as proxies for these sensitive parameters. In the above example of the healthcare algorithm, healthcare costs served as a proxy for race. Similarly, a person’s shopping habits can serve as a proxy for gender, since men and women generally have different needs and tastes. In this way, even an algorithm that is programmed to be blind to sensitive variables can unintentionally discriminate against certain groups.
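One simple way to catch such proxies before deployment is to test how well each "neutral" feature predicts a protected attribute on its own. The sketch below is illustrative only; the file and column names ('applicants.csv', 'monthly_beauty_spend', 'gender') are made up for the example.

```python
# Illustrative proxy check: does a supposedly neutral feature leak gender?
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("applicants.csv")              # hypothetical data

X = df[["monthly_beauty_spend"]]                # candidate proxy feature
y = (df["gender"] == "female").astype(int)      # protected attribute (kept out of the production model)

# If this feature alone predicts gender much better than chance (AUC ~0.5),
# treat it as a proxy and review, transform, or drop it.
auc = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc").mean()
print(f"Proxy check AUC: {auc:.2f}")
```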


Proxy effects like these are what make testing and validating an algorithm’s results such a crucial step in the fight against AI bias. Testing an AI system’s outputs can reveal bias and allow programmers to discover negative feedback loops that make the system increasingly prejudiced over time. While testing, it is important to audit for both accuracy and fairness. Prioritizing one over the other invariably leads to an incomplete examination of the variables and conditions involved, which in turn can hide the kinds of errors that introduce bias into the system.
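What that audit can look like in practice: the sketch below computes per-group accuracy alongside per-group approval rates (a simple demographic-parity check) on a labeled holdout set. The column names and toy data are hypothetical.

```python
# Illustrative fairness-and-accuracy audit on a holdout set with hypothetical columns.
import pandas as pd
from sklearn.metrics import accuracy_score

def fairness_audit(results: pd.DataFrame) -> pd.DataFrame:
    """Per-group accuracy and approval rate for a frame with
    'group', 'actual', and 'pred' columns (names are illustrative)."""
    rows = []
    for g, sub in results.groupby("group"):
        rows.append({
            "group": g,
            "accuracy": accuracy_score(sub["actual"], sub["pred"]),
            "approval_rate": sub["pred"].mean(),  # demographic-parity check
        })
    return pd.DataFrame(rows)

# Toy example with made-up predictions
sample = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B"],
    "actual": [1, 0, 1, 1, 0, 1],
    "pred":   [1, 0, 1, 0, 0, 1],
})
print(fairness_audit(sample))
```

Large gaps in either column across groups are a signal to dig deeper, not a verdict on their own.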


In addition, it is important to conduct as much real-world testing as is feasible. Real-world situations can surface many unforeseen variable proxies. Discovering these unexpected and unwanted connections helps designers pinpoint the problematic relationships, remove them from the evaluation process, and retrain the model so that it makes decisions blind to them. This constant cycle of testing and retraining can neutralize sensitive parameters to the point where they no longer cause harm in the real world.


4. Make AI systems explainable


AI systems differ significantly from standard computer programs in that they do not need a software expert to explicitly write every line of code for them. These algorithms are incredibly sophisticated, self-learning systems. However, an AI system’s sophistication is both a blessing and a curse. While its power ensures that AI can process information and reveal hidden connections in the blink of an eye, its inherent complexity makes it harder to see how the system makes the decisions it does. Explainable AI demystifies this black box and reveals the path an algorithm took to arrive at a particular conclusion (for more details on why and how this happens, read our blog on explainable AI). This makes it easier to identify and root out any bias-backed reasoning within the system.
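Full explainable-AI tooling is beyond the scope of this post, but even a simple, model-agnostic technique such as permutation importance can show which inputs a model leans on. The sketch below uses synthetic data and hypothetical feature names; it is an illustration, not our production approach.

```python
# Illustrative explainability check: which features drive the model's decisions?
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["income", "debt", "zip_code_feature"]  # hypothetical inputs
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.1, size=1000) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in zip(feature_names, result.importances_mean):
    # Heavy reliance on a location-derived feature, for instance, can flag a potential proxy for race
    print(f"{name}: {importance:.3f}")
```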


5. Keep diverse humans in the loop


AI systems may be incredibly sophisticated, but they still cannot detect and counter bias as human beings can. So, to enhance transparency in AI, human governance and oversight are essential. Humans need to be at the center of all the remedial steps such as improving data representation, quality, and explainability. In addition, only humans can ensure that an AI system is behaving according to its design parameters by testing and verifying the results it generates. This is even more important when the system is making potentially life-altering decisions. In such scenarios, having a human make the final decision can vastly improve accountability.


Obviously, this is a double-edged sword. What if the humans in the loop are themselves operating on conscious or unconscious biases? There is no easy answer here. But one way to mitigate the risk is to improve diversity within the communities that build these AI-powered technologies. The tech industry is infamous for its diversity problem: minorities make up only around 3% of the tech workforce, and only around 12% of ML researchers are women. As the people building the machines diversify, they bring with them a broader worldview, which will naturally help in developing AI systems that are more inclusive and representative of humanity as it is.


How TRaiCE ensures bias-free efficiency


At TRaiCE, we take our role as innovators and path-breakers very seriously. In a world where gender and minority bias is a serious issue, it is important to us that our solution treats everyone equally. That’s why we ensure our proprietary algorithms remain bias-free by explaining the reasoning behind their outputs and keeping humans in the decision-making loop. Our algorithms can also process both structured and unstructured data – an essential feature of our product – giving investors a more socially inclusive way to monitor their borrowers. This way, our customers can make smarter yet fairer investment decisions.



Conclusion – Don't transport past biases into the future


Andrew McAfee, co-founder of the MIT Initiative on the Digital Economy, once said, “If you want the bias out, get the algorithms in”. In light of everything we have discussed above, this statement is a half-truth. AI systems can make processes fairer and less biased. But if we’re not careful, they can just as easily transport society’s past biases into the future. The good news is that we can program prejudice out of an AI system (sadly, probably more easily than we can get it out of a human mind). However, we need to invest the time and effort to do just that.



