Hue-mans: Differentiation vs Discrimination / Humans + Tech - #32
As we struggle to treat each other with equality, we are also transferring our biases to machines and algorithms, where they may be a lot more difficult to identify and fix.
Hi,
Reading the news in the last few days has been particularly depressing. The murder of George Floyd was the straw that broke the camel’s back. The #BlackLivesMatter protests against police brutality and racial discrimination in the USA have attracted worldwide attention. Similar protests have taken place in countries around the world [CNN], including here in Kenya, in front of the US Embassy [Daily Nation].
The topic of discrimination has been on my mind for a long time. I could never find the right way to articulate these thoughts, but the events of the last few days have pushed me to write them down. This subject is intricately intertwined with technology: the machine learning and AI tools being deployed today are trained on historical data, and they are learning our biases.
It’s a complex subject both on the human side as well as the tech side. I don’t think it’s ever easy to press publish on topics like this because you never feel like you’re doing enough justice to the subject. But it’s important to start a dialogue and there is no time like the present. I hope you’ll read through and share your thoughts.

The events over the last few days reminded me of these lyrics from the song #WHEREISTHELOVE (2016) [YouTube], by the Black Eyed Peas:
But if you only got love for your own race
Then you're gonna leave space for others to discriminate
And to discriminate only generates hate
And when you hate, then you're bound to get irate
Madness is what you demonstrate
And that's exactly how hate works and operates
Man, we gotta set it straight
Take control of your mind, just meditate
And let your soul just gravitate to the love
So the whole world celebrate it
[…]
What happened to the love and the values of humanity?
What happened to the love and the fairness and equality?
Instead of spreading love we're spreading animosity
Lack of understanding leading us away from unity
Lack of understanding leading us away from unity
This post is about racial discrimination and discrimination of all types: age, tribe, caste, physical appearance, disability, family status, gender, genetics, nationality, colour, race, ethnicity, religion, sexual orientation, political ideology, societal class, and many more.
It is important to address all forms of discrimination because we need to understand all the ways that we are dividing ourselves. Unity is impossible without covering all forms of division.
Let’s talk about the human side and then discuss how our biases are translating to technology and how this could lead to an even greater divide if we don’t fix it now.
Differentiation vs Discrimination
Let’s start with the basics. Humans are commonly said to perceive the world through five basic senses – vision, hearing, smell, touch, and taste – but we actually have between 14 and 20 senses [How Stuff Works].
Our senses help us to understand the world around us. Our sense of sight helps us see colours. Here is where differentiation comes in. When you see a black person, a brown person, and a white person, it’s your sense of sight that helps you make that differentiation. It is only when you turn that differentiation into discrimination that the problem arises.
Consider a scenario where you have to recommend the right type of sunscreen to people. Being able to differentiate between black, brown, and white skin helps you recommend the right strength of sunscreen. But that differentiation of colour has no bearing on how much someone should be paid for the same job, how they are treated within the justice system, or whether they are eligible to rent an apartment. When the colour of their skin is used to make those decisions, it is discrimination.
I once thought to myself that if we were all blind, racism would no longer exist. But I quickly realized we would instead discriminate based on our other senses – by accent, by smell, or by the way people feel to the touch. So the problem is not any particular sense; it’s the way we process the information our senses deliver to us.
Understanding how we process information
Our brains process enormous amounts of information every second. To be as efficient as possible, they take the information coming from our senses and turn it into patterns and mental models, which we then reuse to make future decisions. This is how we group information – and how we end up with stereotypes. The patterns and mental models we create are influenced by many inputs: our culture, our society, our education, our experiences, the media we consume, our role models, and much more.
"We seldom realise, for example, that our most private thoughts and emotions are not actually our own. For we think in terms of languages and images which we did not invent, but which were given to us by our society.” — Alan Watts
Stereotypes don’t arise only from our personal experiences; they are also derived from the media and from other people’s experiences. When you hear the word “terrorist,” what image forms in your mind? You may never have met a terrorist in real life, but most likely you pictured a Middle Eastern Muslim man, because that is what the media has trained us to expect. If I tell you the story of a nurse, you are most likely imagining a woman in that role and not a man. If I mention a cybersecurity expert, you are most likely imagining a man and not a woman.
When we introspect on where our decision-making comes from, it is critical to determine whether those decisions are based on differentiation or discrimination. If we find that it is discrimination, we have to make a huge effort to unlearn the patterns and mental models that led us down that path of thinking, and then build new mental models free of those biases. It is extremely difficult, but necessary if we are to have a society that treats everyone as equals.
We need to strive to treat people as individuals instead of making assumptions based on the stereotype groups our mental models slot them into.
We are transferring our biases to technology
Centuries of human bias are stored in historical data. We are training AI and machine learning algorithms on this data, and they are picking up our biases. This sets a dangerous precedent, because these tools are being deployed in law enforcement, HR systems, government services, courts, education, health care, and many other domains. Left unchecked, these algorithms will continue to discriminate from behind the opaque box that is AI, and that will be much more difficult to overcome than purely human bias and discrimination.
Racist and sexist computer vision algorithms
Computer vision algorithms are among the worst offenders. In an article titled “The racism of technology – and why driverless cars could be the most dangerous example yet” [The Guardian], Alex Hern discusses how the computer vision algorithms in driverless cars are worse at recognising dark-skinned pedestrians.
More recently, Google’s computer vision API judged a hand-held thermometer in a black person’s hand to be a firearm, while it judged a similar device held by a light-skinned Asian person to be an electronic device.
Joy Buolamwini, a computer scientist and founder of the Algorithmic Justice League, is working to fight bias in machine learning, a phenomenon she calls the “coded gaze.”
She wrote a beautiful poem entitled “AI, Ain’t I A Woman?” – which you can watch below – about how AI systems from the leading tech companies failed to correctly classify the faces of Oprah Winfrey, Michelle Obama, and Serena Williams. I also highly recommend watching her TEDx talk on how she’s fighting bias in algorithms [TED].
A study by the National Institute of Standards and Technology (NIST) found that algorithms currently on the market can misidentify members of some groups up to 100 times more frequently than others [The Verge].
NIST says it found “empirical evidence” that characteristics such as age, gender, and race impact accuracy for the “majority” of algorithms. The group tested 189 algorithms from 99 organizations, which together power most of the facial recognition systems in use globally.
The findings provide yet more evidence that many of the world’s most advanced facial recognition algorithms are not ready for use in critical areas such as law enforcement and national security. Lawmakers called the study “shocking,” The Washington Post reports, and called on the US government to reconsider plans to use the technology to secure its borders.
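To see what a figure like “up to 100 times more frequently” means in practice, here is a minimal sketch, with entirely invented group names and numbers (not NIST’s data), of how a per-group false-match audit might be tallied:

```python
# Hypothetical audit (invented numbers, not NIST data): compare false-match
# rates across demographic groups for a face recognition system.

# (group, impostor comparisons attempted, false matches observed)
results = [
    ("group_a", 100_000, 20),
    ("group_b", 100_000, 600),
    ("group_c", 100_000, 150),
]

# False-match rate = false matches / impostor comparisons
rates = {group: false / trials for group, trials, false in results}

best = min(rates.values())
for group, rate in sorted(rates.items()):
    print(f"{group}: FMR = {rate:.4%}  ({rate / best:.0f}x the best-served group)")
```

A disparity like this only becomes visible when error rates are broken out by group rather than averaged over the whole test set.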
Discrimination in HR by AI tools
In October 2018, a Reuters report revealed that an AI hiring tool Amazon was building gave preference to men [Reuters]. The tool was trained on 10 years of Amazon’s hiring data, and because men dominated the technology roles in that history, the tool learned to treat male candidates as preferable and put female candidates at an immediate disadvantage. Amazon never deployed the tool, even after trying to remove the gender bias, because it couldn’t be sure the tool wouldn’t discriminate on other factors.
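Here is a rough sketch of how that happens, using a toy logistic regression trained on invented data rather than Amazon’s actual system: when the historical labels are biased against a feature that acts as a proxy for gender, the model faithfully learns to penalise it.

```python
# Toy illustration (not Amazon's system): a resume screener trained on a
# biased hiring history learns to penalise a feature that proxies for gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical features extracted from resumes.
years_experience = rng.normal(5, 2, n)
mentions_womens_group = rng.integers(0, 2, n)   # acts as a proxy for gender
skill_score = rng.normal(0, 1, n)

# Biased historical label: past hiring penalised the proxy feature even
# though it says nothing about ability to do the job.
hired = (skill_score + 0.5 * years_experience
         - 1.5 * mentions_womens_group
         + rng.normal(0, 1, n)) > 2.0

X = np.column_stack([years_experience, mentions_womens_group, skill_score])
model = LogisticRegression().fit(X, hired)

# The model reproduces the historical bias: a strongly negative weight on
# the proxy feature.
for name, coef in zip(
    ["years_experience", "mentions_womens_group", "skill_score"], model.coef_[0]
):
    print(f"{name:>22}: {coef:+.2f}")
```

Even if the explicit proxy column is removed, other correlated features can reconstruct it, which is part of why Amazon couldn’t be confident the edited tool was free of bias.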
Most HR software developers that make use of AI market it as a solution to human bias. SHRM (the Society for Human Resource Management) has outlined the benefits and drawbacks of AI in HR, as well as the potential risks of discrimination, in an article titled “AI: Discriminatory Data In, Discrimination Out” [SHRM].
Discrimination in health care by AI tools
A study reported in Nature found that millions of black people have been affected by racial bias in health-care algorithms widely used in US hospitals [Nature].
Ziad Obermeyer, who studies machine learning and health-care management at the University of California, Berkeley, and his team stumbled onto the problem while examining the impact of programmes that provide additional resources and closer medical supervision for people with multiple, sometimes overlapping, health problems.
When Obermeyer and his colleagues ran routine statistical checks on data they received from a large hospital, they were surprised to find that people who self-identified as black were generally assigned lower risk scores than equally sick white people. As a result, the black people were less likely to be referred to the programmes that provide more-personalized care.
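The mechanism the researchers identified was a proxy-label problem: the algorithm was trained to predict future healthcare costs as a stand-in for health needs, and because less money has historically been spent on the care of black patients, equally sick black patients looked lower-risk. Here is a minimal sketch of that effect, with entirely invented numbers rather than the study’s data or model:

```python
# Toy illustration of a proxy-label problem (invented numbers): a model
# trained to predict healthcare *cost* instead of health *need* scores
# equally sick patients from a historically under-served group lower.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 20_000

group_b = rng.integers(0, 2, n)            # 1 = historically under-served group
need = rng.gamma(2.0, 2.0, n)              # true health need, same distribution

# Features the model sees: clinical markers plus utilisation, which reflects
# unequal access to care rather than unequal need.
clinical = need + rng.normal(0, 1.0, n)
visits = need * np.where(group_b == 1, 0.6, 1.0) + rng.normal(0, 0.5, n)

# Proxy label: past spending is lower for group B at the same level of need.
cost = need * np.where(group_b == 1, 0.6, 1.0) + rng.normal(0, 0.5, n)

X = np.column_stack([clinical, visits])
risk = LinearRegression().fit(X, cost).predict(X)

# Among genuinely high-need patients, group B ends up with lower risk scores,
# so fewer of them would be referred to the extra-care programme.
sick = need > np.quantile(need, 0.9)
print("mean risk score (sick, group A):", risk[sick & (group_b == 0)].mean())
print("mean risk score (sick, group B):", risk[sick & (group_b == 1)].mean())
```

The model is perfectly “accurate” at predicting its proxy target; the unfairness comes from choosing the wrong target in the first place.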
+ Healthcare Algorithms Are Biased, and the Results Can Be Deadly [PC Mag]. This article lists some of the efforts underway to fix these issues.
Some efforts are underway to address bias and fairness in AI-based healthcare systems. Last year’s NeurIPS conference ran a workshop to address fairness in machine learning for health applications. The workshop included several papers that explored the assessment of algorithmic fairness, discovering proxies, and calibrating algorithms for subpopulations. And the Alliance for Artificial Intelligence in Healthcare, a nonprofit organization founded in December 2018, brings together developers, device manufacturers, researchers, and other professionals to advance the safe and fair use of AI in medicine.
Discrimination by AI in the criminal justice system
Rachel Cicurel, a public defender in Washington, D.C., was representing a black teenager and had an agreement with the prosecutor that probation would be a fair punishment. But at the last minute, her client was deemed a high risk for criminal activity by an algorithm, prompting the prosecution to recommend juvenile detention instead [The Atlantic].
Cicurel was furious. She challenged the assessment and demanded to see its underlying methodology. What she found made her even more troubled: her client’s heightened risk score was based on several factors that seemed racially biased, including the fact that he lived in government-subsidized housing and had expressed negative attitudes toward the police. “There are obviously plenty of reasons for a black male teenager to not like police,” she told The Atlantic.
Cicurel and her team dug further and realized the algorithm had never been properly validated, and the judge eventually threw out the test results. But by then the algorithm had been in use for over 10 years and had affected the lives of thousands of people.
+ Hamid Khan, the founder of the Stop LAPD Spying Coalition, has been working for 35 years to eradicate racist policing technologies [MIT Technology Review]. The coalition has most recently been advocating against predictive policing, and on April 21st this year, the LAPD announced an end to all predictive policing programs.
In the latest edition of The Algorithm, her newsletter for the MIT Technology Review, which features the interview with Hamid Khan, Karen Hao also writes [The Algorithm]:
Like many of you, I am still on a journey of learning how to be a better ally and how to be anti-racist. For those of you who are searching for resources, here’s one place to start. So much of what I’ve written about over the last year and a half of producing The Algorithm is built on the foundational work of black researchers and scholars: MIT researcher Joy Buolamwini and AI Now fellow Deborah Raji’s work revealing discrimination in face recognition, which has changed how companies build their systems and how the US government audits them. AI Now policy research director Rashida Richardson’s work uncovering the systemic practice of corrupted, even falsified, data being used to train policing algorithms. Harvard fellow Mutale Nkonde’s work supporting the writing of critical legislation for regulating algorithmic systems and deepfakes. There are so many others: Ruha Benjamin, Timnit Gebru, Rediet Abebe, Abeba Birhane, William Isaac, Yeshimabeit Milner. Follow them, read their work, engage in it deeply, and if you are able, consider donating to Black in AI.
We need explainable AI
Many AI and machine learning algorithms are opaque boxes. We don’t know how they arrive at their conclusions, and that is a big problem [The Burn-In]. If we can’t audit their decision-making logic, it is much harder to correct their biases.
Just as we need to introspect and fix our internal mental models to eliminate discrimination, we need to eliminate AI bias by understanding how these systems arrive at their conclusions. Explainable AI can help us do that [Enterprise AI].
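One family of tools that helps here is post-hoc feature attribution. As a hedged sketch, not tied to any particular vendor’s product, the snippet below uses scikit-learn’s permutation importance on an invented loan-approval model to see which features the model actually leans on; heavy reliance on a proxy for a protected attribute would be an immediate red flag. The dataset and feature names are hypothetical.

```python
# Minimal sketch: auditing which features a trained model relies on.
# If a proxy for a protected attribute dominates, that is a red flag.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical loan-approval data where "postcode_risk" acts as a proxy
# for a protected attribute (entirely invented for illustration).
income = rng.normal(50, 15, n)
debt_ratio = rng.uniform(0, 1, n)
postcode_risk = rng.integers(0, 2, n)
approved = (income / 50 - debt_ratio - 0.8 * postcode_risk
            + rng.normal(0, 0.3, n)) > 0

X = np.column_stack([income, debt_ratio, postcode_risk])
X_train, X_test, y_train, y_test = train_test_split(X, approved, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Permutation importance: how much does accuracy drop when each feature is
# shuffled? Large drops mean the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(["income", "debt_ratio", "postcode_risk"], result.importances_mean):
    print(f"{name:>14}: {imp:.3f}")
```

An audit like this doesn’t fully explain individual decisions, but it only becomes possible when the people deploying a model are required to open it up for inspection.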
What you can do to help
Differentiate, don’t discriminate. Introspect and fix your mental models when you find yourself discriminating in any way. Treat everyone with kindness.
Language matters. Avoid terms like blacklist/whitelist and black hat/white hat, which always depict black as bad and white as good [UK National Cyber Security Centre]; such language subconsciously reinforces that association in other contexts.
If you can afford to, donate to organizations that are working to eliminate discrimination of any type, both on the human side [the Strategist] and the technology side. They need all the help they can get.
Teach children to treat everyone with kindness regardless of gender, age, race, or any other differentiating feature. These problems are going to take a few generations to fix.
Demand equality from your politicians and local authorities, and push them to actively audit and regulate the AI tools and algorithms deployed in public services, monitoring them for discrimination of any kind.
Quote of the week
Racism in America is like dust in the air. It seems invisible — even if you’re choking on it — until you let the sun in. Then you see it’s everywhere. As long as we keep shining that light, we have a chance of cleaning it wherever it lands. But we have to stay vigilant, because it’s always still in the air.
—Kareem Abdul-Jabbar, “Op-Ed: Don’t understand the protests? What you’re seeing is people pushed to the edge” [Los Angeles Times]
I wish you equality, fairness, and peace.
Neeraj