Can AI be racist/sexist?

Human vs. Machine Bias

Human beings are by nature imperfect; research in psychology and studies of human behavior show that we hold implicit biases and use heuristic shortcuts to process information. Prejudices and stereotypes emerge from one’s upbringing, culture, and environment, and can be so pervasive that we may never become fully aware of them. A particularly insidious form of prejudice is racism:

“…any attitude, action or institutional structure which subordinates a person or group because of their color . . . Racism is not just a matter of attitudes; actions and institutional structures can also be a form of racism.”

In both racism and sexism, the subordination of minority groups or women operates through a power dynamic: services, benefits, and other opportunities controlled by the more powerful group are denied to those with less power.

Implicit biases affect our judgment and decision-making, and can lead to assumptions with far-reaching consequences. Machines can show bias in a purely technical sense, but they can also exhibit the biases that humans have, particularly when those biases are implicit and difficult to disentangle from the data. When machines come to these kinds of conclusions, does that mean the machine is sexist or racist?

The short answer is: yes and no. Yes, in the sense that machines exhibit behavior modeled after human behavior, which is by nature imperfect and riddled with problematic assumptions (as we will see below, even in what we assume to be pristine sources). No, in the sense that while machines are learning faster than ever, there is still a long way to go before anyone could deem them to have an independent thought process separate from humans.

There are real-life, horribly egregious examples of machines that appear to be wildly racist or sexist. One shocking example was when Google Photos, using image recognition algorithms, automatically tagged Black people as ‘Gorillas’. It was embarrassing for Google and extremely offensive to Black people, and it also highlighted how pervasive racism is on social media, where some users joked that the algorithm wasn’t wrong.

Another highly public and memorably embarrassing project, this one from Microsoft, was called Tay.AI. Its creators set up a Twitter account seeded with a base of conversational knowledge, with the purpose of conversing with other users on Twitter to build on that foundation of conversation and culture. Without filters or any notion of “common decency” built in, it began within a matter of hours to parrot hate speech and extremely offensive replies, unprompted. So of course it was a disaster, and Microsoft shut it down within a day.

What is Artificial Intelligence?

There are many ways to define or think about AI, but at its core the goal is to emulate human intelligence. Whether or not an AI system is itself deciding to be racist or sexist, it can certainly provide faulty results to a human relying on it, leading to problematic conclusions that, without deeper examination, would create undesirable and unfair outcomes.

Even more troubling is that AI tends to be deployed with the intention of automation at scale. AI can power systems that end up replicating problematic judgments, producing biased real-life decisions and actions that affect large groups at once. Unfortunately, the research shows that this has already happened, and it is a red flag for anyone who would like to use AI to automate processes.

Garbage In, Garbage Out

One growing field of machine learning is Natural Language Processing, which covers applications that glean information from bodies of text (articles, social media, and other forms of human communication), including the creation of conversational bots. A seminal research paper, published in 2013 by Google researchers, outlined a method to relate the words in a body of text and make inferences based on how closely and how frequently they occur with one another.

The method, Word2Vec, was introduced to work with much larger data sets at much lower computational complexity. It also had a surprising secondary benefit: simple vector arithmetic can be used to derive associated words, so that in the analogy “king is to queen as man is to X”, X can be calculated to be “woman”.
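
As a rough illustration of this kind of analogy arithmetic, the sketch below uses the gensim library with the pretrained Google News word2vec model. The file name is an assumption about where the model has been downloaded, and this is a sketch of the general technique, not the original researchers’ code.

```python
# A minimal sketch of word-vector analogy arithmetic, assuming the pretrained
# Google News word2vec model has been downloaded locally (the file path below
# is an assumption, not something specified in this article).
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin.gz", binary=True
)

# "king" is to "queen" as "man" is to X:
# compute vec(queen) - vec(king) + vec(man) and find the nearest word.
result = vectors.most_similar(positive=["queen", "man"], negative=["king"], topn=1)
print(result)  # typically something like [('woman', 0.76)]
```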

The corpus used for this research was chosen under the assumption that it was representative and comprehensive: the input data consisted of news articles written by professional journalists, taken from Google News. However, a follow-up research paper in 2016 discovered and reported a variety of problematic associations.

Researchers Bolukbasi et al. found that biases inherent in the data produced strongly biased inferences, particularly in areas where women are under-represented, such as male-dominated occupations like computer programming.

Two figures in the paper list some of the problematic inferences and associations the researchers came across, including the now-famous analogy “man is to computer programmer as woman is to homemaker”.
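
To make these associations concrete, here is a simplified probe in the spirit of the paper’s analysis, though not its exact methodology: project occupation words onto a “she minus he” direction and see which way they lean. It assumes the `vectors` model loaded in the earlier sketch, and the word list is an arbitrary choice for illustration.

```python
# A simplified probe of gendered associations in word embeddings, in the
# spirit of Bolukbasi et al.'s analysis but not their exact methodology.
# Assumes `vectors` is the gensim model loaded in the earlier sketch.
import numpy as np

def gender_lean(word, vectors):
    """Positive values lean toward 'she', negative values toward 'he'."""
    direction = vectors["she"] - vectors["he"]
    direction = direction / np.linalg.norm(direction)
    w = vectors[word] / np.linalg.norm(vectors[word])
    return float(np.dot(w, direction))

# The occupation list here is an arbitrary illustrative choice.
for occupation in ["programmer", "homemaker", "nurse", "philosopher"]:
    print(occupation, round(gender_lean(occupation, vectors), 3))
```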

Following this discovery, the researchers discussed the serious and harmful implications of propagating problematic associations on a mass scale, which they termed bias amplification.

“To illustrate bias amplification, consider bias present in the task of retrieving relevant web pages for a given query. In web search, one recent project has shown that, when carefully combined with existing approaches, word vectors have the potential to improve web page relevance results. As an example, suppose the search query is cmu computer science phd student for a computer science Ph.D. student at Carnegie Mellon University. Now, the directory offers 127 nearly identical web pages for students — these pages differ only in the names of the students. A word embedding’s semantic knowledge can improve relevance by identifying, for examples, that the terms graduate research assistant and phd student are related. However, word embeddings also rank terms related to computer science closer to male names than female names (e.g., the embeddings give John:computer programmer :: Mary:homemaker). The consequence is that, between two pages that differ only in the names Mary and John, the word embedding would influence the search engine to rank John’s web page higher than Mary. In this hypothetical example, the usage of word embedding makes it even harder for women to be recognized as computer scientists and would contribute to widening the existing gender gap in computer science.”

A hypothetical of this kind was put to the test in a paper published in 2015, in which researchers Datta et al. created an automated tool called AdFisher, “a tool for automating randomized, controlled experiments for studying online tracking.”

In their experiments, the researchers found statistically significant differences depending on whether the Google Ads account profile was set to male or female. In the most staggering example, an ad for a career coaching service for “$200K+” executive positions was shown 1,852 times to the male group compared with 318 times to the female group.
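
To get a feel for why a gap of 1,852 versus 318 impressions is very unlikely to be chance, one can run a simple goodness-of-fit test against an even split. This is only a back-of-the-envelope illustration; it is not the experimental design or significance testing that the AdFisher paper itself uses.

```python
# Back-of-the-envelope check of the 1,852 vs. 318 impression gap.
# This is illustrative only; the AdFisher study uses its own experimental
# design and statistical machinery, not this simple test.
from scipy.stats import chisquare

observed = [1852, 318]               # impressions shown to male vs. female agents
expected = [sum(observed) / 2] * 2   # what a perfectly even split would look like

stat, p_value = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p-value = {p_value:.2e}")
# The vanishingly small p-value says an even-handed ad server would almost
# never produce a split this lopsided by chance alone.
```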

The results of this research demonstrate quite simply the mechanics of neural networks, whose learning depends heavily on the data provided for that learning. When that corpus of data is skewed heavily in one direction or another, as we’ve seen with male-dominated occupations, it becomes difficult to extrapolate fair and balanced projections from it. The classic rule of computing applies: “garbage in, garbage out.” The logic of the machine is straightforward; there is no “magic” inside the box, and the output is simply a function of the input and the programming. If the input is faulty, the output will likely be faulty as well.

As much as we would like to lean more heavily on the automation that neural networks enable, two points emerge from the research that followed these discoveries of unfairness:

  1. The input needs to be reviewed carefully, with the understanding that past data is only a means of projecting into the future and does a poor job of predicting it (note: nothing can truly predict the future!). A simple example of such a review is sketched after this list.
  2. The context in which the data is used, and what sort of results are expected, are equally important in experimental research. Understanding the sociological and historical aspects of a dataset is relevant to the outcomes and objectives the research will be used for.
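
As a concrete (and deliberately crude) example of the kind of input review mentioned in the first point, the sketch below counts how often gendered pronouns appear in the same sentence as a handful of occupation words. The corpus file, the pronoun sets, and the occupation list are illustrative stand-ins rather than anything prescribed by the research discussed here.

```python
# A deliberately crude corpus audit: how often do gendered pronouns co-occur
# in the same sentence as a few occupation words? The corpus file name,
# pronoun sets, and occupation list are illustrative stand-ins.
import re
from collections import Counter

corpus = open("corpus.txt", encoding="utf-8").read()  # hypothetical input file
sentences = re.split(r"[.!?]+", corpus.lower())

male_pronouns = {"he", "him", "his"}
female_pronouns = {"she", "her", "hers"}
occupations = ["programmer", "nurse", "engineer", "teacher"]

counts = Counter()
for sentence in sentences:
    tokens = set(re.findall(r"[a-z']+", sentence))
    for occupation in occupations:
        if occupation in tokens:
            if tokens & male_pronouns:
                counts[(occupation, "male")] += 1
            if tokens & female_pronouns:
                counts[(occupation, "female")] += 1

for occupation in occupations:
    print(occupation,
          "male:", counts[(occupation, "male")],
          "female:", counts[(occupation, "female")])
```

Large skews in a tally like this are a warning sign that any model trained on the corpus is likely to inherit and reproduce the same skew.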

Moving Forward

More than two years after the embarrassing “Gorillas” photo-tagging incident, Google’s workaround was to remove the search term altogether, and that remains the solution for the time being. Reporting on this in 2018 for Wired, Tom Simonite writes:

“A Google spokesperson confirmed that “gorilla” was censored from searches and image tags after the 2015 incident, and that “chimp,” “chimpanzee,” and “monkey” are also blocked today. “Image labeling technology is still early and unfortunately it’s nowhere near perfect,” the spokesperson wrote in an email, highlighting a feature of Google Photos that allows users to report mistakes.

Google’s caution around images of gorillas illustrates a shortcoming of existing machine-learning technology. With enough data and computing power, software can be trained to categorize images or transcribe speech to a high level of accuracy. But it can’t easily go beyond the experience of that training. And even the very best algorithms lack the ability to use common sense, or abstract concepts, to refine their interpretation of the world as humans do.”

Does this mean that we are still unable to generate algorithms and automated systems that are fair? Not necessarily. There is a large body of current research that aims to actively fight against implicit biases in different ways.

In a paper published in April 2018 by researchers at IBM Research-India and IIIT-Delhi, the authors survey different methods of de-biasing AI models and propose one of their own. The paper enumerates some of the recent work by other researchers on de-biasing methods:

  1. De-biasing the training algorithm (Beutel et al., 2013), (Zhang et al., 2016)
  2. De-biasing the model after training, by “fixing” the learned model once training is complete (Bolukbasi et al., 2016); a minimal sketch of this idea appears after this list
  3. De-biasing the data at the source, by ensuring the corpus of data used for input is sufficiently diverse (Reddy and Knight, 2016)
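
To make the second approach more concrete, here is a minimal sketch of the “neutralize” idea from Bolukbasi et al. (2016): removing the component of a word vector that lies along a gender direction. Their full method estimates that direction from many word pairs and also “equalizes” pairs such as he/she; the single-pair direction and the choice of “programmer” below are simplifications for illustration, reusing the `vectors` model from the earlier sketches.

```python
# A minimal sketch of the "neutralize" step from Bolukbasi et al. (2016):
# subtract the component of a word vector that lies along a gender direction.
# The full method uses many word pairs plus an extra "equalize" step; this
# simplification estimates the direction from the single pair she/he.
import numpy as np

def neutralize(vector, gender_direction):
    """Remove the projection of `vector` onto the (unit) gender direction."""
    g = gender_direction / np.linalg.norm(gender_direction)
    return vector - np.dot(vector, g) * g

# Assumes `vectors` is the word2vec model loaded in the earlier sketches.
g = vectors["she"] - vectors["he"]
g = g / np.linalg.norm(g)

original = vectors["programmer"]
debiased = neutralize(original, g)

print("lean before:", round(float(np.dot(original / np.linalg.norm(original), g)), 3))
print("lean after: ", round(float(np.dot(debiased / np.linalg.norm(debiased), g)), 3))  # ~0.0
```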

The authors themselves propose a system that identifies gender-neutral and gender-specific occupations and provides counterfactual evidence as guidance to the user. As they explain:

“… given a sentence: [ Jane is a dancer ] The tool is able to identify that dancer is a gender neutral occupation with a lot of counter-factual evidences of males being dancers in the past. It presents these pieces of evidences to the user to be able to fix this sentence in a guided manner.”

AI-supported services and activities have gradually seeped into the nooks and crannies of even the most mundane parts of our daily lives. No longer restricted to the tech-savvy early adopter, activities that rely on AI are now readily accessible to many: speaking to your voice assistant of choice about routine or household tasks (“Hey Alexa/Google/Siri, show me the weather”), doing a spot of online shopping, or playing around with increasingly sophisticated consumer facial recognition and mapping software such as the filters available on Snapchat.

If we reflect on the idea that we are building machines to think like humans, we can consider the trajectory of human learning from childhood to adulthood. From a young age, humans are socialized; we do not learn independently of others, and all human activity is social by nature. Within that social domain there are unwritten and sometimes unspoken rules of engagement, commonly understood ways of interacting with each other that society deems acceptable.

To extend this analogy of building and emulating human brains, we are still in the infancy of artificial (or synthetic) intelligence. It is still relatively easy to tell when you are chatting with a bot online, because the limits of its understanding and the constraints on its range reveal its artifice. To the extent that we would like to replicate human thinking, we have both a desire and an obligation to build the better, higher-functioning parts of that thinking. When we rely on machines to make decisions small or large, it seems safe to say that we should all hope those decisions are moral, just, and well-informed.

References

  • Proceedings of Machine Learning Research, vol. 81, proceedings.mlr.press/v81/.
  • Barr, Alistair. “Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms.” The Wall Street Journal, Dow Jones & Company, 2 July 2015, blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/.
  • Bolukbasi, et al. “Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings.” arXiv, 21 July 2016, arxiv.org/abs/1607.06520.
  • Datta, Amit, et al. “Automated Experiments on Ad Privacy Settings.” Proceedings on Privacy Enhancing Technologies, De Gruyter Open, 25 June 2015, doi.org/10.1515/popets-2015-0007.
  • Dwork, et al. “Fairness Through Awareness.” arXiv, 29 Nov. 2011, arxiv.org/abs/1104.3913.
  • Greenwald, Anthony G. et al. “Implicit Bias: Scientific Foundations.” 94 Calif. L. Rev. 945, 2006, doi: 10.15779/Z38GH7F
  • Hunt, Elle. “Tay, Microsoft’s AI Chatbot, Gets a Crash Course in Racism from Twitter.” The Guardian, Guardian News and Media, 24 Mar. 2016, www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter.
  • Kleinberg, et al. “Selection Problems in the Presence of Implicit Bias.” arXiv, 4 Jan. 2018, arxiv.org/abs/1801.03533.
  • Madaan, et al. “Generating Clues for Gender Based Occupation De-Biasing in Text.” arXiv, 11 Apr. 2018, arxiv.org/abs/1804.03839.
  • Pierson, Emma. “Demographics and Discussion Influence Views on Algorithmic Fairness.” arXiv, 5 Mar. 2018, arxiv.org/abs/1712.09124.
  • Simonite, Tom. “When It Comes to Gorillas, Google Photos Remains Blind.” Wired, Conde Nast, 18 Jan. 2018, www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/.
  • Sweeney, Latanya. “Discrimination in Online Ad Delivery.” SSRN Electronic Journal, 2013, doi:10.2139/ssrn.2208240.
  • Tversky, A., and D. Kahneman. “Judgment under Uncertainty: Heuristics and Biases.” Science, vol. 185, no. 4157, 1974, pp. 1124–1131., doi:10.1126/science.185.4157.1124.
  • Zhao, et al. “Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-Level Constraints.” arXiv, 29 July 2017, arxiv.org/abs/1707.09457.
