Bias in AI

Joshua Bull
2 min read · Feb 23, 2021

In Weapons of Math Destruction, Cathy O’Neil describes how big data is used and abused by the powerful institutions of society. O’Neil defines a model as an abstract representation of some process, and she coins the term “Weapon of Math Destruction” for a model with three characteristics: opacity, scale, and damage. Opacity means the model’s workings are hidden from the people it judges, scale means it grows to touch ever larger numbers of people and decisions, and damage means the harm it inflicts once deployed at that scale.

One of the primary reasons these models can have such a negative impact on the people they are used on is that the algorithms carry bias. Racial bias shows up, for example, in software used to help determine the length of a prison sentence; the unjust use of the LSI-R is one such case. In this questionnaire, inmates are asked about factors meant to predict whether they will reoffend. Because it asks about an inmate’s past, and because the American law enforcement system has a long history of targeting minorities and the poor, the questionnaire ends up biased against those same people, placing those from lower socioeconomic backgrounds in a higher risk category. The bias arises because many of the questions concern circumstances the person has little control over, which is a problem because the instrument is supposed to isolate the things the person can actively control. This is an example of how a “Weapon of Math Destruction” can use big data and algorithms to actively discriminate against people of lower socioeconomic background and of a minority race. Hopefully, in the future, bias-free algorithms can be used to determine the length of a prison sentence.
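
To make that mechanism concrete, here is a minimal, entirely hypothetical sketch of a questionnaire-based risk score. The items and weights are invented for illustration (they are not the actual LSI-R instrument, whose content is proprietary), but they show how a score that weights circumstances a person cannot control will push people from poorer backgrounds into a higher risk category:

```python
# Hypothetical illustration only: a toy "risk score" built from yes/no
# questionnaire answers. Item names and weights are invented for this
# example and are NOT the real LSI-R.

def risk_score(answers: dict) -> str:
    """Return a coarse risk category from yes/no questionnaire answers."""
    # Several of these factors reflect a person's environment, not their choices.
    weights = {
        "prior_arrests_in_family": 2,     # outside the inmate's control
        "grew_up_in_high_crime_area": 2,  # outside the inmate's control
        "currently_unemployed": 1,
        "completed_high_school": -1,      # protective factor
    }
    score = sum(weights[item]
                for item, answered_yes in answers.items()
                if answered_yes and item in weights)
    return "high risk" if score >= 3 else "low risk"

# Two people with identical personal conduct but different backgrounds
# land in different categories purely because of circumstance.
person_a = {"prior_arrests_in_family": True, "grew_up_in_high_crime_area": True,
            "currently_unemployed": False, "completed_high_school": True}
person_b = {"prior_arrests_in_family": False, "grew_up_in_high_crime_area": False,
            "currently_unemployed": False, "completed_high_school": True}

print(risk_score(person_a))  # high risk
print(risk_score(person_b))  # low risk
```

The point is not the arithmetic but the structure: if the inputs encode where someone grew up rather than what they did, the output inherits that bias at whatever scale the score is deployed.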
