Book cover of Benjamin’s Race After Technology

Book Review

Cybele Garcia Kohel
Sep 12, 2021


Race After Technology: Abolitionist Tools for the New Jim Code by Ruha Benjamin

“A map is not the territory…” Alfred Korzybski

What does this quote by Korzybski mean, exactly? Korzybski was explaining that a representation, whether it is a drawing, a picture, a sculpture, or even data, is not and never can be the object it represents. At best, a representation gives us a snapshot; it tells a very short story about its subject, so to speak. At its worst, it is a biased, one-dimensional interpretation that really tells you more about how the interpreter sees the world. This is what Ruha Benjamin examines in her book Race After Technology (2019): what it means to have data represent, or misrepresent, people.

Benjamin’s book is a fascinating and important look at how the code and algorithms embedded in artificial intelligence, purportedly used to automate the many systems that make our lives easier, also have the ability to target, discriminate against, ignore, and victimize many communities. Benjamin details the many ways technology can be the bane of a Black or brown person’s existence: from bathrooms whose automated water or soap dispensers don’t work when people with darker skin tones use them; to being put on a gang-member database simply because you were born in a particular zip code; to facial recognition software that can’t tell people apart because it is programmed by people with racial biases. Black and brown bodies are either rendered invisible or hyper-scrutinized (Benjamin, 2019). The main assertion of Benjamin’s book is that when code is written by people with bias, the result is algorithms that perpetuate racist systems.

A widely shared video by Chukwuemeka Afigbo, showing an automated soap dispenser that does not respond to darker skin, illustrates the point: https://bit.ly/3z0hGjH (Afigbo, 2017).
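To make the mechanism concrete, here is a toy sketch of my own (it is not from Benjamin’s book, and every number and rule in it is invented for illustration): a device whose trigger threshold is calibrated only on light-skinned hands will reliably detect the group it was built around and reliably fail everyone else.

```python
# Toy illustration (my own, not from Benjamin's book): a sensor whose trigger
# threshold is "calibrated" only on light-skinned hands. All numbers and the
# threshold rule are invented for demonstration; real devices are more complex,
# but the failure mode is the same.

# Simulated reflectance readings (higher = more light bounced back to the sensor)
light_skin_samples = [0.72, 0.78, 0.81, 0.75, 0.79]
dark_skin_samples = [0.31, 0.28, 0.35, 0.33, 0.30]

# The designers calibrate the trigger threshold using only their own hands
calibration_data = light_skin_samples
threshold = 0.9 * (sum(calibration_data) / len(calibration_data))

def hand_detected(reading: float) -> bool:
    """Return True if the device decides a hand is present."""
    return reading >= threshold

light_rate = sum(hand_detected(r) for r in light_skin_samples) / len(light_skin_samples)
dark_rate = sum(hand_detected(r) for r in dark_skin_samples) / len(dark_skin_samples)

print(f"Detection rate, lighter skin: {light_rate:.0%}")  # 100%
print(f"Detection rate, darker skin: {dark_rate:.0%}")    # 0%
```

Nothing in this sketch ever mentions race; the bias lives entirely in whose hands supplied the calibration data, which is exactly the kind of quiet encoding Benjamin is pointing at.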

After reading Benjamin’s many cited examples, it is hard to disagree. Facial recognition software, for example, performs worst at distinguishing among Black and brown faces. Facebook and Google are both guilty of having “misidentified” Black people as gorillas or primates in their image-recognition systems (Benjamin, 2019; Dustin, 2021). A study by Georgetown Law researchers found that police departments across the country rely on databases built with similar facial recognition software, which consistently misidentifies Black faces (Benjamin, 2019). This is because those writing the code behind the software are either predominantly white or from parts of the world with no Black people and few brown people (Benjamin, 2019). This constant, misplaced persecution leads Black and brown people to mistrust systems, and especially the information that comes from them. This fits with the ecological model of everyday life information seeking developed by Kirsty Williamson (Savolainen, 2017): those who don’t trust systems, or entities perceived as part of a system (such as a library), won’t seek information from them. It is critical for libraries to understand that if they are viewed as part of a larger, oppressive state entity, they will miss out on patronage.

So how else does this relate to libraries? Let’s think in terms of the public library space. Some of us may remember that when the United States Congress first passed the Patriot Act in 2001, in response to 9/11, public libraries across the US were very concerned. Why? Because the law gave the federal government the right to subpoena library patron records, including account history. This meant that the list of materials you checked out at the library could be used against you in a court of law to prove you were a terrorist, essentially profiling someone because of what they read. Under the Patriot Act, the federal government could also subpoena a patron’s Wi-Fi browsing history while they were using the library, and even the email they sent and received during that time. The Patriot Act made libraries unwilling profilers for the federal government. Librarians across the country responded by purging the parts of patron records that held reading history, and since then many libraries keep records only of what a patron currently has checked out. Thankfully, the Patriot Act has since been revised so that libraries are not subject to National Security Letters (NSLs).

As our libraries become more tech-savvy and tech-friendly, we must be aware of the ways we are exposing our patrons, our information communities, to unwanted profiling or even biased outreach. Let me propose an example of the latter. My son participated in a read-a-thon through a local public library which required him to sign up for a Beanstack.com account. The account captured his grade level and gender, tracked how many minutes he read and which books, and recorded how he liked the books he was reading. In order to participate, he had to sign up through Beanstack.com, so this website now holds my son’s reading history. A public library may not have to comply with NSLs, but a private company does. When we ask our patrons to use technology and services that we do not control, are we putting them at risk? I think this is a fair question to ask.

Another example of this is RefWorks. Using RefWorks is very convenient for a grad student like me. But is the research I am doing, and saving, subject to government scrutiny? Could it be? Since RefWorks is a private company, it could be.

Now, back to the case of Beanstack.com: because my son signed up with them, I now receive emails from Beanstack.com with readers’ advisory suggestions for him. The suggestions are sometimes really good and sometimes not age appropriate; in this case the suggestions are occasionally for early readers, even though he was reading at a 6th-grade level during the read-a-thon. Truly, in this case the error isn’t a big deal. But it underlines how easy it is for algorithms to get it wrong, because our algorithms, databases, data, and tech have mistakes and biases programmed into them. Is the tech we are loaning safe for all of our patrons? Does the tech we loan or recommend to our patrons (e.g., Beanstack.com) track data or put patrons at risk? What “maps” of patrons (or users) are built into the tech we are loaning? Does the tech treat all our patrons equally? Who is paying attention? How do we avoid these issues? These are all questions we should ask ourselves before adopting or aligning our libraries with any kind of technology.

If you’ve made it this far, please enjoy this Get Fuzzy comic: Get Fuzzy gets an automated litter box.

References

Afigbo, C. [@nke_ise]. (2017, August 16). If you have ever had a problem grasping the importance of diversity in tech and its impact on society, watch [Tweet]. Twitter. https://bit.ly/3z0hGjH

Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity.

Dustin, J. (2021, September 4). Facebook apologizes after its AI labels Black men as ‘primates’. National Public Radio. https://www.npr.org/2021/09/04/1034368231/facebook-apologizes-ai-labels-black-men-primates-racial-bias

Savolainen, R. (2017). Everyday life information seeking. In M. Levine-Clark & J. D. McDonald (Eds.), Encyclopedia of Library and Information Sciences (pp. 1506–1515). CRC Press.

Wikipedia. (2021, August 14). National Security Letter. https://en.wikipedia.org/wiki/National_security_letter

Originally published at https://ischoolblogs.sjsu.edu on September 12, 2021.


Cybele Garcia Kohel

Cybele Garcia Kohel is a Boricua writer living on unceded Tongva land. She writes poetry, short stories and essays. She is also a K-5 school librarian.