It’s not “just an algorithm”

By: Erin O'Rourke
On: January 23, 2020
In: Safiya Noble
Tagged: Algorithms, Data, Information Science, Safiya Umoja Noble

Safiya Umoja Noble, known for her best-selling book Algorithms of Oppression: How Search Engines Reinforce Racism as well as her scholarship in Information Studies and African American Studies at UCLA, visited Pitt the week of January 24. She spoke with participants in the Sawyer Seminar, gave a public talk, and sat down with me for an interview on the Info Ecosystems Podcast.

In Algorithms of Oppression, Dr. Noble described her experiences searching for terms related to race, women, and girls, such as “black girls,” and encountering pornographic or racist content. These initial searches led her to years of study in information science, using the first page of Google search results as data. Having worked in advertising before earning her Ph.D. in Library and Information Science, Dr. Noble was uniquely situated in the early 2010s to recognize Google as the advertising company it really is, at a time when many scholars in her field viewed it as a source with exciting potential.

[Image: Magnifying glass examining the Google logo]

Noble’s book examines what is present and absent in that first page of search results, and what those results say about the underlying mechanisms of organizing information and the corporate decisions that make those searches possible. To open her public talk, Dr. Noble discussed several events that have occurred since the book was published in 2018. These notably included the exposure of Facebook’s privacy violations in 2017–2018 and the use of facial recognition technology by law enforcement and in public housing, despite research from Dr. Joy Buolamwini showing that facial recognition and analysis algorithms are markedly less accurate for people of color and can be discriminatory. Over the next hour, she dissected the ways Google has profited from a search engine with racism built in, how that has shaped the ways people access and use information, and what one might hope to do about it.

In the seminar meeting the following day, Dr. Noble continued the discussion, emphasizing how alternative means of acquiring information, such as walking through the stacks of a library, add context to what is found. Library users can see the classification schemes used to organize books, often frozen in the 1950s, as well as the volumes shelved around the one they came for, and hopefully learn something from the process of searching itself. Dr. Stacy Wood mentioned that she has her students write questions for search engines in the format of Yahoo queries circa 1995 to better understand how information was historically organized. When that search engine was first created, “to find the rules to tennis … you would navigate to the Recreation & Sports category, then Sports, then Tennis and finally Instructions”. By navigating through hierarchies of manually selected links, users then, and students now, get an under-the-hood view of how the search tools work.
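To make that contrast concrete, here is a minimal, hypothetical sketch (the category names and URL are invented for illustration, not taken from Yahoo’s actual directory) of the difference between browsing a hand-built category tree and querying a keyword engine: in the first, every hop through the hierarchy is a visible, human editorial choice; in the second, a single call returns ranked results with no view of how they were produced.

```python
# A hypothetical, hand-built directory in the spirit of mid-1990s Yahoo.
# Every category and link in the nested dict was chosen by a person,
# and the user reaches a page by walking the tree one level at a time.
DIRECTORY = {
    "Recreation & Sports": {
        "Sports": {
            "Tennis": {
                "Instructions": ["https://example.org/rules-of-tennis"],  # invented URL
            },
        },
    },
}

def browse(tree, path):
    """Follow an explicit category path; each hop is visible to the user."""
    node = tree
    for category in path:
        node = node[category]  # a missing category fails loudly, in plain sight
    return node

# The user sees, and learns from, each step of the hierarchy.
print(browse(DIRECTORY, ["Recreation & Sports", "Sports", "Tennis", "Instructions"]))

# A keyword engine, by contrast, hides its mechanism behind one opaque call:
#     results = search("rules of tennis")   # ranking logic invisible to the user
```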

In contrast, when using Google search, users have no exposure to the mechanism by which it arrives at its answers. Noble argued that Google has, over time, acculturated people to accepting it as an authority. She added that this is the business of every advertising company: manufacturing status. Essentially, all systems of organizing information overlook the needs and interests of some users; some systems simply make that omission more visible than others.

In both her book and her public talk, Dr. Noble mentioned instances where certain interest groups (including white supremacists and the porn industry) capitalized on “long-tail” keywords, search terms that turn up a very specific niche of results. For a long time, searching for “Jews” on Google turned up antisemitic content, along with a brief message stating that anyone hoping to find something else might try searching for “Jewish people,” and that these results were merely the output of an algorithm that should not be changed to fix the anomaly. A similar phenomenon occurred when Charleston church shooter Dylann Roof searched for “black-on-white crime” and turned up results largely from white supremacist groups, rather than from fact-checkers or scholars who could debunk the premise and explain the racist origins of the term.

In these and many other related cases, Google has implemented fixes accompanied by a minimally apologetic statement and a defense along the lines of “it’s just an algorithm.” By fixing search results only in response to complaints like these, Google outsources the work of minimizing racial bias to public scholars and institutions while failing to acknowledge its own role and complicity in perpetuating a racist system. Dr. Noble instructed listeners, and especially developers, to counter by asking, “Is there any person who will touch this algorithm’s input or output? Will it impact animals or the environment?” Only then can one determine whether it is just an algorithm.

Dr. Noble and moderator Stacy Wood later turned the conversation toward additional recent news and events relating to technology and race, before discussing the implications for teaching. In our podcast interview, Dr. Noble and I had spoken about how to better educate developers to create anti-racist technology. She strongly urged students to take classes about the societies they create technology for, especially courses in ethnic studies or gender studies. In the seminar conversation, Noble also noted that one must balance acknowledging that math, science, and even artificial intelligence are liberal arts with subjective elements against destabilizing knowledge to the point of no longer acknowledging fact.

Throughout the seminar meeting and the public talk, Dr. Noble cited the work of many of her colleagues, as well as her own, in arguing that there may not be such a thing as a fair algorithm. Joan Donovan studies radicalization, hate speech, and deepfakes, and suggests that companies are likely overstating their ability to detect and moderate these kinds of banned content automatically. Sarah T. Roberts studies content moderation on social media and how the work, rather than being done by algorithms as many users assume, is often offloaded to low-paid contract workers overseas. And with hundreds of hours of video uploaded to YouTube each minute, how could a company hope to stay ahead of new attempts to disguise prohibited content? Bad actors can always move faster than the technology changes, even if the technology could be trusted to be unbiased in the first place. Dr. Noble countered that her end goal, and the goal of her field, is not to perfect technology but to critique capitalism and demonstrate how the very nature of technology can be oppressive. Armed with this knowledge, scholars, developers, and users of technology alike can check their assumptions that Google search is “just an algorithm” churning out unbiased results, and that technology broadly benefits anyone but the oppressor.
