Appraising Information in the Cybersphere, Pt. II

A five-column series on evaluating AI- and algorithmically generated text.

In my work with college students, I teach critical reading. I think of it as both a praxis and a skillset that have, at their root, a disciplined form of questioning. For some, critical reading is solely the analytic work of active reading. For many of us, though, it includes giving thought to the racialized, gendered, political, and economic dynamics of knowledge production. How do societal power dynamics influence what we read? 

My undergrads and I have spent whole semesters turning that question around and around in our heads. Mostly, we close-read those eight-pound social-studies textbooks that students lug around the hallowed halls of their high schools. My students, fresh out of high school, return to those materials with their minds trained to analyze rather than just ingest. This time around, they pore over diction, visual rhetoric, innuendo and subtext, syntax, and many other such things. More often than not, they come to the conclusion that those textbooks are markedly biased in favor of white-/male-/cisgender-/hetero-/upper-class-affirming perspectives.

No surprise there. 

They also frequently find that high-school history books are only so accurate. About midway through the semester, then, I introduce them to fact-checking and lateral reading: the practices involved in evaluating the information in a given source by consulting other, credible sources and cross-comparing the rhetoric and content. The Stanford University educational researchers who coined the term “lateral reading,” Sam Wineburg and Sarah McGrew, stress throughout their body of work how crucial this skill is in the digital era. Their claim is twice as important in the age of AI, and three times as important for those who deem race, gender, and class inclusivity vital in any presentation of what is holistically true.

According to Professor Safiya Noble, director of the UCLA Center on Race & Digital Justice and a board member of the Cyber Civil Rights Initiative, it’s crucial that tech users “understand that mathematical formulations to drive automated decisions are made by human beings. While we often think of terms such as ‘big data’ and ‘algorithms’ as being benign, neutral, or objective, they are anything but. The people who make these decisions hold all types of values, many of which openly promote racism, sexism, and false notions of meritocracy, which is well documented in studies of Silicon Valley and other tech corridors.”

In other words, it’s farcical and technochauvinist to think that one could ask Siri a question or type a prompt into ChatGPT and get an “objective,” “unbiased,” 100-percent-reliable answer. 

According to University of Waikato literature and writing professor Benjamin Djain, “A chatbot’s output doesn’t just consist of the information that you asked for. It’s a performance of that information based on statistical probability, and any kind of convincing performance is going to be at odds with statistical probability. This has huge implications for how we choose to use technology. It’s one thing if I want a chatbot that can have a convincing conversation with me, and it’s quite another if I want it to be able to provide me with accurate information.”

Generative AI chatbots (like Gemini, Alexa, ChatGPT, and so many others), adds Djain, “are increasingly being used to help people navigate real world problems despite the potential for misinformation.” 

AI users should take the time to review fact-checking practices like those that I teach my undergraduates. The books that I surveyed for this article — pictured above — will help (especially La’s and Mallin’s). None of them discuss AI, but their insight into fact-checking social media, other digital sources, and traditional publications applies to responsible AI use. They emphasize the importance of lateral reading, evaluating sources and selecting credible ones for comparison, critical thinking, knowing one’s own biases, and managing one’s own emotions, among many other things.

There are innumerable skills and best practices across the four texts. Valuable skills and practices. But as I was reading them, I began to think in broad, abstract, more humanistic, and existential terms:

  • We should rekindle our love affairs with our own minds. 
  • We should be unafraid to give Big Tech and its self-promoting propaganda the hairy eyeball.
  • We should embrace the grey and get okay with not knowing if we can believe a thing we read. Maybe later, with proper time and effort and credible sources, we will come to understand that thing.
  • We should also be okay with staying in the lane of our expertise when we form opinions. AI and the internet in general give us so much so quickly that we occasionally fool ourselves into believing we understand things that we have not lived or studied deeply.

And we definitely should not use generative AI for anything that has a bearing on people’s lives or well-being. It is just too flawed for that. Instead, if we use it at all, it should be as a brainstorming tool, one we pair with credible sources and methods of inquiry. And we should fact-check.

As Viet-Phuong La, author of Fact Check Handbook: Navigating the Truth in the Age of Misinformation, reminds readers, fact-checking “can debunk false claims, clarify misunderstandings, and promote a culture of truth and accountability. However, fact-checking is not a magic bullet. It’s just one tool in the fight against misinformation. It’s also important to promote media literacy and critical thinking.” So, I’ll add to my list, “AI users must protect and preserve their own ability to reason.”

Using generative AI too often is a form of cognitive offloading that, some believe, can make us less intelligent and poorer thinkers while getting us hooked on it at the same time. Djain reminds us that a “chat bot’s prioritization of a user’s chat history and their profile information does not demonstrate to me a system that is interested in providing accurate information. It is interested in driving engagement.” Through the circularity of pulling from what we prefer, algorithmically driven AI tools drastically reduce the likelihood that users will encounter and reason through views different from their own. Thus, they undermine the habits of mind of potentially critical thinkers.

Maybe, then, we just shouldn’t be using generative AI so much. Particularly not the popular, publicly accessible forms of it that are trained on internet data. Or perhaps we should give up and embrace our fate. Cave to the pressure. Get comfy in “the age of outsourced reason,” as computer-science professor Advait Sarkar named it, where “the knowledge worker no longer engages with the materials of their craft,” the “materials” being their own ideas and thoughts.

Sarah Trembath is an Eagles fan from the suburbs of Philadelphia who currently lives in Baltimore with her family. She holds a master’s degree in African American literature and a doctorate in Education Policy and Leadership. She is also a writer on faculty at American University. She reviews books for the Independent, has written extensively for other publications, and, in 2019, was the recipient of the American Studies Association’s Gloria Anzaldúa Award for independent scholars for her social-justice writing and teaching. Her collection of essays is currently in press at Lazuli Literary Group.
