Appraising Information in the Cybersphere, Pt. I

A five-column series on evaluating AI- and algorithmically generated text.

The visuals accompanying articles, blogs, ads, and instructional manuals on AI always seem to look the same: There’s a disembodied brain or a white, translucent cyber-mannequin head. The head is often androgynous, but its features are always Anglo. The background is a soothing aqua, blue-green, or blue. There are thin lines of some sort — or numbers floating around — indicating connectivity and the general smarts of mathematics that neither you nor I understand. The future is here. It’s ethereal and robotic at the same time. 

The rhetoric that follows in the text announces this future-present. It instructs us how to proceed: Dive in. Trust the result. Just receive it.

This messaging shapes and feeds into an attitude that many of us have without knowing it: technochauvinism. This is “a kind of bias that considers computational solutions to be superior to all other solutions,” according to NYU data-journalism professor Meredith Broussard.

“Embedded in this bias is an a priori assumption that computers are better than humans — which is actually a claim that the people who make and program computers are better than other humans. [For behind] technochauvinism are very human factors like self-delusion, racism, bias, privilege, and greed,” she writes in More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech (2023).

Underlying Broussard’s argument, and that of several other women writing at the intersection of technology and culture, is a powerful counternarrative to digital technology’s hype. Critically analytic tech writers like Safiya Umoja Noble, Laura Bates, and Joy Buolamwini bring forward overwhelming evidence pointing to this fact: Digital technologies are not neutral.

The cumulative work of these scientists/writers — and their peers, including Ruha Benjamin, Timnit Gebru, Olga Lautman, Emily Bender, and others — makes it clear that users should regard the messaging of AI companies as partial. We should also employ critical-reading strategies in our engagement with digital media. These are the main claims I’ll return to throughout this five-part series in “The Critical Reader.” Welcome to this, the inaugural installment, in which I introduce key concepts and terms, beginning with the false neutrality of digital technologies.

Commonly used platforms “are resoundingly characterized as ‘neutral technologies’ in the public domain and often, unfortunately, in academia [too],” writes Noble in Algorithms of Oppression: How Search Engines Reinforce Racism (2018). We can use all of them with enthusiasm and confidence, buyers-in suggest, despite the occasional imperfection or system glitch or bug. “Stories of ‘glitches’ found in systems” can be misleading, though, according to Noble.

After all, a glitch is “something temporary, a mysterious blip that may or may not be repeated,” explains Broussard, while a bug is a “substantial, ongoing [concern that] deserves attention.” Thus, this terminology does “not suggest that the organizing logics of the web could be broken but, rather, that these are occasional one-off moments when something goes terribly wrong with near-perfect systems,” Noble continues.

So, what are these technologies and flawed organizing logics, and how do they amplify and spread all of the -isms?

Algorithms are sets of instructions given to computers to perform tasks, such as sorting information and filtering data. Online, algorithms also gather data from you. What those systems gather tells them who you are, what you like, and what you’re likely to want to see, do, or buy. They determine what information you receive when you do a search, what information you won’t see, which ads appear in your social-media feeds, and more. After a typical Google search, for example, you’ll receive a number of articles algorithmically culled from the entire internet to fit your perceived preferences, based on your previous searches. Of course, this makes echo chambers out of our laptops and greatly reduces our opportunities to take in material that would challenge preexisting ideas — a necessity in critical thinking.
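
To make this concrete, here is a minimal, purely hypothetical sketch in Python (not any platform’s actual code) of the kind of preference-based filtering described above: each article is scored against a profile built from a reader’s past clicks, and only the closest matches surface.

from collections import Counter

# Hypothetical illustration only: a crude recommendation "algorithm" that ranks
# articles by how well their topics match a profile built from past behavior.
past_clicks = ["sports", "sports", "celebrity", "recipes"]
profile = Counter(past_clicks)  # a simple statistical picture of "who you are"

articles = [
    {"title": "Playoff preview", "topics": ["sports"]},
    {"title": "New climate report", "topics": ["science", "politics"]},
    {"title": "Star chef's pasta tips", "topics": ["recipes", "celebrity"]},
]

def score(article):
    # Higher score means closer to what the reader already likes.
    return sum(profile[topic] for topic in article["topics"])

# Keep only the best matches; everything else quietly disappears from view.
feed = sorted(articles, key=score, reverse=True)[:2]
for article in feed:
    print(article["title"])

Run on this toy data, the sketch surfaces the sports and celebrity pieces and drops the climate report entirely: the echo chamber in miniature.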

On top of that, these days, an AI-generated summary will often appear in response to your question or search. These summaries are the work of large language models, or LLMs, which function by gathering enormous amounts of text from sources like databases or the internet at large and analyzing the statistical patterns within it. (This is called “training” on data.) LLMs are programmed to “predict” the next words in the patterns that arise through that computational process and, in this way, produce outputs that we receive as answers to prompts, searches, or questions.
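
As a rough illustration of what “predicting the next word” means, here is another toy sketch, again in Python and again purely hypothetical. It is nothing like a real LLM, which runs a neural network trained on billions of documents, but the basic move is similar: tally which words tend to follow which in the training text, then chain those statistics into new output.

import random
from collections import defaultdict

# Toy illustration only: "train" by counting which word follows which
# in a tiny text, then "generate" by repeatedly picking a likely next word.
training_text = "the future is here the future is bright the robots are here"
words = training_text.split()

follows = defaultdict(list)  # maps each word to the words seen after it
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

word = "the"  # start from a prompt word
output = [word]
for _ in range(8):
    word = random.choice(follows.get(word, words))  # a statistically plausible next word
    output.append(word)

print(" ".join(output))

The result is a plausible-sounding recombination of whatever the system was trained on, which, at a vastly larger and more sophisticated scale, is what an LLM’s “answers” are.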

These language-focused generative-AI models are often known by their brand names, like Google’s Gemini and OpenAI’s ChatGPT. Despite their glitches and bugs, the public uses these and other LLMs — and is constantly encouraged to do so — in innumerable ways. Critically analytic tech writers, however, warn that the problems associated with these technologies run deeper than glitches, bugs, and hallucinations (AI-produced incorrect or nonsensical information), and that model collapse may follow as internet-trained AI puts out flawed data and then trains on that flawed output.

According to data scientists who explore digital technology through a critical lens, racism, sexism, and all the other -isms are perpetuated at lightning speed through AI and algorithmically generated internet searches. Writes Broussard, “The biases embedded in technology are more than mere glitches; they’re baked in from the beginning. They are structural biases, and they can’t be addressed with a quick code update.”

These biases and the reasons for them will be the subject of the second article in this series. I’ll also share case studies and scenarios of the harm both done and doable, like the facial-recognition technology that wouldn’t recognize the dark-skinned face of computer scientist Buolamwini until she literally donned a white mask. Such harm may yet be negotiated and even mitigated through critical thinking and improvements in Big Tech ethics.

Articles three and four will dig deeper into critical AI use and the rhetorical-reading strategies useful in protecting our minds as we decide how (or whether) to use digital tools. These strategies include awareness of misleading euphemisms, guardedness around AI hype, lateral reading, questioning, and self-education through connection with the Algorithmic Justice and Ethical AI movements.

Finally, in article five, I’ll look at the future as envisioned by the scholars exploring AGI (or artificial general intelligence). This in-development technology is bankrolled by some of today’s wealthiest and most powerful men, whom computer scientist Timnit Gebru regards as neo-eugenicists. AGI zealotry may be directly involved in some of today’s most pressing sociopolitical concerns, and in that closing article, I’ll call it into question against the ethics and credible research of the scientists involved in the Ethical AI and Algorithmic Justice movements.

As far-off and dystopian as all of this may sound, it is neither. As Bates puts it in her 2025 book, The New Age of Sexism: How AI and Emerging Technologies Are Reinventing Misogyny:

“[W]e often tend to think of the term AI as futuristic, distant, and improbable. Yet it is already all around us, embedded in our lives and daily routines in more ways than we might even be able to count. Have you used predictive text today? Swiped left or right on a dating app? Seen an ad pop up on Instagram? Used Face ID to unlock your phone? Grabbed an Uber? Watched a show on Netflix? Been saved from a phishing email by your spam filter? Checked your weather app? Watched a video recommended to you on YouTube? Asked Siri a question? Received a supermarket voucher for something you usually buy? If you answered yes to any of these questions, then AI is already intertwined in your day-to-day life; you just might not realize it yet. And over the next few years, with breathtaking speed, it is only going to become more and more integrated in our work, social circles, families, education, and love lives.” 

This is neither a reason to panic nor to stick our heads in the sand of Big Tech and its alluring propaganda. It is simply a reminder that we need up-to-date digital-literacy skills, a healthy dose of skepticism, and, yes, critical-reading abilities that are so practiced, they become automatic. Please join me over the coming months on my deep dive into the world of AI-generated text and our role as readers in navigating it wisely. 

Sarah Trembath is an Eagles fan from the suburbs of Philadelphia who currently lives in Baltimore with her family. She holds a master’s degree in African American literature and a doctorate in Education Policy and Leadership. She is also a writer on faculty at American University. She reviews books for the Independent, has written extensively for other publications, and, in 2019, was the recipient of the American Studies Association’s Gloria Anzaldúa Award for independent scholars for her social-justice writing and teaching. Her collection of essays is currently in press at Lazuli Literary Group.
