Appraising Information in the Cybersphere, Part III

A five-column series on evaluating AI- and algorithmically generated text.

It would be difficult to overstate the effect of Artificial Intelligence (AI) on colleges and universities — professors, students, administrators, and staff alike. AI is embedded in all of our platforms: the ones we use to submit or grade papers; the ones we use to search for sources in the library; the ones that facilitate our recordkeeping, job applications, promotions, and myriad small tasks necessary to the work we do; and, of course, the ones that our students use to help them brainstorm, build skills, meet one another, and cheat on assignments. 

This last piece — the cheating — receives the most discussion. According to the Chronicle of Higher Education, “Colleges have struggled to manage the rise in academic-integrity complaints involving generative AI, which stretch the bounds of policies that were crafted primarily to deal with plagiarism.” Assuming the posture of the professor, the Chronicle also asks, “Are we grading robots?” The answer is yes. Sometimes, we are.

This raises the question: Could we be shoved further aside as robots begin to grade robots? Sure we could. An MFA student of mine told me recently that the school district in which she is employed as an English teacher requires her to “grade” the work of her 200+ students using AI. 

The mother of a student whom I once tutored was upset about the state of things in her son’s public school. The day she reached out for my support, she said that his English teachers weren’t assigning literature anymore. “Literature teaches us so much about how to be human, though!” she said. Indeed, as the scholar Jeod tells the dragon rider in her son’s favorite fantasy novel, Christopher Paolini’s Eragon, good books impart wisdom: “These books are my friends, my companions. They make me laugh and cry and find meaning in life.”

For this reason — making sure our students are reading and writing — many of us in the academy have gone back to basics. We make sure that we see every stage of our students’ work, ask them to write some assignments on paper, give oral instead of digital exams, and so on. We don’t want to grade the work of robots. We want to continue the work that has always involved our minds and our students’ minds interacting with the world through sources created by thinking, feeling, researching human beings. 

During our time together, that mother’s son and I read Eragon, Harper Lee’s To Kill a Mockingbird, and even most of Omar El Akkad’s One Day, Everyone Will Have Always Been Against This. Over the course of our study, the student did all the things that young people do when they read great books. He thought. He felt. He wondered. He compared. He disagreed. He agreed. He imagined. Then he wrote his own words about it all. And they were good!

But not every kid is doing this kind of work before they get to college. For this reason, I’ve heard my colleagues discuss ways that we can help our college students learn to “think.” Not so much think well, think critically, or think about something in particular. Just think. In one of these meetings, a colleague emphasized the importance of keeping the “human” in the humanities. 

Many of our students, however, are concerned about the ways in which algorithm- and AI-laden platforms harm their emotional health, cognitive processing, and social wellbeing. Not all of them are passively participating in the industry-driven deterioration of their minds. Such critically engaged students are speaking common sense into this strange landscape. 

One of them writes:

“If I could contribute to changing anything in today’s society, I would focus on reducing the harmful effects that certain forms of social media have on younger people’s self image. Certain platforms [driven by larger social-media companies] promote unrealistic standards and change how younger people view themselves and others [and are] very harmful…I would [like to] push for stronger media literacy in schools and support changes to these platforms to benefit everyone.” 

Another concurs: 

“[I]f I could make a difference in some of the issues of today’s world, I would be committed to alleviating the pressure that young people feel when following the ‘perfect’ life path shaped by social media. This pressure often leads to anxiety, self-comparison, and fear of failure. [I would like to exert] greater pressure on social media platforms to prompt them to change their algorithms that promote unhealthy comparisons.”

Another writes: 

“If I could change anything in today’s society, I would focus on reducing the spread and impact of political misinformation, especially among young voters who are increasingly forming their beliefs through algorithm-driven social media. Misinformation distorts political reality, weakens democratic participation, and makes it harder for people to make informed decisions. [But researchers] are already working within the system to study how misinformation spreads and how it influences public opinion, while journalists and fact-checking groups also attempt to counter false narratives.”

In an assignment, one student linked to a source called the Center for Humane Technology. I clicked on it, read through the site, and found that it is a nonprofit founded and run by tech workers “dedicated to ensuring that today’s most consequential technologies, such as AI and social media, actually serve humanity.” I joined the group, feeling grateful to the student writer. Then I felt a bit better, even knowing that AI is deeply problematic, that overreliance on it is causing major problems, that its roots are too deep to pull up, that its momentum is unstoppable, and that its use and development are going to continue. As Atticus said in Mockingbird, though, “Simply because we were licked a hundred years before we started is no reason for us not to try to win.”

I like these odds when I focus on my critically engaged students. I think I can help more of them become like this one, who writes:

“[If I could], I would prioritize building stronger and clearer artificial intelligence guidelines…due to AI currently developing at a rate which outpaces any attempt to establish rules governing AI. As a result, AI will have significant effects on issues such as privacy, employment, and the spread of false or misleading information. Presently, OpenAI and governments, particularly in the case of the EU, are developing policies and regulations regarding the use of AI (including a proposed law called the AI Act) in an effort to ensure that it is utilized in a responsible manner…I would advocate for increased transparency and ethical standards in how AI systems are developed and the ways in which they operate plus ensure that citizens comprehend how decisions are reached and how their data is stored.” 

Parts IV and V of this five-part series, then, will survey the advice of those organizations and individuals calling for us to slow down, retain our thinking skills, increase transparency and ethical standards, and “ensure that AI is utilized in a responsible manner.” These final articles will include bullet-listed “best practices” for educators and others who want to be a part of retaining and developing the human in the digital age.

[Editor’s note: As a postsecondary educator and an educational researcher, Dr. Trembath applied for and received Institutional Review Board (IRB) exemption amounting to permission to use student anecdotes as data in her research and writing, provided that students quoted remain anonymous, as here. She wishes to thank her students for their candor, compassion, and foresight.]

Sarah Trembath is an Eagles fan from the suburbs of Philadelphia who currently lives in Baltimore with her family. She holds a master’s degree in African American literature and a doctorate in Education Policy and Leadership. She is also a writer on faculty at American University. She reviews books for the Independent, has written extensively for other publications, and, in 2019, was the recipient of the American Studies Association’s Gloria Anzaldúa Award for independent scholars for her social-justice writing and teaching. Her collection of essays, This Past Was Waiting For Me: A Chronicle at Quarter Century, is forthcoming this month from Lazuli Literary Group.
