My last boyfriend would spend many hours playing with AI-generated imagery. I would return from a grueling day of class to find him, apparently unconcerned about the ecological effects of his tinkering, laughing at increasingly unhinged imagery: my dog as a hipster barista, me in wedding guest attire in a bloodstained postapocalyptic bunker, or Sesame Street characters performing a variety of absurd crimes through the fishbowl lens of a Nest camera. There are many groups online that compete to craft the most disturbing AI imagery imaginable, such as the Cursed AI Facebook group, which has over 1 million members. While we were dating, I was mildly annoyed by all of this— I didn’t really find the images all that funny, and it seemed no more intellectually stimulating than scrolling mindlessly through TikTok— but, I thought, at least AI was being used for Dadaist absurdity. At the time, I thought it was rather harmless. And, glass houses, etc.: I definitely had my own burst of AI-fueled vanity when I fed a bunch of flattering photographs into an image generator. (Of course, the resulting images always looked like mediocre DeviantArt fan art.) Two years later, I can’t imagine finding AI even remotely amusing.
Which doesn’t mean that, like many, I’d rather not discuss it at all. I find myself irresistibly pulled toward discussions of AI, with a feeling somewhere between picking at a scab and righteous anger. My friend Misha Mihailova, an animation scholar at San Francisco State University, recently edited a special issue of the Journal of Cinema and Media Studies on AI. 2023, she argues, was an “inflection moment” for AI, although the gradual commercialization of AI has been underway since the 1980s— longer, in fact, than either Misha or I have been around on this earth.
The special issue is remarkable because it takes a critical “rather than a techno-Utopian” angle while still “challeng[ing] and complicat[ing] the view of AI technologies as an unambiguous threat…” She writes:
In that sense, the entry of these new tools into established creative spheres is a rare opportunity to rethink the underlying frameworks—ideological, political, and aesthetic—of those spheres and question the implications of their apparent ripeness for algorithmic disruption (and corruption).
This is laudable. Putting our fingers in our ears and lalalala-ing our way back to a pre-AI era would probably be as useless as my own (continuing, since forever) stubborn boycott of Apple products: personally gratifying, socially useless. My anger with the culture of AI has more to do with the assumption that the anti-AI crowd is somehow less tech-savvy, less quick on the uptake of new technologies. Instead, it would appear (entirely from my own experience) that enthusiasm for AI falls along a bell curve, with the most critical populations at both ends of the spectrum, from Midwestern Grandpa to Software Engineer.
Months ago, I didn’t really believe my feelings on AI were particularly strong. Surely, I thought, all intelligent people in the world share my strong skepticism. Yet two months ago, when asked to speak on a panel of fellow media studies faculty at my home institution, I was shocked when my fellow panelists— many of whom are dear friends— seemed to kowtow to our Silicon Valley overlords. There was significant discussion about attempting an ethnography of AI, of an investigation of its use, of trying to collaborate with students to think through a more thoughtful and ethical implementation. At no point, shockingly, was there significant criticism of its use in the classroom and in research. It was as if the whole thing were a done deal, as if criticism were pointless: those on high have deemed AI inevitable, even useful, have even commanded its use, and so we, like Nick Carraway in The Great Gatsby, beat on, boats against the current, borne back ceaselessly into the past— as if any rejection of AI is not thoughtful or provocative but old-fashioned, old-timey Luddite musings from yet another owl-eyed professor incapable of opening a PDF.
I hadn’t intended on going off, and yet there I was, the youngest person on the panel by almost a decade, so enraged in my corduroy blazer and blouse covered in medieval animals that you would have mistaken me for the little red imp from the Inside Out franchise. Well, I thought, if no one would play the role of the grumpy professor (imagine: Bernie Sanders wagging his finger about income inequality), I will volunteer as tribute. Because here is the thing:
AI is not new. What we think of as AI has been around since at least the 1950s, and it has been commodified since the 1980s (just as computers themselves are over 100 years old).
AI is not intelligence; AI cannot think. AI can only summarize, can only mimic, can only amalgamate. It is certainly not creative, as one of my favorite writers, the Borgesian sci-fi author Ted Chiang, famously argued in his New Yorker piece “Why AI Isn’t Going to Make Art.”
AI can only, in fact, bullshit, as one peer-reviewed publication in Ethics and Information Technology puts it verbatim in its title: “ChatGPT is Bullshit.” (Hence why papers written by AI are always B papers at heart: B for bullshit, B for boring.)
AI is not helpful. In fact, AI is a labor disaster. It was the key battleground issue for the SAG-AFTRA and WGA strikes in 2023— strikes I was proud to picket for during my time in LA. Despite the slight promise of the early post-strike bargaining agreements, the use of AI in media production seems more pronounced than ever. One friend, formerly a hit screenwriter, now works an hourly job correcting the mistakes of an AI translating AI-written podcast material from China. Because the company considers this editing rather than “writing,” it is painful, fast-paced labor, and he is expected to deliver over ten thousand words a day. Another friend in marketing had her job fused with that of a staff writer whom the company had unceremoniously fired; on top of her old job (already 60+ hours a week with no overtime), she now also feeds information into an AI and corrects its output.
AI is an ecological disaster. One estimate from four months ago noted that AI infrastructure may soon consume six times more water than Denmark, a country of 6 million. I’m willing to bet that threshold has been crossed by now. Given that the impending (and current) climate crisis will inevitably result in a lack of access to water, this should be extremely concerning. And this doesn’t even take into consideration the hardware required: producing a single 2 kg computer for an AI data center requires 800 kg of raw materials, and the microchips that power AI need rare earth elements (often mined through exploitation, neo-colonialism, and slave labor). And then there’s the electronic waste produced by these data centers, waste that often contains mercury and lead.
A friend in the digital humanities, Javier Cha, recently noted on Facebook that a student taking 2 minutes to input prompts into ChatGPT may have used about 2 L of water. He argued, though, that “the 10 hours students spend every day on YouTube, Instagram, TikTok, WhatsApp, etc, cause exponentially more environmental damage than ChatGPT.” As he put it, “It's a question of scale and how often we use the technology.” Yet as the comments on that post pointed out, we at least have a choice to indulge in those media, whereas AI has been foisted upon everyone, whether they are looking up a recipe or grading papers. There is a way to Google without AI— I got rid of the annoying (and often incorrect) AI search summaries in Chrome with TenBlueLinks— but most people will probably not go through this step (although you should: it’s easy and takes 2 seconds!).
There is so much more to say about AI, and I will likely return to this subject in the coming months— possibly with a series of interviews with friends who have researched and/or worked in the field. Until then I find myself battling against the concept armed with knowledge and fueled by anger. This is especially true in the classroom, where faculty have been trying to craft assignments that cannot be completed with AI. It seems that students, at best, use AI to start assignments, and, one hopes, delete the AI-written material later on. Even this horrifies me. Can intelligent thought exist on a bedrock of empty mediocrity? Are we so afraid of discomfort that we’re willing to give up art in the process? Is the future really an ever-increasing softening of our minds until we end up like the people swirling around on a spacecraft in Wall-E, having buried the globe in garbage and pollution?
This semester I was lucky. I haven’t seen a single AI-created assignment, although I’ve also painstakingly crafted assignments to foreclose the use of AI. I’m also lucky in that I’ve only taught seminars this semester, full of students who are, by their own admission, already sick of AI and Silicon Valley rhetoric (bless the humanities major!). In teaching, formal analysis is a great tool: film, art, and literature classes are fundamentally about the art of looking, and AI can’t analyze or describe images in the way we train students to (not yet, at least). Increasingly, my peers are phasing out take-home assignments. More and more are also returning to the classroom as a laptop-free zone, which I will implement in my next lecture class. Because why go to class if not to feel discomfort, to struggle productively? The terror of the empty page is very real, and very necessary. Love, my friends, is for the ones who love the work.