Appetite Studies: AI CANNOT FEED US; IT DOESN’T KNOW WHAT INGREDIENTS ARE
Do people still have an appetite for artificial intelligence? Three years ago, we were all forced to learn what a “ChatGPT” is. Now, you can’t take two steps online without a little prompt asking if you want to generate a paragraph or turn your text message into an image. Even as it spreads fears about disinformation, we’re still forced to entertain discussions about its usefulness in education.
There’s no denying students everywhere, in the humanities and beyond, use it to generate ideas or whole essays. I remember being in a class on Digital Media and Culture discussing this very topic. At the same time, I watched people generate text summaries to write a response on Canvas without reading the assigned text. Isn’t that a time-honored tradition going back to CliffsNotes? It’s certainly not the best way to get information, but it can expedite the process of research or argument-forming. Think of one of those machines you see at a McDonald’s: designed to heat as many burgers as possible as quickly as possible. Not the best source of nutrition, but where’s the harm if we all know that? We all crave some fast food now and again.
That argument leans heavily on the ends justifying the means. But the ends here are functionally useless, which is easy enough to prove with a few minutes of cross-checking. I asked ChatGPT for resources for an essay on Fredric Jameson, and multiple times it insisted I read The Cambridge Companion to Fredric Jameson. This book does not exist, and it isn’t simply misnaming The Cambridge Companion to Postmodernism: each time I’ve tested it, it gives me a different author and publication year for this fake book. It will insist the book exists, dead links and all, until I specifically type in the phrase, “This book does not exist.”
With this in mind, ChatGPT isn’t even a reliable fast-food cooking machine. Sometimes you get real sources and working links; sometimes you get a nothingburger of information. A student looking for a quick fix will struggle to track down these fake sources, or will cite them without cross-referencing. You can hardly call this quick or valuable to learning. It isn’t even the individualized learning advocates claim, because these Large Language Models (LLMs) are programmed to chat, not teach. The fake sources come from the LLM playing Mad Libs with its training data, guessing at the response you want from a prompt. It does not “know” whether a source is real; it has only been trained on how to properly format one. The misinformation and the attempts at humanization hide the fact that LLMs work only by plugging in the equation, “If X then Y.”
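That format-over-fact pattern can be sketched in a few lines of Python. To be clear, this is not how a real LLM is built (those predict tokens statistically over enormous corpora); it is a toy illustration of a system that has learned only the shape of a citation, with hypothetical placeholder names standing in for the authors:

```python
import random

# Toy sketch, NOT a real LLM: a "model" that knows the *template* of an
# academic citation but has no idea which books actually exist.
# Author names and years here are hypothetical placeholders.
SLOTS = {
    "author": ["A. Scholar", "B. Critic", "C. Theorist"],
    "year": ["1998", "2004", "2011"],
}
TITLE = "The Cambridge Companion to Fredric Jameson"  # the nonexistent book

def fake_citation():
    # "If X then Y": given the citation template, fill each slot with a
    # plausible-looking token. Nothing checks the result against reality.
    return "{author} (ed.), {title} ({year})".format(
        author=random.choice(SLOTS["author"]),
        title=TITLE,
        year=random.choice(SLOTS["year"]),
    )

print(fake_citation())
```

Run it twice and you can get the same fake title attributed to a different author and year, which is exactly the inconsistency described above: the format is always correct, the facts never are.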
But while these LLMs provide nothing for knowledge-hungry students, we are constantly feeding their insatiable appetite. Data farms are cropping up on land across Virginia and Arizona for the tax breaks local governments offer. Just one of these facilities may require the power of a large suburb, and they tend to cluster together to share information. Each needs several million gallons of water per year to prevent overheating, drawn from regional water storage and local groundwater sources. Researchers don’t even have access to exact energy figures from companies like OpenAI.
No upper limit exists for consumption, either: so long as research, art, and writing exist, so does the possibility they will be scraped and mashed down into a slurry of “data.” Even if it were possible to produce a result that justified taking the resources of local communities, these LLMs laughably fail to do so despite the marketing. My partner compared it to the idea of a “Wagyu Beef Burger.” Many consider Wagyu beef a luxury for its small supply of cattle and the specific marbling of its fat. But a hamburger typically consists of a variety of beef pressed into patty form; fat can be added or taken away easily enough. When a fast food chain offers a “Wagyu Beef Burger,” it’s a real Ship of Theseus question whether you can call something “Wagyu” when it’s only 42% Wagyu crossbred with something else. But Theseus did not have a marketing team to convince the public of the rarity and worth of his ship.
But if LLMs merely regurgitated a mix of subpar and in-depth sources, one could still argue for their usefulness to students. Yet they fail compared to any other resource, even the paywalled, gatekept ones. To make the most American of analogies: imagine a database as a burger. Your typical search engine, for all its faults, can provide in seconds a mix of promoted brands and easily digestible resources: a classic fast food cheeseburger. If you’re willing to put in the time, you might actually find a critical, in-depth source with better nutritional value for your intellectual appetite. Academic databases are the Wagyu beef here: incredibly focused, carefully thought-out coverage of a topic, locked behind a paywall or institutional access. They may not always be what we’re looking for, but the rarity entices us.
Enter OpenAI, marketing its LLMs as having all that rich Wagyu beef available in seconds at no extra cost. In reality, there is no 100% Wagyu, or even 42% Wagyu. It works more like a roulette wheel that might put some Wagyu, some cheap beef, or just plain air into your burger, because it doesn’t really know what beef is, only that a burger needs it. This roulette wheel also requires the power, water, and land of a small town. It certainly makes cooking an essay at home a little more appealing.