Interim writer, Rick McGinnis, Amusements

Lately I’ve been getting served a rush of media asking the question “Is the world getting worse?” in the form of online articles, Twitter/X threads, blog posts and YouTube videos. Most of the blame goes to social media and the spread of “misinformation,” which has made us angrier, less hopeful and increasingly distrustful of anything we read, hear or see.

But one notable culprit is Artificial Intelligence and more particularly “AI Slop” – a plague of “low-rent, scammy garbage generated by artificial intelligence and increasingly prevalent across the internet” as it’s described in “Drowning in Slop,” an article published by New York Magazine late last year. A recent Guardian story states that “more than half of longer English-language posts on LinkedIn are AI-generated” while Forbes reports how YouTube has allowed videos that are wholly fabricated – facts, footage and narration included – to become cancer cells “swiftly spreading through YouTube’s system … and at this point, the damage might be irreversible.”

At the same time on YouTube there are videos like one titled “Why is Everything Getting Worse?” where a young man complains that “We go to college for years just so that we can go broke and lose our jobs to AI.”

A young woman posted a video titled “How AI Killed the Internet” in which she says that “in 2010 the internet was full of people and in 2025 the internet is full of bots and AI pretending to be people. AI content is flooding platforms. Deep fakes are being used without our consent and human creators are now competing with machines that never sleep.”

It is into this skeptical, anxious mood that Emily Bender and Alex Hanna have written The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. But Bender, a linguist, and Hanna, a sociologist, take aim beyond AI Slop at the industry and the technology being sold as either revolutionary in its transformative potential or a threat to human civilization (or both).

Artificial Intelligence is less likely to be like Star Trek’s Data (seen above), a sentient being that dispassionately analyzes inputs, and more likely to produce what is called ‘AI slop.’

“Breathless reporting uncritically parrots corporate statements that AI is going to free you from work, educate your kids, and provide medical care to all who need it,” they write. “The hypers claim that AI will produce art but also might just kill us all. Should it decide to spare us, at the end of the day you’ll be able to kick back in a fully automated paradise, once AI has solved the climate crisis and poverty.”

“This is, of course, all b******t,” Bender and Hanna state pungently. “AI isn’t sentient, it’s not going to make your job easier, and AI doctors aren’t going to cure what ails you. But all these claims can make your work worse and reduce your quality of life, unless we fight back against the increasing encroachment of these products into every area of public and private life. Hype doesn’t occur by accident, but rather because it fulfills a function: scaring workers and promising to save decision-makers and business leaders lots of money.”

They trace the origin of the term “artificial intelligence” to a time before the pocket calculator, when the big claims being made about the potential of computers required a bold description. Artificial is accurate enough – the technology was man-made and unlikely to develop without considerable research and (especially) investment – but intelligence was always a stretch, especially back when the most powerful computer systems were incapable of much more than complex sums and basic data collation.

Sci-fi helped us imagine artificial intelligence, good and bad, in some near future. There was the computer on the starship Enterprise on Star Trek, always ready with an accurate analysis of some new alien and their technology, and then the android Data, artificial intelligence embodied and on a constant trajectory to discover the humanity that flowed from his sentience. On the other hand, there was HAL 9000 in 2001: A Space Odyssey, just as useful but developing a cold, utilitarian machine sentience that turns homicidal.

Our hopes and fears about AI coalesce around these fictional paradigms and, like so many of the metaphors we create to help us imagine concepts, they obscure the real nature and potential of the technology nudging into our lives. Just as helpful was how naturally we humanized the mechanical output of even the most rudimentary AI – we are programmed to look for empathy in a machine’s stochastic output. As Bender and Hanna write, “we use the words and syntactic structures we perceive as a very rich clue to figuring out what the person who uttered them might have been trying to get us to understand … we encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text.”

This has done a lot to get the public to interact with the output of “text-extruding” AI programs employed in the place of real tech support, customer service, or medical or legal counsel, even though that output doesn’t – and never will – actually respond to our queries.

Language compels us to humanize AI; as it “trains” itself using ever larger sets of data mined from information we have given and content we have created, we imagine it “learning” – improving itself like Star Trek’s Data on a journey of self-realization. In reality, Bender and Hanna tell us, an immense amount of human assistance is needed behind the scenes to prevent, say, self-driving cars from being a menace on the roads, or to edit the erroneous, offensive, libelous, and even hateful responses out of text-extruding software that functions with no sense of truth, context, decorum, or tact.

AI boosters go so far as to downplay our own humanity, imagining human beings as sophisticated machines that evolved to manipulate strings of data. Sam Altman, CEO of OpenAI, famously tweeted “i am a stochastic parrot, and so r u.”

“In other words,” Bender and Hanna write, “it’s not important to distinguish between ourselves and machines that merely do string manipulation, because we are of the same ilk. It’s not a question of kind for Altman and others, but merely a question of scale. Once we have language models that are big enough, according to this view, they will be functionally indistinguishable from humans.”

Bender and Hanna say, “AI hype reduces the human condition to one of computability, quantification and rationality. If we are just organic versions of computing machines, then we should interact with these software systems as if they were silicon-based life forms, whether friends or foe.”

The boldest claim made for AI is that, by “training” itself on the vast library of research, journalism, literature, essays, art, and moving images we have generously left out in the open for it to “graze” upon, it can become a creative tool, unleashing a new renaissance (presumably to engage the energy of all the people it has made superfluous). “Creating an entity that can evoke wonder and awe,” the authors write, “produce verifiable science, or take over the important work of journalistic inquiry and holding power to account would be a monumental step towards showing that, yes indeed, these technologies are truly groundbreaking.”

“We were promised – in science fiction and speculative visions of the future – that automation would take over the drudgery of doing repetitive labor, like data entry, cleaning the dishes, or scheduling meetings between people,” write Bender and Hanna. “Instead, we’re supposed to accept (and even celebrate!) machines that are creating art and taking over other creative activities that are uniquely human.”

What we’re getting, by and large, is AI Slop in wholesale quantities, distributed on social media and employed in the place of real photos or artwork created by humans who, not unreasonably, wanted to be paid for their work instead of having it used as data to “train” their AI replacements. Most of it is eerie kitsch, and the best of it is produced by a new creative class who have mastered the language of prompts most likely to produce imagery that doesn’t immediately elicit unease and revulsion.

Emad Mostaque, founder and former CEO of Stability AI, the company behind the text-to-image model Stable Diffusion, talked about “democratizing image generation.” What this ignores is the cruel truth that there is nothing democratic about artistic creativity. Anyone with the time or inclination can express themselves creatively, but the persistence and discipline necessary to refine that creative impulse is rare, and real talent rarest of all.

The authors of The AI Con are certain that we are riding an AI bubble that will burst just like the dotcom, crypto, and NFT bubbles before it, simply because the technology can’t live up to the bombastic hype. Perhaps this is true. What’s certain is that in the few years since AI became ubiquitous, it has contributed mightily to that sense articulated by quite a few young people – and no small number of older ones – that the world we live in is becoming tackier, cheaper, uglier, less reliable and more dishonest. In a word, worse, and AI has done more than its share to get us there.