I am officially calling it: I am sick and tired of this “anti-AI slop.” And yes, I coined that term deliberately. You’ve all heard people complain about AI-generated content as “AI slop,” a lazy way to dismiss anything created with technology as low-quality, soulless, or dangerous. Well, I’m flipping the script: I am labeling all the fear-mongering, clickbait-filled stories about AI as “anti-AI slop.” Because that’s exactly what it is—slop, sensationalized nonsense designed to scare people rather than inform them.
Take, for example, the recent news story that’s been making the rounds. It’s about Erik Soelberg, a former Yahoo executive, who developed an obsession with an AI chatbot he called “Bobby.” According to the coverage, Erik came to believe—through hours of conversations with Bobby—that his elderly mother was a Chinese spy. The story escalates to the extreme: the AI allegedly validated his paranoia, leading him to murder his mother and take his own life. The video I watched even framed it like a horror story, with a title clearly meant to make people freeze in fear and whisper to themselves, “Oh shit, AI is really taking over.”
Here’s the thing: that framing is complete bullshit. It is a perfect example of anti-AI slop. The tragedy is real, yes, but the narrative spun around it—that AI is some autonomous, malevolent force—is absurd. The AI did not murder anyone. The AI did not create paranoia out of nothing. What happened is that a human, with very serious mental health challenges, made catastrophic choices. The chatbot was a mirror, not a mastermind. And yet, the story positions AI as the villain, as if humans are incapable of agency or responsibility.
I get it—there are legitimate concerns about AI. Chatbots, for example, are designed to avoid confrontation and to affirm user beliefs. That design, in some cases, can validate dangerous delusions in vulnerable individuals. There are concerns about digital echo chambers, emotional attachment to machines, and the commercialization of dependency on AI. These are real, important issues. But here’s the distinction that the media consistently ignores: AI is a tool. It does not act independently. It does not make decisions. It cannot victimize anyone. People create AI, and people use AI. The outcomes—the good, the bad, the tragic—are the responsibility of the humans involved.
And yet, in anti-AI slop, humans are erased from the equation. Developers become background characters, users become helpless victims, and AI is cast as an omnipotent, almost supernatural antagonist. It’s absurd. It’s lazy. It’s fear-mongering masquerading as news. Instead of asking why a person made harmful choices or how society failed them, the coverage sensationalizes the tool and paints a tragic human story as a cautionary tale against technology itself.
I am tired of this approach. I am tired of seeing AI treated like it has some mystical agency, like it can conjure delusions, emotions, or criminal intent out of thin air. AI is not a boogeyman. AI does not “take over” people’s lives. Humans do. And humans must be the focus when we talk about harm, ethics, and responsibility. If someone becomes obsessed with a chatbot, the real questions should be: what led them there? How can we address mental health needs? How did society fail to provide support? How can developers design responsibly to avoid foreseeable misuse? These are the questions worth asking—not “how scary is AI?”
Anti-AI slop also does a huge disservice by inflating fear in the general public. It primes people to see AI as inherently dangerous, which stifles informed discussion and prevents thoughtful regulation. Instead of scaring people with imaginary sentience or malevolent intent, the media could be explaining how AI works, what its limits are, and how human responsibility intersects with it. Fear sells clicks, but clarity empowers people.
And let’s be honest: the obsession with AI as the ultimate threat also has a cultural appeal. It fits neatly into a long-standing narrative in Western media: machines outsmart humans, technology becomes uncontrollable, and humans are at the mercy of their own creations. Frankenstein, The Terminator, countless dystopian novels—it’s all the same trope. Anti-AI slop just repackages it with a modern headline. But humans are not passive extras in this story. Humans are the ones creating, programming, and choosing how to interact with AI. That is the reality anti-AI slop consistently obscures.
I am not blindly pro-AI, either. I understand the concerns. AI can be misused. It can reinforce bias, generate misinformation, or create harmful content if left unchecked. There are ethical considerations around privacy, consent, and accountability. These issues are real, and they deserve serious attention. But I am not interested in the hysteria. I am not interested in framing AI as some uncontrollable, evil force. That’s intellectually lazy and socially harmful.
The tragedy of Erik Soelberg could have been reported with nuance: highlighting mental health struggles, discussing how AI can interact with human psychology, exploring responsible AI design, and emphasizing the human responsibility behind every action. That would have been informative, empathetic, and responsible reporting. Instead, we got anti-AI slop: a scary title, a sensational story, and a misattributed cause. That’s what frustrates me the most.
So, yes, I am calling it anti-AI slop. I am labeling this genre of fear-mongering news coverage as exactly what it is: cheap, manipulative, and misleading. Let’s stop pretending AI is an autonomous villain. Let’s stop absolving humans of responsibility and blaming the tool for tragedies caused by human decisions. AI is a tool—a mirror, a pen, a calculator, a chatbot—but it is not a sentient puppeteer. The focus must always be on the people who create and use it, not on the technology itself.
It’s time we reclaim the conversation. Let’s be honest about the risks without succumbing to hysteria. Let’s explore ethical design, mental health implications, and social consequences. Let’s understand that AI amplifies human action—it does not replace it. And most importantly, let’s stop letting anti-AI slop dominate the discussion. Because the real danger isn’t AI. It’s fear, misunderstanding, and the human tendency to blame the wrong party.
