“AI Is Inherently Bad” Is a Slippery Claim, and the Debate Is More Complicated Than It Looks

Image: an artist's illustration of artificial intelligence, depicting language models that generate text. Created by Wes Cockx as part of the Visualising AI project.

When people say AI is "inherently bad" or "inherently evil," that framing creates a problem right away. It turns a complex, evolving technology into something absolute and fixed, as if its value were already fully determined before we even look at how it is actually being used.

That kind of thinking is slippery because it tends to erase everything AI is already doing in practical, real-world contexts. It ignores the fact that people are actively using it to learn, to communicate, to translate ideas, to assist with accessibility, to brainstorm, to code, to research, to write, and to solve problems. It also ignores the possibility that AI could be put to even more beneficial uses in the future, especially if it is developed and regulated responsibly.

When a tool gets labeled as “inherently evil,” the conversation stops being about use and becomes about identity. And once that happens, nuance tends to disappear.

One thing that makes this whole debate even more complicated is how politically inconsistent the reactions to AI actually are.

You will see right-wing conspiracy spaces that are strongly anti-AI, sometimes framing it in apocalyptic or religious terms, even though parts of the broader right-wing political ecosystem—including figures aligned with the Trump era—have shown interest in AI development and adoption. At the same time, you also see Silicon Valley-aligned conservatives and business-oriented groups pushing for rapid AI expansion. So even within one political “side,” there is no unified position.

That contradiction raises an obvious question: if AI were truly just obviously good or obviously evil, why would reactions be so fractured across groups that are otherwise politically aligned?

Why would some people in pro-business, pro-tech political circles be skeptical or hostile toward AI? Why would conspiracy communities latch onto it as a threat? Why would artists, workers, academics, tech enthusiasts, and politicians all approach it differently?

The answer seems to point less toward a simple truth about AI itself and more toward something else: AI is not a single issue with a single moral label. It is a multi-layered technology that touches economics, labor, culture, information, education, power structures, and identity all at once.

And because of that, people are reacting to different parts of it for different reasons.

For me, that is where nuance becomes unavoidable.

I do not think the conversation is as simple as “AI good” or “AI bad.” I think there are real risks, real harms, and real concerns that deserve serious attention. But I also think there are real benefits that often get dismissed too quickly when the discussion becomes emotionally charged or politically polarized.

And there is another layer to this that is worth thinking about, even if it is uncomfortable: the intensity of the pushback against AI does not always look entirely organic.

I am not saying that opposition to AI is fake or invented. There are absolutely real people with real concerns. There are artists worried about their work being used without consent. There are workers worried about job displacement. There are privacy advocates concerned about surveillance. There are environmental critics raising legitimate points about energy consumption.

All of that is real.

But at the same time, it is fair to ask whether all of the broader narrative amplification is purely spontaneous. In many major technological shifts, you also see economic interests shaping public perception. You see industries that feel threatened by disruption pushing counter-narratives. You see institutions that benefit from the status quo resisting change. You see misinformation and fear being amplified when it serves someone’s interests.

That does not require a single coordinated conspiracy to exist. It can happen through normal incentives: competition, profit protection, influence, and fear of disruption.

So the question becomes less “is AI good or evil?” and more “who benefits from which narrative about AI, and why?”

Because when you look closely, AI is not just a tool—it is a shift in power. And whenever power shifts, resistance shows up from multiple directions for multiple reasons, some principled, some economic, some ideological, and some emotional.

That is why the debate feels so unstable. It is not because people are irrational. It is because they are reacting to different stakes all at once.

So when I look at the claim that AI is inherently bad, I cannot really accept it as a complete or useful framing. It flattens too much. It removes too much context. It ignores too many lived experiences. And it stops people from asking better questions about how the technology is actually being shaped and used.

The more honest conversation is messier. It involves tradeoffs. It involves competing interests. It involves real risks and real benefits existing at the same time.

And most importantly, it requires acknowledging that AI is not a fixed moral object. It is a tool embedded in systems—and those systems determine a lot of what it becomes.
