For a while, I held a political prediction about artificial intelligence that I genuinely expected to come true. I believed conservatives, who often embraced business expansion, rapid technological growth, and market-driven innovation, would remain strongly pro-AI. I also thought liberals, progressives, and leftists, many of whom raised early concerns about labor displacement, surveillance, corporate abuse, and environmental strain, would eventually soften their stance and begin defending AI once conservatives turned against it. I expected the typical culture war reversal: the sides would flip.
At least right now, that does not appear to be what happened.
Instead, what we are seeing in 2026 is something more chaotic and more revealing. Yes, some conservatives have started turning against AI. Some of that criticism is grounded in legitimate concerns about censorship, labor disruption, dehumanization, or distrust of large tech companies. But some of it has also drifted into bizarre conspiratorial territory, framing AI as some kind of spiritual evil, demonic force, or apocalyptic religious threat. Those narratives exist, and they deserve to be challenged.
But what is striking is that outside of pushing back against the most extreme conspiracy claims, there is not exactly some massive movement rushing in to defend AI either. In fact, across multiple political camps, public hostility toward AI seems to have grown over the past several months. Suspicion has increased. Cynicism has increased. Fatigue has increased. People are increasingly seeing AI less as a miracle and more as a threat, a scam, a shortcut, or a destabilizing force.
And to be clear, I am not blind to the glaring problems with AI. There are real concerns. Serious concerns. Concerns about job displacement, exploitation of creative labor, misinformation, privacy erosion, surveillance systems, monopolization by giant corporations, environmental costs, and the possibility of governments or bad actors weaponizing the technology. These are not fake issues. These are not trivial complaints. They matter.
But I am also not entirely anti-AI either. Because I still believe AI can be used ethically. I still believe tools themselves are not automatically evil simply because they can be abused. The internet can be abused. Smartphones can be abused. Search engines can be abused. Cameras can be abused. Social media can be abused. Yet society generally distinguishes between unethical uses of a technology and the existence of the technology itself.
That distinction often disappears in AI discourse.
One point BadEmpanada raised recently, in a video arguing that parts of the left are wrong about AI, is worth examining. People often criticize AI for using large amounts of energy and resources. That criticism has truth to it. Training and running large models can be resource intensive. Data centers consume electricity. Water cooling systems can create strain. These are real environmental considerations.
But it is also true that many other things people casually use every day consume enormous resources too. Search engines consume power. Streaming platforms consume power. Cryptocurrency certainly consumes power. Gaming systems consume power. Cloud storage consumes power. Phones, tablets, laptops, televisions, refrigerators, air conditioners, dryers, washers, and endless digital conveniences all consume energy. The modern internet itself is resource hungry. Massive server farms power ordinary life.
Yet many people seem willing to ignore those realities until AI enters the conversation, and then suddenly a moral panic emerges. That inconsistency is worth discussing. If environmental concern is real, then it should be broad, systemic, and honest, not selectively activated depending on what technology is unpopular that month.
Now, that does not mean every criticism of AI is hypocrisy. It does not mean people cannot focus on emerging harms. It does not mean AI gets a free pass. It means the conversation should be principled rather than performative. If someone opposes AI because of environmental costs, labor exploitation, monopolistic control, or social harms, that can be a coherent stance. But if the outrage only appears when AI is the topic while everything else gets ignored, then it can feel less like principle and more like grandstanding.
The environmental argument in particular is complicated. Our environment had been under severe stress for decades before mainstream AI tools exploded into public awareness. Industrial pollution, fossil fuel dependency, deforestation, overconsumption, plastic waste, corporate deregulation, militarization, and unsustainable growth models were damaging the planet long before AI chatbots or image generators became household terms. AI did not invent ecological crisis. AI entered an already damaged system.
That does not absolve AI companies from responsibility. It does not mean we should shrug and say it is too late. It does not mean environmental impacts no longer matter. It means blaming AI as though it singlehandedly caused planetary decline ignores history. The roots of ecological crisis are older, deeper, and tied to broader economic systems.
What I think happened politically is that AI did not fit neatly into left-versus-right alignment the way I expected. Instead, AI scrambled the map. Conservatives may oppose it for cultural or conspiratorial reasons, or because they distrust Silicon Valley. Leftists may oppose it for labor and anti-corporate reasons. Liberals may worry about misinformation and democracy. Artists may oppose it over intellectual property and creative theft. Workers may fear replacement. Students may fear educational decay. Privacy advocates may fear surveillance. Tech optimists may support it but want guardrails.
That means AI is not a normal partisan issue. It is a cross-ideological anxiety issue.
And maybe that is why my prediction was wrong. I assumed political tribes would absorb AI into the usual culture war script. But instead, many people across ideologies distrust it for different reasons. The coalitions are unstable. The arguments overlap in strange ways. The defenders are quieter than expected. The critics are louder than expected.
My own position remains mixed. I do not worship AI. I do not fear it as some supernatural evil either. I do not think every use is good, and I do not think every use is bad. I think AI can be exploitative under profit-driven systems, but I also think it can be educational, assistive, creative, medical, accessible, and useful when governed ethically.
The real battle is not whether AI exists. It is who controls it, how it is used, who benefits, who gets harmed, what rules exist, and whether the public has any say at all.
That is where the focus should be. Not panic. Not blind hype. Not lazy tribalism. Not conspiracy thinking. Real accountability. Real nuance. Real standards.
Because like many technologies before it, AI will likely reflect the values of the systems that deploy it. If greed dominates, AI may deepen harm. If ethics dominate, AI may help people.
And right now, that question matters far more than whether my old political prediction turned out wrong.
