12.3.2025

How AI Feeds The Clean Energy Misinformation Machine

AI-driven misinformation affecting public perception of clean energy.

Social media has become an increasingly powerful tool for disseminating misinformation about clean energy. A growing number of Facebook groups, influencers, and online forums make a living by spreading mis- and disinformation about renewable energy sources like solar, wind, and hydroelectric power. Safiya Umoja Noble, an AI expert at UCLA, tells us that neither corporate self-policing nor government regulation has kept pace with the growth of today’s technology and its potential for harm. Instead, AI feeds a sense that search engines simply reflect ourselves back to us, so we don’t stop to think of search results as subjective.

We use search engines like fact checkers. It’s only been recently that more users have gained an awareness of “the way these platforms are coded and how they prioritize certain types of values over others,” says Noble in a UCLA interview. The algorithms that govern our Google results are just one of the multiplying ways that artificial intelligence is shaping our lives and livelihoods.

Because of algorithmic influences, it’s important for each of us to learn more about why the internet works the way it does, what that means for an AI-powered future, and how clean energy could be affected.

We tend to assume that, because internet searches are reliable for fairly meaningless inquiries, they must be reliable for everything else. “But if you use that same tool for a question that is social or political, do you think you’re going to get a reliable answer?” Noble asks. To answer that question, she offers some insights into the way search algorithms work.

  • Searches start with an algorithm that sorts through millions of potential websites.
  • The companies that own search engines build responses that favor the highest bidder — a cottage industry exists to figure out how to manipulate search engines.
  • Through the gray market of search engine optimization, the search is influenced by industries, foreign operatives, and political campaigns.
  • Particular information emerges on the first page that reflects the worldview of the influencers.

In essence, there’s culpability on the part of companies that make products that are easily manipulable.
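
To make that manipulability concrete, here is a minimal, hypothetical sketch in Python of a ranker in which a paid or SEO boost outweighs topical relevance. The page names, the `seo_spend` field, and the weighting are invented for illustration; real ranking systems are vastly more complex, but the basic lever is the same.

```python
# A toy illustration of how an ad- or SEO-weighted ranker can bury relevance.
# All pages, fields, and weights here are invented for illustration only.

def rank(pages, query_terms, seo_weight=10.0):
    """Score pages by keyword overlap plus a paid/SEO boost."""
    def score(page):
        relevance = sum(term in page["text"].lower() for term in query_terms)
        # The boost dwarfs relevance, so whoever pays most rises to the top.
        return relevance + seo_weight * page["seo_spend"]
    return sorted(pages, key=score, reverse=True)

pages = [
    {"url": "gov-lab.example/wind-study",
     "text": "Peer-reviewed wind turbine data", "seo_spend": 0.0},
    {"url": "astroturf.example/wind-myths",
     "text": "Wind turbines: hidden dangers", "seo_spend": 1.0},
]

for page in rank(pages, ["wind", "turbine"]):
    print(page["url"])
# The SEO-boosted page prints first despite equal keyword relevance.
```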

To explain this another way, we all know we hold certain values. We hold these values strongly and work to ensure they’re met. That’s the rub. “So that means we would have to acknowledge that the technology does hold a particular set of values or is programmed around a set of values,” she explains. “And we want to optimize to have more of the values that we want.”

A technology that’s completely neutral and void of any markers that differentiate people means we default to the priorities driven by our own biases.

However, Noble admonishes us that “if we want to work toward pluralistic and pro-democratic priorities, we have to program toward the things we value.”

That’s an important caveat, as large language models don’t have agency, so they can’t refuse programming — they’re merely statistical pattern-matching tools. “So the models are limited in being able to even produce certain types of results,” she notes.
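
As a rough intuition for “statistical pattern matching,” here is a toy bigram model in Python. It is nothing like a production LLM, and the corpus is invented, but it shows the two properties the quote points at: the most frequent continuation wins regardless of truth, and the model cannot produce a transition it never saw.

```python
# A toy bigram model -- not a real LLM -- illustrating statistical
# pattern matching. The corpus is invented for illustration only.
from collections import Counter, defaultdict

corpus = ("wind power is unreliable . "
          "wind power is unreliable . "
          "wind power is clean").split()

transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent continuation, true or not."""
    follows = transitions.get(word)
    return follows.most_common(1)[0][0] if follows else None

print(most_likely_next("is"))          # 'unreliable' -- the majority pattern wins
print(most_likely_next("geothermal"))  # None -- it can't produce what it never saw
```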

With this backdrop, think of the algorithmic impact of Big Oil, which spent $450 million to influence Donald Trump and Republicans throughout the 2024 election cycle and 118th Congress. This funding includes direct donations, lobbying, and advertising to support Republicans and their policies. In the 2024 election cycle, oil and gas donors spent:

  • $96 million in direct donations to support Donald Trump’s presidential campaign and super PACs between January 2023 and November 2024
  • $243 million lobbying Congress
  • nearly $80 million on advertising supporting Trump and other Republicans or policy positions supported by their campaigns
  • more than $25 million to Republican down-ballot races, including $16.3 million to Republican House races, $8.2 million to Republican Senate races, and $559,049 to Republican governors’ races
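
Summed, those line items (roughly $96 million + $243 million + $80 million + $25 million) come to about $444 million, which squares with the approximately $450 million headline figure.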

That investment is already paying off in many ways, including spreading climate and clean energy mis- and disinformation through algorithmic manipulation.

No longer are obstructionist groups primarily attacking the reality of climate change; in fact, many obstructionists now claim that climate change needs to be addressed. But hold on a bit. “Solutions denial” has replaced “climate denial” in a way that’s equally devastating for a sustainable future. In the case of the California wildfires, for instance, representatives of fossil fuel interests will emphasize water supply and forest floor management even when there is no forest floor to speak of in a desert climate.

Interest group propagandists like Tucker Carlson make the framing of arguments for remediation and land-use planning in the western US more difficult, no different than eastern US resistance to renewable energy efforts like offshore wind and solar subsidies. It still isn’t about water supply — it’s about construction regulation and regional planning.

Why Large Language Models Are So Persuasive

ChatGPT is a type of AI that’s built on what’s called a “large language model.” These models scan and absorb nearly everything that’s available on the internet into their training data. Used as a search engine, ChatGPT doesn’t differentiate between propaganda and evidence. It takes in everything from copyrighted works and academic scholarship to random subreddits, “as if these things are all equally reliable.” A lot of what large language models produce isn’t true, so AI feeds climate and other kinds of mis- and disinformation.
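
A minimal sketch of the “equally reliable” problem, with invented sources, claims, and counts: a naive ingestion pipeline weights text by how often it appears, not by where it came from, so a repeated falsehood dominates a single peer-reviewed fact.

```python
# Hypothetical sketch: a naive training pipeline weights text by volume,
# not by source reliability. Sources, claims, and counts are invented.

documents = [
    {"source": "peer-reviewed journal",
     "text": "utility-scale solar is now the cheapest new power", "copies": 1},
    {"source": "random subreddit",
     "text": "wind turbines cause widespread blackouts", "copies": 500},
]

training_stream = []
for doc in documents:
    # No reliability field is consulted -- repetition alone sets the weight.
    training_stream.extend([doc["text"]] * doc["copies"])

claim = "wind turbines cause widespread blackouts"
print(training_stream.count(claim), "of", len(training_stream),
      "training examples repeat the subreddit claim")
# 500 of 501 -- the model's statistics now favor the repeated falsehood.
```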

News stories around generative AI tools and their problems are quite common. Generative AI is implicated in a host of ethical issues and social costs, including:

  • bias, misrepresentation, and marginalization
  • labor exploitation and worker harms
  • privacy violations and data extraction
  • copyright and authorship issues
  • environmental costs
  • misinformation and disinformation

Young people are especially susceptible, says Noble. Her students come to class “and use propaganda sites as evidence, because they can’t quite tell the difference.”

Google and other companies are the first to say that they know these problems exist and that they’re working on them. The companies that are producing generative AI have released products that are not ready for everyday searching, according to Noble. She doesn’t see Google and other internet sites dealing with the power and inequalities in their systems. Instead, they tweak algorithms rather than remake them in profound, systemic ways. She explains:

“People who make the predictive AI models argue that they’re reducing human bias. But there’s no scenario where you do not have a prioritization, a decision tree, a system of valuing something over something else. So it’s a misnomer to say that we endeavor to un-bias technology, any more than we want to un-bias people. What we want to do is be very specific: We want to reduce forms of discrimination or unfairness, which is not the same thing as eliminating bias.”

Many of us might agree that democracy is messy. But are we ready to sacrifice our freedoms because some leaders in Silicon Valley believe they can design a better society? I think not. But, as Noble points out, “Those politics are imbued in the products they make, who they’re pointed toward, who’s experimented upon and who’s considered disposable around the world.”