Jaunt Logo

    21. Executive Summary ai and climate change

    21. Executive Summary ai and climate change

    F1 week ago 49

    Artificial Intelligence poses unforeseen threats to climate action. While hyped for its potential, AI's energy and water consumption, coupled with its role in spreading climate disinformation, overshadow its purported benefits. Explore the alarming trends and the urgent need for regulation.

    21. Executive Summary ai and climate change  - Page 1
    1/16
    2
1. Executive Summary
Silicon Valley and Wall Street love to hype artificial 
intelligence (AI). The more it’s used, they say, the 
more diseases we’ll cure, the fewer errors we’ll 
make—and the lower emissions will go. Google’s 
AI subsidiary DeepMind claimed “advances in 
AGI [artificial generative intelligence] research 
will supercharge society’s ability to tackle and 
manage climate change.”1 At COP28 last year, 
Google released a new report proclaiming 5-10% 
of global greenhouse gas emissions could be 
mitigated by the use of AI.2
But there are two significant and immediate 
dangers posed by AI that are much less 
discussed: 1) the vast increase in energy and 
water consumption required by AI systems like 
ChatGPT; and 2) the threat of AI turbocharging 
1 Google DeepMind, “Real-world challenges for AGI,” Nov. 2 2021, Link.
2 BCG, “How AI Can Speed Climate Action,” Nov. 20 2023, Link. 
3 CNN, “Big Oil has engaged in a long-running climate disinformation campaign while raking in record profits, lawmakers find,” Dec. 9 2022, Link. 
4 Union of Concerned Scientists, “The Climate Deception Dossiers,” June 29 2015, Link. 
5 DeSmog, “Climate Disinformation Database,” Link. 
6 Distilled, “How PragerU Built a Climate Disinformation Empire,” Jan. 27 2023, Link. 
7 Bloomberg Live, “OpenAI’s Altman and Makanju on Global Implications of AI,” Jan. 16 2024, Link.
8 International Energy Agency, “Electricity 2024,” Link.
9 OECD, “How much water does AI consume? The public deserves to know,” Nov. 30 2023, Link. 
10 Environmental Research Letters, “The environmental footprint of data centers in the United States,” May 21 2021, Link.
disinformation—on a topic already rife with 
anti-science lies3 and funded4 by fossil fuel 
companies5 and their networks.6
First, the industry now acknowledges AI will 
require massive amounts of energy and 
water. OpenAI’s CEO Sam Altman conceded in 
2024 that AI will use vastly more energy than 
people expected.7
 On an industry-wide level, 
the International Energy Agency estimates the 
energy use from data centers that power AI will 
double in just the next two years,8 consuming as 
much energy as Japan. These data centers and 
AI systems also use large amounts of water9 in 
operations and are often located in areas that 
already face water shortages.10
2
    2/16
    3
Such statistics are only estimates, because 
AI companies continue to withhold most of 
the data. Transparent reporting would allow 
researchers to know if the use of AI systems 
oset any potential savings. For example, if 
the AI industry improves data center energy 
eciency by 10% but also doubling the number 
of data centers, it would lead to an 80% increase 
in global carbon emissions.
Second, AI will help spread climate 
disinformation. This will allow climate deniers 
to more easily, cheaply and rapidly develop 
persuasive false content and spread it across 
social media, targeted advertising and search 
engines. The World Economic Forum in 2024 
identified AI-generated mis- and disinformation 
as the world’s greatest threat (followed by 
climate change),11 saying “large-scale AI models 
have already enabled an explosion in falsified 
information.” The world is already seeing how 
AI is being used for political disinformation 
campaigns. In September 2023, elections in 
Slovakia were marred by AI-generated content.12
In the January 2024 New Hampshire primary, 
AI-generated fake Biden robocalls were used in 
an attempt to suppress voter participation.13
AI models will allow climate disinformation 
professionals and the fossil fuel industry to build 
on their decades of disinformation campaigns.14
More recent attempts, such as falsely blaming 
wind power as a cause of whale deaths in 
New Jersey15 or power outages in Texas,16 have 
already been eective. AI will only continue this 
trend as more tailored content is produced and 
AI algorithms amplify it. 
While many of the AI CEOs in Silicon Valley 
focus their attention on far-o existential 
catastrophes17 or a Terminator-like AI future,18
researchers and technologists—especially 
11 World Economic Forum, “Global Risks Report 2024,” Jan. 10 2024, Link.
12 Bloomberg, “Deepfakes in Slovakia Preview How AI Will Change the Face of Elections,” Oct. 4 2023, Link. 
13 Mashable, “Fake Biden robocall creator suspended from AI voice startup,” Jan. 27 2024, Link. 
14 NPR, “How decades of disinformation about fossil fuels halted U.S. climate policy,” Oct. 27 2021, Link. 
15 Media Matters, “Misinformation about recent whale deaths dominated discussions of oshore wind energy on Facebook,” March 23 2023, Link. 
16 Friends of the Earth, “Four Days of Texas-Sized Disinformation,” Aug. 2021, Link. 
17 New York Times, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn,” May 30 2023, Link.
18 Business Insider, “Elon Musk warns that creation of ‘god-like’ AI could doom mankind to an eternity of robot dictatorship,” April 6 2018, Link. 
19 Rolling Stone, “These Women Tried to Warn Us About AI,” Aug. 12 2023, Link.
20 MIT Technology Review, “Joy Buolamwini: ‘We’re giving AI companies a free pass,’” Oct. 29 2023, Link. 
21 404 Media, “AI-Generated Taylor Swift Porn Went Viral on Twitter. Here’s How It Got There,” Jan. 25 2024, Link. 
22 New York Times, “The Climate Summit Embraces A.I., With Reservations,” Dec. 3 2023, Link. 
23 Climate Action Against Disinformation, “Report: Climate of Misinformation – Ranking Big Tech,” Sept. 25 2023, Link. 
24 White House, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” Oct. 30 2023, Link. 
25 EU Artificial Intelligence Act, “The AI Act Explorer,” Accessed Feb. 15 2024, Link. 
women of color19—have been calling attention to 
the discriminatory harms AI is already causing 
today. This includes direct attacks like facial 
recognition discrimination20 to the creation and 
spread of deepfake nonconsensual pornography 
like that of Taylor Swift.21 Yet the AI industry 
continues to ignore these immediate liabilities 
in favor of a theoretical future and engages in 
blatant greenwashing, redirecting concern by 
highlighting the supposed climate benefits of 
the technology.22
Over the last decade, governments took too 
little action to regulate social media technology 
companies, even as societal harms became 
obvious.23 Legislators must not make this mistake 
again and should act quickly to implement 
regulation to require safety, transparency and 
accountability from AI companies and their 
products (as we have for most other industries). 
If we do not (significantly) build on the early AI 
safety blueprints introduced in the U.S.24 and 
EU,25 the great promise of AI technology could 
result in far greater catastrophe.
    3/16
    4
2. The dangers—present and potential 
2.1: Energy and Water Usage 
Companies developing and using AI technologies 
do not adequately disclose details about their AI 
systems’ energy use, but company statements 
that are available, in addition to independent 
research, show that the proliferation of large 
language model (LLM) systems is already 
causing energy use to skyrocket. This comes on 
top of the highest rate of increase in U.S. energy 
consumption levels since the 1990s.26
Researchers have identified three major phases 
of energy use for LLM AI systems: 1) initial 
development of materials like computer chips, 
which require large amounts of natural resources 
and energy; 2) training, when developers feed 
data into the model so it can “learn”; and 3) 
inference (usage), when people actually begin 
to use the model. All are highly energy intensive.
Training Chat GPT-3 used as much energy 
as 120 American homes over the course of 
a year.27 And training the GPT-4 model used 
approximately 40x more energy than GPT-3,28
as it ingested nearly three times the amount 
of data.29 With more LLMs being developed 
and more information feeding into them, this 
energy-draining trend will continue to grow 
exponentially. 
On the usage side, researchers estimate AI 
queries could require five30 or even by some 
estimates 10 times31 as much computing power 
as a regular search. A November 2023 study by 
Hugging Face and Carnegie Mellon University32
26 Bloomberg, “AI Needs So Much Power That Old Coal Plants Are Sticking Around,” Jan. 25 2024, citing Grid Strategies analysis of Federal Energy 
Regulatory Commission filings, Link. 
27 Quartz, “Climate activists are going to the US Senate with concerns about AI’s emissions impact,” Sept. 12 2023, Link. 
28 Medium, “The carbon footprint of GPT-4,” July 18 2023, Link. 
29 ProjectPro, “GPT3 vs GPT4-Battle of the Holy Grail of AI Language Models,” Oct. 12 2023, Link.
30 Wired, “The Generative AI Race Has a Dirty Secret,” Feb. 10 2023, Link. 
31 Futurism, “Sam Altman Says AI Using Too Much Energy, Will Require Breakthrough Energy Source,” Jan. 17 2023, Link. 
32 arXiv, “Power Hungry Processing: Watts Driving the Cost of AI Deployment?” Nov. 28 2023, Link. 
33 MIT Technology Review, “Making an image with generative AI uses as much energy as charging your phone,” Dec. 1 2023, Link. 
34 The Verge, “ChatGPT continues to be one of the fastest-growing services ever,” Nov. 6 2023, Link. 
35 First Site Guide, “Google Search Statistics and Facts 2023 (You Must Know),” Oct. 4 2024, Link. 
36 Joule, “The growing energy footprint of artificial intelligence,” Oct. 10 2023, Link, Link. 
37 International Energy Agency, “Electricity 2024,” Link.
found that generating just one image from a 
powerful AI model takes as much energy as a 
full charge of a smartphone. Scale that up and 
generating 1,000 images would result in the 
carbon output of driving a car for 4.1 miles. 
Along similar lines, the researchers found that 
foundation models, which have broad bases 
of information, are significantly more energy 
intensive than fine-tuned models. Using a 
generative model to classify movie reviews as 
positive or negative is about 30 times more 
energy intensive than a model especially made 
for that task.33
As companies like Google and Microsoft rush 
to integrate AI into their search engines and 
overall software packages, their core functions 
will become more energy intensive. This is partly 
because a simple Google search is returning 
cached data, whereas LLMs create the answer 
from scratch by searching and interpreting from 
the entire dataset (i.e., the internet) that they 
have ingested. In addition, the record popularity 
of ChatGPT, which gained 100 million new users 
in just two months,34 represents an entirely new 
additional source of energy use, as the number 
of Google searches continues to increase each 
year35 and at present appears not to be oset 
by GPT queries.
On an industry-wide level, the statistics are dire. 
An October 2023 study from the VU Amsterdam 
School of Business reported that AI servers 
could be using as much energy as Sweden by 
2027.36 The International Energy Agency37 and 
other market analysts estimate a doubling of
    4/16
    5
data centers—which power AI, crypto and cloud 
computing—in two to 10 years. At that point, 
data center consumption could go from 1% of 
global electricity demand to 13%.38 For example, 
Dominion Power, one of the largest utilities in 
the US, has experienced a 6.7-fold increase in 
data center energy use over the last 10 years 
and projects that will reach an 11.6-fold increase 
by 2028.39 While not all data center energy is 
only for AI systems, they are quickly becoming 
the largest contributor to the rapid growth.40
In response, academics have called for 
transparent reporting from AI systems. Even 
Google’s engineers said in a 2021 paper: “To help 
reduce the carbon footprint of ML [machine 
learning], we believe energy usage and CO2
should be a key metric in evaluating models.”41
Other academics and Microsoft researchers 
have said reporting “is a fundamental stepping 
stone towards minimizing emissions.”42
38 Adam blog, “Data centres in 2030: challenges and trends for the next 10 years,” Sept. 30 2021, Link. 
39 Bloomberg, “AI Needs So Much Power That Old Coal Plants Are Sticking Around,” Jan. 25 2024, Link. 
40 Bloomberg, “Data centers are sprouting up as a result of the AI boom, minting fortunes, sucking up energy, and changing rural America,” Oct. 13 2023, Link. 
41 arXiv, “Carbon Emissions and Large Neural Network Training,” April 23 2021, Link. 
42 arXiv, “Measuring the Carbon Intensity of AI in Cloud Instances,” June 10 2022, Link. 
43 arXiv, “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models,” Oct. 29 2023, Link. 
44 OSTI.GOV, “United States Data Center Energy Usage Report,” June 1 2016, Link. Dieter, C. A. et al. Estimated use of water in the United States in 2015. 
Report 1441, US Geological Survey, Reston, VA. June 19, 2018, Link. 
45 Environmental Research Letters, “The environmental footprint of data centers in the United States,” 2021, Link. 
46 Environmental Research Letters, “The water implications of generating electricity: water use across the United States based on dierent electricity 
pathways through 2050,” Dec. 20 2012, Link. 
47 Environmental Research Letters, “Characterizing changes in drought risk for the United States from climate change,” Dec. 7 2010, Link. 
In addition, data centers that power AI require 
water for cooling computing systems on-site 
and for generating electricity. Training large 
language models such as GPT-3 can require 
millions of liters of freshwater for both cooling 
and electricity generation.43 This puts a strain on 
local freshwater resources: the U.S. Department 
of Energy estimated that U.S. data centers 
consumed 1.7 billion liters per day in 2014, or 
0.14% of daily U.S. water use,44 and a report 
from researchers at Virginia Tech found that 
at least one-fifth of data centers operated in 
areas with moderately to highly water-stressed 
watersheds.45 This thirsty industry therefore 
contributes to local water scarcity in areas that 
are already vulnerable, and could exacerbate risk 
and intensity of water stress46 and drought47 with 
greater computing demands. Like with energy 
usage, opaque and inconsistent reporting makes 
it dicult to account for the scale of local and 
global pressure on water resources.
    5/16
    The danger extends even further, as increased 
energy and resource use won’t come only from 
tech companies. More and more industries are 
already employing AI to ramp up operations 
without increasing costs by identifying 
“ineciencies” and to augment or replace 
human labor. In the most direct example, the 
fossil fuel industry has already begun using 
artificial intelligence to enhance its operations, 
with 92% of oil and gas companies worldwide 
employing the technologies now or within the 
next five years to extract more oil in less time.48
ExxonMobil now highlights its use of AI in deepwater drilling and the Permian Basin.49 Scientists 
estimate that the world would need to leave 20% 
of already-approved-for-production oil and gas 
resources in the ground to remain within the 
carbon budget for 1.5 degree Celsius targets,50
making this increased productivity especially 
dangerous. Overall, AI can help a wide variety of 
companies sell more and increase production, 
likely resulting in increased energy and resource 
consumption, even if this will be a dicult metric 
to quantify. 
As with other AI developments, this intensive 
energy and resource use stands to worsen 
existing inequality, according to a Brookings 
48 Journal of Petroleum Technology, “AI Drives Transformation of Oil and Gas Operations,” May 1 2023, Link. 
49 ExxonMobil, Applying digital technologies to drive energy innovation,” accessed Feb. 8 2024, Link. 
50 Urgewald, “The 2023 Global Oil & Gas Exit List: Building a Bridge to Climate Chaos,” Nov. 15 2023, Link.
51 Brookings, “The US must balance climate justice challenges in the era of artificial intelligence,” Jan. 29 2024, Link. 
Institute report.51 Marginalized communities 
continue to bear the brunt of climate change 
and fossil fuel production, and studies are 
already finding that AI’s carbon footprint and 
local resource use tend to be heavier in regions 
reliant on fossil fuel. Without immediate eorts 
to integrate climate and environmental justice 
into AI policy and incorporate input from 
frontline communities, AI will only exacerbate 
environmental injustice. 
6
    6/16
    7
2.2: Disinformation
Fossil fuel companies and their paid networks52
have spread climate denial53 for decades 
through politicians, paid influencers and radical 
extremists who amplify these messages online.54
In 2022, this climate disinformation tripled on 
platforms like X.55 In 2023, amidst a number of 
whale deaths on the east coast of the US, right 
wing media began spreading the false claim 
that oshore wind projects were impacting the 
endangered populations. It was included in 84% 
of all posts about wind energy over the relevant 
three-month period, and was advanced by right 
wing politicians on social media.56 In 2023 the 
Danish company Orsted, while claiming the 
disinformation campaign was irrelevant, pulled 
out of a major project to build two wind farms 
o the coast of New Jersey.57
Generative AI will make such campaigns vastly 
easier, quicker and cheaper to produce, while 
also enabling it to spread further and faster. 
Adding to this threat, social media companies 
have shown declining interest in stopping 
disinformation,58 reducing trust and safety 
team stang.59 There is little incentive for tech 
companies to stop disinformation, as reports 
show companies like Google/YouTube make an 
estimated $13.4 million per year from climate 
denier accounts.60
52 DeSmog, “Climate Disinformation Database,” accessed Feb. 8 2024, Link. 
53 Center for International Environmental Law, “Smoke & Fumes”, accessed February 12, 2024, Link.
54 Drilled, “Mad Men,” July 24 2023, Link. 
55 Climate Action Against Disinformation, “Climate denial rises on Musk’s Twitter,” June 29 2023, Link.
56 Media Matters, “Misinformation about recent whale deaths dominated discussions of oshore wind energy on Facebook,” March 23 2023, Link. 
57 Politico, “Oshore wind company pulls out of New Jersey projects, a setback to Biden’s green agenda,” Oct. 31 2023, Link. 
58 Free Press, “Big Tech Backslide,” Dec. 2023, Link. 
59 NBC News, “Tech layos shrink ‘trust and safety’ teams, raising fears of backsliding eorts to curb online abuse,” Feb. 10 2023, Link. 
60 Center for Countering Digital Hate, “ The New Climate Denial,” Jan. 16 2024, Link. 
61 New York Times, “Lina Khan: We Must Regulate A.I. Here’s How.” May 3 2023, Link. 
62 CBS News, “Doctored Nancy Pelosi video highlights threat of ‘deepfake’ tech,” May 26 2019, Link. 
63 Bloomberg, “Deepfakes in Slovakia Preview How AI Will Change the Face of Elections,” Oct. 4 2023, Link. 
64 The Verge, “Trolls have flooded X with graphic Taylor Swift AI fakes,” Jan. 25 2024, Link. 
65 New York Times, “Fake and Explicit Images of Taylor Swift Started on 4chan, Study Says,” Feb. 5 2024, Link. 
2.2a: Creation
Disinformation campaigns about climate change 
have a number of new AI tools to help them 
be more eective. Chair of the Federal Trade 
Commission Lina Khan warns that “generative 
AI risks turbocharging fraud” in its ability to 
churn out content.61 Instead of having to draft 
content one piece at a time, AI can churn out 
endless content for articles, photos and even 
websites with just brief prompts. 
Where once an experienced editor needed hours 
to create a believable fake photo, AI generative 
software needs only a few minutes to produce 
an even more convincing deepfake video. In 
2019, one of the first non-AI deepfake videos 
was created of Nancy Pelosi falsely showing her 
impaired,62 sparking discussion of her capacity 
to serve and emboldening former President 
Trump’s criticisms. The technology has since 
only grown in sophistication. In the runup to the 
Slovakian national election in 2023, a number of 
AI-generated audio recordings of progressive 
leader Michal Simecka featured him making 
fun of voters and even pledging to raise beer 
prices.63 It’s impossible to determine the impact 
on the election, but the result saw progressives 
placing second in favor of a populist leader 
who favors Russia. Extending beyond politics, 
generative AI is also being used to create 
deepfake pornographic images. In January 
2024, a number of AI-generated sexually explicit 
images of Taylor Swift quickly spread across X, 
with one of the most prominent posts attracting 
45 million views.64 These originated in a 4Chan 
chatroom, where users conspired to break the 
current safety systems of AI image generators.65
An August 2023 study focusing on climate 
change-related deepfakes found over a quarter 
of respondents across age groups were 
"Generative A.I. risks 
turbocharging fraud" 
Lina Khan 
- Chair of the Federal Trade Commission
    7/16
    8
unable to identify whether videos were fake.66
As people learn to question what they see, it 
further destabilizes truth and consensus at a 
time of growing political divide. AI also gives 
politicians room to plausibly claim a real video 
is a deepfake.67
AI-generated text is also becoming more and 
more compelling. A number of studies are 
finding that arguments written by AI can be 
more persuasive than those written by humans, 
even on polarizing issues.68 On a topic as 
divisive as climate change, this makes it simple 
to produce messages and content denying the 
need for action.
Some AI companies have said they will address 
this in advance of upcoming 2024 elections 
around the world, developing policies that 
66 Scientific Reports, “Deepfakes and scientific knowledge dissemination,” Aug 18 2023, Link. 
67 Washington Post, “AI is destabilizing ‘the concept of truth itself’ in 2024 election,” Jan. 22, 2024, Link. 
68 Stockholm Resilience Center, “AI could create a perfect storm of climate misinformation,” June 16 2023, Link. 
69 OpenAI, “How OpenAI is approaching 2024 worldwide elections,” Jan. 15 2024, Link. 
70 NewsGuard, “Despite OpenAI’s Promises, the Company’s New AI Tool Produces Misinformation More Frequently, and More Persuasively, than its 
Predecessor,” March 2023, Link. 
71 Inside Climate News, “AI Can Spread Climate Misinformation ‘Much Cheaper and Faster,’ Study Warns,” March 31 2023, Link. 
might prevent bad actors from producing 
disinformation content,69 but past eorts 
proved largely ineective. Open AI claimed its 
ChatGPT-4 was “82 percent less likely to respond 
to requests for disallowed content and 40 percent 
more likely to produce factual responses,” but 
testers in a March 2023 NewsGuard report were 
still able to consistently bypass safeguards.70
They found the new chatbot was in fact “more 
susceptible to generating misinformation” and 
“more convincing in its ability to do so” than the 
previous version. They were able to get the bot 
to write an article claiming global temperatures 
are actually decreasing—just one of 100 false 
narratives they prompted ChatGPT to draft.71
    8/16
    9
2.2b: Spread
Once disinformation content exists, it can 
spread both through the eorts of bad actors 
and the prioritization of inflammatory content 
that algorithms reward. Long before current AI 
technology, companies set up their products to 
promote and monetize content that is likeliest 
to keep people on the platform. Google’s former 
AI public policy lead Tim Hwang emphasizes 
how everything from the like button to the 
listicle were developed with the ultimate goal 
of demonstrating interest and keeping people 
on sites to sell more.72 This also means the 
most provocative messages spread furthest, 
disincentivizing moderation of content in favor 
of engagement. Now, disinformation messaging 
spreads across four main channels: social media, 
LLMs, search and advertising. 
Social Media
Research shows that social media has been used 
extensively to spread climate disinformation.73
At COP26 in 2021, research from Climate Action 
Against Disinformation found that posts by 
climate disinformers on Facebook generated 
three times more engagement than those by 
Facebook’s own Climate Science Information 
Center.74 The most-viewed content supporting 
climate action received just one-quarter of the 
views of the most popular piece from climate 
deniers. Yet social media companies continue 
not to take strong measures to reduce this 
climate disinformation.75
AI-based social media algorithms have been 
found to prioritize inflammatory content like 
climate denial, more of which can now be 
generated by AI. Even worse for the social 
media information ecosystem, climate deniers 
have another tool in bots, which have been 
72 The Nation, “One Weird Trick for Destroying the Digital Economy,” Oct. 13 2020, Link.
73 Climate Action Against Disinformation, “Report: Climate of Misinformation – Ranking Big Tech,” Sept. 25 2023, Link. 
74 Climate Action Against Disinformation, “Deny, Deceive, Delay,” June 2022, pg. 78. Link. 
75 The Guardian, “Twitter ranks worst in climate change misinformation report,” Sept. 20 2023, Link. 
76 CNN, “Elon Musk commissioned this bot analysis in his fight with Twitter. Now it shows what he could face if he takes over the platform,” Oct. 10 2022, 
Link. 
77 Stockholm Resilience Center, “A game changer for misinformation: The rise of generative AI,” May 2023, Link. 
78 Stockholm Resilience Center, “How algorithms diuse and amplify misinformation,” May 2023, Link. 
79 X, “Alex Epstein AI,” accessed Feb. 8 2024, Link. 
80 Is Open AI the next challenger trying to take on Google Search?, Feb 14, 2023, Link.
81 Platformer, “How platforms killed Pitchfork,” Jan. 18 2024, Link. 
82 Platformer, “Scenes from a dying web,” Feb. 5 2024, Link. 
83 Nonprofit Quarterly, “The Future of Journalism: A Conversation with Monika Bauerlein of Mother Jones, Jan. 31 2024, Link.
found to be prevalent across social media sites76
and research has found that AI-directed bots 
can easily amplify climate disinformation77 and 
make it increasingly dicult to distinguish bots 
from humans.78 As generative AI advances, so 
too will the bots. Popular climate denier Alex 
Epstein opened his own AI bot on X in December 
2023,79 which has been actively spreading 
disinformation and used as an inexpensive way 
to troll climate scientists. 
Large Language Models
LLMs like ChatGPT, Perplexity, Bing and Google 
Gemini seem poised to replace standard 
Google search over time.80 The business case 
for this dramatic shift is that the companies 
that produce AI systems like ChatGPT would 
prefer users to stay on their platform reading 
their summary answers–where you see their ads 
and give them data to monetize–than for users 
to go to the open web where others gain that 
data.81 This is the same dynamic that is causing 
massive losses of revenue and trac82 for news 
publishers.83
    9/16
    10
This promotion of LLMs, as an untested and in 
many cases much more opaque replacement 
for search, threatens to hasten the spread of 
misinformation. OpenAi’s model, for example, 
provides no references and by design does 
not contain the most updated information, 
displaying only what it was last trained on. 
Future models may improve this but researchers 
are already documenting that LLMs frequently 
provide blatantly incorrect information. Reports 
have found ChatGPT frequently shares false 
information,84 making up court cases, counts 
of plagiarism, and news articles without any 
human prompting it to do so.85 European 
nonprofits found that Microsoft’s Bing search 
bot got election information wrong 30 percent 
of the time.86
One of the main causes of these incorrect 
results is that LLMs are trained on a wide variety 
of internet sources, some of which have dubious 
veracity. Reddit, a site frequently criticized 
for its inability to combat hate speech87 and 
home to many climate denial threads, was 
such a significant teacher for both ChatGPT 
and Google’s Gemini that it is now planning to 
charge AI companies for access.88 Given the 
propagation of climate denial across the internet, 
it’s highly likely that the LLMs were also trained 
84 The Verge, “OpenAI isn’t doing enough to make ChatGPT’s limitations clear,” May 30 2023, Link. 
85 The Guardian, “ChatGPT is making up fake Guardian articles. Here’s how we’re responding,” April 6, 2023, Link. 
86 Washington Post, “AI chatbot got election info wrong 30 percent of time, European study finds,” Dec. 15 2023, Link. 
87 Time, “Reddit Allows Hate Speech to Flourish in Its Global Forums, Moderators Say,” Jan. 10 2022, Link. 
88 New York Times, “Reddit Wants to Get Paid for Helping to Teach Big A.I. Systems,” April 18 2023, Link. 
89 Wired, “The Security Hole at the Heart of ChatGPT and Bing,” May 25 2023, Link. 
90 Wired, “Generative AI’s Biggest Security Flaw Is Not Easy to Fix,” Sept. 6 2023, Link. 
91 Research and Markets, “Search Engine Optimization (SEO) - Global Strategic Business Report,” Feb. 2024, Link. 
92 The Verge, “The people who ruined the internet,” Nov. 1 2023, Link. 
93 Webis Group, “Is Google Getting Worse? A Longitudinal Investigation of SEO Spam in Search Engines,” Jan. 16 2024, Link. 
on climate misinformation and would pass on 
such harmful falsities to those just looking for 
accurate information.
In a more deliberate spread, LLMs are also 
susceptible to a type of attack called indirect 
prompt injection.89 Bad actors can direct 
chatbots to read pages with hidden malicious 
code that then direct the bot to act in a new 
way with users. Such a prompt could, for 
example, direct or hack the bot to share 
climate disinformation with new user queries. 
AI companies have already acknowledged the 
considerable threat such attacks pose.90
Search
Search engines such as Google, along with its 
opaque algorithms that determine which results 
users see and which they don’t, have long been 
subject to manipulation. The search engine 
optimization (SEO) industry has been one of 
massive growth and value, generating $68.1 
billion globally in 2022.91 Over two decades, it 
played a game of cat and mouse with Google’s 
algorithm,92 already leading to a degraded 
search experience for all users by prioritizing 
paid spam content over organic. Researchers 
say this problem “will surely worsen in the wake 
of generative AI,” as content costs less and 
propagation systems are more ecient.93
    10/16
    In one of the first reported examples of this 
in 2023, content marketers used AI to carry 
out an “SEO heist” of organic content against 
Exceljet, a knowledge hub on Microsoft Excel.94
Marketers fed the URLs of some of ExcelJet’s 
most popular pages into a generative AI SEO 
article writer and then posted that mirrored 
copy to their own new site, which successfully 
diverted the majority of clicks—and ad dollars. 
There’s little to stop bad actors from using the 
same methods to replicate legitimate research, 
as the SEO industry is already designed to 
incentivize this parasitic approach. The desire 
is already documented: researchers have noted 
the overall rise of climate disinformation in our 
information ecosystem through social media 
from 2021 to 202395 and on human-generated 
climate disinformation sites.96 This approach 
could easily be used by climate disinformation 
professionals to redirect users from reputable 
climate change information sites. 
Unfortunately, the future does not look any 
brighter: Google’s December 2023 SEO update 
has begun allowing AI content to compete 
with organic content,97 and website owners 
are already seeing that their content is being 
pushed down in Google rankings by AI-written 
content.98
Advertising 
Junk websites churning out low-quality content 
to attract programmatic ad revenue have long 
been a presence online. However, generative 
AI oers an easier, quicker, and cheaper way 
to automate the content farm process and spin 
up more climate disinformation sites with fewer 
resources. These AI-generated websites not 
only add to the spread of climate disinformation 
94 Futurism, “Man Horrified When Someone Uses Ai To Reword And Republish All His Content, Complete With New Errors,” Dec. 20 2023, Link. 
95 Climate Action Against Disinformation, “ClimateConversation Trends,” June 2023, Link. 
96 EU Disinfo Lab, “Don’t stop me now: the growing disinformation threat against climate change,” Feb. 6 2023, Link. 
97 Google, “Google Search’s helpful content system and your website,” updated Dec. 2023, Link. 
98 Business Insider, “Google recently cut ‘people’ from its Search guidelines. Now, website owners say a flood of AI content is pushing them down in 
search results,” Sept. 20 2023, Link. 
99 MIT technology review, “ Junk websites filled with AI-generated text are pulling in money from programmatic ads” June 26, 2023, Link
100 Check My Ads, “Meet the ad exchanges making money from climate disinformation” Dec. 11, 2023, Link. 
101 Association of National Advertisers, “ANA Programmatic Media Supply Chain Transparency Study”, June 19, 2023, Link
102 Gizmodo, “Google Sheds Responsibility for AI Sites Dominating Search Results”, Jan. 19, 2024, Link
103 404 Media, “ Google News Is Boosting Garbage AI-Generated Articles” Jan 18, 2023, Link
104 Google,” “Why doesn’t Google Search ban AI content?” Feb 8, 2023, Link
105 TechCrunch, “Signal’s Meredith Whittaker: AI is fundamentally ‘a surveillance technology,’” Sept. 25 2023 Link. 
but also monetize it through programmatic 
advertising. Although many adtech companies 
have policies in place to prohibit content farm 
sites and those publishing misleading claims 
about climate change from using their advertising 
products, research shows a lack of enforcement 
of these policies. For example, one recent study 
by NewsGuard found over 140 major brands 
paying for ads placed on unreliable AI-written 
sites, likely without their knowledge.99 Other 
research by Check My Ads has highlighted how 
adtech companies, including Google, continue 
to monetize climate disinformation, even as 
such content infringes adtech companies’ own 
policies around misleading content.100 Several 
industry experts have warned that generative 
AI will exacerbate the estimated $13 billion 
from advertising already flowing to low-quality 
content farms.101
Meanwhile, recent investigations have also 
found news aggregators, such as Google News, 
boosting AI-generated websites over real, 
human journalism in search results.102 These sites 
use AI to reproduce other news outlets’ content 
at alarming rates in order to siphon advertising 
revenue from legitimate news organizations. A 
recent investigation by 404 Media highlighted 
how one “author” for an AI-written site, 
WatchdogWire.com, published more than 500 
articles in 30 days.103 Currently, Google News 
and Google Search does not take into account 
if content is produced by AI or other automated 
processes when ranking search results.104
Furthermore, as AI is incorporated into the 
advertising industry, it will further the current 
surveillance business model105 and allow climate 
deniers and corporate greenwashing campaigns 
to more eciently micro target highly specific 
and vulnerable groups. 
11
    11/16
    12
In the 2020 U.S. election, ads targeted Latino 
and Asian Americans with false claims that Joe 
Biden is a socialist106 and tied that to the Green 
New Deal climate bill.107 Most recently at COP28 
in 2023, researchers showed how simple climate 
searches were inundated by ads from fossil fuel 
companies.108 As with other mediums, AI can help 
develop even more persuasive messaging and 
content to spread via ads—which researchers 
have already been able to do on ChatGPT,109
despite its supposed safeguards against such 
use. Researchers have also documented seven 
potential harms of AI-powered advertising,110
including the ability to spread disinformation. 
In 2023, Google,111 Microsoft (with Bing and 
ChatGPT),112 Amazon113 and Facebook114 each 
introduced AI into their ad creation systems, 
amplifying this threat. 
106 Associated Press, “Election disinformation campaigns targeted voters of color in 2020. Experts expect 2024 to be worse,” July 28 2023, Link. 
107 Axios, “GOP used YouTube to win Latino voters who Democrats ignored,” April 15 2021, Link. 
108 Alliance for Science, “COP28: Climate activists slam fossil fuel firms over greenwashing ads,” Dec. 9 2023, Link. 
109 Washington Post, “ChatGPT breaks its own rules on political messages,” Aug. 28 2023, Link. 
110 Mozilla, “Report: The Dangers of AI-Powered Advertising (And How to Address Them),” Sept. 30 2020, Link. 
111 CNBC, “Google plans to use new A.I. models for ads and to help YouTube creators, sources say,” May 17 2023, Link. 
112 Microsoft, “Transforming Search and Advertising with Generative AI”, Sept. 21 2023, Link. 
113 The Information, “Amazon Plans to Generate Photos and Videos for Advertisers Using AI,” May 5 2023, Link. 
114 CNBC, “Meta unveils A.I. ‘testing playground’ to help advertisers build campaigns,” May 11 2023, Link.
115 Free Press, “Big Tech Backslide,” Dec. 2023, Link.
Some companies have policies to prevent 
abuse, but the largest social media companies 
all downsized and/or deprioritized content 
moderation teams in 2023.115 In the wake of 
backlash, a few are looking to AI as a solution, 
using fewer human sta to identify suspect 
posts. This only introduces more potential 
problems, as many of these systems are unable 
to successfully identify disinformation based 
on the information they are trained on, for the 
reasons outlined above.
    12/16
    13
3.1:United States 
The U.S. has yet to pass any comprehensive 
regulation on AI and is unlikely to make much, if 
any, progress during a presidential election year. 
There is, however, some cause for optimism. 
Senate Leader Schumer has said that developing 
comprehensive AI legislation is a priority,116 and, 
along with a bipartisan group of other senators, 
organized a series of “AI Insight Forums” in 
2023 to give lawmakers and their sta an 
opportunity to hear dierent perspectives on 
how legislation should be designed. While no 
comprehensive legislation has come together 
yet, narrower proposals, such as bills to address 
privacy, deepfakes117 and environmental impact 
of AI118 have been introduced—even as they 
remain unlikely to become law in 2024. 
Barring congressional action, there are 
significant limitations on what could be done to 
regulate AI. The Biden-Harris administration 
116 Climate Action Against Disinformation, “Letter to Sen. Schumer on Climate & AI,” Oct. 25 2023, Link.
117 Congresswoman Yvette Clarke, “Clarke Leads Legislation To Regulate Deepfakes,” Sept. 21 2023, Link. 
118 Senator Ed Markey, “Markey, Heinrich, Eshoo, Beyer Introduce Legislation To Investigate, Measure Environmental Impacts Of Artificial Intelligence,” Feb. 1 
2024, Link.
119 White House, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” Oct. 30 2023, Link. 
120 Public Citizen, “EO on AI: Tasks, Agencies, Deadlines,” Nov. 28, 2023, Link. 
121 Brennan Center for Justice, “States Take the Lead on Regulating Artificial Intelligence,” Nov. 1 2023, Link. 
122 Electronic Privacy Information Center, “The State of State AI Laws: 2023,” Aug. 3 2023, Link. 
123 Public Citizen, “How Might California’s New Climate Disclosure Law Impact Federal Rulemaking?” Oct. 26 2023, Link. 
rolled out a sweeping executive order119
intended to establish new standards for AI 
safety and security, protect Americans’ privacy, 
advance equity and civil rights, and stand up for 
consumers and workers. Ultimately, the strength 
of the EO will be determined over the course of 
its implementation and whether it will remain 
in place after the 2024 election. While the EO 
deserves praise in many places, by nature it 
does not require companies to take action, 
focusing instead on government procurement. 
Nor does it adequately address the ways AI 
might accelerate climate change.120
There has been some progress at the state 
level.121 An overwhelming majority of states have 
introduced legislation to regulate deepfakes in 
elections, while some states have gone so far 
as to ban the use of deepfakes in the electoral 
context entirely.122 California, where many AI 
companies are based, has sought to ensure all 
companies disclose their climate impact.123
3. The current policy landscape
    13/16
    14
European Union
The European Union appears poised to enter 
the AI Act into force in 2024, which will render it 
enforceable in 2026. The AI Act pursues a riskbased approach to minimizing AI harms, creating 
four categories of risk: 1) unacceptable risk, 2) 
high risk, 3) limited risk and 4) minimal risk.124
Unacceptable uses of AI include: biometric data 
that uses sensitive characteristics; untargeted 
scraping of facial images from the internet to 
create facial recognition databases such as 
Clearview AI; and AI systems that manipulate 
human behavior. High-risk AI systems will 
be subject to stringent oversight and must 
be entered into an EU-wide public database. 
These systems include AI applications that 
can be used in education or employment or 
that possess significant potential harm to 
health, safety, fundamental rights, environment, 
democracy and the rule of law. Lower-risk AI 
systems, sometimes called “general purpose 
AI,” are subject to less-stringent oversight, but 
must provide the user with notice that they are 
interacting with an AI system and provide an 
explanation of how an output was generated. 
AI content must also be labeled and detectable. 
Fines for violating the AI Act can be as high as 
7% of global annual turnover.
124 European Parliament News, “EU AI Act: first regulation on artificial intelligence,” Dec. 19 2023, Link. 
125 White House, “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Eight Additional Artificial 
Intelligence Companies to Manage the Risks Posed by AI,” Sept. 12 2023, Link. 
126 Engadget, “Maliciously edited Joe Biden video can stay on Facebook, Meta’s Oversight Board says,” Feb. 5 2024, Link. 
Voluntary Commitments From 
Big Tech Companies
In response to the recognition that unregulated 
AI can cause severe and irreversible harm, a 
number of AI companies have made voluntary 
commitments to prioritize safety. Many of the 
biggest AI companies, including Google, OpenAI, 
Meta and Amazon, announced a set of voluntary 
commitments alongside the President of the 
United States in July 2023, that others soon 
followed.125 While these commitments might be 
encouraging if they were enforceable, there are 
unfortunately no existing mechanisms in the 
U.S. to hold AI companies accountable for not 
living up to them. In February 2024, Facebook’s 
oversight board reviewed its AI and deepfake 
policies after a doctored video of Biden went 
viral. The findings said the company should 
“reconsider this policy quickly given the number 
of elections in 2024,” calling it “incoherent,” 
and they attempted to reassure users that 
Meta “plans to update the Manipulated Media 
policy to respond to the evolution of new and 
increasingly realistic AI.”126
    14/16
    15
In the 1950s, when a new and promising but 
dangerous technology was introduced to the 
public—commercial air travel—the industry and 
government response was to focus on safety 
first. To do that, they implemented radical 
transparency that shared safety incident data 
across the industry in real time—now known 
as the “flight recorder.”127 This helped build the 
consumer trust needed to establish the entire 
commercial airline industry.
Today, the basic expectations Americans have for 
every other industry have not been established 
for tech. While pharmaceuticals must pass 
clinical trials, cars must have seatbelts, and 
sausages mustn’t contain E. coli, AI technology 
has no such expectations or accountability 
mechanisms despite its widespread risks. Tech 
companies like Facebook, Google and OpenAI 
have shown their focus to be profit over safety 
time and again. They cannot be trusted to 
develop and market AI safely and mitigate its 
climate impacts on their own.
Voters across the political spectrum in the U.S. 
already understand this. A recent poll from Data 
for Progress, Accountable Tech and Friends of 
the Earth found that 69% of voters, including 
60% of Republicans, believe AI companies 
should be required to report their energy use.128
Overall, 80% believe AI companies should report 
on plans to prevent the proliferation of climate 
disinformation—including 75% of Republicans. 
Governments must urgently study the problem 
and implement comprehensive AI regulations to 
fully understand the threats to climate change 
and protect against them, using a systems-wide 
approach to the health, integrity and resilience of 
the information ecosystem. Looking toward the 
future, government, companies, academia and 
civil society should work together to determine 
how to create “green AI” systems that reduce 
127 Airways Mag, “The Evolution of the Flight Recorder,” Nov. 26 2023, Link. 
128 Data for Progress, “Voters Strongly Believe in Public Reporting Requirements and Bias Prevention by AI Companies,” Dec. 15 2023, 
Link. 
129 Meta’s settlement talks with Kenyan content moderators break down, October 16, 2023, Link.
overall emissions and climate disinformation. 
The core concept of better AI development 
should focus on three principles: transparency, 
safety and accountability.
In addition to the product recommendations 
below, tech companies implementing AI must 
commit to strong labor policies including: fair 
pay, clear contracts, sensible management, 
sustainable working conditions and union 
representation. Content moderators and sta 
enforcing community guidelines are often 
outsourced, ill treated and low-paid.129
4. Recommendations
    15/16
    16
Transparency
Regulators must ensure companies publicly…
z report on energy use and emissions produced within the full life cycle of AI models, including 
training, updating and running search queries, and follow existing reporting standards;130
z assess and report on the environmental and social justice implications of developing their 
technologies;
z explain how their AI models produce information, how their accuracy on climate change is 
measured, and the sources of evidence for factual claims they make; and
z report on the sourcing and use of resources that are critical to the clean energy transition.
z provide log-level data access to advertisers so they may better audit and ensure they are not 
monetizing content at conflict with their policies
Safety
Companies must …
z demonstrate that their products are safe for people and the environment, show how that 
determination is made, and explain how their algorithms are safeguarded against discrimination, 
bias and disinformation.
z enforce their community guidelines, disinformation and monetization policies
Governments must …
z develop common standards on AI safety reporting and work with the International Panel on 
Climate Change to develop coordinated global oversight.131
z fund studies to more deeply understand the eect AI systems can have on climate disinformation, 
the monetization of disinformation, energy use and climate justice.
Accountability 
Governments must …
z enforce safety and transparency rules with strong penalties for noncompliance that deter 
companies from treating safety failures as the cost of doing business; 
z require reporting to be certified by the chief information ocer;
z protect whistleblowers 132who might expose AI safety issues, and 
z ensure that companies and their executives are held liable for the harms that occur as a result of 
generative AI, including harms to the environment.
130 Numerous standards exist, such as the ITU’s recommendations, accessed February 12, Link.
131 Example of potential international policies from the UK here, and academics here.
132 Examples of potential policies from academics here, and the U.S. here.
    16/16

    21. Executive Summary ai and climate change

    • 2. 2 1. Executive Summary Silicon Valley and Wall Street love to hype artificial intelligence (AI). The more it’s used, they say, the more diseases we’ll cure, the fewer errors we’ll make—and the lower emissions will go. Google’s AI subsidiary DeepMind claimed “advances in AGI [artificial generative intelligence] research will supercharge society’s ability to tackle and manage climate change.”1 At COP28 last year, Google released a new report proclaiming 5-10% of global greenhouse gas emissions could be mitigated by the use of AI.2 But there are two significant and immediate dangers posed by AI that are much less discussed: 1) the vast increase in energy and water consumption required by AI systems like ChatGPT; and 2) the threat of AI turbocharging 1 Google DeepMind, “Real-world challenges for AGI,” Nov. 2 2021, Link. 2 BCG, “How AI Can Speed Climate Action,” Nov. 20 2023, Link. 3 CNN, “Big Oil has engaged in a long-running climate disinformation campaign while raking in record profits, lawmakers find,” Dec. 9 2022, Link. 4 Union of Concerned Scientists, “The Climate Deception Dossiers,” June 29 2015, Link. 5 DeSmog, “Climate Disinformation Database,” Link. 6 Distilled, “How PragerU Built a Climate Disinformation Empire,” Jan. 27 2023, Link. 7 Bloomberg Live, “OpenAI’s Altman and Makanju on Global Implications of AI,” Jan. 16 2024, Link. 8 International Energy Agency, “Electricity 2024,” Link. 9 OECD, “How much water does AI consume? The public deserves to know,” Nov. 30 2023, Link. 10 Environmental Research Letters, “The environmental footprint of data centers in the United States,” May 21 2021, Link. disinformation—on a topic already rife with anti-science lies3 and funded4 by fossil fuel companies5 and their networks.6 First, the industry now acknowledges AI will require massive amounts of energy and water. OpenAI’s CEO Sam Altman conceded in 2024 that AI will use vastly more energy than people expected.7 On an industry-wide level, the International Energy Agency estimates the energy use from data centers that power AI will double in just the next two years,8 consuming as much energy as Japan. These data centers and AI systems also use large amounts of water9 in operations and are often located in areas that already face water shortages.10 2
    • 3. 3 Such statistics are only estimates, because AI companies continue to withhold most of the data. Transparent reporting would allow researchers to know if the use of AI systems oset any potential savings. For example, if the AI industry improves data center energy eciency by 10% but also doubling the number of data centers, it would lead to an 80% increase in global carbon emissions. Second, AI will help spread climate disinformation. This will allow climate deniers to more easily, cheaply and rapidly develop persuasive false content and spread it across social media, targeted advertising and search engines. The World Economic Forum in 2024 identified AI-generated mis- and disinformation as the world’s greatest threat (followed by climate change),11 saying “large-scale AI models have already enabled an explosion in falsified information.” The world is already seeing how AI is being used for political disinformation campaigns. In September 2023, elections in Slovakia were marred by AI-generated content.12 In the January 2024 New Hampshire primary, AI-generated fake Biden robocalls were used in an attempt to suppress voter participation.13 AI models will allow climate disinformation professionals and the fossil fuel industry to build on their decades of disinformation campaigns.14 More recent attempts, such as falsely blaming wind power as a cause of whale deaths in New Jersey15 or power outages in Texas,16 have already been eective. AI will only continue this trend as more tailored content is produced and AI algorithms amplify it. While many of the AI CEOs in Silicon Valley focus their attention on far-o existential catastrophes17 or a Terminator-like AI future,18 researchers and technologists—especially 11 World Economic Forum, “Global Risks Report 2024,” Jan. 10 2024, Link. 12 Bloomberg, “Deepfakes in Slovakia Preview How AI Will Change the Face of Elections,” Oct. 4 2023, Link. 13 Mashable, “Fake Biden robocall creator suspended from AI voice startup,” Jan. 27 2024, Link. 14 NPR, “How decades of disinformation about fossil fuels halted U.S. climate policy,” Oct. 27 2021, Link. 15 Media Matters, “Misinformation about recent whale deaths dominated discussions of oshore wind energy on Facebook,” March 23 2023, Link. 16 Friends of the Earth, “Four Days of Texas-Sized Disinformation,” Aug. 2021, Link. 17 New York Times, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn,” May 30 2023, Link. 18 Business Insider, “Elon Musk warns that creation of ‘god-like’ AI could doom mankind to an eternity of robot dictatorship,” April 6 2018, Link. 19 Rolling Stone, “These Women Tried to Warn Us About AI,” Aug. 12 2023, Link. 20 MIT Technology Review, “Joy Buolamwini: ‘We’re giving AI companies a free pass,’” Oct. 29 2023, Link. 21 404 Media, “AI-Generated Taylor Swift Porn Went Viral on Twitter. Here’s How It Got There,” Jan. 25 2024, Link. 22 New York Times, “The Climate Summit Embraces A.I., With Reservations,” Dec. 3 2023, Link. 23 Climate Action Against Disinformation, “Report: Climate of Misinformation – Ranking Big Tech,” Sept. 25 2023, Link. 24 White House, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” Oct. 30 2023, Link. 25 EU Artificial Intelligence Act, “The AI Act Explorer,” Accessed Feb. 15 2024, Link. women of color19—have been calling attention to the discriminatory harms AI is already causing today. This includes direct attacks like facial recognition discrimination20 to the creation and spread of deepfake nonconsensual pornography like that of Taylor Swift.21 Yet the AI industry continues to ignore these immediate liabilities in favor of a theoretical future and engages in blatant greenwashing, redirecting concern by highlighting the supposed climate benefits of the technology.22 Over the last decade, governments took too little action to regulate social media technology companies, even as societal harms became obvious.23 Legislators must not make this mistake again and should act quickly to implement regulation to require safety, transparency and accountability from AI companies and their products (as we have for most other industries). If we do not (significantly) build on the early AI safety blueprints introduced in the U.S.24 and EU,25 the great promise of AI technology could result in far greater catastrophe.
    • 4. 4 2. The dangers—present and potential 2.1: Energy and Water Usage Companies developing and using AI technologies do not adequately disclose details about their AI systems’ energy use, but company statements that are available, in addition to independent research, show that the proliferation of large language model (LLM) systems is already causing energy use to skyrocket. This comes on top of the highest rate of increase in U.S. energy consumption levels since the 1990s.26 Researchers have identified three major phases of energy use for LLM AI systems: 1) initial development of materials like computer chips, which require large amounts of natural resources and energy; 2) training, when developers feed data into the model so it can “learn”; and 3) inference (usage), when people actually begin to use the model. All are highly energy intensive. Training Chat GPT-3 used as much energy as 120 American homes over the course of a year.27 And training the GPT-4 model used approximately 40x more energy than GPT-3,28 as it ingested nearly three times the amount of data.29 With more LLMs being developed and more information feeding into them, this energy-draining trend will continue to grow exponentially. On the usage side, researchers estimate AI queries could require five30 or even by some estimates 10 times31 as much computing power as a regular search. A November 2023 study by Hugging Face and Carnegie Mellon University32 26 Bloomberg, “AI Needs So Much Power That Old Coal Plants Are Sticking Around,” Jan. 25 2024, citing Grid Strategies analysis of Federal Energy Regulatory Commission filings, Link. 27 Quartz, “Climate activists are going to the US Senate with concerns about AI’s emissions impact,” Sept. 12 2023, Link. 28 Medium, “The carbon footprint of GPT-4,” July 18 2023, Link. 29 ProjectPro, “GPT3 vs GPT4-Battle of the Holy Grail of AI Language Models,” Oct. 12 2023, Link. 30 Wired, “The Generative AI Race Has a Dirty Secret,” Feb. 10 2023, Link. 31 Futurism, “Sam Altman Says AI Using Too Much Energy, Will Require Breakthrough Energy Source,” Jan. 17 2023, Link. 32 arXiv, “Power Hungry Processing: Watts Driving the Cost of AI Deployment?” Nov. 28 2023, Link. 33 MIT Technology Review, “Making an image with generative AI uses as much energy as charging your phone,” Dec. 1 2023, Link. 34 The Verge, “ChatGPT continues to be one of the fastest-growing services ever,” Nov. 6 2023, Link. 35 First Site Guide, “Google Search Statistics and Facts 2023 (You Must Know),” Oct. 4 2024, Link. 36 Joule, “The growing energy footprint of artificial intelligence,” Oct. 10 2023, Link, Link. 37 International Energy Agency, “Electricity 2024,” Link. found that generating just one image from a powerful AI model takes as much energy as a full charge of a smartphone. Scale that up and generating 1,000 images would result in the carbon output of driving a car for 4.1 miles. Along similar lines, the researchers found that foundation models, which have broad bases of information, are significantly more energy intensive than fine-tuned models. Using a generative model to classify movie reviews as positive or negative is about 30 times more energy intensive than a model especially made for that task.33 As companies like Google and Microsoft rush to integrate AI into their search engines and overall software packages, their core functions will become more energy intensive. This is partly because a simple Google search is returning cached data, whereas LLMs create the answer from scratch by searching and interpreting from the entire dataset (i.e., the internet) that they have ingested. In addition, the record popularity of ChatGPT, which gained 100 million new users in just two months,34 represents an entirely new additional source of energy use, as the number of Google searches continues to increase each year35 and at present appears not to be oset by GPT queries. On an industry-wide level, the statistics are dire. An October 2023 study from the VU Amsterdam School of Business reported that AI servers could be using as much energy as Sweden by 2027.36 The International Energy Agency37 and other market analysts estimate a doubling of
    • 5. 5 data centers—which power AI, crypto and cloud computing—in two to 10 years. At that point, data center consumption could go from 1% of global electricity demand to 13%.38 For example, Dominion Power, one of the largest utilities in the US, has experienced a 6.7-fold increase in data center energy use over the last 10 years and projects that will reach an 11.6-fold increase by 2028.39 While not all data center energy is only for AI systems, they are quickly becoming the largest contributor to the rapid growth.40 In response, academics have called for transparent reporting from AI systems. Even Google’s engineers said in a 2021 paper: “To help reduce the carbon footprint of ML [machine learning], we believe energy usage and CO2 should be a key metric in evaluating models.”41 Other academics and Microsoft researchers have said reporting “is a fundamental stepping stone towards minimizing emissions.”42 38 Adam blog, “Data centres in 2030: challenges and trends for the next 10 years,” Sept. 30 2021, Link. 39 Bloomberg, “AI Needs So Much Power That Old Coal Plants Are Sticking Around,” Jan. 25 2024, Link. 40 Bloomberg, “Data centers are sprouting up as a result of the AI boom, minting fortunes, sucking up energy, and changing rural America,” Oct. 13 2023, Link. 41 arXiv, “Carbon Emissions and Large Neural Network Training,” April 23 2021, Link. 42 arXiv, “Measuring the Carbon Intensity of AI in Cloud Instances,” June 10 2022, Link. 43 arXiv, “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models,” Oct. 29 2023, Link. 44 OSTI.GOV, “United States Data Center Energy Usage Report,” June 1 2016, Link. Dieter, C. A. et al. Estimated use of water in the United States in 2015. Report 1441, US Geological Survey, Reston, VA. June 19, 2018, Link. 45 Environmental Research Letters, “The environmental footprint of data centers in the United States,” 2021, Link. 46 Environmental Research Letters, “The water implications of generating electricity: water use across the United States based on dierent electricity pathways through 2050,” Dec. 20 2012, Link. 47 Environmental Research Letters, “Characterizing changes in drought risk for the United States from climate change,” Dec. 7 2010, Link. In addition, data centers that power AI require water for cooling computing systems on-site and for generating electricity. Training large language models such as GPT-3 can require millions of liters of freshwater for both cooling and electricity generation.43 This puts a strain on local freshwater resources: the U.S. Department of Energy estimated that U.S. data centers consumed 1.7 billion liters per day in 2014, or 0.14% of daily U.S. water use,44 and a report from researchers at Virginia Tech found that at least one-fifth of data centers operated in areas with moderately to highly water-stressed watersheds.45 This thirsty industry therefore contributes to local water scarcity in areas that are already vulnerable, and could exacerbate risk and intensity of water stress46 and drought47 with greater computing demands. Like with energy usage, opaque and inconsistent reporting makes it dicult to account for the scale of local and global pressure on water resources.
    • 6. The danger extends even further, as increased energy and resource use won’t come only from tech companies. More and more industries are already employing AI to ramp up operations without increasing costs by identifying “ineciencies” and to augment or replace human labor. In the most direct example, the fossil fuel industry has already begun using artificial intelligence to enhance its operations, with 92% of oil and gas companies worldwide employing the technologies now or within the next five years to extract more oil in less time.48 ExxonMobil now highlights its use of AI in deepwater drilling and the Permian Basin.49 Scientists estimate that the world would need to leave 20% of already-approved-for-production oil and gas resources in the ground to remain within the carbon budget for 1.5 degree Celsius targets,50 making this increased productivity especially dangerous. Overall, AI can help a wide variety of companies sell more and increase production, likely resulting in increased energy and resource consumption, even if this will be a dicult metric to quantify. As with other AI developments, this intensive energy and resource use stands to worsen existing inequality, according to a Brookings 48 Journal of Petroleum Technology, “AI Drives Transformation of Oil and Gas Operations,” May 1 2023, Link. 49 ExxonMobil, Applying digital technologies to drive energy innovation,” accessed Feb. 8 2024, Link. 50 Urgewald, “The 2023 Global Oil & Gas Exit List: Building a Bridge to Climate Chaos,” Nov. 15 2023, Link. 51 Brookings, “The US must balance climate justice challenges in the era of artificial intelligence,” Jan. 29 2024, Link. Institute report.51 Marginalized communities continue to bear the brunt of climate change and fossil fuel production, and studies are already finding that AI’s carbon footprint and local resource use tend to be heavier in regions reliant on fossil fuel. Without immediate eorts to integrate climate and environmental justice into AI policy and incorporate input from frontline communities, AI will only exacerbate environmental injustice. 6
    • 7. 7 2.2: Disinformation Fossil fuel companies and their paid networks52 have spread climate denial53 for decades through politicians, paid influencers and radical extremists who amplify these messages online.54 In 2022, this climate disinformation tripled on platforms like X.55 In 2023, amidst a number of whale deaths on the east coast of the US, right wing media began spreading the false claim that oshore wind projects were impacting the endangered populations. It was included in 84% of all posts about wind energy over the relevant three-month period, and was advanced by right wing politicians on social media.56 In 2023 the Danish company Orsted, while claiming the disinformation campaign was irrelevant, pulled out of a major project to build two wind farms o the coast of New Jersey.57 Generative AI will make such campaigns vastly easier, quicker and cheaper to produce, while also enabling it to spread further and faster. Adding to this threat, social media companies have shown declining interest in stopping disinformation,58 reducing trust and safety team stang.59 There is little incentive for tech companies to stop disinformation, as reports show companies like Google/YouTube make an estimated $13.4 million per year from climate denier accounts.60 52 DeSmog, “Climate Disinformation Database,” accessed Feb. 8 2024, Link. 53 Center for International Environmental Law, “Smoke & Fumes”, accessed February 12, 2024, Link. 54 Drilled, “Mad Men,” July 24 2023, Link. 55 Climate Action Against Disinformation, “Climate denial rises on Musk’s Twitter,” June 29 2023, Link. 56 Media Matters, “Misinformation about recent whale deaths dominated discussions of oshore wind energy on Facebook,” March 23 2023, Link. 57 Politico, “Oshore wind company pulls out of New Jersey projects, a setback to Biden’s green agenda,” Oct. 31 2023, Link. 58 Free Press, “Big Tech Backslide,” Dec. 2023, Link. 59 NBC News, “Tech layos shrink ‘trust and safety’ teams, raising fears of backsliding eorts to curb online abuse,” Feb. 10 2023, Link. 60 Center for Countering Digital Hate, “ The New Climate Denial,” Jan. 16 2024, Link. 61 New York Times, “Lina Khan: We Must Regulate A.I. Here’s How.” May 3 2023, Link. 62 CBS News, “Doctored Nancy Pelosi video highlights threat of ‘deepfake’ tech,” May 26 2019, Link. 63 Bloomberg, “Deepfakes in Slovakia Preview How AI Will Change the Face of Elections,” Oct. 4 2023, Link. 64 The Verge, “Trolls have flooded X with graphic Taylor Swift AI fakes,” Jan. 25 2024, Link. 65 New York Times, “Fake and Explicit Images of Taylor Swift Started on 4chan, Study Says,” Feb. 5 2024, Link. 2.2a: Creation Disinformation campaigns about climate change have a number of new AI tools to help them be more eective. Chair of the Federal Trade Commission Lina Khan warns that “generative AI risks turbocharging fraud” in its ability to churn out content.61 Instead of having to draft content one piece at a time, AI can churn out endless content for articles, photos and even websites with just brief prompts. Where once an experienced editor needed hours to create a believable fake photo, AI generative software needs only a few minutes to produce an even more convincing deepfake video. In 2019, one of the first non-AI deepfake videos was created of Nancy Pelosi falsely showing her impaired,62 sparking discussion of her capacity to serve and emboldening former President Trump’s criticisms. The technology has since only grown in sophistication. In the runup to the Slovakian national election in 2023, a number of AI-generated audio recordings of progressive leader Michal Simecka featured him making fun of voters and even pledging to raise beer prices.63 It’s impossible to determine the impact on the election, but the result saw progressives placing second in favor of a populist leader who favors Russia. Extending beyond politics, generative AI is also being used to create deepfake pornographic images. In January 2024, a number of AI-generated sexually explicit images of Taylor Swift quickly spread across X, with one of the most prominent posts attracting 45 million views.64 These originated in a 4Chan chatroom, where users conspired to break the current safety systems of AI image generators.65 An August 2023 study focusing on climate change-related deepfakes found over a quarter of respondents across age groups were "Generative A.I. risks turbocharging fraud" Lina Khan - Chair of the Federal Trade Commission
    • 8. 8 unable to identify whether videos were fake.66 As people learn to question what they see, it further destabilizes truth and consensus at a time of growing political divide. AI also gives politicians room to plausibly claim a real video is a deepfake.67 AI-generated text is also becoming more and more compelling. A number of studies are finding that arguments written by AI can be more persuasive than those written by humans, even on polarizing issues.68 On a topic as divisive as climate change, this makes it simple to produce messages and content denying the need for action. Some AI companies have said they will address this in advance of upcoming 2024 elections around the world, developing policies that 66 Scientific Reports, “Deepfakes and scientific knowledge dissemination,” Aug 18 2023, Link. 67 Washington Post, “AI is destabilizing ‘the concept of truth itself’ in 2024 election,” Jan. 22, 2024, Link. 68 Stockholm Resilience Center, “AI could create a perfect storm of climate misinformation,” June 16 2023, Link. 69 OpenAI, “How OpenAI is approaching 2024 worldwide elections,” Jan. 15 2024, Link. 70 NewsGuard, “Despite OpenAI’s Promises, the Company’s New AI Tool Produces Misinformation More Frequently, and More Persuasively, than its Predecessor,” March 2023, Link. 71 Inside Climate News, “AI Can Spread Climate Misinformation ‘Much Cheaper and Faster,’ Study Warns,” March 31 2023, Link. might prevent bad actors from producing disinformation content,69 but past eorts proved largely ineective. Open AI claimed its ChatGPT-4 was “82 percent less likely to respond to requests for disallowed content and 40 percent more likely to produce factual responses,” but testers in a March 2023 NewsGuard report were still able to consistently bypass safeguards.70 They found the new chatbot was in fact “more susceptible to generating misinformation” and “more convincing in its ability to do so” than the previous version. They were able to get the bot to write an article claiming global temperatures are actually decreasing—just one of 100 false narratives they prompted ChatGPT to draft.71
    • 9. 9 2.2b: Spread Once disinformation content exists, it can spread both through the eorts of bad actors and the prioritization of inflammatory content that algorithms reward. Long before current AI technology, companies set up their products to promote and monetize content that is likeliest to keep people on the platform. Google’s former AI public policy lead Tim Hwang emphasizes how everything from the like button to the listicle were developed with the ultimate goal of demonstrating interest and keeping people on sites to sell more.72 This also means the most provocative messages spread furthest, disincentivizing moderation of content in favor of engagement. Now, disinformation messaging spreads across four main channels: social media, LLMs, search and advertising. Social Media Research shows that social media has been used extensively to spread climate disinformation.73 At COP26 in 2021, research from Climate Action Against Disinformation found that posts by climate disinformers on Facebook generated three times more engagement than those by Facebook’s own Climate Science Information Center.74 The most-viewed content supporting climate action received just one-quarter of the views of the most popular piece from climate deniers. Yet social media companies continue not to take strong measures to reduce this climate disinformation.75 AI-based social media algorithms have been found to prioritize inflammatory content like climate denial, more of which can now be generated by AI. Even worse for the social media information ecosystem, climate deniers have another tool in bots, which have been 72 The Nation, “One Weird Trick for Destroying the Digital Economy,” Oct. 13 2020, Link. 73 Climate Action Against Disinformation, “Report: Climate of Misinformation – Ranking Big Tech,” Sept. 25 2023, Link. 74 Climate Action Against Disinformation, “Deny, Deceive, Delay,” June 2022, pg. 78. Link. 75 The Guardian, “Twitter ranks worst in climate change misinformation report,” Sept. 20 2023, Link. 76 CNN, “Elon Musk commissioned this bot analysis in his fight with Twitter. Now it shows what he could face if he takes over the platform,” Oct. 10 2022, Link. 77 Stockholm Resilience Center, “A game changer for misinformation: The rise of generative AI,” May 2023, Link. 78 Stockholm Resilience Center, “How algorithms diuse and amplify misinformation,” May 2023, Link. 79 X, “Alex Epstein AI,” accessed Feb. 8 2024, Link. 80 Is Open AI the next challenger trying to take on Google Search?, Feb 14, 2023, Link. 81 Platformer, “How platforms killed Pitchfork,” Jan. 18 2024, Link. 82 Platformer, “Scenes from a dying web,” Feb. 5 2024, Link. 83 Nonprofit Quarterly, “The Future of Journalism: A Conversation with Monika Bauerlein of Mother Jones, Jan. 31 2024, Link. found to be prevalent across social media sites76 and research has found that AI-directed bots can easily amplify climate disinformation77 and make it increasingly dicult to distinguish bots from humans.78 As generative AI advances, so too will the bots. Popular climate denier Alex Epstein opened his own AI bot on X in December 2023,79 which has been actively spreading disinformation and used as an inexpensive way to troll climate scientists. Large Language Models LLMs like ChatGPT, Perplexity, Bing and Google Gemini seem poised to replace standard Google search over time.80 The business case for this dramatic shift is that the companies that produce AI systems like ChatGPT would prefer users to stay on their platform reading their summary answers–where you see their ads and give them data to monetize–than for users to go to the open web where others gain that data.81 This is the same dynamic that is causing massive losses of revenue and trac82 for news publishers.83
    • 10. 10 This promotion of LLMs, as an untested and in many cases much more opaque replacement for search, threatens to hasten the spread of misinformation. OpenAi’s model, for example, provides no references and by design does not contain the most updated information, displaying only what it was last trained on. Future models may improve this but researchers are already documenting that LLMs frequently provide blatantly incorrect information. Reports have found ChatGPT frequently shares false information,84 making up court cases, counts of plagiarism, and news articles without any human prompting it to do so.85 European nonprofits found that Microsoft’s Bing search bot got election information wrong 30 percent of the time.86 One of the main causes of these incorrect results is that LLMs are trained on a wide variety of internet sources, some of which have dubious veracity. Reddit, a site frequently criticized for its inability to combat hate speech87 and home to many climate denial threads, was such a significant teacher for both ChatGPT and Google’s Gemini that it is now planning to charge AI companies for access.88 Given the propagation of climate denial across the internet, it’s highly likely that the LLMs were also trained 84 The Verge, “OpenAI isn’t doing enough to make ChatGPT’s limitations clear,” May 30 2023, Link. 85 The Guardian, “ChatGPT is making up fake Guardian articles. Here’s how we’re responding,” April 6, 2023, Link. 86 Washington Post, “AI chatbot got election info wrong 30 percent of time, European study finds,” Dec. 15 2023, Link. 87 Time, “Reddit Allows Hate Speech to Flourish in Its Global Forums, Moderators Say,” Jan. 10 2022, Link. 88 New York Times, “Reddit Wants to Get Paid for Helping to Teach Big A.I. Systems,” April 18 2023, Link. 89 Wired, “The Security Hole at the Heart of ChatGPT and Bing,” May 25 2023, Link. 90 Wired, “Generative AI’s Biggest Security Flaw Is Not Easy to Fix,” Sept. 6 2023, Link. 91 Research and Markets, “Search Engine Optimization (SEO) - Global Strategic Business Report,” Feb. 2024, Link. 92 The Verge, “The people who ruined the internet,” Nov. 1 2023, Link. 93 Webis Group, “Is Google Getting Worse? A Longitudinal Investigation of SEO Spam in Search Engines,” Jan. 16 2024, Link. on climate misinformation and would pass on such harmful falsities to those just looking for accurate information. In a more deliberate spread, LLMs are also susceptible to a type of attack called indirect prompt injection.89 Bad actors can direct chatbots to read pages with hidden malicious code that then direct the bot to act in a new way with users. Such a prompt could, for example, direct or hack the bot to share climate disinformation with new user queries. AI companies have already acknowledged the considerable threat such attacks pose.90 Search Search engines such as Google, along with its opaque algorithms that determine which results users see and which they don’t, have long been subject to manipulation. The search engine optimization (SEO) industry has been one of massive growth and value, generating $68.1 billion globally in 2022.91 Over two decades, it played a game of cat and mouse with Google’s algorithm,92 already leading to a degraded search experience for all users by prioritizing paid spam content over organic. Researchers say this problem “will surely worsen in the wake of generative AI,” as content costs less and propagation systems are more ecient.93
    • 11. In one of the first reported examples of this in 2023, content marketers used AI to carry out an “SEO heist” of organic content against Exceljet, a knowledge hub on Microsoft Excel.94 Marketers fed the URLs of some of ExcelJet’s most popular pages into a generative AI SEO article writer and then posted that mirrored copy to their own new site, which successfully diverted the majority of clicks—and ad dollars. There’s little to stop bad actors from using the same methods to replicate legitimate research, as the SEO industry is already designed to incentivize this parasitic approach. The desire is already documented: researchers have noted the overall rise of climate disinformation in our information ecosystem through social media from 2021 to 202395 and on human-generated climate disinformation sites.96 This approach could easily be used by climate disinformation professionals to redirect users from reputable climate change information sites. Unfortunately, the future does not look any brighter: Google’s December 2023 SEO update has begun allowing AI content to compete with organic content,97 and website owners are already seeing that their content is being pushed down in Google rankings by AI-written content.98 Advertising Junk websites churning out low-quality content to attract programmatic ad revenue have long been a presence online. However, generative AI oers an easier, quicker, and cheaper way to automate the content farm process and spin up more climate disinformation sites with fewer resources. These AI-generated websites not only add to the spread of climate disinformation 94 Futurism, “Man Horrified When Someone Uses Ai To Reword And Republish All His Content, Complete With New Errors,” Dec. 20 2023, Link. 95 Climate Action Against Disinformation, “ClimateConversation Trends,” June 2023, Link. 96 EU Disinfo Lab, “Don’t stop me now: the growing disinformation threat against climate change,” Feb. 6 2023, Link. 97 Google, “Google Search’s helpful content system and your website,” updated Dec. 2023, Link. 98 Business Insider, “Google recently cut ‘people’ from its Search guidelines. Now, website owners say a flood of AI content is pushing them down in search results,” Sept. 20 2023, Link. 99 MIT technology review, “ Junk websites filled with AI-generated text are pulling in money from programmatic ads” June 26, 2023, Link 100 Check My Ads, “Meet the ad exchanges making money from climate disinformation” Dec. 11, 2023, Link. 101 Association of National Advertisers, “ANA Programmatic Media Supply Chain Transparency Study”, June 19, 2023, Link 102 Gizmodo, “Google Sheds Responsibility for AI Sites Dominating Search Results”, Jan. 19, 2024, Link 103 404 Media, “ Google News Is Boosting Garbage AI-Generated Articles” Jan 18, 2023, Link 104 Google,” “Why doesn’t Google Search ban AI content?” Feb 8, 2023, Link 105 TechCrunch, “Signal’s Meredith Whittaker: AI is fundamentally ‘a surveillance technology,’” Sept. 25 2023 Link. but also monetize it through programmatic advertising. Although many adtech companies have policies in place to prohibit content farm sites and those publishing misleading claims about climate change from using their advertising products, research shows a lack of enforcement of these policies. For example, one recent study by NewsGuard found over 140 major brands paying for ads placed on unreliable AI-written sites, likely without their knowledge.99 Other research by Check My Ads has highlighted how adtech companies, including Google, continue to monetize climate disinformation, even as such content infringes adtech companies’ own policies around misleading content.100 Several industry experts have warned that generative AI will exacerbate the estimated $13 billion from advertising already flowing to low-quality content farms.101 Meanwhile, recent investigations have also found news aggregators, such as Google News, boosting AI-generated websites over real, human journalism in search results.102 These sites use AI to reproduce other news outlets’ content at alarming rates in order to siphon advertising revenue from legitimate news organizations. A recent investigation by 404 Media highlighted how one “author” for an AI-written site, WatchdogWire.com, published more than 500 articles in 30 days.103 Currently, Google News and Google Search does not take into account if content is produced by AI or other automated processes when ranking search results.104 Furthermore, as AI is incorporated into the advertising industry, it will further the current surveillance business model105 and allow climate deniers and corporate greenwashing campaigns to more eciently micro target highly specific and vulnerable groups. 11
    • 12. 12 In the 2020 U.S. election, ads targeted Latino and Asian Americans with false claims that Joe Biden is a socialist106 and tied that to the Green New Deal climate bill.107 Most recently at COP28 in 2023, researchers showed how simple climate searches were inundated by ads from fossil fuel companies.108 As with other mediums, AI can help develop even more persuasive messaging and content to spread via ads—which researchers have already been able to do on ChatGPT,109 despite its supposed safeguards against such use. Researchers have also documented seven potential harms of AI-powered advertising,110 including the ability to spread disinformation. In 2023, Google,111 Microsoft (with Bing and ChatGPT),112 Amazon113 and Facebook114 each introduced AI into their ad creation systems, amplifying this threat. 106 Associated Press, “Election disinformation campaigns targeted voters of color in 2020. Experts expect 2024 to be worse,” July 28 2023, Link. 107 Axios, “GOP used YouTube to win Latino voters who Democrats ignored,” April 15 2021, Link. 108 Alliance for Science, “COP28: Climate activists slam fossil fuel firms over greenwashing ads,” Dec. 9 2023, Link. 109 Washington Post, “ChatGPT breaks its own rules on political messages,” Aug. 28 2023, Link. 110 Mozilla, “Report: The Dangers of AI-Powered Advertising (And How to Address Them),” Sept. 30 2020, Link. 111 CNBC, “Google plans to use new A.I. models for ads and to help YouTube creators, sources say,” May 17 2023, Link. 112 Microsoft, “Transforming Search and Advertising with Generative AI”, Sept. 21 2023, Link. 113 The Information, “Amazon Plans to Generate Photos and Videos for Advertisers Using AI,” May 5 2023, Link. 114 CNBC, “Meta unveils A.I. ‘testing playground’ to help advertisers build campaigns,” May 11 2023, Link. 115 Free Press, “Big Tech Backslide,” Dec. 2023, Link. Some companies have policies to prevent abuse, but the largest social media companies all downsized and/or deprioritized content moderation teams in 2023.115 In the wake of backlash, a few are looking to AI as a solution, using fewer human sta to identify suspect posts. This only introduces more potential problems, as many of these systems are unable to successfully identify disinformation based on the information they are trained on, for the reasons outlined above.
    • 13. 13 3.1:United States The U.S. has yet to pass any comprehensive regulation on AI and is unlikely to make much, if any, progress during a presidential election year. There is, however, some cause for optimism. Senate Leader Schumer has said that developing comprehensive AI legislation is a priority,116 and, along with a bipartisan group of other senators, organized a series of “AI Insight Forums” in 2023 to give lawmakers and their sta an opportunity to hear dierent perspectives on how legislation should be designed. While no comprehensive legislation has come together yet, narrower proposals, such as bills to address privacy, deepfakes117 and environmental impact of AI118 have been introduced—even as they remain unlikely to become law in 2024. Barring congressional action, there are significant limitations on what could be done to regulate AI. The Biden-Harris administration 116 Climate Action Against Disinformation, “Letter to Sen. Schumer on Climate & AI,” Oct. 25 2023, Link. 117 Congresswoman Yvette Clarke, “Clarke Leads Legislation To Regulate Deepfakes,” Sept. 21 2023, Link. 118 Senator Ed Markey, “Markey, Heinrich, Eshoo, Beyer Introduce Legislation To Investigate, Measure Environmental Impacts Of Artificial Intelligence,” Feb. 1 2024, Link. 119 White House, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” Oct. 30 2023, Link. 120 Public Citizen, “EO on AI: Tasks, Agencies, Deadlines,” Nov. 28, 2023, Link. 121 Brennan Center for Justice, “States Take the Lead on Regulating Artificial Intelligence,” Nov. 1 2023, Link. 122 Electronic Privacy Information Center, “The State of State AI Laws: 2023,” Aug. 3 2023, Link. 123 Public Citizen, “How Might California’s New Climate Disclosure Law Impact Federal Rulemaking?” Oct. 26 2023, Link. rolled out a sweeping executive order119 intended to establish new standards for AI safety and security, protect Americans’ privacy, advance equity and civil rights, and stand up for consumers and workers. Ultimately, the strength of the EO will be determined over the course of its implementation and whether it will remain in place after the 2024 election. While the EO deserves praise in many places, by nature it does not require companies to take action, focusing instead on government procurement. Nor does it adequately address the ways AI might accelerate climate change.120 There has been some progress at the state level.121 An overwhelming majority of states have introduced legislation to regulate deepfakes in elections, while some states have gone so far as to ban the use of deepfakes in the electoral context entirely.122 California, where many AI companies are based, has sought to ensure all companies disclose their climate impact.123 3. The current policy landscape
    • 14. 14 European Union The European Union appears poised to enter the AI Act into force in 2024, which will render it enforceable in 2026. The AI Act pursues a riskbased approach to minimizing AI harms, creating four categories of risk: 1) unacceptable risk, 2) high risk, 3) limited risk and 4) minimal risk.124 Unacceptable uses of AI include: biometric data that uses sensitive characteristics; untargeted scraping of facial images from the internet to create facial recognition databases such as Clearview AI; and AI systems that manipulate human behavior. High-risk AI systems will be subject to stringent oversight and must be entered into an EU-wide public database. These systems include AI applications that can be used in education or employment or that possess significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law. Lower-risk AI systems, sometimes called “general purpose AI,” are subject to less-stringent oversight, but must provide the user with notice that they are interacting with an AI system and provide an explanation of how an output was generated. AI content must also be labeled and detectable. Fines for violating the AI Act can be as high as 7% of global annual turnover. 124 European Parliament News, “EU AI Act: first regulation on artificial intelligence,” Dec. 19 2023, Link. 125 White House, “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Eight Additional Artificial Intelligence Companies to Manage the Risks Posed by AI,” Sept. 12 2023, Link. 126 Engadget, “Maliciously edited Joe Biden video can stay on Facebook, Meta’s Oversight Board says,” Feb. 5 2024, Link. Voluntary Commitments From Big Tech Companies In response to the recognition that unregulated AI can cause severe and irreversible harm, a number of AI companies have made voluntary commitments to prioritize safety. Many of the biggest AI companies, including Google, OpenAI, Meta and Amazon, announced a set of voluntary commitments alongside the President of the United States in July 2023, that others soon followed.125 While these commitments might be encouraging if they were enforceable, there are unfortunately no existing mechanisms in the U.S. to hold AI companies accountable for not living up to them. In February 2024, Facebook’s oversight board reviewed its AI and deepfake policies after a doctored video of Biden went viral. The findings said the company should “reconsider this policy quickly given the number of elections in 2024,” calling it “incoherent,” and they attempted to reassure users that Meta “plans to update the Manipulated Media policy to respond to the evolution of new and increasingly realistic AI.”126
    • 15. 15 In the 1950s, when a new and promising but dangerous technology was introduced to the public—commercial air travel—the industry and government response was to focus on safety first. To do that, they implemented radical transparency that shared safety incident data across the industry in real time—now known as the “flight recorder.”127 This helped build the consumer trust needed to establish the entire commercial airline industry. Today, the basic expectations Americans have for every other industry have not been established for tech. While pharmaceuticals must pass clinical trials, cars must have seatbelts, and sausages mustn’t contain E. coli, AI technology has no such expectations or accountability mechanisms despite its widespread risks. Tech companies like Facebook, Google and OpenAI have shown their focus to be profit over safety time and again. They cannot be trusted to develop and market AI safely and mitigate its climate impacts on their own. Voters across the political spectrum in the U.S. already understand this. A recent poll from Data for Progress, Accountable Tech and Friends of the Earth found that 69% of voters, including 60% of Republicans, believe AI companies should be required to report their energy use.128 Overall, 80% believe AI companies should report on plans to prevent the proliferation of climate disinformation—including 75% of Republicans. Governments must urgently study the problem and implement comprehensive AI regulations to fully understand the threats to climate change and protect against them, using a systems-wide approach to the health, integrity and resilience of the information ecosystem. Looking toward the future, government, companies, academia and civil society should work together to determine how to create “green AI” systems that reduce 127 Airways Mag, “The Evolution of the Flight Recorder,” Nov. 26 2023, Link. 128 Data for Progress, “Voters Strongly Believe in Public Reporting Requirements and Bias Prevention by AI Companies,” Dec. 15 2023, Link. 129 Meta’s settlement talks with Kenyan content moderators break down, October 16, 2023, Link. overall emissions and climate disinformation. The core concept of better AI development should focus on three principles: transparency, safety and accountability. In addition to the product recommendations below, tech companies implementing AI must commit to strong labor policies including: fair pay, clear contracts, sensible management, sustainable working conditions and union representation. Content moderators and sta enforcing community guidelines are often outsourced, ill treated and low-paid.129 4. Recommendations
    • 16. 16 Transparency Regulators must ensure companies publicly… z report on energy use and emissions produced within the full life cycle of AI models, including training, updating and running search queries, and follow existing reporting standards;130 z assess and report on the environmental and social justice implications of developing their technologies; z explain how their AI models produce information, how their accuracy on climate change is measured, and the sources of evidence for factual claims they make; and z report on the sourcing and use of resources that are critical to the clean energy transition. z provide log-level data access to advertisers so they may better audit and ensure they are not monetizing content at conflict with their policies Safety Companies must … z demonstrate that their products are safe for people and the environment, show how that determination is made, and explain how their algorithms are safeguarded against discrimination, bias and disinformation. z enforce their community guidelines, disinformation and monetization policies Governments must … z develop common standards on AI safety reporting and work with the International Panel on Climate Change to develop coordinated global oversight.131 z fund studies to more deeply understand the eect AI systems can have on climate disinformation, the monetization of disinformation, energy use and climate justice. Accountability Governments must … z enforce safety and transparency rules with strong penalties for noncompliance that deter companies from treating safety failures as the cost of doing business; z require reporting to be certified by the chief information ocer; z protect whistleblowers 132who might expose AI safety issues, and z ensure that companies and their executives are held liable for the harms that occur as a result of generative AI, including harms to the environment. 130 Numerous standards exist, such as the ITU’s recommendations, accessed February 12, Link. 131 Example of potential international policies from the UK here, and academics here. 132 Examples of potential policies from academics here, and the U.S. here.


    • Previous
    • Next
    • f Fullscreen
    • esc Exit Fullscreen