'Immediate dangers' | 'AI Threats to Climate Change' report raises three ESG implications for HR

A report from the Climate Action Against Disinformation coalition highlights multiple threats that AI poses to climate change, making recommendations that have implications for HR teams.

“Artificial Intelligence Threats to Climate Change”, collated by groups including Greenpeace and Friends of the Earth, raises two 'immediate dangers' to the climate crisis following the mass popularization of AI: significant increases in energy and water usage, and the spread of climate disinformation.

The report finds that, according to company statements and independent research, energy usage is already skyrocketing due to “the proliferation of large language model (LLM) systems”.

It references an admission from OpenAI CEO Sam Altman that AI will use “vastly more energy than people expected”, alongside findings from the International Energy Agency, which estimates that the energy use of the data centers powering AI will double in the next two years.

As companies attempt to increase the scale or speed of operations with AI – including augmenting or replacing human labor – without increasing costs, there are multiple ways in which AI will drive up energy and water usage.

Per the report, companies both developing and using AI systems are not transparently reporting on the impact this has on energy usage.

The report also claims that AI is making climate disinformation “easier, quicker, and cheaper to produce,” including compelling text and deepfakes, and as such is more readily accessible on social media, through online searches, and in advertising.

The World Economic Forum says “large-scale AI models have already enabled an explosion in falsified information.”

Transparency, safety, accountability: What HR can do to tackle AI climate risks

The coalition notes that whilst there is some progress on a state level, the U.S. is “yet to pass any comprehensive regulation on AI and is unlikely to make much, if any, progress during a presidential election year.”

In turn, it pushes regulators to resolve the lack of AI accountability mechanisms, citing a poll from Data for Progress, Accountable Tech, and Friends of the Earth which found that 69% of voters believe AI companies should be required to report their energy use.

The principles it recommends – transparency, safety, and accountability – while not enforceable regulations, each present considerations for HR teams and employers.

1. Transparency: “Regulators must ensure companies publicly report on energy use and emissions produced within the full life cycle of AI models… assess and report on the environmental and social justice implications of developing their technologies.”

This recommendation prompts HR teams, as owners or co-owners of ESG strategy, to drive a culture of transparency and lobby other business leaders to ensure they are transparently reporting on the footprint of their AI use, particularly against any net-zero targets.

2. Safety: “Companies must… enforce their community guidelines, disinformation, and monetization policies.”

HR teams are frequently directly responsible for creating community and company guidelines, and for ensuring employees adhere to them. As calls grow for companies to implement clear policies on AI usage, including bans on creating disinformation, this recommendation pushes HR teams to work with employees to ensure AI is always used ethically and never to produce or distribute falsified information.

3. Accountability: “Governments must… protect whistleblowers who might expose AI safety issues and ensure that companies and their executives are held liable for the harms that occur as a result of generative AI, including harms to the environment.”

This recommendation indicates HR teams should create safe channels for whistleblowing where employees are free to highlight concerns with company AI usage and its impact on the climate crisis without fear of retaliation. It also pushes employers to create mechanisms that hold leaders responsible for instances where policies are broken or ignored.
