AI in ESG: A powerful ally or ethical dilemma?
From energy data analysis to streamlining corporate sustainability reporting, more sustainability professionals are looking to harness AI to assist with their workload.
While AI developers and data centre operators focus on reducing energy consumption, attention now shifts to businesses looking to adopt AI. What ethical and environmental questions should they be asking before embarking on this journey? edie spoke to some of the leading voices and institutions in sustainability and technology to find out.
Ian Ellison, former head of sustainability at Jaguar Land Rover (JLR) and now an executive at the Cambridge Institute for Sustainability Leadership (CISL), answers: “Asking questions about the energy and carbon footprint of an AI tool, where the data comes from, potential biases, ethical challenges, and whose intellectual property has been used — those are all important considerations.”
Now working in executive education at CISL, Ellison focuses on systems change, with AI playing a significant role in that transformation.
Ellison notes that while these questions around corporate responsibility are crucial, the industry is evolving so rapidly that “if you think twice, you’ve already lost.”

Building the business case for ‘AI in ESG’
A 2024 Salesforce survey of 452 sustainability professionals — including 384 leaders — across companies in the US, UK and Canada reveals a mixed outlook on AI’s role in sustainability. While nearly 40% worry it could hinder progress, 58% believe its benefits will outweigh the risks in addressing the climate crisis.
Ellison highlights that the business case for AI use in environmental, social and governance (ESG) initiatives lies in its ability to streamline strategy building. While AI doesn’t replace human decision-making, it dramatically accelerates the research and analysis phase — compressing what would typically be a time-intensive process.
This can allow sustainability leaders to focus on defining purpose, asking the right questions, and implementing solutions with greater speed and efficiency.
Ellison says: “In most sustainability issues, we’ve done too little, too late — we’re in catch-up mode. Compressing the labour-intensive middle section of research and data analysis is an opportunity to catch up.”
Against the backdrop of mixed opinions on AI in ESG, edie hosted a panel discussion at its largest in-person event of the year, edie 25, in March. The session featured former CDP chief commercial officer and AI expert Dexter Galvin, exploring whether AI ultimately aids or hinders businesses’ sustainability efforts.
Galvin echoes Ellison’s views that AI has the potential to significantly advance sustainable action by optimising resource use, mapping climate risks, and gathering high-quality data for informed decision-making. However, given AI’s substantial environmental footprint — particularly its energy demands — businesses must be intentional in how they deploy it, ensuring that its benefits outweigh its impact and align with long-term sustainability goals.
Additionally, both experts stress that AI can carry biases, making it crucial for businesses to conduct regular audits and ensure human oversight throughout its use.
Case study
Tetra Pak: AI-powered carton sorting installed at Hartlepool recycling facility
Tetra Pak has funded the installation of two robotic arms at J&B Recycling’s materials recycling facility (MRF) in Hartlepool to improve the sorting of food and beverage cartons. Developed by Recycleye, the technology enables more accurate separation of cartons from mixed recycling, ahead of new government waste collection requirements coming into force in 2026.
Ensuring ethical AI: The human factor
Concerns over AI replacing human jobs are widespread, with research from the Institute for Public Policy Research (IPPR) revealing that up to 70% of tasks in certain roles could be significantly transformed or replaced by AI. AI’s greatest impact is expected in “organisational and strategic tasks” as well as “repetitive and non-repetitive cognitive and analytical tasks,” sparking questions about how businesses will adapt to these rapid changes in the workforce.
Goldman Sachs further highlights sectors like administration, legal professions, architecture and engineering as the most vulnerable, where up to 46% of tasks in administrative roles could be automated. While this may not directly affect sustainability professionals, it could conflict with corporate commitments to supporting a Just Transition, potentially creating challenges in workforce planning and ethical AI adoption.
Ellison argues, “AI doesn’t eliminate the human role, but it shifts it to the beginning and end — guiding the process, ensuring ethical oversight and ultimately making decisions.
“If you’re working on a sustainability strategy, it doesn’t mean there’s no need for humans. You still have to ask the right questions, critique the mechanism and responses, and take action.”
The human role evolves but doesn’t disappear, with AI serving to enhance human capabilities, not replace them.
Carl Ennis, chief executive of Siemens UK, a global technology company specialising in industrial AI solutions, shares a similar perspective on workforce upskilling but also stresses that active participation in AI is essential to driving positive industry change.
He argues that businesses embracing AI are in a prime position to lead innovation in energy efficiency—an increasingly critical priority as AI’s environmental impact continues to expand.
Driving energy efficiency in AI: The role of businesses
Ennis highlights that the first step in promoting energy efficiency in AI is knowing when to use it—and when not to.
He says: “As engineers, one of our key tasks is simplification—finding the simplest possible solution that delivers the desired outcome. However, deploying technology just for the sake of it isn’t always the right choice.”
He explains that using complex AI models with extensive data can be costly and inefficient if they don’t meet the intended goals. Sometimes, laying foundational steps before introducing AI is the better approach, ensuring it’s the right tool at the right time.
Sustainable procurement is another key piece of the puzzle.
Research from Capgemini revealed that while nearly half of business executives acknowledge that the use of Gen AI is increasing their emissions, only one in eight businesses is currently able to track this effectively, due to a lack of emissions data from providers.
Ellison advises businesses to make sustainability a criterion when sourcing AI services. “Make it a conversation, not just buying a commodity at a cheap price, regardless of the implications.”
Setting standards for low-carbon and ethical AI solutions ensures businesses push the demand for responsible AI, which ultimately benefits everyone. Many businesses are now incorporating sustainability requirements into supplier contracts and tenders, and a similar approach could be taken with AI.
Collaboration is vital here. As Ellison suggests, businesses should join forces across sectors to establish common standards: “Getting together with others in your sector—or even in other sectors—to say, ‘We will all buy this way’ helps prevent a race to the bottom.”
By agreeing on thresholds for energy efficiency, carbon footprint and ethical practices, companies can collectively drive change and influence AI suppliers to adopt more sustainable approaches.
However, Ennis stresses the need for immediate action, stating: “Technology will naturally reduce data centre energy consumption—history shows that things get faster, cheaper and less energy-intensive over time.
“But that doesn’t mean we should wait. The risk is that if we don’t embrace it now, we won’t benefit from those technological advancements.”

The need for rapid AI regulation
As AI continues to evolve at a rapid pace, the need for effective regulation becomes ever more critical. However, experts argue that regulation must be approached with both caution and urgency.
Ellison notes: “The concern with AI is that it will likely be regulated reactively, in response to problems that weren’t anticipated. It’s a fact that regulation is slow, especially when dealing with well-understood problems like carbon emissions. But we don’t fully understand AI risks, and they are changing fast.
“There will be a disaster, and then we’ll start to regulate it—ban certain things, control others—because of what went wrong. So, most regulation will be reactive, and while there are some regulations in place, they’re not enough.”
A reactive approach to regulation risks missing opportunities to manage potential harms proactively.
The EU has made some progress with the AI Act, which came into force last year to establish a framework for the ethical development of AI. However, it does not address AI-related emissions or energy consumption.
Additionally, at the AI Summit in Paris, some EU members expressed intentions to loosen regulations to attract private sector investment, reflecting a broader global trend.
Ennis points out the importance of tailoring regulation to specific contexts: “AI often gets lumped into one box, but we need to distinguish between AI in general and AI in industrial settings.
“In industrial environments, excessive regulation could stifle innovation, especially as AI plays a key role in tackling challenges like energy consumption and sustainability.”
Both experts agree that regulation should be well-considered and international. Ennis notes that overly isolated regulatory approaches, particularly in places like the UK, may be counterproductive.
“If we see value in doing things differently, we should. But if there’s no clear benefit, we shouldn’t create unnecessary divergence, as that could increase costs,” he adds.
As the AI sector evolves amidst regulatory shifts and rapid innovation, Ellison advises businesses embarking on their AI journey to act quickly but cautiously, as the sector currently carries a significant environmental footprint. As the UN Environment Programme (UNEP) summarises:
- Producing a 2 kg computer requires 800 kg of raw materials.
- AI microchips rely on rare earth elements, often mined in environmentally destructive ways.
- Data centres generate e-waste containing hazardous substances like mercury and lead.
- AI-related infrastructure could soon consume six times more water than Denmark.
- Gen AI could consume up to 33 times more energy than traditional task-specific software.
- Most data centres are still powered by carbon-intensive energy sources.
However, Ellison stresses the importance of staying involved in the AI revolution to avoid being left behind, as delays can be costly. Additionally, he warns against rushing into major investments without fully understanding the fast-moving landscape.
He explains that the key is to engage thoughtfully, with internal checks and balances to manage risks and opportunities.
“It [AI] is one of those things where if you don’t catch the bus, it’s gone, and there isn’t another bus because everybody else will be on that one,” he concludes.