At the end of 2023, climate scientists were sounding the alarm. The year was officially the hottest on record, and the planet's warming was accelerating. According to data collected by NASA, global temperatures averaged 2.1 degrees Fahrenheit (1.2 degrees Celsius) above the agency's 1951–1980 baseline period.
At the same moment, generative A.I. had already disrupted industries around the globe, and its societal impact was still expanding rapidly. OpenAI's DALL-E 3 had flooded the internet with surreal images virtually indistinguishable from the work of human artists, while the company's GPT-4 was churning out coherent text on all sorts of subjects.
Activists and world leaders continue to press the automotive, manufacturing, and oil and gas industries to change their ways while there is still time to mitigate the damage. Yet as generative A.I.'s role grows, conservationists are increasingly calling for a more sustainable approach to the technology itself.
The Cost of Big Tech
In a 2023 interview with Communications of the ACM, Matt Warburton, principal consultant and sustainability lead at the global technology research and advisory firm ISG, suggested that A.I. should be measured as a subset of the larger information and communications technology sector, which accounted for roughly 1.8 to 2.8 percent of global carbon emissions in 2020.
Admittedly, that is still far below, say, automotive emissions, which accounted for roughly 15 percent of global emissions in 2022. Part of the challenge in reducing A.I.-related carbon emissions, however, is the shortage of reliable data on exactly how much the industry currently produces.
We already know that some of Silicon Valley's most prominent players are a major source of carbon emissions. Meta, Alphabet, Microsoft, Amazon, and Apple, the so-called "Big Five" tech companies, account for a sizable share on their own. In 2021, Amazon alone was responsible for 16 million metric tons of carbon dioxide.
When it comes to large generative models, whether image generators such as DALL-E or large language models (LLMs) such as GPT-4, tech companies have been less than forthright about tracking and disclosing the emissions required to train their creations. Training is the process by which these models churn through immense datasets in order to learn. It is such a power-hungry process that Microsoft has explored using small nuclear reactors to generate the necessary electricity.
According to MIT Technology Review, the startup Hugging Face estimated that training GPT-3 likely produced more than 500 metric tons of carbon dioxide. For reference, that is roughly the carbon footprint of 171 flights from New York City to Tokyo.
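As a rough sanity check (the per-flight figure below is inferred from the comparison itself, not taken from the cited report, and reads as a per-passenger estimate), dividing the training estimate by the number of flights gives:

\[
\frac{500~\text{metric tons CO}_2}{171~\text{flights}} \approx 2.9~\text{metric tons CO}_2~\text{per flight},
\]

a figure in the same range as commonly cited per-passenger estimates for a long-haul trip.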
The scale is only set to increase. By some industry estimates, the volume of data in the world doubles roughly every two years, and as LLMs grow more powerful and are trained on ever-larger datasets, they will require correspondingly more energy to build and operate. One recent peer-reviewed analysis estimated that by 2027, global A.I. energy demand could rival the annual electricity consumption of entire countries such as Argentina or Sweden.
Searching for Solutions
As generative A.I. becomes an ever more crucial part of society, the global community needs to take greater steps to make it as eco-friendly as possible. The first and most obvious is to demand greater transparency around emissions linked to generative A.I. Holding leading tech companies accountable for accurately monitoring and reporting the emissions tied to their LLMs will help us both understand the present situation and identify the appropriate steps to take.
A more challenging but ultimately essential step will be greening the world's power grids. As MIT Technology Review points out, a country like France, which derives more than 60 percent of its electricity from nuclear power, will see far lower emissions from generative A.I. than one that leans on fossil fuels. The more nations can wean themselves off fossil fuels and shift toward renewable sources of energy such as solar, wind, and hydroelectric power, the better.
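To illustrate why the grid matters, consider a hypothetical training run that consumes 1,000 megawatt-hours of electricity (a figure chosen purely for illustration, not drawn from any specific model). Using approximate carbon intensities of roughly 50 grams of CO2 per kilowatt-hour for a largely nuclear and hydro grid like France's, versus roughly 700 grams per kilowatt-hour for a coal-heavy grid:

\[
1{,}000~\text{MWh} \times 50~\tfrac{\text{g CO}_2}{\text{kWh}} \approx 50~\text{t CO}_2
\qquad\text{vs.}\qquad
1{,}000~\text{MWh} \times 700~\tfrac{\text{g CO}_2}{\text{kWh}} \approx 700~\text{t CO}_2.
\]

The same computation, in other words, can carry an order-of-magnitude difference in emissions depending on where it runs.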
In the meantime, some tech companies have been taking admirable steps toward greening their own energy usage. Alphabet has invested heavily in renewable energy, including the Rødby Fjord solar project in Denmark, the Piiparinmäki wind farm in Finland, and a series of wind turbines in Chile, all with the goal of running on what it calls 24/7 carbon-free energy by 2030. Apple has likewise stated that it aims to be carbon neutral by 2030 and has invested heavily in renewable energy sources.
Realistically, though, overhauling the world's energy supply is a herculean undertaking that will not be completed in the short term. While world leaders continue to tackle that problem, there are plenty of smaller, strategic moves that tech companies can make to lower their emissions.
As a 2023 article in the Harvard Business Review suggests, an early step should be to limit the use of LLMs to where they’re most beneficial for society. Generative A.I. has astonishing research applications, particularly in the field of medical science.
Yet many of the generative models the public currently interacts with are used largely for entertainment. Ajay Kumar and Tom Davenport, the article's authors, suggest that we should limit the use of models such as Midjourney or DALL-E 2. By one peer-reviewed estimate, generating 1,000 images with Stable Diffusion, a comparable image-generation model, produces emissions equivalent to driving about 4.1 miles in a gasoline-powered car.
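To put that per-image cost in perspective, using the U.S. EPA's commonly cited average of roughly 400 grams of CO2 per mile for a typical passenger car (a conversion figure not taken from the study itself), the arithmetic works out to:

\[
4.1~\text{miles} \times 400~\tfrac{\text{g CO}_2}{\text{mile}} \approx 1.6~\text{kg CO}_2
\;\Rightarrow\;
\frac{1{,}600~\text{g}}{1{,}000~\text{images}} \approx 1.6~\text{g CO}_2~\text{per image}.
\]

Tiny per image, but it adds up quickly across billions of generations.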
The authors further suggest that instead of training large generative models virtually from scratch, tech companies should build on existing ones. Fine-tuning current models, rather than forcing new ones to churn through mammoth datasets, lets companies shrink their footprint further.
There is still a long way to go in making this developing industry greener. The time to act is now, to set a positive precedent for an A.I.-powered future.
Posted 04 Jul 2024