Explained: Generative AI's environmental impact | MIT News

In a two-part series, MIT News explores the environmental implications of generative AI. In this article, we look at why this technology is so resource-intensive. A second piece will investigate what experts are doing to reduce generative AI's carbon footprint and other impacts.

The excitement surrounding the potential benefits of generative AI, from improving worker productivity to advancing scientific research, is hard to ignore. While the explosive growth of this new technology has enabled the rapid deployment of powerful models in many industries, the environmental consequences of this generative AI "gold rush" remain difficult to pin down, let alone mitigate.

The computational power required to train generative AI models, which often have billions of parameters, such as OpenAI's GPT-4, can demand a staggering amount of electricity, which leads to increased carbon dioxide emissions and pressure on the electric grid.

Furthermore, deploying these models in real-world applications, enabling millions to use generative AI in their daily lives, and then fine-tuning the models to improve their performance draws large amounts of energy long after a model has been developed.

Beyond electricity demands, a great deal of water is needed to cool the hardware used for training, deploying, and fine-tuning generative AI models, which can strain municipal water supplies and disrupt local ecosystems. The growing number of generative AI applications has also spurred demand for high-performance computing hardware, adding indirect environmental impacts from its manufacture and transport.

"When we think about the environmental impact of generative AI, it is not just the electricity you consume when you plug the computer in. There are much broader consequences that go out to a system level and persist based on actions that we take," says Elsa A. Olivetti, professor in the Department of Materials Science and Engineering and lead of the Decarbonization Mission of MIT's new Climate Project.

Olivetti is senior author of a 2024 paper, "The Climate and Sustainability Implications of Generative AI," co-authored by MIT colleagues in response to an Institute-wide call for papers exploring the transformative potential of generative AI, in both positive and negative directions for society.

Demanding data centers

The electricity demands of data centers are one major factor contributing to the environmental impacts of generative AI, since data centers are used to train and run the deep learning models behind popular tools like ChatGPT and DALL-E.

A data center is a temperature-controlled building that houses computing infrastructure, such as servers, data storage drives, and network equipment. For instance, Amazon has more than 100 data centers worldwide, each of which contains about 50,000 servers that the company uses to support its cloud computing services.

While data centers have been around since the 1940s (the first was built at the University of Pennsylvania in 1945 to support the first general-purpose digital computer, the ENIAC), the rise of generative AI has dramatically increased the pace of data center construction.

"What is different about generative AI is the power density it requires. Fundamentally, it is just computing, but a generative AI training cluster might consume seven or eight times more energy than a typical computing workload," says Noman Bashir, lead author of the impact paper, who is a Computing and Climate Impact Fellow at the MIT Climate and Sustainability Consortium (MCSC) and a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Scientists have estimated that the power requirements of data centers in North America increased from 2,688 megawatts at the end of 2022 to 5,341 megawatts at the end of 2023, partly driven by the demands of generative AI. Globally, the electricity consumption of data centers rose to 460 terawatt-hours in 2022. This would have made data centers the 11th-largest electricity consumer in the world, between the nations of Saudi Arabia (371 terawatt-hours) and France (463 terawatt-hours), according to the Organisation for Economic Co-operation and Development.

By 2026, the electricity consumption of data centers is expected to approach 1,050 terawatt-hours (which would bump data centers up to fifth place on the global list, between Japan and Russia).
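The scale of this growth can be checked with quick arithmetic on the figures above. This is a back-of-envelope sketch; all input values come from this article:

```python
# Back-of-envelope check of the data center growth figures cited above.

na_power_2022_mw = 2688      # North American demand, end of 2022 (MW)
na_power_2023_mw = 5341      # end of 2023 (MW)

global_use_2022_twh = 460    # global data center consumption, 2022 (TWh)
global_use_2026_twh = 1050   # projected for 2026 (TWh)

# North American demand roughly doubled in a single year.
na_growth = na_power_2023_mw / na_power_2022_mw
print(f"North American demand grew {na_growth:.2f}x in one year")

# Global consumption is projected to more than double by 2026.
global_growth = global_use_2026_twh / global_use_2022_twh
print(f"Projected global growth by 2026: {global_growth:.2f}x")

# The 2022 total falls between Saudi Arabia (371 TWh) and France (463 TWh),
# consistent with the 11th-place ranking the article attributes to the OECD.
assert 371 < global_use_2022_twh < 463
```

Even before any 2026 projection, the one-year near-doubling of North American demand illustrates why grid operators are paying attention.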

While not all data center computation involves generative AI, the technology has been a major driver of rising energy demands.

"The demand for new data centers cannot be met in a sustainable way. The pace at which companies are building new data centers means the bulk of the electricity to power them must come from fossil fuel-based power plants," says Bashir.

The power needed to train and deploy a model like OpenAI's GPT-3 is difficult to ascertain. In a 2021 research paper, scientists from Google and the University of California at Berkeley estimated that the training process alone consumed 1,287 megawatt-hours of electricity (enough to power about 120 average U.S. homes for a year), generating about 552 tons of carbon dioxide.
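The household comparison can be sanity-checked: spreading 1,287 MWh across 120 homes implies roughly 10,700 kWh per home per year, close to typical U.S. household consumption. The per-home and carbon-intensity figures below are inferred from the article's numbers, not stated in it:

```python
# Sanity-check the GPT-3 training estimate against the household comparison.

training_mwh = 1287   # estimated GPT-3 training energy (MWh), 2021 study
homes = 120           # average U.S. homes powered for a year, per the article
co2_tons = 552        # estimated CO2 emissions (metric tons)

# Implied annual electricity use per home (inferred, not in the article).
per_home_kwh = training_mwh * 1000 / homes
print(f"Implied annual consumption per home: {per_home_kwh:,.0f} kWh")

# Implied carbon intensity of the electricity used (inferred):
# 552,000 kg CO2 over 1,287,000 kWh is about 0.43 kg CO2 per kWh,
# a plausible value for a fossil-heavy grid mix.
kg_co2_per_kwh = co2_tons * 1000 / (training_mwh * 1000)
print(f"Implied grid carbon intensity: {kg_co2_per_kwh:.2f} kg CO2/kWh")
```

That the two implied values land in realistic ranges suggests the study's headline numbers are internally consistent.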

While all machine-learning models must be trained, one issue unique to generative AI is the rapid fluctuation in energy use that occurs across different phases of the training process, Bashir explains.

Power grid operators must have a way to absorb those fluctuations to protect the grid, and they usually employ diesel-based generators for that task.

Increasing impacts from inference

Once a generative AI model is trained, its energy demands don't disappear.

Each time the model is used, perhaps by an individual asking ChatGPT to summarize an email, the computing hardware that performs those operations consumes energy. Researchers have estimated that a ChatGPT query consumes about five times more electricity than a simple web search.
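To make the five-fold figure concrete, one can attach an assumed baseline. The roughly 0.3 Wh per conventional web search used below is a commonly cited estimate and an assumption on my part, not a figure from this article; only the 5x ratio comes from the text:

```python
# Rough per-query energy comparison. The 0.3 Wh web-search baseline is an
# ASSUMED, commonly cited figure, not stated in the article; only the 5x
# multiplier comes from the text above.

web_search_wh = 0.3         # assumed energy per web search (Wh)
chatgpt_multiplier = 5      # per the article
chatgpt_query_wh = web_search_wh * chatgpt_multiplier

# At a hypothetical volume of one million queries per day:
queries_per_day = 1_000_000
daily_kwh = chatgpt_query_wh * queries_per_day / 1000
print(f"One ChatGPT query: ~{chatgpt_query_wh} Wh")
print(f"1M queries/day: ~{daily_kwh:,.0f} kWh/day")
```

The per-query cost is tiny; the concern in the article is the aggregate, since these services handle enormous query volumes.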

"But an everyday user doesn't think too much about that," says Bashir. "The ease of use of generative AI interfaces, and the lack of information about the environmental impacts of my actions, means that as a user I don't have much incentive to cut back on my use of generative AI."

With traditional AI, energy usage is split fairly evenly between data processing, model training, and inference, which is the process of using a trained model to make predictions on new data. However, Bashir expects the electricity demands of generative AI inference to eventually dominate, since these models are becoming ubiquitous in so many applications, and the electricity needed for inference will increase as future versions of the models become larger and more complex.

Plus, generative AI models have an especially short shelf-life, driven by rising demand for new AI applications. Companies release new models every few weeks, so the energy used to train prior versions goes to waste, Bashir adds. New models often consume more energy for training, since they usually have more parameters than their predecessors.

While the electricity demands of data centers may be getting the most attention in the research literature, the amount of water these facilities consume has environmental impacts as well.

Chilled water is used to cool a data center by absorbing heat from computing equipment. It has been estimated that, for each kilowatt-hour of energy a data center consumes, it needs two liters of water for cooling, says Bashir.
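Combining the two-liters-per-kilowatt-hour estimate with the GPT-3 training figure cited earlier gives a sense of scale. Both inputs come from this article, but combining them this way is my own rough sketch, not a calculation the article performs:

```python
# Rough cooling-water estimate, combining two figures from the article.

liters_per_kwh = 2              # cooling water per kWh of data center energy
gpt3_training_kwh = 1_287_000   # GPT-3 training estimate (1,287 MWh)

water_liters = liters_per_kwh * gpt3_training_kwh
print(f"Implied cooling water for GPT-3 training: ~{water_liters:,} liters")
# ~2.6 million liters, on the order of an Olympic-size swimming pool
# (which holds about 2.5 million liters)
```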

"Just because this is called 'cloud computing' doesn't mean the hardware lives in the cloud. Data centers are present in our physical world, and because of their water usage they have direct and indirect implications for biodiversity," he says.

The computing hardware inside data centers brings its own, less direct environmental impacts.

While it is difficult to estimate how much power is needed to manufacture a GPU, a type of powerful processor that can handle intensive generative AI workloads, it would be more than what is needed to produce a simpler CPU, because the fabrication process is more complex. A GPU's carbon footprint is compounded by the emissions related to transporting materials and the finished product.

There are also environmental implications of obtaining the raw materials used to fabricate GPUs, which can involve dirty mining procedures and the use of toxic chemicals for processing.

Market research firm TechInsights estimates that the three major producers (NVIDIA, AMD, and Intel) shipped 3.85 million GPUs to data centers in 2023, up from about 2.67 million in 2022.

The industry is on an unsustainable path, Bashir says, but there are ways to encourage responsible development of generative AI that supports environmental objectives.

He, Olivetti, and their MIT colleagues argue that this will require a comprehensive consideration of all the environmental and societal costs of generative AI, as well as a detailed assessment of the value in its perceived benefits.

"We need a more contextual way of systematically and comprehensively understanding the implications of new developments in this space. Due to the speed at which there have been improvements, we haven't had a chance to catch up with our abilities to measure and understand the tradeoffs," says Olivetti.
