Antoni Bosch, Senior VP Telecom Solutions, Prysmian Group
Recently, I took part in ‘The power of innovation: Europe’s search for tech leadership’ at the Tech & Politics forum organized by the Financial Times and ETNO, the European Telecommunications Network Operators’ Association. There, I spoke about the latest forecasts, which predict that data traffic will grow by up to 30% year on year until 2030. That means we need to ask ourselves whether current networks are equipped for such incredible growth – especially since these forecasts most likely do not account for the full impact of AI.
A large part of this AI-driven traffic will be generated and stored in Data Centers. AI requires huge amounts of data as well as truly vast computing power. New Data Centers designed with AI capabilities demonstrate an incredible increase in computing power and connectivity, both inside the Data Center and between Data Centers.
This, in turn, introduces vast energy consumption, which will grow exponentially as the number of users and devices and the volume of queries increase. According to a recent Morgan Stanley study, a query in ChatGPT costs seven to twenty times more than a search in Google: ChatGPT uses more computing resources and therefore more energy – which is, of course, why you receive a more elaborate reply. Are we ready for a massive deployment of AI? Not in the short term.
Data centers around the world consume a significant amount of electricity, largely due to the need for continuous power to operate servers and other equipment, as well as for cooling and building systems. According to the International Energy Agency (IEA), data centers account for 1–1.5% of total electricity consumption worldwide. In 2021, global Data Center power consumption was 220–320 terawatt-hours, approximately 0.9%–1.3% of total electricity demand – a 10%–60% rise in data center energy use compared to 2015. In the same period, however, data center workloads increased by 160%; that energy use grew so much more slowly is largely thanks to improvements in PUE and energy efficiency.
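The arithmetic behind that efficiency gain can be made concrete. A minimal sketch, using only the growth factors quoted above:

```python
# Workloads grew by 160% (a factor of 2.6) between 2015 and 2021, while
# energy use rose only 10%-60% (a factor of 1.1 to 1.6). Energy consumed
# per unit of workload therefore fell sharply.
workload_growth = 2.60  # +160%

for energy_growth in (1.10, 1.60):  # lower and upper bounds of the rise
    intensity = energy_growth / workload_growth  # energy per workload vs. 2015
    print(f"energy x{energy_growth:.1f} -> intensity {intensity:.0%} of the 2015 level")
# -> energy x1.1 -> intensity 42% of the 2015 level
# -> energy x1.6 -> intensity 62% of the 2015 level
```

In other words, even at the pessimistic end of the range, each unit of work done in 2021 required well under two-thirds of the energy it did in 2015.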
In many developed countries, Data Center energy consumption represents 4–5% of total energy consumption. In countries like Ireland, with a higher number of Data Centers per capita, it even exceeds 10%. Just imagine most developed countries moving from 5% to 10%.
We need an increasing power supply that is simply not available today. What’s more, this power should come from renewable sources – otherwise it won’t be sustainable. This is why we have started talking about the Twin Transition: the Digital Transition will not happen without the Green Energy Transition, and vice versa. As we have seen, digitalization and AI will require more green and renewable energy generation; at the same time, the smart and efficient deployment of the Green Transition requires digitalization.
Innovation is necessary to realize this. Investing in R&D will push the limits and provide innovative solutions for the Twin Transition. This is what we are trying to do at Prysmian. We also need to be more aware of the hidden cost of digital services such as social networks. Every time someone posts on social media there is a hidden cost, because their post will be stored in one or more Data Centers, consuming energy indefinitely. I’d like to call for a kind of social responsibility: by all means post relevant information worth storing in a Data Center, but be careful about sharing and indefinitely storing useless posts!
How good Power Quality enhances PUE and efficient Data Center operation
Jorlan Peeters, Managing Director, HyTEPS
Data centers started off as collections of computers in the basements of banks and companies. As capacity requirements increased, installations became larger and outgrew those basements. Separate locations were set up and efficiency became important. Over the years, Data Centers have grown from a handful of servers to hundreds, thousands, sometimes even hundreds of thousands of servers in one location. Such configurations can substantially affect the grid and other users in a region.
Over the years, industry-wide data center workloads increased eightfold (see Figures 1 and 2), yet, remarkably, total energy consumption remained virtually the same. This is thanks to far-reaching resource pooling and optimization at device level and in data center design. Increases in scale encouraged the most efficient possible design of servers, primary technical systems, and support installations such as cooling and backup. However, global data center energy consumption remains vast. Power demand won’t decrease any time soon, with developments in cloud applications, mobile and at-home working, Artificial Intelligence, and the Internet of Things boosting capacity and power requirements. Historically, data center energy consumption was fairly constant throughout the day and night. Bringing all processes together under one roof reduced fluctuations: you could more easily plan activities to optimize server utilization and, as a result, grid load. Nowadays, of course, fluctuations don’t just occur on the usage side, but also in the electricity supply.
Measuring, monitoring, simulating, analyzing
We’d like to share some learnings from discussions and experiences with our data center clients, including some of the world’s largest players. Keeping power quality in order is essential, helping data centers hit targets and keep improving without affecting key processes. Adjustments can be made with absolute certainty that equipment will keep working without downtime. By zooming in and mapping complex relationships between devices and components, data centers can make informed adjustments, track the sources of unexpected measurements, and implement durable, effective remedies. The key is to look at the system as a whole. What is the quality of voltage and current? What is the grid doing? What causes fluctuations? What harmonics are injected? How do components affect each other? Monitoring and analyses are key, helping uncover usage patterns, for example. Through active filtering, reactive power compensation, and other proactive approaches, we maintain network performance. Furthermore, ‘inspansion’ – increasing the capacity of an existing installation through smart measurements and effective measures – is key to efficient, cost-effective, sustainable utilization of electrical power, as well as optimal grid interfacing.
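As an illustration of the kind of metric such monitoring tracks, here is a minimal sketch of Total Harmonic Distortion (THD) – the standard ratio of the combined RMS of the harmonics to the RMS of the fundamental. The voltage values are hypothetical, not taken from any HyTEPS measurement:

```python
import math

def thd(harmonic_rms: list[float]) -> float:
    """Total Harmonic Distortion of a waveform.

    harmonic_rms[0] is the RMS of the fundamental (h=1);
    later entries are the RMS values of harmonics h=2, 3, ...
    """
    fundamental = harmonic_rms[0]
    distortion = math.sqrt(sum(v * v for v in harmonic_rms[1:]))
    return distortion / fundamental

# Hypothetical example: a 230 V fundamental with small 3rd and 5th harmonics.
print(f"{thd([230.0, 11.5, 6.9]):.1%}")  # -> 5.8%
```

A rising THD reading from one feeder, cross-referenced against the devices on it, is exactly the sort of clue that lets an operator pinpoint which component is injecting harmonics before it affects neighbours.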
Power consumption optimization will remain important for years to come. Fortunately, there are all kinds of ‘quick wins’. Checking whether server rooms might tolerate slightly higher temperatures and switching to free air cooling, for example: low-hanging fruit requiring comparatively small investments. You shouldn’t just assume that your performance is as good as it gets because you have the best hardware and the smartest processes in place. You have to come up with new insights and ideas by continuously measuring, monitoring, simulating, and analyzing.
Theory and practice are very different things. Devices from different vendors might all be neatly within specifications – but connect them all at the same time, and anything can happen! It’s vital to consider the complexity of your system. Today, Data Center operators have more insight into their infrastructure than ever, but power is often a blind spot. The Data Center industry has achieved a great deal, but there are always more energy efficiency improvements to be made!
SOURCE: Global data center energy demand by data center type, 2015-2021, International Energy Agency (IEA)
SOURCE: Global trends in internet traffic, data center workloads and data center energy consumption, 2010-2021, International Energy Agency (IEA)
Power Usage Effectiveness (PUE) is a ratio that describes how efficiently a computer data center uses energy. A PUE value of 2 means that for every watt of power used for primary processes, another watt is consumed for power distribution, cooling and related processes. In this case, almost half of the total energy consumption is not used for processing.
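The definition above reduces to a one-line calculation. A minimal sketch, with illustrative energy figures:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 2,000,000 kWh in total to deliver 1,000,000 kWh of IT load:
value = pue(2_000_000, 1_000_000)
print(value)              # -> 2.0

# Share of total consumption not used for primary processing:
overhead = 1 - 1 / value
print(f"{overhead:.0%}")  # -> 50%
```

An ideal facility would approach a PUE of 1.0, with the overhead share approaching zero.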