Better PUE values in data centres but lower efficiency - why?
The recent security vulnerabilities in microprocessors (Meltdown, Spectre) have an impact on energy consumption in data centres. A report by our member Seán Murphy.
Microprocessors at the heart of our computing infrastructure account for a significant share of the energy consumed in our data centres. The numbers vary somewhat depending on the type of data centre and the types of servers, but in a compute-heavy server the processor can account for 60% of total energy consumption, although the share is commonly lower for less powerful machines (Dayarathna et al., 2016).
With this in mind, chip designers such as Intel, AMD and ARM have expended enormous effort over the last decades on squeezing more compute cycles from every watt, delivering ever more compute power for a given amount of energy. While this effort has been very successful, the gains currently being achieved in this realm are tapering off.
Given the large impact CPU energy consumption has on the data centre segment, it is noteworthy that the recently discovered security vulnerabilities (Meltdown, Spectre) in multiple microprocessors arose from flaws in processor optimizations. The essential approach to protecting systems from these vulnerabilities is to disable these optimizations, which has been shown to increase microprocessor energy consumption for a given workload.
Initial results suggested the impact could be significant, with 30% increases in CPU load reported for workloads from the computer gaming sector. Subsequent analysis showed that these workloads are not representative and that the impact, although real, is more modest, typically in the range of a few percent.
For any given server, or even from the perspective of an organization's data centre, an increase of a few percent is not likely to have much impact. From the perspective of the IT sector as a whole, however, it is remarkable that a single security vulnerability could result in a worldwide increase in server energy consumption. Moreover, such an increase delivers no additional compute work, and hence represents a less efficient use of energy (although it is obviously necessary for security reasons).
A final observation on this matter: the current data centre energy-efficiency metric, Power Usage Effectiveness (PUE), focuses primarily on what fraction of a data centre's energy consumption is accounted for by the IT load, compared with the fraction consumed by other data centre systems - primarily cooling and airflow. The larger the fraction consumed by the IT systems, the more energy efficient the data centre is considered to be. An increase in IT energy consumption such as the one arising here would therefore increase the fraction of energy consumed by the IT load and could, perversely, result in improved perceived energy efficiency. We could see better PUEs and worse efficiency in the near future.
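The paradox described above can be made concrete with a small calculation. PUE is defined as total facility energy divided by IT equipment energy, so values closer to 1.0 are read as "more efficient". The figures below (1000 kWh of IT load, 500 kWh of overhead, a 5% mitigation penalty) are purely hypothetical and chosen only to illustrate the effect:

```python
# Sketch of the PUE paradox, using hypothetical numbers.
# PUE = total facility energy / IT equipment energy; values closer
# to 1.0 are conventionally read as "more efficient".

def pue(it_energy_kwh: float, overhead_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy."""
    return (it_energy_kwh + overhead_energy_kwh) / it_energy_kwh

# Before mitigations: 1000 kWh of IT load, 500 kWh cooling/airflow overhead.
it_before = 1000.0
overhead = 500.0
pue_before = pue(it_before, overhead)  # 1500 / 1000 = 1.5

# After mitigations: the same useful work now costs ~5% more CPU energy,
# while the facility overhead is assumed unchanged.
it_after = it_before * 1.05
pue_after = pue(it_after, overhead)  # 1550 / 1050 ≈ 1.476

# The PUE "improves" (moves toward 1.0) even though the energy needed
# per unit of useful work has gone up.
assert pue_after < pue_before
print(f"PUE before: {pue_before:.3f}, after: {pue_after:.3f}")
```

The same useful work is done in both cases, yet the headline metric looks better after the mitigation penalty, because PUE measures where energy goes inside the facility, not how much useful work it buys.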
Dayarathna, Miyuru, Yonggang Wen, and Rui Fan. "Data center energy consumption modeling: A survey." IEEE Communications Surveys & Tutorials 18, no. 1 (2016): 732-794. http://ieeexplore.ieee.org/document/7279063/