Implementation

Team recommendations

Based on its day-to-day experience, the Data Center Team of the Green IT SIG has summarized the most relevant recommendations for achieving a Green Data Center.

  1. Prevent energy waste

    • Consistently use state-of-the-art technologies, as these are energy-optimized. (Redeploying old servers rarely pays off from an energy point of view.)
    • Make use of the available power-reduction options (down-clocking processors, switching off redundant power supplies, shutting down backup storage infrastructure by spinning down the disks when not in use); a sketch follows after this list.
    • Use tape libraries instead of disk storage for backups
    • Reduce the number of IT components by means of virtualized systems (servers and storage)
    • Decommission all hardware components that are no longer in use
  2. Optimize data center space and infrastructure

    • Save energy in the data center by raising the room temperature (see the temperature optimization section)
    • Plan infrastructure for direct water cooling of IT components and racks, so that direct cooling of servers remains possible for future, as yet unknown hardware
    • Save and optimize data center floor space, e.g. through cold aisle containment
  3. Exchange experience

    • Data center operators should use suitable and efficient infrastructure installations for highly available data centers
    • Rack manufacturers offer efficient and technically sophisticated water-cooled rack systems as well as cold or hot aisle containments
    • Support research on liquid-cooled processors and entire motherboards
    • Motivate colleges and universities to research energy-saving measures for power and cooling in the data center environment
    • Specialized planners have extensive experience in building or converting green data centers
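
As an illustration of the power-reduction options in recommendation 1, here is a minimal sketch. It assumes a Linux host on which the standard cpupower and hdparm tools are installed and on which /dev/sdb is an idle backup disk that may safely be spun down; the device name, timeout and governor are illustrative assumptions, not values from the text.

  # power_saving.py - sketch of user-controlled power-reduction options
  # Assumptions: Linux host, run as root, cpupower and hdparm installed,
  # /dev/sdb is an idle backup disk that may safely be put into standby.
  import subprocess

  def set_cpu_governor(governor="powersave"):
      # Down-clock the processors by selecting a power-saving frequency governor.
      subprocess.run(["cpupower", "frequency-set", "-g", governor], check=True)

  def spin_down_backup_disk(device="/dev/sdb", timeout=120):
      # Set a standby timeout (hdparm -S, here 120 * 5 s = 10 minutes) and send
      # the disk to standby immediately (hdparm -y) while backups are not running.
      subprocess.run(["hdparm", "-S", str(timeout), device], check=True)
      subprocess.run(["hdparm", "-y", device], check=True)

  if __name__ == "__main__":
      set_cpu_governor()
      spin_down_backup_disk()

Switching off redundant power supplies is usually done through the server's management controller rather than the operating system, so it is not shown here.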

Some practical efficiency objectives

  1. Concentrate the servers on the smallest possible floor space
    • Implement the cold / hot aisle concept (efficient cooling up to about 10 kW per rack)
    • Implement water-cooled racks (efficient cooling above 20 kW per rack)
  2. Aim for a safe but highest possible loading of the racks
    • Measure the power usage of the ICT components in each rack to prevent an overload (see the sketch after this list)
  3. Implement self-contained cooling units with efficient DC-motors
    • Up to 50% reduction of power with the help of speed-controlled DC-motors.
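
The per-rack power measurement mentioned in objective 2 can be as simple as the following sketch: it sums power readings for the ICT components in a rack and warns before the provisioned limit is reached. The example readings, the 10 kW rack limit and the 90% warning threshold are illustrative assumptions; in practice the values would come from metered PDUs or the servers' management interfaces.

  # rack_power_check.py - warn before a rack is electrically overloaded
  # Assumption: readings in watts, e.g. polled from metered PDUs; the
  # example values, the rack limit and the threshold are illustrative.

  RACK_LIMIT_W = 10_000      # provisioned power per rack (assumption)
  WARN_THRESHOLD = 0.9       # warn at 90% of the limit (assumption)

  readings_w = {
      "server-01": 420,
      "server-02": 455,
      "storage-01": 810,
      "switch-01": 150,
  }

  def check_rack(readings, limit_w=RACK_LIMIT_W, threshold=WARN_THRESHOLD):
      total = sum(readings.values())
      utilization = total / limit_w
      if utilization >= 1.0:
          print(f"OVERLOAD: {total} W exceeds the {limit_w} W rack limit")
      elif utilization >= threshold:
          print(f"WARNING: {total} W is {utilization:.0%} of the rack limit")
      else:
          print(f"OK: {total} W ({utilization:.0%} of the rack limit)")

  if __name__ == "__main__":
      check_rack(readings_w)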

Temperature optimization

Results of an inquiry to HW providers

Many data center operators have not yet considered raising the temperature in their computer rooms. One reason is that they do not want to change the existing infrastructure; another is that the data sheets of server, storage, and network providers still recommend temperatures of 23°C or below.

In 2011 we asked the largest IT providers whether they would support an increase of the temperature in the computer room in line with the ASHRAE guidelines of 2008 and 2011.

The following providers were asked to answer the inquiry: Dell, Fujitsu, IBM, HDS, EMC, HP, Oracle (Sun), and Cisco. Their answers are summarized here.

The questions

  1. Do you support 27°C at the intake of your IT equipment for an indefinite time?
  2. Are there exceptions?
  3. Do you observe higher energy consumption in the IT equipment when the air temperature reaches 27°C at its intake?
  4. By what percentage does the energy consumption of the IT equipment increase when running at 27°C compared to 24°C?

With questions 3 and 4 we wanted to find out whether the energy saved through reduced cooling would to a large extent be lost again within the servers themselves, since the fans in the servers run faster at higher intake temperatures.

The answers

  1. The units of the hardware providers surveyed operate properly and without restriction at an intake temperature of 27°C. Today the upper temperature limits are typically around 35°C, with some units at 32°C.
  2. No exceptions were mentioned.
  3. At the same time it was confirmed that in some storage units and servers an intake temperature of 27°C makes the fans run faster, so they consume more electricity. This behavior is observed especially with more recent fans, which have temperature-optimized speed control.
  4. The power consumption is visualized with graphs, but without quantifying the resulting increase of the power consumption in the server.

Most of the providers offer a service to analyse the energy savings in the server environment.

Summary

According to the hardware providers, an increase of the temperature in the computer room is possible and desirable. Raising the temperature from 24°C to 27°C yields only a small net saving in power usage, because the higher fan speed cancels out part of the savings. This effect is also mentioned in the ASHRAE documentation.

In addition, one needs to be aware that with a supply air temperature of 18°C in the raised floor, the temperature in a well-utilized server room is around 22°C near the floor and about 28°C at a height of around 2 meters. This is caused by "thermal short circuits", i.e. recirculation of hot exhaust air.

Conclusion: without additional measures the room temperature should not be raised, because the servers in the upper parts of the racks already take in very hot air today.

Two substantial accompanying measures are:

  • Cold/hot aisle containment
  • Replacement of the existing air cooling units with modern, speed-controlled units

These measures allow (a) a reduction of the temperature spread in front of the ICT units and (b) substantial energy savings without an increase of the room temperature. Both measures should be introduced together when new or expanded server rooms are built.

Care must be taken to dimension the new air cooling units so that the required air flow corresponds to about 50% of their rated air flow. In this way energy savings of up to 70% can be achieved in the air cooling units.
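
The figure of up to 70% follows from the fan affinity laws: fan power scales roughly with the cube of the air flow, so a unit running at about 50% of its rated flow needs only a small fraction of its rated fan power. The short sketch below only illustrates this idealized relationship; real units save somewhat less because of fixed losses and motor and control efficiency.

  # fan_affinity.py - idealized fan affinity law: power ~ (flow fraction)^3
  def ideal_fan_power_fraction(flow_fraction):
      # Fraction of rated fan power needed at a given fraction of rated air flow.
      return flow_fraction ** 3

  for flow in (1.0, 0.8, 0.5):
      power = ideal_fan_power_fraction(flow)
      print(f"{flow:.0%} of rated air flow -> about {power:.1%} of rated fan power "
            f"(ideal saving {1 - power:.1%})")

  # At 50% of the rated air flow the idealized saving is 87.5%; with the losses
  # of real units, savings in the region of 70% remain plausible.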

Further savings, e.g. through an increase of the water temperature and an extension of the "free cooling" periods, are not discussed here, but they are of course an important component of an energy-efficient cooling system.

Financials

In existing server rooms, replacing the air cooling units with modern, variably controlled ones cannot be justified on financial grounds alone. The operational savings (OPEX) are too low in comparison to the acquisition cost (CAPEX) of the new units. At the current energy cost of about CHF 0.11 per kWh, the return on investment (ROI) can take up to 20 years. However, the replacement does make sense at the beginning of a new life cycle.
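
As a minimal sketch of this payback argument: only the electricity price of CHF 0.11 per kWh is taken from the text; the acquisition cost and the annual electricity saving below are hypothetical figures chosen to show how a roughly 20-year payback period comes about.

  # payback_sketch.py - simple payback period for replacing air cooling units
  ELECTRICITY_PRICE_CHF_PER_KWH = 0.11   # energy cost from the text
  capex_chf = 40_000                     # assumed acquisition cost of new units
  annual_saving_kwh = 18_000             # assumed annual electricity saving

  annual_saving_chf = annual_saving_kwh * ELECTRICITY_PRICE_CHF_PER_KWH
  payback_years = capex_chf / annual_saving_chf
  print(f"Annual saving:  CHF {annual_saving_chf:.0f}")
  print(f"Simple payback: {payback_years:.1f} years")
  # With figures of this order of magnitude the payback period is around
  # 20 years, which is why a replacement is usually only worthwhile at the
  # beginning of a new life cycle.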

With the implementation of hot and cold aisle containment alone, an expansion of the existing cooling units can be avoided and a consistent temperature in front of the servers can be achieved.