When a customer looks for a Data Center for their IT systems, one of the first parameters they focus on is the PUE indicator. The problem is that the number itself says very little without context—and that context can be complex.

What exactly is PUE?

PUE (Power Usage Effectiveness) is the ratio of the total energy consumed by a Data Center facility to the energy used solely to power IT equipment. The numerator therefore includes everything—servers, cooling systems, UPSs, transformers, and even elevators or the heating of generator engines. The denominator includes only IT equipment.

The formula is simple, but its interpretation is not. In theory, the lower the PUE the better: a value of exactly 1.0 would mean that every kilowatt-hour reaches the IT equipment. A lower value means a smaller portion of energy is used to operate the facility itself, and more goes directly to IT workloads.
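
Since the ratio itself is trivial to compute, a minimal Python sketch (with invented annual figures) makes the definition concrete:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy.

    A value of exactly 1.0 would mean every kWh reaches IT equipment;
    real facilities are always above it, since cooling, UPS losses,
    transformers, etc. add overhead on top of the IT load."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual figures: 14 GWh total, of which 10 GWh went to IT.
print(pue(14_000_000, 10_000_000))  # 1.4: 0.4 kWh of overhead per IT kWh
```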

Why a low PUE does not always mean a better Data Center

Customers often come in expecting a very low PUE. Sometimes this is because they have read about Data Centers in Norway or Greenland, where cooling is almost “free” all year round. In Poland, however, the climate is very different. In summer, temperatures can reach 35°C, and free cooling—using cold outside air—no longer works. Compressors must then operate at maximum capacity.

Moreover, a low PUE can be achieved at the expense of safety. Some Data Centers run one power path through UPSs and a second path without them; they save the energy lost in UPS conversion, but the customer receives a lower level of redundancy. Another way to reduce PUE is to raise the temperature in the server room: cooling systems work less intensively and PUE drops. The risk is that if a cooling unit fails, there may not be enough time to switch to a backup before environmental thresholds are exceeded, and not every server configuration, particularly high-density deployments, tolerates elevated inlet temperatures.
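
To see why elevated temperatures shrink the failover window, a deliberately pessimistic back-of-envelope sketch helps. All figures here are hypothetical, and it ignores the thermal mass of racks and the building, which in practice buys extra minutes; the point is only that the margin scales with the remaining temperature headroom:

```python
AIR_DENSITY = 1.2   # kg/m^3, at roughly room temperature
AIR_CP = 1005.0     # J/(kg*K), specific heat of air

def ride_through_seconds(it_load_kw: float, room_volume_m3: float,
                         headroom_k: float) -> float:
    """Seconds until the room air warms by `headroom_k` kelvin if all
    cooling stops: an air-only, worst-case estimate."""
    air_mass_kg = AIR_DENSITY * room_volume_m3
    heating_rate = (it_load_kw * 1000.0) / (air_mass_kg * AIR_CP)  # K/s
    return headroom_k / heating_rate

# Hypothetical room: 600 m^3 of air, 300 kW of IT load.
# Running at a 22 C inlet leaves ~5 K to a 27 C alarm threshold;
# running at 26 C leaves only ~1 K.
print(ride_through_seconds(300, 600, 5.0))  # ~12 s of air-only margin
print(ride_through_seconds(300, 600, 1.0))  # ~2.4 s of air-only margin
```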

There is also the ecological aspect. A low PUE means lower energy consumption, but if that energy comes from coal-fired power plants, the environmental benefit is questionable. Scale matters as well—a very large Data Center may have an excellent PUE while still consuming several megawatts of “dirty” energy.

How PUE is measured—and why it matters

PUE should be calculated as an annual average. A January result—when free cooling operates at full capacity—may be as low as 1.1. The same facility in summer may reach a PUE of 1.6 or 1.7. Presenting only winter results is misleading.
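
A small sketch with invented monthly figures shows the gap. Note that the honest annual number is the ratio of the energy sums over the whole year, not a single month's snapshot:

```python
# Hypothetical monthly energy readings in MWh: (month, total facility, IT).
monthly = [
    ("Jan", 1120, 1000), ("Feb", 1130, 1000), ("Mar", 1180, 1000),
    ("Apr", 1250, 1000), ("May", 1350, 1000), ("Jun", 1550, 1000),
    ("Jul", 1650, 1000), ("Aug", 1600, 1000), ("Sep", 1400, 1000),
    ("Oct", 1280, 1000), ("Nov", 1170, 1000), ("Dec", 1130, 1000),
]

total = sum(t for _, t, _ in monthly)
it = sum(i for _, _, i in monthly)
print(f"annual PUE:   {total / it:.2f}")                      # ~1.32
print(f"January only: {monthly[0][1] / monthly[0][2]:.2f}")   # 1.12
print(f"July only:    {monthly[6][1] / monthly[6][2]:.2f}")   # 1.65
```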

Equally important is what the Data Center includes in its total energy calculation. Are UPS losses included? Transformers? Generator heating systems? Some facilities omit these elements, artificially lowering their PUE. There is no single, perfectly precise universal definition of what must be included in total facility energy.
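
A toy calculation (all figures invented) shows how strongly the scope of "total facility energy" moves the result:

```python
it = 10_000              # MWh/year delivered to IT equipment
cooling = 2_800
ups_losses = 600
transformers = 250
generator_heating = 150

full_total = it + cooling + ups_losses + transformers + generator_heating
trimmed_total = it + cooling   # "forgetting" losses and generator heaters

print(full_total / it)     # 1.38 -> the honest figure
print(trimmed_total / it)  # 1.28 -> same facility, flattering methodology
```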

In a well-managed Data Center, measurements are taken in real time, every few minutes. Meters are installed at dozens of points: on every main circuit, within each cooling system, in every server room, and separately for the primary and secondary power paths. This level of detail not only enables reliable PUE calculation but also helps identify optimization opportunities.
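
On the software side, the aggregation can be as simple as summing the right meters each polling cycle. A minimal sketch, assuming hypothetical meter names and a two-kind classification:

```python
from dataclasses import dataclass

@dataclass
class MeterReading:
    point: str   # e.g. "main-feed-A", "chiller-2", "it-room-1" (invented)
    kind: str    # "facility" for main feeds, "it" for IT-room meters
    kw: float    # instantaneous power at this point

def instant_pue(readings: list[MeterReading]) -> float:
    """Instantaneous PUE from one polling cycle (e.g. every few minutes).
    A real system would also validate meter health and coverage."""
    facility = sum(r.kw for r in readings if r.kind == "facility")
    it = sum(r.kw for r in readings if r.kind == "it")
    return facility / it

readings = [
    MeterReading("main-feed-A", "facility", 900.0),
    MeterReading("main-feed-B", "facility", 860.0),
    MeterReading("it-room-1", "it", 700.0),
    MeterReading("it-room-2", "it", 580.0),
]
print(instant_pue(readings))  # (900 + 860) / (700 + 580) = 1.375
```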

Efficient cooling management—the key to lower PUE

The foundation is a cold aisle containment system. Two rows of server racks face each other, and the space between them is enclosed. Cold air is supplied from beneath the raised floor directly into this enclosed aisle. Servers draw in cold air from the cold aisle and exhaust hot air outside it. This way, cooling is delivered only where it is needed—not to the entire room.

Sealing is critical. Every gap between devices in a rack must be filled with blanking panels, and adjustable floor tiles can direct airflow precisely where it is needed. Cooling units are equipped with their own temperature and pressure sensors in the cold aisle, allowing them to automatically adjust airflow to the current load.
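
The control logic behind those sensors can be as simple as a proportional loop holding a slight overpressure in the contained aisle so servers never pull hot air back through gaps. A hypothetical sketch, with invented setpoints and gain:

```python
def fan_speed(current_pa: float, setpoint_pa: float,
              speed_pct: float, gain: float = 2.0) -> float:
    """One step of a proportional controller for a cooling-unit fan:
    keep cold-aisle overpressure at the setpoint. Tuning is illustrative."""
    error = setpoint_pa - current_pa            # positive -> aisle too "soft"
    new_speed = speed_pct + gain * error
    return max(20.0, min(100.0, new_speed))     # stay within safe fan range

# If aisle pressure sags from 5 Pa to 3 Pa under rising load, fans step up:
print(fan_speed(current_pa=3.0, setpoint_pa=5.0, speed_pct=60.0))  # 64.0
```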

The limits of air cooling

Classic air-cooling systems handle loads from a few kilowatts up to a few tens of kilowatts per rack quite well. Beyond roughly 20–30 kW per rack they begin to require aggressive airflow optimization, and at around 40–50 kW they become practically uneconomical: the required airflow is so high that air ceases to be an efficient cooling medium, and alternative cooling solutions become necessary.
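
The physics behind those thresholds is straightforward: for a fixed inlet-to-outlet temperature rise, the airflow needed grows linearly with rack power. A quick calculation (assuming a 10 K rise across the rack) shows the scale of the problem:

```python
AIR_DENSITY = 1.2   # kg/m^3
AIR_CP = 1005.0     # J/(kg*K)

def airflow_m3h(rack_kw: float, delta_t_k: float = 10.0) -> float:
    """Volumetric airflow needed to carry `rack_kw` of heat out of a rack
    at a given air temperature rise: Q = P / (rho * cp * dT)."""
    m3_per_s = (rack_kw * 1000.0) / (AIR_DENSITY * AIR_CP * delta_t_k)
    return m3_per_s * 3600.0

for kw in (5, 20, 50):
    print(f"{kw:>2} kW rack -> {airflow_m3h(kw):>6.0f} m^3/h")
# 5 kW -> ~1500 m^3/h;  20 kW -> ~6000 m^3/h;  50 kW -> ~14900 m^3/h
```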

This is where direct-to-chip liquid cooling comes into play, delivering coolant straight to the processors inside the servers. Even then, air cooling does not disappear entirely: other server components (such as power supplies) still require ventilation. It is estimated that between 10% and 30% of cooling must still be provided by air.
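
A one-line model of that split, using the 10–30% residual-air estimate above (the 80% liquid fraction is an assumption for illustration, not a measured value):

```python
def cooling_split(rack_kw: float, liquid_fraction: float = 0.8):
    """Split a rack's heat load between direct liquid cooling and the
    residual air cooling for power supplies, drives, NICs, and so on."""
    liquid_kw = rack_kw * liquid_fraction
    air_kw = rack_kw - liquid_kw
    return liquid_kw, air_kw

print(cooling_split(80.0))  # (64.0, 16.0): even at 80 kW, ~16 kW stays on air
```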

What to look for when choosing a Data Center

Focusing solely on the PUE indicator is a dead end. A low PUE may indicate excellent efficiency, but it may also hide compromises in safety or unreliable measurement methodologies.

It is worth asking a potential provider three questions. First: what exactly is included in total facility energy when calculating PUE? Second: does the facility use cold aisle containment and cool only dedicated zones, or the entire room? Third: how is cooling redundancy ensured—how many chillers are installed, how many refrigerant circuits are there, and what happens if one device or circuit fails?

Optimizing PUE makes sense—but not at the expense of reliability. The best Data Centers are those that maintain high energy efficiency while preserving the highest standards of safety and service availability. Ultimately, business continuity is what matters most.