Data Center Cooling: 3 Popular System Designs

Posted by Ken Kaye on Nov 15, 2022

With the explosive growth of the digital economy over the last quarter century, the need for services like data storage, large-scale on-demand computing, web hosting, and others has exploded as well. In that time span, the data centers where these functions take place have evolved from a handful of on-premises server racks performing local operations to sprawling commercial structures housing thousands of servers.

And as technology advances, the task of managing the thermal loads generated by increasingly powerful servers becomes more challenging.

In this article, we’ll start with an overview, then look at some common cooling system designs for three segments of the data center market:

  1. Hyperscale data centers (owner-operator)
  2. Hyperscale data centers (colocation)
  3. Mid-size on-premises

Hyperscale Data Centers (Owner-Operator)

Overview and Priorities

In the data center industry, hyperscale data centers are the largest classification, with the term usually reserved for facilities of at least 10,000 square feet housing a minimum of 5,000 servers. Hyperscale data centers typically fall into one of two categories: owner-operator – also called enterprise – and colocation. The owner-operator variety, as its name suggests, is a facility that is owned and operated by its user.

These users are often large tech companies like Amazon, Google, Microsoft, and others – massive companies with immense processing and computing needs. Enterprise hyperscale data centers are controlled from inception to completion by the organization that will be using them, either directly or through trusted subcontractors.

Since these large tech companies provide so many of the critical, web-based services that we use every day, uptime is critical at the data center facilities where these operations – hundreds of petabytes’ worth per day[1] – take place. And perhaps the most critical defense against downtime is a properly designed cooling system. Such cooling systems are therefore meticulously planned, designed, and maintained, and the engineers charged with their design and maintenance are highly skilled, with a deep understanding of the thermal management of information technology equipment (ITE). Their designs account for everything from stratification – the uneven vertical distribution of heat in a space – to the water chemistry used in their equipment.

Prevalent Cooling System Design

An important distinguishing factor of owner-operator hyperscale data centers is present in the design phase. Large data providers will typically employ engineers from multiple disciplines to coordinate such projects, affording the organization complete control of this critical step.

Regardless of the size or layout of a data center, there are two main classifications of cooling system that can be employed:

  1. Air-cooled systems
     • CRAC units
     • CRAH units
  2. Liquid-cooled systems
     • CDU/rear door heat exchanger design
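
As a rough mental model of this taxonomy, here is a minimal Python sketch that maps each design covered in this post to its cooling medium and the level at which it typically removes heat. The names and granularity labels are our own illustrative shorthand, not industry-standard identifiers.

```python
from enum import Enum

class Medium(Enum):
    AIR = "air"
    LIQUID = "liquid"

# Shorthand map of the designs covered in this post to their cooling medium
# and the level(s) at which they typically remove heat. Granularity labels
# are illustrative only.
COOLING_DESIGNS = {
    "CRAC": {"medium": Medium.AIR, "granularity": ("room", "row", "rack")},
    "CRAH": {"medium": Medium.AIR, "granularity": ("room", "row", "rack")},
    "CDU + rear door heat exchanger (RDHX)": {
        "medium": Medium.LIQUID,
        "granularity": ("rack", "chip"),
    },
}

for name, info in COOLING_DESIGNS.items():
    levels = ", ".join(info["granularity"])
    print(f"{name}: {info['medium'].value}-cooled, typical granularity: {levels}")
```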

Air-Cooled Systems

Air-cooled data center cooling systems rely on computer room air handlers (CRAH) and computer room air conditioning (CRAC) units. The difference between the two is that CRAH units cool ITE using chilled water and a control valve, while CRAC units use a refrigerant and compressor design.

CRAC Units

In cooling systems featuring CRAC units, hot air carrying the thermal load from the server racks is forced across the refrigerant coil within the CRAC unit, which functions like standard compression/expansion air conditioning equipment. The heat is absorbed into the refrigerant and then either rejected to an airstream or transferred to a fluid like water or glycol.
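
To put rough numbers on the air side of this process, the sketch below applies the standard sensible-heat balance Q = ṁ·cp·ΔT to estimate how much air a cooling unit must move for a given load. The 10 kW rack load, 12 K temperature rise, and air properties are assumed values for illustration, not figures from this post.

```python
# Sensible heat balance: Q = m_dot * cp * delta_T, rearranged to estimate
# the airflow a CRAC unit must move to absorb a given thermal load.
RHO_AIR = 1.2      # kg/m^3, air density near sea level (assumed)
CP_AIR = 1.005     # kJ/(kg*K), specific heat of air (assumed)

def required_airflow_m3s(load_kw: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) needed to absorb load_kw with a delta_t_k rise."""
    mass_flow = load_kw / (CP_AIR * delta_t_k)   # kg/s
    return mass_flow / RHO_AIR                   # m^3/s

# Illustrative example: a 10 kW rack with a 12 K air temperature rise.
flow = required_airflow_m3s(load_kw=10.0, delta_t_k=12.0)
print(f"{flow:.2f} m^3/s (~{flow * 2119:.0f} CFM)")   # ~0.69 m^3/s, ~1460 CFM
```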

These systems may make use of a raised floor design, as seen in the picture below, in which the ITE is placed on an elevated floor, creating a cavity between it and the actual floor of the building. These raised floors can be vented near ITE or permeable throughout. This design can help compensate for stratification and generally contribute to more even distribution of conditioned air.

[Image: raised floor design, with ITE placed on an elevated floor above a conditioned-air cavity]

Cooling systems in enterprise hyperscale data centers are precise and targeted in their design, often operating at the row or rack level, which can be achieved with air cooling. Rack-level cooling systems typically have around one cooling unit for each rack of ITE, and row-level cooling schemes typically feature a CRAC for every row of racks.
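
As a back-of-the-envelope illustration of the difference between the two schemes, the Python sketch below counts the cooling units each approach implies for a hypothetical layout and notes the load a single row-level CRAC would have to carry. The row count, rack count, and per-rack load are assumptions, not figures from this post.

```python
def units_rack_level(num_racks: int) -> int:
    """Rack-level cooling: roughly one cooling unit per rack of ITE."""
    return num_racks

def units_row_level(num_rows: int) -> int:
    """Row-level cooling: roughly one CRAC per row of racks."""
    return num_rows

# Illustrative layout: 8 rows of 12 racks at ~8 kW per rack (assumed values).
rows, racks_per_row, load_per_rack_kw = 8, 12, 8.0

print("Rack-level units:", units_rack_level(rows * racks_per_row))   # 96
print("Row-level units: ", units_row_level(rows))                    # 8

# Sanity check: a row-level CRAC must carry its whole row's thermal load.
print(f"Load per row-level CRAC: ~{racks_per_row * load_per_rack_kw:.0f} kW")  # ~96 kW
```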

CRAH Units

Computer room air handlers function using chilled water and control valves, similar to cooling systems used in large commercial buildings. A chiller, also called a chilled water plant, supplies fluid coils within the CRAH unit with chilled water.

These chillers can be located either outside the data center facility or within it, but the function is the same in either configuration. Heat from the ITE is absorbed into the fluid, which is then pumped back to the chiller, where the heat is rejected and the cycle repeats.
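
The water side of a CRAH loop can be sized with the same kind of energy balance, this time applied to the chilled water. The sketch below is a rough estimate under assumed conditions; the 500 kW load, 6 K water temperature rise, and water properties are illustrative, not values from this post.

```python
CP_WATER = 4.186   # kJ/(kg*K), specific heat of water (assumed)
RHO_WATER = 998.0  # kg/m^3 near room temperature (assumed)

def chilled_water_flow_ls(load_kw: float, delta_t_k: float) -> float:
    """Chilled-water flow (L/s) needed to carry load_kw with a delta_t_k rise."""
    mass_flow = load_kw / (CP_WATER * delta_t_k)       # kg/s
    return mass_flow / RHO_WATER * 1000.0              # L/s

# Illustrative example: a 500 kW room with a 6 K rise across the CRAH coils.
flow = chilled_water_flow_ls(load_kw=500.0, delta_t_k=6.0)
print(f"{flow:.1f} L/s (~{flow * 15.85:.0f} GPM)")     # ~20 L/s, ~316 GPM
```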

Liquid-Cooled Systems

While air-cooled systems are effective for certain hyperscale data centers, for applications that require even more narrowly targeted thermal management, liquid-cooled systems can be used.

Unlike air-cooled systems, where cooling occurs at the server row or server rack level, liquid-cooled systems typically remove ITE thermal loads directly from the chips themselves – commonly referred to as chip-level cooling. One of the most prevalent liquid-cooled configurations we support at SRC combines a coolant distribution unit (CDU) with a rear door heat exchanger (RDHX).

CDU/Rear Door Heat Exchanger Design

This type of cooling system configuration provides the most targeted thermal management of the designs we’ve covered in this post. It features a coolant distribution unit and a rear door heat exchanger that work in tandem to remove heat at the rack level.

Heat from the ITE is absorbed into the rear door heat exchanger, which is a coil mounted in the door of the server cabinet as pictured below. That thermal load is then sent to a second fluid coil within the CDU, where it’s absorbed into a secondary chilled water loop, which is separated from the facility’s primary chilled water plant.

[Image: rear door heat exchanger coil mounted in the door of a server cabinet]

The isolated nature of this secondary loop affords the user excellent control over water quality, temperature, humidity, and more.
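
One way to reason about the two-loop arrangement is as a pair of coupled energy balances: the rack heat picked up by the RDHX coil must be carried by the secondary loop, and the CDU’s internal heat exchanger must hand that same heat off to the facility’s primary chilled water. The Python sketch below walks through that bookkeeping; the 30 kW rack load and the temperature rises on each loop are assumptions for illustration only.

```python
CP_WATER = 4.186  # kJ/(kg*K), specific heat of water (assumed)

def loop_mass_flow(load_kw: float, delta_t_k: float) -> float:
    """Mass flow (kg/s) a loop needs to carry load_kw with a delta_t_k rise."""
    return load_kw / (CP_WATER * delta_t_k)

# Illustrative rack: 30 kW removed by the rear door heat exchanger.
rack_load_kw = 30.0

# Secondary (CDU) loop: tighter temperature band, so a smaller delta T (assumed 5 K).
secondary_flow = loop_mass_flow(rack_load_kw, delta_t_k=5.0)

# Primary (facility chilled water) loop: the same heat crosses the CDU's internal
# heat exchanger, often with a larger delta T on the facility side (assumed 8 K).
primary_flow = loop_mass_flow(rack_load_kw, delta_t_k=8.0)

print(f"Secondary loop: ~{secondary_flow:.2f} kg/s")   # ~1.43 kg/s
print(f"Primary loop:   ~{primary_flow:.2f} kg/s")     # ~0.90 kg/s
```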

Hyperscale Data Centers (Colocation)

Overview and Priorities

Colocation data centers, or “colo” data centers, comprise those hyperscale data centers that aren’t owned and operated by a single, large organization. If owner-operator facilities are single-family houses, colocation data centers are something like an apartment complex.

Colo data centers are large facilities that serve multiple clients, sometimes numbering in the hundreds. However, they usually do feature an “anchor” client. For example, if a corporation is building a large office campus, a data center construction organization may build a colo facility nearby to support the corporation’s data needs.

Since colo facilities are often built speculatively, based on the anticipated data needs of an area, their configurations are quite different from those of their enterprise counterparts, where the data center’s user has complete control of the development process. Priorities in these sorts of facilities tend to be focused more on speedy construction and adequate cooling – it’s a “one size fits most” approach.

Prevalent Cooling System Design

Colocation data centers tend to feature less targeted cooling than would be found in enterprise facilities. Rather than row- or rack-level cooling, colos are likely to have room-level systems, where one or two CRAC units may be dedicated to several rows of server racks. The working principle is the same as row- or rack-level cooling; there are just fewer units.
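
For room-level schemes like this, the main sizing question is whether one or two CRAC units can cover the aggregate load of all the rows, ideally with a spare unit’s worth of headroom. The quick check below uses assumed rack counts, loads, and unit capacities purely for illustration.

```python
import math

def crac_units_needed(total_load_kw: float, unit_capacity_kw: float,
                      spare_units: int = 1) -> int:
    """Units needed to cover total_load_kw, plus optional N+1-style spares."""
    return math.ceil(total_load_kw / unit_capacity_kw) + spare_units

# Illustrative room: 6 rows of 10 racks at ~5 kW each, served by 150 kW CRAC units.
total_load_kw = 6 * 10 * 5.0
print(crac_units_needed(total_load_kw, unit_capacity_kw=150.0))  # 3 (2 duty + 1 spare)
```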

Mid-size On-Premises

Overview and Priorities

This type of data center rounds out the three segments that we support most often at SRC. Mid-size on-premises data centers are smaller than their hyperscale counterparts and are something of a hybrid between colo and enterprise data centers when it comes to priorities. Mid-size on-prem facilities typically serve a single client – a hospital campus or government office, for example – and uptime and cybersecurity are the chief priorities, rather than precision or hyper-efficiency.

Prevalent Cooling System Design

Again, the principles behind mid-size on-prem data centers are the same as the others we’ve covered. But, given the smaller size of these facilities, a hyper-targeted approach like those found in enterprise data centers isn’t necessary. Mid-size data centers can feature room-level cooling or even building-level cooling, where one or two CRAC or rooftop units manage the thermal load of the entire facility.

We should note that the systems we’ve described above are not the only methods of data center cooling. However, a large percentage of the world’s data is processed and stored in one of the three types of data centers we’ve discussed, and much of the equipment that does this important work is cooled by some type of system like the ones we touched on. Data center cooling remains a very fluid industry, though: as computing power continues to increase, cooling systems must evolve and improve along with it.

If you’re designing a data center cooling system and are looking for some help engineering your heat transfer components, give us a call. We’ve been designing heat exchangers for the data center industry for decades and our experience includes liquid-cooled systems, air-cooled systems, submerged cooling, and more.

Don’t get left out in the cold when it comes to heat transfer information. To stay up to date on a variety of topics on the subject, subscribe to The Super Blog, our technical blog, Doctor's Orders, and follow us on LinkedIn, Twitter, and YouTube.

[1] https://lewisdgavin.medium.com/googles-data-footprint-will-blow-your-mind-2237cf8e0d4