Data Center Cooling Solutions: How to Support High-Density, Efficient, Future-Ready Facilities
- Corey Mullikin

- Apr 2
As computing demands grow, data center cooling has moved from a background utility to a strategic infrastructure decision. AI workloads, denser racks, and higher energy costs are pushing operators to rethink how they manage heat, improve uptime, and plan for future growth. The U.S. Department of Energy says U.S. data centers consumed approximately 176 TWh in 2023, or about 4.4% of total U.S. electricity use, and projects continued growth through 2028. DOE also notes that cooling can account for up to 40% of overall data center energy use, making cooling performance a major cost and sustainability factor.
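To put those figures in perspective, a quick back-of-the-envelope calculation shows the scale involved. This sketch simply applies the upper bound of DOE's 40% cooling share to the 2023 consumption figure cited above:

```python
# Rough scale check using the DOE figures cited in the text.
# Assumption: cooling at the upper bound of 40% of facility energy.
total_twh = 176          # U.S. data center consumption, 2023 (DOE)
cooling_fraction = 0.40  # upper-bound share attributed to cooling

cooling_twh = total_twh * cooling_fraction
print(f"Cooling energy at the 40% bound: {cooling_twh:.0f} TWh")  # ≈ 70 TWh
```

Even modest percentage improvements in cooling efficiency therefore translate into terawatt-hour-scale savings across the industry.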

For facility owners, engineers, and operators, the question is no longer whether cooling matters. The real question is: what type of data center cooling system best fits your facility, your rack density, and your long-term operating goals? As AI adoption increases, that question becomes even more urgent. Uptime Institute reports that AI training clusters are accelerating rack density growth and are driving wider adoption of cold plate and immersion cooling technologies, with some product roadmaps pointing toward 200 kW per rack and above within a few years.
At Thermal Equipment Sales, we help organizations evaluate data center cooling solutions that align with reliability, efficiency, scalability, and maintainability. Whether you are upgrading an existing room, planning a retrofit, or designing for higher-density loads, understanding your cooling options is the first step.
Why Data Center Cooling Matters More Than Ever
Traditional enterprise environments were often built around lower rack densities and conventional air movement strategies. But today’s infrastructure is changing. DOE’s updated best-practices guidance now spans everything from traditional air-cooled data centers to cutting-edge facilities with higher rack power densities and liquid cooling, reflecting how quickly the industry is evolving.
This shift is being driven by several converging factors:
- AI and high-performance computing are increasing server heat loads and rack densities.
- Energy efficiency is becoming more important as data center power use grows.
- Water use and sustainability are receiving more scrutiny in cooling system design. DOE highlights water usage effectiveness (WUE) as an important metric, and McKinsey notes rising concern around data center water consumption in stressed regions.
- Retrofit practicality matters, especially for existing facilities that need to support higher loads without full rebuilds. Uptime’s 2025 cooling survey found that ease of retrofit into existing infrastructure was the top factor operators consider when evaluating direct liquid cooling viability.
In short, data center cooling systems now sit at the intersection of uptime, operational cost, scalability, and sustainability.
Air Cooling vs. Liquid Cooling in the Data Center
When people search for data center cooling, they are usually trying to answer one core question: Should this facility use air cooling, liquid cooling, or a hybrid approach?
Air Cooling
Air cooling remains the most common strategy across the industry. Uptime Institute’s 2025 survey found that 75% of operators use perimeter air cooling, while close-coupled and fresh-air strategies remain common as well.
Air-cooled data center cooling systems typically include:
- CRAC or CRAH units
- Hot aisle/cold aisle layouts
- Containment systems
- Economization strategies where appropriate
- Variable-speed fans and controls
For lower- to moderate-density applications, well-designed data center HVAC systems can still perform effectively — especially when airflow management, containment, and controls are properly engineered. DOE’s design guidance continues to include strong best practices for improving air-cooled performance, even as facilities move toward higher densities.
Liquid Cooling
As rack densities rise, liquid cooling for data centers becomes more attractive because liquid can remove heat more efficiently at or near the source. ASHRAE has published extensive guidance on the emergence of liquid cooling, water-cooled servers, and the design considerations surrounding these systems.
Common liquid cooling options include:
- Direct-to-chip cold plate cooling
- Rear-door heat exchangers
- Immersion cooling
- Hybrid systems that combine liquid cooling with room-level air systems
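The efficiency argument for liquid is easy to quantify. As a rough sketch (using textbook fluid properties near room temperature, not figures from any of the sources above), compare how much heat a given volume of air versus water can carry away for the same temperature rise:

```python
# Heat carried per unit volume of coolant: q = rho * cp * dT
# Approximate textbook properties near 25 C (illustrative assumption).
rho_air, cp_air = 1.2, 1005        # kg/m^3, J/(kg*K)
rho_water, cp_water = 997, 4180    # kg/m^3, J/(kg*K)
dT = 10                            # K, same allowed temperature rise

q_air = rho_air * cp_air * dT      # joules per m^3 of air moved
q_water = rho_water * cp_water * dT

print(f"Air:   {q_air / 1e3:.1f} kJ per m^3")
print(f"Water: {q_water / 1e3:.0f} kJ per m^3")
print(f"Ratio: ~{q_water / q_air:.0f}x")
```

The ratio works out to roughly 3,000–3,500×, which is why liquid systems can capture extreme rack heat loads that would require impractical volumes of air.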
Uptime reports that the biggest driver for liquid cooling is still extreme heat output from dense racks and high-powered servers, while sustainability and energy efficiency are also important factors. At the same time, operators continue to cite barriers such as retrofit complexity, cost, maintenance concerns, and limited standardization.
That means the “best” cooling system is not universal. It depends on workload type, redundancy expectations, rack density, space constraints, and how much change your team can realistically support.
How to Choose the Right Data Center Cooling Solution
Choosing the right data center cooling solution starts with the facility’s actual operating profile — not just the trendiest technology.
1. Understand Rack Density and Heat Load
AI and high-performance workloads can quickly outgrow conventional air strategies. Uptime notes that current-generation AI systems can surpass 40 kW per rack, with some 2025 deployments exceeding 100 kW, and future leading-edge systems moving even higher.
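To see why such densities strain air strategies, a rough airflow estimate helps. The sketch below uses the standard sensible-heat rule of thumb for air at sea level (CFM ≈ 3.16 × watts / ΔT°F); the rack sizes are illustrative, chosen to match the figures in the text:

```python
# Required cooling airflow for a rack, via the common sensible-heat
# rule of thumb: CFM ~= 3.16 * watts / delta_T_F (air at sea level).
def required_cfm(rack_kw: float, delta_t_f: float = 20.0) -> float:
    """Approximate airflow (CFM) to remove rack_kw of heat at a
    delta_t_f temperature rise across the equipment."""
    return 3.16 * rack_kw * 1000 / delta_t_f

for kw in (10, 40, 100):  # illustrative densities
    print(f"{kw:>3} kW rack -> ~{required_cfm(kw):,.0f} CFM")
```

A 40 kW rack needs on the order of 6,000+ CFM at a 20°F rise, and a 100 kW rack several times that, which is where delivering and containing that much air becomes the limiting factor.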
2. Evaluate Existing Infrastructure
If you are retrofitting an operating facility, cooling system viability often depends on how readily new equipment integrates into the current environment. That retrofit challenge is one of the most important decision factors for operators considering direct liquid cooling.
3. Balance Efficiency and Resilience
DOE’s updated guidance emphasizes optimizing internal systems for efficiency and tracking performance through metrics such as PUE, ERE, WUE, and CUE. It also points to heat reuse and dry cooler strategies as ways to reduce water consumption and improve overall system performance.
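These metrics are simple ratios, so they are straightforward to compute from metered data. A minimal sketch of the two most common ones (the annual input figures below are hypothetical, for illustration only):

```python
# Standard data center efficiency ratios (Green Grid / DOE definitions):
#   PUE = total facility energy / IT equipment energy
#   WUE = site water usage (liters) / IT equipment energy (kWh)
def pue(total_kwh: float, it_kwh: float) -> float:
    return total_kwh / it_kwh

def wue(water_liters: float, it_kwh: float) -> float:
    return water_liters / it_kwh

# Hypothetical annual figures for a mid-size facility.
total, it, water = 15_000_000, 10_000_000, 18_000_000
print(f"PUE: {pue(total, it):.2f}")        # 1.50
print(f"WUE: {wue(water, it):.2f} L/kWh")  # 1.80
```

Tracking these ratios over time, rather than as one-off snapshots, is what makes them useful for spotting cooling inefficiency as loads change.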
4. Design for Future Growth
Cooling systems should be selected not only for today’s rack loads, but for where the facility is headed in the next several years. AI-driven growth is reshaping thermal demands, and facilities that plan only for current conditions may face premature upgrades.
5. Align With Industry Guidance
ASHRAE remains a foundational source for data center thermal management, including environmental guidance, testing standards, and energy standards such as ANSI/ASHRAE Standard 90.4-2025 for data centers.
Best Practices for Better Data Center Cooling Efficiency
No matter which cooling method you choose, several best practices consistently improve data center cooling efficiency:
- Improve airflow management with containment and proper air separation.
- Use controls and monitoring to track performance and adjust cooling to actual load.
- Benchmark with efficiency metrics like PUE and WUE.
- Plan for maintainability and redundancy based on the criticality of the workload. Uptime notes that resiliency expectations can differ significantly between AI training and mission-critical business IT.
- Consider hybrid or phased approaches when moving from traditional air cooling to higher-density support. DOE’s updated design guide reflects the need to support both conventional and liquid-cooled environments.
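As one example of matching cooling to actual load, the sketch below shows a simple proportional fan-speed adjustment driven by supply-air temperature. The setpoint, gain, and minimum speed are made-up values for illustration, not vendor guidance; real systems use tuned PID loops and manufacturer limits:

```python
# Minimal proportional control sketch: scale fan speed with the error
# between measured supply-air temperature and a setpoint.
# Setpoint, gain, and floor are illustrative assumptions.
SETPOINT_C = 24.0   # target supply-air temperature
GAIN = 0.08         # fan-speed fraction added per degree C of error

def fan_speed(measured_c: float, min_speed: float = 0.3) -> float:
    """Return fan speed as a 0..1 fraction, floored to keep airflow moving."""
    error = measured_c - SETPOINT_C
    speed = min_speed + GAIN * max(error, 0.0)
    return min(speed, 1.0)

for t in (23.0, 26.0, 32.0):
    print(f"{t:.0f} C -> fan at {fan_speed(t):.0%}")
```

The point of the sketch is the principle: fans run at a minimum when the room is at setpoint and ramp only as measured load demands, instead of running flat-out regardless of conditions.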
Partner With Thermal Equipment Sales for Data Center Cooling Support
As heat loads rise and cooling strategies become more specialized, organizations need more than equipment alone — they need a partner who understands how thermal performance affects uptime, efficiency, and future scalability.
Thermal Equipment Sales supports customers with practical, application-focused guidance for data center cooling systems, including:
- Cooling system evaluation for existing and new facilities
- Support for air-cooled, hybrid, and liquid cooling applications
- Equipment selection aligned with density, efficiency, and maintainability goals
- Insight into retrofit challenges, growth planning, and thermal system performance
If your team is evaluating data center cooling solutions, planning for AI data center cooling, or comparing air cooling vs. liquid cooling, Thermal Equipment Sales can help you move from uncertainty to a more informed cooling strategy.
Need help planning your next data center cooling project? Contact Thermal Equipment Sales to discuss system options, operating requirements, and the right path for your facility.
FAQ
What is the best cooling system for a data center?
The best cooling system depends on rack density, application type, redundancy requirements, and whether the project is a retrofit or new build. Air cooling remains common, but liquid cooling is gaining ground in high-density and AI environments.
When should a data center use liquid cooling?
Liquid cooling becomes increasingly attractive as rack densities rise beyond what conventional air systems can economically or practically support. Uptime notes that AI systems can exceed 40 kW per rack today, with some deployments above 100 kW.
How can a data center improve cooling efficiency?
Operators can improve efficiency through airflow management, containment, monitoring, optimized controls, proper equipment selection, and by tracking metrics such as PUE, WUE, ERE, and CUE.
Is air cooling still used in data centers?
Yes. Uptime’s 2025 cooling survey found that perimeter air cooling remains the most widely used data center cooling method among operators surveyed.