DataCool, the specialised cooling division of JohnsonMarCraft HVAC Products, just announced three new product lines built specifically for the AI infrastructure boom. The Alpine, Glacier and Kodiak platforms are engineered for thermal environments that barely existed commercially five years ago — and they represent one of the clearest signals yet that traditional HVAC manufacturers are positioning aggressively for the data centre opportunity.

The announcement, released April 16, 2026, lands at a moment when the data centre cooling market is growing at 17% annually and hyperscalers are on track to spend more than $593 billion in capital expenditure in 2026 alone. For HVAC manufacturers with the engineering capability to serve this market, the timing is not a coincidence.

What DataCool Is Launching and Why It Matters

The three new product lines each target a distinct segment of the data centre cooling challenge:

• Alpine Series: Designed for air-side economisation in moderate-climate data centre environments. The Alpine line uses advanced airflow management and variable-speed technology to maximise the hours per year that a data centre can use outside air for cooling — reducing mechanical cooling energy consumption and lowering PUE (Power Usage Effectiveness).

• Glacier Series: A direct expansion and chilled water cooling platform optimised for high-density rack deployments. As AI GPU clusters push rack densities beyond 30kW per rack — compared to the 5 to 10kW per rack typical of traditional IT infrastructure — the Glacier series is designed to maintain temperature stability under loads that standard computer room air conditioning units cannot manage.

• Kodiak Series: A modular liquid cooling distribution infrastructure product designed to deliver coolant to the rack level for direct-to-chip and immersion cooling applications. The Kodiak series interfaces with server-level liquid cooling hardware from major compute vendors.
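
The PUE improvement the Alpine series targets can be made concrete with a short sketch. The facility figures below are hypothetical, chosen only to show how cutting mechanical cooling energy moves the ratio:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# A PUE of 1.0 would mean zero overhead; typical air-cooled facilities run higher.
# All figures below are hypothetical, for illustration only.

def pue(it_load_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    """Total facility power divided by IT equipment power."""
    return (it_load_kw + cooling_kw + other_overhead_kw) / it_load_kw

it_load = 10_000.0  # 10 MW of IT equipment

# Mechanical cooling running year-round:
print(round(pue(it_load, cooling_kw=4_000.0, other_overhead_kw=800.0), 2))  # 1.48

# Air-side economisation covering much of the year cuts cooling energy:
print(round(pue(it_load, cooling_kw=1_500.0, other_overhead_kw=800.0), 2))  # 1.23
```

Every megawatt-hour of cooling energy avoided moves the facility closer to the 1.0 floor, which is why economiser hours translate directly into operating cost.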

DataCool, a division of JohnsonMarCraft HVAC Products, launched three new product lines on April 16, 2026 — the Alpine, Glacier, and Kodiak series — targeting air-side economisation, high-density rack cooling, and liquid cooling distribution for AI data centre infrastructure.

The AI Data Center Cooling Problem Explained

To understand why product launches like DataCool's matter, you need to understand the scale of the thermal challenge that AI computing creates. A modern AI training cluster — the kind used by hyperscalers to train large language models — can draw 10 to 50 megawatts of power in a single facility. Every watt of power consumed by computing hardware eventually becomes heat that must be removed from the facility.

Traditional data centre cooling infrastructure, designed for IT loads of 5 to 10kW per rack, cannot manage the 30 to 100kW per rack loads that AI GPU clusters generate without either massive overcapacity of conventional cooling equipment or a fundamental shift to liquid cooling technologies that can remove heat closer to the source.
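
The scale gap described above can be expressed in conventional cooling units using standard conversion factors (1 kW ≈ 3,412 BTU/hr; 1 ton of refrigeration ≈ 3.517 kW). The rack counts and densities in this sketch are illustrative:

```python
# Every watt drawn by IT hardware must ultimately be rejected as heat.
# Standard conversions: 1 kW = 3412 BTU/hr; 1 ton of refrigeration = 3.517 kW.

BTU_HR_PER_KW = 3412.0
KW_PER_TON = 3.517

def cooling_load(rack_kw: float, racks: int) -> tuple[float, float]:
    """Return (BTU/hr, tons of refrigeration) for a row of racks."""
    total_kw = rack_kw * racks
    return total_kw * BTU_HR_PER_KW, total_kw / KW_PER_TON

# A 20-rack row at traditional vs AI densities (illustrative figures):
btu_legacy, tons_legacy = cooling_load(rack_kw=8.0, racks=20)   # 160 kW row
btu_ai, tons_ai = cooling_load(rack_kw=80.0, racks=20)          # 1.6 MW row

print(f"legacy row: {tons_legacy:.0f} tons")  # ~45 tons
print(f"AI row:     {tons_ai:.0f} tons")      # ~455 tons
```

A tenfold jump in rack density means a tenfold jump in required cooling tonnage for the same floor space, which is exactly the overcapacity problem the article describes.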

The industry consensus is increasingly clear: liquid cooling — in various forms, from rear-door heat exchangers to direct-to-chip cooling to full immersion — is the technology direction for AI infrastructure. The HVAC companies that develop credible products in these categories are positioning themselves for a market that Bloomberg projects will reach $45.8 billion by 2033.

How the Alpine, Glacier and Kodiak Lines Work

Each platform in the DataCool launch addresses a distinct point in the cooling infrastructure chain:

The Alpine series operates at the building envelope level — managing how outside air is conditioned, filtered, and distributed through the data centre white space. In climates where outside air temperatures are suitable for extended periods, air-side economisation can dramatically reduce mechanical cooling costs. The Alpine series is engineered to maximise those hours while managing the humidity, particulate, and temperature stability requirements of sensitive IT equipment.
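
A first-pass estimate of economiser potential simply counts the hours when outside air is cool enough to supply directly. The temperature threshold and readings below are assumptions; a real assessment would also screen humidity and particulates, as noted above:

```python
# Estimate "free cooling" hours from hourly outdoor dry-bulb temperatures.
# The 22 C supply threshold and the 8 sample readings are assumptions for
# illustration; a real study would use a full year of weather data.

def economiser_hours(hourly_temps_c: list[float], max_supply_c: float = 22.0) -> int:
    """Count hours where outside air is cool enough to use directly."""
    return sum(1 for t in hourly_temps_c if t <= max_supply_c)

sample_day = [12.0, 11.5, 14.0, 18.0, 23.5, 26.0, 21.0, 16.0]  # hypothetical readings
print(economiser_hours(sample_day))  # 6 of the 8 sampled hours qualify
```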

The Glacier series operates at the room level — providing the precision cooling capacity needed to stabilise temperatures in high-density AI deployment zones. Its variable-capacity refrigerant circuits and advanced airflow management are designed to respond to the rapid load changes that characterise AI training workloads, where GPU utilisation — and therefore heat output — can shift from 20% to 100% of capacity within seconds.
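
A toy model makes the load-tracking problem visible: if heat output follows GPU utilisation, a unit sized for the average load falls behind the moment utilisation steps to 100%. The workload trace and capacities here are hypothetical:

```python
# Toy illustration of why fixed-capacity cooling struggles with AI workloads:
# heat output tracks GPU utilisation, which can step from 20% to 100% in
# seconds. The utilisation trace and capacities below are hypothetical.

RACK_PEAK_KW = 80.0
utilisation = [0.2, 0.2, 1.0, 1.0, 1.0, 0.3]       # per-interval GPU utilisation
heat_kw = [u * RACK_PEAK_KW for u in utilisation]  # heat to remove each interval

fixed_capacity_kw = 40.0  # unit sized near the average load
shortfall = [max(0.0, q - fixed_capacity_kw) for q in heat_kw]
print(shortfall)  # [0.0, 0.0, 40.0, 40.0, 40.0, 0.0] -- unremoved heat at peak
```

Variable-capacity refrigerant circuits exist to close exactly that shortfall, modulating output to follow the load rather than sizing for a fixed point.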

The Kodiak series operates at the rack level — distributing chilled water or glycol solution to rack-mounted liquid cooling manifolds that then deliver coolant to individual server components. This is the enabling infrastructure for the direct-to-chip cooling approach that major server manufacturers including Intel, AMD, and Nvidia have been investing in as the standard for next-generation AI compute hardware.
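
The flow rates such distribution infrastructure must deliver follow from the basic heat-transfer relation Q = m·cp·ΔT. The rack load and the temperature rise across the rack manifold in this sketch are assumptions:

```python
# Required coolant flow for a given heat load: Q = m_dot * cp * delta_T.
# cp for water is ~4.186 kJ/(kg*K); the 80 kW rack load and the 10 K
# temperature rise across the rack are illustrative assumptions.

CP_WATER = 4.186  # kJ/(kg*K)

def flow_l_per_min(heat_kw: float, delta_t_k: float, cp: float = CP_WATER) -> float:
    """Mass flow (kg/s, ~ L/s for water) converted to litres per minute."""
    kg_per_s = heat_kw / (cp * delta_t_k)
    return kg_per_s * 60.0

print(round(flow_l_per_min(heat_kw=80.0, delta_t_k=10.0), 1))  # ~114.7 L/min
```

Glycol mixtures have a lower specific heat than water, so the same load at the same temperature rise needs proportionally more flow.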

What This Means for the Broader HVAC Industry

DataCool's product launch is significant not just for what it reveals about JohnsonMarCraft's strategy, but for what it illustrates about the direction of the broader HVAC industry. Traditional HVAC manufacturers — whose business has historically been defined by residential and commercial building climate control — are facing a new and enormous adjacent market in data centre thermal management.

The companies that move first with credible, technically differentiated products in this space gain relationships with hyperscalers, colocation operators, and the mechanical engineering firms that specify cooling infrastructure for data centre projects. Those relationships compound over time — a data centre operator who trusts a cooling vendor for their first AI-optimised facility is likely to specify the same vendor for subsequent projects.

For HVAC contractors, the DataCool launch is a signal that the product ecosystem for data centre work is expanding. Contractors with commercial HVAC capability who are evaluating whether to pursue data centre work will find more qualified manufacturers and products available to them in 2026 than in any prior year.

Frequently Asked Questions

What is DataCool's AI data center cooling platform?

DataCool, a division of JohnsonMarCraft HVAC Products, launched three product lines in April 2026: the Alpine series for air-side economisation, the Glacier series for high-density rack cooling, and the Kodiak series for liquid cooling distribution to the rack level for AI data centre applications.

Why do AI data centers need specialised cooling?

AI GPU clusters generate heat densities of 30 to 100 kilowatts per rack — compared to 5 to 10kW per rack for traditional IT infrastructure. Conventional air cooling cannot manage these loads efficiently. Liquid cooling and advanced airflow management solutions are required to maintain thermal stability in AI computing environments.

What is the data center cooling market size?

The global data centre cooling market was valued at approximately $19.5 billion in 2025 and is projected to reach $45.8 billion by 2033, an implied compound annual growth rate of roughly 11%, driven by the rapid expansion of AI infrastructure investment by hyperscalers and colocation operators.
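
The growth rate implied by those two endpoints can be checked directly:

```python
# Implied CAGR from the market figures quoted above:
# $19.5B in 2025 growing to $45.8B by 2033 (8 years of growth).

def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

print(f"{cagr(19.5, 45.8, 8):.1%}")  # 11.3%
```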

How can HVAC contractors get into data center cooling work?

Commercial HVAC contractors with experience in large-tonnage chilled water systems, precision cooling, and mechanical commissioning are well-positioned to pursue data centre work. Pursuing manufacturer certifications for data centre cooling products, connecting with mechanical engineering firms that specify data centre infrastructure, and developing relationships with local colocation operators are effective entry points.

What is direct-to-chip cooling in HVAC?

Direct-to-chip cooling delivers liquid coolant directly to the heat-generating components of server processors and GPUs, rather than cooling the ambient air around them. This approach is significantly more efficient at high heat densities and is emerging as the standard for next-generation AI computing hardware.