Overview of the 3rd Constructing Green Data Centers 2026

Innovatrix returned to the States with the 3rd Constructing Green Data Centers: Revolutionizing Planning, Design, and Engineering Summit, held in Las Vegas, USA, on February 10-11, 2026. The event brought together an incredible and diverse group of delegates, with key stakeholders from operations, design, engineering, IT management, energy efficiency, and security represented, to discuss the future of data centers and how emerging technologies are shaping their evolution.

Our 3rd Constructing Green Data Centers delegates discussed the latest innovations in modular building design, explored how the integration of 5G technology is enabling faster, more efficient data centers capable of handling increasing loads, and shared the latest advancements in building green, energy-efficient data centers while retrofitting existing structures to meet sustainability standards.

This article will provide a session recap for those who didn’t get the chance to attend #CGDSUSA and serve as a reminder for those who attended.

The torrid pace continues – State of the Market 2026

Sean Farney, Vice President, Data Center Strategy at JLL

Sean outlined a period of unprecedented expansion in the global data centre sector, driven by cloud adoption and the rapid acceleration of artificial intelligence.

He highlighted that nearly 100 GW of new capacity is expected to be added between 2026 and 2030, effectively doubling global capacity and fuelling a projected 14% CAGR. This growth is underpinned by a major infrastructure investment cycle, with up to $3 trillion required by 2030. The Americas are leading this expansion, although strong growth is also expected across EMEA and APAC.
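The headline figures are internally consistent: a quick sanity check (illustrative, not from the talk) shows that compounding at roughly 14% a year over the 2026-2030 window comes out close to a doubling of capacity.

```python
# Sanity check: does a ~14% CAGR over five years roughly double capacity?
cagr = 0.14
years = 5  # 2026 through 2030

growth_factor = (1 + cagr) ** years
print(f"Growth factor over {years} years: {growth_factor:.2f}x")  # ~1.93x, close to doubling
```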

AI is set to fundamentally reshape demand, with such workloads potentially accounting for half of all data centre activity by 2030. While training currently dominates, inference is expected to overtake by 2027, driving more geographically distributed deployments to reduce latency.

However, supply-side constraints remain significant. Grid connection delays—often exceeding four years—are pushing operators towards self-generation, power purchase agreements, and private energy solutions. Equipment lead times remain elevated, prompting early procurement and inventory strategies.

Rising demand and constrained supply are also driving up costs, with construction expenses increasing by around 7% annually. Speed-to-power was identified as a critical success factor, shaping both site selection and investment decisions.

Data Center Optimization and RCA at hyperscale

Qaisar Ali, Manager Field Engineering at AWS

Qaisar presented an advanced approach to Root Cause Analysis (RCA) in data centre operations, focusing on automation, scalability, and a shift from reactive to proactive management.

He defined RCA as the process of identifying the underlying cause of incidents to prevent recurrence, supported by structured, runbook-driven workflows that standardise responses and reduce resolution times. However, traditional RCA remains resource-intensive and difficult to scale across large data centre portfolios.

To address this, he introduced Automated Deep Dive (ADD), which uses high-resolution operational data to automatically analyse events, identify root causes, and generate detailed reports. While this significantly reduces manual effort, it is inherently reactive, responding only after incidents occur.

The introduction of Data Center Performance Metrics (DCPM) enables a proactive approach by establishing baseline performance thresholds and continuously monitoring systems for anomalies. When deviations are detected, ADD is automatically triggered, allowing issues to be investigated before impacting operations.
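The baseline-and-trigger pattern described here can be sketched in a few lines. This is a minimal, hypothetical illustration, not the DCPM or ADD implementation: the metric name, tolerance band, and trigger callback are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Baseline:
    """Illustrative performance baseline for one monitored metric."""
    metric: str
    mean: float
    tolerance: float  # allowed deviation before a reading counts as anomalous

def is_anomalous(baseline: Baseline, reading: float) -> bool:
    """Flag readings that fall outside the baseline threshold band."""
    return abs(reading - baseline.mean) > baseline.tolerance

def monitor(baselines, readings, trigger_deep_dive):
    """Compare current readings against baselines; when a deviation is
    detected, hand the metric off to an automated deep-dive analysis."""
    for b in baselines:
        value = readings.get(b.metric)
        if value is not None and is_anomalous(b, value):
            trigger_deep_dive(b.metric, value)

# Example: a supply-air temperature drifting above its baseline band
baselines = [Baseline("supply_air_temp_c", mean=24.0, tolerance=2.0)]
monitor(baselines, {"supply_air_temp_c": 27.5},
        lambda m, v: print(f"Deep dive triggered for {m}: {v}"))
```

The point of the pattern is the hand-off: monitoring stays cheap and continuous, while the expensive root-cause analysis runs only when a baseline is breached.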

AI integration further enhances this framework by automating data analysis, identifying patterns across historical and real-time datasets, assigning standardised RCA classifications, and initiating remediation workflows. This approach improves consistency, enables fleet-wide scalability, and reduces engineering effort, while relying heavily on high-quality, accurate data inputs.

Why Sustainability Is A Data Problem Disguised As An Energy Problem

Matt Muir, Technical Account Manager at Cupix

Matt Muir framed sustainability in data centres as fundamentally an efficiency challenge rather than purely an energy issue. He emphasised that waste originates in the earliest stages of a project, where decisions have the greatest influence on outcomes, yet are often made with incomplete or disconnected data. As projects progress, the cost of changes increases significantly while the ability to influence efficiency declines.

A key concern highlighted was the impact of fragmented data across design, construction, and operations. This disconnect leads to inefficiencies in time, materials, and energy use throughout the project lifecycle. Muir identified several “hidden carbon multipliers,” including over-designed models that drive unnecessary material use, late-stage design changes that require rework and additional labour, and conservative decision-making caused by uncertainty in site conditions—often resulting in overbuilding.

He argued that improving data continuity and visibility is critical to reducing these inefficiencies and associated embodied carbon. In this context, owners play a pivotal role, as they are uniquely positioned across the full lifecycle—from design through to operations—to ensure alignment, reduce ambiguity, and drive more informed, efficient decision-making.

Overall, the presentation underscored that better data integration and early-stage coordination are essential to achieving meaningful sustainability outcomes.

Advanced Nuclear at the Crossroads

Stewart Forbes, Counsel at Hogan Lovells

Stewart Forbes examined the growing intersection between energy infrastructure and data centre expansion, highlighting unprecedented increases in electricity demand driven by data centres, electrification, and the reshoring of manufacturing. He noted that supply constraints—including lengthy licensing processes, turbine shortages, transmission backlogs, tariffs, and regulatory uncertainty—are limiting the pace at which new capacity can be added.

Nuclear energy was presented as a key solution due to its high capacity factor and reliability compared to intermittent renewable sources. Advances in reactor performance over time, combined with new-generation technologies, position nuclear as a stable foundation for supporting continuous, high-load environments such as data centres.

He outlined significant government support for nuclear development, including funding for research, fuel supply chains, and small modular reactor (SMR) programmes, alongside regulatory reform efforts aimed at accelerating deployment. However, he stressed that demand is currently outpacing the ability of regulatory frameworks to respond, creating friction in project delivery.

The close link between energy infrastructure and AI-driven data centre growth was a central theme, with reliable, always-on power identified as essential. Investment activity reflects this shift, with reactor restarts and new developments aligned with major technology demand.

Looking ahead, key milestones between 2026 and 2027 are expected to advance nuclear deployment, supporting the synchronisation of energy supply with data centre expansion timelines.

Optimizing data center construction with generative scheduling technology

Andy Gabele, Director, Solutions Engineering at ALICE Technologies

The presentation highlighted the increasing complexity of sustainable data centre construction, driven by the need to balance speed, cost, and environmental impact in an era of AI-driven demand.

A key challenge is the intense pressure on delivery timelines. The rapid growth of AI workloads has created unprecedented urgency, where delays directly translate into lost revenue. At the same time, projects must meet strict environmental targets, including net-zero commitments, while minimising construction waste and inefficiencies. Traditional scheduling methods struggle to balance these competing priorities, often lacking the ability to assess the broader impact of planning decisions.

To address this, the speaker introduced generative scheduling as an advanced, data-driven approach to construction planning. This method builds a parametric model of the project schedule, constrained by milestones and resource availability, and uses simulation to generate optimised execution strategies. By testing “what-if” scenarios, it enables rapid evaluation of alternative approaches.

Generative scheduling allows teams to explore millions of potential scenarios in minutes, identifying faster, more cost-effective, and lower-risk execution paths. It also supports real-time adaptation during construction, helping manage projects as dynamic production systems.
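The "generate many scenarios, score each, keep the best" idea can be illustrated with a toy Monte Carlo search. Every number here is an assumption made up for the sketch (work content, labour rate, overtime premium, site overhead), and the scoring is deliberately naive; real generative scheduling engines model task networks, milestones, and resource constraints in far more detail.

```python
import random

TASK_HOURS = 4000  # hypothetical total work content for one scope of work

def simulate(crews: int, shift_hours: int) -> tuple[float, float]:
    """Return (duration in days, cost in dollars) for one candidate strategy."""
    rate = crews * shift_hours                              # work-hours per day
    duration = TASK_HOURS / rate
    labour_rate = 85 * (1.15 if shift_hours > 8 else 1.0)   # assumed overtime premium
    cost = TASK_HOURS * labour_rate + duration * 12_000     # plus daily site overhead
    return duration, cost

# Sample candidate execution strategies (crew count, shift length) and
# keep the one with the best naive combined duration+cost score.
random.seed(42)
candidates = [(random.randint(2, 12), random.choice([8, 10, 12]))
              for _ in range(1000)]
best = min(candidates, key=lambda c: sum(simulate(*c)))
print("Best strategy (crews, shift hours):", best)
```

Even this toy version shows the trade-off the talk described: longer shifts finish sooner but carry an overtime premium, and the search surfaces the balance point automatically instead of relying on a single hand-built schedule.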

Overall, this approach improves decision-making before and during execution, enabling projects to remain on schedule while optimising resource use and reducing environmental impact.

Is the race to AGI really the race to Sovereign Storage?

Chris Stott, Founder & CEO at Lonestar Data Holdings

Chris Stott presented a forward-looking perspective on the future of data storage, driven by the rapid growth of artificial general intelligence (AGI) and the resulting surge in data generation. He emphasised that as data volumes expand, so too does the need for secure storage aligned with increasingly strict data sovereignty requirements.

A central theme was the environmental impact of traditional data centres, which is becoming a growing global concern. In response, he argued that sustainability must be a core priority in future storage strategies, highlighting that long-term data protection is inseparable from environmental responsibility.

Stott introduced a hybrid vision for the future of data infrastructure, combining terrestrial data centres with space-based storage solutions. In this model, conventional “grey space” facilities continue to require significant power, cooling, and connectivity, while space-based systems operate with minimal infrastructure requirements, focusing primarily on payload and mission-critical hardware.

This approach positions space as a potential solution to some of the industry’s most pressing challenges, including energy consumption, environmental impact, and data sovereignty. By moving certain storage functions beyond Earth, the model aims to reduce reliance on resource-intensive ground-based infrastructure while enabling highly secure and resilient data storage.

Overall, the presentation suggested that the future of data centres may extend beyond terrestrial boundaries, with hybrid architectures redefining how and where data is stored.

Building AI-Ready, Water-Positive Data Centers

Alise Porto, SVP of Energy & Sustainability at Switch

Alise Porto explored how AI-driven power density is transforming data centre design, requiring fundamental changes to infrastructure, cooling, and energy systems. The emergence of advanced “superchips” is enabling significantly higher compute densities, driving the need for hybrid cooling configurations and scalable power distribution capable of supporting exponential increases in load.

She highlighted the transition from traditional air cooling to hybrid air and liquid-to-chip systems, which are essential for managing thermal loads in high-density AI environments. These liquid-based approaches improve both energy efficiency and water usage effectiveness (WUE), while enabling the deployment of larger, more powerful compute clusters supported by advanced networking.

Rob Roy’s role in evolving Switch’s infrastructure was noted, particularly in developing a portfolio that supports both hyperscale cloud and ultra-high-density AI workloads. A key differentiator is the organisation’s net-positive water strategy, underpinned by 100% renewable energy, net zero Scope 1 and 2 emissions, and industry-leading efficiency metrics.

Flagship initiatives include fully recycled water systems, elimination of groundwater use, and the ability to replenish more than twice the annual water consumed. With measurable reductions in water use and the capability for zero-water operations in new facilities, the approach strengthens long-term resilience.

The presentation concluded that sustainability performance is increasingly a competitive and financial advantage in scaling AI infrastructure.

Building for 1MW+ Rack Densities

Scott Charter, Director of AI Strategy at Oracle

Scott Charter outlined the three primary catalysts driving AI growth: data, compute, and algorithms. He emphasised that modern AI systems rely on vast datasets and increasingly large-scale compute clusters, noting that by 2025, deployments have reached clusters of approximately 130,000 GPUs within data centres.

This rapid scale-up is fundamentally changing infrastructure requirements. Rack densities are expected to rise dramatically, potentially reaching up to 1MW per rack by 2030, placing unprecedented demands on power delivery, cooling systems, and overall facility design.

Charter highlighted several critical considerations for supporting this growth. Supply chain constraints remain a key risk, particularly as demand for specialised hardware continues to outpace availability. Cooling strategies must also evolve, with liquid-to-chip solutions becoming essential for high-density environments. However, he stressed that air cooling will continue to play an important complementary role within hybrid configurations.

Water usage was identified as another crucial factor, as advanced cooling methods can significantly impact water consumption. Balancing performance, efficiency, and resource use will therefore be essential.

Overall, the session underscored that scaling AI infrastructure requires a holistic approach, integrating compute, cooling, supply chain, and sustainability considerations to support future demand.

Adopting Modular Design to Align Performance, Cost, and Environmental Goals

Dave Perez, Director of Advanced Technology and Mission Critical at Consigli Construction, and Eden Smalley, Senior Modular Design Engineer at Flexnode

Dave Perez and Eden Smalley outlined how next-generation data centres are being reshaped by the convergence of sustainability, industrialised construction, and AI-driven infrastructure demand.

They identified three core principles: sustainability is now synonymous with cost efficiency; modern manufacturing methods are transforming how facilities are delivered; and AI infrastructure represents a fundamentally different class of build, requiring new design and delivery approaches. Within this context, modular design was presented as a key enabler of faster, lower-cost, and lower-carbon construction, without compromising quality.

A central theme was that sustainable optimisation directly improves capital efficiency. Both embodied and operational carbon can be significantly reduced through smarter design choices, material selection, and more efficient operational models. Reusable components were highlighted as particularly valuable, retaining approximately 40–60% of their value and improving lifecycle economics while reducing environmental impact.

Local sourcing was also emphasised as a dual-benefit strategy, lowering transport-related emissions while reducing costs, thereby aligning economic and environmental objectives.

The speakers noted that modular construction enables distributed deployment models, allowing capacity to scale more flexibly and reducing the concentration of environmental and operational impacts typically associated with hyperscale facilities. This approach supports faster deployment of AI-ready infrastructure while maintaining sustainability and cost discipline.

Sustainable Concrete

Amanda Angelo, Technical Sales Representative at Sika Corporation

Amanda Angelo outlined the growing significance of embodied carbon in the built environment, noting that cement and steel production account for approximately 11% of global carbon emissions. Cement, in particular, was highlighted as a critical focus area for emissions reduction, with clinker responsible for around 85% of CO₂ emissions from concrete production.

She emphasised multiple strategies to reduce the global warming potential of concrete, starting with clinker reduction through blended cements such as limestone, pozzolans, and calcined clay, as well as supplementary cementitious materials including blast furnace slag, silica fume, and fly ash. These approaches significantly lower emissions while maintaining performance.

Additional efficiency gains can be achieved through the use of high-range water reducers, which reduce water consumption by up to 30%, improve workability, and lower labour costs. Recycled aggregates also play a key role, with fine aggregates capable of replacing 10–20% of material and coarse recycled aggregates potentially substituting up to 100% where performance criteria are met.

She highlighted overdesign as a major source of unnecessary emissions, noting that AI and machine learning can help optimise concrete strength calculations and reduce material waste. Fibre reinforcement was also presented as an alternative to steel, offering benefits in durability, labour reduction, crack control, and lifecycle performance.

The presentation concluded that low-carbon concrete adoption depends on early collaboration across the value chain and a shift towards performance-based design aligned with emerging standards such as LEED v5.

Connecting community

Patriece Thompson, Senior Leader, Director of Community & Citizenship at Turner Construction

The presentation, delivered by Patriece Thompson, focused on the importance of community engagement as a critical factor in the successful delivery of large-scale construction and data centre projects.

A key message was that community acceptance has become a fundamental requirement for project success, directly influencing schedule certainty, cost control, and overall risk. If developers do not actively shape the narrative around a project, local communities will form their own perceptions, which can often result in opposition and delays.

Thompson emphasised that meaningful change in perception is driven by actions rather than communications alone. Trust is built through transparency, consistent engagement, and the visible demonstration of local value. Human connection and openness were highlighted as essential elements in establishing long-term acceptance.

The presentation also outlined the operational benefits of strong community relationships. Effective engagement can reduce complaints and escalations, improve coordination around construction impacts such as traffic and noise, and increase community tolerance during disruption. It also provides early visibility of potential issues, enabling proactive mitigation.

Overall, the session concluded that community engagement should be treated as core infrastructure in project delivery. When embedded early and maintained consistently, it helps protect investment, stabilise schedules, and improve construction efficiency by reducing external friction and building lasting trust with local stakeholders.

Deploying High-Power Density Racks: Practical Challenges Beyond the Architecture Diagrams

Akshay Viradiya, Senior Staff Data Center Engineer at LinkedIn

Akshay Viradiya examined how rapidly increasing AI workloads are fundamentally reshaping data centre design, with power density emerging as the dominant constraint. He noted that rack densities have accelerated beyond traditional assumptions, with 100kW per rack now becoming the baseline for AI infrastructure deployments.

He outlined several practical challenges across planning, power, cooling, structural, and operational domains. From a planning perspective, constraints in grid availability and uncertainty in GPU technology evolution create significant risks of infrastructure under-design or premature obsolescence. These issues also complicate capital planning and increase financial exposure.

On the power side, dynamic GPU load behaviour and legacy electrical systems are major constraints, with many existing facilities undersized for modern requirements. The integration of battery energy storage systems introduces additional complexity but is increasingly necessary for load management and resilience.

Cooling was identified as a critical bottleneck, as traditional air cooling systems struggle beyond approximately 50kW per rack. While direct-to-chip liquid cooling offers a viable alternative, retrofit deployment is constrained by existing facility designs, material compatibility concerns, long equipment lead times, and immature vendor ecosystems.

Structural and operational challenges were also highlighted, including limitations in floor loading, incompatibility with legacy rack formats, reduced serviceability, and increased safety risks such as arc flash and liquid cooling operational maturity gaps.

Overall, the presentation emphasised that AI-scale infrastructure requires a full rethinking of traditional data centre design assumptions across all layers of the stack.

A Complete Case Study of Construction Technology Platform Adoption

Chenghao Peng, Senior Manager, IT at Equinix

Chenghao Peng discussed the role of digital transformation in large-scale construction portfolio management, focusing on the need for a consolidated construction management platform to support global operations.

A key objective was the creation of a single source of truth across all projects and portfolios. This would enable standardised processes across regions, improve decision-making, enhance operational efficiency, and reduce overall costs. Equally important was leadership transparency, with real-time visibility into project status, risks, costs, and decision-making processes, helping to reduce fragmentation and support faster, data-driven oversight globally.

The selection of a construction management platform was presented as a structured process. Key inputs included industry references from hyperscalers and clients, third-party consultants, competitive benchmarking, internal use cases, and gap analysis to identify process optimisation opportunities. Vendor evaluation focused on usability, scalability, and demonstrated value in real-world scenarios.

Implementation was framed as a staged change management journey, beginning with planning and stakeholder alignment, followed by configuration and validation, phased regional rollout, and ongoing measurement and optimisation. Training, governance models, and local change champions were highlighted as essential for successful adoption.

Key lessons learned included the importance of clearly defined project benchmarks to avoid subjective success measures, recognising regional data and behavioural differences, and understanding that standardisation must be flexible. The conclusion emphasised that global consistency is achieved through incremental alignment rather than rigid uniformity.

International Project Management in the field of Clean & Dry Rooms

Tobias Frieser, Business Development Europe & North America at Pophen

Tobias Frieser focused on the increasing importance of clean and dry room environments within data centre construction in China and Europe, highlighting their role as mission-critical infrastructure for next-generation facilities. These systems are essential for maintaining precision environmental control and protecting sensitive equipment, particularly as data centres scale in complexity and density.

He outlined key requirements for mission-critical infrastructure, including 24/7 continuous operation with zero tolerance for downtime, redundancy to ensure fault tolerance during maintenance or failure events, and seamless integration with HVAC, fire suppression, and security systems. Energy efficiency was also emphasised, with PUE targets increasingly aligned with sustainability and cost optimisation goals. Additionally, scalability was identified as essential, with modular designs enabling rapid expansion in response to growing computing demand.

A central theme was the comparison of regional project management models. The Chinese approach was described as speed-driven, leveraging vertical integration, decisive leadership, and manufacturing-linked execution to enable rapid deployment. The United States model prioritises performance, contractual precision, and rigorous testing, while the European model is characterised by compliance, documentation, and regulatory alignment.

The presentation concluded that clean and dry rooms are becoming essential infrastructure for advanced data centres. Success depends on combining integrated manufacturing capability and fast execution methodologies with adherence to regional standards and regulatory frameworks, enabling reliable delivery at hyperscale.

Beyond Siloed Intelligence: AI Orchestrating the Data Center as a Cyber-Physical System

Oliver Scott Palmer, Director – Industrial Controls Systems Engineering & OT Cybersecurity at Microsoft

The presentation, delivered by Oliver Scott Palmer, focused on the growing fragmentation of intelligence across data centre and energy infrastructure systems, and the need for coordinated optimisation as AI workloads reshape operational demands.

He outlined how key domains—IT workload scheduling, cooling systems, electrical infrastructure, and grid operations—currently operate in isolation, each with independent decision loops and limited shared awareness. This fragmentation leads to conservative operating buffers, stranded capacity, and inefficient use of both thermal and electrical resources.

A central theme was that AI workloads are fundamentally changing infrastructure physics. Increasing power density, higher load volatility, faster ramp rates, and tighter coupling with grid dynamics are turning thermal limits into effective capacity constraints. This shift requires a more integrated approach to system management.

Palmer proposed that AI itself could function as a coordination layer across these domains, enabling thermal-aware workload placement, improved utilisation of cooling systems, smoother power ramping, and reduced coincident peak demand. These capabilities could unlock additional capacity without physical expansion while reducing overall system inefficiency.

He also highlighted emerging stability challenges within power networks, including sub-synchronous oscillations caused by dynamic load interactions. As compute demand becomes more synchronised, these effects may increase, placing additional stress on infrastructure.

The session concluded that coordinated demand management is becoming essential for maintaining grid stability, improving operational efficiency, and enabling scalable integration between digital infrastructure and energy systems.

Future-proofing digital infrastructure for AI

Wellington Lordelo, Global Director, AI and Innovation at Digital Realty

Wellington Lordelo focused on the accelerating demand for AI infrastructure and the widening gap between legacy data centre capabilities and modern workload requirements. He highlighted that the market for AI infrastructure is expected to grow significantly, rising from approximately $163 billion in 2024 to $850 billion by 2029, reflecting a compound annual growth rate of around 39%.

A key message was that the cost of inaction is increasing rapidly as AI adoption exposes structural limitations in existing infrastructure. Traditional environments are often unable to support high-performance computing and AI workloads effectively, creating a need for modular architectures integrated with advanced cooling systems.

He also emphasised that hybrid IT environments are introducing greater network complexity, requiring globally resilient, secure, and scalable connectivity solutions. At the same time, growing data volumes and evolving regulatory frameworks are driving increased demand for in-country infrastructure to meet data sovereignty requirements.

To address these challenges, he presented a range of high-density colocation cooling solutions. These include rear-door heat exchanger systems supporting up to 70kW per cabinet, direct liquid cooling configurations also supporting up to 70kW per cabinet, and combined liquid and rear-door hybrid solutions enabling densities of up to 150kW per cabinet, depending on configuration.

The presentation also highlighted an innovation framework designed to support AI development and validation through high-performance infrastructure, engineering support, flexible scaling options, and advanced software-defined connectivity, enabling faster deployment and integration of AI workloads across distributed environments.

The Responsive Node: Orchestrating Grid Synthesis and Adaptive Resilience

Adnan Khan, Sr. Director of Offering Management at Honeywell

Adnan Khan focused on the evolving role of data centres in an environment defined by escalating AI-driven demand and increasing grid instability. He highlighted that traditional efficiency metrics are being overtaken by a more critical requirement: uptime and operational continuity, particularly as extreme weather events and high-density compute loads strain existing infrastructure.

A central issue presented was the “double crisis” between rising compute density and declining grid reliability. AI training workloads are pushing rack densities towards 100kW per rack, while simultaneous baseload retirements are reducing grid inertia. At the same time, interconnection delays of five years or more are creating a structural mismatch between demand and supply.

To address this, he introduced the concept of the “responsive node,” where data centres evolve into dynamic energy assets capable of monetising grid interaction. In this model, facilities operate bi-directionally, not only consuming power but also providing services back to the grid, effectively acting as virtual power plants. This includes fast frequency response capabilities that can inject power at sub-second timescales and potentially offset operational expenditure through grid services.

He also discussed carbon-aware computing strategies, including spatial and temporal workload shifting to align computation with cleaner and lower-cost energy availability. Additionally, structural resilience measures such as seismic decoupling were highlighted to ensure operational continuity under physical stress conditions.

The presentation concluded with the importance of predictive orchestration, combining weather data, energy pricing, and grid load forecasting to proactively manage cooling, charging, and workload distribution.

Evolution of Datacenter Designs

William Schaumann, PE, Leader, Global Mission Critical Electrical Engineering Practice at Burns & McDonnell

William Schaumann outlined how data centre demand is increasingly segmented by workload type, each with distinct infrastructure implications. He described five primary categories: enterprise environments hosting dedicated corporate applications; cloud environments supporting distributed applications across public, private, and hybrid models; traditional edge deployments concentrated in high-usage urban areas; emerging edge models extending connectivity to remote regions and IoT-heavy applications such as 5G and autonomous vehicles; and AI workloads focused on large language model training and inference.

A key distinction was made between AI training and inference, with inference increasingly becoming the dominant operational workload as model deployment scales.

He then addressed how energy efficiency and capacity planning are guided by Power Usage Effectiveness (PUE), defined as the ratio of total facility power to IT equipment power. For example, a facility consuming 10 MW of total power with 8 MW allocated to IT would have a PUE of 1.25. He noted that peak PUE is used for infrastructure sizing, while average PUE is more relevant for operational efficiency analysis, though it can underestimate peak utility requirements.
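The PUE definition and worked example above translate directly into a one-line calculation:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    A value of 1.0 would mean every watt goes to IT; real facilities are higher."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# The session's example: 10 MW total facility power, 8 MW allocated to IT
print(pue(10_000, 8_000))  # 1.25
```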

The presentation also covered evolving infrastructure design considerations, particularly in HVAC and cooling systems. These include raised floor and slab-on-grade layouts, hot and cold aisle containment strategies, in-row cooling systems, and advanced approaches such as liquid-to-chip and immersion cooling, which are becoming increasingly important for high-density AI environments.

Overall, the session emphasised the growing complexity of workload-specific design requirements and the need for adaptable, high-efficiency infrastructure to support next-generation computing demands.

Evaluating Cooling Options to Seamlessly Integrate Innovations into Your Future Facility Designs

Frank McCann, formerly Assoc. Director Network Construction at Verizon

Frank McCann examined the rapid escalation of rack power density and its implications for data centre design, highlighting a structural shift that is redefining cooling, power delivery, and facility architecture.

He outlined how legacy environments designed for 2–5kW per rack have already evolved to 8–12kW, with high-density deployments now routinely reaching 10–30kW. Ultra-high-density racks are emerging in the 30–100kW range, while AI-dedicated environments average over 60kW and can peak at 200kW. Future hardware roadmaps, including next-generation GPU systems, are expected to push individual rack requirements towards 250–300kW.

Cooling technology limitations were identified as a key constraint. Traditional air cooling is generally effective up to 10–20kW per rack, with optimised designs extending to around 25–40kW. Beyond this, liquid-based approaches become necessary, including direct-to-chip cooling (approximately 30–85kW capability) and immersion cooling, which can support even higher densities.
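As a rough illustration, the mapping below encodes the approximate thresholds cited above. The exact cut-off values and the `suggest_cooling` helper are simplifying assumptions, since in practice the ranges overlap and the right choice depends on facility design, climate, and hardware:

```python
def suggest_cooling(rack_kw):
    """Map rack power density (kW) to a likely cooling approach,
    using approximate, illustrative thresholds."""
    if rack_kw <= 20:
        return "traditional air cooling"
    if rack_kw <= 40:
        return "optimised air (containment / in-row / rear-door)"
    if rack_kw <= 85:
        return "direct-to-chip liquid cooling"
    return "immersion or hybrid liquid cooling"

for kw in (5, 30, 60, 200):
    print(f"{kw} kW/rack -> {suggest_cooling(kw)}")
```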

He emphasised that the cooling landscape is becoming increasingly complex due to diverse workloads, rising energy costs, sustainability pressures, and the need for scalability and reliability. As a result, operators must adopt a holistic cooling strategy that bridges IT and facilities disciplines.

A range of targeted solutions was discussed, including in-row cooling and rear-door heat exchangers, alongside hybrid and zoned designs. He also highlighted the importance of modular planning, detailed thermal modelling, and robust monitoring systems.

The presentation concluded that while air cooling remains relevant for lower-density environments, the industry is steadily shifting towards liquid-based and hybrid cooling systems to meet the demands of modern AI-driven infrastructure.

From Projects to Systems: Rethinking Sustainability in Data Center Development

Londo Farmer, Senior Development Project Manager at T5 Data Centers

Londo Farmer presented sustainability in data centre development as a long-term systems challenge rather than a project-based objective with fixed endpoints. He argued that data centres operate over 20–30 year lifecycles, meaning that decisions made during early phases compound over time, while sustainability efforts are often lost due to short-term execution pressures and fragmented ownership.

A key issue identified was the breakdown of sustainability continuity across project handoffs. Design teams are constrained by rapid delivery timelines and permitting requirements, procurement is driven by lead times and availability, construction is focused on execution under high demand, and operations inherit systems that must function as designed under real-world conditions. Commissioning then serves as the final validation stage, where system performance is tested against intent.

He emphasised that optimising individual components does not guarantee system-level performance, noting that common metrics such as PUE and WUE only provide partial visibility and can mask broader inefficiencies. Sustainability failures often emerge in areas not captured by standard measurement frameworks.

The presentation highlighted that every decision involves trade-offs between speed, cost, resilience, and energy performance, and that real-world operations inevitably diverge from design assumptions due to staffing, training, human factors, and system complexity.

He proposed a new execution lens focused on scalability, adaptability, long-term operational resilience, and the ability to withstand substitution and change. The conclusion stressed that sustainability is ultimately a governance and systems problem, requiring decision frameworks that remain robust under evolving energy markets, regulations, and demand patterns.

3rd Constructing Green Data Centers Summit Sponsors

The 3rd Constructing Green Data Centers: Revolutionizing Planning, Design, and Engineering Summit was supported by a wide range of sponsors who brought their teams to our exhibition hall, and Innovatrix would like to thank them again for their support.

BrightNight, Cupix, Alice Technologies, Pophen, Honeywell, Sika, Cooper Lighting, EAE USA, KALCON, BuilderLab and DroneDeploy.

If you want to attend our next summit serving the data center construction sector, join us for our next #CGDSUSA event this October. Discover the latest innovations in modular construction enabling more scalable and flexible data center infrastructure, meet with solution providers, and hear talks from industry leaders at the 6th Constructing Next-Gen Data Centers: Revolutionizing Planning, Design, and Engineering, taking place October 21-22, 2026, in Ashburn, Virginia, USA.

For more information on #CGDSUSA, visit our website or email us at info@innovatrix.eu for the event agenda. Visit our LinkedIn to stay up to date on our latest speaker announcements and event news.
