Towards a Data Infrastructure Strategy: Data Centres and High Performance Computing
Dr John H Howard, 22 December 2025

Australia’s innovation prospects hinge on distinguishing generic data centres from high-performance computing (HPC) in policy, funding and narrative terms. Data centres underpin digital services, while tightly coupled HPC systems enable frontier AI, climate modelling, defence and advanced industry, with different architectures, risks and governance needs.
Treating them as one “compute” category misdirects investment towards cloud-style capacity, blurs questions of sovereignty and security, and blunts environmental and skills policy. The paper calls for explicit “AI-ready HPC” and “AI-capable cloud” tracks in national strategy, and for framing HPC as strategic industrial infrastructure rather than niche research equipment.
Australia’s next wave of innovation may be shaped by how clearly policy distinguishes between generic data centres and high-performance computing (HPC) infrastructure. Treating them as one amorphous “data infrastructure” category risks underbuilding the capabilities that actually drive frontier science, AI and advanced industry.
Different engines, different purposes
Data centres are the backbone of digital service delivery: they host government platforms, cloud services, payments, e‑commerce and everyday business systems. Their policy logic is about reliability, security, privacy, competition, and cost‑effective scalability for millions of low‑ to medium‑intensity workloads.
HPC systems, by contrast, are scientific and industrial instruments: tightly coupled machines built to run a small number of extremely demanding simulations and AI workloads in climate, energy, defence, health and advanced manufacturing. They require very different architectures, interconnects, software stacks and support teams.
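To make the architectural difference concrete, the sketch below shows the tightly coupled communication pattern that defines most HPC workloads. It is a minimal illustration, assuming a standard MPI environment with the mpi4py and NumPy packages (chosen here only for brevity), not a description of any particular facility’s software stack.

```python
# Minimal sketch of a tightly coupled HPC pattern (illustrative only).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank (process) holds a local slice of a simulation state or gradient.
local = np.random.rand(1_000_000)

# A collective all-reduce: no rank can proceed until every rank has
# contributed, so inter-node latency and bandwidth pace the whole job.
global_sum = np.empty_like(local)
comm.Allreduce(local, global_sum, op=MPI.SUM)

if rank == 0:
    print(f"Reduced one million values across {comm.Get_size()} ranks")
```

Launched under an MPI runner (for example, mpirun -n 8 python sketch.py), this runs as one tightly synchronised job spanning many nodes. A typical cloud workload would instead run each task independently, which is precisely why the two classes of infrastructure are engineered so differently.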
Treating both under one heading obscures the fact that only HPC facilities can run, for example, the exascale-class models or national-scale AI training runs that are central to technological sovereignty.
Investment and industrial strategy
Australia’s national research infrastructure roadmaps already hint at this distinction, but policy language often drifts back into generic “data infrastructure” framing. When data centres and HPC systems are bundled together, capital allocation tends to favour incremental cloud-like capacity that is easier to explain to budget processes and vendors, at the expense of lumpy, long-horizon HPC investments.
That is a problem because leadership‑class HPC is a prerequisite for serious participation in areas such as next‑generation AI, new materials discovery, and high‑resolution climate modelling, all of which underpin competitive advantage in energy, agriculture, resources, and defence.
Clear categorisation will allow governments to justify different funding models: market‑oriented approaches for commercial and government data centres versus sustained, programmatic investment in Tier‑1 and Tier‑2 supercomputing facilities as strategic national assets.
Sovereignty, security and alliances
Lumping commercial cloud data centres and national HPC into one policy bucket could also blur critical questions of sovereignty and security.
Data‑centre policy is rightly preoccupied with where citizen data sits, which providers operate “critical infrastructure”, and how to regulate hyperscale cloud in terms of localisation and incident response. HPC policy has a different centre of gravity: control over capability, dependence on foreign vendors for accelerators and interconnects, export controls, and participation in international compute collaborations.
A leadership-class supercomputer used for defence modelling or advanced cyber capabilities cannot sensibly be governed under the same access and transparency settings as a generic government cloud platform. Making the distinction explicit in policy gives room for differentiated security baselines and international arrangements: one set for commercial cloud and open science compute, another for classified or strategically sensitive HPC.
Environmental and infrastructure planning
Data centres and HPC facilities both stress power, cooling and sometimes water, but in different patterns, and that matters for planning and regulation. Hyperscale data centres can drive up local electricity demand and, in some Australian contexts, compete for limited water resources, forcing governments to grapple with zoning, efficiency standards and grid integration at the urban edge.
HPC facilities concentrate extreme power density on research campuses or in co‑located industrial precincts and often run at very high utilisation, making them prime candidates for experimental cooling technologies, waste‑heat reuse and direct integration with renewables. Without a policy distinction, there is a danger of either over‑regulating research HPC as if it were just another commercial cloud build, or under‑regulating the cumulative environmental impact of generic data centre expansion. A nuanced taxonomy allows targeted decarbonisation levers for each.
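As a rough illustration of why utilisation patterns matter for planning, the sketch below compares annual energy use (and therefore rejected heat) for a hypothetical 20 MW HPC facility running near flat-out against a hypothetical 20 MW commercial data hall with a more variable load. Every input is an assumption chosen only to show the shape of the calculation, not a figure for any real site.

```python
# Illustrative arithmetic only: all loads, utilisation rates and PUE values
# below are assumptions, not measurements of any actual facility.

HOURS_PER_YEAR = 8760

def annual_energy_gwh(it_load_mw: float, utilisation: float, pue: float) -> float:
    """Total facility energy per year in GWh, given an assumed IT load,
    average utilisation and power usage effectiveness (PUE)."""
    return it_load_mw * utilisation * pue * HOURS_PER_YEAR / 1000

hpc = annual_energy_gwh(it_load_mw=20, utilisation=0.90, pue=1.1)
cloud = annual_energy_gwh(it_load_mw=20, utilisation=0.45, pue=1.4)

print(f"Hypothetical HPC facility:    ~{hpc:.0f} GWh/year at one dense site")
print(f"Hypothetical commercial hall: ~{cloud:.0f} GWh/year, more variable load")
```

On these assumed figures, the HPC facility rejects well over 150 GWh of heat a year from a single, steadily loaded site, which is what makes waste-heat reuse and direct renewable integration practical there in a way they rarely are for a lightly and unevenly loaded data hall.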
Capability, access and skills
Finally, the people and capability systems around these infrastructures diverge.
Access to commercial cloud and data‑centre services is largely a procurement and market‑design problem: are Australian firms and agencies getting secure, competitively priced, modern services?
Access to HPC is a capability‑building and allocation problem: are researchers, startups and scale‑ups able to secure time on leadership‑class machines, and do they have the skills to use them effectively?
National and international reports increasingly stress that HPC and advanced data infrastructure are prerequisites for excellence in AI-enabled science and industry, not optional add-ons. Policies that name HPC separately from generic data centres can support dedicated training pipelines, allocation schemes, and co-design programs with industry, so that compute does not sit idle while capability gaps widen.
For an innovation system trying to move up the value chain, that distinction is not a technical nicety; it is a structural requirement for staying in the game.
Where current AI policy blurs things
National and international AI strategies usually talk about “AI compute capacity” and “AI data centres” in the same breath, emphasising aggregate accelerator clusters in data centres without distinguishing whether they are tightly coupled HPC, general cloud, or something in between. This framing makes sense for high‑level indicators, but it encourages a view that more GPUs in any data hall equals progress, regardless of whether the architecture actually supports frontier‑scale training, high‑end science, or only inference‑heavy enterprise workloads.
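One way to see why raw GPU counts can mislead is a back-of-envelope look at how long it takes simply to move one set of gradients between nodes at each training step. The model size, gradient precision and bandwidth figures below are illustrative assumptions, not measurements, and the naive formula ignores efficient all-reduce algorithms and the overlap of communication with compute; it is meant only to show how quickly the interconnect becomes the binding constraint.

```python
# Back-of-envelope sketch: naive time to move one set of gradients between
# nodes for a single training step. All figures are assumptions; real
# systems overlap communication with compute and use smarter collectives.

PARAMS = 70e9           # assumed model size, in parameters
BYTES_PER_PARAM = 2     # assumed fp16 gradients

interconnects_gbps = {
    "HPC-class fabric (assumed 400 Gb/s effective)": 400,
    "Typical cloud networking (assumed 100 Gb/s effective)": 100,
    "Commodity networking (assumed 25 Gb/s effective)": 25,
}

grad_bits = PARAMS * BYTES_PER_PARAM * 8

for name, gbps in interconnects_gbps.items():
    seconds = grad_bits / (gbps * 1e9)  # naive point-to-point transfer time
    print(f"{name}: ~{seconds:.0f} s per gradient exchange")
```

On the slowest assumption, every training step waits tens of seconds for the network alone, which is why counting accelerators without counting interconnect bandwidth says little about frontier-scale capability.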
Compute‑governance work on AI safety has started to focus on “industrial‑scale compute” and chip registries, although even there the unit of analysis is often chips and aggregate FLOPs, not the institutional and architectural distinction between research supercomputers and commercial data centres.
In Australia, flagship announcements around AI investment tend to emphasise hyperscale cloud and sovereign data‑centre builds, with only scattered references to AI‑optimised supercomputing networks or national HPC strategies.
When AI policy stays at the level of undifferentiated “compute”, three things follow.
Funding and incentives gravitate toward cloud-style data centres because they are more appealing to investors, easier to announce, and directly connected to enterprise AI services. Long-horizon HPC capability, by contrast, is left looking like a specialist science concern rather than a pillar of AI competitiveness.
Safety and security discussions target chip and model providers, but overlook that governance constraints appropriate for open cloud environments do not map neatly onto national‑security or defence‑oriented AI running on sovereign HPC systems.
Measurement efforts count accelerators and public‑cloud capacity but do not distinguish between compute that can be flexibly rented for small‑scale AI experiments and compute that is organised as a strategic, allocation‑based resource for large‑scale training and national missions.
For a country like Australia, this matters because the high-end, tightly coupled segment is where the leverage lies for climate modelling, critical infrastructure digital twins, and training larger home-grown models without exporting data or deepening strategic dependency.
The popular discourse on data centres also has a strong economic-development flavour, particularly around real estate and property development, with attention focused on metrics such as inward investment attracted and construction jobs created during the build.
How to sharpen the discourse
From an innovation‑policy perspective, one of the most valuable contributions is to plug this distinction explicitly into the AI policy conversation.
In strategy: insist that AI strategies and national compute plans carry separate lines of sight for “AI‑ready HPC” and “AI‑capable cloud/data centres”, with different roles, metrics and risk profiles.
In governance: argue that compute‑governance tools (registries, caps, export controls) should treat research/HPC environments differently from commercial data‑centre environments, while still recognising that AI factories may be physically co‑located with data centres.
In investment and alliances: link sovereign AI rhetoric to concrete commitments around AI‑optimised supercomputing networks, not just to attracting hyperscale data‑centre investment.
That sort of framing lets you say, credibly: “AI needs both data‑centre capacity and HPC capacity, but they solve different problems, create different risks, and demand different policy tools.”
Is there a political economy of the data-centre industry and the research/HPC community?
Some academic policy analysts might argue that the global data‑centre and cloud industry has far more commercial weight, lobbying capacity and narrative discipline than the research/HPC community, so its categories and language dominate the AI infrastructure debate. The HPC research lobby does have influence, but it is fragmented, institutionally weaker, and tends to speak in technical rather than industrial terms, which makes its distinctions easier to gloss over.
The data‑centre narrative
Hyperscale cloud and colocation firms are now framed as critical enablers of AI, fintech, e‑commerce and government digital services, which gives them strong, broad‑based alliances with business councils, tech associations and parts of government. This coalition pushes a coherent story about “AI data centres”, investment pipelines, jobs and regional development that naturally centres their infrastructure categories.
Industry bodies for data centres and cloud providers are increasingly organised around AI infrastructure strategy, producing reports, sponsoring events and engaging directly with policymakers to shape definitions of “compute” and “sovereign AI”. Their language tends to treat GPUs in data halls as the canonical unit of AI capability, with little incentive to foreground the architectural and governance differences of HPC.
The research/HPC voice
HPC tends to sit inside research agencies, universities and specialist institutes and centres, with influence routed through grant processes and consultation exercises rather than permanent, well‑resourced advocacy machines. These actors produce detailed submissions to infrastructure roadmaps, but their messages compete with many other research priorities and rarely set the high‑level AI narrative.
The research community also frames HPC as “research infrastructure” or “e‑infrastructure”, not as a mainstream industrial platform, which makes it harder to link directly into business‑led AI growth stories and into the wider digital‑industry lobby. That reinforces the perception among central agencies that data centres are the real economy story and HPC is a niche science concern.
The politics of conflation
The result is that AI policy conversations tend to adopt the categories that the most powerful, coordinated actors use: “data centres”, “cloud regions”, “AI factories”, “GPU clusters”. Once those become the default terms in briefing notes and media, the more nuanced distinction between enterprise AI compute and tightly coupled research/HPC compute looks like an esoteric refinement rather than a structural design choice.
Conclusion
From an innovation‑policy perspective, the task now is not only to insist on a distinction between data centres and HPC, but to connect that distinction to the same political narratives that already surround data‑centre investment: jobs, sovereignty, resilience and industrial capability.
Framed that way, leadership-class HPC stops looking like a specialist science concern and starts to appear as core equipment for a country that wants to design advanced defence systems, run credible climate and energy models, and train competitive home-grown AI systems without exporting its data or deepening its strategic dependence.
That in turn implies a different policy architecture: separate lines of sight in national AI and digital strategies for “AI‑ready HPC” and “AI‑capable cloud/data centres”; distinct funding models and governance baselines; and deliberate alliance‑building so that the HPC community can meet the data‑centre lobby on equal narrative terrain.
If Australia wants to move up the value chain rather than just host other people’s AI factories, it will need to treat HPC as a strategic industrial platform in its own right, not as an afterthought inside an undifferentiated category of “compute”.