AI Decisions That Cannot Wait: A New Book for Boards, Executives, Ministers and Advisers
John H. Howard, 10 March 2026

Decisions about artificial intelligence (AI) cannot wait for certainty. Policymakers must set direction while capabilities are still evolving. Executives must invest while returns remain unproven. Advisers must counsel while experts disagree. The luxury of waiting until the evidence is complete does not exist.
A new book, Making Sense of AI in 2026: A Framework for Policy and Practice, drawing on research undertaken by the Acton Institute for Policy Research and Innovation, addresses this challenge directly. Published on 8 March 2026, the book provides a new decision-making model designed to address opportunity, uncertainty and risk.
The audience for the book is a larger group than might initially appear, including:
- Policymakers and their advisers who face a technology that is evolving faster than the institutional arrangements built to govern it.
- Executives and boards wrestling with AI adoption who are looking for a structured way to assess whether proposed investments are likely to deliver on their promises.
- Industry leaders and sector strategists seeking the conditions under which AI generates genuine competitive advantage, and the conditions under which it does not.
- Senior public servants who must translate policy intent into agency capability.
- Executive coaches preparing their clients for the leadership demands that AI adoption creates.
- Researchers and analysts who will find the book's intellectual architecture useful as an organising framework.
Making Sense of AI in 2026 has received enthusiastic endorsements from Australian experts in AI and in Australian and international innovation and industrial policy.
"What this book offers policymakers, advisers, institutional leaders, and informed readers is a way to think clearly about AI under conditions of uncertainty. It does not overstate either the promise or the peril of AI. Instead, it provides an analytical framework that connects technology with capability, governance, and economic performance".
Professor Roy Green AM
Emeritus Professor and Special Innovation Adviser
University of Technology Sydney
"This is also a genuinely constructive book. It does not simply critique the hype cycle; it gives readers a language for making better decisions about where to invest, what to sequence first, and how to identify the real binding constraints".
Professor Cori Stewart, FTSE
Founder and CEO
Advanced Robotics for Manufacturing Hub (ARM Hub)
"What ultimately stands out to me is the book’s insistence that choices matter; AI trajectories are not determined by the technology. They are shaped by decisions, about what we invest in, how we govern, which industries we prioritise, and how we balance openness with sovereignty. Human agency, exercised through policy, investment, and institutional design, can shape the direction of AI."
Dr Sue Keay FTSE
Director, UNSW AI Institute
Chair, Robotics Australia Group
Sydney, January 2026
The Discourse Problem
The public conversation about AI has reached a curious impasse. Technology companies promise transformation. Consulting firms produce reports designed to sell services. Journalists and commentators oscillate between wonder and alarm. The result is a discourse that generates heat but sheds limited light on the choices that actually matter.
A Cabinet Minister receiving briefings on AI might hear four contradictory assessments in a single afternoon: that AI will transform every industry within five years; that productivity gains will be modest at best; that a quarter of jobs will be displaced; that the real concern is existential risk. Each briefing is internally coherent. Each reflects genuine expertise. But they are not talking about the same thing.
The word 'AI' has become a container so capacious that it holds everything and therefore possibly nothing.
This fragmentation can create real problems for decision-makers. Without a framework for integration, policy risks becoming reactive, lurching between narratives as political attention shifts, captured by whichever story is loudest at any given moment.
The Complementarity Thesis
Drawing on the case study material assembled for the Handbook of Innovation Ecosystems (Acton Institute for Policy Research and Innovation, 2025), Making Sense of AI in 2026 offers an integrating principle, the AI Complementarity Thesis: AI's effects depend on what it combines with, and the rate-limiting factors are usually the complements rather than the technology itself. This inverts the conventional framing, which focuses on what AI can do. Making Sense of AI in 2026 addresses this in some detail, particularly in Chapter 3 and the Appendices.
The book canvasses five categories of complement to assess the extent to which AI capability translates into productive and other valuable outcomes:
- Data infrastructure and quality
- Management capability for technology adoption
- Workforce skills for human-AI collaboration
- Organisational routines and business processes
- Governance capacity at firm, industry and national levels.
Making Sense of AI proposes that when these complements are strong, AI investments deliver. When they are weak, the same technology disappoints. The book also provides details of "Supporting Complements" that fall within these categories.
Technology alone is never enough. It is the combinations that count.
The AI Complementarity Thesis has a direct policy implication: invest in complements as seriously as in AI technology itself. If binding constraints are usually complements rather than AI capabilities, then policy attention and resources should flow accordingly. This is not the conventional wisdom, but it follows directly from the Acton Institute's analysis.
Nine Conversations That Shape the Discourse
The book maps the AI landscape into nine distinct conversations that rarely connect: enterprise AI adoption, industry transformation, scientific discovery, productivity, workforce effects, humanities and culture, cognition and communication, alignment and safety, and political economy. Each has its own assumptions, its own expert community, and its own blind spots.
A striking pattern emerges from this mapping: there is often an inverse relationship between a conversation's public prominence and its direct connection to actionable policy. The alignment conversation generates enormous online discussion but has the most uncertain policy implications. The enterprise AI adoption and productivity conversations generate less public excitement but connect directly to available policy instruments.
Despite their differences, the nine conversations share a common blind spot: each tends to focus on AI itself and to underweight what AI must be combined with to produce value or harm. The AI Complementarity Thesis cuts across all nine, providing a unifying analytical thread.
A Distinctive Voice
Making Sense of AI in 2026 establishes a voice that sits between academic scholarship and management consulting, occupying territory that neither typically covers.
The voice is that of a practitioner who has lived through multiple technology cycles and brings pattern recognition to the current moment. This positioning matters for boards and executives because it signals analysis grounded in operational realities; for Ministers and advisers, it provides intellectual scaffolding while remaining accessible.
The book maintains what might be termed 'optimistic vigilance' throughout. It neither embraces transformation rhetoric nor dismisses AI as overhyped. This tonal discipline, combined with a willingness to acknowledge the limits of knowledge through careful qualifiers, builds credibility with sophisticated readers who distrust unqualified claims.
AI is neither the transformative force that breathless commentary proclaims nor the overhyped technology that sceptics dismiss. AI is a powerful technology whose effects depend on choices: choices about what to build, how to deploy, whom to serve, and what to govern.
Making Sense of AI in 2026 maintains critical distance from vendor claims and consulting firm predictions without becoming dismissive, positioning the author as an ally of the reader against interests that would shape their understanding for commercial purposes. For Boards evaluating AI proposals and Ministers receiving briefings, this framing provides permission to ask harder questions.
Why This Book, Why Now
The scale of AI investment demands serious analytical tools. Global spending on data centre equipment and infrastructure exceeded US$290 billion in 2024. By 2026, hyperscaler capital expenditure is projected to exceed US$600 billion, with approximately three-quarters directly linked to AI infrastructure. Goldman Sachs projects total hyperscaler capital expenditure from 2025 through 2027 will reach US$1.15 trillion.
Australia is making meaningful commitments. AWS has announced a A$20 billion multi-year expansion. Microsoft has committed A$5 billion. Total committed investment reaches A$20-25 billion in the near term. NEXTDC's development pipeline is A$12.4 billion and is expected to exceed A$17.5 billion by mid-2026.
The policy challenge is to ensure that institutional and human capital complements develop in step with physical infrastructure, so that when returns materialise, they are broadly shared.
The decisions being made about artificial intelligence today will shape organisations, industries, and economies for decades.
The Task Ahead
AI is neither something that happens to us nor something we simply choose to adopt or reject. It is something we build, deploy, and govern. The trajectories are not fixed; they respond to choices. The future is not determined; it is shaped by decisions made now.
The AI Complementarity Thesis provides guidance for those choices: build the complements; invest in data infrastructure, workforce skills, management capability, and governance capacity; attend to implementation, because capability without deployment is merely potential. Above all, recognise that technology does not determine outcomes. Human choices do.
Making Sense of AI in 2026: A Framework for Policy and Practice aims to help policymakers and practitioners make those choices wisely.
Dr John H Howard is Executive Director of the Acton Institute for Policy Research and Innovation. In addition to Making Sense of AI in 2026, he is the author of The Handbook of Innovation Ecosystems: Placemaking. Economics. Business. Governance (2025). Both are available in paperback and Kindle editions.
If you and your colleagues would like to discuss the book, and receive advice on this and related issues, please do not hesitate to contact John at john@actoninstitute.au.
Note: This is the 150th Innovation Insight published by the Acton Institute for Policy Research and Innovation since its launch in April 2024.


