
The Political Economy of Artificial Intelligence: Automation, Augmentation and the Future of Discovery

John H Howard, 16 December 2025



Generative artificial intelligence has precipitated a familiar economic anxiety: the technology is everywhere, in our code editors, our word processors, and our boardrooms, yet it has not shown up in the productivity statistics. While individual tasks, from drafting emails to debugging software, have seen measurable efficiency gains, these micro-level improvements have not yet translated into the macro-economic surges promised by the technology’s evangelists.

This discrepancy suggests an implementation lag, a "J-curve" in which the costs of adopting and adapting to a new general-purpose technology temporarily outweigh the benefits. However, treating this as a waiting game ignores the active choices currently shaping the trajectory of AI. Closing the gap between capability and impact means navigating a tension between two economic logics: the logic of automation, which seeks to replace human labour to cut costs, and the logic of augmentation, which seeks to expand human capability to create value.

The choices being made now will certainly influence future GDP growth, but more fundamentally, they will shape the future structure of science, research, and innovation. As the tension between automation and augmentation is negotiated, it must also address the valid and widespread safety concerns that have dominated recent public discourse. These safety issues are economic and social externalities that require a robust governance framework; they are not just technical bugs to be patched. To understand the future of our innovation ecosystem, the competing forces of automation, augmentation, and safety must be considered together.

The False Economy of "So-So" Automation

In the current discourse, automation is often presented as the inevitable endpoint of technological progress. The economic logic seems sound on the surface: if a machine can perform a task more cheaply than a human, the firm should replace the human. This substitution effect drives immediate cost reductions and boosts margins. However, this view is dangerously myopic when applied to complex cognitive work.

Economists Daron Acemoglu and Simon Johnson describe the trap of "so-so automation," where technology is just good enough to displace human workers but not good enough to generate significant productivity gains. This effect is seen in automated customer service lines that trap users in frustration loops, or in AI-generated content that floods channels with mediocrity, requiring human intervention to fix. In these scenarios, labour demand falls, but productivity does not meaningfully rise. The economic surplus is merely transferred from labour to capital, without expanding the overall pie.

For the innovation system, an automation mindset can be particularly corrosive. Research is not an assembly line where the goal is to produce the same widget faster. The goal of research is discovery, the production of new knowledge. If AI is widely used to automate the writing of grant applications or the summarising of literature, it accelerates the bureaucratic churn of the academic mill. It risks creating a system in which AI-generated papers are reviewed by AI-generated peer reports, forming a closed loop of synthetic mediocrity that advances no genuine understanding.

The Promise of Augmentation and Epistemic Expansion

The other path is augmentation. This approach views AI not as a replacement for human cognition but as a complementary asset that extends epistemic reach. In this model, the human remains the architect of inquiry, while the AI acts as a high-dimensional instrument, capable of perceiving patterns and processing complexities that the unaided human brain cannot grasp.

This is already being seen in the "science of science." DeepMind’s AlphaFold did not replace biologists; it gave them a tool to predict protein structures with an accuracy that would have taken decades of experimental labour. This did not reduce the demand for biologists; instead, it opened a new horizon of biological questions that had previously been unaskable. It shifted the bottleneck from structure prediction to function analysis and drug design.

This is the essence of augmentation: it changes the tasks rather than simply performing the same tasks faster. It allows a materials scientist to navigate a design space of millions of potential compounds to find a new battery electrolyte, or a climate modeller to integrate thousands of variables into a coherent simulation. In an augmented system, productivity is measured by the impact of outcomes, not just the volume of outputs. The economic value comes from the new discoveries made, the new problems that can be solved, the new markets that can be created, and the new frontiers of knowledge that can be opened.

Of course, augmentation is considerably harder than automation. It requires organisational redesign, new skills, and a willingness to reimagine the discovery workflow. It demands treating AI as a partner in the research process, a "co-pilot" that challenges assumptions rather than simply confirming them. Most importantly, it requires a level of institutional agility that many research organisations currently lack.

The Political Economy of Safety

While the tension between automation and augmentation plays out in public research and industry, a parallel tension is unfolding in the regulatory sphere. The safety concerns surrounding AI are valid, ranging from the immediate risks of bias and hallucination to the long-term, existential risks of misalignment. However, the solutions to these safety problems are deeply entangled with the political economy of the AI industry.

Safety is not free. It imposes a compliance cost on innovation. Strict requirements for model auditing, red-teaming, and data provenance create a regulatory moat that favours large incumbents. Only the largest technology firms have the capital and legal infrastructure to navigate a heavy-handed safety regime. This creates a paradox: in attempting to make AI safe, regulation may inadvertently centralise control in the hands of a few dominant players.

This concentration of power directly threatens the diversity of the innovation ecosystem. Science thrives on openness and plurality. If the tools of discovery are locked behind proprietary walls, or if the computational resources required to run them are accessible only to the best-funded labs, there is a risk of creating a two-tier scientific community. The "haves" will leverage augmented intelligence to accelerate their research, while the "have-nots" will be left behind, widening the gap between elite institutions and the rest of the research sector.

There is a tension between safety and the open-source ethos that underpins much of scientific progress. Open weights and open models allow researchers to scrutinise the tools they are using, ensuring reproducibility and trust. But, from a safety perspective, releasing powerful models into the wild can be viewed as a proliferation risk. Navigating this trade-off, between the transparency required for science and the control required for safety, is one of the most critical policy challenges of our time.

Restructuring the Innovation Ecosystem

To resolve these tensions, a new approach to the governance and management of our innovation ecosystems is needed. Neither leaving AI to market forces nor imposing blanket bans is a solution. What is required is a "narrow path" that incentivises augmentation while managing the safety externalities. This means:

  • Public research institutions and universities must become active architects of AI adoption, not passive consumers of commercial tools. They must invest in "public interest AI": models and infrastructure designed specifically for scientific inquiry, trained on high-quality, curated datasets, and governed by the values of academic integrity rather than shareholder value. This includes building sovereign capability in compute and data, ensuring that Australian researchers are not entirely dependent on foreign corporate infrastructure.

  • How research productivity is measured and rewarded must be rethought. Continuing to incentivise volume, more papers and more citations, will encourage the "so-so automation" of science, producing more noise and less signal. Instead, funding models and evaluation frameworks should prioritise high-risk, high-reward inquiry, where AI is used to tackle grand challenges rather than to game the metrics.

  • The safety discussion must be democratised. Regulatory frameworks should be risk-weighted, distinguishing between a chatbot used for customer service and an AI system used to design synthetic pathogens. Safety regulations should not become a barrier to entry for smaller innovators or academic researchers. This might involve "regulatory sandboxes" where new tools can be tested in a controlled environment, or public subsidies for safety compliance for non-profit research actors.

Institutions in the Age of Artificial Intelligence

The integration of AI into the innovation system is an institutional challenge as much as a technical one. It means looking at the internal design and structures of universities, where computer science often sits separately from biology, and economics sits separately from ethics, arrangements that are ill-suited to the age of AI. The most impactful innovation will happen at the intersections, where domain experts work alongside AI specialists to design bespoke tools for their specific fields.

This may require a new kind of AI translator who can bridge the gap between technical capabilities and domain-specific problems. These roles are currently rare and undervalued, yet they are the linchpins of an augmented research system.

There is also a human capital dimension. The fear of replacement is real, and it can lead to resistance. It must be made clear that the goal of AI in science is not to replace the scientist, but to liberate them from drudgery. By automating routine tasks such as data cleaning, literature scanning, and formatting, AI can free cognitive bandwidth for the creative, critical, and conceptual work that machines cannot do.

Conclusion

The productivity of the AI age will be achieved through the deliberate design of economic and social institutions. There is a "fork in the road" where choices must be made. One path leads to a future of "so-so automation", where AI is used to cut costs and generate synthetic clutter, concentrating wealth and eroding the quality of our information environment.

The other path leads to a future of augmentation, where AI is a lever for the human mind, unlocking new eras of discovery and solving the complex problems that have long eluded us. Navigating this path requires a well-articulated political economy and a balance between the imperatives of safety and the necessities of open innovation.

Augmentation will require investment in public infrastructure that democratises access to knowledge and intelligence, and a clear vision of the human role in the loop, not as a supervisor or operator of machines, but as the source of the meaning, purpose, and direction that guides them.

As argued in a previous Innovation Insight, the productivity paradox is a temporary fog; the decisions made now will determine what lies on the other side.

Reference

Acemoglu, D., & Johnson, S. (2023). Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. Basic Books.


