The gap between Australia’s AI ambition and its day‑to‑day government delivery appears increasingly to be a systems problem: fragmented platforms, inconsistent data standards and ageing core applications that can make even basic information‑sharing slow and risky. This matters because AI—particularly machine learning and generative tools—typically needs reliable, well‑governed data flows, stable infrastructure and clear accountabilities to operate safely at scale.
The Australian Financial Review’s argument that the nation’s AI upside hinges on modernising public sector systems is a reminder that “AI transformation” can be a misnomer: you don’t transform services with a model alone; you transform them with end‑to‑end redesign, from identity to records to payments and case management (Australia’s AI upside will hinge on upgrading public sector systems). In practical terms, that means turning AI from a bolt‑on assistant into a capability that can safely read, summarise, route and action work across agencies—without breaching privacy rules or producing conflicting “answers” depending on which database it queried.
A useful test is citizen experience. If a person still has to re‑enter the same details across multiple services, or if frontline staff can’t see a coherent history of interactions, then adding a chatbot on top may not deliver a dividend and can create an additional layer of confusion.
Data sharing laws exist; execution is the hard part
Australia has already laid some legislative track for better data use. The Data Availability and Transparency Act 2022 creates a framework for sharing Australian Government data for approved purposes, with controls intended to protect privacy and security (data availability and transparency framework). On paper, this should make it easier to build cross‑agency datasets for service delivery, research and policy evaluation—inputs that modern analytics and AI can often use effectively.
But legislation is not interoperability. Agencies still need consistent metadata, common definitions, data quality processes and modern integration patterns (APIs rather than bespoke point‑to‑point connections). Without those basics, data sharing can become a slow, bespoke negotiation each time—sometimes workable for one‑off projects, but difficult when trying to scale AI across entire service portfolios.
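The interoperability gap described above can be made concrete with a minimal sketch: if agencies agree on a shared metadata contract, dataset exchanges can be validated automatically rather than renegotiated project by project. The field names and sample values below are hypothetical, invented purely for illustration.

```python
# Hypothetical shared metadata contract between agencies, so a dataset
# exchange can be checked automatically instead of negotiated each time.
# All field names here are invented for this sketch.

REQUIRED_METADATA = {
    "dataset_id": str,       # stable identifier agreed across agencies
    "custodian": str,        # agency responsible for data quality
    "classification": str,   # e.g. "OFFICIAL", "PROTECTED"
    "last_updated": str,     # ISO 8601 date
    "schema_version": str,   # lets consumers detect breaking changes
}

def validate_metadata(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, expected_type in REQUIRED_METADATA.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"wrong type for {field}: {type(record[field]).__name__}")
    return problems

sample = {
    "dataset_id": "payments-2024",
    "custodian": "Agency A",
    "classification": "OFFICIAL",
    "last_updated": "2024-06-30",
    # "schema_version" deliberately omitted to show a failed check
}
print(validate_metadata(sample))  # → ['missing field: schema_version']
```

The point of the sketch is not the code itself but the governance it implies: someone has to own the contract, version it, and fund its upkeep, which is exactly the incentive problem the next paragraph describes.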
There is also a potential incentive problem. If one agency bears the cost of cleaning and publishing data while another receives most of the downstream benefit, progress can stall unless whole‑of‑government funding and governance arrangements are clear and sustained. From this perspective, government AI gains may depend less on finding the next breakthrough model and more on aligning budgets, standards and responsibilities so data can move lawfully and predictably.
Modernising identity and service channels is foundational
AI‑enabled services depend on knowing who is asking, what they are entitled to, and what actions are allowed—without forcing citizens into a maze of credentials and repeated verification. The Digital ID Act 2024 is a significant step towards a more consistent national approach to digital identity, setting rules for the Australian Government Digital ID System (Digital ID Act 2024 overview). If implemented well, stronger identity foundations can help reduce fraud, streamline onboarding and improve confidence in automated service steps.
Equally, AI “front doors” only help if the channels behind them are modern. The government has flagged modernisation work for myGov—important because myGov is a key gateway for federal services, and any AI layer will inherit its strengths and weaknesses (myGov modernisation announcement). A more reliable, accessible and secure service channel can make it easier to use AI to triage queries, pre‑fill forms, translate complex eligibility rules into plain language, and hand over to humans with context—rather than treating the contact centre as the clean‑up crew for digital failure.
The risk, of course, is mistaking a channel refresh for service reform. A new interface won’t compensate for broken back‑end workflows, duplicated records, or a payments system that cannot communicate with case management. AI can be unforgiving in this sense: it can make good processes faster and bad processes more scalable.
Responsible AI policy needs modern systems to be enforceable
The Australian Government has set expectations for how agencies should approach AI. The Digital Transformation Agency’s policy for responsible use of AI in government frames governance, risk management and transparency expectations (responsible AI in government policy). The APS also has guidance specifically for generative AI use, aimed at helping agencies manage risks and improve capability (generative AI in the APS guidance).
Those documents matter—but in practice they are easier to implement when underlying systems can support the controls they imply. For example:
- You can’t reliably audit AI outputs if you don’t have robust logging, version control and data lineage.
- You can’t protect privacy if sensitive data is scattered across uncontrolled spreadsheets or legacy systems with inconsistent access rules.
- You can’t manage model risk if you lack clear ownership for datasets, prompts, integrations and downstream decisions.
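The controls in the list above can be sketched as a thin audit wrapper: every AI-assisted output is recorded alongside its inputs, model version and accountable owner, so there is something to inspect later. The function names, log fields and stub model here are hypothetical, not any agency's actual interface.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice: an append-only, access-controlled store

def audited_ai_call(model_version: str, owner: str, prompt: str, generate) -> str:
    """Run an AI generation step and record the evidence needed to audit it.

    `generate` stands in for whatever model interface an agency uses;
    the surrounding bookkeeping is the point of this sketch.
    """
    output = generate(prompt)
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced this output
        "owner": owner,                  # who is accountable for the use case
        # Hashes give tamper-evident lineage without storing sensitive text.
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_hash": hashlib.sha256(output.encode()).hexdigest(),
    })
    return output

# Stub model for demonstration; a real deployment would call a governed service.
result = audited_ai_call(
    model_version="summariser-v1",
    owner="case-team-lead",
    prompt="Summarise case notes ...",
    generate=lambda p: "Summary: ...",
)
print(json.dumps(AUDIT_LOG[-1], indent=2))
```

Even this toy version assumes things many legacy systems lack: a single place to write the log, a stable model identifier, and a named owner, which is why the underlying platforms matter as much as the policy documents.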
This is where public sector “tech debt” can become an AI governance risk, not just a financial one. When a legacy system can’t easily separate sensitive fields, or when integration requires copying data into shadow stores, AI adoption may push agencies into workarounds that undermine the safety standards they are trying to uphold. In that context, upgrading systems is not just an optional enabler—it can be a practical part of responsible AI.
Productivity gains depend on redesigning work, not automating chaos
Australia’s broader productivity debate increasingly points to digital capability and data as levers for lifting performance. The Productivity Commission’s work on digital technology and productivity notes that the payoff comes from diffusion and adoption across the economy, not isolated innovation (Digital technology and productivity research). In government, that diffusion challenge is amplified by procurement complexity, risk aversion, and the diversity of agencies and service models.
The practical productivity prize in the public sector is not necessarily replacing staff with bots; it is reducing friction: fewer manual re‑keying steps, quicker eligibility checks, less duplication between agencies, faster policy evaluation, and more consistent decision support. AI can help, but many of the largest savings may come from less visible rebuilding—standardising data capture, simplifying forms, reworking case flows, and rationalising systems.
Where AI is often useful today is in high‑volume knowledge work: summarising case notes, drafting routine correspondence, finding relevant policy guidance, and assisting with triage. Yet even those use cases can be constrained if agencies lack modern content management, reliable records, and clean datasets. AI’s “assistant” role becomes safer and more effective when government has authoritative sources of truth that are accessible through well‑designed interfaces and APIs.
Capability and procurement: the hidden levers
Even with better platforms, government needs people who can operate them. The APS has been trying to strengthen digital capability via initiatives such as the APS Digital Profession, which aims to build and recognise digital skills across the service (APS Digital Profession). This matters because AI projects can fail due to capability gaps as well as technical ones—especially in product management, data engineering, cyber security, user research and change management.
Procurement is another lever. If agencies buy AI as a stand‑alone product without investing in integration, data governance and ongoing model management, they can accumulate a new kind of debt: prompt libraries no one owns, opaque vendor models, and brittle connections to legacy systems. Conversely, if modernisation programs specify open standards, interoperability, and portability of data and models, they can reduce lock‑in and make it easier to adopt better tools over time.
There’s also a balancing act between speed and assurance. AI tools move quickly; government systems, by design, change slowly because they are critical infrastructure for citizens. Bridging that gap can require modular architectures—so agencies can upgrade components without risking full system failure—and clear guardrails so experimentation happens in controlled environments rather than in production by accident.
A pragmatic path: modernise the core, then scale AI
Australia’s best chance at capturing AI dividends in the public sector is to treat AI as the “last mile” of reform, not the first. The sequence matters:
- Modernise identity, records and payments foundations so services can verify, decide and transact reliably.
- Implement data standards and sharing mechanisms that work in practice, not just on paper.
- Upgrade integration patterns (APIs, event‑driven systems, consistent metadata) so agencies can collaborate without building bespoke bridges each time.
- Embed responsible AI controls—auditability, privacy safeguards, human oversight—into platforms, not just documents.
- Scale proven use cases in frontline operations once the back end can support them.
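The integration step in the sequence above (event-driven systems rather than bespoke bridges) can be sketched as a shared event bus: an agency publishes a state change once, and any authorised subscriber reacts, without pairwise connections. Topic names and payload fields below are hypothetical.

```python
from collections import defaultdict

# Hypothetical event bus: publishers and subscribers share a topic name,
# not a bespoke point-to-point link per agency pair.

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict):
        for handler in self.subscribers[topic]:
            handler(event)

bus = EventBus()
received = []

# Two downstream systems consume the same event without knowing each other.
bus.subscribe("identity.verified", lambda e: received.append(("payments", e["citizen_id"])))
bus.subscribe("identity.verified", lambda e: received.append(("case_mgmt", e["citizen_id"])))

bus.publish("identity.verified", {"citizen_id": "C123"})
print(received)  # → [('payments', 'C123'), ('case_mgmt', 'C123')]
```

The design choice matters for the sequencing argument: adding a new consumer is one subscription, not a new bilateral integration project, which is what makes scaling AI use cases across agencies tractable.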
The AFR’s warning can be read as a compounding problem: with each year that government defers core upgrades, AI becomes harder to deploy safely and effectively, because models improve faster than the systems they need to connect to. Conversely, if Australia uses the current AI wave to justify long‑overdue modernisation—data governance, identity, interoperability and workforce capability—the public sector could become a catalyst for national AI adoption rather than a constraint.
The conclusion to be drawn from all this is straightforward. AI will not fix broken government technology. But fixing government technology is one of the most credible ways Australia can turn AI from a series of pilots into measurable improvements in service quality, resilience and productivity.
