AI advisory body binned

After 15 months spent identifying and narrowing expert candidates, the government has scrapped its AI advisory body, leaving industry to guess what comes next.

The federal government has scrapped its artificial intelligence advisory body before it was ever established, according to reporting by ABC News. The decision lands at an awkward moment: businesses are asking for clearer “guardrails” around high‑risk AI, while concern about deepfakes, automated decision‑making and privacy breaches remains prominent in public debate.

The immediate practical effect is not that AI regulation disappears—Australia already regulates parts of AI use through existing laws and regulators—but that one of the government’s key consultative “listening posts” has been removed just as policy choices are becoming more concrete. The broader question is whether this is an administrative reset, or a sign the government is changing how it seeks and applies expert advice.

What was scrapped — and why it matters

The ABC reports the government spent roughly 15 months identifying and vetting prospective members before the body was abandoned, with candidates told the arrangement would end straight away. While public detail about the group’s planned work is limited, its intended role appears to have been straightforward: providing a forum where industry, researchers and civil society could test ideas about risk, safety and the economic opportunity of AI before proposals hardened into regulation.

That matters because AI policy is increasingly about choosing trade-offs—innovation versus compliance costs, transparency versus intellectual property, and speed versus safety. Advisory bodies can help governments avoid abrupt shifts after high-profile incidents, and support more consistent policy development.

Even if the government replaces the group with other consultative mechanisms, the suddenness reported by the ABC appears to have unsettled parts of industry. It may suggest a change in priorities or confidence in the advisory structure, though the government’s rationale has not been fully set out in public reporting.

Australia’s AI “guardrails” are still mid-build

The scrapping comes while Australia is still working through its roadmap for AI safety and responsibility. The Department of Industry, Science and Resources’ consultation paper on Safe and responsible AI in Australia flagged a menu of options, from voluntary measures to mandatory obligations for high‑risk uses. It also canvassed “whole-of-economy” issues such as accountability when AI systems are procured from overseas vendors and deployed in Australian settings.

The government later set out intended next steps in its response to the Safe and responsible AI consultation, signalling a preference for targeted interventions rather than a single “AI Act”-style regime. Exactly how far that targeted approach will go—particularly for generative AI models and foundation systems—remains contested, with stakeholders arguing for different levels of prescription.

In that context, a dedicated advisory body can act as connective tissue between consultation and implementation—particularly for fast-moving technical issues such as model evaluations, red-teaming, watermarking, compute governance and data provenance. Without it, it may be harder for external stakeholders to understand where expert advice is being weighed and what evidence is driving the sequencing of reforms.

Existing laws still bite — but the gaps are familiar

Australia is not starting from scratch. AI deployments that mishandle personal information can trigger privacy obligations, and the national privacy regulator has been explicit that adopting AI does not suspend existing compliance requirements. The Office of the Australian Information Commissioner outlines practical considerations in its guidance on artificial intelligence and privacy, including transparency, data minimisation, accuracy and security.

Likewise, harms that play out online—such as image-based abuse, synthetic sexual content and viral deepfakes—sit within a regulatory ecosystem that includes platform safety expectations and enforcement tools. The eSafety Commissioner discusses emerging challenges and mitigations in its overview of generative AI, reflecting wider efforts to reduce the downstream harms of synthetic content.

However, gaps are widely discussed in policy and industry circles. Existing law can be slow to map onto AI’s specific risks, including opaque systems used for consequential decisions, the difficulty of contesting outcomes, automation bias, and “model drift” over time. Voluntary principles can lift practice among willing organisations, but may have less influence where safety is treated as optional or secondary to speed-to-market.

This is one area where advisory groups can be useful: translating technical concerns into implementable policy, and stress-testing whether proposed obligations are feasible for SMEs as well as large organisations.

Advice versus authority: the National AI Centre and the machinery of government

The government’s AI work is spread across multiple institutions. The Department of Industry’s National AI Centre has played a prominent role in promoting AI adoption, convening stakeholders and supporting capability-building. Separately, regulators and portfolio agencies address impacts within their remits—privacy, consumer protection, competition, safety and public sector integrity.

That patchwork is not necessarily a problem; AI is a general-purpose technology and its risks are contextual. But fragmentation increases the premium on coordination. When advisory bodies are dismantled, a practical question follows: where does cross-cutting expertise now sit, and who is responsible for reconciling competing demands—for example, pressure to share more data for model development versus the need to protect personal and sensitive information?

The ABC’s reporting frames the decision as leaving industry “to guess what comes next”. That uncertainty can itself shape behaviour. Companies considering AI deployments in hiring, lending, education support, health triage or fraud detection are sensitive to regulatory direction. If the government wants businesses to adopt AI to lift productivity, it may also need to reduce ambiguity about future obligations—otherwise many organisations may default to delaying deployments or narrowing use cases.

Parliament is watching AI’s impact on work and society

The advisory body’s demise also sits alongside parliamentary scrutiny of AI’s effects. The House of Representatives committee inquiry into AI adoption and the future of work is one channel where industry, unions, academics and community groups have been putting evidence on the record, including how automation changes job design, where new roles may emerge, and what protections workers may need when algorithmic tools shape rostering, performance assessment or surveillance.

These processes are not a substitute for standing technical advice to government. Inquiries can take months, and their recommendations can be general by necessity. But they can provide political legitimacy for reforms—particularly when the social consequences of AI become more visible and harder to treat as niche “tech policy”.

Even if the government is moving away from one advisory mechanism, pressure is likely to remain for an alternative that can keep pace with the technology and provide practical, iterative guidance—especially where issues cut across employment, education, health, national security and media integrity.

Industry wants predictability, not perfection

A key theme in the reaction described by the ABC appears to be concern about the policy vacuum created by abrupt structural change, rather than opposition to regulation itself. For AI developers and major deployers, the preference is often for predictable rules, so systems can be designed to meet them.

One reason earlier government work attracted attention is that it acknowledged different risk levels: not every AI system needs the same level of oversight, and overly broad obligations can discourage beneficial use cases. The department’s AI Ethics Framework sets out voluntary principles such as fairness, transparency and accountability, which many organisations reference in procurement and governance documentation.

However, voluntary principles are sometimes criticised as insufficient after real-world incidents. Some stakeholders argue that without enforceable duties—particularly for high-impact use—organisations can present “ethics” commitments without meaningful auditability or avenues for review. Others argue that mandatory rules written too early can inhibit innovation, lock in assumptions about a fast-changing technology, and inadvertently favour well-resourced incumbents.

Those debates are unlikely to go away. Without a visible advisory body, the government may find it harder to demonstrate it is weighing competing views—especially where stakeholders disagree about what works in practice, such as audits, model cards, incident reporting, procurement standards or sandboxing.

What to watch next

The immediate unknown is what replaces the scrapped body: a new advisory structure, ad hoc expert panels, deeper regulator coordination, or a more centralised approach within the public service. The ABC reports the selection process spanned 15 months; if that preparatory work is not published or clearly reflected in policy pathways, stakeholders may question whether the effort was effectively discarded or absorbed into internal processes.

In the near term, three signals are likely to matter.

First, whether the government clarifies how it will continue to take technical advice on fast-developing issues such as generative AI safety, evaluation methods and deepfake mitigation.

Second, whether “safe and responsible AI” reforms proceed on a clear timetable consistent with the government’s own consultation response, or whether timelines slip as structures are reshuffled.

Third, whether regulators (privacy, safety and consumer protection) coordinate more visibly, using existing powers and published guidance, such as the OAIC’s material on artificial intelligence and privacy, to give businesses more practical direction in the absence of a single advisory forum.

For now, the decision to scrap the advisory body leaves Australia in a familiar place: interested in AI’s potential, wary of its harms, and still negotiating the institutional setup needed to turn “guardrails” into measures that work in practice.
