Last week, leading AI researcher Toby Walsh warned that Australia’s lack of guardrails risks young people being “sacrificed for the profits of big tech” after the government abandoned a proposal to establish an advisory body of AI experts. Instead, the government released a National AI Plan that emphasises investment in data centres, telecommunications infrastructure and workforce training, envisages an “AI Safety Institute” (currently recruiting), and proposes some internal AI transparency measures for the public sector. So far, that transparency has been limited.
How does this add up for AI regulation in Australia? Internationally, approaches vary. The EU’s AI Act bans uses such as exploiting vulnerable groups but is struggling to implement rules for high‑risk AI systems that are not outright prohibited. In Australia’s region, South Korea, Japan and Taiwan have passed new AI laws, mainly to give governments powers to act when they see fit, and industry pushback is expected. Other major jurisdictions, most notably the United States and the United Kingdom, do not have comprehensive AI rules. In the US, the federal government has maintained safeguards for its own use of AI even while limiting state regulation of private AI uses. The UK has pursued technical, non‑legal safeguards alongside its new AI Safety (now Security) Institute.
Differences reflect the familiar “dilemma of control”: when regulation is easy, risks are not yet visible; when risks are clear, regulatory change is expensive and slow. Australia also lacks the global clout in AI that it has in some other sectors, limiting its influence. Given these uncertainties, regulatory predictability matters for innovation. Australia’s Assistant Minister for Science, Technology and the Digital Economy, Andrew Charlton, has acknowledged the need for clear principles and broad buy‑in as an “insurance policy.”
The government’s formal safety approach appears in action 7 of the National AI Plan, which argues that existing Australian frameworks “can apply to AI and other emerging technologies.” That is, current laws, such as consumer protections against misleading and deceptive conduct, could cover AI. This view echoes earlier government positions and the Productivity Commission, though some experts dispute it, arguing that existing laws are insufficient.
There are real problems with relying on existing laws. AI systems’ complexity, semi‑autonomy and opacity make it hard to attribute liability or responsibility for harms under current legal frameworks. Those limitations, identified as early as 2023, have not been systematically addressed.
The regulatory landscape is fragmented and uncertain. There are at least 21 mandatory or quasi‑mandatory state and federal policies on government use of AI. Courts have had few opportunities to clarify key legal questions—there are almost no test cases in negligence, administrative law, discrimination law or consumer law involving AI.
The new plan commits to monitoring AI development and responding as challenges arise, but it leaves open how that monitoring will be resourced and coordinated. Will the government truly “empower every existing agency across government to take responsibility for AI”? Addressing privacy, consumer protection and anti‑discrimination concerns in practice requires money, commitment and interagency coordination that has not yet been demonstrated.
Looking ahead, signals matter. Political shifts overseas could change regulatory pressures here. A laissez‑faire posture does not automatically create predictability when many regulatory processes are stalled or partial. The government appears to expect courts, agencies, businesses and individuals to retrofit old laws and institutions to a new technological landscape. There is limited hope for stronger public‑sector automated decision‑making rules (promised after the Robodebt Royal Commission), but for most uses the approach is effectively “wait and see.” Whether that will be enough remains uncertain.
