South Korea has joined the growing club of jurisdictions moving from AI principles to enforceable obligations. The change is less about sudden technological breakthroughs than about a public-policy reality: generative AI is now embedded in everyday tools, while “high-impact” systems increasingly mediate access to jobs, credit and public services. Regulators are trying to catch up without choking off a sector seen as central to national competitiveness.
According to ABS-CBN’s report on the law taking effect, the new framework sets rules for “high-impact” AI and attaches specific duties to generative AI providers—pushing responsibility closer to the developers and deployers who profit from the technology.
A law built to make AI legible
The centrepiece is South Korea’s “AI Basic Act”, reported by Reuters as the country’s first comprehensive AI law, formally titled the Framework Act on the Development of Artificial Intelligence and Establishment of Trust. Reuters notes it aims to balance promotion of AI with safeguards, using a tiered approach that tightens requirements for more consequential uses and adds transparency expectations for generative systems.
This framing matters. Rather than treating “AI” as one monolithic risk, the act is designed to sort systems by how much harm they can plausibly cause. That approach resembles the risk-based logic of the European Union’s AI Act, which distinguishes between prohibited, high-risk and lower-risk categories and introduces transparency duties for certain AI interactions and content generation.
For businesses, the practical takeaway is that “we used an off-the-shelf model” may be a weaker defence when things go wrong. Regulators in several jurisdictions are signalling they want clearer accountability for design choices, deployment contexts and downstream impacts—particularly in domains where errors have real-world consequences.
“High-impact” AI: the compliance bullseye
Much of the act’s bite sits in how it treats high-impact AI. In broad terms, high-impact systems are those used in settings where decisions can significantly affect people’s rights, safety or access to essential opportunities—think screening job candidates, assessing creditworthiness or supporting public administration.
Legal briefings aimed at multinationals suggest that high-impact AI obligations are likely to focus on risk management, documentation and governance—demonstrating that foreseeable harms were assessed, mitigations were designed, and performance is monitored after deployment. A DLA Piper summary highlights expected themes such as internal controls, record-keeping, user information, and measures to address fundamental-rights impacts, user protection, and reliability and safety risks for systems that fall into the higher-impact bucket.
This is where “trustworthy AI” rhetoric becomes operational. It is no longer sufficient to publish ethical guidelines while pushing a model into production; organisations may need to show their working. Importantly, the compliance burden is not necessarily limited to the largest foundation-model labs. Any organisation deploying a system classed as high-impact—banks, insurers, HR tech vendors, hospitals, government contractors—may find itself needing to maintain auditable processes, even if it did not train the model from scratch.
Generative AI gets transparency duties
Generative AI has become the consumer-facing tip of the AI spear—chatbots, image generators, voice cloning and summarisation tools. The act, as described in reporting and official summaries, adds obligations aimed at reducing confusion between human and machine output and limiting the social damage of synthetic media.
South Korea’s Ministry of Science and ICT (MSIT) has positioned the law as combining support for innovation with trust measures, including steps directed at generative AI such as clearer user information and content-related transparency (with specifics generally expected to be elaborated via subordinate regulations and guidance). MSIT’s English-language materials describe the act’s overall intent and the broad regulatory architecture, including differentiated treatment for higher-risk uses.
One recurring policy idea in this space—also seen internationally—is that when AI generates content that could be mistaken for authentic footage or human communication, users should be told, and provenance should be easier to track. Exactly how South Korea will implement these expectations (for example, via labelling rules, watermarking standards, or platform obligations) will depend on implementing decrees and technical guidance. That detail matters: labelling can be effective in higher-risk contexts such as political advertising or impersonation, but it can also be limited if metadata is easy to remove or enforcement is uneven across platforms.
For media and creative industries, the act’s generative AI measures may also intersect with ongoing debates about training data and copyright. While the Basic Act is not primarily a copyright instrument, transparency and documentation requirements may indirectly increase pressure on developers to be clearer about how systems are built and what content they ingest—especially where public trust is already strained.
Enforcement: incentives, penalties, and the “paper trail” problem
Any regulation is only as meaningful as its enforcement. While South Korea’s act is often presented as “pro-innovation”, it still needs teeth to change behaviour—particularly for firms that might otherwise treat safety and fairness work as optional overheads.
Large law firms tracking the act have highlighted enforcement tools such as corrective orders and administrative penalties, as well as compliance timelines that may phase in obligations as detailed rules are finalised. A Baker McKenzie explainer notes that the act contemplates sanctions and enforcement mechanisms, and suggests these may favour organisations that invest early in governance, documentation and incident response, because those artefacts tend to be what regulators ask for first.
That “paper trail” requirement is often the most uncomfortable shift for fast-moving AI teams. Model development culture prizes iteration; governance culture prizes traceability. If the new rules effectively push firms to keep better records of training data sources, evaluation results, known limitations and deployment contexts, the law could reduce harm even without constant prosecutions—simply by making it harder to ship systems that have not been stress-tested.
At the same time, there is a risk of compliance theatre: documentation that looks robust but does not reflect real-world performance. How regulators audit, what evidence they accept, and whether civil society and academia can scrutinise outcomes will help determine whether the regime becomes a genuine accountability layer or primarily a box-ticking exercise.
Why South Korea is moving now
South Korea’s timing reflects both domestic pressures and international signals. Public concern about deepfakes, scams, privacy and biased decision-making has risen globally alongside rapid productisation of AI tools. For export-oriented tech economies, there is also a strategic motivation: if you can credibly claim your AI is “trustworthy” under a known national framework, you may reduce friction when selling into regulated markets.
The EU’s AI Act has become a reference point for this dynamic, functioning as a de facto standard-setter in a way sometimes compared with the EU’s influence on privacy rules through the GDPR. South Korea’s risk-based structure appears broadly compatible with that direction of travel, which could make cross-border compliance less chaotic for companies operating in both jurisdictions (even though details and thresholds will differ).
There is a geopolitical angle too. As AI becomes infrastructure, states want assurance that critical systems—health, finance, public administration, security—are not dependent on opaque tools that cannot be explained or controlled. An AI law can be as much about national resilience as consumer protection.
Still, the effectiveness of the act will depend on the quality of the implementing rules and the resourcing of regulators. It is one thing to pass a framework; it is another to develop technical standards, audit methodologies and incident-reporting channels that keep pace with models that update frequently.
What developers and businesses should do next
For Australian companies with Korean customers or partners—or for Korean firms operating in Australia—the law’s start date is a practical prompt to review AI governance now, not later. Three immediate workstreams stand out:
Classification: Work out whether any systems in use could be deemed “high-impact”, and document the reasoning (a minimal sketch of one way to record this follows the list). If you cannot show why you are not high-impact, you may be treated as if you are.
Transparency by design: For generative AI features, build user-facing disclosure and internal traceability into the product roadmap. Retrofitting is costly and often fails.
Operational readiness: Incident response plans should include AI failures—hallucinations that cause harm, biased outcomes, data leakage, impersonation. Regulators often focus on how quickly and competently an organisation reacts after harm is identified.
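The act does not prescribe any particular format for this kind of record-keeping, and detailed requirements will sit in subordinate regulations. Purely as an illustration of the discipline involved, the sketch below shows how an internal AI system register entry might capture the classification rationale, user disclosure and incident history in one place; all names and fields (such as AISystemRecord and high_impact_rationale) are hypothetical, not drawn from the law or any official guidance.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI system register."""
    name: str
    purpose: str                    # e.g. "CV screening for graduate intake"
    is_generative: bool             # does it produce text/image/audio output?
    high_impact: bool               # the organisation's own classification call
    high_impact_rationale: str      # why that conclusion was reached
    user_disclosure: str            # what users are told about AI involvement and limits
    evaluations: list[dict] = field(default_factory=list)  # test results, dates, datasets
    incidents: list[dict] = field(default_factory=list)    # harms identified and responses

    def log_incident(self, description: str, action_taken: str) -> None:
        """Record a failure (biased outcome, harmful output, data leakage) and the response."""
        self.incidents.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "description": description,
            "action_taken": action_taken,
        })


# Example: a deployer documenting its reasoning before, not after, a regulator asks.
record = AISystemRecord(
    name="candidate-screening-v2",
    purpose="Ranks job applications for human review",
    is_generative=False,
    high_impact=True,
    high_impact_rationale="Mediates access to employment, an area treated as high-impact",
    user_disclosure="Applicants are told that automated ranking assists human recruiters",
)
record.log_incident(
    description="Model downgraded applicants with employment gaps over 12 months",
    action_taken="Feature removed, affected applications re-scored, candidates notified",
)
```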
None of this guarantees frictionless compliance, especially as secondary rules mature. But the direction is clear: South Korea is signalling that “move fast and break things” is not an acceptable operating principle for AI systems that can damage livelihoods, reputations or safety.
South Korea’s AI Basic Act taking effect does not end the debate over how to regulate AI—if anything, it begins the harder phase of interpretation and enforcement. But it does shift the baseline. Developers and deployers now have less room to claim there was no rulebook, and more reason to prove—on the record—that their models deserve the trust they ask for.
South Korea’s AI Basic Act took effect on 22 January 2026, introducing transparency duties for generative AI and governance requirements for “high-impact” uses, with a grace period before administrative fines are applied.
