On September 8, 2025, the South Korean Ministry of Science and ICT ("MSIT") released for public comment a consolidated draft package of sub-laws that supplement, implement, and explain Korea’s Basic Act on the Development of Artificial Intelligence and the Creation of a Trust Base (the “AI Framework Act”). These drafts are intended to operationalize the Act—which was passed on December 26, 2024, and is scheduled to take effect on January 22, 2026—so that core duties and procedures are in place in advance of the effective date.
The package begins with a draft Enforcement Decree, which implements the AI Framework Act by clarifying scope, procedures, institutional roles, and key thresholds for applicability. Among other things, the Decree gives legal force to concepts such as extraterritorial applicability when conduct affects the domestic market and sets the cadence for national AI basic plans.
Further supplementing (and complementing) the Decree is a draft Safety Assurance Notice that defines which AI systems must implement lifecycle safety measures and submit results, using both a compute-based threshold—generally 10^26 floating-point operations (FLOPs)—and additional risk-based criteria for frontier systems. The notice details risk identification, assessment and mitigation, incident management, and time-bound reporting (three months after confirmation, one month after material changes or new risks, seven days after serious incidents, with annual submissions thereafter).
A parallel draft Business Responsibilities Notice specifies the obligations of operators of “high-impact AI,” including establishing risk management programs, preparing explanation plans, instituting user-protection measures, ensuring human oversight, and maintaining documentation. Operators must publicly post core governance information while protecting trade secrets.

Two sets of guidelines round out the package. The Guidelines to Ensure AI Safety translate the Safety Assurance Notice into methods, examples, and forms, including multiple approaches to measuring cumulative compute (theoretical estimation, empirical/statistical, and hardware-log methods), and good practices for monitoring, red-teaming, and incident handling. The High-Impact AI Determination Guideline provides sectoral criteria and case-based examples for identifying “high-impact AI” across energy, drinking water, healthcare and medical devices, nuclear, criminal investigation and arrest (biometrics), recruitment and loan screening, transportation, public services, and primary/secondary education, and sets out a formal confirmation procedure that can draw on an expert committee.
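The theoretical-estimation approach lends itself to a quick back-of-the-envelope check. The sketch below applies the common approximation of roughly 6 FLOPs per parameter per training token and compares the result against the 10^26 FLOPs threshold; the approximation, the function names, and the example figures are illustrative assumptions, not the methodology the draft guidelines prescribe.

```python
# Illustrative sketch only: the 6 * params * tokens approximation is a common
# rule of thumb for dense-model training compute, not the official method in
# the draft Guidelines to Ensure AI Safety.

THRESHOLD_FLOPS = 1e26  # compute threshold referenced in the draft Safety Assurance Notice


def estimate_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Theoretical estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * num_parameters * num_training_tokens


def exceeds_threshold(total_flops: float, threshold: float = THRESHOLD_FLOPS) -> bool:
    """True if cumulative training compute meets or exceeds the threshold."""
    return total_flops >= threshold


if __name__ == "__main__":
    # Hypothetical model: 500B parameters trained on 30T tokens.
    flops = estimate_training_flops(5e11, 3e13)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Above 1e26 threshold:", exceeds_threshold(flops))
```

The empirical/statistical and hardware-log methods would instead derive the figure from measured utilization or accelerator logs, so an operator may end up reconciling several estimates for the same training run.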
Taken together, the various documents form a hierarchy—Act → Decree → Notices → Guidelines—in which the AI Framework Act provides the legislative foundation (including obligations for high-impact AI operators and safety requirements for advanced systems), the Enforcement Decree operationalizes those statutory duties, the Safety Assurance and Business Responsibilities Notices specify how to comply in practice (what to document, when to file, and how to manage risks, incidents, and transparency), and the Guidelines convert the notices into concrete methods, checklists, examples, and forms that facilitate compliance and supervision. The Guidelines are non-binding but are intended to shape implementation.
Two cross-cutting concepts dominate the drafts. First, a compute-and-risk gateway: systems trained above 10^26 FLOPs—or otherwise designated as frontier/high-risk—must implement lifecycle safety programs and submit results to the MSIT. Second, a context-based definition of “high-impact AI,” with sectoral criteria, self-assessment examples, and a ministerial confirmation pathway when uncertainty remains; expert committees may advise, but courts would ultimately resolve disputes.
Obligations for high-impact AI operators
The Business Responsibilities Notice translates Article 34 of the Act into concrete, ongoing duties for operators of high‑impact AI. At its core, the regime first requires a documented, lifecycle risk management program. Operators are expected to adopt written policies, assign accountable roles, and run a governance process that identifies, assesses, and treats risks from design through retirement. In practice, this means building multi‑disciplinary review (technical, legal, and ethics), using structured methodologies for hazard identification and prioritization, and maintaining a standing mechanism to update controls as models, deployments, or use contexts change. The plan should not be static; it should be reviewed regularly and tied to operational monitoring and post‑incident learning.
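To make the idea of structured hazard identification and prioritization slightly more concrete, here is a minimal risk-register sketch in Python. The fields, the severity and likelihood scales, and the severity-times-likelihood score are illustrative assumptions; the notice does not prescribe a particular register format.

```python
from dataclasses import dataclass, field
from datetime import date

# Minimal risk-register sketch; field names and the severity x likelihood
# scoring are illustrative assumptions, not prescribed by the draft notice.

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    severity: int        # 1 (negligible) .. 5 (critical)
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    owner: str           # accountable role, per the written policies
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        return self.severity * self.likelihood


def prioritize(register: list[RiskEntry]) -> list[RiskEntry]:
    """Order risks so the highest-scoring items are reviewed first."""
    return sorted(register, key=lambda r: r.score, reverse=True)


register = [
    RiskEntry("R-001", "Biased loan-screening outputs", severity=4, likelihood=3, owner="Model risk lead"),
    RiskEntry("R-002", "Prompt-injection bypass of content filters", severity=3, likelihood=4, owner="Security lead"),
]
for entry in prioritize(register):
    print(entry.risk_id, entry.score, entry.description)
```

Keeping the register under version control and reviewing it alongside operational monitoring gives the "standing mechanism to update controls" a concrete home.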
A second pillar is an explanation plan, which must be prepared and implemented to the extent technically feasible. The plan addresses three elements: what the system produced, on what basis, and what is known about the training data. Operators should be able to describe the final outputs and their main determining criteria, articulate model limitations and appropriate user interpretations, and maintain an internal overview of training data (types, sources, preprocessing, representativeness, and the use of synthetic data). User‑facing communication should be clear and consistent with these records, with website postings and accessible documents that help users understand and appropriately rely on system outputs.
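One pragmatic way to keep user-facing explanations consistent with internal records is to hold both in a single structured record. The sketch below uses hypothetical field names to capture the elements the notice highlights: the outputs and their main determining criteria, known limitations, and a training data overview.

```python
from dataclasses import dataclass

# Hypothetical documentation record for an explanation plan; the field names
# are assumptions chosen to mirror the elements described in the draft notice.

@dataclass
class ExplanationRecord:
    system_name: str
    output_description: str                   # what the system produces
    main_decision_criteria: list[str]         # principal factors behind outputs
    known_limitations: list[str]              # limits users should understand
    training_data_overview: dict[str, str]    # types, sources, preprocessing, synthetic data use


record = ExplanationRecord(
    system_name="Hypothetical credit-screening assistant",
    output_description="A recommended approve/decline decision with a confidence band",
    main_decision_criteria=["repayment history", "income stability", "existing debt load"],
    known_limitations=["not validated for thin-file applicants", "should not be the sole basis for denial"],
    training_data_overview={
        "types": "tabular credit records",
        "sources": "internal loan book, licensed bureau data",
        "preprocessing": "de-identification, outlier removal",
        "synthetic_data": "used to balance under-represented segments",
    },
)
```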
Third, operators must establish user protection measures that span both development and operations. During development, this includes lawful and secure data collection and management, resilient algorithm and system design (for example, against adversarial manipulation), and testing for exceptional and edge‑case scenarios. In operation, operators should monitor in real time for unsafe behavior, collect and act on user feedback, and maintain processes to protect user rights and provide remediation and compensation where harm occurs. These protections are designed to complement, not replace, privacy and cybersecurity obligations under other statutes.
Operators are also required to ensure robust human management and supervision. This is more than general oversight: it entails defining when and how humans can intervene; deploying interruption and rollback mechanisms; setting inspection and maintenance schedules; and training personnel so they understand model scope, limitations, and escalation pathways. The Enforcement Decree further requires naming a responsible manager and providing contact information so that there is a clear line of accountability.
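As an illustration of what a defined intervention point might look like in practice, the sketch below wraps a model decision in a hypothetical human-approval gate with an emergency stop. The thresholds, function names, and rollback behavior are assumptions made for illustration, not mechanisms mandated by the Decree.

```python
# Hypothetical human-in-the-loop gate; names, thresholds, and the fallback
# path are illustrative assumptions, not requirements from the Decree itself.

class EmergencyStop(Exception):
    """Raised when the responsible manager halts the system."""


def require_human_review(decision: dict, confidence_threshold: float = 0.9) -> bool:
    """Escalate low-confidence or high-stakes outputs to a human reviewer."""
    return decision["confidence"] < confidence_threshold or decision.get("high_stakes", False)


def decide(decision: dict, halted: bool, human_approve) -> dict:
    """Release the decision only if the system is running and, where needed, a human approves."""
    if halted:
        # Interruption mechanism: nothing is released while the stop is active.
        raise EmergencyStop("System halted by the responsible manager")
    if require_human_review(decision) and not human_approve(decision):
        # Fallback path: record the override and defer to manual handling.
        return {"status": "deferred_to_human", "original": decision}
    return {"status": "released", "output": decision}


# Example: a low-confidence, high-stakes output is deferred rather than released automatically.
result = decide(
    {"confidence": 0.62, "high_stakes": True, "recommendation": "decline"},
    halted=False,
    human_approve=lambda d: False,
)
print(result["status"])  # deferred_to_human
```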
Finally, the framework emphasizes documentation and transparency. Operators must prepare, maintain, and, where specified, publish records demonstrating compliance. As a default, documents evidencing risk management, explanation planning, user protection, and human‑oversight controls should be retained for five years. Public disclosures on the operator’s website must cover the main contents of the risk management plan, the explanation plan, user‑protection measures, and the identity and contact details of the responsible manager, while allowing redaction of bona fide trade secrets. The Decree recognizes equivalency where substantially similar measures are implemented under other laws, and clarifies allocation of responsibilities between developers and deployers: if a developer has already fulfilled specified measures and the user does not substantially change the system’s functions, the user may be deemed to have satisfied those corresponding obligations.
Together, these obligations operationalize governance, transparency, and accountability for high‑impact AI by requiring written, auditable programs; meaningful user‑facing explanations; concrete protections throughout the lifecycle; identified human control points; and durable records and disclosures aligned with the Act’s goals.
Incident management is treated as a full lifecycle discipline. Operators should maintain real-time monitoring, define severity levels, stop or roll back problematic releases as needed, notify users appropriately, and file initial and follow-up reports with the MSIT on the prescribed timelines, followed by post‑mortems and updates to risk registers and safety cases. The drafts expressly map obligations to familiar frameworks (the NIST AI RMF, the EU AI Act, and domestic TTA standards) and clarify that these instruments complement existing privacy and security statutes while extending governance to non-technical risks such as bias, misuse, autonomy, and transparency.
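Because the reporting clock runs from different trigger events, a simple deadline calculator can help operations teams keep track. The sketch below computes indicative due dates for the timelines summarized above; the day- and month-counting conventions are assumptions, so actual deadlines should be taken from the final texts.

```python
from datetime import date, timedelta

# Indicative deadline calculator. The day/month counting conventions below are
# assumptions for illustration; the final notice may count periods differently.

def serious_incident_deadline(incident_date: date) -> date:
    """Report within seven days of a serious incident."""
    return incident_date + timedelta(days=7)


def material_change_deadline(change_date: date) -> date:
    """Report within one month (approximated here as 30 days) of a material change or newly identified risk."""
    return change_date + timedelta(days=30)


def initial_submission_deadline(confirmation_date: date) -> date:
    """Submit initial safety results within three months (approximated as 90 days) of confirmation."""
    return confirmation_date + timedelta(days=90)


def next_annual_submission(last_submission: date) -> date:
    """Annual submissions follow the initial filing."""
    return last_submission + timedelta(days=365)


print(serious_incident_deadline(date(2026, 2, 1)))     # 2026-02-08
print(initial_submission_deadline(date(2026, 1, 22)))  # 2026-04-22
```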
For practical implementation, entities should first determine whether a system is in scope under the compute and risk criteria and/or qualifies as “high-impact” under the sectoral criteria. If so, they should build documented risk programs, explanation plans, and user-protection and human‑oversight protocols, and prepare to publish the specified governance information. Initial safety‑results submissions should be filed within the prescribed deadlines, followed by event‑driven updates and annual reports, with records retained and cooperation provided for verification. For high‑impact determinations, operators should use the guideline’s examples and, if ambiguity persists, request ministerial confirmation using the provided template.
Overall, the September 2025 drafts create a layered, context‑sensitive regulatory framework that couples statutory obligations with detailed operational guidance, aligning Korean AI governance with leading international approaches while reflecting domestic policy objectives.