On December 26, 2024, the plenary session of the National Assembly voted to pass the "Framework Act on the Development of Artificial Intelligence and the Establishment of Foundation for Reliability" (the "AI Framework Act"), the second law of its kind in the world following the EU AI Act. According to the Ministry of Science and ICT ("MSIT"), the enactment of the AI Framework Act comes at a time when many other major jurisdictions are likewise investing substantial effort in developing artificial intelligence and establishing AI regulations that favor their national interests. The AI Framework Act includes provisions on establishing a system for promoting trustworthy AI development, supporting and fostering the AI industry, and creating a foundation to ensure the safety and reliability of high-impact AI and generative AI.
What is an "artificial intelligence service provider"?
The AI Framework Act defines an "artificial intelligence service provider" ("AI service provider") as a corporation, organization, individual, or state agency, etc. engaged in any business related to the AI industry (Article 2, Subparagraph 7). "High-impact AI" refers to an AI system that may significantly affect, or pose a risk to, the safety or fundamental rights of individuals and that is used in areas such as energy supply, drinking water production, the development and use of digital medical devices, the safety management and use of nuclear materials and nuclear facilities, and the analysis and use of biometric information for criminal investigations or arrests (Subparagraph 4 of the same Article).
Particularly noteworthy for AI service providers are (i) the obligation to ensure transparency by notifying users when AI is being used under Article 31, (ii) the obligation to ensure safety under Article 32, and (iii) the obligations of business operators that provide or use high-impact AI under Article 34. These provisions impose obligations on AI service providers regarding notification, labeling, safety monitoring, risk management, the management and supervision of high-impact AI, and the designation of a local agent, among others. AI service providers are therefore strongly encouraged to thoroughly review these obligations and implement measures to ensure compliance. They are discussed in further detail below.
1. Obligation to ensure transparency
If an AI service provider intends to provide products or services that use high-impact AI or generative AI, the provider must notify users in advance that the products or services operate using AI (Article 31(1)).
Furthermore, if an AI service provider intends to provide generative AI services, or products or services using generative AI, its products or services must be labeled as having been generated by generative AI (Article 31(2)).
Where a work consisting of virtual sounds, images, videos, etc. that could be mistaken for real (a so-called "deepfake") is to be provided using an AI system, the service provider must clearly indicate that the work was generated using AI (Article 31(3)). However, if the work qualifies as an artistic or creative expression, or forms part of one, the manner of labeling must not impede the exhibition or enjoyment of the work.
2. Obligation to ensure safety
Where the cumulative amount of data used for AI training exceeds the threshold set by Presidential Decree, the AI service provider must: (i) identify, assess, and mitigate risks throughout the AI life cycle; (ii) establish a risk management system to monitor and respond to AI-related safety incidents (Article 32(1)); and (iii) submit the results of the above to the Minister of the MSIT (Article 32(2)).
3. Responsibilities of business operators related to high-impact AI
Where an AI service provider intends to provide high-impact AI or any products or services that use a high-impact AI technology, the AI service provider is subject to certain obligations including the following to ensure the safety and reliability of its systems as prescribed by the Presidential Decree (Article 34(1)).
- Establishment and operation of a risk management plan (Subparagraph 1)
- Establishment and implementation of measures to explain, to the extent technically feasible, the final results derived by the AI, the key criteria used to derive those results, and an overview of the training data used in the development and utilization of the AI (Subparagraph 2)
- Establishment and operation of measures to protect users (Subparagraph 3)
- Human management and supervision of high-impact AI (Subparagraph 4)
- Preparation and storage of documents detailing the measures taken to ensure safety and reliability (Subparagraph 5)
- Other matters deliberated and resolved by the AI Committee to ensure the safety and reliability of high-impact AI systems (Subparagraph 6)
4. Fact-finding investigations / suspension and corrective orders / fines
The AI Framework Act also stipulates sanctions for violations of the law. If the Minister of the MSIT becomes aware of, or receives a report or complaint about, a violation of the law (including the labeling requirements, the safety requirements, and the obligations of business operators relating to high-impact AI), the Minister may have public officials conduct an on-site inspection of the business operator's premises and examine its books, documents, and other materials or articles. If the investigation confirms a violation, the Minister of the MSIT may issue a suspension or correction order (Article 40).
Non-compliance with such an order, violation of the advance notification requirement (Article 31(1)), etc. may result in an administrative fine of up to KRW 30 million (Article 43).
5. Designation of a local agent
An AI service provider without a domicile or place of business in Korea whose number of users, sales, etc. meet certain criteria (to be prescribed by Presidential Decree) must designate a person with an address or place of business in Korea as its domestic agent. On behalf of the service provider, the agent is responsible for complying with the provider's obligations, including filing an application to confirm whether the service provided qualifies as high-impact AI and supporting the implementation of measures to ensure the safety and reliability of high-impact AI. The designation of the agent must be made in writing and reported to the Minister of the MSIT (Article 36).
6. Distinction from the labeling obligations of other proposed laws
The partial amendment to the Content Industry Promotion Act proposed in May 2024 (Bill No. 2200048, proposed by National Assembly Member Yoo-Jeong Kang) and the partial amendment to the Copyright Act proposed in November 2024 (Bill No. 2205507, proposed by National Assembly Member Yong-Ki Jung) also stipulate an obligation to label content or copyrighted works produced or created using artificial intelligence.
Specifically, the proposed partial amendment to the Content Industry Promotion Act requires that content produced using AI technology prescribed by Presidential Decree be labeled as such (Article 26(3) of the proposed amendment), and the proposed partial amendment to the Copyright Act requires that a work created using generative AI technology be labeled as such (Article 7-2(1) of the proposed amendment).
While the labeling obligations under these two proposed amendments are similar to that under the AI Framework Act, they differ in scope: the proposed amendment to the Content Industry Promotion Act applies specifically to "content," while the proposed amendment to the Copyright Act applies specifically to "copyrighted works." They also differ from the AI Framework Act in what triggers the obligation: the proposed amendment to the Content Industry Promotion Act requires labeling when "artificial intelligence technology prescribed by the Presidential Decree" is used, while the proposed amendment to the Copyright Act requires labeling when "generative artificial intelligence" is used.
Following deliberation by the Cabinet and promulgation, the AI Framework Act will enter into force in January 2026, after a one-year transition period. The government has announced that it plans to take follow-up measures in the first half of 2025, such as establishing subordinate legislation and guidelines, to ensure prompt implementation of the AI Framework Act.
We will closely monitor the implementation of the related subordinate laws and guidelines and keep our readers updated.