MSIT Participates in the International Network of AI Safety Institutes

Department in Charge
Author
Contact

- Official launch of the "International Network of AI Safety Institutes" in San Francisco, USA (November 20, 2024), with participation from 10 leading nations and regions (including Korea, the U.S., Japan, the U.K., and the EU), reinforcing global cooperation for AI safety.


The Ministry of Science and ICT (MSIT), led by Minister Yoo Sang-im, attended the "International Network of AI Safety Institutes" event held from November 20 to 21, 2024, at the Golden Gate Club in San Francisco, USA.


This event is a follow-up to the Seoul Declaration, a core agenda of the AI Seoul Summit (May 2024), which emphasized the establishment of AI safety institutes and international cooperation for safe AI development.* During the Summit, President Yoon Suk Yeol reaffirmed Korea's commitment to international AI safety efforts through the establishment of a domestic AI safety institute. In response, MSIT has taken swift action and plans to officially launch Korea's AI Safety Institute later this month. Korea's delegation to the event included Kim Myung-joo, recently appointed Director of the AI Safety Institute (November 12, 2024), to strengthen ties with other international AI safety institutes.

*The Seoul Declaration stated: "We support existing and ongoing efforts of the participants to this Declaration to create or expand AI safety institutes, research programmes and/or other relevant institutions including supervisory bodies, and we strive to promote cooperation on safety research and to share best practices by nurturing networks between these organizations."


The event brought together AI safety institutes (or equivalent organizations), technical experts in AI safety, and representatives from 10 countries, including the U.S., Korea, the U.K., Japan, Singapore, Canada, France, the European Union, Kenya, and Australia. It marked the official launch of the International Network of AI Safety Institutes. Participants discussed the network's mission, operational framework, and technical approaches to AI safety research, testing, and guidelines.


The network aims to consolidate global expertise to foster a shared scientific understanding of AI safety risks and their mitigation. Through international research efforts, the network seeks to support the adoption of interoperable principles and best practices. Its core focus areas for global cooperation are: ① Research, ② Testing, ③ Guidance, ④ Inclusion.


The two-day event (November 20-21) included in-depth discussions across three core areas:

1. Mitigating Risks of Synthetic Content: Sharing best practices and emerging technologies for ensuring transparency in AI-generated content (e.g., watermarking and detection tools).

2. Testing AI Models: Discussing methodologies and tools for testing AI models and improving interoperability of testing processes and outcomes.

3. Evaluating Risks of Advanced AI Systems: Exploring methods for the quantitative identification and assessment of risks associated with advanced AI systems through the interpretation of experimental results.


Korea's delegation presented key domestic achievements in AI safety research, including:

"Research on Detection and Suppression of Image Manipulation and Deepfake Generation" by Professor Simon Sung-il Woo of Sungkyunkwan University.

"AI Risk Assessment and Verification Framework" by Kwak Joon-ho, Tech & Policy Leader of the AI Policy & Research Team, Center for Trustworthy AI (CTA), Telecommunications Technology Association (TTA).

These presentations highlighted Korea’s leadership in AI safety research and helped define international research priorities while fostering collaboration with other nations' AI safety institutes.


Discussions were held on establishing a governance structure for the sustainable operation of the network. Korea emphasized its commitment by highlighting its successful hosting of the AI Seoul Summit, the upcoming launch of its AI Safety Institute (November 2024), and its proactive support for the network’s governance. Korea expressed its ambition to play a central role in the network's operations.


Song Sang-hoon, Deputy Minister of the Office of ICT Policy at MSIT, remarked, "International cooperation on AI safety has been a core agenda of Korea's leadership, as highlighted in the Seoul Declaration (May 2024). The values and principles of the Seoul Declaration have been swiftly realized through the launch of the International Network of AI Safety Institutes, with active participation and support from major countries. This is deeply meaningful."


He further stated, "Our government will officially open the AI Safety Institute this month, initiating dedicated research on AI safety. Korea will also actively engage in the International Network of AI Safety Institutes, taking a leading role in advancing global AI safety technologies and norms."



For further information, please contact the Public Relations Division (Phone: +82-44-202-4034, E-mail: msitmedia@korea.kr) of the Ministry of Science and ICT. 


Please refer to the attached PDF.

KOGL (Korea Open Government License), Type 1: Source Indication. The works of the Ministry of Science and ICT can be used under the terms of "KOGL Type 1".