- Strengthening international networks to ensure AI safety and advancing the three pillars of global AI governance: safety, innovation, and inclusion
The Ministry of Science and ICT (MSIT, Minister Yoo Sang-im) hosted the launch ceremony for the “AI Safety Institute” on November 27 at the Global R&D Center in Pangyo.
During the AI Seoul Summit held in May, leaders from 10 nations reaffirmed that safety is a core element of responsible AI innovation. They emphasized the establishment of AI safety institutes and highlighted the importance of international cooperation for ensuring safe AI. President Yoon Suk Yeol echoed this sentiment, expressing his commitment to creating the AI Safety Institute as part of a global network aimed at enhancing AI safety. Following thorough preparations regarding its structure, budget, personnel, and functions, the MSIT officially launched the institute.
The AI Safety Institute serves as a dedicated organization designed to systematically and professionally address diverse AI risks, including those caused by technological limitations, human misuse, and loss of control over AI systems. Positioned as Korea’s hub for AI safety research, the institute will foster collaboration and information exchange among industry, academia, and research organizations. Additionally, it is a member of the "International Network of AI Safety Institutes" (launched on November 21, with participation from 10 countries) and will play an active role in global efforts for safe AI. The institute’s goals include developing competitive technologies and talent in the AI safety field while advancing policies grounded in scientific research and data.
The ceremony gathered key government officials, including:
Ryu Kwang-jun, Vice Minister for Science, Technology and Innovation
Yeom Jae-ho, Vice Chair of the National AI Committee
Lee Kyung-woo, Presidential Secretary for AI and Digital
More than 40 prominent figures from the AI industry, academia, and research sectors also attended. Notable participants included:
Oh Seung-pil, Chief Technology Officer (CTO) of KT
Oh Hye-yeon, Director of the KAIST AI Institute
Lee Eun-ju, Director of the Center for Trustworthy AI at Seoul National University
Bang Seung-chan, President of the Electronics and Telecommunications Research Institute (ETRI)
At the event, Professor Yoshua Bengio, a world-renowned AI scholar and global advisor to the National AI Committee, congratulated Korea on the institute’s launch during his keynote speech. He emphasized the institute’s pivotal role in (1) researching and advancing risk assessment methodologies through industry collaboration, (2) supporting the development of AI safety requirements, and (3) fostering international cooperation to harmonize global AI safety standards.
Elizabeth Kelly, Director of the U.S. AI Safety Institute, commended Korea’s leadership in AI safety, stating, “The global leadership and support Korea has demonstrated for advancing AI safety is deeply appreciated.” She encouraged collaborative efforts between the U.S. and Korean institutes to establish shared scientific standards aimed at mitigating risks, maximizing benefits, and fostering innovation. Similarly, Oliver Ilott, Director of the UK AI Safety Institute, and Akiko Murakami, Director of the Japan AI Safety Institute, emphasized the need for cross-border cooperation to ensure safe AI technologies.
Kim Myung-joo, the inaugural Director of the AI Safety Institute, presented the institute’s mission and operational roadmap. He stated, “The institute will focus on evaluating potential risks associated with AI utilization, developing and disseminating policies and technologies to prevent and minimize these risks, and strengthening collaboration both domestically and internationally.” Director Kim further stressed, “The institute is not a regulatory body but rather a collaborative organization dedicated to supporting Korean AI companies by reducing risk factors that hinder their global competitiveness.”
After the inauguration, a Memorandum of Understanding (MOU) was signed to form the "Korea AI Safety Consortium." This consortium, consisting of 24 leading industry, academic, and research organizations, will work alongside the AI Safety Institute on key initiatives, including:
Research, development, and validation of an AI safety framework (risk identification, evaluation, and mitigation),
Policy research to align with international AI safety norms, and
Technological collaboration on AI safety.
The consortium will refine its research themes and operational plans over time. Member organizations expressed a strong commitment to partnering with the AI Safety Institute.
Minister Yoo Sang-im emphasized, “AI safety is not only a prerequisite for sustainable AI development but also one of the most pressing challenges that all of us in the AI field must address collaboratively. In just one year since the AI safety summits in the UK (November 2023) and Seoul (May 2024), major nations—including the U.S., UK, Japan, Singapore, and Canada—have launched AI safety institutes, rapidly forming an unprecedented and systematic framework for global AI safety cooperation.”
He further stated, “Through the AI Safety Institute, we aim to unite research capabilities across industry, academia, and research sectors to swiftly build the technological and policy expertise needed for AI safety. This will lay a robust foundation for innovation within Korea’s AI industry. Moreover, we will actively support the institute in taking a leadership role within the international AI safety network, solidifying Korea’s position as the AI safety research hub of the Asia-Pacific region.”
For further information, please contact the Public Relations Division (Phone: +82-44-202-4034, E-mail: msitmedia@korea.kr) of the Ministry of Science and ICT.
Please refer to the attached PDF.