- Ministry of Science and ICT (MSIT) and Telecommunications Technology Association (TTA) Announce Results of Crosswalk Analysis Between Korea's 'Trustworthy AI Development Guide' and the U.S. NIST 'AI Risk Management Framework' (AI RMF) (Dec. 2024)
The Ministry of Science and ICT and the Telecommunications Technology Association announced that they have completed a crosswalk analysis* between Korea's 'Trustworthy AI Development Guide' and the 'AI Risk Management Framework' (AI RMF) of the National Institute of Standards and Technology (NIST) under the U.S. Department of Commerce.
* TTA, which developed the 'Trustworthy AI Development Guide,' and the U.S. National Institute of Standards and Technology (NIST), which developed the 'AI Risk Management Framework' (AI RMF), conducted the crosswalk analysis from February to December 2024.
The detailed results of the crosswalk analysis can be found on the following websites:
TTA: www.tta.or.kr/tta
NIST: airc.nist.gov/AI_RMF_Knowledge_Base/Crosswalks
Korea AI Safety Institute: www.aisi.re.kr/article
The MSIT and the TTA have been developing and disseminating the 'Trustworthy AI Development Guide' since 2021. The guide presents 15 technical requirements (with 67 detailed verification criteria) for securing AI trustworthiness and has supported domestic companies in embedding trustworthiness throughout the development process of AI systems.
Additionally, based on the 'Trustworthy AI Development Guide,' efforts have been made to establish a sustainable AI innovation ecosystem, including the enactment of the 'Requirements for Enhancing the Trustworthiness of AI Systems' as a national standard (December 2023) and the operation of a 'Private Sector's Voluntary AI Trustworthiness Certification' scheme (launched November 2023; seven certifications issued to date). These efforts contribute to ongoing policy initiatives aimed at securing AI trustworthiness and safety.
< Overview of Korea's "Trustworthy AI Development Guide" (February 2022) >
∎ The technical requirements for ensuring AI trustworthiness and mitigating risks are presented according to the five stages of the AI lifecycle*, supporting the private sector's voluntary achievement of AI trustworthiness.
* (1) Planning and Design, (2) Data Collection and Processing, (3) AI Model Development, (4) System Implementation, (5) Operation and Evaluation
The U.S. NIST, a government agency leading AI trustworthiness research and policy development, released the voluntary 'Artificial Intelligence Risk Management Framework (AI RMF)' in January 2023. The framework supports individuals and organizations in understanding, managing, and mitigating risks related to the design, development, deployment, and use of AI systems. NIST's AI RMF has played a key role in global public- and private-sector discussions on AI trustworthiness and is being adopted by numerous organizations worldwide as a foundational framework for ensuring AI trustworthiness.
* (Cases of crosswalk analyses with the U.S. 'AI Risk Management Framework (AI RMF)'): International standards (ISO/IEC 42001: AI Management Systems), Singapore AI Verify (Oct. 2023), Japan AI Business Operator Guidelines (Apr. 2024/Sept. 2024, limited to terms and concepts), etc.
< Overview of the U.S. ‘Artificial Intelligence Risk Management Framework (AI RMF)’ (Jan. 2023) >
∎ The guidelines, developed by the National Institute of Standards and Technology (NIST), help organizations designing, developing, deploying, or using AI systems to manage risks associated with these systems. The goal is to promote the development and use of trustworthy and responsible AI systems. These guidelines were developed based on the National AI Initiative Act of 2020 (AI Initiative Act), which provides the legal framework for AI research and development in the U.S.
This crosswalk analysis was conducted to compare the characteristics of NIST's AI RMF, one of the most influential frameworks in the field of AI trustworthiness, with Korea's 'Trustworthy AI Development Guide,' and to verify the consistency of specific items to ensure mutual compatibility. The results confirmed that a significant portion of the detailed items were aligned* and that both frameworks can be applied in similar ways to enhance AI trustworthiness and mitigate related risks.
* While the level of detail varies by item, the majority of items are compatible: 63 of the 67 detailed verification criteria in Korea's 'Trustworthy AI Development Guide' map to NIST's AI RMF.
Additionally, the crosswalk analysis with NIST's AI RMF confirmed, on a global scale, the maturity and completeness of Korea's AI trustworthiness verification technology and system. Based on these results, the MSIT and the TTA plan to strengthen technical cooperation with NIST and to increase efforts to align with international technical standards and norms in the field of AI trustworthiness. Moreover, any differences identified between the AI RMF and Korea's 'Trustworthy AI Development Guide' will be addressed when the guide is updated, further improving the international consistency of Korea's AI trustworthiness technology and verification system.
Son Seung-hyun, President of the TTA, stated, "This crosswalk analysis has served as a catalyst for elevating Korea's AI trustworthiness assurance system to a global level. Moving forward, we will continue to expand international cooperation to strengthen the credibility of our AI trustworthiness technology and verification system, and solidify the support structure for domestic industries, thereby fulfilling our role as a specialized agency in the field of AI trustworthiness."
Song Sang-hoon, Deputy Minister for the Office of ICT Policy at the MSIT, emphasized, "The AI Basic Act, recently passed by the Legislation and Judiciary Committee, has laid a broad foundation for government support to ensure AI trustworthiness and safety." He further stated, "We will strengthen policy support to secure the trustworthiness and safety of the domestic AI industry, and actively promote international standardization in the field of AI trustworthiness and safety in cooperation with specialized institutions such as the AI Safety Institute and the Telecommunications Technology Association, to help domestic companies expand globally."
For further information, please contact the Public Relations Division (Phone: +82-44-202-4034, E-mail: msitmedia@korea.kr) of the Ministry of Science and ICT.
Please refer to the attached PDF.