- Three out of five elite teams (LG AI Research Institute, SK Telecom, and Upstage) advanced to the second round
- One additional team will be selected to secure globally competitive AI models and foster a dynamic AI ecosystem
【Relevant National Task】 21. Becoming the world’s most AI-literate country
The Ministry of Science and ICT (MSIT, Deputy Prime Minister and Minister: Bae Kyung-hoon), along with the National IT Industry Promotion Agency (NIPA, President: Park Yoon-gyu), and the Telecommunications Technology Association (TTA, President: Son Seung-hyun), announced the Phase 1 evaluation results of the “Sovereign AI Foundation Model” project.
MSIT has been advancing the “Sovereign AI Foundation Model” project as part of the new administration’s strategic objective to position Korea among the world’s top three AI powerhouses (AI G3). The initiative is also intended to address issues of technological, cultural, economic, and security dependence stemming from heavy reliance on global AI models.
Five elite teams* selected for the project have driven continuous technological innovation through intense competition since August 2025. Their AI foundation models, unveiled at the end of 2025, were all recognized as “Notable AI Models” by Epoch AI, a US-based nonprofit AI research organization.
* NAVER Cloud, Upstage, SK Telecom, NC AI, and LG AI Research Institute
| Team | Achievements |
| --- | --- |
| NAVER Cloud | An omni-modal architecture capable of understanding text, images, and audio together, aligned with the company’s goal of operating a nationwide AI service platform. |
| Upstage | A foundation model with 100B (100 billion) parameters, achieving performance comparable to large language models while maintaining a relatively small parameter count. |
| SK Telecom | A 519B-parameter AI model, with the goal of building one of the world’s largest-scale AI models. |
| NC AI | A vertical AI model designed for application across multiple industries, including gaming, manufacturing, and defense. |
| LG AI Research Institute | A frontier-level 236B AI model that can be immediately deployed across a wide range of industrial settings. |
This project has drawn attention and interest from across the AI industry since its proposal stage, accelerating comprehensive cooperation among capable domestic AI companies, academia, and research institutions and injecting momentum into the AI ecosystem.
※ In addition to the models developed by some of the elite teams (LG AI Research Institute: K-EXAONE, Naver Cloud: HyperCLOVA X SEED Think (32B), Upstage: Solar Pro2), Motif Technologies’ Motif-2-12.7B and KT’s Mi:dm K2.5 Pro have also been listed on the LLM Leaderboard by Artificial Analysis, a global AI performance evaluation agency.
Leveraging large-scale GPU clusters, outstanding talent was able to freely develop hyperscale, innovative AI models that had previously been out of reach. The experience and know-how accumulated during this process are expected to become core assets for the future growth of Korea's AI industry.
MSIT, NIPA, and the five elite teams had in-depth discussions to devise the Phase 1 evaluation framework and criteria.
The Phase 1 evaluation consists of three pillars: benchmark evaluation, expert evaluation, and user evaluation. During this phase, comprehensive assessments covered not only the AI Frontier Index (which measures AI model performance), along with actual industrial applicability, model size, and cost-effectiveness, but also the AI Diffusion Index, which covers usability, ripple effects across internal and external AI ecosystems, and future plans.
1. Benchmark Evaluation (40 points)
The benchmark evaluation comprised three categories: (1) the National Information Society Agency (NIA) benchmark evaluation (10 points), (2) the global common benchmark evaluation (20 points), and (3) the global individual benchmark evaluation (10 points).
The NIA benchmark evaluation assessed performance in mathematics, knowledge, and long-text comprehension. In addition, reliability and safety were evaluated in collaboration with the AI Safety Institute (AISI).
For the global common benchmark evaluation, 13 internationally recognized benchmarks were selected, covering areas such as agent capabilities, mathematics, knowledge and reasoning, and instruction following.
The global individual benchmark evaluation applied five benchmarks to compare each elite team’s model with its corresponding global SOTA*-level model.
* SOTA (State-of-the-Art) models represent AI models that have demonstrated top-tier performance on major global leaderboards and benchmarks.
In the NIA benchmark evaluation, SK Telecom and LG AI Research Institute received the highest score, each earning 9.2 out of 10 points. LG AI Research Institute also led the global common benchmarks with 14.4 out of 20 points. In the global individual benchmark evaluation, Upstage and LG AI Research Institute both achieved a perfect score of 10 points.
Overall, LG AI Research Institute recorded the highest benchmark evaluation score with 33.6 out of 40 points (the average score of the elite teams: 30.4).
2. Expert Evaluation (35 points)
Expert evaluation was conducted by a committee of 10 external AI specialists from industry, academia, and research institutions. Over an extended period, the committee carried out a comprehensive, in-depth review of the materials submitted by each team against three criteria: (1) development strategies and technologies, (2) development outcomes and future plans, and (3) ripple effects and contribution plans.
By analyzing technical reports on the AI models unveiled by the five elite teams, as well as their AI model training logs, the committee thoroughly assessed their technical development processes and capabilities including the sovereignty of their approaches.
In this evaluation category, LG AI Research Institute received the highest score of 31.6 out of 35 points (the average score of the elite teams: 28.56).
3. User Evaluation (25 points)
For the user evaluation, a panel of 49 expert users including CEOs of AI startups conducted an in-depth analysis of the websites incorporating the AI models developed by the elite teams to verify real-world applicability and the cost-efficiency of reasoning operations.
LG AI Research Institute excelled in this category, achieving a perfect score of 25 (the average score of the elite teams: 20.76).
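The three pillars sum to a 100-point total. As an illustrative sketch only (the scores and maximum points are those reported above, but the simple summation is an assumption about how the pillars combine, not the ministry's official computation):

```python
# Illustrative tally of the Phase 1 scores reported in this release.
# Pillar maximums: benchmark 40, expert 35, user 25 (100 points total).
# Summing the pillars is an assumed aggregation, shown for clarity only.

max_points = {"benchmark": 40, "expert": 35, "user": 25}

lg_scores = {"benchmark": 33.6, "expert": 31.6, "user": 25.0}
avg_scores = {"benchmark": 30.4, "expert": 28.56, "user": 20.76}

lg_total = sum(lg_scores.values())    # 90.2 out of 100
avg_total = sum(avg_scores.values())  # 79.72 out of 100

print(f"LG AI Research Institute: {lg_total:.2f} / {sum(max_points.values())}")
print(f"Elite-team average:       {avg_total:.2f} / {sum(max_points.values())}")
```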
After the benchmark, expert, and user evaluations, four teams—LG AI Research Institute, Naver Cloud, SK Telecom, and Upstage—were selected to advance to the next stage.
MSIT has planned and advanced this project to reduce dependence on global AI models and to secure original, sovereign AI technologies. To this end, MSIT defined a “sovereign AI foundation model” not as a “model derived through fine-tuning overseas AI models,” but as a “domestically designed and pre-trained model developed from scratch (free from any licensing constraints that may arise from the use of third-party models)”.
※ In the project announcement released in July 2025, MSIT emphasized that AI models fine-tuned from overseas AI models are not considered “sovereign AI foundation models,” as they conflict with the goal of the project.
From technological, policy, and ethical perspectives, a “sovereign AI foundation model” is defined as follows.
1. Technical Perspective
From a technical standpoint, the project pursues “sovereign AI models” that undergo end-to-end training across the entire AI development lifecycle, including the design of original AI model architecture, the acquisition and processing of large-scale data, and model training using independently developed learning algorithms.
While building on open-source models is the prevailing trend in the global AI ecosystem, domestic and international AI companies and academic institutions alike regard resetting model weights, retraining models, and independently forming and optimizing parameters (weights) as baseline requirements for establishing sovereign AI models.
Accordingly, the elite teams may strategically leverage proven open-source technologies to draw on verified capabilities, maintain alignment with the global AI ecosystem, and facilitate entry into global markets. However, resetting weights and conducting independent training and development are regarded as the minimum requirements for securing the sovereign nature of their models.
2. Policy Perspective
The use of overseas AI models in areas such as defense, diplomacy, security, and national infrastructure (power grids, transportation systems, and communications networks) could pose threats to national security or increase the risk of leaks of confidential national information. To avoid these risks, the project seeks to secure the capability to develop and advance AI models independently at any time, thereby ensuring technological sovereignty, while also maintaining full control over the operation and use of AI models under all circumstances.
In other words, sovereign AI models must either be developed entirely with domestic technologies or be capable of being developed and advanced using open-source resources free from license restrictions. Specifically, models that leverage open source must remain free from external control or interference.
3. Ethical Perspective
In today’s AI ecosystem, the use of open-source models has become common practice. Against this backdrop, the sound development of AI technologies requires compliance with licensing requirements, including explicit disclosure of the references used in AI model development, as well as enhancing trust in the AI ecosystem, strengthening public verification, and improving transparency.
After a comprehensive assessment across these three perspectives, Naver Cloud was judged not to meet the requirements for a sovereign AI; the evaluation committee of AI specialists reached a similar view. Accordingly, it was ultimately concluded that Naver Cloud did not satisfy the criteria for a “sovereign AI foundation model”.
Out of the five elite teams that underwent the Phase 1 evaluation, LG AI Research Institute, Upstage, and SK Telecom advanced to the second round.
Since the purpose of the “Sovereign AI Foundation Model” project is to provide opportunities for all participants to advance their technologies to a world-class level, MSIT plans to select one additional elite team by opening applications to (1) consortia that applied for the initial project, (2) the consortia that were eliminated after the Phase 1 evaluation (two consortia led by Naver Cloud and NC AI), and (3) other qualified and capable companies.
※ By doing so, four top-tier teams will compete with one another in 2026.
The newly selected team will be given opportunities to develop its own sovereign AI foundation model, with government support in the form of GPUs and data as well as designation as a “K-AI company”. MSIT plans to expedite the administrative procedures to launch the additional call for participants.
With the selection of one more team, a total of four elite teams will continue competing in technological innovation to develop globally competitive AI models throughout the first half of 2026.
“This project represents a historic challenge for South Korea to confront global AI competition with our own sovereign technologies,” said MSIT. “The government will concentrate all available national capabilities and resources on securing sovereign AI foundation models to establish a sustainable and sound AI ecosystem and to position South Korea at the forefront of global AI technological competition.”
For further information, please contact the Public Relations Division (Phone: +82-44-202-4034, E-mail: msitmedia@korea.kr) of the Ministry of Science and ICT.
Please refer to the attached PDF.