North Korea’s Risky Bet on Military AI

Integrating AI into the North Korean military comes with risks, from technological vulnerabilities to the potential for an inadvertent nuclear war.

The Diplomat

North Korea for the first time emphasized the integration of artificial intelligence (AI) into the Korean People’s Army (KPA) at the recent Workers’ Party of Korea Congress. The announcement made official the country’s plan to have each service in the KPA – including the Army, Navy, Air Force, Army Special Operations Forces, and Strategic Force – modernize its forces and battle plans using AI.

North Korea has long perceived AI to be a force multiplier. AI can assess a great amount of intelligence data to predict results, and it can improve those predictions so long as it is trained on high-quality data. North Korea’s notorious cyber forces can train AI for military purposes. However, integrating AI into the KPA will prove to be a risky bet, not only because of technological barriers to general adoption across all branches of service but also because of the risk of an inadvertent nuclear war.

Developing AI requires a large number of graphics processing units (GPUs), data centers for intensive training, the electricity to power those centers, and the human resources to evaluate the inputs and outputs. While North Korea does not have a human resources problem, it does have difficulty procuring hardware and electricity. Export controls on advanced GPUs prevent Pyongyang from procuring enough of them for large-scale AI training despite some smuggling efforts. The country’s electricity output in 2024 was only 4.2 percent of South Korea’s, which is not enough to power the data centers.

Even the quality of the training data is doubtful. North Korea is likely to use battlefield data from Russia’s war against Ukraine to train its AI for target identification, obstacle detection, and facial recognition. The goal is to accelerate the kill chain and the accuracy of AI-enabled unmanned systems. However, the Russia-Ukraine War is largely a land war, which limits the usefulness of the data to the North Korean army. The air force, the navy, and nuclear force will have to train the AI on simulations. While simulations allow for dynamic training, they are not battle tested and can malfunction when encountering even small data deviations. 

On paper, AI can resolve the problem of human bias in decision-making, but humans determine the kind of data and method that AI is trained on, as well as what data outputs are appropriate. Using AI data from the Russia-Ukraine War means that the KPA’s experience will be heavily skewed toward Ukraine-like battle scenarios that feature trench warfare and low-level standoffs due to Ukraine’s huge strategic depth. This is different from the total-war scenario that many wargames for Korea have shown, due to Seoul being only 50 kilometers away from the inter-Korean border. 

The amount of training data available also depends on how much Russia is willing to share. North Korea’s limited number of intelligence, surveillance, and reconnaissance assets will constrain the quality and quantity of training data, which further exacerbates the selection bias.

Still, a flawed integration of AI into conventional doctrine will not be as risky as its integration with North Korea’s nuclear doctrine, given the great level of destruction involved and the short amount of time decision-makers have between detecting a threat and launching a nuclear weapon.

North Korea currently adopts an asymmetric escalation strategy, which includes the threat of first use of nuclear weapons against both nuclear and conventional threats and a pre-delegation of launch authority to field commanders. Pyongyang has stressed AI-enabled unmanned systems in the force modernization program, and the adoption of AI augments this strategy of delegating control. If North Korea can train its AI-enabled nuclear weapons to identify targets and retaliate without human intervention when they lose contact with the political leadership, Pyongyang can arguably ensure mutual vulnerability in the event of a decapitation attack, which is highly likely in a Korean Peninsula conflict, based on the U.S. operations in Venezuela and Iran. An accelerated kill chain will lessen the amount of time its adversaries have to launch preemptive counterforce strikes against North Korean missiles.

Such adoption looks good in theory, but it will nudge North Korea closer to all-out inadvertent nuclear war because of insufficient and untested AI training data. AI could mistake a temporary loss of communication for a decapitation strike, or its sensors could struggle to differentiate between incoming enemy missiles and harmless aerial objects. North Korea’s ambition to add submarines to the nuclear triad will complicate the scenarios that AI has to be trained for. 

In the unlikely event that North Korea perfects its AI for military use, protecting the critical infrastructure will become another headache. Limiting human intervention reduces one failure point in the kill chain, but North Korea will also create new ones. Its enemies will now target the country’s data centers before it can even launch those weapons. Due to the limited amount of generated electricity and the uneven infrastructure quality between the city and countryside, North Korea cannot disperse its data centers to ensure their survival. Low-level attacks on North Korea’s critical infrastructure could spiral into a nuclear war if North Korea perceives that those attacks can degrade its AI-enabled missiles. A shortened reaction time built into the AI-enabled doctrine will not buy enough time for cool heads to prevail. There will also be tremendous pressure on North Korea’s adversaries to quickly find and decapitate its nuclear missiles, undermining crisis stability.

Finding a balance between surrender and all-out nuclear war will not be easy, and the adoption of AI in military doctrine is irreversible. North Korea can improve the quality of its nuclear arsenal if it uses AI for target identification and early warning to supplement the decision-making loop. This will still ensure mutual vulnerability thanks to the improved accuracy and survivability of its missiles. Even as North Korea adopts delegative control, humans must always be involved in the decision to launch nuclear weapons. An AI-enabled nuclear doctrine without human oversight will still carry human biases.

For now, limiting AI to the tactical level of war should be the safest bet as North Korea slowly adapts its experience with AI-enabled drones from the Russia-Ukraine War to the Korean battlefield and tests how much its existing critical infrastructure can train and sustain AI. Only when North Korea can sufficiently train its AI and defend the critical infrastructure from cyber and physical attacks should the country revise its doctrines across the board. 

Looking ahead, any future arms control agreement with the United States and South Korea should specify how North Korea can ensure AI will not have the sole authority to launch nuclear weapons.
