Deception operations are the ultimate mind games of war. Manipulating enemy commanders into expecting an attack in the wrong place, or tricking them into underestimating your strength, can be far more powerful than tanks or bombs.
But what if the enemy is enhanced by a thinking computer?
Successful operations must now fool not only human commanders, but the AI that advises them, according to two US Army officers. And Russia and China — with their rigid, centralized command and control — may be particularly vulnerable if their AI is deceived.
“Commanders can no longer rely on traditional methods of deception like hiding troop movements or equipment,” argue Mark Askew and Antonio Salinas in an essay for the Modern War Institute at West Point. “Instead, shaping perceptions in sensor-rich environments requires a shift in thinking — from concealing information to manipulating how the enemy, including AI systems and tools, interpret it.”
Historically, commanders went to great lengths to fool enemy generals through misdirection, decoy armies, and false war plans allowed to slip into enemy hands. Today, nations will have to focus on “feeding adversaries accurate if misleading data that can manipulate their interpretation of information and misdirect their activity,” the essay said.
The idea is to turn AI into an Achilles’ heel for enemy commanders and their staffs. This can be done by making “their AI systems ineffective and break[ing] their trust in those systems and tools,” the essay suggests. “Commanders can overwhelm AI systems with false signals and present them with unexpected or novel data; AI tools excel at pattern recognition, but struggle with understanding how new variables (outside of their training data) inform or change the context of a situation.”
For example, “slight changes in a drone’s appearance might cause AI to misidentify it,” Askew and Salinas told Business Insider. “People are not likely to be thrown off by small or subtle tweaks, but AI is.”
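For readers who want to see the mechanics, the snippet below is a minimal sketch of the fast gradient sign method, a well-documented adversarial machine-learning technique that produces exactly the kind of subtle tweak Askew and Salinas describe. The toy classifier and random “sensor frame” are stand-ins invented for illustration; the essay does not describe any specific attack or system.

```python
# Minimal FGSM sketch: a tiny pixel-level nudge, chosen via the loss
# gradient, that can flip a classifier's output while a human observer
# would be unlikely to notice the change. Model and image are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in image classifier: a 3x32x32 frame scored as "drone" vs. "bird".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
model.eval()

image = torch.rand(1, 3, 32, 32)   # placeholder sensor frame
image.requires_grad_(True)

logits = model(image)
clean_pred = logits.argmax(dim=1)  # whatever the model currently believes

# Gradient of the loss with respect to the input pixels.
loss = nn.functional.cross_entropy(logits, clean_pred)
loss.backward()

# Nudge every pixel a small step (epsilon) in the direction that most
# increases the loss, then clamp back to valid pixel values.
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:    ", clean_pred.item())
print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
```

Against real image classifiers, perturbations of roughly this size routinely change the predicted label while leaving the picture essentially unchanged to the human eye, which is the asymmetry the authors are pointing at.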
To determine enemy intentions or to target weapons, modern armies rely on vast amounts of data from sources ranging from drones and satellites to infantry patrols and intercepted radio signals. The information is so copious that human analysts are overwhelmed.
What makes AI so attractive is its speed at analyzing huge quantities of data. This has been a boon for companies such as Scale AI, which have won lucrative Pentagon contracts.
Yet the power of AI also magnifies the damage it can do. “AI can coordinate and implement flawed responses much faster than humans alone,” Askew and Salinas said.
Fooling AI can lead to “misallocation of enemy resources, delayed responses, or even friendly fire incidents if the AI misidentifies targets,” the authors told Business Insider. “By feeding false data, one can manipulate the enemy’s perception of the battlefield, creating opportunities for surprise.”
Russia and China are already devoting great effort to military AI. Russia is applying artificial intelligence to drones and cyberwarfare, while the Chinese military is using the DeepSeek system for planning and logistics.
But the rigidity of Russian and Chinese command structures makes any reliance on AI an opening. “In such systems, decisions often rely heavily on top-down information flow, and if the AI at the top is fed deceptive data, it can lead to widespread misjudgments,” the authors said. “Moreover, centralized structures might lack the flexibility to quickly adapt or cross-verify information, making them more vulnerable to deception if they cannot protect their systems.”
In practice, this means feeding false images to an enemy’s sensors, such as video cameras, to push the AI toward the wrong conclusion, further blinding the human commander who relies on it.
Naturally, China and Russia — and other adversaries such as Iran and North Korea — will seek to exploit weaknesses in American AI. Thus, the US military must take precautions, such as protecting the data that feeds its AI.
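What such precautions could look like in code is necessarily speculative, but the cross-verification logic the authors invoke is simple enough to sketch. In this hypothetical Python fragment, the sensor names, labels, and agreement threshold are all invented for illustration:

```python
# Hypothetical sketch of cross-verifying independent sensors before acting
# on an AI classification. Sensor names and the threshold are made up;
# the essay does not describe a specific mechanism.
from collections import Counter

def fused_label(reports: dict[str, str], min_agreement: float = 0.75) -> str | None:
    """Return a target label only if enough independent sensors agree.

    reports maps a sensor name (e.g. "satellite", "drone", "sigint")
    to the label its classifier produced. If agreement falls below the
    threshold, return None and defer the call to a human analyst.
    """
    if not reports:
        return None
    label, votes = Counter(reports.values()).most_common(1)[0]
    return label if votes / len(reports) >= min_agreement else None

# One spoofed feed (the drone camera fooled into seeing a decoy) drags
# agreement below the threshold, so the decision goes to a human.
print(fused_label({"satellite": "tank", "drone": "decoy", "sigint": "tank"}))
```

The specifics matter less than the structure: a single spoofed feed cannot quietly steer the fused picture if disagreement automatically pulls a human analyst back into the loop.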
Meanwhile, the constant presence of drones over Ukraine shows that the sweeping maneuvers and surprise attacks of Napoleon or Rommel are becoming relics of the past. Yet as the MWI essay points out, surveillance can reveal enemy strength, not enemy intent.
“This means deception must focus on shaping what the adversary thinks is happening rather than avoiding detection altogether,” the essay said. “By crafting a believable deception narrative — through signals, false headquarters, and logistical misdirection — commanders can lead enemy AI and human decision-makers to make ineffective decisions.”
Like any scam, military deception is most effective when it reinforces what the enemy already believes. The essay points to the Battle of Cannae in 216 BCE, when a Roman army was nearly annihilated by Carthage. Intelligence wasn’t the problem: the Romans could see the Carthaginian forces arrayed for battle. But Hannibal, Carthage’s legendary commander, deceived the Romans into believing the center of the Carthaginian line was weak. When the legions attacked the center, the Carthaginian cavalry struck from the flanks in a pincer maneuver that encircled and destroyed them.
Two millennia later, the Allies used elaborate deception operations to mislead the Germans about where the D-Day invasion would take place. Hitler and his generals believed the amphibious assault would come in the Calais area, nearest to Allied ports and airbases, rather than the more distant Normandy region. Fake armies in Britain, complete with dummy tanks and planes, not only convinced the Germans that Calais was the real target; they also persuaded the German high command that the Normandy landings were a feint. Strong garrisons thus stayed at Calais to repel an invasion that never came.
Drones and satellites have improved battlefield intelligence to a degree that Hannibal could never have imagined, and AI can sift through vast amounts of sensor data. But the fog of war remains. “AI will not eliminate war’s chaos, deception, and uncertainty — it will only reshape how those factors manifest,” the essay concluded. “While intelligence, surveillance, and reconnaissance systems may provide episodic clarity, they will never offer a perfect, real-time understanding of intent.”
Michael Peck is a defense writer whose work has appeared in Forbes, Defense News, Foreign Policy magazine, and other publications. He holds an MA in political science from Rutgers University. Follow him on Twitter and LinkedIn.