In 2024, an AI-driven threat assessment platform used by a prominent Western defence consultancy flagged a “60% probability” of imminent Chinese military action over Taiwan. The trigger? Routine exercises by the PLA Navy in the Taiwan Strait. Headlines blared. But the forecast, like so many others churned out by algorithmic clairvoyants, missed the point entirely.
Far from prepping for war, China was signalling, as it often does, through calibrated ambiguity. The manoeuvres were deliberate, yet non-provocative: strategic theatre designed for multiple audiences. In Beijing’s diplomatic language, the louder the gesture, the deeper the restraint. But the AI didn’t get the memo.
As artificial intelligence becomes the new oracle of international forecasting, its blind spots are becoming geopolitical liabilities. Trained overwhelmingly on Western political history and international relations theory, AI models consistently mistake ambiguity for aggression, pragmatism for provocation, and strategic patience for creeping hegemony.
Hardwired Hegemony
Under the silicon hood lie two deeply entrenched Western paradigms. The first is liberalism, which sees trade and interdependence as stepping stones to democracy. The second is neo-realism, which treats rising powers as inevitable rivals. Both theories shape the data, assumptions, and reward functions of Western-trained AI systems. The result? Models that expect China to either converge with Western norms or collide irreconcilably with them.
Take the Belt and Road Initiative (BRI). Many AI-powered risk assessments flag it as a neo-imperial project designed to entrap smaller states in debt and bind them to China’s orbit. What they fail to see is how the BRI also reflects a reanimation of classical trade diplomacy, couched in the Confucian idiom of relational hierarchy, infrastructure-as-order, and the slow accretion of influence through presence rather than pressure.
Similarly, AI models tend to classify China-Russia ties as a transactional alliance bent on undermining the West. But that ignores the historical wariness between the two powers, and the way both navigate cooperation through calibrated limits rather than ideological unity. Algorithms thrive on binaries; geopolitics resists such simplicities.
Machine-Built Misunderstanding
At the core of the problem is a philosophical flaw: positivism. This doctrine, dominant in the social sciences, assumes that political behaviour can be quantified and predicted using universal laws, just like gravity. Positivist paradigms reward what is measurable (troop numbers, trade flows, satellite imagery) while discarding what is not: symbolic gestures, ideological narratives, or the emotional memory of past humiliations.
Yet these are precisely the elements that shape BRICS+ countries’ foreign policy. To miss them is to misread the entire script. These misreadings are not just theoretical. They shape policy. Western governments increasingly use AI to supplement intelligence reports and shape diplomatic strategy.
This creates a feedback loop. AI misreads intent, policymakers react as if the worst is imminent, China perceives hostility and adjusts accordingly, confirming the AI’s initial “forecast.” What begins as ambiguity becomes, through mechanical misinterpretation, a self-fulfilling prophecy. The cold logic of the machine becomes the warm-blooded aggression of statecraft.
Taiwan is only the most visible case. AI assessments of China’s position on the Ukraine war, for instance, frequently conclude that its diplomatic ambivalence amounts to covert support for Russia. In fact, China’s fence-sitting is a calculated diplomatic position, balancing economic interests with geopolitical hedging. But these frameworks, trained to detect alliances and blocs, interpret strategic indeterminacy as duplicity. What they cannot explain, they flag as a threat.
Liberal Software, Global Bugs
Western-trained AI does not just operate on biased inputs. It enforces a narrow ideological output. Any deviation from the Western paradigm is flagged as abnormal, inefficient, or dangerous. China’s alternative vision of order, built on hierarchy, sovereignty, and civilisational pluralism, is treated not as a rival model, but as an error.
The irony is that the very technologies used to predict China are themselves products of a civilisation that struggles to understand China on its own terms. It is like using an English dictionary to decode classical Chinese poetry: everything is lost in translation.
The Western vision assumes that the path to power is linear: more democracy, more markets, more alliances. China’s approach is layered, historical, and recursive. It prizes ambiguity over clarity, harmony over equality, and the moral role of the state over individual rights. These qualities are invisible to most AI.
Training the Oracle
Fixing the problem will not be easy. It requires more than tweaking datasets. It means rethinking the epistemological foundations of AI forecasting. Incorporating Confucian principles, strategic ambiguity, and dynastic historical memory into model architecture is not optional; it is essential. That also means integrating non-Western scholars, diplomatic archives, and qualitative expertise into the machine’s education.
Even then, AI should never be left to forecast alone. Its outputs must be filtered through the judgment of human analysts attuned to cultural nuance and political context. The alternative is a world where flawed forecasts fuel flawed strategies, and digital misreadings escalate real-world risks.
Statement
AI may be able to beat humans at Go, but it still struggles to understand why China plays it in the first place. Until Western systems learn to see through non-Western eyes, they will continue to misjudge intent, miscalculate risk, and misdirect policy. In geopolitics, as in intelligence, clarity without understanding is a dangerous illusion.