Book Talk by Dr. Brian Wong, hosted by the Faculty of Political Science, Chulalongkorn University

Artificial intelligence is often discussed as a technological breakthrough, an economic accelerant, or a regulatory headache. But at a recent book talk hosted at Chulalongkorn University, Dr. Brian Wong — Assistant Professor in Philosophy at the University of Hong Kong — challenged the audience to think much bigger.

In conversation with Dr. Thitinan Pongsudhirak of ISIS Thailand, Dr. Wong presented ideas from his forthcoming book, The Geopolitics of Artificial Intelligence, co-authored with Dr. Boris Babic. The central thesis was clear and provocative: AI should no longer be viewed as merely a passive instrument used within geopolitical competition. It is becoming a structural force that reshapes how global power is organized, contested, and legitimized.

What unfolded over the session was not a simple narrative of innovation or a deterministic vision of technological dominance. Instead, the discussion unpacked how AI interacts with authoritarian governance, non-state actors, fragile states, global inequality, environmental stress, and the erosion of sovereign self-determination.

This was less about who “wins” the AI race — and more about what kind of political order emerges in the process.

Beyond the Simplistic “AI Race”

Public discourse often frames AI development as a race between the United States and China — a sprint toward artificial general intelligence (AGI), technological supremacy, or economic dominance.

Dr. Wong questioned whether that framing is analytically sound.

Are these powers even racing toward the same destination?

Different political economies appear to prioritise different outcomes. For some, AI is a tool for industrial upgrading — optimising supply chains, logistics, and manufacturing efficiency. For others, AI is about automating cognitive labour, reshaping white-collar work, and scaling digital decision-making systems.

More importantly, speaking of “China” or “the US” as single, unified actors obscures internal fragmentation. Governments, tech firms, regulators, military establishments, and civil society actors often pursue diverging agendas. Even within authoritarian systems, bureaucratic competition, regulatory turf wars, and political incentives shape outcomes in ways that defy simplistic categorisation.

The “race” narrative flattens complexity. And flattening complexity, the book suggests, leads to flawed governance responses.

AI as a Geopolitical Multiplier

If AI is not simply the object of geopolitical rivalry, what is it?

Dr. Wong described AI as a risk multiplier — not only technologically, but politically. The most dangerous dynamics emerge when AI risks intersect with geopolitical risks.

Consider several domains:

  • Disinformation campaigns amplified by generative AI
  • Autonomous or semi-autonomous weapons systems
  • Algorithmic profiling and automated repression
  • Strategic propaganda at scale
  • Financial and cyber manipulation
  • Real-time influence operations across borders

AI lowers the cost of manipulation, speeds up escalation cycles, and diffuses capabilities to actors previously unable to compete with state power.

And this diffusion is critical.

Authoritarian Regimes and Algorithmic Power

In authoritarian contexts, AI becomes a powerful tool for regime resilience.

Surveillance systems enhanced by machine learning allow governments to track dissent, monitor social networks, and predict unrest. Content moderation systems automate censorship. Predictive analytics tools can anticipate collective action and neutralise it preemptively.

The implications are profound. AI enhances the capacity of states to consolidate control, particularly where checks and balances are weak.

But the book talk did not frame AI as a one-sided authoritarian advantage.

Instead, Dr. Wong emphasised that power is never static.

Non-State Actors and the Use of AI in Rebel Governance

One of the most striking parts of the discussion was the acknowledgment that states do not monopolise AI tools. Non-state actors — including insurgent groups, rebel governance structures, and militant organisations — are increasingly leveraging digital tools in conflict settings.

In the aftermath of political upheaval and ongoing ethnic conflict, various armed groups and resistance networks operate in fragmented information environments. AI-enabled tools can assist with:

  • mobilisation and recruitment,
  • encrypted coordination,
  • narrative warfare and legitimacy-building,
  • targeted messaging,
  • and digital counter-propaganda efforts.

At the same time, algorithmic amplification can exacerbate ethnic tensions, spread misinformation rapidly, and entrench fragmentation.

AI thus becomes embedded in contested spaces where sovereignty itself is disputed. It shapes not only how wars are fought, but how legitimacy is constructed.

This blurring of state and non-state technological capability challenges traditional international relations frameworks. AI does not merely reinforce centralised authority; it can destabilise it.

The Weakening of Sovereign Self-Determination

Perhaps the most philosophically compelling theme of the talk concerned sovereignty.

AI systems are largely developed and deployed by powerful technology firms operating across borders. Their algorithms are opaque, their training data global, their incentives commercial.

This creates a paradox: even democratic states increasingly depend on privately owned systems for governance, communication, and economic coordination.

Who, then, sets the rules?

If algorithmic infrastructure is controlled by corporations whose accountability mechanisms are unclear, the sovereign right of peoples to determine their own informational ecosystems is weakened.

In fragile states, the problem is compounded. In smaller economies, digital infrastructure often depends on foreign providers. This creates asymmetrical dependencies that can limit policy autonomy.

AI thus raises a structural question: can sovereignty survive when key decision-making infrastructures are externally owned, privately governed, and algorithmically opaque?

The Governance Dilemma: Three Poles or Something More Complex?

During the discussion, a common framework was raised: three governance poles — the US (market-driven), the EU (rights-based regulation), and China (state-centric control).

Dr. Wong urged caution.

While this tripolar framing offers analytical clarity, reality is more entangled. In the US, tech firms and national security institutions are deeply intertwined. In China, regulatory enforcement is not always monolithic or perfectly coordinated. In Europe, innovation ambitions increasingly complement regulatory leadership.

Governance fragmentation may be real, but it is not cleanly divided. And fragmentation carries risks.

Without coordination, retaliatory technological nationalism becomes more likely. Shared standards become harder to negotiate. Escalatory spirals in AI-enabled security dilemmas become more plausible.

Middle Powers: From Rule-Takers to Rule-Shapers

What role can middle powers and smaller states play in this landscape?

Dr. Wong rejected fatalism. While structural asymmetries are real, middle powers possess leverage — particularly through supply chains, regulatory coalitions, and strategic resources.

ASEAN, for example, is embedded in global manufacturing networks. India convenes AI summits and positions itself as a governance voice. The UK has invested heavily in AI safety discourse. Smaller states can coordinate standards, influence norms, and shape multilateral conversations.

The key insight was pragmatic: if you are not at the table, you risk being on the menu.

Collective action among middle powers can amplify bargaining capacity. Even if global institutions lack strong enforcement mechanisms, they still shape agendas and discourse.

The Environmental Trilemma

Another dimension of the conversation addressed AI’s environmental footprint.

Data centres consume immense energy. Cooling systems demand significant water resources. The carbon intensity of AI training is substantial.

Dr. Wong proposed what he termed a new “energy trilemma”:

Countries may struggle to simultaneously achieve:

  1. AI sovereignty
  2. Climate sustainability commitments
  3. Economic growth

Developing sovereign AI infrastructure requires energy and capital. Relying on external AI providers may preserve environmental goals but compromise autonomy. Prioritising rapid economic growth may sideline sustainability concerns.

This trilemma is particularly acute for developing economies seeking to industrialise while also participating in the digital revolution.

AI is not immaterial. It is deeply physical.

AI and Democracy: From Literacy to Resilience

The discussion also touched on democracy.

Some technology leaders argue that AI democratises access to knowledge and lowers barriers to participation. But Dr. Wong invoked concerns reminiscent of Tocqueville’s warnings about democratic societies drifting toward soft despotism — where citizens become passive and administrative power expands subtly.

Algorithms shape attention. They amplify polarisation. They reinforce echo chambers.

AI does not create these vulnerabilities, but it accelerates and amplifies them.

The proposed response was not technophobia but resilience. Education must move beyond “AI literacy” toward cultivating critical autonomy — ensuring citizens can engage with AI without becoming dependent on it.

The issue is not whether AI is inherently good or bad. It is whether societies retain the capacity to shape its deployment consciously.

Accountability in an AI-Driven World

A final theme concerned responsibility. When AI systems produce harmful outcomes — from disinformation to lethal miscalculations — who is accountable?

Blaming the last user is insufficient. Relying solely on corporate transparency is insufficient. Delegating decisions entirely to AI systems introduces automation bias and diffuses responsibility further.

Dr. Wong warned against allowing AI to become the ultimate decision-maker in high-stakes geopolitical contexts. Human gatekeepers remain essential. Accountability requires identifiable agents. Otherwise, governance risks dissolving into procedural opacity.

A Paradigm Shift in the Making

At its core, The Geopolitics of Artificial Intelligence argues that AI may alter not just the tools of politics but the structure of power itself. It reshapes:

  • the speed of decision-making,
  • the scale of influence operations,
  • the distribution of capability between states and non-state actors,
  • the balance between sovereignty and dependency,
  • and the relationship between citizens and authority.

AI is not just embedded in geopolitics. It is beginning to reshape its architecture.

Conclusion: The Stakes of the AI Era

The seminar’s “fireside chat” format made complex ideas accessible, but the underlying message was sobering. AI offers extraordinary economic and societal benefits. It can optimise infrastructure, improve education access, enhance planning, and support scientific discovery. But it also carries systemic risks:

  • authoritarian entrenchment,
  • rebel mobilisation,
  • fragmentation of global governance,
  • erosion of sovereignty,
  • environmental strain,
  • and democratic vulnerability.

The future of AI will not be decided solely in Silicon Valley or Beijing. It will be shaped in conflict zones, in middle-power coalitions, in regulatory forums, and in classrooms.

The central question is no longer whether AI will transform geopolitics.

It already is.

The question is whether governance, cooperation, and collective responsibility can evolve quickly enough to prevent technological acceleration from outpacing political wisdom.
