Governing AI: A Geopolitical Imperative
What are the competing geopolitical and regulatory forces converging around AI?
“We have experienced moments of major technological change before. But we have never experienced the convergence of so many technologies with the potential to change so much, so fast.”
This is the latest statement from the Stanford Emerging Technology Review (SETR), a group of working university faculty and engineers dedicated to:
“help both the public and private sectors better understand the technologies poised to transform our world so that the United States can seize opportunities, mitigate risks, and ensure that the American innovation ecosystem continues to thrive.”
But history has shown that technological breakthroughs and geopolitics seldom remain siloed. Geopolitics and regulation of AI are converging as the security implications of its vast applications grow amid a broad paradigm shift in global politics.
Geopolitics of AI
In an era where technological advancements are shaping the global landscape, the United States faces an urgent need to maintain its leadership in the field of AI.
Senators, including Mike Rounds, have initiated discussions on crafting AI policy with an "incentive-based" approach. This strategy aims to retain AI developers within the United States to ensure the nation's continued innovation in this strategic area.
Senate Majority Leader Chuck Schumer has expressed concerns that EU-style regulations could put American firms at a disadvantage when competing with China, though Beijing’s AI ambitions face more structural hurdles than it cares to admit.
White House National Security Advisor Jake Sullivan also emphasized the need to preserve America's edge in science and technology as both a matter of national security and international competitiveness. AI stands to play a central role as increasingly autonomous systems transform defense technology.
The dual-use nature of technologies with civilian (commercial) and military (security) applications is not new. Steam engines, the telegraph, and chemical discoveries all expressed this dual-use phenotype.
However, what sets AI apart is its unprecedented ability to learn, adapt, and make autonomous decisions based on vast amounts of data. Unlike earlier technologies, AI possesses the potential for exponential growth, constantly improving its capabilities, and transcending traditional boundaries between civilian and military domains.
Paying a Pretty Penny
However, the development and deployment of AI are not without challenges. The costs associated with training large language models (LLMs) like OpenAI’s ChatGPT are significant. A report by SETR found that training GPT-4 involved purchasing and running 25,000 Nvidia A100 deep-learning GPUs at a cost of $10,000 each.
The same report found that the electricity consumed in training a model such as ChatGPT can equal the yearly consumption of over 1,000 U.S. households. And the hundreds of millions of daily queries on ChatGPT may consume around 1 gigawatt-hour (GWh) each day, roughly the daily energy use of about 33,000 U.S. households.
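The figures above can be sanity-checked with simple arithmetic. A minimal sketch, assuming the numbers cited from the SETR report plus an assumed U.S. household average of roughly 30 kWh per day (about 10,800 kWh per year):

```python
# Back-of-envelope check of the cost and energy figures cited above.
# All inputs are assumptions drawn from the report, not measurements.

A100_COUNT = 25_000        # GPUs reportedly used to train GPT-4
A100_UNIT_COST = 10_000    # USD per chip

hardware_cost = A100_COUNT * A100_UNIT_COST
print(f"GPU hardware outlay: ${hardware_cost:,}")  # $250,000,000

# Daily inference load: ~1 GWh/day across hundreds of millions of queries.
DAILY_INFERENCE_KWH = 1_000_000          # 1 GWh expressed in kWh
HOUSEHOLD_KWH_PER_DAY = 30               # assumed U.S. household average

households = DAILY_INFERENCE_KWH / HOUSEHOLD_KWH_PER_DAY
print(f"Equivalent households: ~{households:,.0f}")  # ~33,333
```

The hardware outlay alone comes to about $250 million, and the household equivalence lands near the ~33,000 figure the report cites.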
A model built by a boutique semiconductor research and consulting firm found that operating ChatGPT costs roughly $700,000 per day, though the firm acknowledges that several “unknown variables” factor into the total cost.
Will Security Trump Regulatory Coordination?
The European Union has taken a significant step by finalizing its highly anticipated comprehensive AI regulations, enshrined in the AI Act.
While these regulations are set to come into force in 2025, the EU encourages companies to voluntarily adhere to the rules in the interim, though there are no immediate penalties for non-compliance.
Good luck with that.
The EU's approach is risk-based, categorizing AI applications into tiers by their potential for harm, with fines ranging from 1.5% to 7% of global sales for companies that violate the rules.
But as I mentioned in my earlier reports on AI, heavy-handed regulation raises concerns about stifling domestic innovation. Companies operating in strict regulatory environments will likely have to divert resources away from building a product and toward legal compliance instead.
As a result, strict rules risk pushing companies and entrepreneurs into regions with a more accommodative regulatory regime. The result would be a brain drain with severe implications for a country’s security and economic integrity.
Some have argued for industry self-regulation, highlighting the delicate balance between regulation and fostering innovation. French President Emmanuel Macron, in particular, has been a strong advocate for protecting France’s AI companies, including developers of foundational AI models like Mistral.
European policymakers are still hammering out the details of the AI Act, and there is concern that the timeline may slip to January. Extending it any further could prove difficult ahead of the European Parliament elections in June.
Yet another example of geopolitics posing a headline risk in a strategic area, with far-reaching consequences and implications. For more on AI and geopolitics, be sure to check out my previous Weekly Insights.
Subscribe today to access next week’s in-depth report on the geopolitics of large language models.