Yellow Card, Red Card...AI Card?
Europe may play the role of a referee in the global AI governance game, with China and the US as the star players - but will they all follow the same rules?
The global AI race is as much a commercial endeavor as it is a geopolitical strategy. What we are seeing now is a problem policymakers have warned about for years: the pace of technological innovation in AI is outpacing its regulation.
The three key players in this race are the US, China, and the EU, with each region’s domestic culture and political economy shaping the development of its respective AI capabilities and associated regulatory regime.
The risk here is further balkanization of an already fragmented digital landscape, driven by the diverging moral justifications - and the regulatory frameworks that spring from them - among the US, the EU, and China.
For example, China’s incorporation of socialist ideals into its AI regime conflicts directly with the liberally rooted moral tradition underpinning the West’s regulatory scaffolding. And this fissure is being amplified by other pernicious forces.
The geopolitical rivalry between the US and China - and, more broadly, between West and East - is balkanizing the digital landscape, and the private sector will suffer for it. An increasingly complex regulatory environment will make it difficult for businesses to operationalize their commercial endeavors across jurisdictions.
Pantheon Insights provides guidance to businesses by identifying these geopolitically-linked forces and how to navigate around them in an ever-changing world.
What’s the Outlook?
To borrow some terms from soccer (or football, depending on which side of the Atlantic you’re on), Europe is most likely to play the role of referee, with China and the US as the two star players from opposite teams.
The problem, however, is whether Beijing and Washington will adhere to European rules if doing so means undermining their position in the ongoing Sino-US Promethean War. In times of war, morality frequently yields to effective strategy.
The EU AI Act
The Artificial Intelligence Act marks the first all-encompassing AI framework put forward by the West, specifically Europe. The European Artificial Intelligence Board would be responsible for overseeing the implementation of the regulatory regime and ensuring its uniform application across the EU.
Even though the framework is being drafted and deployed in Europe, the US will likely adopt many of its principles. The so-called Brussels Effect of unilateral regulations applied on a globalized scale means it will likely become the default standard for AI regulation in North America as well.
The externalization of these rules would rely on market mechanisms to enforce regulatory adherence by corporations operating outside the EU, thereby achieving regulatory harmony across US and European markets.
The proposed blueprint would organize AI systems into a tiered risk framework, with corresponding regulatory stipulations for their development and use.
The four risk categories are:
Unacceptable risk
High risk
Limited risk
Minimal or no risk,
and an organization’s regulatory obligations “would be dictated by the layer into which [their] AI system falls”, with higher tiers carrying heavier requirements. Examples of heightened consumer risk include critical infrastructure machinery, autonomous vehicles, and medical devices.
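To make the tiered structure concrete, here is a minimal sketch of how an organization might map its AI systems onto the Act’s four risk tiers. The category assignments, system names, and obligation summaries are illustrative assumptions for the sketch, not text from the regulation itself.

```python
from enum import Enum

# The EU AI Act's four proposed risk tiers, highest to lowest.
class RiskTier(Enum):
    UNACCEPTABLE = 4  # prohibited outright
    HIGH = 3          # strict obligations before and after market entry
    LIMITED = 2       # transparency duties (e.g. disclosing AI-generated content)
    MINIMAL = 1       # little or no regulation

# Illustrative mapping -- these assignments are assumptions for the sketch,
# not an official classification from the Act.
EXAMPLE_SYSTEMS = {
    "biometric_surveillance": RiskTier.UNACCEPTABLE,
    "medical_device_triage": RiskTier.HIGH,
    "autonomous_vehicle_control": RiskTier.HIGH,
    "chatbot_assistant": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(system: str) -> str:
    """Return a one-line summary of the regulatory burden for a system."""
    summaries = {
        RiskTier.UNACCEPTABLE: "banned from the EU market",
        RiskTier.HIGH: "pre-market conformity assessment and ongoing oversight",
        RiskTier.LIMITED: "transparency and disclosure requirements",
        RiskTier.MINIMAL: "no specific obligations",
    }
    return f"{system}: {summaries[EXAMPLE_SYSTEMS[system]]}"
```

In this framing, a compliance team’s first question for any new system is simply which tier it lands in; everything downstream follows from that single classification.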
According to the European Parliament, the framework would impose the following restrictions:
A ban on AI used for biometric surveillance, emotion recognition, and predictive policing
A requirement that generative AI systems like ChatGPT disclose that content was AI-generated
Classification of AI systems used to influence voters in elections as high-risk
The EU approach to AI builds on existing digital frameworks like the GDPR, which already holds that “algorithmic systems should not be allowed to make significant decisions that affect legal rights without any human supervision”.
To ensure corporate compliance, European officials are calling for steep penalties of up to 6% of global annual turnover. However, Europe’s regulatory heavy-handedness does not come without cost.
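A quick back-of-the-envelope calculation shows the scale of the proposed cap; the firm and its turnover figure below are hypothetical.

```python
def max_penalty(global_annual_turnover: float, rate: float = 0.06) -> float:
    """Maximum fine under the proposed 6%-of-turnover cap."""
    return global_annual_turnover * rate

# A hypothetical firm with $50 billion in global annual turnover
# would face a maximum fine of $3 billion.
print(max_penalty(50e9))  # 3000000000.0
```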
A major risk is what this will do to private-sector innovation in Europe. With startups preoccupied with compliance, there may be less room for cutting-edge innovation, for fear of incurring the wrath of the bureaucratic leviathan that is Brussels.
As it stands, the continent’s share of the overall AI market is small. As of 2020, less than a fifth of large European firms use AI tools at scale, and the continent is home to only approximately 10 percent of global digital unicorns.
As a consequence, Europe’s competitiveness as a technological player will suffer, and it will lose ground to the far less regulated but more advanced US AI market.
US Approach to AI Regulation
Relative to Brussels, Washington has adopted a looser, more voluntary approach to AI compliance. Policymakers anticipate that states in the US will introduce AI regulation laws as a result of insufficient federal enforcement, potentially leading to a diverse set of state-level regulations for companies to adhere to.
According to a comprehensive report by Brookings:
“The U.S. federal government’s approach to AI risk management can broadly be characterized as risk-based, sectorally specific, and highly distributed across federal agencies. There are advantages to this approach, however it also contributes to the uneven development of AI policies. While there are several guiding federal documents from the White House on AI harms, they have not created an even or consistent federal approach to AI risks…A Blueprint for an AI Bill of Rights (AIBoR) [also] endorses a sectorally specific approach to AI governance, with policy interventions tailored to individual sectors such as health, labor, and education.”
As a result, there will likely be asymmetry between local digital laws, as we already see with California versus federal standards. At this time, AI regulation at the federal level remains in its pre-adolescent stages. According to a Stanford study, only five of 41 federal agencies have created an AI plan: the Departments of Energy, HHS, and VA, the EPA, and USAID.
As far as US-EU policy coordination goes, both Washington and Brussels are looking to formulate AI governance policies through the Trade and Technology Council. A prerequisite for deeper and wider coordination is that both parties align on common terminology for the various AI systems and the corresponding risks they pose.
The National AI Advisory Committee, introduced in April 2022, has the potential to serve as an external advisory body, offering guidance to the government on handling AI risks in sectors like law enforcement. Its main focus, however, lies in promoting AI as a valuable national economic asset.
Both Brussels and Washington advocate risk-based approaches to AI regulation and have outlined comparable principles regarding the functioning of trustworthy AI. Having said that, there are regulatory disparities in how both address risk management vis-a-vis AI.
Looking forward, the potential regulatory tension between the Atlantic partners will lie between the patchwork of AI regulation in the US and the more centralized, all-encompassing regulation of the EU.
China AI Regulation
China is once again facing the dilemma it confronted at the birth of the internet decades ago: how to leverage the technology’s productive, data-driven potential while containing the risk that it undermines state power. Generative AI in particular poses a major threat.
In China, generative AI providers are required to:
Maintain the integrity of state power
Avoid inciting secession
Protect national unity
Uphold economic and social order
Produce content in line with the country's socialist values
If the US falls behind China in regulating AI, Washington becomes geopolitically vulnerable to Beijing exercising power in these markets. Standardization - meaning trust and consistency - is a magnetic force for capital and consumers alike.
As an author at Axios wrote: “…If China can be first on AI governance, it can project those standards and regulations globally, shaping lucrative and pliable markets.”
Having said that, rapid, early-stage regulation of AI has its costs.
Much like in Europe, the heavy-handedness of the Chinese government will likely slow private-sector innovation, allowing America to extend its lead in the Promethean War. Furthermore, China still lacks the advanced semiconductors it needs to reach technological parity with the US in AI.
The ongoing rivalry between the US and China also means the two sides are unlikely to cooperate as closely as they once did. The Promethean War means both Beijing and Washington are looking to adopt new systems of warfare to achieve technical superiority; strict AI regulations could inhibit those efforts.
As mentioned in my previous piece, “Geopolitical Power Play: AI Unleashed”, China is exploring a Systems Confrontation military modus operandi, supported by data-driven AI models that analyze how to inflict maximum damage. Under these conditions of hegemonic struggle, it is difficult to imagine the US, China, and the EU harmonizing their AI regulations if doing so undermines their positions.