US sets AI standard, leaving Britain on the back foot
When Donald Trump sat down in the Oval Office this week to sign off on a national AI framework, he delivered a global message. The US has committed to one standardised, federally regulated set of rules for AI.
The executive order sweeps away the threat of 50 different state-level approaches to regulating AI – a scenario Silicon Valley has long warned would turn compliance into a cross-country scavenger hunt.
“You can’t expect a company to get 50 approvals every time they want to do something”, Trump said. And needless to say, tech firms did not object.
With AI investors David Sacks and Chamath Palihapitiya looking on, the White House delivered precisely what they and other industry giants like Google, OpenAI and Meta have been lobbying for: a unified framework to keep innovation moving at pace and pre-empt California’s habit of writing tougher regulations than everyone else.
Alongside the order comes an ‘AI litigation task force’, a threat to challenge state laws, and a nudge that federal broadband funding may depend on cooperation.
The UK’s slower stance
The UK, on the other hand, continues to sketch its approach in pencil.
For over two years, Britain has promoted a “pro-innovation, principles-based” model, encouraging regulators to apply broad principles to AI without passing new laws.
Ministers have argued that this would keep Britain nimble, able to adapt as AI evolves.
But what began as flexibility has increasingly taken on the form of hesitation. While the US now has one national playbook and the EU has a full AI Act, the UK’s patchwork of consultations and action plans, while valuable, hardly forms a rulebook.
Recent developments, including the AI action plan, proposals for AI growth labs to test new technology, and an expanded remit for the AI Safety (now Security) Institute, are all useful building blocks, but none amounts to the binding structure just implemented over the pond.
Business leaders and parliamentarians have called for greater clarity on the most powerful frontier systems, worried that current arrangements are not keeping pace with rapidly advancing technology.
The concerns are not abstract. From deepfake fraud to AI-generated therapy advice, regulators are already dealing with problems caused by emerging tools that are changing day by day.
And there is a geopolitical angle, too. The EU writes statutory law, the US now enforces one standard, and China continues to legislate aggressively.
A choice the UK can’t outsource
Some of Britain’s hesitation reflects a genuine dilemma: binding legislation written too early risks stifling innovation, but written too late can erode public trust.
No government wants to fossilise a sector expected to add billions to the economy. Ministers have insisted that existing laws already catch many harms, and so far, they are right.
But UK plc increasingly wants clearer principles and predictability. It also wants to know whether the UK’s approach will remain compatible with international partners, especially now that the US has decisively picked its path.