Europe relaxes AI regulation in bid to keep up with US and China
The EU is reportedly cutting back tech regulation in an attempt to boost investments in AI, while the UK’s AI Safety Institute has tellingly changed its name to the UK AI Security Institute.
February 14, 2025
The Financial Times spoke to Henna Virkkunen, the European Commission’s executive vice-president in charge of digital policy, who said the EU wanted to “help and support” companies when applying AI rules, in an effort to boost competitiveness and not get left behind in the ultra-hyped sector.
The report quotes her as saying Brussels needed to ensure “that we are not creating more reporting obligations for our companies”, and that an upcoming EU code of practice on AI expected in April would limit reporting requirements to what is included in the existing AI rules.
The FT report also notes that US President Donald Trump has threatened to retaliate against the EU for the fines it has hit US tech companies with, and his return to the White House “has emboldened Silicon Valley executives in their claim that the EU’s regulatory grip is hurting their companies.”
However, the report adds that the “EU is cutting back tech regulation to spur investments in artificial intelligence”, and not because of pressure from the US government or Big Tech.
The deregulatory push, as the report terms it, is driven by the EU’s own desire to boost competitiveness, and not “dependent on the US”. “We are very committed to cut bureaucracy and red tape,” Virkkunen is quoted as saying.
The EU’s AI Act entered into force at the beginning of August last year, placing legal requirements and obligations on companies that want to develop and deploy artificial intelligence, with some of its provisions taking up to three years to apply.
“The AI Act ensures that Europeans can trust what AI has to offer”, reads the blurb. “While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes.
“For example, it is often not possible to find out why an AI system has made a decision or prediction and taken a particular action. So, it may become difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.
“Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.”
This was followed by an AI Pact, which sought to create and share best practices and internal policies for adhering to the AI Act, and to promote the early implementation of its provisions before they formally apply.
Such legislation is not popular with much of the tech sphere. In September an open letter signed by 59 tech companies, coordinated by Meta, argued that Europe’s bureaucracy around regulating AI means its companies risk falling behind other regions in capitalising on it.
“Europe has become less competitive and less innovative compared to other regions and it now risks falling further behind in the AI era due to inconsistent regulatory decision making,” it read.
“…we need harmonised, consistent, quick and clear decisions under EU data regulations that enable European data to be used in AI training for the benefit of Europeans,” concludes the letter. “Decisive action is needed to help unlock the creativity, ingenuity and entrepreneurialism that will ensure Europe’s prosperity, growth and technical leadership.”
More recently there has been growing momentum in Europe to pipe more money into AI development. Earlier this week the European Commission announced it aims to facilitate €200 billion in AI investment.
Commission President Ursula von der Leyen said: "Too often, I hear that Europe is late to the race, while the US and China have already gotten ahead. I disagree. Because the AI race is far from over. Truth is, we are only at the beginning...And global leadership is still up for grabs."
Apart from anything else, this strikes a different tenor to the wording of the AI Act, which emphasised caution and the potential harm AI poses to EU citizens.
On the same day French telecoms group Iliad announced it is investing €3 billion in artificial intelligence, and the French government made a €109 billion AI spending commitment.
Meanwhile over in Blighty, the AI Safety Institute has changed its name to the ‘UK AI Security Institute’. While the press release still talks about how it seeks to tackle the technology’s potential use in developing chemical and biological weapons, carrying out cyber-attacks, and enabling crimes such as fraud and child sexual abuse, it also states:
“A revitalised AI Security Institute will ensure we boost public confidence in AI and drive its uptake across the economy so we can unleash the economic growth that will put more money in people’s pockets.”
It has also announced a new agreement with Anthropic. “This will include sharing insights on how AI can transform public services and improve the lives of citizens, as well as using this transformative technology to drive new scientific breakthroughs,” it states. “The UK will also look to secure further agreements with leading AI companies as a key step towards turbocharging productivity and sparking fresh economic growth – a key pillar of the government’s Plan for Change.”
This comes weeks after the UK government “kickstarted the year by setting out a new blueprint for AI to spark a decade of national renewal,” as the release puts it.
This refers to last month’s announcement that the UK is planning to ramp up AI investment and usage across the public sector. Prime Minister Keir Starmer is “throwing the full weight of Whitehall” behind the AI industry, we were told, by agreeing to take on 50 recommendations set out by venture capitalist Matt Clifford, who was commissioned to put together the AI Opportunities Action Plan.
“This is a plan which puts us all-in - backing the potential of AI to grow our economy, improve lives for citizens, and make us a global hub for AI investment and innovation,” said Clifford. “AI offers opportunities we can’t let slip through our fingers, and these steps put us on the strongest possible footing to ensure AI delivers in all corners of the country, from building skills and talent to revolutionising our infrastructure and compute power.”
There is less specific reference to reducing regulation or summoning gargantuan pots of money in today’s announcement from the UK government, but again we can note the change in tone: previous announcements more often than not insisted on accountability and transparency, whereas the emphasis now is on ploughing head first into the brave new world of AI.
While neither will admit it, it seems unlikely that this shift in approach has nothing at all to do with the US administration’s new no-holds-barred strategy for storming ahead in the AI race, the surprise arrival of China’s DeepSeek on the world stage, and a desire to stay relevant in a sector its proponents talk up as the most important thing in the world.