Why digital sovereignty is at the heart of the AI revolution


By Rosanne Kincaid-Smith, COO of Northern Data Group

The publication of the full and final text of the landmark EU AI Act offers insights into how the law will foster the development of safe and trustworthy AI systems in the EU. The Act comes into force on August 1, 2024, but many of its provisions will only apply after a two-year transition period (from August 2, 2026), giving organisations time to prepare for and comply with the new regulations.

The EU AI Act is just one example of how digital sovereignty will shape and transform AI technology over the coming years. Sovereignty is defined as ‘the power or authority to control and rule’, and digital sovereignty extends this to AI-specific concerns: control over the development and deployment of AI tools, computing capacity and training data.

As more countries and blocs build up their own AI infrastructure, capabilities, and industries, it’s no surprise that they are also taking steps to stay ahead of the international competition. After all, although a headline aim of the EU AI Act is to ensure the rights of EU citizens are protected in the digital space, it also seeks to give European companies a better opportunity to compete against tech firms in the US, China and elsewhere. How? By keeping data and expertise physically in Europe. But with the UK now outside the EU, how will digital sovereignty in this country play out?

Staying ahead of the competition

A 2024 report by the UK Parliament’s Communications and Digital Committee says that AI tools, like large language models, will produce “epoch-defining changes comparable with the invention of the internet.” And while the committee praises the government’s positioning of the UK as an AI leader, as seen at the inaugural Global AI Safety Summit at Bletchley Park in 2023, it also says more work is needed to enable the UK to compete globally.

To accelerate progress, the UK can look to tech leaders like the US for direction. One step that the US has taken to solidify its own digital sovereignty is to pilot a National Artificial Intelligence Research Resource, which aims to make computing resources and data sets available to all. Rather than tech behemoths like Microsoft and Google potentially stifling wider innovation by monopolising compute, data, software, models, training and support, the Resource will “make available government-funded, industry and other contributed resources in support of the nation's research and education community.”

Similarly, the US Federal Trade Commission has launched a competition law inquiry into the generative AI investments and partnerships of five companies: Alphabet, Amazon, Anthropic PBC, Microsoft and OpenAI. And the US President’s latest March 2024 budget allocated $3.3 billion to responsibly develop, test, procure and integrate transformative AI applications and to increase agency funding for AI, both to “address major risks and to advance its use for public good.”

This idea of “public good” also encompasses protection of individuals' fundamental rights and freedoms, particularly their right to protection of their personal data – returning us to the basics of AI sovereignty.

Complying with UK data laws

Though the UK is no longer in the EU, the provisions of the EU’s GDPR have been incorporated directly into UK law as the UK GDPR. This means that UK citizens have the right to know what information the government and other organisations store about them, in any space or medium, including AI systems. It also grants them the rights to:

Be informed about how data is used
Access personal data
Have incorrect data updated
Have data erased
Stop or restrict the processing of data
Port data (obtain and reuse your data across different services)
Object to how your data is processed in certain circumstances

But where the EU and UK now differ is that the EU stipulates that personal data associated with EU citizens should be processed and stored within EU borders. Meanwhile, there are no data localisation requirements in the UK; its citizens’ data does not need to be physically kept in the country. That means UK companies can take advantage of the data centre and cloud computing capabilities available in the EU to power their AI innovations.

Securing GPU chips in the right locations

Access to the GPUs that can power AI technology is highly sought after and in limited supply. Very few manufacturers and organisations provide them, and if they are acquired by companies outside the UK or EU, then businesses may struggle both to build AI tools and to comply with GDPR.

That’s why, if you’re looking to harness AI in the UK and EU, it’s important to partner with a registered European ML and AI compute capacity provider with European sovereignty compliance. These partners should always store and process clients’ data in-region, with data centres ideally certified to ISO 27001 for added client security and reliability.

It all comes down to “right hardware, right place”. Companies need powerful hardware to drive AI processes, but it needs to be situated in such a way that it complies with local regulations. And, crucially, businesses need to ensure regulatory compliance across all their operations, including the services provided by third-party partners. Ultimately, sovereignty can be a tough hurdle to overcome. But once data localisation is in place, businesses can bring to life ideas that go on to change the entire world.