9 January 2024

Geopolitical implications of artificial intelligence

Jose Luis Albo

Public Sector Director for Minsait's North Zone

We live in a race by the world's major economies to master certain emerging technologies, in the belief that they will have an impact on the maintenance of the current World Order or the establishment of a new one.

Geoanalysis
Madrid

The England of the Industrial Revolution, inter-war Germany and the post-First World War United States are examples of how technological and industrial development can alter that order. In this case it is AI, though not only AI: a whole set of disruptive technologies, and even the raw materials that enable them, have become the epicentre of that race.

We are talking about AI, of course, but also about 5G, cloud technologies, quantum computing and next-generation chips, as well as the raw materials and techniques needed to build them, such as silicon, lithium and rare earths.

The trade war between the United States, the world's leading superpower, and China, which aspires to become one in the not-too-distant future, has had something to do with this interest in controlling these technologies, because they all have an impact on the industrial and economic development of states, but also on their defence strategy, their physical security and their cybersecurity.

Europe, the United Kingdom, Japan, South Korea and Russia are other players with an important role in the game and, although they are unlikely to win it, they have an enormous stake: the outcome could see them gain, or lose, significant weight on the world geopolitical map for many years to come.

The United States and China have the largest technology companies. China also directly or indirectly controls the production of a very high percentage of the necessary raw materials. These companies, Microsoft, Google, Meta, Apple, NVIDIA and Qualcomm in the United States and Baidu, Alibaba and Tencent in China, are the ones channelling tens of billions of dollars into the development of these technologies, and specifically into the field of AI.

ChatGPT has already opened our eyes to what this technology can do, for better and for worse, but it has also revealed that all of these companies were already working, with greater or lesser progress, on similar solutions: Microsoft's Copilot, Google's Gemini, Meta's Meta AI and others yet to be named are being presented or have already been announced for the short term.

Europe has none of these giants and, although it has large technology and industrial companies, some of the largest in the world in their field, such as Nokia, Siemens, Ericsson or SAP, they cannot compete with the investment capacity of Big Tech, with Microsoft alone having announced investments of more than 10 billion dollars in OpenAI, the developer of ChatGPT.

In the face of this, Europe clearly risks missing the AI train if it is not able to create the right conditions for big technology companies with the capacity to invest in this area to emerge. In 2021 Europe announced an investment of €1 billion per year in AI, which it expected to grow to €20 billion with private capital contributions. This is complemented by specific items within the Recovery and Resilience Facility (RRF) funds, such as component 16 of Spain's Recovery, Transformation and Resilience Plan, which allocates €540 million to the development of the national AI strategy.

The French Minister of Economy recently estimated AI investment in 2022 at €50 billion in the US, €10 billion in China and €5 billion in the EU.

And although some studies put public-private investment in AI in Europe in 2023 at more than €30 billion, whatever the exact figures, and the variability is clearly enormous, they fall far short of allowing us to compete today on an equal footing with the United States or China. In this scenario it seems clear that, in the short term, the development of the big AI platforms, at least as far as generative AI is concerned, will come from the United States and China.

But although Europe has clear weaknesses in this area, it also has strengths that need to be exploited.

Many studies point out that the data vital for training AI models has so far come from the internet. There, Meta, Google and the like, which hold the data provided free of charge by billions of users around the world, or Chinese companies with their government-sponsored monopoly, have had a clear competitive advantage. But these same studies also predict that, in the short term, the data generated by companies will become increasingly important, in generative AI but also in predictive AI, enabling the automation and improvement of decision-making.

Europe has a very significant industrial fabric. Many of the world's largest companies in strategic sectors such as automotive, telecommunications, biotechnology, pharmaceuticals, transport and tourism, among others, are based here, as are major players in the field of defence. These industries hold a huge amount of data and will generate much more in the future through the Industry 4.0 model, the Internet of Things and 5G.

Furthermore, Europe also has an important ICT sector, with companies and multinationals such as Indra that possess great technological and innovation capacity and have an important role to play in developing ad hoc solutions for these industries, allowing them not only to maintain their global leadership in the sectors in which they already compete, but also to use AI to enter new business models that are as yet unknown.

These ad hoc solutions can be built on the major commercial platforms available at any given time, as well as through specific development in each case. They involve collecting company-specific data and incorporating it into the product (with the integrity and security implications this entails), training the algorithms and verifying that they work and, finally, connecting the AI to business applications.
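By way of illustration, the following is a minimal sketch of those three steps for a simple predictive-AI case. It is only an outline under stated assumptions: the dataset (orders.csv), its columns and the "churn" label are hypothetical placeholders, and scikit-learn is used merely as one common open-source option, not as a prescribed toolset.

```python
# Minimal sketch of the three steps described above, using a predictive-AI example.
# All names (orders.csv, the "churn" label, the feature columns) are hypothetical.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 1. Collect company-specific data (here, a hypothetical CSV export; in practice
#    this step carries the integrity and security implications mentioned above:
#    access control, anonymisation, data quality checks, etc.).
data = pd.read_csv("orders.csv")          # assumed columns: features + "churn"
X = data.drop(columns=["churn"])
y = data["churn"]

# 2. Train the algorithm and verify that it works on held-out data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# 3. Connect the model to business applications, e.g. as a simple function
#    that a CRM or ERP system could call.
def predict_churn_risk(customer_features: pd.DataFrame) -> list[float]:
    """Return the estimated churn probability for each customer row."""
    return model.predict_proba(customer_features)[:, 1].tolist()
```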

It is this path, the development of new technology companies around AI, the incorporation of AI into European industries and the development of a powerful ICT sector capable of serving them, that will mark Europe's role on the global geopolitical stage of AI, and with it its place in a future world order conditioned by each country's level of technological development. It also gives shape to a concept that has become fashionable in recent times: that of digital sovereignty.

European public administrations have an important role to play here, on two fundamental fronts: investment and the stimulation of innovation, on the one hand, and regulation, on the other.

Technology, and AI even more so, obliges the EU to seek an appropriate long-term position. This is tremendously difficult, because we are dealing with an environment that is all but impossible to predict, and what is done now will shape the future within an international landscape as enormously complex as the one described above.

It is critical for the EU to increase the budgets allocated to the development of new technologies, and specifically artificial intelligence, through all the tools at its disposal. We are not just talking about subsidies; we are talking about a commitment to a whole series of instruments for public-private collaboration and the promotion of innovation. We are talking about facilitating and encouraging the industrial sector to incorporate AI into its day-to-day work. We are talking about supporting the creation of public-private data spaces that help companies test and develop their algorithms. And we are talking about helping companies compete outside Europe in order to gain scale.

I do not think it is a problem of a lack of public funds. The funds and the capacity to mobilise them are there, but each state has its own strategy, and investments as large as those needed for AI call for a comprehensive vision. Some states are already discussing the desirability of creating a European capital market to respond to this need.

Companies will need specialised workers capable of working and innovating with AI. We are therefore talking about the need to invest in training both the current workers who will be affected by AI and the new workers who will join the labour market in the future. AI has raised the share of jobs likely to be affected by new technologies from 50% to 60-70%, and occupations previously considered out of reach, such as lawyers, consultants, engineers or marketing creatives, are now within the scope of generative AI. The arrival of AI will force, just as mechanisation did in the 19th century, the redeployment of large numbers of workers and the need to equip them with specific skills, which in turn calls for redesigning education plans and promoting STEM vocations from the earliest years of schooling.

And finally there is regulation. It is clear to all of us that it is necessary. Setting aside the apocalyptic headlines we see in the press day in and day out, the disruptive potential of AI and the risks its misuse entails, from threats to privacy and discrimination caused by algorithmic bias to the manipulation of opinion and the security of states, make it unavoidable. The security aspect takes on particular importance because, regardless of declarations of intent, democracies coexist in this world with autocratic regimes whose policies on AI and its civil and military use may not coincide with Europe's.

Europe has repeatedly stated its commitment to trustworthy AI that is lawful, ethical and robust, in order to safeguard principles such as respect for people, fairness, transparency and privacy. And, as a consequence, truth.

With this objective in mind, the EU has been working since 2018 on legislation to regulate the use and development of AI. Last December, the European Parliament and the Presidency of the Council gave the green light to a provisional agreement on the AI Regulation, which will now have to be finalised technically and subsequently endorsed by the Member States, a process that will culminate in the entry into force of the European Artificial Intelligence Act, intended to be comprehensive, no earlier than 2026.

A few weeks ago we attended the AI Safety Summit in the UK, where 28 countries signed a declaration that, among other things, warned of the risks and uncertainties of the accelerated advance of AI and highlighted the need for global regulation. But who emerges victorious from this battle for future AI leadership will depend in part on what that regulation looks like.

All countries have accelerated their strategies for designing this kind of regulation so that it can shape future global standards. The US has just done so through a first executive order, which gives an idea of the urgency. The UK, China, Japan and several European states are also in the race. Spain, while awaiting the future European law, has designed a framework for the development of AI through the ENIA (National Artificial Intelligence Strategy), which among other things has led to the creation of the Spanish Agency for the Supervision of Artificial Intelligence (AESIA).

Given the complexity of the negotiations at European level to develop the regulation, and then to apply it across the EU's member countries, Europe runs the risk that, despite having been a pioneer in drafting a new AI law, other countries will overtake it left and right thanks to their greater agility.

In addition, the first details of the regulation have raised fears in European industry. More than 150 of Europe's leading companies have signed a letter expressing their disagreement with the first draft, which they believe jeopardises European technological sovereignty by preventing them from competing on a level playing field with international companies. Their doubts concern generative AI models in particular, as the restrictions the draft imposes would expose companies to disproportionate liability risks and compliance costs, leading the most innovative companies to leave Europe and generating a huge productivity gap with the US.

In the end, a balance will have to be struck: the regulation must protect citizens from corporate and government misuse of AI while at the same time ensuring the competitiveness of European companies against their external rivals, thereby underpinning the construction of European strategic sovereignty and contributing to Europe's leadership on the technological and AI geopolitical map.

In the recent past, European regulation has already proved particularly effective in protecting the rights of European consumers and, by extension, those of the whole world. It has done so with Meta, Facebook and WhatsApp, limiting the movement of sensitive user data outside the European Union; with Google and Microsoft, by punishing anti-competitive strategies that harmed users; and with Apple, by forcing it to adopt a standard connector in its devices instead of a proprietary one. And it has succeeded because of the importance of the European internal market to these large companies.

It is therefore to be expected that the regulations the EU develops around AI will have an equal or greater impact on its development, not only at European level but globally. Hence the importance of getting it right, but also of doing so quickly.
