Digitalisation and Media

AI: To regulate, but not too much

Can we really have it all under the European AI Act?

As the year draws to a close, the hot topic in the European Union (EU) is definitely AI. Policy-makers in Brussels are working on a tight timeframe, while politicians from around the Union are pushing their own concerns. Businesses providing AI-powered solutions also have skin in the game. True to its passion for regulation and its ambition to be a leader in the digital field, the EU is aiming to have the AI Act passed as binding law before New Year’s Eve. This would be the world’s first comprehensive AI law. Nevertheless, with the symbolic deadline rapidly approaching, there is doubt whether this is feasible, given the complexity of the matter and the many vested interests. This article goes over the contents of the Act, the debates around it, and some potential problems.

Origins and substance of the AI Act 

The AI Act is part of the EU’s larger Digital Strategy. It was first proposed by the European Commission (EC) in 2021. Its primary objective is to establish clear guidelines and standards for the development, deployment, and use of AI technologies across various sectors. The goal is to ensure transparency, accountability, and ethical use while balancing innovation and competitiveness. The Act classifies AI applications in a tier system according to the risk they pose, and the level of regulation depends on the tier an application falls into. There are four categories envisaged:

  • Unacceptable risk 
  • High risk 
  • Limited risk
  • Minimal risk 
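
For readers who think in code, the tier logic can be pictured as a simple mapping from an application to one of the four categories, with obligations growing as the risk rises. The sketch below is purely illustrative: the example use cases and the way they are assigned are simplified assumptions for intuition, not criteria taken from the Act itself.

```python
# Illustrative sketch only: the tier names mirror the AI Act's four categories,
# but the example use cases and their assignments are simplified assumptions,
# not legal criteria from the Act.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "assessed before market entry, with recurring assessments"
    LIMITED = "transparency obligations"
    MINIMAL = "close to no regulation"

# Hypothetical examples of how applications might map onto the tiers.
EXAMPLES = {
    "real-time facial recognition in public spaces": RiskTier.UNACCEPTABLE,
    "AI managing a city's water supply": RiskTier.HIGH,
    "general-purpose chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```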

Technology that falls under the unacceptable category will be banned outright. This includes real-time biometric identification systems such as facial recognition, as well as systems that can manipulate behaviour, such as AI-powered games that learn from personal vulnerabilities to influence decision-making. High-risk applications will have to be assessed before being placed on the market and must undergo recurring assessments; AI applications meant to operate in critical infrastructure (water supply, energy, roads, etc.) also fall into the high-risk category.

To put this in practical terms: during the COVID-19 pandemic, for example, specialists in water management could work from home and do their job at the push of a button, because water quality and access to water were managed through AI-operated technology. This comes with risks, and hence it makes a lot of sense for there to be uniform standards on how the technology is used across the board.

Generative AI, the most popular example being ChatGPT, falls into the limited risk category and will have to comply with rules on transparency and the legality of information. This category also covers the recent trend of image-manipulation apps that let you look like a Disney princess or a Star Wars character. Lastly, the minimal risk category includes spam filters and AI-enabled video games, which face close to no regulation.

Voices in the arena: perspectives and preferences

There are a lot of different stakeholders here. Among the EU institutions, the EC puts innovation at the forefront, while the Parliament pushes for stronger safety considerations for the average consumer. Member states stress matters of law enforcement and national security, such as the use of facial recognition, so there is also a battle over who has, or should have, competence: the national or the European level. Under the currently envisioned Act, facial recognition can be used for security purposes only in grave circumstances and with specific permission from the relevant court. However, security and its enforcement are a field of national competence, so there have been discussions about whether the EU can actually address this in a regulation. Governments have put forward demands for more leeway in the use of the technology.

On the other hand, industry representatives are reluctant to accept too much regulation, out of concern that it would hinder innovation and growth opportunities. Here we speak not only of European companies but also of corporate giants like Google and Meta, formerly known as Facebook. Given that Europe is among the biggest consumer markets and big non-EU companies have a significant foothold on the continent, they will be subject to these regulations if they wish to continue providing their services. This is a longstanding EU strategy: to lead and influence technological development by setting regulatory standards, and thereby export its terms and conditions across the globe. One could argue it is even more about influencing these foreign giants than about the European industry, which mainly consists of small and medium-sized enterprises (SMEs). Ironically, when I asked ChatGPT to give me a list of the top 15 companies that develop AI, only one of them (in 15th place) was European – Siemens. In that sense, Chinese and American companies have a lot at stake in this debate, while at the same time being in a position to threaten the market. Could you imagine if we woke up tomorrow and there was no Facebook, Instagram, or TikTok?

Research facilities in the EU are also facing constraints, as EU-funded research has traditionally been conditional on the end result. According to some researchers, this leads to less ambitious research projects, for fear of not getting funding if the endeavour does not succeed. Civil society and consumer protection organisations are worried about data protection and privacy. In this excitingly new but crowded space with many voices, it is easy to see how direction might be lost in the noise. And no less important, technology is developing far faster than the notoriously slow EU machine can possibly create and enforce rules. So at the end of the day, we are left with the question: even if the EU pushes the AI Act through, will that be enough?

Outlining the challenges and problems

While it is important to regulate AI to ensure the ethical development and use of the technology, several challenges arise in doing so:

Defining AI

The EU defines AI as “the ability of a machine to display human-like capabilities such as reasoning, learning, planning and creativity”. Yet this is hardly an adequate definition to serve as the basis of the most comprehensive law in this constantly evolving field. Where do we put machine learning? What is the scope of AI technologies? Before there is comprehensive legislation, these fundamental ambiguities must be addressed, as they bear directly on the ability to categorise AI into the risk tiers and on what exactly the criteria would look like.

Companies afraid of losing competitiveness

Not surprisingly, the business world is worried that rigid rules will hamper innovation opportunities and put Europe behind its strategic rivals, the United States and China. In an open letter, 150 European companies expressed concern that strict compliance will result in a critical productivity gap. Under the Act, they will not be able to use biometric data such as facial recognition and will have to present regular reports on the copyrighted data used to train their models, which will stall progress compared with foreign companies that do not have to comply with these regulations. While forbidding facial recognition would be a triumph for privacy watchdogs, the companies believe it hampers the potential to improve user experience and offer an array of new services, for example using facial recognition to track time and attendance while minimising the risk of impersonation. Thus, the EU would be trading its technological edge for excessive precaution. ‘Better safe than sorry’ was a popular phrase during COVID-19, but the free-market economic model does not work like that. It is a game that requires taking strategic risks rather than eradicating them at the initial stage.

The fight against disinformation 

Disinformation has become a top priority and something of a buzzword in recent years. In the AI debate, it is not exactly clear how the Act would tackle the possibility of chatbots producing misinformation. There are measures on transparency and legality, but not on how to ensure that only factual information is generated. There is also the underlying problem that these models feed off and learn from content on the internet, so if particularly skewed information is out there (in media outlets, for instance), chances are the AI will have a built-in bias. One obvious implication is the further circulation of misinformation in response to a question posed by the user, especially in very specific fields where information is scarce to begin with. Another is the possibility that chatbots learn to discriminate and to project polarising political opinions. Examples include an Amazon recruiting project in which the AI discriminated against women's résumés simply because it was trained mostly on men's résumés and inferred that male candidates were preferred. Another famous case in point was Microsoft's Tay bot, which was supposed to learn how to post on social media and ended up tweeting offensive and discriminatory content.

Can we regulate the past?

While the Act indeed aims to be all-encompassing and comprehensive, AI, and especially generative AI, has been developing rapidly. Under the regulation, companies will have to disclose the copyrighted data used to train their models from now on, but what about all the data that was collected before? Also, since there has so far been no requirement to disclose the mechanisms of data collection, how can we ensure that AI, and especially ChatGPT as the most advanced mass consumer application, is not storing personal data? In its newest features, its developer, OpenAI, invites us to take a picture of our fridge so it can tell us what to cook for dinner. If you have ever typed your name or explained your life situation to ChatGPT, do you actually know whether this information was promptly deleted? It was probably stored precisely for training purposes, especially in the beginning. In that vein, even with the new regulation, what falls beyond copyrighted data might remain under the radar of the envisioned European Artificial Intelligence Board.

What to make of all this?

Back when the Act was proposed in 2021, the EC’s Executive Vice-President for a Europe Fit for the Digital Age, Margrethe Vestager, famously said: “On artificial intelligence, trust is a must, not a nice to have”. While she was pointing to the importance of making sure the technology is trustworthy, there is also the underlying dimension of trust between us as citizens and those who make and implement the rules. The race to regulate a constantly changing landscape is very challenging. Therefore, the effort to come up with a robust working framework should be complemented by a strengthened societal dialogue. More effort in the digital strategy should go into properly educating the public about the usage and implications of AI, so that people can safely use it to its full potential as a mass product.

The multi-level dialogue between different stakeholders should be further stimulated so that soft-law instruments, such as a general consensus or common targets, can be set, since traditional-style regulation is always hard to agree on and slow to implement. In the meantime, the players should get together and discuss alternative approaches. A good example is the UK summit on AI safety held at the beginning of November, which brought world leaders and tech leaders together to deliberate. It resulted in the signing of a declaration aspiring to joint management of AI-related risks, signed even by China. When facing a global phenomenon, we need global action, which is not always easily achievable through traditional ways of governing.

Iva Dzhunova holds a BA degree in International Relations from the University of Groningen and is now pursuing an MA in European Policy at the University of Amsterdam.

Image: Shutterstock