Artificial intelligence
How the AI Act aims to regulate artificial intelligence in the EU
25 January 2024
Artificial intelligence is playing an ever more important role in our lives – and it was doing so long before all the hype surrounding ChatGPT. Yet alongside its many opportunities, AI is also associated with a variety of risks. The Artificial Intelligence Act (AIA) is intended to provide a future regulatory framework for artificial intelligence within the EU. The European Parliament and the EU member states agreed on a draft version in December 2023, and the AI regulation is set to be adopted early this year. TÜViT expert Vasilios Danos explains what the AI Act is going to change.
Mr Danos, how is the AI Act going to be used to regulate artificial intelligence in the EU; in other words, what is it going to change?
Vasilios Danos: As far as citizens are concerned, the AI Act will first and foremost create greater transparency about where artificial intelligence is used and offer the added assurance that AI will not infringe their rights. Companies, providers and authorities which use, introduce or develop AI applications will in future be obliged to consider the risks and impacts these applications may have and how they can be minimised. The action to be taken will depend on the risk class to which their AI application is assigned.
The AI Act distinguishes between four risk classes. What exactly are these?
The lowest level is the “Low Risk” category. This includes things like spam filters and AI avatars in video games – in other words, applications which pose no risk of physical harm, infringement of rights or financial damage. These applications are largely exempt from the requirements. The next level is “Limited Risk”. This covers AI systems which interact with users, such as simple chatbots. Applications of this kind will in future have to make it explicit to users that they are dealing with an AI and not a human being. Deepfakes – manipulated videos of famous people, for instance – and other AI-generated content will also have to be labelled as such. The third level is the “High Risk” category. This covers biometric access systems which use facial recognition to identify people, or AI systems which are used to automatically screen job applications. With the latter, the possibility that applicants may be discriminated against and excluded on the basis of their name, for example, must be ruled out. AI-controlled industrial robots which might harm people if they malfunction and certain types of critical infrastructure, such as telecommunications, water and electricity supplies, are also classed as High Risk. The AI Act therefore contains a large number of strict provisions to cover this risk level.
And what sanctions apply in the event of breaches of the regulations?
Infringements are punishable with considerable fines. The use of banned AI applications can incur a fine of up to 35 million euros or seven percent of the company’s global annual turnover, whichever is higher. Other breaches can cost up to 15 million euros or three percent of annual turnover. It is therefore essential for companies to assign their AI applications to the correct category and implement the necessary measures. However, many of them will find it virtually impossible to do so by the deadline without outside help – all the more so given that testing AI systems, something my team and I have been working on for quite a few years now, is completely new terrain for most companies. Unlike with traditional software, there are no established test tools or best practices to fall back on.
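To put those caps into figures: a minimal sketch in Python, using only the thresholds cited above. The function name and the example turnover figure are our own illustration, not anything defined in the AI Act itself.

```python
def max_fine_eur(global_annual_turnover_eur: float, banned_ai: bool) -> float:
    """Illustrative upper bound on an AI Act fine.

    Tiers as cited in the interview: use of banned AI practices -> up to
    EUR 35 million or 7 % of global annual turnover; other breaches -> up
    to EUR 15 million or 3 %. The higher of the two amounts applies.
    """
    fixed_cap, turnover_share = (35_000_000, 0.07) if banned_ai else (15_000_000, 0.03)
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)


# Example: a company with 2 billion euros in global annual turnover that
# uses a banned AI application faces a cap of max(35 M, 140 M) = 140 M euros.
print(f"{max_fine_eur(2_000_000_000, banned_ai=True):,.0f} euros")
```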
Are ChatGPT or other AI-based chatbots also covered by the AI Act?
By the time ChatGPT was introduced at the end of 2022, the deliberations on the AI Act were already at a relatively advanced stage. The EU responded to this development by creating a separate category for these “foundation models”. Foundation models are AI models which can be used for a wide variety of purposes – hence the term “General Purpose AI”, or GPAI for short – and which therefore have an impact on a large number of people. The AI Act requires providers of these models to meet far-reaching transparency requirements, for example by disclosing the origin of the training data used and, where applicable, arranging for independent third parties to test their models for security.
When is the AI Act due to come into force?
If everything goes according to plan and the final wording is completed, the AI Act will be adopted this spring. There will then be a transition period within which the regulation will be incorporated into national legislation. The bans on prohibited AI are expected to take effect after just six months, and the requirements for ChatGPT and other foundation models after one year. A transition period of two years is envisaged for the remaining risk classes.
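Purely to illustrate those offsets: a minimal sketch in Python. Only the six-, twelve- and 24-month intervals come from the interview; the entry-into-force date below is an assumption, since the actual date depends on when the final text is adopted and published.

```python
from datetime import date


def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to the 1st for simplicity)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, 1)


# Hypothetical entry-into-force date; the intervals are those cited above.
entry_into_force = date(2024, 6, 1)
deadlines = {
    "bans on prohibited AI": add_months(entry_into_force, 6),
    "requirements for GPAI / foundation models": add_months(entry_into_force, 12),
    "remaining risk classes": add_months(entry_into_force, 24),
}
for rule, deadline in deadlines.items():
    print(f"{rule}: applies from {deadline}")
```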
How will companies have to prepare for the new regulations?
It will basically go like this: companies won’t get a letter from an authority assigning their AI application to one of the risk categories. Instead, the onus will be on them to classify it correctly themselves and to take the necessary measures – and this will have to happen before the AI application is even launched or used. If an AI system falls into the High Risk category, for example, a whole raft of requirements will need to be met; the robustness and security of the system may then have to be validated by an independent third party like TÜViT. Moreover, companies will be required to implement an AI (risk) management system covering the application’s complete life cycle: How was the AI system developed? What was the quality of the training data? What risks are associated with the system? How was the AI validated and tested? ISO standards have already been published for certain aspects, such as AI management systems. Testing standards for other aspects, such as security and transparency, are currently being drafted. We at TÜV NORD are also represented on the committees in which the general requirements of the AI Act are being fleshed out in detail for particular fields of use. AI applications in mechanical engineering, medicine and telecommunications are subject to different factors and challenges, and these are reflected accordingly in the tests.
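As a rough illustration of this self-classification duty, here is a minimal Python sketch. The four category names follow the risk classes described in this interview, but the `required_actions` helper and its condensed list of obligations are our own summary, not the AI Act’s actual legal test.

```python
from enum import Enum


class RiskClass(Enum):
    UNACCEPTABLE = "unacceptable risk"  # e.g. social scoring, workplace emotion recognition
    HIGH = "high risk"                  # e.g. biometric access systems, CV screening
    LIMITED = "limited risk"            # e.g. chatbots, AI-generated content
    LOW = "low risk"                    # e.g. spam filters, AI avatars in games


def required_actions(risk: RiskClass) -> list[str]:
    """Condensed, illustrative summary of the obligations per risk class."""
    if risk is RiskClass.UNACCEPTABLE:
        return ["do not place on the market or use: banned outright"]
    if risk is RiskClass.HIGH:
        return [
            "document development and training-data quality",
            "operate an AI risk management system over the full life cycle",
            "validate robustness and security, possibly via an independent third party",
        ]
    if risk is RiskClass.LIMITED:
        return [
            "disclose to users that they are interacting with an AI",
            "label deepfakes and other AI-generated content",
        ]
    return ["largely exempt from the requirements"]


for action in required_actions(RiskClass.HIGH):
    print("-", action)
```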
And what is the fourth and last risk class all about?
This is the “Unacceptable Risk” category. It covers AI systems which pose a clear threat to fundamental rights, such as those which automatically analyse our behaviour or are used for manipulation. AI applications of this kind are therefore banned outright by the AI Act. They include emotion recognition systems in the workplace and what are known as social scoring systems, which countries like China use to evaluate and monitor people on a massive scale by means of AI.
Is the AI Act capable of getting AI fully under control, or are there still loopholes which will need to be closed in the future?
As things stand, the AI Act should indeed cover all the widely used AI applications. But the field of artificial intelligence is developing at lightning speed: many AI researchers hadn’t expected anything as powerful as ChatGPT to emerge for another 30 years. New AI technologies may well be developed that aren’t covered by the AI Act. So if we’re going to keep pace with this breakneck development and ensure that we can continue to regulate it, we will have to stay alert and be able to act fast.
This is an article from #explore. #explore is a digital journey of discovery into a world that is rapidly changing. Increasing connectivity, innovative technologies, and all-encompassing digitalization are creating new things and turning the familiar upside down. However, this also brings dangers and risks: #explore shows a safe path through the connected world.