31 July 2025
With the AI Act, the EU has launched a comprehensive regulation to get to grips with Artificial Intelligence: since February 2025, it has been categorically forbidden to use AI systems for the purpose of manipulation or in any other way that might threaten fundamental rights. The start of August will see the launch of the next stage in the implementation of the AI Act. Thora Markert, AI expert at TÜVIT, explains which AI systems will be affected, which requirements they will have to meet and what will happen next with the AI Act.
#explore: Ms. Markert, the implementation of the AI Act in the EU will enter its second phase on 2 August. What will change?
Thora Markert: On this date, the requirements for general-purpose AI models (or GPAI for short) will enter into force. These affect systems that are designed for general purposes, can carry out a wide range of tasks and may as a result have a major impact across society at large. This includes large language models like ChatGPT, Google’s Gemini and Meta’s Llama, but also AI systems that generate images.
Which requirements will the providers of such systems have to meet going forward?
All providers of GPAI models like these will have to register with the EU, create detailed technical documentation for their models and make themselves available for inspection by the supervisory authority on request. The EU’s most urgent aim is to make these systems more transparent for the supervisory authorities and for AI users: providers will, for example, have to disclose the provenance of their training data as well as the computing resources used and the associated energy consumption, so the sustainability of these systems also comes into focus. Users will have to be informed in clearly comprehensible terms about the capabilities and limits of any given system, which means it must be made obvious what kinds of mistakes such AI systems can make. It will be particularly important for AI-generated images, videos, audio recordings and texts to be clearly marked as such, by using a watermark, for instance. The recently published General Purpose AI Code of Practice offers practical guidance on implementing these statutory duties: it explains in concrete terms how the stipulations for GPAI models can be met with respect to transparency, copyright and safety & security.
As far as these general-purpose AI models are concerned, the AI Act distinguishes a further level: GPAI models with systemic risk. What does this involve?
This category includes models whose training exceeds a specific computing threshold (currently 10²⁵ floating-point operations) or which meet other criteria: for example, models that are used by large numbers of people and have such a broad range of applications that they might have a damaging impact on society, democracy, the economy or the fundamental rights of EU citizens. This is why they must satisfy additional, even stricter requirements: providers must test their models intensively and prove to the EU that they have done so; they have to identify and mitigate possible systemic risks as well as implement sufficient cybersecurity measures. After all, if systems like these were hacked and manipulated, this could cause major harm to society as a whole. Moreover, providers must have comprehensive governance in place for their systems. That means they must document in detail how the system has been developed and trained, which data were used and how the system is to be trained and updated in the future.
© Adobe Stock. There is still some way to go, but from August 2027 the AI Act will also cover high-risk systems, such as networked machines for Industry 4.0.
© Adobe Stock. Thanks to the EU AI Act, language models such as ChatGPT will become more transparent for supervisory authorities and users in future.
And which AI systems fall within this category? In other words, which of them come with a systemic risk?
Definitely the large language models like ChatGPT and Gemini that I mentioned before. Smaller language models with fewer parameters and more limited capabilities don’t fall into this category. Some aspects of the AI Act will only become clear in the course of ongoing implementation and practical experience, which means you need to look very closely at the system in question to determine whether it carries a systemic risk and is required to satisfy the relevant conditions.
What sanctions can be expected in the event of violations of the Act?
Violations can be penalised with fines of up to 35 million euros or seven percent of the company’s global annual turnover, whichever is higher. Considerable amounts of money, in other words. We help companies clarify the extent to which they are affected by the AI Act and which requirements they have to meet as providers or users. Companies that use AI are obliged, for example, to train their staff in the use of these systems.
What happens next in the implementation of the AI Act?
The next stage will come into effect next year: as of 2 August 2026, the AI Act will also cover high-risk systems, although this will be restricted to systems that aren’t already regulated by other EU legislation. These include biometric systems as well as AI systems used in critical infrastructure, in border control or in asylum processes, for example to automatically sift through asylum applications, as these might disadvantage certain people. But general-purpose AI systems can also be classed as high-risk and will then additionally have to satisfy the requirements for this AI class: providers will, for instance, need to implement an AI (risk) management system that covers the application over its entire lifecycle, and they’ll also be obliged, if required, to allow independent third parties like TÜVIT to test their systems for robustness and security. In the last phase, as of August 2027, the AI Act will also include high-risk systems that are already covered by other EU regulations: these include medical products, applications for aerospace and shipping, lifts, electronic toys and networked machinery used in Industry 4.0.
About Thora Markert
Thora Markert is director of the AI Research and Governance division at TÜVIT. The computer scientist deals mainly with the reliability and IT security of AI; she has developed a test environment for Artificial Intelligence and scrutinises systems herself in tests.
When ChatGPT kicked off the rapid rise of general-purpose AI at the end of 2022, the AI Act was already pretty far advanced. Would it be fair to say that the EU has reacted extraordinarily quickly to this new development?
Definitely! The paragraphs concerning these GPAI models were incorporated into the very last drafts of the AI Act. In other words, the EU reacted quickly, and in the way it designed the AI Act it also responded appropriately to the dynamic technological developments in this field. The Act was deliberately designed not to be static! The idea is for the regulations to keep pace with developments and, if need be, to be adapted and expanded to cover new AI systems and developments. The EU is currently assembling a large team of independent AI specialists to advise the Commission on the implementation of the AI Act and to draw its attention to possible new risks.