Artificial Intelligence
Which AI systems must meet which requirements starting in August.
July 31, 2025
With the AI Act, the EU has set comprehensive regulation of artificial intelligence in motion: since February, AI systems that are used for manipulation or that otherwise threaten fundamental rights have been categorically banned. At the beginning of August, the next stage of the AI Act's implementation takes effect. Thora Markert, AI expert at TÜVIT, explains which AI systems are affected, which requirements they must meet, and how the AI Act will continue to roll out.
#explore: Ms. Markert, on August 2, the implementation of the EU's AI Act enters the next phase. What changes?
Thora Markert: From this date, the requirements for so-called general-purpose AI models (GPAI for short) come into force – systems that serve a general purpose, can handle a great many tasks at once, and could therefore have a very large impact on society as a whole. This includes large language models like ChatGPT, Google's Gemini, or Meta's Llama, but also AI systems that can generate images.
What requirements must the providers of such systems fulfill in the future?
All providers of such GPAI models must register with the EU, create detailed technical documentation of the model, and make it available to the supervisory authority upon request. The EU's primary goal is to make these systems more transparent for supervisory authorities and users. Providers of such an AI system must, among other things, disclose the origin of the training data as well as the computing resources used and the energy consumed – so it is also about the sustainability of these systems. Users must be informed in an understandable way about the capabilities and limitations of the system; it must become clear what errors the AI could produce. Particularly important: AI-generated images, videos, audio, or texts must be clearly marked as such, for example through a watermark. Practical guidance on implementing these legal obligations is provided by the recently published General-Purpose AI Code of Practice, which contains concrete explanations of how the requirements for GPAI models regarding transparency, copyright, and safety and security can be met.
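To illustrate what such a label could look like in practice – this is a hypothetical sketch, not a mechanism prescribed by the AI Act or the Code of Practice – the following Python example embeds a machine-readable provenance note into a generated PNG using Pillow's standard metadata text chunks. The field name ai_generated and the label text are assumptions chosen for illustration.

```python
# A minimal sketch of labeling an AI-generated image via PNG metadata.
# The field names and label text are illustrative assumptions; the AI Act
# does not mandate this specific mechanism. Requires: pip install Pillow
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(image: Image.Image, path: str, generator: str) -> None:
    """Save an image with a provenance note marking it as AI-generated."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")       # hypothetical field name
    meta.add_text("generator", generator)       # e.g. the model that produced it
    meta.add_text("disclosure", "This image was generated by an AI system.")
    image.save(path, pnginfo=meta)

def is_labeled_ai_generated(path: str) -> bool:
    """Check whether a PNG carries the illustrative AI-generation marker."""
    with Image.open(path) as img:
        return img.text.get("ai_generated") == "true"

if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), color="gray")  # stand-in for model output
    label_as_ai_generated(img, "output.png", generator="example-image-model")
    print(is_labeled_ai_generated("output.png"))    # True
```

Plain metadata labels like this are easy to strip, which is why the public discussion around the Code of Practice often centers on more robust techniques such as statistical watermarks embedded in the content itself; the sketch only shows the disclosure idea at its simplest.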
The AI Act distinguishes a further tier among these general-purpose AI systems: GPAI models with systemic risk. What's behind this?
This category includes models that reach a certain threshold of computational operations during training – under the AI Act, 10^25 floating-point operations – or that meet other criteria: models used by very many users, with a very broad range of applications, which can therefore have harmful effects on society, democracy, the economy, or the fundamental rights of EU citizens. They must therefore fulfill additional, stricter requirements: providers must test their model intensively and document these tests for the EU; they must identify and mitigate possible systemic risks; and they must implement adequate cybersecurity measures, because if such systems were hacked and manipulated, this could cause great damage to society as a whole. In addition, providers must maintain comprehensive governance of their systems. This means they must document in detail how development and training took place, which data was used, and how the system is further trained and updated.
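To give a feel for the scale of that compute threshold, here is a rough back-of-the-envelope calculation in Python. It uses the common "6 × parameters × training tokens" approximation for training FLOPs, which is a community rule of thumb rather than part of the regulation, and the model sizes are invented for illustration.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOP threshold.
# The 6 * N * D approximation and the example model sizes are illustrative
# assumptions, not part of the regulation itself.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # AI Act presumption threshold

def training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate training compute via the common 6*N*D rule of thumb."""
    return 6 * parameters * training_tokens

examples = {
    "hypothetical 70B-parameter model, 15T tokens": training_flops(70e9, 15e12),
    "hypothetical 200B-parameter model, 12T tokens": training_flops(200e9, 12e12),
}

for name, flops in examples.items():
    over = flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: {flops:.2e} FLOPs -> "
          f"{'presumed systemic risk' if over else 'below threshold'}")
```

The threshold is only a presumption; as Markert notes, models can also fall into this category via the other criteria, independent of training compute.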
And which AI systems fall into this category, i.e., pose a systemic risk?
These certainly include the large language models already mentioned, such as ChatGPT and Gemini. Smaller language models with fewer parameters and more limited capabilities do not fall into this category. Some aspects of the AI Act will only crystallize through further implementation and practical experience, so one must look very closely at each individual system to determine whether it poses a systemic risk and has to fulfill the corresponding requirements.
What are the consequences of violations?
Violations can be punished with fines of up to 35 million euros or seven percent of worldwide annual turnover, whichever is higher – so we are talking about considerable sums. We support companies in clarifying to what extent they are affected by the AI Act and which requirements they must fulfill as providers or users. And anyone who uses AI in their company is obligated to train employees in handling these systems.
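As a quick illustration of how that fine ceiling scales with company size, the maximum applicable amount is the greater of the fixed cap and the turnover-based cap. The revenue figures in this short Python sketch are made up.

```python
# Illustrative calculation of the AI Act's maximum fine ceiling
# (up to EUR 35 million or 7% of worldwide annual turnover,
# whichever is higher). Turnover figures below are invented examples.

FIXED_CAP_EUR = 35_000_000
TURNOVER_SHARE = 0.07

def max_fine(annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given worldwide turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_turnover_eur)

for turnover in (100e6, 500e6, 10e9):  # EUR 100M, 500M, 10B
    print(f"turnover EUR {turnover:,.0f} -> max fine EUR {max_fine(turnover):,.0f}")
```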
How does the implementation of the AI Act continue?
The next stage takes effect next year: from August 2, 2026, high-risk systems will also be covered by the AI Act – initially only those that are not already regulated by other EU legislation, for example biometric systems, AI systems used in critical infrastructure, for border control, or in asylum procedures, and AI applications that automatically screen job applications, since these can disadvantage people. General-purpose AI systems can also be classified as high-risk and must then additionally fulfill the requirements for this class: providers must, for example, implement AI risk management that covers the application over its complete lifecycle and, where required, have the robustness and security of their systems tested by independent third parties such as TÜVIT. In the final stage, from August 2027, the AI Act will also cover high-risk systems that already fall under other EU regulations: for example medical devices, applications for aviation, shipping, and aerospace, elevators, electronic toys, and networked machines for Industry 4.0.
When the rapid triumph of general-purpose AI began with ChatGPT at the end of 2022, the development of the AI Act was already well advanced. Can one say that the EU reacted extraordinarily quickly to this new development?
Absolutely! The paragraphs on these GPAI models were incorporated into the latest drafts of the AI Act. The EU reacted quickly here and also accounted for the dynamic technological development in this field in the design of the AI Act: it is deliberately not static. It is meant to keep regulatory pace with development and to be adapted and expanded as needed, so that possible new AI systems and developments are also captured. The EU is currently building up a large team of independent AI experts to both advise the Commission on implementing the AI Act and alert it to new risks.
This is an article from #explore. #explore is a digital journey of discovery into a world that is rapidly changing. Increasing connectivity, innovative technologies, and all-encompassing digitalization are creating new things and turning the familiar upside down. However, this also brings dangers and risks: #explore shows a safe path through the connected world.