AI Act for Busy People: What You Need to Know in 5 Minutes


Artificial intelligence is quickly becoming an integral part of our daily lives – from the way we work to how we communicate. Such fundamental and rapid changes in society inevitably call for legislation that ensures a safe and fair framework for the technology's further development and use. That is why the European Union is introducing the AI Act, the world's first comprehensive regulatory framework governing the use of artificial intelligence systems. For many, such legislation brings uncertainty, so below I will attempt to summarize its content simply and clearly.

The regulation's goal is to harmonize the rules in the field of AI and to ensure that the technology is safe, trustworthy, transparent, and non-discriminatory – the four basic objectives set out in its explanatory memorandum. Although the regulation establishes a number of obligations and regulates the use of AI systems, it does not ban AI across the board. Instead, it introduces a risk-based approach: the greater a system's impact on the health, safety, or fundamental rights of EU citizens, the stricter the rules. In this way, the regulation harmonizes the rules on artificial intelligence with the aim of ensuring a well-functioning internal market for AI systems.

Who It Applies To

The AI Act applies to both public and private entities, even those outside the EU, if they place an AI system on the European market or deploy it there. For company management, it is crucial that the AI Act takes a risk-based approach: the obligations are tiered according to how risky the AI system is that the company develops, provides, or merely uses.

For the average user, however, the AI Act changes virtually nothing; it contains almost no obligations for EU citizens. On the contrary, it guarantees ordinary users that AI systems are used responsibly and in accordance with fundamental human rights. The AI Act does not aim to limit technological development or the average citizen's access to it; it is therefore limited to the minimum requirements necessary to address the risks and problems associated with the use of AI. At the same time, the regulation is meant to allow the EU to maintain its leading position in the field of technology.

How the AI Act Works (Risk Levels)

According to the explanatory memorandum, this regulation is needed especially in high-impact sectors such as climate, the environment, health, the public sector, finance, and agriculture. To protect them, the AI Act operates with a mechanism of four risk levels, defining four categories ordered from highest to lowest risk:

Prohibited Systems

Systems in this category are regulated in Title II of the regulation. They pose unacceptable risks to humans, and their use is therefore prohibited. This category rightly includes, for example, systems that use manipulative or exploitative practices, systems that (with certain exceptions) use "real-time" remote biometric identification in publicly accessible spaces for law enforcement purposes, and systems used for social scoring. Simply put, these are systems that are contrary to Union values.

High-Risk Systems

High-risk systems are permitted but subject to strict rules, which apply both before the system is placed on the market and throughout its entire lifecycle. The regulation imposes requirements on the system itself as well as obligations on its providers and users, in order to increase safety and ensure legal compliance – particularly requirements regarding data governance, documentation and record-keeping, transparency and the provision of information to users, human oversight, robustness, accuracy, and security. Such systems include, for example, tools that help manage aircraft, subways, or energy supplies.

Limited Risk Systems

This category includes the AI systems we commonly know and use, such as the generative AI models ChatGPT, Google Gemini, and others. It covers systems that interact with people, systems using emotion recognition, and systems that generate or manipulate content (so-called deepfakes). The main requirements for these systems concern information and transparency, so that users are aware that they are communicating with an AI system.

Minimal Risk Systems

The last category, minimal-risk systems, is a residual one: it includes all other AI systems, on which the regulation places no requirements. Essentially unregulated are, for example, AI spam filters or AI used in video games.

Although the AI Act is a complex piece of regulation, for most Czech businesses it will not mean an immediate and direct administrative burden. Most of the new obligations fall primarily on providers and developers of AI software, mainly for systems classified as high-risk.

However, for companies that do not develop artificial intelligence themselves but only use ready-made tools from external suppliers, it is crucial to focus on the security of company data and to verify that the deployment of these technologies complies with all legal regulations.

