AI Act Glossary: Do You Speak the Regulator's Language?
Artificial intelligence has become a tool without which it is hard to imagine today's world functioning. It helps us in everyday life and offers a major advantage in business: more and more companies use it to simplify processes, automate routine tasks, and increase their efficiency. It is necessary to realize, however, that the benefits AI brings also come with many risks.
The European Union responds to this with its regulation on artificial intelligence, known as the AI Act – the first comprehensive legal regulation of artificial intelligence in the world. Although it entered into force on August 1, 2024, it becomes fully applicable only on August 2, 2027; key provisions and obligations become enforceable gradually, at earlier, precisely defined dates. The aim of the regulation is to create a unified legal framework for the development and responsible use of artificial intelligence. It seeks to support development and innovation in the field of AI while emphasizing safety and the protection of citizens' rights, so that systems are safe, trustworthy, transparent, and non-discriminatory.
Knowledge of AI Act terminology is essential for companies – it is the first step toward correctly classifying their systems, understanding the scope of their obligations, and ensuring use in accordance with European legislation. Article 3 of the regulation alone defines 68 terms that form the legislative foundation of the entire text. In this article, we focus on the most important ones and explain exactly what they mean.
AI System
The first and most fundamental term is the AI system. The official definition is very broad: a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Simply put, an AI system is any program or model that automatically generates outputs from input data. Such a system operates with a certain degree of independence and can often learn and adapt (so-called adaptiveness). The legislator deliberately drew the definition this broadly so that it covers everything from simple rule-based tools to advanced neural networks.
Today, we come into contact with AI systems on a daily basis: unlocking a phone with Face ID, spam filters that sort emails into inbox and junk, social network algorithms that offer us tailored content, and voice assistants such as Siri.
Key Roles in the AI Ecosystem
Provider
A provider is a natural or legal person who develops an artificial intelligence system and then places it on the market under their own name or trademark. Providers have the greatest control over the development of the AI system, so the legislator imposes the widest range of obligations on them. They bear primary responsibility for ensuring that their AI systems meet all requirements under the regulation: before offering a system on the market, they must assess the risks it brings, prepare documentation, and, where required, obtain certification.
An example would be a company that has created AI software for automatically evaluating job applicants' resumes. This company is a provider – it created an AI system and offers it as a product to other companies to streamline recruitment.
Deployer
A deployer is a natural or legal person who uses an AI system under their authority in the course of a professional activity. Personal, non-commercial use does not fall into this category. The deployer is responsible for the correct and ethical use of AI in accordance with the rules: they must follow the provider's instructions, must not use the system for prohibited purposes, and so on.
Earlier drafts of the AI Act used the general term "user." That designation was confusing, as it gave the impression that the rules and responsibilities applied to everyone, including ordinary people using AI for their personal needs. The final text therefore switched to the term deployer, which better captures the professional use of AI.
A deployer can be, for example, a hospital that starts using an AI system to analyze X-rays. The hospital deploys the system into operation, trains its staff in its use, and sets up the surrounding processes. As a deployer, the hospital is responsible for the AI tool being used correctly.
Distributor
A distributor is an entity in the supply chain that makes an AI system available on the market without being its provider or importer. Distributors do not participate in development in any way, nor do they change the properties of the AI system; their role is purely to resell or supply the product from the provider to end customers. Obligations come with this role too – the distributor must, for example, verify that the product bears the required marking and is accompanied by the necessary documentation.
For example, a company that buys licenses for an AI-based smart home security system from a provider and then sells them to end customers (installation companies, individuals) is a distributor.
Importer
An importer places an AI system on the European Union market that comes from a provider established outside the EU. If a system is developed by a company in the USA with no branch in the EU, someone must bring the product into the EU and ensure its compliance with EU rules. The importer's obligations are very similar to the provider's.
Operator
Operator is a broader umbrella term that covers providers, deployers, importers, and distributors. The regulation uses it when it wants to address the obligations of everyone involved in the AI lifecycle.
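To make these roles and their differing duties easier to keep apart, here is a minimal, purely illustrative Python sketch. It is our own simplification, not anything prescribed by the regulation; the obligation lists merely echo the examples above.

```python
from enum import Enum

class Role(Enum):
    """Illustrative model of the operator roles under the AI Act;
    "operator" is the umbrella term covering all of them."""
    PROVIDER = "provider"        # develops the system, places it on the market
    DEPLOYER = "deployer"        # uses the system for professional purposes
    DISTRIBUTOR = "distributor"  # resells the system without modifying it
    IMPORTER = "importer"        # brings a non-EU provider's system into the EU

# Simplified example obligations per role, echoing the descriptions above;
# the actual obligations in the regulation are far more detailed.
EXAMPLE_OBLIGATIONS: dict[Role, list[str]] = {
    Role.PROVIDER: ["assess risks", "prepare documentation", "obtain certification where required"],
    Role.DEPLOYER: ["follow provider instructions", "avoid prohibited uses", "train staff"],
    Role.DISTRIBUTOR: ["verify required marking and documentation"],
    Role.IMPORTER: ["ensure non-EU systems comply with EU rules"],
}

def obligations_for(role: Role) -> list[str]:
    """Look up the example obligations for a given operator role."""
    return EXAMPLE_OBLIGATIONS[role]

print(obligations_for(Role.DEPLOYER))
```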
Risk
The AI Act is built on a so-called risk-based approach: it assesses what level of risk the use of a given AI system may pose to society. Risk is defined as the combination of the probability of harm occurring and the severity of that harm. The regulation introduces four categories of AI risk – the higher the category, the stricter the conditions a system must meet (see the sketch after the list below).
- Minimal risk: Most AI systems fall into this category and pose no significant danger. The AI Act does not regulate them in any way; this includes, for example, AI in video games.
- Limited risk: Systems in this category are subject only to mild transparency obligations – people must be made aware that they are communicating with artificial intelligence. AI chatbots that answer customer questions on websites typically fall here.
- High risk: This category includes AI systems that could adversely affect people's safety or fundamental rights – especially in education, migration and border control, or critical infrastructure (road transport, energy supply). Such systems are permitted but must meet strict requirements, including human oversight, transparency, and testing. An example is an AI system used to diagnose diseases in patients: a system error could harm the patient's health.
- Unacceptable risk: The European Union prohibits systems whose risk is classified as unacceptable, because they pose an intolerable threat to individuals and society as a whole. An example is an AI system that evaluates individuals based on their behavior and creates a "social credit score" determining their access to loans or job opportunities.
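The idea of risk as a combination of probability and severity, and the four-tier structure, can be sketched in a few lines of toy Python. This is only a conceptual illustration: the AI Act classifies systems by use case (the lists in its annexes), not by any numeric formula, and every name and number below is our own invention.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """The four risk categories of the AI Act; a higher tier means stricter rules."""
    MINIMAL = 0       # e.g., AI in video games: no obligations
    LIMITED = 1       # e.g., website chatbots: transparency obligations
    HIGH = 2          # e.g., medical diagnosis: strict requirements, human oversight
    UNACCEPTABLE = 3  # e.g., social scoring: prohibited outright

def risk_score(probability: float, severity: float) -> float:
    """Toy illustration of risk as a combination of the probability
    of harm occurring and the severity of that harm."""
    assert 0.0 <= probability <= 1.0 and 0.0 <= severity <= 1.0
    return probability * severity

# A hypothetical diagnostic system: errors are rare but very severe.
score = risk_score(probability=0.05, severity=0.9)
print(f"toy risk score: {score:.3f}")     # toy risk score: 0.045
print(RiskTier.HIGH > RiskTier.LIMITED)   # True: stricter obligations apply
```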
Lifecycle and Market
Placing on the market means the first supply of an AI system to the EU market – the moment an AI system is made available on the European Union market for the first time, typically by a provider. From this moment, the product must meet all applicable legal requirements.
Making available on the market means any supply of an AI system for distribution or use on the EU market in the course of a commercial activity. It therefore covers both the first placing on the market and all subsequent sales, transfers, rentals, or even free provision within a commercial activity.
Putting into service refers to the moment an AI system is taken into first use – whether a deployer starts using it for their activity or an organization deploys a system it developed itself. This allows the legislation to cover systems that were never offered on the market: even internally built AI systems are subject to the rules from the moment they are put to use.
Other Important Terms
Intended Purpose
Refers to the specific use for which the provider developed the AI system, based on what the provider states in the official information. We could say it is the AI system's "job description," given by its creator. A precise definition of the purpose the AI is meant to serve is the basis for assessing its risk and safety.
Substantial Modification
It is a change to the system that occurs after it has been placed on the market or put into service and which the original provider did not foresee. Such a modification can fundamentally change the intended purpose of the system, and thus its overall risk. A formerly low-risk system can thus become a high-risk system subject to different, stricter obligations. The AI Act therefore establishes a rule: whoever carries out such a substantial modification takes on full responsibility and all obligations of the original provider.
Post-Market Monitoring
To increase safety, the AI Act requires continuous monitoring of AI systems even after they have been placed on the market. Providers must collect and evaluate experience from the operation of their AI. If new problems or risks emerge in practice, they must take corrective measures: issue an update, warn users, or withdraw the system from the market.
General Purpose AI
Classic AI models usually focus on a single task (e.g., selecting job candidates). General-purpose AI models, by contrast, are large universal models trained on huge amounts of data, which allows them to perform a wide range of tasks across different areas. A typical example is the GPT model from OpenAI, on which the popular ChatGPT chatbot runs: such a model can answer questions, write texts, translate, program, create content, and more. The AI Act introduces specific obligations for these models – the provider of a general-purpose model must thoroughly document its training, describe what data it was trained on, and state what capabilities it has. General-purpose models are considered highly influential, so even though they do not automatically fall into the high-risk category, legislators pay increased attention to them and place stricter requirements on them.
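As a loose illustration of the kind of information such documentation might capture, here is a hypothetical sketch of a minimal model record in Python. The field names and the example entry are entirely our own; the regulation prescribes its own, far more detailed documentation requirements.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    """Hypothetical minimal record of the kind of information a provider
    of a general-purpose AI model is expected to document."""
    model_name: str
    training_data_summary: str  # what data the model was trained on
    capabilities: list[str] = field(default_factory=list)        # what the model can do
    known_limitations: list[str] = field(default_factory=list)   # documented shortcomings

# Example entry for a fictional general-purpose model:
doc = ModelDocumentation(
    model_name="ExampleGPT",  # invented name, not a real product
    training_data_summary="publicly available web text and licensed corpora",
    capabilities=["question answering", "translation", "code generation"],
    known_limitations=["may produce inaccurate statements"],
)
print(doc.model_name, doc.capabilities)
```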
Deep Fake
The AI Act defines a deep fake as artificially generated content (an image, video, or audio recording) that depicts persons, objects, or events and could falsely appear authentic to a person. The legislation does not ban deep fakes outright: they fall into the limited-risk category, which carries a transparency obligation – the content must be labeled so that it is clear it was created using artificial intelligence. The aim of these rules is to prevent the misuse of deep fakes for fraud and disinformation. A well-known case is an advertisement for dental services in which the face of actor Tom Hanks was used without authorization, generated by AI.
AI Literacy
AI literacy is the ability to understand and work with AI – knowing how it functions, what to expect from it, how to correctly interpret its outputs, and what rules and limitations apply to it. Article 4 therefore requires providers and deployers of AI to ensure that staff working with AI systems are sufficiently trained and aware of the risks and potential shortcomings.
The AI Act is the world's first comprehensive legal framework for regulating artificial intelligence in the European Union. The definitions explained in this article are the basis for navigating the regulation: they cover the core concepts of AI terminology, the differences between the entities that work with AI and their distinct obligations, and the lifecycle moments at which the legal regulation begins to apply. Understanding these terms is important for anyone using AI for professional or commercial purposes.