AI for Business May 04, 2026

The EU AI Act is here: what your company must sort out by August 2026

The EU AI Act affects every company that uses AI or runs a chatbot. We break down the deadlines, fines up to €35M and what you must sort out before 2 August 2026.

The EU AI Act is the strictest AI regulation in the world. Unlike GDPR, it doesn't give you a two-year grace period: the first penalties have been on the table since February 2025, and the main wave of obligations for businesses lands on 2 August 2026. Here are the numbers, the deadlines and, most importantly, what to actually do about it.

 


 

It's not the distant future. The clock is already ticking.

Remember GDPR? Companies laughed at it for two years, then 25 May 2018 came and panic set in. The AI Act is essentially the same script, only faster, stricter, and with a higher cap on fines. Some rules have been in force for over a year already, and the main wave for ordinary companies lands in a few months.

Countdown to the main AI Act deadline: 2 August 2026 — start of high-risk AI obligations and most general business duties.

Remember that date. It's not the first or the last AI Act milestone, but it's the one that affects the vast majority of companies that use or offer AI in any way.

 


 

What is the AI Act, really?

The AI Act (EU Regulation 2024/1689) is the first comprehensive law in the world to regulate artificial intelligence. It entered into force on 1 August 2024 and rolls out gradually in several phases.

Its logic is simple: AI systems are sorted into four risk categories. The higher the risk, the tougher the rules. Without classification, you can't tell what applies to you, so let's start there.

AI Act risk pyramid
Tier 1 — Prohibited
Unacceptable risk
Social scoring, manipulative AI, biometric categorisation, predictive policing. Fully banned since February 2025.
Tier 2 — High risk
Strict regulation
AI in recruitment, employee evaluation, credit scoring, healthcare, critical infrastructure, justice. Requires documentation, audits, human oversight.
Tier 3 — Limited risk
Transparency obligations
This is where chatbots, AI agents, image generators and deepfakes sit. Users must know they're interacting with AI or that the content was AI-generated.
Tier 4 — Minimal risk
No special obligations
Spam filters, AI in games, e-commerce recommendation engines. Voluntary codes of conduct are recommended, nothing more.

Most B2B and B2C deployments (chatbots, marketing AI, text generators, voicebots) fall into tier 3. That sounds lenient, but even here there are clear obligations that companies routinely break without realising it.

 


 

The deadlines you need to know

The AI Act didn't switch on overnight. It rolls out in phases, each adding new obligations.

 
 
1 Aug 2024: Entry into force. The AI Act officially becomes part of EU law.

2 Feb 2025: Bans + AI literacy. Prohibited practices apply. Duty to train staff in AI.

2 Aug 2025: GPAI rules. Obligations for LLM providers (OpenAI, Anthropic, Mistral...).

2 Aug 2026: Main wave of rules. High-risk systems + most obligations for businesses.

2 Aug 2027: Full applicability. All provisions in full force, including products with embedded AI.

If you've been watching the dates, you'll have noticed that two milestones have already passed. The one that matters for most companies is 2 February 2025, which brought with it the AI literacy duty, something the vast majority of companies have never even heard of. We'll get back to that.

 


 

Does it apply to my company at all?

The most common question we get. And the most common answer is: yes, almost certainly. Run through this quick decision tree.

Decision tree, three questions:

1) Do you use any form of AI or AI tool (ChatGPT, chatbot, text generator, AI in your CRM, voicebot...)?
No: The AI Act doesn't apply to you directly. But beware: the moment you start deploying any AI (even ChatGPT for writing emails), you're at least in tier 3. And the AI literacy obligation applies to every company that uses AI at all.

2) Does your AI communicate with people (customers, candidates, employees) or generate content?
No: Tier 4 — Minimal risk. Your AI runs in the background (e.g. recommendation engine, anti-spam). No special obligations, but you should still handle AI literacy and internal governance.

3) Does your AI make decisions about things like hiring, performance reviews, credit, healthcare, or access to education?
No: Tier 3 — Limited risk. This is where most chatbots, AI agents and content generators live. Main obligation: transparency. Users must know they're talking to AI or that the content was AI-generated. Plus AI literacy, documentation and internal usage rules.
Yes: Tier 2 — High risk. This is the toughest tier. Risk management system, technical documentation, logging, human oversight, quality data, audits, registration in the EU database of high-risk AI systems. If you're here, start now, not in July 2026.
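If you prefer the same logic in code, here's a minimal Python sketch of the decision tree above. It's an orientation aid we put together for this article, not a legal classification tool:

```python
# Minimal sketch of the decision tree above -- an orientation aid, not legal advice.
def ai_act_tier(uses_ai: bool,
                talks_to_people_or_generates_content: bool,
                makes_consequential_decisions: bool) -> str:
    """Map the three questions above to an indicative AI Act risk tier."""
    if not uses_ai:
        return "Out of scope for now -- but AI literacy applies the moment you adopt any AI"
    if makes_consequential_decisions:
        return "Tier 2 -- high risk (hiring, credit, healthcare, education...)"
    if talks_to_people_or_generates_content:
        return "Tier 3 -- limited risk (chatbots, AI agents, content generators)"
    return "Tier 4 -- minimal risk (background AI such as recommendations, anti-spam)"

# Example: a website chatbot that answers customer questions
print(ai_act_tier(True, True, False))  # -> Tier 3 -- limited risk
```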

 


 

Fines: same league as GDPR. In some cases, even higher.

When companies hear "AI Act", they always ask one thing. How much will it cost if we don't comply? Here's the answer, plain and simple.

Highest tier: use of a prohibited AI practice. Up to €35,000,000 or 7% of global turnover (whichever is higher).

Mid tier: breach of duties for high-risk systems. Up to €15,000,000 or 3% of global turnover.

Lower tier: misleading or false information to authorities. Up to €7,500,000 or 1% of global turnover.
 

For SMEs, the lower of the two amounts applies (flat amount vs. percentage of turnover). For large enterprises, it's the higher one. That's a key difference compared to GDPR.
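To make the SME rule concrete, here's a minimal Python sketch of how the two caps interact for the top fine tier. The €35M and 7% figures come from the Act itself; the function and the example turnovers are our own illustration, and the actual fine in any case is set by the supervisory authority:

```python
# Minimal sketch of the AI Act's upper fine limit for the top tier (prohibited practices).
# SMEs get the lower of the two caps, large enterprises the higher one.

def prohibited_practice_fine_cap(global_turnover_eur: float, is_sme: bool) -> float:
    flat_cap = 35_000_000                          # €35M flat ceiling
    turnover_cap = global_turnover_eur * 7 / 100   # 7% of worldwide annual turnover
    return min(flat_cap, turnover_cap) if is_sme else max(flat_cap, turnover_cap)

# An SME with €20M turnover is capped at €1.4M; a €2bn enterprise at €140M.
print(prohibited_practice_fine_cap(20_000_000, is_sme=True))        # 1400000.0
print(prohibited_practice_fine_cap(2_000_000_000, is_sme=False))    # 140000000.0
```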

 


 

I have a chatbot or AI agent. What exactly do I need to fix?

Now we get practical. If you have a chatbot, voicebot or AI agent on your website that communicates, acts or generates content on your behalf, tier 3 applies. Here are the four key areas.

 
Duty 1: Transparency
Users must clearly know they're talking to AI, not a human.
How to comply: Clearly label the chatbot as a "virtual assistant" or "AI". Add an opening message like: "Hi, I'm an AI assistant...". For voicebots, this must be spoken at the very start of the call.

Duty 2: Labelling AI content
Generated text, images and audio must be machine-detectable.
How to comply: For deepfakes, a visible label is mandatory. For AI-generated text and images, watermarking or metadata. Most reputable generators do this for you (C2PA, SynthID).

Duty 3: AI literacy
People who work with AI must know how to handle it. In force since 2 Feb 2025.
How to comply: A short internal training (1-2 hours) on how AI works, where it hallucinates, what data not to feed it, how to verify outputs. Keep a record of who was trained and when.

Duty 4: Documentation & governance
You need to know where AI runs in your company, who manages it, who's accountable.
How to comply: Keep an AI register (a simple spreadsheet works): which tool, for what, what data, who owns it. Plus an internal AI usage policy and updated DPAs with vendors covering AI responsibilities.

These four are the minimum that every company using AI or a chatbot has to handle. Regardless of size.
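For Duty 4, the "simple spreadsheet" really can be that simple. Here's a minimal Python sketch of the kind of columns such an AI register might have; the column names and the sample row are our own suggestion, not something the Act prescribes:

```python
# Minimal sketch of the AI register from Duty 4 -- the columns are our own suggestion.
# A shared spreadsheet with the same headings works just as well.
import csv

AI_REGISTER_COLUMNS = [
    "tool",             # e.g. website chatbot, ChatGPT, scoring module in the CRM
    "purpose",          # what it is used for
    "risk_tier",        # prohibited / high / limited / minimal
    "data_processed",   # what data goes in
    "owner",            # who is accountable internally
    "vendor_contract",  # DPA / AI clauses in place? reference
    "last_reviewed",    # date of the last review
]

with open("ai_register.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(AI_REGISTER_COLUMNS)
    writer.writerow([
        "Website chatbot", "Customer support FAQ", "limited",
        "name, e-mail, conversation text", "Head of Marketing",
        "yes (vendor DPA, 2025-03)", "2026-05-04",
    ])
```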

 


 

AI literacy: the silent duty you're probably already breaking.

 
Heads up — in force since February 2025
Article 4 of the AI Act says companies must ensure a sufficient level of AI literacy among their staff and contractors who work with AI systems. The vast majority of companies have never heard of this. It's not about certificates, it's about a basic understanding of AI and its limits.

In practice that means 1-2 hours of internal training (ideally documented), an internal "How we use AI" policy and a clear distinction between what may and may not be shared with AI. If you have 5 people, you can sort it out at Monday's stand-up.

 


 

Compliance roadmap: 6 steps that actually make sense

A systematic plan instead of panic. Here's the sequence that works for companies from 5 to 500 employees.

1. Inventory: list every piece of AI used in your company.
Not just the official tools. ChatGPT in browsers, Copilot in Office, AI inside your CRM, plug-ins, marketing generators, vendor voicebots. Everything in one spreadsheet.

2. Classification: place each system into a risk tier.
You've already seen the pyramid. For each tool answer: prohibited / high / limited / minimal. For edge cases, ask an expert or use the decision tree above.

3. Transparency: roll out AI disclosure everywhere AI touches users.
An opening message in your chatbot, an intro in your voicebot. Clear labels for image and text generators. Visibly mark deepfakes. (A minimal example of a chatbot disclosure follows below.)

4. AI literacy: train your team and document it.
An hour, two max. Cover LLM basics, hallucinations, data sensitivity, output verification. File the training record.

5. Contracts: review contracts with AI vendors.
Who is the provider, who is the deployer? Who carries liability? DPAs must cover AI. A lot of existing contracts don't address this at all.

6. AI policy: build it and keep it alive (a living document).
Rules for how the company uses AI. What's allowed, what isn't, who approves new tools. Short doc, max 2 pages. Review every six months.

None of these steps is rocket science. But if no one has tackled them so far, you won't pull it off in the last two weeks of July.
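To show how small the transparency step can be for a typical website chatbot, here's a minimal sketch of a disclosure shown at the start of every conversation. The config structure and field names are hypothetical; your chatbot platform will have its own settings:

```python
# Minimal sketch: the first thing a user sees is an AI disclosure.
# The config structure and field names are hypothetical -- adapt them to your platform.

CHATBOT_CONFIG = {
    "widget_title": "AI assistant",   # clear label in the widget header
    "opening_message": (
        "Hi, I'm an AI assistant. I can answer questions about our products and services. "
        "If you'd rather talk to a human, just ask and I'll hand you over."
    ),
    "human_handoff_enabled": True,
}

def conversation_start() -> str:
    """Return the disclosure shown before any other bot output in a new conversation."""
    return CHATBOT_CONFIG["opening_message"]

if __name__ == "__main__":
    print(f"[{CHATBOT_CONFIG['widget_title']}] {conversation_start()}")
```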

 


 

What the AI Act isn't

A lot of myths have piled up around the AI Act. Three we hear most often.

  • "The AI Act bans AI." It doesn't. It only bans a narrow set of practices (social scoring, manipulative subliminal AI, etc.) that have no place in regular business anyway.
  • "It only affects developers and big tech." Wrong. The term deployer in the Act covers anyone using AI in a professional context. That's every company running GPT-5 in marketing or running a chatbot on its website.
  • "It will be expensive like GDPR." Not necessarily. Tier 3 (chatbots, marketing AI) you can solve internally in tens of hours, not hundreds of thousands of euros. It's only expensive for tier 2 (high-risk systems in HR, finance, healthcare).

 


 

Frequently asked questions

Does the AI Act apply to us if we're not headquartered in the EU?
Yes. The AI Act applies to any company that operates in the EU, sells into the EU, or whose AI system has an impact on people in the EU. Headquarters don't matter. It works the same way as GDPR.

We use ChatGPT. Who carries the obligations, us or OpenAI?
Both. OpenAI is the provider and has obligations regarding the model itself. You are the deployer and are responsible for how AI is used inside your company (context, transparency to users, AI literacy, governance).

Our chatbot isn't labelled as AI. Is that a problem?
If it's not clear from context (widget name, icon, opening text) that this is AI, you're not compliant. Add an opening message, a clear label in the widget header, or "AI assistant" in its name. Takes ten minutes and saves you the headache.

Who enforces the AI Act?
Each EU member state designates national supervisory authorities (in the Czech Republic that's being set up under the Ministry of Industry and Trade, alongside the Data Protection Office and the Telecom Office). At EU level, the AI Office at the European Commission supervises general-purpose AI models (GPAI).

Can we use AI in recruitment?
You can, but it's high risk. That means documentation, human oversight on decisions, no automated rejection of candidates, transparency to applicants, quality data, audits. Many companies prefer to use AI only for support tasks (CV parsing) and leave the final decision to a human.

How much does compliance cost?
If you have a chatbot and standard AI tools (tier 3), realistically tens of hours of work spread across the year. Inventory, training, transparency tweaks, AI policy. External consulting in the low single-digit thousands of euros. For high-risk systems (tier 2) it's hundreds of thousands, plus an ongoing audit.

 


 

We can help you with this

 
Chatbot.Expert × AI Act
We'll build your AI chatbot or AI agent so it complies with the AI Act from day one.
We deliver implementations where transparency, AI literacy materials for your team, governance documentation and updated DPAs are part of the package. No surprise "oh wait, this too?" in July.
Chatbot with disclosure
Includes AI labelling, human handoff, conversation logging.
AI agent with governance
Documentation, audit trail, role-based access.
AI literacy training
A 2-hour workshop for your team plus take-home materials.

 


 

Bottom line: don't panic, but get started

The AI Act is not the apocalypse. For most companies it means a few process tweaks, an hour of training and one extra sentence in the chatbot disclaimer. For companies with high-risk AI systems it's a tougher challenge, but one you can handle with help.

What's certain: regulators won't give you a pass on 2 August 2026 because you "didn't know" about the AI Act. Same as with GDPR. The three months you have left are still enough time to get this sorted, if you start now.

If the AI Act caught you off guard, or you're not sure which tier you fall into, get in touch. We'll go through it in 30 minutes and you'll know what to do next.


Petr Chmelař

Petr is the co-founder of Chatbot.Expert, where he focuses on developing AI chatbots and AI agents for companies that want to automate communication and customer support.
