EU AI regulation 2025 has quietly turned into a three-layer maze: the original EU AI Act, a new Digital Omnibus package, and changes to GDPR and other data laws that directly affect AI training. For founders, lawyers and product teams, the core problem is that timelines and obligations now pull in different directions: stricter system rules are delayed, while data-use rules are opened up sooner.
EU AI regulation 2025: why everyone is confused
When people talk about EU AI regulation 2025, they usually mean the EU AI Act, but that is now only one piece of the puzzle. On 19 November 2025, the European Commission unveiled the Digital Omnibus package, a set of two large “omnibus” proposals that would tweak AI rules, GDPR, ePrivacy, cybersecurity and data legislation all at once.
At the same time, the Commission proposed to delay many “high-risk AI” obligations to late 2027 or even 2028, while making it easier to use personal data for AI training under GDPR. This combination is what many observers are calling regulation chaos: one law gets tougher on AI systems, while another relaxes the data rules that feed those systems.
Framework #1 – What the EU AI Act actually requires
The original EU AI Act is a risk-based framework that classifies AI systems into prohibited, high-risk, limited-risk and minimal-risk categories. High-risk systems include use cases like biometric identification, hiring tools, credit scoring, access to education and certain medical or safety-critical applications, all subject to strict obligations on data quality, documentation, human oversight and monitoring.
Generative AI and general-purpose AI (GPAI) also fall under the Act: providers must give technical documentation, transparency information and ensure that synthetic images, audio, video and text are detectable as AI-generated or manipulated. Before the new proposals, many of these requirements were meant to start applying in 2026, with a staged rollout for different risk levels.
Framework #2 – Digital Omnibus: delay button for high-risk AI
The Digital Omnibus on AI Regulation is a new proposal that changes how and when parts of the EU AI Act will bite. One of the biggest changes is a postponement of high-risk AI obligations: instead of applying from August 2026, they would be pushed back to December 2, 2027 or even August 2, 2028, depending on when the Commission confirms that support tools and standards are ready.
The Omnibus also defers transparency duties for existing generative AI systems, for example moving the obligation to watermark or otherwise mark synthetic content for systems placed on the market before August 2026 to February 2027. It centralises enforcement for large AI providers in a powerful AI Office at EU level and removes or softens some obligations, such as shifting AI literacy duties from companies to public authorities and expanding regulatory sandboxes and real-world testing options.
Framework #3 – GDPR changes that quietly open more data for AI
Alongside the AI Omnibus, the Commission proposed a Digital Legislation Omnibus that amends GDPR, ePrivacy and data rules to better “support AI innovation”. A key element is a new GDPR legal basis (often referred to as a new article 88c) that would treat AI model training, development and operation as a “legitimate interest”, allowing controllers to process personal and even special-category data for AI without consent, as long as strict safeguards and balancing tests are applied.
Under this proposal, organisations must still minimise data, apply strong security, document risk assessments and give people a strong right to object, but the default shifts toward allowing more data use for AI by design. Critics argue this undermines the original GDPR consent and purpose-limitation model, especially when combined with efforts to cut “consent fatigue” and reduce cookie banners.
Timelines: what changes in 2025, 2026 and 2027
For companies watching EU AI regulation 2025, the main issue is the moving clock. Some AI Act provisions are already in force or on fixed timelines, but the Digital Omnibus would insert conditional dates for chapters on high-risk AI, tying their start to the availability of harmonised standards and support tools and then capping the delay with final backstop dates in late 2027 or mid‑2028.
At the same time, data-related flexibilities in the Digital Legislation Omnibus would start much earlier once adopted, meaning AI training on broader datasets could expand well before high-risk system rules fully kick in. This creates a regulatory gap: companies can more easily train models now, but face full high-risk compliance only later, which might encourage aggressive deployment before the stricter obligations arrive.
What this means for startups and enterprises building AI
For AI builders, the message from EU AI regulation 2025 is not “relax, you are off the hook”; it is “you have more time, but not less responsibility.” Startups and enterprises should map where their systems fall in the AI Act’s risk ladder, prepare for high-risk obligations even if enforcement is delayed, and refresh DPIAs and data-mapping to reflect the new legal basis for AI training.
Practically, that means creating a high-risk AI inventory, aligning future products with upcoming harmonised standards, and building opt‑out and objection mechanisms that will stand up to GDPR scrutiny under the new AI training rules. Companies that treat 2025–2027 as preparation time instead of a holiday will suffer fewer shocks when the high-risk chapters and full transparency requirements become enforceable.
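The inventory idea above can be sketched as a simple data structure. This is a toy illustration under loud assumptions: the keyword-based risk tiering is a placeholder (real classification requires legal analysis of the Act’s high-risk annex), and the record fields and gap checks are examples of the DPIA and objection-mechanism items mentioned in this article, not an exhaustive compliance checklist.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified triggers drawn from the high-risk use cases named above.
# A real inventory would map use cases to the Act's annexes with counsel.
HIGH_RISK_USES = {"biometric identification", "hiring", "credit scoring",
                  "education access", "medical"}

@dataclass
class AISystemRecord:
    name: str
    use_case: str
    legal_basis: str              # e.g. "legitimate interest (AI training)"
    dpia_completed: bool = False
    objection_mechanism: bool = False

    def risk_tier(self) -> RiskTier:
        if self.use_case in HIGH_RISK_USES:
            return RiskTier.HIGH
        return RiskTier.MINIMAL   # placeholder default, not legal advice

def compliance_gaps(record: AISystemRecord) -> list[str]:
    """List missing preparation items for a high-risk system."""
    gaps = []
    if record.risk_tier() is RiskTier.HIGH:
        if not record.dpia_completed:
            gaps.append("complete DPIA")
        if not record.objection_mechanism:
            gaps.append("build GDPR objection/opt-out mechanism")
    return gaps
```

Even a sketch like this makes the preparation work concrete: every system gets a record, every record gets a tier, and the gap list becomes the to-do list for the 2025–2027 window.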
Critics say this is “deregulation dressed as innovation”
Civil-society groups, academics and some regulators warn that the Digital Omnibus recasts EU data and AI law as a tool for AI competitiveness first, and rights protection second. They argue that loosening GDPR for AI training while delaying the strictest AI Act obligations is effectively deregulation dressed as innovation, especially when combined with industry lobbying to slow high-risk enforcement.
The Commission, on the other hand, presents the package as a way to simplify red tape, cut overlapping reporting duties and save billions for businesses, while still keeping the AI Act’s core architecture intact. The real test will be how Parliament, Council and national data protection authorities amend and interpret these proposals over the next two years.
Key questions still unanswered
Several big questions remain open. Will the European Parliament and Member States accept broad AI training as a legitimate interest, or will they narrow or condition it further during negotiations? How will national data protection authorities respond if they believe the new rules erode GDPR’s original guarantees—will they test them in court or issue strict guidance that effectively re-tightens the screws?
For now, anyone building or deploying AI in or into Europe should treat EU AI regulation 2025 as a moving target, not a finished rulebook. The safest strategy is to track AI Act and Digital Omnibus developments closely, over‑comply on transparency and data protection, and design governance that can adapt quickly as the final texts and guidance land in 2026–2027.