Last updated: 2026-04-11

UK AI Bill

What we know about the UK's approach to AI regulation and how to prepare.

The UK is developing its own approach to AI regulation. While the EU opted for a single comprehensive law, the UK's model leans toward sector-specific regulation through existing regulators. Here's what we know and how to prepare.

This page reflects the latest known position

The UK AI Bill is still in development. Details may change. We'll update this page as new information becomes available.

The UK's Approach

The UK government's AI regulation framework is based on five principles that existing regulators — the FCA, ICO, Ofcom, CMA, HSE, and others — are expected to interpret and enforce within their sectors:

  1. Safety, security, and robustness — AI systems should function reliably and securely
  2. Transparency and explainability — people should be able to understand how AI is being used and how decisions are made
  3. Fairness — AI should not discriminate or produce biased outcomes
  4. Accountability and governance — clear responsibility for AI outcomes, with appropriate oversight
  5. Contestability and redress — people affected by AI decisions should be able to challenge them

How It Differs from the EU AI Act

  • Legal instrument — EU AI Act: a single comprehensive regulation. UK: sector-specific guidance with potential statutory duties.
  • Risk classification — EU AI Act: defined risk tiers (unacceptable, high, limited, minimal). UK: context-dependent, assessed by sector regulators.
  • Enforcement — EU AI Act: a central AI Office plus national authorities. UK: existing regulators (FCA, ICO, Ofcom, and others).
  • Scope — EU AI Act: all AI systems, by risk category. UK: focused on sectors where AI risk is highest.
  • Compliance — EU AI Act: prescriptive requirements. UK: principles-based, with sector interpretation.
  • Extraterritorial reach — EU AI Act: applies to non-EU providers and deployers affecting the EU. UK: UK-focused, but likely to affect international businesses operating in the UK.

What This Means for Your Business

Even before the AI Bill is fully enacted, existing UK law already covers much of what the principles require:

  • GDPR / UK Data Protection Act 2018 — covers AI processing of personal data, automated decision-making rights (Article 22), data protection impact assessments
  • Equality Act 2010 — prohibits discrimination; AI systems that produce biased outcomes can create liability
  • Consumer Rights Act 2015 — AI-powered services must meet 'reasonable care and skill' standards
  • Financial Services regulations (FCA) — AI in credit, insurance, and investment is already regulated
  • Employment law — using AI for hiring, performance management, or redundancy decisions has existing legal guardrails

You're probably already regulated

If your business operates in a regulated sector (financial services, legal, healthcare, education), your sector regulator likely already has expectations about AI use. The AI Bill will formalise these, not start from scratch.

Preparing Now

Regardless of the final form of the UK AI Bill, these actions will put you in a strong position:

  1. Create an AI inventory — know every AI tool your business uses, who uses it, and what for
  2. Assess risk — for each AI use case, understand the potential for harm (discrimination, financial loss, safety)
  3. Document decisions — record why you chose specific AI tools and models, and how you monitor their output
  4. Ensure human oversight — for any AI-driven decision that affects people, have a qualified human who reviews and can override
  5. Track spend and usage — demonstrate you know how much AI costs and how it's being used across the business
  6. Train your team — ensure everyone using AI understands its limitations and your organisation's AI policy

The Dual Compliance Challenge

If your business serves EU customers AND operates in the UK, you'll need to comply with both the EU AI Act and UK regulations. The good news: they share common principles. Building compliance for one gets you most of the way to the other. SpendLil's roadmap is designed to cover both frameworks.