Feb 17, 2026
IT News

UK Brings AI Chatbots Under the Online Safety Act: Fines Up to 10% of Global Revenue

The UK government has announced sweeping amendments bringing AI chatbots such as ChatGPT, Gemini, and Grok under the Online Safety Act, with penalties of up to 10% of global revenue.

Tags: UK, Online Safety Act, AI Regulation, Chatbot, Ofcom

Closing the AI Loophole

On February 16, 2026, UK Prime Minister Keir Starmer announced that AI chatbots will be brought under the scope of the Online Safety Act, closing what the government described as a dangerous legal loophole. The announcement means that AI chatbot providers including OpenAI's ChatGPT, Google's Gemini, Microsoft's Copilot, and Elon Musk's Grok will be required to comply with the same illegal content duties that apply to social media platforms and search engines.

The regulatory move represents the first time a major Western government has explicitly classified AI chatbots as platforms subject to comprehensive online safety regulation. Under the proposed changes, AI providers that breach the Online Safety Act could face fines of up to 10 percent of their global revenue, and in the most serious cases, regulators could apply to courts to block platforms from operating in the UK entirely.

The Amendment Mechanism

The government's approach centers on an amendment to the Crime and Policing Bill, which is currently making its way through Parliament. Rather than creating entirely new legislation for AI, the amendment extends existing Online Safety Act duties to AI chatbot providers. This means Ofcom, the UK's communications regulator, gains direct enforcement authority over AI chatbots without waiting for new AI-specific legislation to be drafted and passed.

This legislative strategy is significant because it enables rapid regulatory action. Drafting comprehensive AI-specific legislation could take years. By extending existing law, the government can begin enforcement much sooner. PM Starmer emphasized this urgency, stating: "No platform gets a free pass. We are closing loopholes that put children at risk."

The Online Safety Act, originally passed in 2023, was designed primarily with social media platforms in mind. Its duties include requirements to proactively identify and remove illegal content, implement age verification systems, and protect children from harmful material. Applying these duties to AI chatbots raises unique implementation challenges, since chatbots generate content dynamically rather than hosting user-uploaded content.

What Triggered the Crackdown

The immediate catalyst for the government's action was the Grok incident. In January 2026, Elon Musk's Grok chatbot on the X platform generated sexually explicit imagery of real people, including women and children, for several weeks before the content was removed. The incident sparked international outrage and prompted an Ofcom investigation into X's compliance with online safety standards.

Critically, Ofcom discovered that it lacked the legal authority to act decisively against AI chatbots because the Online Safety Act did not explicitly cover them. This regulatory gap meant that even when AI platforms produced clearly harmful content, the regulator could not enforce the same standards it applies to social media platforms. The government's amendment directly addresses this gap.

The Grok incident was not an isolated case. Over the past year, multiple AI chatbots have been involved in controversies involving the generation of harmful content, manipulation of minors, and production of non-consensual intimate imagery. Each incident highlighted the inadequacy of existing regulatory frameworks for addressing AI-specific risks.

Specific Measures for Children's Safety

Child protection is the central focus of the regulatory package. The government announced several specific measures aimed at safeguarding minors in the age of AI chatbots.

Age restrictions on chatbot access: The government is examining options to restrict children's access to AI chatbots entirely, or to limit the types of interactions available to underage users. This could include mandatory age verification before users can interact with AI systems capable of generating explicit or harmful content.

VPN restrictions: The government is exploring options to limit children's VPN use where it undermines safety protections. This addresses a known workaround where minors use VPNs to bypass geographic restrictions and age verification systems.

Social media age limits: Alongside chatbot regulation, the government is considering raising the age of digital consent and potentially implementing a minimum age of 16 for social media access.

Infinite scroll restrictions: The government is examining restrictions on addictive design features like infinite scrolling, which are increasingly being incorporated into AI-powered interfaces.

These measures reflect a growing recognition that AI chatbots present risks to children that differ from traditional social media. While social media exposes children to content created by other users, AI chatbots can generate personalized harmful content on demand, creating a more direct and targeted risk.

Enforcement and Penalties

The penalty structure mirrors the existing Online Safety Act framework but applies it to AI providers.

Violation level     | Penalty
Standard breach     | Fines up to 10% of global annual revenue
Serious breach      | Court orders to block the platform from UK operations
Repeated violations | Escalating penalties and potential criminal prosecution

For a company like OpenAI, which generates billions in annual revenue, a 10 percent fine would represent a substantial financial penalty. The threat of being blocked from operating in the UK, one of the world's largest English-speaking markets, adds further enforcement leverage.

Ofcom will serve as the primary enforcement body, gaining new powers to investigate AI chatbot providers, require transparency reports, and mandate compliance measures. The regulator can also demand that AI providers implement specific technical safeguards, such as content filtering systems and output monitoring.

Industry Response and Challenges

The AI industry faces significant technical challenges in complying with traditional online safety regulations. Social media platforms moderate content that has already been created and uploaded. AI chatbots, by contrast, generate content in real-time based on user prompts, making pre-publication moderation fundamentally different.

Compliance may require AI providers to implement more aggressive content filtering, which could affect the general utility of their products. There is also the question of liability for content that an AI generates based on a user's specific prompt. The boundary between the platform's responsibility and the user's intent is less clear with generative AI than with traditional user-generated content.

Major AI companies have not yet issued formal responses to the announcement, but the regulatory direction is likely to prompt industry-wide adjustments. Companies operating in the UK will need to ensure their AI systems can detect and refuse to generate illegal content, implement age verification systems, and provide transparency reports to Ofcom.
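The "detect and refuse" duty implies a moderation layer sitting between the model's output and the user. A minimal illustrative sketch of that pattern follows; the category names, the keyword-based classifier, and the refusal message are all hypothetical placeholders, not any real provider's API or policy:

```python
# Illustrative sketch only: a hypothetical pre-release moderation gate
# of the kind a provider might place between model output and the user.
# All categories and the toy classifier below are placeholders.

BLOCKED_CATEGORIES = {"csam", "non_consensual_imagery", "illegal_content"}

def classify(text: str) -> set[str]:
    """Placeholder classifier; a real system would use a trained model."""
    flagged = set()
    if "example of illegal material" in text.lower():
        flagged.add("illegal_content")
    return flagged

def moderate_output(model_response: str) -> str:
    """Refuse to return any response that trips a blocked category."""
    hits = classify(model_response) & BLOCKED_CATEGORIES
    if hits:
        return "This request cannot be completed."
    return model_response

print(moderate_output("Here is a safe answer."))
# → Here is a safe answer.
```

The design point the sketch illustrates is that, unlike social media takedowns, this check must run before anything reaches the user, which is why real-time generation makes compliance architecturally different from post-hoc content moderation.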

International Implications

The UK's decision to regulate AI chatbots under existing online safety law is being watched closely by other governments. The European Union's AI Act takes a different approach, focusing on risk classification and pre-market requirements. The United States has no comprehensive federal AI regulation.

By using the Online Safety Act as its vehicle, the UK is establishing a precedent that AI chatbots can and should be regulated as communications platforms. This framing could influence how other countries approach AI regulation, particularly those that already have online safety or digital services legislation in place.

The UK's approach also signals that governments may not wait for purpose-built AI legislation before acting. If existing laws can be extended to cover AI-specific risks, regulators can move faster than the pace of legislative drafting typically allows.

Limitations and Open Questions

Several significant questions remain unanswered. The government has not specified a timeline for when the amendments will take effect, though the urgency of the announcement suggests it will be prioritized in the current parliamentary session.

There is also the practical question of how Ofcom will develop the technical expertise to evaluate AI systems. Regulating social media required the regulator to understand content moderation practices. Regulating AI chatbots requires understanding model architectures, training data, and output filtering systems, which is a substantially different skill set.

Finally, the regulation applies to UK users, but AI chatbots are global services. Enforcement across jurisdictions will require cooperation with platform operators, many of whom are based in the United States. The effectiveness of the regulation will depend partly on AI companies' willingness to implement UK-specific compliance measures.

Conclusion

The UK's decision to bring AI chatbots under the Online Safety Act marks a watershed moment in AI regulation. By treating chatbots as platforms subject to illegal content duties, the government has established a regulatory framework that other countries may follow. The 10 percent revenue fine and potential for UK service blocks provide real enforcement teeth.

For AI companies, the message is clear: the era of AI chatbots operating in a regulatory vacuum is ending. For users, particularly parents, the regulation offers the promise of enforceable safety standards in an area where voluntary industry commitments have proven insufficient. The coming months will reveal whether the regulatory framework can be implemented effectively, but the direction of travel is now unmistakable.

Pros

  • Establishes enforceable safety standards for AI chatbots that close a clear regulatory gap
  • The 10% revenue fine provides proportional deterrent for large AI companies
  • Using existing legislation enables faster regulatory action than waiting for new AI-specific laws
  • Child protection focus addresses demonstrably real harms from AI-generated content
  • Sets a potential international template for other governments considering AI chatbot regulation

Cons

  • Implementation timeline remains unclear, leaving a continued gap in enforcement
  • Applying social-media-style content duties to generative AI raises unique technical challenges
  • Aggressive content filtering could reduce the general utility of AI chatbots for legitimate users
  • Ofcom needs to develop substantial new technical expertise to effectively regulate AI systems


Key Features

UK Prime Minister Keir Starmer announced on February 16, 2026, that AI chatbots including ChatGPT, Gemini, Copilot, and Grok will be regulated under the Online Safety Act. Violations carry fines up to 10% of global revenue, with the possibility of blocking platforms from UK operations. The amendment to the Crime and Policing Bill gives Ofcom direct enforcement authority. The action was triggered by Grok generating explicit imagery of real people. Child protection measures include potential age restrictions on chatbot access and VPN limitations.

Key Insights

  • The UK becomes the first major Western government to explicitly regulate AI chatbots under comprehensive online safety law
  • Penalties of up to 10% of global revenue and potential UK service blocks provide significant enforcement leverage
  • The amendment to the Crime and Policing Bill enables faster regulatory action than drafting new AI-specific legislation
  • The Grok incident, where explicit imagery of real people was generated for weeks, directly triggered the regulatory response
  • Ofcom gains new enforcement powers over AI chatbot providers, expanding its regulatory mandate beyond social media
  • Child protection measures include potential chatbot age restrictions, VPN limitations, and social media age limits
  • The UK's approach of extending existing law to AI may influence other countries' regulatory strategies
  • AI companies face unique compliance challenges since chatbots generate content dynamically rather than hosting user uploads

