Regulators warn financial firms over frontier AI cyber risks

Bank of England, FCA and HM Treasury have jointly called on regulated firms to strengthen cyber defences against rapidly advancing AI threats


The Bank of England, the Financial Conduct Authority (FCA) and HM Treasury issued a joint statement on Friday, urging regulated financial firms and financial market infrastructures (FMIs) to take immediate steps to address the cyber security risks posed by frontier artificial intelligence (AI) models.

The statement warns that the cyber capabilities of current frontier AI models already surpass those of skilled human practitioners, and do so at greater speed, larger scale, and lower cost.

If exploited maliciously, these capabilities could threaten firms' safety and soundness, their customers, market integrity, and financial stability. Regulators expect these risks to worsen as more powerful models emerge, with firms that have underinvested in core cyber security fundamentals facing growing exposure.

Boards and senior management are directed to develop a sufficient understanding of frontier AI risks to set strategic direction and oversee how control functions manage them. Resourcing and investment decisions should account for the evolving threat environment, including risks from end-of-life systems or those no longer supported by vendors. Firms should also review whether their insurance coverage remains adequate.

"Frontier AI models can rapidly identify and enable exploitation of a potentially large number of vulnerabilities across firms' technology estates," the joint statement read. "Firms should be able to triage, prioritise, risk assess, and remediate vulnerabilities more quickly, more frequently, and at scale, including through automation where appropriate, while mitigating the operational risks from doing so."

Firms are also required to effectively manage frontier AI cyber risks arising from third parties and supply chains, including open-source software. This means maintaining the capability to identify, monitor, and manage external applications, libraries, and services integrated into their networks, and being prepared to remediate vulnerabilities identified by third parties at scale.

Regulators also expect firms to maintain robust access management, network security, and data protection controls to reduce the attack surface that a frontier AI model could exploit. "Firms should consider adopting automated and AI-enabled defences to operate at comparable speed to AI-driven attacks," they stated.

On recovery, regulators directed firms to the effective practices on cyber resilience published by the Bank, Prudential Regulation Authority and FCA in October 2025.

The government and UK financial authorities said they would continue to monitor frontier AI developments and engage with industry through the Cross Market Operational Resilience Group (CMORG).
