Regulator sounds alarm on AI cyber threat, putting mortgage brokers on notice

ASIC sends urgent open letter to Australia's financial sector

Australia's corporate watchdog has issued one of its most direct warnings to date to financial services licensees, declaring that frontier artificial intelligence (AI) models have fundamentally shifted the cyber threat environment.

In an open letter published on Friday and addressed personally to licensees and directors across the financial services industry, Australian Securities and Investments Commission (ASIC) Commissioner Simone Constant delivered a clear message: the cyber risks supercharged by advanced AI are not theoretical. They are here now, and boards that are not actively strengthening their defences face consequences.

"Do not wait for perfect clarity to address the threat posed by new AI models," Constant wrote. "Instead, act now, and act with discipline, to strengthen the cyber resilience fundamentals that underpin your business."

The letter carries particular weight for Australia's mortgage broking sector – which now facilitates more than 77% of all new home loans and handles vast quantities of borrower financial data daily.

Brokers and aggregators sit at the intersection of multiple data flows: lender systems, CRMs, serviceability calculators, identity verification tools and, increasingly, AI-powered platforms designed to accelerate lending decisions. Each integration point is a potential vulnerability.

ASIC's concern is not that AI creates entirely new categories of risk, but that it lowers the barrier to executing sophisticated attacks. A phishing email that once required considerable skill to craft can now be generated in seconds. Vulnerabilities that once required significant resources to exploit can be identified and targeted at scale and speed.

Frontier AI poses major threat

The letter specifically calls out the accelerating pace at which AI is enabling attackers to discover and act on known software vulnerabilities, urging licensees to patch systems promptly and implement layered, defence-in-depth architectures.

It reads: “Frontier AI models are a step-change in capability, but they do not change the fundamentals of good cyber resilience; rather, they reinforce the importance of strong, end-to-end preparedness.

“Entities that have established robust plans across the full cyber incident lifecycle, and keep those plans current, tested and embedded, will be better placed to manage the accelerating threats posed by frontier AI.”

The regulator is calling on licensees to reassess cyber plans, protect critical assets, minimise attack surfaces, review user access privileges – with a specific mention of rising insider threats – and to actively manage third-party risk, particularly where external providers introduce systemic exposure.

That last point resonates strongly in broking, where aggregators and brokerages rely on interconnected technology ecosystems spanning multiple vendors.

The letter also asks licensees to treat it as a board agenda item, requiring that it be tabled and discussed at the highest levels of governance.

The warning arrives as the broking industry is rapidly accelerating its own AI adoption. A recent survey found that 12% of brokers had reported a cybersecurity incident in the previous year, and data protection concerns were identified as one of the main barriers to wider AI uptake in the sector.

As Mortgage Professional Australia reported, the FBAA has already advised members not to enter client or personal information into publicly available AI tools.

Mounting regulatory heat

The regulatory heat is building from multiple directions.

Just days ago, MPA analysed APRA's parallel AI warning and its direct implications for brokerage operations, noting that while AI can flag compliance issues, brokers must take ultimate accountability – and that lenders are deploying AI to detect anomalies in broker submissions with increasing sophistication.

ASIC’s letter warns that boards cannot simply rely on assurances from management. “Appropriate cyber risk management starts at the accountable leadership of licensees and participants. Please ensure this letter is tabled and discussed at your ultimate board and risk governance committees,” it states.

The warning follows a string of home loan fraud controversies emanating from the major banks.

In February, Commonwealth Bank reported itself to the authorities over a suspected $1 billion worth of fraudulently obtained home loans, including AI-generated applications.

Fraud estimates have since blown out to $3 billion. Sub-aggregator Hai Money became the first casualty of the escalating fraud scandal after parent aggregator Finsure severed their relationship, leading Hai Money to collapse in April.