Regulator issues stark warning as the AI bandwagon gathers steam
The timing could not be more pointed. As Australian mortgage brokers race to adopt AI tools – from lender policy search engines to automated compliance checkers and AI-drafted lender rationales – the country's prudential regulator has published its most detailed assessment yet of how the financial sector is managing the technology. The verdict is uncomfortable reading.
In a letter to all APRA-regulated entities last week, APRA Member Therese McCarthy Hockey delivered a sector-wide finding: AI adoption is accelerating, but governance, oversight, and risk management frameworks are not keeping pace. Where that gap is not addressed, the regulator has signalled it will act — including through enforcement.
Read next: Are brokers prepared for the AI revolution?
The letter is addressed to APRA's direct regulated population: banks, insurers, and superannuation trustees. Mortgage brokers are not APRA-regulated entities. But the findings land squarely in the broker channel's operating environment, because the lenders brokers submit to every day are the institutions APRA has just placed on notice.
The lender context brokers need to understand
When APRA finds that major banks are trialling AI in loan application processing with governance frameworks that haven't matured at the same pace, that has direct implications for how those lenders assess broker-submitted applications and how quickly they can adapt when AI systems produce unexpected outcomes.
The regulator's concern is that many institutions treat AI risk as "just another technology risk" – a framing APRA explicitly rejects. Predictive models adapt, drift, and can encode patterns in training data that produce biased or inconsistent outputs.
Read next: Mutual banks warned against untamed AI uptake
This matters because Commonwealth Bank recently rolled out an AI agent to address more than $1 billion in suspected home loan fraud, some of it linked to the broker and referral channels. The system monitors more than 80 million signals a day. That infrastructure is powered by exactly the kind of AI APRA is now scrutinising – and broker-submitted loan files flow directly through it.
Fraud, compliance, and broker accountability
APRA observed that staff use of AI tools outside approved control frameworks is widespread, with many entities relying on policy guidance rather than enforceable technical controls. The gap between what people are doing with AI and what governance says they should be doing is, in APRA's words, significant.
This maps directly onto challenges the broker industry is already grappling with. As MPA reported from the 2026 LMG Growth Summit, compliance risk sits at the heart of the industry's AI ambitions, with LMG stating clearly that while AI can flag compliance issues, "brokers do need to take ultimate accountability."
The trajectory is clear: lenders are deploying AI to detect anomalies in broker submissions with increasing sophistication. The tolerance for shortcuts in documentation – and for AI-assisted application drafting that strays into misrepresentation – is narrowing, not because of new rules, but because the surveillance infrastructure is improving faster than many brokers may realise.
Questions every broker should be asking
The AI tools entering the broker market are genuinely useful. Mortgage Choice's AI-powered Policy Search Tool reviews more than 5,000 pages of residential lending policy in natural language. Quickli's Pro tier offers an AI credit analyst across 45 lenders. LMG's MyCRM Intelligence drafts lender rationales and runs QA checks across more than 500 data points per deal. The efficiency gains are real.
But the APRA letter raises questions every broker should be able to answer: Do you know what data your AI tools are trained on, and whether it's current? Do you know how the tool behaves when it's wrong? Do you have a fallback if an incorrect output makes it into a lodged application?
The broker using the output carries the professional and legal exposure when something goes wrong – regardless of which tool produced it.
No new rules – but existing ones apply in full
The most important clarification in the APRA letter is this: the regulator is not creating new AI-specific rules. It is confirming that existing obligations apply to AI use in full.
The same logic flows directly to brokers. Best interests duty does not change because a broker used an AI tool to arrive at a product recommendation. The obligation to verify information before submission does not change because an AI drafted the lender rationale. As APRA's McCarthy Hockey signalled as far back as 2024, the prudential framework was always designed to be technology neutral. What has changed in April 2026 is the tone: from guidance to active assessment, and from assessment to the credible prospect of enforcement.
Read next: Australians still prefer brokers over AI for big money calls: study
The productivity case remains strong
None of this is an argument against AI adoption. As Connective's Mark Haron told brokers at the end of 2025, those who embrace available tools will outperform those who don't. The question the APRA letter raises is not whether to use AI, but whether the governance around its use is proportionate to the risk.
For brokers, that means one thing above all else: accountability for client outcomes does not transfer to the algorithm.
The institutions you broker through are now on formal notice. Understanding what that means for your operating environment is the first step to navigating what comes next.
The full APRA letter, including the supervisory debrief for executive management, is at apra.gov.au.


