How Australian Banks Are Using AI to Catch Fraud in Currency Transfers
International money transfers have always been a target for fraud. The combination of speed, cross-border complexity, and large transaction values makes them attractive to criminals. What’s changed in the past two years is how Australian banks are fighting back.
The answer, increasingly, is AI. But not in the way most people imagine.
The Scale of the Problem
According to the Australian Financial Crimes Exchange, financial fraud losses in Australia exceeded $4.1 billion in 2025. A significant chunk of that involved international transfers — money mule operations, invoice fraud, and romance scams that funnel funds overseas.
The traditional approach to catching these transactions relied on rules-based systems. If a transfer exceeded a certain amount, went to a flagged country, or came from a new account, it triggered an alert. These systems catch some fraud, but they also generate enormous volumes of false positives. Banks like CBA and Westpac were reportedly reviewing tens of thousands of flagged transactions per month, with well over 90% turning out to be legitimate.
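The rules logic described above can be sketched in a few lines. This is an illustrative toy, not any bank's actual engine — the thresholds, country codes, and account-age cutoff are all hypothetical placeholders:

```python
from dataclasses import dataclass

@dataclass
class Transfer:
    amount: float          # AUD
    dest_country: str      # ISO-style country code
    account_age_days: int

# Hypothetical parameters for illustration only; real rule sets
# are far larger and tuned per institution.
AMOUNT_THRESHOLD = 10_000
FLAGGED_COUNTRIES = {"XX", "YY"}   # placeholder codes
NEW_ACCOUNT_DAYS = 30

def rule_flags(t: Transfer) -> list[str]:
    """Return the rules a transfer trips; any hit triggers a manual review."""
    flags = []
    if t.amount >= AMOUNT_THRESHOLD:
        flags.append("large_amount")
    if t.dest_country in FLAGGED_COUNTRIES:
        flags.append("flagged_destination")
    if t.account_age_days < NEW_ACCOUNT_DAYS:
        flags.append("new_account")
    return flags
```

Every hit becomes a case in a reviewer's queue regardless of context, which is exactly how the false-positive volume piles up.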
That’s not sustainable. It burns through compliance staff time, delays legitimate transfers, and frustrates customers.
How AI Changes the Game
The newer systems use machine learning models trained on historical transaction data to identify patterns that rules-based systems miss. Instead of looking at individual transactions in isolation, they map networks of accounts, transaction timing, amounts, and counterparties.
A rules-based system might flag a $15,000 transfer to Southeast Asia. An AI model might notice that the sending account received five deposits of exactly $2,900 from different sources in the preceding week — a pattern consistent with structuring but invisible if you’re only looking at the outbound transfer.
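The structuring pattern in that example — several identical deposits from different sources in a short window — can be expressed as a simple heuristic. This sketch is an assumption about how such a signal might work, not a description of any bank's production model, and the window and source-count parameters are made up:

```python
def structuring_signal(deposits, window_days=7, min_sources=4):
    """deposits: list of (day_offset, source_id, amount) inbound credits,
    where day_offset counts back from the outbound transfer.
    Flag when several distinct sources sent the same amount within
    the window -- the inbound pattern a per-transaction rule never sees."""
    recent = [(src, amt) for day, src, amt in deposits if day <= window_days]
    sources_by_amount: dict[float, set] = {}
    for src, amt in recent:
        sources_by_amount.setdefault(amt, set()).add(src)
    return any(len(srcs) >= min_sources for srcs in sources_by_amount.values())
```

A real ML model learns thousands of such network-level features rather than hand-coding them, but this is the kind of relationship it picks up on.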
NAB announced in late 2025 that its AI fraud detection system had reduced false positives by 40% while increasing genuine fraud catches by 25%. Those are impressive numbers if they hold up, though banks aren’t always forthcoming with the underlying methodology.
Commonwealth Bank has been particularly active in this space, partnering with specialist firms on AI for financial services. The approach involves training models not just on CBA’s own data but on anonymised patterns from the broader financial ecosystem, giving the models a much richer picture of what fraud looks like across the system.
What’s Working and What Isn’t
The biggest success stories involve catching “first-party fraud” — situations where the account holder themselves is complicit, either willingly or through coercion. Traditional systems struggle with these because the account holder’s behaviour appears normal right up until the fraudulent transfer.
AI models can detect subtle behavioural shifts. Changes in login times, device switching, unusual browsing patterns before a transfer, and even typing speed variations have all been used as signals. Individually these signals are weak, but in combination they can surface a compromised or coerced account before the money leaves.
What isn’t working as well is catching entirely novel fraud types. ML models are, by definition, trained on historical patterns. When criminals invent new schemes — and they do, constantly — there’s a lag before the models catch up. The AUSTRAC guidance on AI in financial crime detection explicitly warns about this limitation.
Privacy Concerns Are Real
There’s a tension here that doesn’t get enough attention. The more data you feed into fraud detection models, the better they work. But that means banks are building increasingly detailed profiles of customer behaviour. Where you log in from, when you bank, who you transact with, how you type — all of this is being captured and analysed.
Most customers would probably accept this trade-off if they understood it. The problem is that many don’t know it’s happening. Australia’s privacy framework under the Privacy Act 1988 gives financial institutions broad latitude to collect and process data for fraud prevention, but the transparency around how AI models use that data remains limited.
What This Means for Forex Traders and Businesses
If you’re sending or receiving international payments regularly — whether you’re a forex trader, an importer, or just someone with overseas financial commitments — you’ll increasingly be subject to AI-driven scrutiny. That’s not necessarily a bad thing, but it does mean a few practical considerations:
Be consistent. AI models flag anomalies. If your transfer patterns are regular and predictable, you’re less likely to trigger a review.
Document your transfers. Having invoices, contracts, or other documentation readily available speeds up any review process.
Expect some delays. Banks are getting better at processing legitimate transfers quickly, but the transition to AI-based systems isn’t complete. Some transactions that previously sailed through may now take an extra day for automated review.
Choose your bank wisely. Not all banks are equal in this space. The Big Four are investing heavily in AI fraud detection, while some smaller ADIs are still running primarily rules-based systems. Better technology generally means faster processing and fewer false positives.
The direction of travel is clear. AI-powered fraud detection in currency transfers is going to become standard across the Australian banking system within the next two to three years. For most people, that’s a good thing. Just don’t be surprised if your next international transfer takes a little longer than expected while the machines do their work.