Banker's Digest
2025.10
Australia's banking sector reacts cautiously to the development of AI

In 2017, Australia appointed a Royal Commission – a special investigative body independent of the government – to examine numerous reports of abuses in financial services spanning many years. The release of the final report in 2019 marked a watershed moment in the development of Australia’s banking ethics, highlighting “the connection between conduct and reward; the asymmetry of power and information between financial services entities and their customers; the effect of conflicts between duty and interest; and holding entities to account.” These weighty factors continue to exert influence on banks and regulators as they adapt to the technological developments that have occurred since the report’s release.

Generative AI creates rich opportunities for cross-product sales with precision targeting – and yet Australia’s ‘big four’ banks had already exited funds management, securities, and financial advice by the time it arrived. “They got burned,” said Chris Whitehead, former CEO of the Australian financial training institute FINSIA. “Severely burned.” (FINSIA is a member of the Asian-Pacific Association of Banking Institutes (APABI), for which TABF serves as the Secretariat.) The banks had consistently lost money in those business areas over some two decades. While upselling may seem at first glance like a straightforward business strategy, the issues that later emerged included conflicts of interest, excessive fees, and failure to match investors’ risk appetite – not to mention more direct forms of fraud, such as advisory fees charged to the accounts of deceased customers. Following the Royal Commission, several big banks were required to set aside a total of A$10 billion for remediation to wealth management customers. “Huge costs – not just financial but reputational as well.”

Australia’s experience provides a useful comparison point for Taiwan’s financial system, which is dominated by financial holding companies – an arrangement closer to Australia’s earlier industry structure. This cross-sector conglomeration inherently gives Taiwan greater exposure to the risks that can arise from innovation with insufficient safeguards. Of course, this is not the only difference between the two financial sectors. Taiwan’s ETF infrastructure is well developed, reflecting the funding needs of its large pool of SMEs, which may lack direct access to capital markets. Passive investing can directly substitute for expensive investment advice, meaning that Taiwan’s financial services never needed to develop in that direction to begin with, and the incentives for inappropriate upselling were never as strong in Taiwan. Nevertheless, Australia can illustrate some of the pitfalls Taiwan might face as it seeks both to add value to its wealth management business and to drive digital transformation through AI.

Customer-facing communication presents some of the most practical ethical pitfalls of excessive digitalization, sitting in the sweet spot of time-consuming labor worth automating, technology that is now superficially up to the task, and nearly unlimited potential for liability. To be sure, some of the most exciting recent innovations sidestep this sweet spot simply by remaining out of public view and avoiding difficult processes – particularly credit decisions, which involve fairness and transparency on questions of potential bias.
In Australia, as in Taiwan, LLMs are being added to internal processes, such as helping employees more efficiently gather and integrate unstructured information that may be scattered across systems. These applications tend to keep employees themselves in positions of decision-making authority, and therefore raise fewer questions about legal responsibility.

While conservatism on more sensitive applications buys some time, the industry also understands that regulation is likely inevitable. Rather than laxity, it mainly seeks clarity, yet assistance from international markets has been somewhat lacking. The EU has faced industry calls for a two-year delay in the implementation of the AI Act; although the European Commission insists that the timetable is fixed, it also released the Code of Practice for General-Purpose AI two months late. The US, meanwhile, now rejects the very notion of financial regulation of AI altogether. The Executive Order entitled Removing Barriers to American Leadership in Artificial Intelligence, signed shortly after the inauguration of President Trump, revoked an earlier Executive Order under former President Biden on AI safety and trust, and House legislation has further sought to broadly pre-empt state regulation on consumer protection, algorithmic bias, and children’s online safety. Nevertheless, said Whitehead, “I think there’s still a sense that regardless of what kind of regulation comes out, we need to get on with it and we need to be creating our own frameworks and risk management.” While he was referring to Australia, the sentiment likely applies to various other smaller regulatory players around the world.

In practice, it will not always be financial institutions themselves that implement advanced systems, but their technology partners – which raises the topic of supply chains. In principle, banks retain full responsibility for the responsible use of AI by their partners: “you can outsource the operation, but you can’t outsource the accountability.” In practice, they generally prefer to work with partners who are ‘big enough to sue’ should a shortcoming come to light – which can mean not just technology multinationals (where data residency issues might arise), but also onshore management consulting firms, which can then aggregate offerings from smaller start-ups. Constant review and evaluation of partners is a major ongoing task in any such arrangement.

AI disruption is not limited to banks, securities firms, or their technology suppliers; on the contrary, some of the most innovative uses have appeared on the part of threat actors. Australian finance and related sectors have been rocked by a number of major cyberattacks in recent years – another set of scandals largely parallel to those identified by the Royal Commission, and intensifying in the years since that report, yet with overlapping implications for corporate governance and executive responsibility. The 2022 breaches of Optus, a telco, and Medibank, a health insurer, served as wake-up calls within banking. In 2023, the consumer finance firm Latitude suffered a breach of the personal information of up to 14 million Australians, against the country’s total population of roughly 27 million.
Much of that data involved inactive customers and should already have been deleted, pointing to management deficiencies – although it is worth noting that the full extent of the breach may be known only because of the company’s own disclosures.

The most important asymmetric contribution of AI to cybersecurity is probably in the social engineering aspect of an attack. Even earlier in the development of LLMs, the threat from highly convincing, grammatically correct phishing messages composed by less advanced models became readily apparent. More recently, the more advanced capabilities of agentic AI have begun to compound that threat in several ways. Whereas in the past the existence of an entire website offering market trading or quotes might have implied legitimacy simply through the time and money required to build it, threat actors can now construct fake websites on the fly. Likewise, attackers can automate open-source intelligence gathering, personalizing phishing messages with information gleaned from sources like LinkedIn at a fraction of the previous effort. Furthermore, as is frequently the case in this field, the attackers themselves face few security requirements, making cybercrime a ready test case for agentic AI overall. “On a personal basis,” noted Whitehead, “as part of a research project, one of the universities here essentially did what’s called a Turing test: could you tell the difference between a person and a computer” across a range of either human or AI responses? “I have to say, we did very poorly.”

Banks are being forced to respond to this onslaught by introducing greater friction into payment processes. One of the questions Australia now faces is who should pay for losses from fraud. The UK, as a point of comparison, has mandated that banks reimburse scam victims immediately, giving them an enormous incentive to be proactive in protecting accounts. Australia’s Scams Prevention Framework Act, passed in February, imposes fines of up to A$50 million on firms that fail to meet their obligations, yet rejects the automatic reimbursement model due to the potential for moral hazard: customers also play an integral part by refusing to give out their account information, and automatic reimbursement could perversely encourage more intensive targeting. Whatever the role of consumer education, however, “trust is very hard to build and very quickly destroyed” – a principle banks must keep firmly in mind as they face threats both from new technologies and from rivals racing to be the first to apply them.