Widely expected to have a major transformative influence, improving financial inclusion for the less fortunate and enhancing efficiency for businesses, artificial intelligence nonetheless harbours significant risks.  Financial firms and regulators are increasingly cognizant of the potential unintended consequences that deploying artificial intelligence may have for individual lives and society in general.  Fair, transparent and accountable integration of AI by businesses and governments is therefore crucial to its successful development as the technology seeks to gain consumer and regulatory trust.

We encounter artificial intelligence (AI) on a daily basis, whether at work, in social settings or in our interactions with authorities.  It brings convenience to service users and enhances scale and efficiency for service providers.  The financial services industry is no exception.  IHS Markit, a global information provider, expects AI's business value in banking to reach US$300 billion by 2030.  Total benefits for the financial services industry are estimated to exceed US$1 trillion over the same period, whether through increasing operational efficiency, reducing exposure to financial crime or releasing manpower for more innovation.  A survey by The Financial Times in 2018, however, highlighted a more muted reality: 'Rather than racing towards an AI-enabled future, the [banking] industry is feeling its way forward'.

AI capabilities promise considerable benefits.  Consumers, for example, might enjoy near-instant decisions on their loan applications or receive a more personalised service as advanced analysis of their information recommends products for their specific circumstances.  But as new technologies evolve and take hold, their large-scale application inevitably brings risks that require careful examination, which is why financial institutions should tread carefully in this largely uncharted territory.

AN OPINION EXPRESSED IN CODE

Conversations about AI conjure up images of an impartial process capable of delivering consistently fair analysis and outcomes.  Yet this rests on some fundamental misconceptions about how the technology is actually developed and operated.  AI systems are 'trained' on a wealth of data, which is more likely than not to be influenced by human bias, either because it reflects society incompletely or because it contains historic realities or attitudes that are not entirely objective in themselves.  A widely quoted example relates to voice and facial recognition technology, a capability that should help replace the myriad passwords customers need to remember: it has a higher rate of error for women and ethnic minorities because the information the system relies on draws predominantly on male voices or images of white men.  As Ivana Bartoletti, co-founder of the Women Leading in AI Network, put it: 'An algorithm is an opinion expressed in code.  If it's mostly men developing the algorithm, then of course results will be biased'.
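The mechanics are easy to demonstrate.  The sketch below uses entirely synthetic data and an arbitrary 95/5 split between two groups (it is an illustration, not any firm's actual model): a simple classifier is trained on a sample that under-represents one group and then evaluated on balanced test sets.

```python
# An illustrative sketch (entirely synthetic data, not any firm's model) of
# how a classifier trained on an unrepresentative sample serves the
# under-represented group poorly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def sample(group: int, n: int):
    # The two groups have different feature distributions and decision
    # boundaries, so a model tuned to one fits the other badly.
    X = rng.normal(loc=group * 0.8, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(size=n) > group * 4.0).astype(int)
    return X, y

# Training data: 95% group 0, 5% group 1 -- a skewed 'wealth of data'.
X0, y0 = sample(0, 9_500)
X1, y1 = sample(1, 500)
model = LogisticRegression().fit(np.vstack([X0, X1]), np.concatenate([y0, y1]))

# Balanced test sets reveal the gap in error rates between the groups.
for g in (0, 1):
    Xt, yt = sample(g, 5_000)
    print(f"group {g} error rate: {1 - model.score(Xt, yt):.1%}")
```

The point is not the specific numbers but the pattern: the model serves the majority group well and the under-represented group poorly, without any explicit instruction to discriminate.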

AI developers may inadvertently feed biases into the technology where the human decision-making process they are trying to recreate has been inherently biased.  In the context of financial services, ethnic minority borrowers might receive unfavourable loan conditions if the system draws on data that carries a history of unfair treatment of this customer segment.  In its report on machine learning published in 2017, The Royal Society, the UK's national academy of sciences, expressed concern that AI systems might learn to deduce information indirectly, even where factors whose use would breach discrimination laws (e.g. age, race or gender) are removed from data sets.  For example, an individual's address, occupation and years in education may become effective proxies that nudge the system towards biased decisions.
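One way to test for this proxy effect is to check whether the protected attribute can be reconstructed from the remaining features.  The sketch below (synthetic data and hypothetical feature names) drops the protected attribute from a data set and then shows that a simple classifier can still recover it from address, occupation and education proxies.

```python
# A minimal probe (synthetic data, hypothetical feature names) for the proxy
# effect: the protected attribute is dropped, yet a simple classifier can
# still recover it from seemingly innocuous features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 10_000

# Protected attribute (e.g. minority-group membership), removed before
# any credit modelling.
group = rng.integers(0, 2, n)

# Features that remain in the data set but correlate with the attribute.
postcode_band = group * 2.0 + rng.normal(0, 1, n)
occupation_code = group * 1.5 + rng.normal(0, 1, n)
years_in_education = -group * 1.0 + rng.normal(0, 1, n)

X = np.column_stack([postcode_band, occupation_code, years_in_education])
X_tr, X_te, y_tr, y_te = train_test_split(X, group, random_state=0)

probe = LogisticRegression().fit(X_tr, y_tr)
# Accuracy well above the 50% chance level means the 'removed' attribute
# is still effectively present in the data.
print(f"protected attribute recovered with {probe.score(X_te, y_te):.0%} accuracy")
```

If such a probe recovers the attribute well above chance level, any model trained on those features can, in effect, still discriminate on it.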

In its review of the use of Big Data, a field closely linked to AI, in general insurance, the UK's Financial Conduct Authority recognised its benefits for consumers, but also highlighted the risk of firms using information to the disadvantage of some customers, either by reducing exposure to less profitable clients or by charging higher premiums to those able or willing to pay more.  The ability to process data in new ways may call into question the moral element of the resulting decisions.  In the US, The New York Times reported back in 2009 that some firms had started reducing credit limits on cards whose statements included expenditure on family counselling services, as divorce is generally associated with job losses and higher default rates.  The business logic is obvious, but whilst not in breach of the law, the practice sends an unfortunate signal that financial institutions prioritise their bottom line over serving the community that relies on banking services for its daily activities.  AI is therefore not simply a technical or legal issue; it has significant ethical dimensions which firms should examine and integrate into their governance and operations.

BEYOND THE PURELY TECHNICAL FIELD

The key issue with AI application is its lack of explainability, the so-called black box problem, which is particularly acute in complex machine learning models: although the information that feeds into the system is transparent, there is very limited understanding of how the programme reaches its decisions.  In highly regulated markets such as financial services, where the impact of a decision can be life-changing (e.g. an application for a mortgage or life insurance policy is rejected), it is simply not good enough to say 'computer says no' without appropriate justification.  Consumers will want to know the basis for any decision affecting them and to understand how to adjust their behaviour to receive approval next time.
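Simpler model families make such justification straightforward.  The sketch below (synthetic data, hypothetical feature names) fits a logistic scorecard and decomposes one rejected application into per-feature contributions, the raw material for a statement of reasons.

```python
# A hedged sketch (synthetic data, hypothetical feature names) of turning a
# logistic scorecard's arithmetic into a statement of reasons for one
# rejected applicant.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "missed_payments"]
X = rng.normal(size=(5_000, 3))
# Synthetic ground truth: high debt and missed payments drive rejection.
y = (0.5 * X[:, 0] - X[:, 1] - 1.5 * X[:, 2] + rng.normal(size=5_000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([-0.2, 1.1, 2.0])          # one rejected application
contributions = model.coef_[0] * applicant      # per-feature score contributions
for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name:>16}: {c:+.2f}")              # most negative = main refusal reasons
```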

Most significantly, lack of explainability also creates a situation in which unfair biases can go undetected on a wide scale.  Firms therefore often opt for a trade-off approach whereby a choice is made in favour of transparency over performance: even though a bank may lose a few percentage points of loan repayment predictability, it retains an understanding of how the system reaches its decisions and can therefore identify any biases that might inadvertently creep into the system, as well as justify its operation to regulators.
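That trade-off can be measured directly.  A hedged sketch (synthetic data; real loan books will behave differently) comparing an interpretable scorecard with a black-box ensemble on the same task:

```python
# A hedged sketch of the transparency/performance trade-off (synthetic data;
# real portfolios will behave differently): a black-box ensemble may score a
# few points higher than an interpretable scorecard, but only the scorecard
# exposes how each factor drives a decision.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20_000, n_features=20,
                           n_informative=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

glassbox = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)
blackbox = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)

print(f"interpretable scorecard accuracy: {glassbox.score(X_te, y_te):.1%}")
print(f"black-box ensemble accuracy:      {blackbox.score(X_te, y_te):.1%}")
```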

AI application has long ceased to be a purely technical issue confined to the IT department.  Successful implementation requires the involvement of the company's top management, who can ask the relevant questions and ensure that the use of AI aligns with the company's broader standards and values.  AI tends to permeate the whole organisation, including the business, legal and compliance, risk management and security departments, and requires their involvement in developing compliant and ethical algorithmic models.  As the deployment of AI continues to grow in different parts of financial services, firms need to remember that its inherent vulnerabilities require as comprehensive an approach as any other risk.

Analysts also point to other ways of mitigating biases that might impact the operation of AI.  The obvious solution is to ensure that AI development teams reflect the characteristics of society, which will require a significant shift given that, according to the World Economic Forum's The Global Gender Gap Report 2018, only 22% of AI professionals globally are female.  Equally important is the diversity of the data that feeds the AI models, although the path to this objective is strewn with challenges.  Whilst modern technology allows the collection of information from, for instance, social media or mobile data in support of a customer's creditworthiness, which can facilitate greater financial inclusion, it can also lead to models making opaque and erroneous decisions based on irrelevant factors or prejudicing those with little online footprint, including vulnerable or elderly customers.
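A basic safeguard is to audit the training data before any model touches it.  The sketch below (synthetic records and hypothetical segment labels) checks how well each customer segment is represented and whether historic outcome rates already diverge across segments:

```python
# A minimal pre-training data audit (synthetic records, hypothetical segment
# labels): flag segments that are under-represented or whose historic
# outcome rates already diverge before any model is trained.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
segments = rng.choice(["urban", "rural", "low_online_footprint"],
                      size=10_000, p=[0.70, 0.25, 0.05])
# Historic approval outcomes, assumed to differ by segment.
base_rate = {"urban": 0.65, "rural": 0.55, "low_online_footprint": 0.40}
approved = rng.random(10_000) < np.vectorize(base_rate.get)(segments)

df = pd.DataFrame({"segment": segments, "approved": approved})
audit = df.groupby("segment")["approved"].agg(share="size", approval_rate="mean")
audit["share"] = audit["share"] / len(df)
print(audit)
```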

According to Gartner, a research firm, about 85% of AI projects undertaken through 2022 will deliver erroneous outcomes owing to bias in data, algorithms or the teams responsible for managing them.  Both industry and policymakers are therefore coming to the conclusion that a human touch remains necessary when AI operates in the background: McKinsey & Company, a global management consultancy, recently described the work of a European financial services firm which decided to include a 'human in the loop' mechanism when advising financially vulnerable or recently bereaved customers.
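In engineering terms, such a mechanism can be as simple as a routing rule placed in front of the model.  A minimal sketch follows, with hypothetical flags and an assumed confidence threshold rather than the firm's actual design:

```python
# A hedged sketch (hypothetical flags, assumed threshold) of a 'human in the
# loop' gate: automated advice is released directly only when the model is
# confident and the customer is not flagged as vulnerable.
from dataclasses import dataclass

@dataclass
class Customer:
    recently_bereaved: bool
    financially_vulnerable: bool

def route_decision(customer: Customer, model_confidence: float) -> str:
    """Return who handles the case: the model alone or a human reviewer."""
    if customer.recently_bereaved or customer.financially_vulnerable:
        return "human review"               # sensitive cases always escalate
    if model_confidence < 0.9:              # assumed confidence threshold
        return "human review"
    return "automated decision"

print(route_decision(Customer(recently_bereaved=True,
                              financially_vulnerable=False), 0.97))
# -> human review
```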

SOME FOOD FOR REGULATORY THOUGHT

Given the far-reaching consequences that AI has for the economy and society, policymakers and regulators around the world are looking to understand the exposure and how to minimise it.  Conversations about AI risks are no longer restricted to technological or operational contexts but also consider their systemic implications for global financial stability.  In its report Artificial intelligence and machine learning in financial services, the Financial Stability Board, an international policy co-ordinating body, concluded that the new applications 'show substantial promise if their specific risks are properly managed', but raised concerns about the vulnerabilities universal banks can become exposed to if they depend on the same algorithms and data streams, or where their systemic importance grows through increased market power buoyed by their AI capabilities.

Some jurisdictions have introduced legal safeguards to protect individuals from discriminatory decision making, and these continue to apply in the AI context.  Under the Equal Credit Opportunity Act in the US, creditors must provide a statement of reasons for a negative decision on a consumer's application.  The General Data Protection Regulation, which came into force across the European Union in May 2018, provides individuals with the right not to be subject to a decision based solely on automated processing.  In prescribed circumstances, customers can also request human intervention to express their views or contest the decision.

Beyond these safeguards, policymaking and regulatory work on ensuring that AI operates in a fair and transparent fashion has focused on developing advisory guidance which the industry can use in designing the new technology.  In November 2018, the Monetary Authority of Singapore introduced a set of principles to promote fairness, ethics, accountability and transparency in AI and data analytics.  The document encourages regular reviews of data and models, as well as the application of the same ethical standards to human and AI decisions.

The European Commission followed suit in April 2019 by publishing Ethics Guidelines for Trustworthy AI, developed by a group of independent experts to encourage responsible and accountable technology.  The Commission's AI strategy, released the previous year, articulated an ambition to become a global leader in cutting-edge AI by increasing public and private investment in AI to €20bn by the end of 2020 and then maintaining it at the same level annually over the following decade.  Recognising that building public trust in new technology is essential to reaching this objective, the guidelines focus on areas such as transparency, diversity, fairness, non-discrimination and the avoidance of bias.

AI WITHOUT BORDERS

AI is a global phenomenon deployed across governments and industries.  It is here to stay, and no one seriously doubts that it will have a profound impact on our daily lives and, in the context of financial services, on our relationship with money.  Stephen Hawking once said that 'the rise of powerful AI will be either the best or the worst thing ever to happen to humanity.  We do not yet know which'.  Indeed, the risks are significant, including not just the potential for bias, but profound changes in the workplace as automation eradicates jobs, and in the social context as human interactions are modified beyond recognition.  But the technology also presents tremendous benefits, which have strong potential to outweigh the risks.

What is important, therefore, is to ensure that AI proves to be a force for good and serves as a catalyst for more inclusive financial markets as well as society in general.  This requires global co-operation between national governments, regulators and industries to establish consistent frameworks for mitigating the risks that AI technology can inadvertently bring about, as well as to prevent intentionally irresponsible behaviour.  There have been various calls around the world for continuous dialogue at the international level, the effectiveness of which may well shape the future we all live in.