Artificial Intelligence in Credit: Legal and Compliance Issues

There has been an explosion of interest in artificial intelligence (AI) over the last year, set off by the release of ChatGPT by OpenAI in November 2022. Since then, almost every week has brought the announcement of new AI services and of companies incorporating AI into their existing offerings.

ChatGPT is an example of what is called “generative AI”. One of the reasons for the rapid adoption of generative AI is that it is accessible to the public through user-friendly interfaces.

AI has been in development for many years, but it has now clearly reached the deployment stage: it is available to the general public and is being widely used.

This article explores how AI is being used or could be applied in credit, and the legal and compliance issues that need to be considered.

WHAT IS AI?

Before we look at how AI can be used in credit, we need to have a basic understanding of what is meant by AI, and related concepts.

A word of warning - some of these concepts are fluid, contested, and not precise.

Broadly defined, AI refers to machine-based or digital systems that use machine-provided or human-provided inputs to perform advanced tasks for a human-defined objective, such as producing predictions, advice, inferences or decisions, or generating content.

The CSIRO has defined AI as a collection of interrelated technologies used to solve problems autonomously and perform tasks to achieve defined objectives, in some cases without explicit guidance from a human being.

Another, simpler, definition is that AI is the use of computer programs that can learn from data and perform tasks that normally require human intelligence.

By “learning” from data, AI has the ability to get “smarter” over time.

“Machine learning” is basically training an AI model to recognise patterns in data, and then using that knowledge to make predictions or decisions about new data. Where humans review or contribute to the model’s outputs, that is referred to as having a “human in the loop” (HITL).
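To make the ideas of machine learning and a human in the loop more concrete, here is a minimal, hypothetical sketch in Python using the scikit-learn library. The features, figures and thresholds are invented purely for illustration and do not represent a real credit model.

    # Minimal illustration of machine learning with a human in the loop.
    # All features, figures and thresholds below are invented for illustration.
    from sklearn.tree import DecisionTreeClassifier

    # Historical applications: [annual_income_k, existing_debt_k, years_employed]
    X_train = [[85, 10, 6], [42, 30, 1], [60, 5, 4], [30, 25, 0]]
    y_train = [1, 0, 1, 0]   # 1 = loan repaid, 0 = loan defaulted

    model = DecisionTreeClassifier().fit(X_train, y_train)   # "learn" patterns in the data

    new_applicant = [[55, 12, 3]]                             # an application the model has never seen
    prob_repay = model.predict_proba(new_applicant)[0][1]     # predicted probability of repayment

    # Human in the loop: only clear-cut cases are decided automatically,
    # and borderline cases are referred to a human credit officer.
    if prob_repay > 0.8:
        decision = "approve"
    elif prob_repay < 0.2:
        decision = "decline"
    else:
        decision = "refer to a human credit officer"
    print(decision)

The point of the sketch is simply that the model is not programmed with explicit lending rules; it infers patterns from the historical data it is given, which is also why the quality of that data matters so much.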

AI is sometimes classified as either narrow AI or general AI. Narrow AI is used for confined tasks like searching the internet, steering a vehicle, or operating chatbots on websites. General AI (or artificial general intelligence, AGI) is where the AI performs complex reasoning tasks across a wide range of subjects, as humans do.

Generative AI uses “large language models” (LLMs) and “multimodal foundation models” (MFMs). LLMs are trained on massive amounts of data and learn billions of parameters in the process. They read vast amounts of text, spot patterns in how words and phrases relate to each other, and then make predictions about what words should come next. An MFM is a type of machine learning model that can take more than one type of input, such as images as well as text.

LLMs and MFMs use “neural networks”. These are computing systems made up of interconnected units, often organised into layers to mimic the way the human brain functions. They are built through machine learning techniques and trained on huge quantities of data, typically sourced from the internet, rather than relying on predefined rules.
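The following toy example, heavily hedged, gives a flavour of what “predicting the next word” means. Real LLMs use neural networks with billions of parameters; this sketch simply counts, in a tiny invented corpus, which word most often follows another word, and is not how production models work.

    # A vastly simplified illustration of next-word prediction.
    # Real LLMs use neural networks; this toy example just counts word pairs.
    from collections import Counter, defaultdict

    corpus = (
        "the borrower repaid the loan on time "
        "the borrower missed the repayment "
        "the borrower repaid the loan early"
    ).split()

    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1   # record the observed pattern

    # Predict the most likely word to follow "borrower"
    print(following["borrower"].most_common(1))   # [('repaid', 2)]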

AI APPLICATIONS IN CREDIT

AI has the potential to touch pretty much every aspect of the business of lending. Lending is an information-based business, and many of the tasks performed by humans can be replaced or assisted by AI.

Pretty much the whole life cycle of credit could be impacted by AI:

  • Marketing: developing ads and campaigns and targeting customers.

  • Product design: monitoring competitors, evaluating product options, and developing alternatives.

  • Product selection: comparing products and matching products to customer requirements; “robo advice”.

  • Credit scoring and credit assessments: data analysis to create more accurate and personalised models of creditworthiness.

  • Credit decisions: automated credit decisions.

  • Loan processing and settlement: extracting information from documents, generating and sending documents, and automated settlements.

  • Customer identification and AML/CTF: KYC and transaction monitoring.

  • Customer service: responding to customer queries and handling pricing variations.

  • Collections and enforcement: monitoring loan performance, and automated collections contact.

  • Dispute resolution: processing complaints and conducting conversations with customers.

  • Fraud detection: spotting fakes and scams and responding rapidly to prevent losses.

  • Compliance and legal: breach detection, issue identification and legal sign-off.

POTENTIAL HARM AND RISKS WITH AI

As artificial intelligence advances in its capabilities and is deployed in an increasing array of applications, many people are expressing concern about some of the implications.

According to the report on the state of AI governance in Australia, the types of risks from AI can be grouped into three categories.

  • First, AI systems can pose risks when they do not operate as required or at the required level of quality. An example in the case of credit would be where the system produces gender-biased credit decisions.

  • Second, AI systems pose risks if they are used for malicious purposes or in misleading ways. This could include the use of AI to commit credit fraud.

  • Third, risks can arise if AI is overused, or used inappropriately or recklessly without considering the flow-on effects. These unintended consequences could include breaches of individual privacy and unemployment resulting from the replacement of jobs.

Let’s look more closely at some of the potential system failure risks.

One of the main challenges with AI is that the means by which the AI produces outputs may not be known or transparent. The results cannot always be explained, even by the humans who built the system. There is an important difference between what is called black box and white box AI. Black box AI is a type of artificial intelligence where the end user does not know how the AI produces insights from a data set; ChatGPT is a black box AI. In contrast, white box AI is transparent about how it comes to its conclusions. White box AI tends to be more practical for businesses: because a company can understand how the program arrived at its predictions, it is easier to act on them. As AI plays a more prominent role in society, trust and transparency will become increasingly crucial.
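As a hypothetical illustration of the difference, the sketch below trains a deliberately simple, transparent model (a logistic regression) on invented data and prints the weight the model has learned for each input. With a model like this, a lender can see which factors drive a prediction and by how much; a large neural network offers no comparably readable explanation.

    # Illustrative only: a "white box" model whose reasoning can be inspected.
    # The features and data are invented; a real credit model would be far richer.
    from sklearn.linear_model import LogisticRegression

    features = ["annual_income", "existing_debt", "years_employed"]
    X = [[85, 10, 6], [42, 30, 1], [60, 5, 4], [30, 25, 0]]
    y = [1, 0, 1, 0]   # 1 = repaid, 0 = defaulted

    model = LogisticRegression().fit(X, y)
    for name, weight in zip(features, model.coef_[0]):
        # A positive weight pushes the prediction towards approval,
        # a negative weight pushes it towards decline.
        print(f"{name}: {weight:+.3f}")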

Because AI is trained on large data sets, the content of those data sets can influence the outcomes, and this could lead to bias in decision-making.

For example, if an artificial intelligence system used for lending decisions is trained on historical data which shows that loans were more likely to be given to men than to women, the AI might infer that men are more creditworthy, leading to bias in credit decisions.
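One simple, and admittedly crude, way a lender might start looking for this kind of bias is to compare approval rates across groups, sometimes described as a demographic parity check. The sketch below uses invented decisions purely for illustration; genuine fairness testing is considerably more involved.

    # Illustrative only: comparing approval rates across groups.
    # The decisions below are invented for the purpose of the example.
    decisions = [
        {"gender": "F", "approved": False},
        {"gender": "F", "approved": True},
        {"gender": "F", "approved": False},
        {"gender": "M", "approved": True},
        {"gender": "M", "approved": True},
        {"gender": "M", "approved": False},
    ]

    for group in ("F", "M"):
        outcomes = [d["approved"] for d in decisions if d["gender"] == group]
        rate = sum(outcomes) / len(outcomes)
        print(f"Approval rate for {group}: {rate:.0%}")

    # A large, unexplained gap between the groups would be a red flag
    # prompting a closer look at the training data and the model.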

Problems with AI can arise not only from the data set that the model is trained on, but also from the learning algorithms used in the AI model. AI can be very good at correlating data, but it does not understand causation. It will not take particular attributes into account unless the algorithm is designed to consider them.

Generative AI models like ChatGPT have been optimised for fluency rather than accuracy, and they can create fake outputs (known as “hallucinations”). You may have heard of the recent case where a lawyer used ChatGPT to draft legal pleadings and it fabricated cases in support of the submissions.

LEGAL AND COMPLIANCE ISSUES

When understanding how the law applies to AI, there are some important basic principles.

  • AI does things humans would otherwise do, but the law is a human construct, governing rights and obligations between people and non-human legal entities, such as companies and governments, which have a “legal personality”. AI systems do not have a legal personality.

  • Delegating decisions or other outputs to AI does not excuse the humans who deploy it. Existing laws will apply in relation to artificial intelligence, and humans will be responsible for how it is developed and used.

This seems to be borne out in the guidance that has been issued on some of the predecessors of artificial intelligence. Way back in 2016, which seems a century ago in terms of AI development, ASIC published guidance on providing digital financial product advice to retail clients in Regulatory Guide 255 (RG 255). The guide defines digital advice as the provision of automated financial product advice using algorithms and technology, without the direct involvement of a human adviser. This is close to artificial intelligence, except that the definition does not refer to the learning capabilities of the technology.

ASIC’s view in RG 255 was that financial services licensees who use this technology must have at least one person who understands the technology and the algorithms used to provide the advice, and that the licensee should regularly monitor and test the algorithms that underpin the advice. It must be asked, though, whether that level of understanding and control can be maintained as AI becomes rapidly more powerful.

Another guidepost from the pre-ChatGPT era is the famous Wagyu and Shiraz case on responsible lending in 2019, which arose out of ASIC’s concerns about the automated decisioning system (ADS) used by Westpac to make lending decisions. Again, this was not quite artificial intelligence, but it did involve an algorithm comprising over 200 rules.
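The Westpac rules themselves are not public, but the general idea of rule-based automated decisioning, as opposed to a model that learns its own patterns from data, can be illustrated with a hypothetical sketch like the one below. The rules and figures are invented for illustration only.

    # Hypothetical illustration of rule-based automated decisioning.
    # These rules and thresholds are invented; they are not Westpac's rules.
    def assess_application(app: dict) -> str:
        if app["declared_expenses"] > app["monthly_income"] * 0.7:
            return "refer for manual assessment"
        if app["loan_amount"] > app["annual_income"] * 6:
            return "decline"
        if app["missed_payments_last_year"] > 2:
            return "decline"
        return "approve"

    print(assess_application({
        "monthly_income": 8000,
        "annual_income": 96000,
        "declared_expenses": 3500,
        "loan_amount": 450000,
        "missed_payments_last_year": 0,
    }))   # prints "approve"

Every rule in a system like this has been written, and can be read, by a human; the compliance question becomes harder when the rules are replaced by a model that has learned its own, less legible, decision logic.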

ASIC’s concern was that the ADS was not properly assessing the suitability of loans in accordance with the responsible lending requirements under the National Consumer Credit Protection Act (NCCP Act). However, the judge found that a credit provider could do whatever it wanted in the assessment process, as long as it did not make unsuitable loans.

This conclusion, which went against the approach that ASIC was taking to enforcement in this area, seems to suggest that there may be opportunities to deploy AI in the credit assessment process, provided that the outcome is that there are no unsuitable loans approved. But how will we know to trust the outcome?

LOOKING CLOSER AT EXISTING LAWS

Let’s now look at some of the main existing laws which could be relevant to the use of AI in credit. Many of these will also apply to the use of AI in other domains by business.

  • Corporations Act: Corporate governance in relation to AI is a big issue. Companies are being told that they need to develop an AI governance strategy which deals with understanding the technology, controls on how it is used, and accountability for performance and problems. Another consideration in terms of governance is the responsibilities of directors and the extent to which they can rely on information, advice and decisions which have an AI component. Under section 189 of the Corporations Act, a director can rely on information or advice given or prepared by employees and professional advisers, subject to some conditions. It will be interesting to see how directors’ liability could be affected by reliance on AI-generated information or advice.

  • APRA prudential standards: Lenders prudentially regulated by APRA will have obligations in terms of information security and more broadly operational risk which will be impacted by the increasing use of AI. AI governance will be a part of operational risk management.

  • NCCP Act: In terms of credit regulation, the use of AI in credit assessments and decisions, and in credit recommendations by brokers, will clearly be controlled by the responsible lending obligations, although as we have seen from the Wagyu and Shiraz case, there may be some latitude here. The use of AI may also be constrained by the general conduct obligations of credit licensees, including the obligation to act efficiently, honestly and fairly, and the obligation to have the necessary competence to engage in credit activities covered by the licence.

  • Australian Securities and Investments Commission Act (ASIC Act): Under the ASIC Act, the consumer protection provisions may apply to the use of AI. In the Trivago case, Trivago was found to have engaged in misleading and deceptive conduct because the algorithm it used to make hotel room recommendations gave consumers the impression that they were getting the best deal or the cheapest rates when this was not the case. The ASIC Act also contains consumer guarantees in relation to financial services, which require that the services be performed with due care and skill and be fit for purpose. When those services are delivered with AI components, the provider may be liable if it is found that they do not meet these guarantees.

  • Privacy Act: The Privacy Act will obviously be relevant to AI in many respects, including the use of personal information in the data sets used to train AI or to provide AI services. The issues include permission to collect and use personal information, and ensuring that the information is accurate, up to date and kept secure, in accordance with the Australian Privacy Principles. The recent response of the Federal Government to the Privacy Act Review accepts the recommendation that privacy policies should set out the types of personal information that will be used in substantially automated decisions which have a legal or similarly significant effect on an individual’s rights, and that high-level indicators of the types of decisions with a legal or similarly significant effect should be included in the Privacy Act and supplemented by OAIC guidance (proposals 19.1 and 19.2). This could include decisions on the denial of consequential services or support, such as financial and lending services. The Government has also agreed with the recommendation that individuals should have a right to request meaningful information about how automated decisions with legal or similarly significant effects are made (proposal 19.3).

  • AML/CTF Act: When AI is used for AML/CTF processes such as KYC, questions will arise as to whether the risk-based systems and controls required for an AML/CTF program may legitimately utilise AI, and the extent to which they are sufficient and appropriate for the designated services being provided by the reporting entity.

  • Intellectual property law: This is often discussed in relation to AI, although perhaps not so much in financial services. There are concerns about AI being trained on material in which the copyright is held by other people, and there are also questions about whether intellectual property rights can exist in AI-generated content. The view seems to be that they cannot, because the content is not created by humans.

  • Anti-discrimination law: This area of law is pertinent to AI because of the concern that AI decisions or advice may be biased and discriminate against protected categories. This is clearly a risk in credit decisions.

  • Common law: Beyond statutory law there is the common law, and the deployment of AI resulting in harm may be subject to claims in negligence where it is established that the person using it owed a duty of care. Class actions are no doubt on the horizon.

  • Employment law: Finally, there is employment law. AI may impact this field in several ways, such as where AI is replacing workers, and also in the ways in which workers may be allowed to use (or directed not to use) AI as part of their work.

FUTURE REGULATION OF AI

There has been a lot of talk and speculation about the dangers presented by AI, alongside the excitement about its potential benefits. Just after the release of GPT-4, which is used in the paid version of ChatGPT, a number of prominent AI scientists and business people called for a pause in development. It is the sheer power of AI which leads to these concerns about its possibly damaging effects on human society.

At the moment, as with many technologies, the pace of regulation lags behind the development of the technology. In Australia, there is currently no specific AI legislation.

There is a set of AI ethics principles which was published by the Department of Industry, Science, Energy and Resources in 2019, and a Responsible AI Network convened by the CSIRO.

There are eight ethics principles:

  • Human, societal and environmental wellbeing: an AI system should benefit individuals, society and the environment.

  • Human centred values: an AI system should respect human rights, diversity, and the autonomy of individuals.

  • Fairness: an AI system should be inclusive and accessible and should not involve or result in unfair discrimination against individuals, communities or groups.

  • Privacy protection and security: an AI system should respect and uphold privacy rights and data protection and ensure the security of data.

  • Reliability and safety: an AI system should reliably operate in accordance with its intended purpose.

  • Transparency and explainability: there should be transparency and responsible disclosure so people can understand when they're being significantly impacted by AI and can find out when an AI system is engaging with them.

  • Contestability: when an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.

  • Accountability: people responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

The Federal Government also released a discussion paper on safe and responsible AI in Australia in June 2023, which looks at the existing regulatory framework, potential gaps, and possible options for a framework governing the safe and responsible use of AI.

There is progress in various jurisdictions on AI regulation, but the forerunner at the moment, which is likely to shape regulation in other jurisdictions, is the EU Artificial Intelligence Act. The EU AI Act takes a risk-based approach to the regulation of AI, with different regulatory requirements depending on whether the risk is rated as minimal, limited, high, or unacceptable. AI posing an unacceptable risk would be banned. The European Parliament reached a provisional agreement with the Council of the European Union on the EU AI Act on 9 December 2023. The agreed text must be formally adopted by both the Parliament and the Council to become EU law.

The high-risk category includes AI which is used to evaluate the creditworthiness of individuals.

A one size fits all approach seems to be inflexible, and some kind of risk based approach which differentiates levels of risk is likely to be selected as the appropriate basis for regulation in Australia. Governments will be seeking to ensure protection from possible harms from AI while allowing for commercial freedom to develop these technologies and use them for socially beneficial outcomes.

One option may be to have safe harbour provisions where AI models can be protected if they meet specific criteria. Another possible option is having a kind of strict liability regime where the humans responsible for AI take full responsibility, in the way that humans can be strictly liable for the damage caused by dangerous animals.

Clearly we have some way to go in developing the regulatory frameworks for AI, and in understanding how they will affect the rollout of AI into credit and financial services more generally.

CONCLUSION

We hope that this article has given you a helpful overview of artificial intelligence, how it can be applied to credit, the potential risks posed, legal and compliance implications, and the regulatory trends going forward.

We are continually monitoring developments in AI and can assist you with legal advice on how to use AI in your business. Please get in touch if you need help.

 

Kathleen Harris and Patrick Dwyer
Legal Directors

Thanks to James Dwyer for his assistance in preparing this article.
