AI use and risks - and the regulatory response

THE GROWING USE OF AI

Over the last year, artificial intelligence (AI) has become much more widely used.

A recent survey shows that about a third of people now regularly use ‘generative AI’ at work and in their personal lives. Generative AI (or gen AI) is a subset of AI that enables the creation of text, images and sounds (depending on the model) from natural language and image prompts.

One big reason for AI adoption is that AI is being embedded into applications we use every day, like Microsoft 365. A number of apps now have AI features, and the major general-purpose AI apps, such as ChatGPT, Gemini from Google, and Claude from Anthropic, all have free versions available. The capabilities of the models used in these apps are improving all the time. More people are getting into the practice of experimenting with AI to see what it can do and how they can use it. For many, AI is becoming an indispensable part of daily work.

Besides the retail AI products, there is also a lot of activity in the development and deployment of wholesale AI applications.

AI is a broad-spectrum technology that can be applied across virtually any information system for personal or business use.

Finance sector businesses are moving on from planning to use AI to developing and deploying it in their operations. The main use cases for AI in finance include intelligent process automation, anomaly and error detection, analytics, and operational assistance and augmentation.

Recently the Commonwealth Bank of Australia (CBA) announced an AI chatbot, which will be the first for a major Australian bank. You can expect to see more and more of this kind of news.

Outside of your operations, AI will also be used in ways that impact your business. One example often cited is the risk of AI misuse by scammers.

ASIC survey

In October 2024, the Australian Securities and Investments Commission (ASIC) published a report on its survey of AI use among Australian financial services and credit licensees, Report 798: Beware the gap: Governance arrangements in the face of AI innovation (Report 798). ASIC found that adoption of AI is accelerating rapidly, and generative AI adoption is increasing exponentially. The key uses of AI among licensees included:

  • Credit decisioning and management.

  • Marketing.

  • Customer engagement and customer value proposition.

  • Fraud detection.

  • Business efficiencies and compliance.

  • Pricing optimisation.

  • Insurance (actuarial, claims management, etc.).

Use cases are shifting toward more complex and opaque techniques such as neural networks. In about a third of use cases, the AI models were developed by third parties.

AI RISKS

Why AI is different

AI is not just another new technology. It poses some unique risks.

  • It can make decisions without human intervention.

  • It exhibits human-like reasoning across domains.

  • It improves its performance through learning.

  • It can analyse massive volumes of data efficiently.

  • Its decision-making processes are often unexplainable.

  • It can create outputs indistinguishable from human-generated content.

  • It can perform tasks beyond its original design.

  • It is increasingly integrated into daily life.

In his new book Nexus, the historian Yuval Noah Harari writes that AI poses an existential threat to humanity because it is not just a tool but an agent:

“AI is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands … AI isn’t a tool – it’s an agent.”

AI risks in credit and financial services

ASIC outlined some of the key risks of the use of AI in financial services and credit in its Report 798. These include:

  • unfair or unintended discrimination due to biased training data or algorithm design;

  • incorrect information provided to consumers about products or services;

  • manipulation of consumer sentiment or exploitation of behavioural biases (e.g. in marketing);

  • breaches of data privacy and security; and

  • eroding consumer trust and confidence due to a lack of explainability, transparency and contestability.

Where AI models being deployed are developed by third parties, there are added risk management concerns.

REGULATORY DEVELOPMENTS

Australia joined other countries in November 2023 to sign the Bletchley Declaration, supporting the principles of safe, human-centric, trustworthy and responsible AI.

Several jurisdictions are now proceeding with legislation to regulate AI, including some American states and, most importantly, the European Union. The EU approved its AI Act on 13 March 2024, and it came into force on 1 August 2024.

EU approach

The EU may not be a leader in technology innovation, but it is a leader in technology regulation. In what is known as the Brussels Effect, the standards set by the EU can end up being the de facto global standard, something we have certainly seen with privacy regulation and the GDPR. The same thing could well happen with AI – and that’s not necessarily a good thing – so it’s important to understand what the EU legislation is seeking to do.

The EU AI Act takes a risk-based approach to AI regulation: the level of regulation increases as the risk increases. The Act bans unacceptable-risk AI, such as social scoring and manipulative AI. At the other end, minimal-risk AI is unregulated. The Act mostly deals with high-risk AI, and its requirements apply to developers whether or not they are located in the EU, if their services are made available in the EU. For high-risk AI, the Act imposes a number of obligations that will come into effect 24 or 36 months after the Act came into force, depending on the type of high-risk AI.

There are also obligations in the EU AI Act specifically for general-purpose AI such as ChatGPT. These are scheduled to commence in August 2025, but the standards that will underpin these requirements will take some years to develop, so a code of practice is being rolled out in the meantime.

Australia’s proposed mandatory guardrails

In September 2024, the Australian Government released its proposals paper for regulation of AI. It follows an initial discussion paper released in June 2023 and the Government’s interim response in January 2024. Like the EU, the Australian Government intends to take a risk-based approach to AI regulation, with mandatory ‘guardrails’ for high-risk AI that seek to balance preventative measures with remedial (after the event) measures.

The risk-based approach will allocate responsibility for AI harms across the AI supply chain and throughout the AI lifecycle. The AI supply chain is the network of actors and organisations that enables the use and supply of AI from design, testing and fine-tuning through to deployment and integration into local IT systems.

An important basic distinction in the supply chain is between developers and deployers of AI. Developers design, build, train, adapt or combine AI models and applications, while deployers supply or use an AI system to provide a product or service. Deployment can be for internal purposes or externally, such as to end customers who are not deployers of the system. The mandatory guardrail responsibilities will be distributed according to who is best equipped to deal with the risks associated with a particular stage of AI in its lifecycle.

The guardrails will be an extra layer on top of existing laws that will continue to apply to developers and deployers of AI, like the Privacy Act.

Only ‘high-risk AI’ will be subject to the Australian mandatory guardrails. The proposals paper sets out 2 broad categories of high-risk AI. The first category covers uses or applications of AI technology that may be high-risk. When deciding whether a use or application is high-risk, the Government proposes that you would have regard to a set of principles rather than a list (unlike the EU AI Act, which, for example, lists credit decisioning as high-risk). These principles are mainly about adverse impacts on humans in various ways.

The second proposed category of high-risk AI is ‘general-purpose AI’ (GPAI) - an AI model capable of being used, or adapted for use, for a variety of purposes, both directly and through integration into other systems. ChatGPT is an example. The focus of this second category is on what the technology can do, unlike the first category, which is about how it will be used. The concern with GPAI is that the risks of these AI systems are not foreseeable. GPAI models to date have mostly been developed in other jurisdictions, so there is a need for alignment here with international regulation.

The 10 guardrails

The proposed mandatory guardrails are that organisations developing or deploying high-risk AI systems will be required to:

  1. Accountability: Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.

  2. Risk management: Establish and implement a risk management process to identify and mitigate risks.

  3. Protection: Protect AI systems and implement data governance measures to manage data quality and provenance.

  4. Testing and monitoring: Test AI models and systems to evaluate model performance and monitor the system once deployed.

  5. Humans: Enable human control or intervention in an AI system to achieve meaningful human oversight.

  6. End users: Inform end users regarding AI-enabled decisions, interactions with AI and AI-generated content.

  7. Challenge: Establish processes for people impacted by AI systems to challenge use or outcomes.

  8. Transparency: Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.

  9. Records: Keep and maintain records to allow third parties to assess compliance with guardrails.

  10. Assess: Undertake conformity assessments to demonstrate and certify compliance with the guardrails.

At this stage these are only proposals, but they are not particularly controversial, and the final outcome will probably look similar to the proposals.

There are 3 options in the proposals paper for how the mandatory guardrails can be legislated:

  • adapt existing regulatory frameworks to include the guardrails;

  • create framework legislation that will require existing laws to be amended for the framework legislation to take effect; or

  • enact a new cross-economy AI Act.

Voluntary AI Safety Standard

While consultation on the mandatory guardrails continues, the Government published a Voluntary AI Safety Standard in September 2024 to promote ‘human centric’ AI. The voluntary standard also includes guardrails, which align very closely with the proposed mandatory ones. The only difference is number 10: in the proposed mandatory guardrails, number 10 is a requirement to undertake conformity assessments to demonstrate and certify compliance with the other guardrails, whereas in the voluntary standard, number 10 is about engaging your stakeholders and evaluating their needs and circumstances.

The Federal Government says that it should set an example by adopting best practices in its approach to AI governance. It has reached agreement with the States and Territories on a uniform approach to the assurance of AI in government and has also released its policy on the responsible use of AI in government.

AI and privacy

The Privacy and Other Legislation Amendment Act 2024 (Cth) amends the privacy legislation to require entities to include in their privacy policies information about the kinds of personal information used in, and the types of decisions made by, computer programs that use personal information to make decisions which could reasonably be expected to significantly affect the rights or interests of an individual. This is only a disclosure requirement: it does not impose any restrictions on automated decision making. The change will come into effect in 2026.

In October 2024, the Office of the Australian Information Commissioner published two guidance papers: one on privacy and the development and training of generative AI models, and the other on privacy and the use of commercially available AI products.

Applying existing laws

While the mandatory guardrails are still on the horizon, regulators have to work with existing laws, and both ASIC and APRA have taken the position that the existing laws and prudential standards that they enforce still apply to the development and use of AI.

In its Report 798 ASIC says that the regulatory framework for financial services and credit is technology neutral. It highlights some key existing obligations that are relevant to the safe and responsible use of AI. These include:

  • the obligation of licensees to do all things necessary to ensure their regulated services are provided efficiently, honestly and fairly;

  • the obligations not to engage in unconscionable conduct or to make false or misleading representations;

  • the requirement to have risk management systems and adequate technological and human resources;

  • responsibility for outsourced functions; and

  • duties of directors and officers to act with a reasonable degree of care and diligence.

APRA is open to firms experimenting with AI, provided they have firm board oversight, robust technology platforms and strong risk management in place to deal with it.

Australian Consumer Law

There are some challenges with applying existing laws to AI. These were laid out in relation to the Australian Consumer Law in another Federal Government consultation paper issued in October 2024. The consultation does not question the fundamental principles in that law, which are also replicated for financial services in the ASIC Act, but considers whether they work as well as they should in a practical sense when the goods and services supplied have AI components. One example is how you would prove a breach of a consumer guarantee for a product enabled by AI: how do you know when it has failed, and how do you establish the extent of the failure and the extent of any harm?

ACTION STEPS

So what can we be doing now?

AI is not just an everyday technology upgrade – its capabilities and risks are at a different level. For that reason, it’s really important to keep up to date with AI developments, and you should familiarise yourself with AI by using it and experimenting with it.

AI development and deployment is not on hold, waiting for regulation to happen. Regulation is playing catch-up. But we don’t need to wait for legislation to tell us how to act properly.

Your business should adopt an AI governance framework to manage how it will develop and deploy AI.

ASIC in its Report 798 concluded that the most mature governance arrangements for AI took a strategic and centralised approach, where the licensees:

  • had a clearly articulated AI strategy;

  • included AI explicitly in their risk appetite statement;

  • demonstrated clear ownership and accountability for AI at an organisational level, including an AI-specific committee or council;

  • reported to the board about AI strategy, risk and use;

  • had AI-specific policies and procedures that reflected a risk-based approach, and these spanned the whole AI lifecycle;

  • incorporated consideration of AI ethics principles in the above; and

  • were investing in resources, skills and capability.

Use the Voluntary AI Safety Standard as a starting point – the mandatory regime is likely to look similar. But don’t lose sight of how existing laws can apply to AI.

If you need help with the legal aspects of using AI in your business, please reach out to us.

