
50 shades of grey – can lawyers ethically use AI?


Artificial intelligence (AI) has the potential to provide lawyers with valuable insights and predictions – cheaply and efficiently. However, the technology comes with a number of ethical grey areas. Do you know enough about AI to advise your clients and your organisation on what’s at stake?


AI is making its mark in legal services

AI refers to computing systems that can perform tasks that previously required human intelligence.

If you work in legal services, you’ve likely heard the hype around AI by now. Firms are increasingly adopting software that uses machine learning – and there’s a world of speculation on the technology’s future potential.

Utilising AI makes sound financial sense for legal services providers. It can perform tasks such as contract review rapidly, and often more accurately than a human.

Michelle Mahoney, Executive Director of Innovation at King & Wood Mallesons, said, ‘In legal teams, we’re currently seeing the biggest amount of traction and the greatest use of AI in contract review and extraction. This could be using AI to either look at contracts one at a time, or across a portfolio of contracts to find differences and commonalities.’

And, in many instances, the investment needed by legal services providers is very low. The technology is often cloud-based, which means users only pay per document.

‘It’s still a growing market. The big law firms are definitely using these technologies, and we are seeing more and more take-up across the industry,’ Michelle said.


Ethical quandaries: it’s all in the data

AI works by taking data, modelling it and using that model to make predictions.

Michelle said, ‘AI is very strong at predictions. So, this poses the question: are there any biases in the data sets that it’s using?’

‘Any prediction is based on an algorithm that has used data from real-life scenarios. The ethics around that prediction are important.’

An article in The New York Times shows how this bias can manifest. The Times reported that the algorithm calculating credit limits for Apple’s new credit card was giving higher limits to men than to women.

Why? As the article states, ‘Algorithms are written by humans, who are inherently biased… artificial intelligence software is trained on data that contains all kinds of human biases, which can then appear in its own inferences.’

With the Apple credit card, the algorithm may be drawing on historical data that says ‘women typically receive lower credit limits’ and factoring this into its analysis – regardless of the applicant’s income and credit history.
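To see how this can happen, here is a minimal, hypothetical sketch in Python (using NumPy, scikit-learn and entirely synthetic data – it is not Apple’s actual system, whose internals are not public). A model trained on historical decisions that gave one group lower limits learns to reproduce that gap, even for applicants with identical income and credit history:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(seed=42)
    n = 10_000

    # Synthetic applicant profiles (all figures are illustrative)
    income = rng.normal(90_000, 20_000, n)   # annual income
    credit_score = rng.normal(700, 50, n)    # credit-history proxy
    is_female = rng.integers(0, 2, n)        # protected attribute (0 or 1)

    # Historical limits: mostly driven by income and score, but encoding
    # a past practice of granting women lower limits for the same profile
    historical_limit = (0.3 * income + 40 * credit_score
                        - 5_000 * is_female + rng.normal(0, 1_000, n))

    # Train a model on the biased history
    X = np.column_stack([income, credit_score, is_female])
    model = LinearRegression().fit(X, historical_limit)

    # Two applicants with identical income and credit history
    man = [[100_000, 720, 0]]
    woman = [[100_000, 720, 1]]
    gap = model.predict(man)[0] - model.predict(woman)[0]
    print(f'Learned gap in predicted limit: ~${gap:,.0f}')  # roughly 5,000

Note that simply dropping the is_female column would not necessarily fix the problem: other features that correlate with it can act as proxies, which is part of what makes these biases so hard to detect and remove.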


Why ethical implications are important for lawyers

As AI becomes an ever-larger part of a lawyer’s workflow, those in legal services will need to understand how it works and what its ramifications are.

But AI will matter to lawyers beyond their own work. They will also have clients who are utilising AI, and those clients will be seeking legal advice as well as guidance on the ethics surrounding the technology.

Michelle said, ‘It’s an area with a lot of moving parts. It’s vital for lawyers to be across this area of new technologies and understand how this space is evolving.’


Need a leg-up in understanding AI?

A six-week course from The College of Law, Fundamental technologies shaping legal services: understand the technologies driving the business of law, commencing on 10 February 2020, will help you make sense of today’s legal technology landscape. It will equip you with knowledge of the major technologies disrupting the way lawyers work, and the skills legal teams need to survive our digital future.

The subject is part of the College’s newly formed Master of Legal Business and is led by Teaching Fellow Michelle Mahoney. If you want to learn how you could be the next change-maker in your organisation, get in touch today.