Fed Court turns to AI to predict asset split after relationship breakdown

By Ry Crozier

Creates proof-of-concept.

The Federal Court of Australia has created a machine learning proof-of-concept that is designed to help parties divide assets and liabilities following the breakdown of a relationship.

Digital practice registrar Jessica Der Matossian told IBM’s THINK 2019 in Sydney that the proof-of-concept, developed with IBM partner Carrington Associates, had been trained on 1600 anonymised applications for consent orders made to the courts.

When both parties involved in a dispute agree to a course of action, they can apply to the court to formalise the agreement with a consent order.

The split is usually worked through by lawyers for both sides, but the Federal Court is experimenting with what it is calling the ‘FCA Consent Order AI Application’ to help parties more accurately determine a split that would receive court approval.

“The tool essentially allows them to enter their relevant information and based on like cases and outcomes of people who are in similar situations, that machine learning process thinks like a human and provides that percentage split to them,” Der Matossian said.

“The recommendation takes into account a series of factors such as age, income, capacity to earn an income, length of the relationship, [and whether there] are children involved.

“What this system actually does is it looks at what the judges are deciding and the registrars are approving, and it says ‘this is a fairer and more just outcome given your situation, given law, given the position you’re in terms of what your assets and liabilities are’.”
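In code, the kind of model Der Matossian describes could look something like the minimal sketch below. The feature names, sample values and choice of regressor are illustrative assumptions only, not the court's actual system.

```python
# A minimal sketch of the kind of model described, not the court's actual
# system; feature names, values and the regressor choice are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical anonymised training data: one row per consent-order application,
# with the percentage split the court ultimately approved.
df = pd.DataFrame({
    "applicant_age":      [34, 51, 42, 29],
    "applicant_income":   [72000, 95000, 38000, 61000],
    "earning_capacity":   [1.0, 0.6, 0.8, 1.0],   # illustrative 0-1 scale
    "relationship_years": [6, 22, 11, 4],
    "children":           [0, 2, 1, 0],
    "approved_split_pct": [50, 62, 58, 48],
})

X = df.drop(columns="approved_split_pct")
y = df["approved_split_pct"]
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# A party enters their circumstances and gets a suggested split back.
new_case = pd.DataFrame([{
    "applicant_age": 38, "applicant_income": 54000, "earning_capacity": 0.9,
    "relationship_years": 9, "children": 1,
}])
print(model.predict(new_case))   # purely illustrative output
```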

The tool is not currently in use by litigants or lawyers, and Der Matossian noted that even once it reaches that point, the final call on any asset division will still rest with the parties.

“For the court, one of the most important roles that we play is to always remain transparent and impartial,” she said.

“This means we can only use the tool for making recommendations and for information purposes at this stage.

“Ultimately, if the litigants don’t like it, they can go off and agree amongst themselves or seek further legal advice. But that decision ultimately lies with them.”

Before artificial intelligence could gain a deeper foothold in the determination of legal outcomes, many questions would need to be answered and assurances given.

“Being a court, we’re limited in how we can use AI,” Der Matossian said.

“To use it as a decision-making tool just isn’t possible for us. AI isn’t transparent enough and it raises a lot of liability and legal questions such as ‘Can I have access to the algorithms? Are the algorithms a trade secret for the court? Who’s liable for decisions made by AI? Do I have a right to appeal a decision made by AI? What does that appeal process look like?’

“Being lawyers and being so risk-averse, of course we think that way.”

The proof-of-concept uses machine learning algorithms running in IBM Watson Studio that are then exposed as an API.
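A client-side call to such a deployment could look roughly like the snippet below. The URL, credential handling and payload shape are assumptions for illustration; the article does not document the court's actual endpoint.

```python
# Hypothetical call to a deployed scoring endpoint; the URL, token and payload
# shape are placeholders, not the court's actual Watson Studio deployment.
import requests

SCORING_URL = "https://example.com/ml/v4/deployments/consent-order/predictions"  # placeholder
payload = {
    "input_data": [{
        "fields": ["applicant_age", "applicant_income", "earning_capacity",
                   "relationship_years", "children"],
        "values": [[38, 54000, 0.9, 9, 1]],
    }]
}

response = requests.post(
    SCORING_URL,
    headers={"Authorization": "Bearer <access-token>"},  # placeholder credential
    json=payload,
    timeout=30,
)
print(response.json())  # e.g. a suggested percentage split for each party
```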

The models were put together by Carrington Associates and the Federal Court via a design thinking process.

“We ran some design thinking workshops with the court,” Carrington Associates director of technology solutions Atul Desai said.

“It was important to understand the human element of the process because many times what happens is there are reasonings behind certain decisions that humans make in terms of how they split the assets.

“It was something we wanted to consider while we actually built the solution.”

Extracting data

The project ran into unforeseen challenges assembling the dataset used to train the machine learning model.

While the Federal Court has had fully digital files and e-services since 2008, the way data was stored presented some problems.

“Data was not stored in a structured machine-readable format. It was in PDF documents and what we had to do was spend a lot of effort understanding the structure of the PDF documents and then write a complex extraction process using OCR and computer vision technologies,” Desai said.

“Although the Federal Court has access to large sets of digital data, what we came to find is the way we were storing it wasn’t great,” Der Matossian added.

“It added a layer of complexity to the project, and a whole extraction phase that was completely unforeseen and added additional time and resources to the project and that affected our operations and Carrington’s operations.

“So on reflection, what we’ve really learned is in order for us to - fingers crossed - move further into the artificial intelligence world, we do really need to reflect on how we’re storing our data and what data we’re storing as well.”
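The article does not name the extraction tooling, but a rough sketch of the kind of step Desai describes, rasterising each PDF page and running OCR over it, might look like this (pdf2image and pytesseract are assumptions, not the tools Carrington actually used):

```python
# Rough sketch of the kind of extraction step described above: rasterise each
# PDF page and OCR it into plain text. Library choice is an assumption.
from pdf2image import convert_from_path   # needs the poppler utilities installed
import pytesseract                        # needs the tesseract binary installed

def extract_text(pdf_path: str) -> str:
    """OCR every page of a scanned consent-order PDF into plain text."""
    pages = convert_from_path(pdf_path, dpi=300)
    return "\n".join(pytesseract.image_to_string(page) for page in pages)

text = extract_text("consent_order_application.pdf")   # hypothetical file name
```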

Due to the sensitivity of data used in the training set, it was anonymised and critical information was masked before it could be used, Desai added.
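The court's actual anonymisation rules are not described, but a masking pass of the kind Desai mentions could, in outline, redact obvious identifiers before training. The patterns and placeholder tags below are assumptions.

```python
# Illustrative masking pass only; the real anonymisation rules are not public.
import re

def mask_sensitive(text: str) -> str:
    """Redact obvious identifiers before the text is used for training."""
    text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{4}\b", "[DATE]", text)               # dates of birth, filing dates
    text = re.sub(r"\b(?:Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+\b", "[PARTY]", text)  # titled personal names
    text = re.sub(r"\b[A-Z]{3}\d{4,}\b", "[FILE_NO]", text)                   # hypothetical file-number format
    return text

print(mask_sensitive("Mr Smith filed on 12/03/2015 under SYD123456."))
```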

There is also an ongoing challenge around making sure that only relevant data is fed into the machine learning model.

“Society’s always changing. Information about families, the way they dealt with divorce, the way they split their assets and liabilities 10-20 years ago may not be relevant today,” Der Matossian said.

“So as much as we need large sets of data, we need relevant data to reflect community [changes].”
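In practice, the relevance filter Der Matossian describes could be as simple as dropping applications older than a chosen cut-off so the training set reflects current community circumstances. The file name, column name and cut-off below are assumptions.

```python
# Minimal sketch of a recency filter over the anonymised training extract.
import pandas as pd

applications = pd.read_csv("anonymised_consent_orders.csv",   # hypothetical extract
                           parse_dates=["date_filed"])
recent = applications[applications["date_filed"] >= "2014-01-01"]  # e.g. last five years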

Desai said that there are ongoing experiments about the best way to present the machine learning tool, including experimentation with a chatbot interface using Watson Assistant.

“This is just the beginning,” he said.

“There is a lot to do when we build a larger solution.”
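The prototype's conversational interface uses Watson Assistant, whose API is not reproduced here; as a plain stand-in, the flow amounts to asking for the same factors the model expects and passing the answers to the scoring endpoint. Question wording and field names below are assumptions.

```python
# Command-line stand-in for the chatbot experiments mentioned above.
QUESTIONS = {
    "applicant_age":      "How old are you?",
    "applicant_income":   "What is your annual income?",
    "earning_capacity":   "Rate your capacity to earn an income from 0 to 1:",
    "relationship_years": "How many years did the relationship last?",
    "children":           "How many children are involved?",
}

def collect_answers() -> dict:
    """Ask each question in turn, as a chatbot would, and collect numeric answers."""
    return {field: float(input(prompt + " ")) for field, prompt in QUESTIONS.items()}

# answers = collect_answers()
# The answers would then be sent to the scoring endpoint sketched earlier.
```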

Building trust

A full production version is likely to use Watson OpenScale, a relatively new service from IBM designed to improve transparency around the inner workings of an AI model - and therefore help instil trust in what it produces.

“One of the things which is critical when you build an AI model is there needs to be fairness in the models which allows you to give recommendations to all groups of people, and it doesn’t give biased recommendations,” Desai said.

“IBM has introduced a solution called OpenScale and it’s something we’ll be using in the larger project.

“The models become explainable so people can trust the AI solution.”
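OpenScale's own API is not shown in the article; the underlying fairness idea Desai describes amounts to comparing the model's recommendations across groups and flagging skew. A plain-pandas sketch, with assumed column names, values and threshold, is below.

```python
# Plain-pandas sketch of a group-fairness check, not OpenScale itself.
import pandas as pd

results = pd.DataFrame({
    "group":               ["A", "A", "B", "B"],   # e.g. a protected attribute
    "suggested_split_pct": [55, 60, 48, 50],
})

by_group = results.groupby("group")["suggested_split_pct"].mean()
ratio = by_group.min() / by_group.max()            # crude disparity measure
print(by_group)
print(f"disparity ratio: {ratio:.2f}")
if ratio < 0.8:                                    # the common four-fifths rule of thumb
    print("Recommendations may be skewed against one group; investigate before use.")
```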

Ry Crozier attended IBM THINK 2019 in Sydney as a guest of IBM Australia.
