Safe and Responsible AI in Australia: An Orwellian Dream


There has been some expected fanfare over the recently released “Safe and Responsible AI in Australia” paper. And it seems that commentators and AI observers might find what’s happening over at the NDIA a bit too unpleasant to believe.

First up, let’s have a quick look at the AI paper. I’ll do more commentary in due course.

The first proposed principle puts the mandatory AI guardrails on a collision course with the freshly amended NDIS Act.

Principle (1): The risk of adverse impacts to an individual’s rights recognised in Australian human rights law without justification, in addition to Australia’s international human rights law obligations.

The government is speaking with a forked tongue. The NDIS Bill was pushed through Parliament, ignoring - not even taking into account - the objections of the Human Rights Committee of Parliament, human rights scholars and legal experts. The NDIS Bill’s very foundations suspend human rights, and the scheme is operationalised via automated decision-making algorithms.

And here is the second point of collision between the AI policy and the newly amended NDIS Act: the paper proposes that people affected must be able to challenge the use of AI or the outcomes of automated decision-making (ADM).

Well, not so, according to the drafters of the NDIS Bill - who have probably given no thought to the implications of ADM and AI, notwithstanding the RoboDebt and RoboNDIS human catastrophes.

Specifically, the newly amended NDIS Act puts algorithms (the Budget Calculation Instrument) - yet to be defined - beyond the reach of administrative review. This is a terrifying world first. The black-box algorithms will have absolute supremacy.

This will undoubtedly trigger High Court action at some point.

The result is that people are refused access, or do not receive funding necessary for life. Unbelievably, the NDIA has not documented the risk to life arising from its use of algorithms - even though in other jurisdictions overseas, death and the most grotesque suffering and discrimination have resulted. The cases of catastrophic harm from the NDIA’s use of robo systems and methods are documented on the robondis.org campaign website.

It is therefore curious that the paper refers to overseas high-risk use cases, including:

“Access to essential public services and products. AI systems used to determine access and type of services to be provided to individuals, including healthcare, social security benefits and emergency services.”

This is exactly the NDIS use case.

The risk of harm and the threat to life are real, not theoretical - the harms are widespread and happening now, not in some far-off time. And it cannot be said that these risks and circumstances were not, and are not, known. Were they not considered?

But it seems that these guardrails will not reach the government’s own policy making and administration - this in itself is a risk to democracy and civil society. Keep reading, though, because on page 56 there is mention of “other work”:

"...work led by AGD to develop a whole of government legal framework to support use of automated decision-making systems (ADM) for delivery of government services. This may include systems run by AI. This reform work implements the Australian Government’s response to recommendation 17.1 of the RoboDebt Royal Commission."

So, very quietly, the use of ADM across government services will be given legal backing at some point in the future - which makes its current grey legal status all the more concerning. This shifting legal minefield of "subterranean systems" is explored in the brilliant article "Decoding the algorithmic operations of Australia's National Disability Insurance Scheme" by esteemed scholars Georgia van Toorn and Terry Carney.

Overall, the government's AI paper is a bureaucratic jumble of guardrails, pillars and lists built for a product development cycle - not for policy development or high-risk service delivery.

The ambition of “Government as an exemplar” is straight out of utopia.
