Automated assessments of disability support recipients’ eligibility, and of how much funding they should receive, will continue but will not replace manual approvals or prevent modifications.
Bill Shorten, the minister for both the National Disability Insurance Scheme (NDIS) and government services, said yesterday that the problem with the previous government’s application of algorithms to assess NDIS users’ needs and financial support was that they took “the human element out of it.”
Flagging a forthcoming “NDIS reboot”, Shorten said that questions about cuts or funding allocations to the $35.8 billion scheme would have to wait for next month’s federal budget, but he committed to using automation without supplanting manual planning.
He also told the National Press Club that automated analysis of NDIS recipients’ and providers’ data could improve the scheme’s financial sustainability, fraud resilience and personalisation.
NDIS "RoboPlanning"
Every year, each of the more than 550,000 NDIS participants must undergo a review of whether their ‘plan’, the funding they receive for pre-approved products and services, is “reasonable and necessary.”
The Morrison government’s ‘independent assessments’ trial contracted allied health professionals to use an opaque tool that classified NDIS users into one of 400 “personas” and assigned their “personalised budgets” accordingly.
The ‘independent assessments’ were intended to replace assessments based on submissions from recipients’ own specialists or doctors.
In opposition, Shorten labelled the independent assessments system ‘RoboPlanning’ and said it was based on flawed mathematical formulas.
"It has been constructed in a black box. And the disability community fear it and detest it legitimately," he said.
The tool’s source code was never published. National Disability Insurance Agency (NDIA) spokespeople told Senate estimates that assessments were based on “disability type, age, and a range of other factors.”
Pushback from the disability sector, which argued that the 400 personas system did not capture the diversity of NDIS users’ needs or allow for sufficient personalisation, led the Coalition to ultimately abandon the plan in July 2021.
However, the use of automated actuarial and predictive tools in NDIS determinations predates the Coalition’s ‘independent assessments’ trial, as do criticisms of the tools’ lack of transparency and their limited capacity to recognise individuals’ specific needs.
But Shorten yesterday said that the government was not about to get rid of them.
Shorten said there was no inherent issue with using automation technology and data analysis in NDIS operations, as long as it was used ethically and transparently.
"We're always going to use automation and crunch data," he said.
"There's so much data in the world of disability ... We're not using data enough.
"Automation and using data is excellent, but it's the purpose it's used for and it's the manner in which it's the ethical framework around it."
Combining automated & manual assessments
Long before the Coalition’s ‘independent assessments’ trial, individuals’ NDIS plans were already being generated by algorithms that predicted their needs and required level of financial support.
A user’s “typical support package” is based on their age, disability and level of function, and on the average requirements of individuals in those categories.
However, stages in the process still require human intervention: the support package can be modified if it is found not to meet the user’s specific needs, and an NDIA delegate must sign off on the plan.
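In outline, that pipeline amounts to a category-average lookup followed by manual review. The sketch below is illustrative only, since the NDIA’s actual model is unpublished; every category, dollar figure and function name is hypothetical.

```python
# Hypothetical illustration of a category-average "typical support package".
# The NDIA's real model, categories and figures are not public.
TYPICAL_PACKAGES = {
    ("25-34", "cerebral palsy", "moderate"): 72_000,   # made-up averages
    ("25-34", "cerebral palsy", "high"): 115_000,
    # ... one entry per combination of age band, disability and function level
}

def draft_plan(age_band: str, disability: str, function_level: str) -> int:
    """Generate a draft budget from the average for the user's categories."""
    return TYPICAL_PACKAGES[(age_band, disability, function_level)]

def finalise_plan(draft_budget: int, adjustment: int, delegate_approved: bool) -> int:
    """A planner may adjust the draft, but an NDIA delegate must sign off."""
    if not delegate_approved:
        raise ValueError("plan cannot take effect without delegate sign-off")
    return draft_budget + adjustment
```

The point of the structure is that the algorithm only produces a starting figure; the adjustment and sign-off steps are where the “human element” Shorten refers to sits.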
Shorten said that the hybrid approach was not like 'Robodebt', which automated the entire cycle all the way from data-matching to issuing a debt notice requiring repayment of a benefit.
“Robodebt was all about relying on one fact,” Shorten said. “That if the income you declared to social services was different to your average annual income according to the tax office then the government reverses the onus of proof and the individual had to prove their innocence.”
The NDIS assessment process is more like what the Online Compliance Intervention (OCI) was before it became Robodebt.
Since 2001, welfare recipients’ reported income had been compared against their averaged annual tax records to automatically identify discrepancies. Those discrepancies were assessed manually until Robodebt was launched in 2016.
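The data-matching step at the core of both schemes can be sketched in a few lines. This is a hypothetical illustration of income averaging, not Centrelink’s actual code; the names and threshold are invented.

```python
# Hypothetical sketch of the income-averaging comparison behind the OCI
# and Robodebt. All names and thresholds are invented for illustration.
FORTNIGHTS_PER_YEAR = 26

def flag_discrepancy(declared_fortnightly: list[float],
                     ato_annual_income: float,
                     tolerance: float = 50.0) -> bool:
    """Flag a recipient if declared fortnightly income diverges from the
    ATO annual figure averaged across the year.

    The flaw: dividing annual income by 26 assumes earnings were spread
    evenly across the year, which is false for casual or seasonal workers.
    """
    averaged_fortnightly = ato_annual_income / FORTNIGHTS_PER_YEAR
    return any(abs(declared - averaged_fortnightly) > tolerance
               for declared in declared_fortnightly)
```

Under the OCI, a flagged case went to a human assessor; under Robodebt, the flag could trigger a debt notice directly, which is the step Shorten says NDIS planning does not take.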
Supporters of the combined approach say it allows the NDIS to be consistent in how much support it determines people of a similar age, disability and level of function need, while maintaining a degree of human oversight.
Critics argue that the use of algorithms still boxes individuals into broad categories, ignoring their individual needs and denying them transparency about the decision-making process.
“All participants and families can see are the end results of automated adjustments, in the form of seemingly arbitrary cuts to people’s funding when they apply or go for review,” a UNSW report said last year.
The report called for the review of the NDIS, which Shorten launched in October, to scrap automated assessments altogether.
Algorithmic transparency
Shorten said yesterday that there "should always be an ethical framework around the use of AI" as well as transparency into how it is coded and implemented.
"My view is it should be ideally, wherever possible, open source so that people can see what's going in,” Shorten said.
“And the best protection of data is to co-produce with citizens … the more that citizens feel they can control their own data, the more they trust the government.”
However, NDIS users and applicants remain in the dark about how algorithms assess their eligibility or review the level of support they receive.
A freedom of information request [pdf] for documents containing “the date algorithms began being used by the NDIA to assess participants' NDIS plan funding” and “the document that lays out the ethics framework for the use of algorithms in assessing participants' NDIS plan funding,” returned no results.
In a decision letter published last year, the agency said that “this is because the NDIA do not use algorithms to assess plan funding.”
It's possible that the refusal was based on semantics: the agency may consider that algorithms are used to “guide” but not “assess” plan funding.