
Discussion about this post

Matt Z

Honestly, my guess is the article was peer reviewed by similar AI-brained researchers who just fed it into an LLM and skimmed the summary. Which is disturbing in its own right.

But the real irony comes from the text of the article itself. The whole crux of it is "use AI models to diagnose autism, and make them 'explainable AI' to overcome the trust gap." Except the authors, and the entire field, clearly have huge blind spots, and they're basically proposing digital phrenology.

In addition, their suggestions include "robot-assisted therapy" and "using AI to generate recommendations for parental support." But the science is apparently... dumping some data into a weighted neural network to return a score? And maybe it was higher-confidence and faster than other models, but I gotta ask: did they just ask an LLM to try things until the numbers were better? What is "explainable" about that process, and is it actually better, or did they just manipulate the dataset until the number went down?

Finally... the whole intro basically treats autistic people as "a growing problem no one understands" and seems to accept the framework of "autistic people are a burden who need constant management and no one really understands it" instead of "autistic people need support." The whole framing removes agency from autistic people; granted, that may be because it's written for early-childhood intervention, but it feels... gross.

James Annan

Currently it doesn't say retracted; rather:

Change history

28 November 2025 Editor's Note: Readers are alerted that the contents of this paper are subject to criticisms that are being considered by editors. A further editorial response will follow the resolution of these issues.
