Riding the Autism Bicycle to Retraction Town
Does anyone *really* know their Factor Fexcectorn?
Some scientific papers have errors so obvious that they become instant social media meme fodder (and they should be struck from the scientific record as soon as possible).
They’re beautiful things, when they come along. Today, they’ve come along.
But this is an interesting case because it may even be a rare instance where we should celebrate the retraction process…
What’s the story here?
On November 19, Scientific Reports, an open access journal in the Nature Portfolio, published the following piece, headlined:
Bridging the gap: explainable ai for autism diagnosis and parental support with TabPFNMix and SHAP
Bridging the gap already feels very much like an AI headline, so that’s a tell, perhaps, that something weird is going on here.
The author is one Shimei Jiang, apparently from Anhui Vocational College of Press and Publishing in China. In the abstract, the paper claims to present a novel AI and “explainable AI (XAI)-based framework to enhance autism spectrum disorder diagnosis and provide interpretable insights for medical professionals and caregivers.”
The model is based on TabPFNMix, which is a real thing you can find on Hugging Face and GitHub, and seems to have been created and used by people working at Amazon for some reason.1
The paper’s introduction is pretty stock standard, but the real weirdness is Figure 1.
If it isn’t immediately apparent, it’s nonsense. There’s a bit of a random bicycle with a torture-device for a seat; a small child points — at what, we can never know? — as his parent, in a feat of grand body horror, has become attached to a slab of concrete.
There’s the Factor Fexcectorn, the word AUTISM seemingly pointing to a small orb that sits just outside someone’s brain and, of course, the ┐ Tol LIne storee, that most vile thing!

It’s obvious that this figure was generated by AI, but it was not picked up during the peer review process. This figure should not appear in a scientific journal. Yet here we are.
Why was it published and what happens next?
Good question. I have a few answers for you.
Several sleuths posted about this case on PubPeer, flagging the AI-generated figure, along with concerns about the data, the paper’s description of the model it uses, and some references that may not exist.2 Then it kind of took off on social media, too.
I asked questions of the author — they did not respond.
I also asked questions of those listed on Huggingface and Github. One of those people was Nick Erickson.
It seems that the model actually can do the things the paper suggests, but its description is lacking. Erickson told nobreakthroughs:
“I only barely glanced at the paper, but seems they used TabPFNMix as a model for supervised machine learning on a tiny 120 sample dataset, which is its intended use. Their description of TabPFNMix is not correct, as they call PFN a “pattern fitted network”, but PFN actually stands for a “prior fitted network”. They also say it is suitable for large datasets, which it explicitly is not. It only works on small datasets, but that is less critical. Probably the description of the background is LLM generated. Although for their use-case it is a reasonable model to use and probably beats the other models they compare it to such as XGBoost”
Intriguing. So maybe this isn’t a complete wash, but there is value in what the paper is trying to do. (Though I note the “barely glanced at the paper” part.)
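For readers curious about the mechanics, the kind of pipeline the paper describes (fit a classifier on a small tabular dataset, then bolt on a post-hoc explainer) can be sketched with stand-in components. This is a minimal sketch, not the paper’s code: scikit-learn’s RandomForestClassifier stands in for TabPFNMix, permutation importance stands in for SHAP, and the 120-row dataset is entirely synthetic.

```python
# Hedged sketch of a "tabular classifier + post-hoc explainer" pipeline.
# Stand-ins: RandomForestClassifier instead of TabPFNMix, permutation
# importance instead of SHAP. All data below is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "screening questionnaire" table: 120 samples, 5 features,
# matching the tiny dataset size Erickson mentions.
X = rng.normal(size=(120, 5))
# Make the label depend mostly on feature 0, so the explainer has a
# real signal to find.
y = (X[:, 0] + 0.2 * rng.normal(size=120) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

# Post-hoc "explanation": how much does shuffling each feature hurt
# held-out accuracy?
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
top_feature = int(np.argmax(result.importances_mean))
```

The actual paper would presumably call `shap.Explainer` on the fitted model instead, but the shape of the exercise is the same: train on a small table, then attribute the prediction to input features. The explanation is only ever as good as the model and the data behind it.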
Most importantly, I asked questions of Springer Nature and the Editor-in-Chief of Scientific Reports, Rafal Marszalek3:
And… Springer Nature did respond! And very quickly. In fact, it’s one of the fastest responses I have ever had from a publisher about a paper.
The team tells nobreakthroughs: This paper is going to be retracted!
Here’s Marszalek’s comments via the Springer Nature integrity comms team:
“This paper is in the process of being retracted and the author has been informed.
We have assessed the issues raised and have confirmed concerns around the methodology and one of the figures, supporting a retraction decision.
We have also undertaken a thorough review of the editorial process. Whilst the details of peer review are confidential, we can confirm that the article underwent two rounds of review from two independent peer reviewers, supporting an accept decision. We have also done an assessment of the handling of other papers with the same handling editor and are confident that they have a robust record and that this was a case of human error. We will be supporting them to ensure that this does not happen again.”
I often report on retractions and integrity breaches in science and I’ve not really seen a response this quick. The formal process may still take months, but this at least gets us to a point where we can say: don’t cite this paper, don’t share this paper, it is being scrubbed from the record.
Is this the fastest scientific paper retraction ever? I didn’t look at the data; someone else will know. Maybe the folks at Retraction Watch can tell me.
But that last part of Marszalek’s comment is interesting! Two independent peer reviewers, across two rounds of review, appear to have missed an AI-generated figure and flagged no issues. The speed of this retraction is itself a sign the paper should never have been published, and it leaves open questions about how it was handled.
I asked further questions of Springer Nature about whether they would add a note to the article to flag that a retraction decision has been made, but haven’t heard back just yet; it’s currently very early morning UK time as I write this.
I will also try something different with this one: based on this post and the reporting I’ve done here, I will post a comment on PubPeer to alert readers that Springer Nature is working on this retraction (it will likely go to moderation and may not be accepted!).
But I think publishers themselves should, as soon as a decision is clear, place notices not only in their journals but also on PubPeer. This would be one way to alert other researchers, as well as members of the public, to the integrity of a piece. PubPeer could be, and maybe should be, a little more like PubMed Commons: a post-publication forum that fosters discussion.
Especially in an area like autism research, it would be a shame for this study to somehow circulate when it has quite serious flaws.
Anyway. I have to get going. Appreciate you for reading this breaking-news-style nobreakthroughs, but I’ve got a doctor’s appointment and my autism bicycle awaits.
Honestly, the why?? — what this is for — that stuff goes over my head. {Post-publication addition: I should be clear that Amazon AI staff are not working on some sort of Autism Spectrum Disorder AI… a preprint about that work is here}
It seems these references are real. I had a look for them and they are all obtainable online and seem to line up with their authors.
Post-pub correction: I spelt Rafal’s name wrong. My apologies.

Honestly, my guess is the article was peer reviewed by similar AI-brained researchers who just fed it into an LLM and skimmed the summary. Which is disturbing in its own right.
But the real irony comes from the text of the article. The whole crux of it is 'use AI models to diagnose autism, and make them 'explainable AI' to overcome the trust gap.' Except the authors and entire field clearly have huge blind spots and they're basically proposing digital phrenology.
In addition to this, their suggestions include 'robot-assisted therapy' and 'use AI to generate recommendations for parental support'. But the science is apparently... dumping some data into a weighted neural network to return a score? And maybe it was higher-confidence and faster than other models, but I gotta ask: did they just ask an LLM to try things until the numbers were better? What is 'explainable' about that process, and is it actually better, or did they just manipulate the dataset until the numbers looked right?
Finally... the whole intro basically treats autistic people as 'a growing problem no one understands' and seems to accept the framework of 'autistic people are a burden who need constant management and no one really understands it' instead of 'autistic people need support.' The whole framing seems to remove agency from autistic people; granted that may be because it's written for early childhood intervention, but it feels... gross.
Currently doesn't say retracted, rather:
Change history
28 November 2025
Editor’s Note: Readers are alerted that the contents of this paper are subject to criticisms that are being considered by editors. A further editorial response will follow the resolution of these issues.