Having missed most of the well-deserved pushback and rising animosity against AI because I really didn’t have the time for a newsletter, I’m here to say No Breakthroughs has returned[1], and this week I’m taking a look at a piece of biotech and science reporting around ChatGPT.
No need to salt the festering wound of AI slop here, that’s been done expertly by folks way smarter than me.[2] In particular, I will just direct you to the excoriating essay “I will fucking piledrive you if you mention AI again” and say “This mostly sums up a lot of feelings I have about AI”. A piledriver, for the uninitiated, is a professional wrestling manoeuvre where one human being grabs another human being, thrusts their head between … Ah bugger it, just watch the GIF.
The inspiration for this newsletter is that great essay, but don’t get your hopes up. I’m not as funny or as cynical as Piledriver Guy. I do, however, feel the need to Superkick silly reporting square in the jaw, like the time Shawn Michaels clocked Shelton Benjamin off the top rope.[3]
I’m talking about Synchron this week, a company that’s doing some interesting and important work in brain-computer interfaces (BCIs) and that we can’t resist comparing to Elon Musk’s Neuralink because that’s just the easiest comp to make.
Synchron has received a ton of media attention because it has a neat BCI device, the Stentrode, which is basically an extremely thin wire of electrodes carefully threaded into blood vessels in the brain. It reads electrical signals that are sent when a person is thinking about movement. This allows a person with paralysis to control things with their mind.
That’s not overselling it: those with the Stentrode implant can move cursors around a screen with thought alone, which is quite marvellous. The technology is very cool: it has been shown to work in a number of patients, and its rigorous clinical trials continue. Australian publications love, love, love Synchron because of its Australian-born, New York-based CEO Tom Oxley, who conceived the device alongside Melbourne biomedical engineer Nicholas Opie. We claim Synchron as One Of Our Own — it’s our biotech Phar Lap, its giant heart on full display.
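If you want a feel for what “reads electrical signals” means in practice, here is a toy sketch, entirely my own invention and nothing like Synchron’s actual decoder: pretend per-channel features from the electrodes are linearly mapped to a cursor velocity.

```python
import numpy as np

# Toy BCI decoding loop. Everything here is made up for illustration;
# it is not Synchron's signal chain, just the general shape of the idea.

rng = np.random.default_rng(0)

def read_electrodes(n_channels: int = 16, n_samples: int = 250) -> np.ndarray:
    """Stand-in for one short window of raw voltage from the implant."""
    return rng.normal(size=(n_channels, n_samples))

def band_power(window: np.ndarray) -> np.ndarray:
    """Crude per-channel power estimate (real decoders use proper filtering)."""
    return (window ** 2).mean(axis=1)

# A pre-trained linear map from channel features to (dx, dy) cursor velocity.
# In reality this mapping is learned in calibration sessions, patient by patient.
W = rng.normal(scale=0.1, size=(2, 16))

cursor = np.array([0.0, 0.0])
for _ in range(10):                      # ten decode ticks
    features = band_power(read_electrodes())
    cursor += W @ features               # decoded intent nudges the cursor
    print(f"cursor at ({cursor[0]:+.2f}, {cursor[1]:+.2f})")
```

Signals go one way: out of the head, through a decoder, onto a screen. Hold that thought for what comes next.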
In the Australian Business Review, on July 22, Synchron found itself in the news again, with this headline:
Aussie tech company uploads ChatGPT to people’s brains
It stopped me in my tracks. UPLOADS? This would be revolutionary! The ability to upload software into people’s brains? That’s basically what they do in the Matrix! Karate, languages, Donna Hay’s apple crumble recipe — I wouldn’t need to look this up ever again.
I was skeptical. However, we can all get a little wild with headlines, so perhaps this was just overselling exactly what Synchron achieved. Let’s check the strapline?
The biotech, backed by Bill Gates and Jeff Bezos, has taken its next giant leap, incorporating ChatGPT into human brains to help those with paralysis communicate with others more freely.
Oh, okay. No, it seems Synchron has put ChatGPT into a brain. A human brain! Right in the middle of clinical trials!? Why is this only being reported on by the Australian Business Review? This is revolutionary!
Well, maybe there’s a little bit more in the piece that can shed light on this? Nope, that doesn’t seem to be the case. I’ll check the author’s LinkedIn? There, the claim is that Synchron is “installing the hyped tech from OpenAI into people’s brains”.
This is not the case. And at No Breakthroughs we don’t just superkick a report for no reason. We gently lay the thing down on the mat and explain why it’s worrying:
It insinuates ChatGPT is capable of things it is not capable of. Integration with Synchron’s system gives the patient a way to provide ChatGPT-generated responses to questions. An example used by the company shows a patient being sent a text message by their doctor to set up a time to meet. They then use their mind to guide a cursor to a ChatGPT-generated answer.

This is definitely not uploading anything to the brain, and selling it as such is why we end up with articles like “I will fucking piledrive you” etc. This is supercharged predictive text (there’s a sketch of the flow below). And it’s an important and legitimate use case for ChatGPT! It’s not a feature that’s just jammed into the Stentrode to ride the hype — it could increase efficiency and save time for this particular group of people. But ChatGPT and the brain remain two distinct entities that do not cross over. (The article does do well to point out that ChatGPT does not get any patient data… would you trust this??)

It gives the impression information is able to be transferred to human brains. The Synchron device reads electrical impulses emanating from the brain and, with sufficient training, it enables a patient who is unable to use their hands to move a cursor around the screen with thoughts alone. It’s impressive and important tech, but it’s not really a two-way street. We’re not able to build software that can be uploaded to a brain to, say, learn every recipe ever written or perform a Shawn Michaels superkick.
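As promised, a minimal sketch of that texting flow, going off the company’s own description. Every name here is hypothetical and the ChatGPT call is stubbed out; the thing to notice is where the model sits, which is outside the skull.

```python
# Sketch of the flow Synchron describes: a language model drafts candidate
# replies OUTSIDE the body, and the BCI only supplies the selection.
# All function names are hypothetical; the model call is a stub.

def draft_replies(incoming_message: str, n_options: int = 3) -> list[str]:
    """Stand-in for a ChatGPT call that proposes candidate replies."""
    # A real implementation would prompt a language model here.
    return [
        "Tomorrow at 10am works for me.",
        "Could we do Thursday instead?",
        "Yes, see you then.",
    ][:n_options]

def bci_select(options: list[str]) -> str:
    """Stand-in for the patient steering a cursor to one option.

    The Stentrode decodes movement intent into cursor position and a
    dwell confirms the choice; here we just simulate the pick.
    """
    chosen_index = 0  # pretend the cursor landed on the first option
    return options[chosen_index]

incoming = "When suits you for a check-up this week?"
reply = bci_select(draft_replies(incoming))
print(reply)  # the chosen text is sent as a message; nothing enters the brain
```

Note what never happens in that loop: nothing travels into the patient. The model drafts, the patient picks, the message goes out.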
Why this matters
You can find a bunch of research about hype in science leading to negative effects such as erosion of trust in scientists and institutions. Brian Resnick, at Vox, wrote about this five years ago. A Science commentary piece said something similar, focusing on the researcher side — hype can also generate perverse incentives for researchers and lead down the path of falsification and fabrication.
Sometimes, the blame for hype can fall on how a press release was prepared. A little more research on Synchron’s ChatGPT partnership shows that the company had actually announced this via press release on July 11. But there was no overhyping or overselling in that release, and it featured videos demonstrating how the technology worked.
With all that information at hand, it falls on the publication to make sure the headline and copy are not overselling or overhyping the tech.
A bigger fear for science journalists is that scientists and researchers will shy away from doing any press, because they worry their stories won’t be told in the right way.
I encountered the Fear Factor last week, doing some freelance work on a recently published study in evolutionary biology and ecology. I won’t name the researcher, but they were particularly concerned about the way a major news outlet in Australia had framed their work: they were quoted in the piece and there was truth to what was said, but the publication painted a far more concerning picture of the research and its impacts. The researcher believed that framing jeopardized future funding opportunities.
I’ve spoken with many researchers in my time and the Fear is often: “I will look like I don’t know what I am talking about” or “I will be taken out of context”. If they do agree to interviews on record, the fear sometimes manifests as a desire to see their quotes and a draft copy of the report — and journalists will rightly turn down these requests, which only feeds the fear about what the final story will look like.
tl;dr
ChatGPT hasn’t been uploaded, installed or integrated into human brains. It is being used in Synchron’s BCI to make texting more efficient for those who have lost upper limb mobility.
Who knows how long for? But I do see a need for a Science Media Watch, as my friend Lee Constable suggested on LinkedIn recently. So here we are!
[2] Suss Ed Zitron’s work on this.