“When I got my cancer,” my friend tells me two years after her double mastectomy, “the doctors made me fill out forms. They tried to jam me into a diagnosis.” She speaks bitterly. “They weren’t treating me as an individual.”
What if data and AI could be smarter than human doctors? Even--weirdly--more humane?
Data Diagnosis
In a recent Nature publication, a group of San Francisco-based researchers describes how it tackled traumatic brain injury (TBI) this way.
When your brain is damaged by an external force or object, you go to an emergency room. That brain contains some 86 billion neurons linked by roughly 100 trillion connections, and it has proved so complex that no one has fully mapped it.
And when you get to the ER, your TBI is assessed as: mild, moderate, or severe.
That’s it.
TBI affects 2.8 million people a year in the US and costs $76 billion annually to treat. Not satisfied with "mild, moderate, or severe," a San Francisco-based university consortium, TRACK-TBI, built a database of roughly 3,000 patients across 18 sites.
The research team applied machine learning to the problem. According to Berkeley Lab, they "analyzed hundreds of simulations" using a "supercomputer at the National Energy Research Scientific Computing Center." The researchers found 19 distinct patient conditions, or "outcome clusters," hiding in the data.
Not three. Nineteen.
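To make "outcome clusters" concrete, here is a minimal, hypothetical sketch of the general idea: unsupervised clustering over patient features, letting the data choose how many groups exist instead of assuming three. This is not the TRACK-TBI team's actual method; every feature, number, and parameter below is invented for illustration.

```python
# Illustrative toy example only: this is NOT the TRACK-TBI pipeline.
# Every feature, number, and parameter here is invented to show the general idea:
# let the data, not tradition, decide how many patient groups exist.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Pretend each row is a patient with four (made-up) features,
# e.g. age, injury severity score, recovery measure, socioeconomic index.
patients = rng.normal(size=(3000, 4))

# Put features on a common scale so no single one dominates the distance metric.
X = StandardScaler().fit_transform(patients)

# Try a range of cluster counts and keep the one the data supports best,
# rather than forcing the familiar three ("mild, moderate, severe").
best_k, best_score = 2, -1.0
for k in range(2, 25):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"Data-supported number of outcome clusters: {best_k}")
```

On synthetic noise the answer is meaningless, of course; the point is the loop. Run it on real, carefully gathered clinical data and the number of categories gets to surprise you.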
My friend was talking about breast cancer, not TBI. But I suspect she wouldn't have felt so jammed into a box if data-driven research had expanded the nuance of her assessment six-fold.
ComfortAIble
Okay, but this sounds a bit impersonal. Like being a Ford hooked up to the diagnostic scanner at the mechanic's garage. What about the humanity part of data-driven treatment?
There are some strange paradoxes here based on research published this month.
In the UK, a startup called Limbic launched a mental health chatbot for the National Health Service. (Demand for mental health services has been on the rise since the pandemic.) The chatbot was tested on half the websites that refer people to NHS mental health services, to see whether referrals would come in faster and be routed more effectively to caregivers.
What happened? According to the MIT Technology Review, services using the chatbot saw a 179% rise in referrals from people identifying as nonbinary, and roughly a 40% rise in referrals from Black and Asian ethnic minorities.
Wait, so not talking to a person made people more comfortable?
Exactly. The research “may suggest that interacting with the bot helped avoid feelings of judgment, stigma, or anxiety that can be triggered by speaking to a person.”
This was a factor in the TBI research as well. "Socioeconomic background"--captured in the database--"became a differentiating factor for predicting various outcome trajectories," one of the researchers said.
In other words: Remove the stigma--about sex, race, or money--and receive the accuracy.
Your Doctor is Hallucinating
Before we get too excited about robot medicine, however, Stanford’s Human-Centered Artificial Intelligence group published sobering findings.
You might be tempted to consult ChatGPT for medical help. Those AI bots are sucking up terabytes of web data. So they should be able to summarize it all and give you “best of the web” medical advice, right?
Stanford HAI's finding about ChatGPT: "30% of individual statements are unsupported and nearly half of its responses are not fully supported."
Why?
The bots are tripping.
“Four out of five models hallucinate a significant proportion of sources,” say the researchers. Don’t confuse GenAI with your General Practitioner.
A few broad lessons from this round of specifically medical research:
1 - Reality can be so complex that it hides important answers. All that TBI data, once gathered, hid phenomena we can’t understand with our brains of flesh and blood. But with machine learning, a nuanced and powerful reality emerges. What data do you have that hides solutions, preferences, even cures?
2 - This TBI case study should also remind us of the power of gathering data in the first place. In all our excitement about advanced ML and AI, remember that all of it is built on boring-old-data… lovingly gathered, cleaned, stored, and made accessible.
3 - Medicine is one area where the benefits of AI may be most powerful. AI-enabled healthcare can, in theory, be delivered anywhere, to anyone, at a tiny marginal cost. A major leveler of inequality. What other areas of human endeavor can be so fundamentally affected by AI applied anywhere? Education?
4 - The intervention of bots into human interaction is unpredictable. Who would have guessed that someone who felt discriminated against would feel more comfortable with a software program than with a person? Maybe it’s one of those things that’s completely obvious… after it’s proven. What other surprising dynamics await us as we apply non-human service to servicing humans?