ChatGPT is NOT an Understanding Machine
Will ChatGPT Help Human Patients and Digital Humans to Understand? Or Will this Combination Ignite Complexity and Risk?
Last week (13-16 March 2023), #NextMed brought together some of the world’s most extraordinary health innovators and entrepreneurs. Check out the Livestream recordings of these rockstar sessions, graciously provided by #NextMed free of charge for a world hungry for change.
I have to say I am thrilled that aspects of my work on AI digital humans have been mentioned at #NextMed over the years, including having the great honour and buzz of presenting in person as faculty at #XMed in San Diego in 2019.
And I celebrate my amazing colleagues on this journey.
But the story of Hanna is really about the extraordinary leadership of Dr Chris Hillier, and how he motivated, excited and galvanised a whole community in Wilmington North Carolina - beyond the hospital - to co-design, own and want Hanna.
It was something very special to experience over the several years we worked together on this. Very similar dynamics and factors were critical in the Nadia project.
These were not technology projects but human experiential innovations based in co-design and ethics, which is why the future Hanna or indeed any digital human in health and government service delivery is beyond ChatGPT. Or any technology.
Indeed, ChatGPT presents unknown risks in use cases of government and health service delivery. Who is willing to accept those risks? Read on.
The Future of Digital Humans in Healthcare is NOT ChatGPT
Among the great scientists I follow and value in the global debate on ChatGPT and the like are Gary Marcus and Grady Booch. Readers of this article should look them up.
Here are my observations on this exciting evolving field of digital humans in service delivery, which I have been involved with from the beginning.
The world is moving way too slowly. At least, parts of the world are moving way too slowly.
This is a comfortable institutional slowness entrapped by traditional concepts, constrained by start-up economics. The patient *must* be seen and accepted as the driver of design and innovation, not an afterthought to legitimise techno marketing. Patients are in it for the long haul, not for the fast buck.
The slowness is now complicated and confused by what appears to be a mechanism to move fast in the same direction: ChatGPT. We have seen these glittering baubles before.
The assumptions behind this apparent shortcut are defective. A lack of awareness of these defective assumptions results in a fast-talking brochure – or stochastic parrot – that does not have longevity. I’ll talk about longevity in a moment.
ChatGPT is really not a ‘Large Language Model’ but a large ‘WORD’ model.
ChatGPT could be seen as a regurgitation machine, mostly of essay topics, marketing blurbs, exam answers and articles. (Not this article.)
Very interestingly, ChatGPT is being used in community research and other domains including accessibility - for example, my grandson with disability uses it. The use case matters a lot.
Don’t confuse writing a summary or an article with sustaining a conversation in an environment that must be controlled for risk, such as health or human service delivery.
My commentary is based on decades of experience at the messy interface of front-line service delivery: the complex servicing systems of business, taxation, human services, immigration, disability and health.
And across all these domains are common patterns of conversation. These common patterns are the foundations of any co-designed, domain-specific language model.
So far, I have not heard anyone contemplate what happens when things go wrong, and who exactly is accountable.
What happens when the *human* reacts adversely? A language model for a defined domain in service delivery must contain pre-determined escalation procedures, de-escalation procedures, hand-offs, and other process navigators.
In real life, nothing in service delivery happens on the fly, and the same must be true for AI digital humans used in service delivery.
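For readers who think in code, here is one way such pre-determined guard rails might look. This is a minimal illustrative sketch only: the cue lists and the handoff_to_human routine are hypothetical placeholders that would be co-designed with the service owner in any real deployment, not the API of any actual product.

```python
# Minimal sketch of a pre-determined escalation and hand-off policy for a
# domain-bounded conversation agent. All names here (DISTRESS_CUES,
# OUT_OF_SCOPE_TOPICS, handoff_to_human) are hypothetical illustrations.

DISTRESS_CUES = {"chest pain", "can't breathe", "suicidal", "emergency"}
OUT_OF_SCOPE_TOPICS = {"diagnosis", "change my dose", "legal advice"}

def handoff_to_human(reason: str) -> str:
    # In a real service this would route to a clinician, pharmacist or
    # contact centre, with the full conversation context attached.
    return f"I'm connecting you with a person now ({reason}). Please stay with me."

def next_turn(user_utterance: str, approved_reply: str) -> str:
    """Apply pre-agreed guard rails *before* any reply is spoken."""
    text = user_utterance.lower()
    if any(cue in text for cue in DISTRESS_CUES):
        return handoff_to_human("possible distress or emergency")
    if any(topic in text for topic in OUT_OF_SCOPE_TOPICS):
        return handoff_to_human("outside the agreed conversation domain")
    # Only co-designed, pre-approved content flows through to the patient.
    return approved_reply
```

Nothing in that sketch happens on the fly; every path is decided, documented and owned before the digital human ever speaks to a patient.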
Not being a language model, ChatGPT has none of the conversational structures of natural human conversation - just words.
ChatGPT cannot initiate a conversation. It cannot explain itself in context. If you don’t know a question to ask, it has no frame of reference. It cannot coach or guide you to an outcome, because an outcome is not an exercise like ‘write me an essay’.
A language model needs context, and in a constrained or bounded domain - such as health, disability - purpose is paramount. Context and purpose are determined through co-design, providing the necessary ethics and governance guard rails.
Try applying ChatGPT in the immigration space, and the result will be a massive security and processing mess. And this same risk, unknown, unquantified, and uncontrolled, applies across the board in government and health service delivery.
And for people in Australia and overseas who have been watching and impacted by the unlawful RoboDebt catastrophe of defective algorithmic methods - methods that led to the deaths of innocent people - and the Royal Commission Class Action campaign into equally defective RoboNDIS algorithmic methods presenting risk to life - the question needs to be asked:
…who exactly is willing to take the risk? Add ChatGPT to this, and the risk to human life elevates catastrophically.
The same risk would be present in any application of ChatGPT to the UK NHS. And other health services globally. The risk of ‘chat’ and chatbots has been creeping up slowly for years, through powerful techno marketing. Of course, tech firms won’t talk of the risks.
Three years ago, I penned the article ‘We Need to Chat About Chatbots’ warning:
‘These technologies and artificial intelligence are breakthroughs that have the power to both liberate and discriminate. This is not about automating processes but serving humans, and for many, at a time of great vulnerability.’
I have a lot more to say about the extraordinary risks of ChatGPT in government and health servicing in following expositions.
Every human at the service delivery interface has the same needs and desires: to understand and to be understood. You cannot automate human understanding.
The idea of applying ChatGPT to AI digital humans at the service delivery interface of pharmacies is to be admired for the urgency to generate momentum for change. But such a use case needs to go further on two dimensions: contemplating risks as well as really pushing the boundaries of traditional healthcare servicing. Co-design provides insights not otherwise possible into each of these dimensions.
In my opinion, ChatGPT might very well be the thing that holds back radical change because it distracts and gives the allure of change.
First up, what is it that *people* want? Not the pharmacist. Or the big box pharmacy retailer, drunk on data.
When a person is standing in a busy crowded pharmacy, shocked and traumatised by what life has thrown at them – what do they *feel*? How do they express their needs? Can they even do so, understand, and remember?
Think about the person who shakes, or stutters, or has an accent. In a busy pharmacy. Lining up behind someone to talk to the digital human ChatGPT pharmacy assistant. The use case doesn’t survive reality.
But that doesn’t mean that AI digital humans in healthcare don’t have a transformative role – they do. Transformation of the use case is the key, not the automation of discriminatory processes that should not even exist in the first place.
It is from this perspective that I explain. Human rights and ethics establish the human need and the human right to understand and to be understood, in context and in safety.
Most digital human health concepts fail to address health illiteracy, and this failing is due to the fact that co-design is neither undertaken nor understood.
There is a simplistic focus on content and having a digital human ‘talk’ the content: previously screen scraped, now potentially generated and regurgitated by ChatGPT. The problem is, the content *is* the problem, and in particular, incomprehensible content from authoritative sources that patients cannot understand. And there are decades of peer-reviewed international research on this.
We see crowd-sourced promotions inviting both experts and members of the public to ‘teach’ the digital human. And to ‘teach’ the ChatGPT. This is not co-design. And neither of these approaches addresses the impact of health illiteracy on service design.
Similarly, there is much focus on the development of large sets of questions and answers. To this we say, this is not enough. People don’t talk in question and answer pairs.
What if a patient doesn’t know what question to ask? Or doesn’t formulate questions? Or doesn’t even understand or have any knowledge about concepts, such as ‘volume’ or ‘eligibility’?
And this is the case given the extent of health illiteracy across populations, and in particular in disadvantaged communities.
This is a common challenge in any service delivery setting, which is why the science of complex servicing systems is an essential component of co-design.
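To make that contrast concrete, here is a tiny illustration. The data and names are hypothetical, invented only to show the difference between a question-and-answer lookup and a guided conversation that does not wait for the patient to ask:

```python
# Hypothetical illustration: a Q&A lookup only answers questions the patient
# already knows how to ask; a guided conversation offers the next
# co-designed step itself.

QA_PAIRS = {
    "what is a stent?": "A stent is a small mesh tube that holds an artery open.",
}

GUIDED_MODULE = [  # ordered, co-designed prompts for the topic 'your new meds'
    "You've just been given new medicines. Would you like me to walk through them one at a time?",
    "This one is a blood thinner. Shall I explain what it does and when to take it?",
    "Some people feel worried about side effects. Would you like to talk about that?",
]

def qa_bot(question: str) -> str:
    # Falls flat when the patient cannot formulate the question at all.
    return QA_PAIRS.get(question.lower().strip(), "Sorry, I don't understand.")

def guided_coach(step: int) -> str:
    # The coach initiates; no question from the patient is required.
    return GUIDED_MODULE[min(step, len(GUIDED_MODULE) - 1)]

print(qa_bot(""))       # the Q&A bot has nothing to offer
print(guided_coach(0))  # the coach opens the conversation itself
```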
People’s search for information and understanding varies throughout their life journey.
Machine learning cannot learn what the patient doesn’t know...or what whole populations do not know ... or do not even talk about.
Machine learning on a large data set from health illiterate populations will have gaps and errors. Localisation through co-design is critical.
Similarly, crowd sourcing from experts also fails. There are decades of research and litigation case law on the negative health impact of bureaucratic health information.
Of course, we are seeing chatbots being technically connected to ChatGPT, but this doesn’t make the resulting creature fit for purpose and contextually aware for specific service delivery use cases. They remain a curiosity.
In summary, without the governance of co-design and ethics, crowd sourcing from health illiterate populations or from experts fails to overcome the impact and disadvantage of health illiteracy. ChatGPT will not fix this.
Digital Human Cardiac Coach: Conversations in the Midnight Hours
The Digital Human Cardiac Coach / Digital Human Health Coach innovation we have developed is built on decades of patient experience: the conversations that occur in the lonely hospital room late at night, between patient and carer, when the doctors and nurses are not around.
Tell me, what would a crowd-sourced ChatGPT know of these conversations? And for what it’s worth, these are the patterns of ‘midnight hours’ conversations amongst NDIS Participants and families - conversations that the NDIA ignored for years when co-design was rejected.
The Digital Human Cardiac Coach is not a regurgitation machine. It does not replace the health professionals; it does not provide diagnosis or treatment. This is not a digital puppet or talking brochure.
Over the years, we have closely studied and drawn inspiration from the remarkable work of SimCoach. SimCoach is a co-designed AI online intelligent virtual human agent developed by the USC Institute for Creative Technologies, designed to break down barriers to care for US military service members, veterans and their families.
And I ask myself, why after more than 10 years, is a SimCoach service not available in Australia for people like us - Veterans and their families? This is the institutionalised slowness I referred to earlier, a deathly inertia resistant to the discipline of deep systems thinking and co-design, yet attracted to shiny fast solutions such as ChatGPT that do not address the systemic problems in servicing.
More than 5 years ago, as Allan recovered from his 4th heart surgery – just months after I finished the Nadia project – we developed the Digital Human Cardiac Coach, and we are still working on it. We wrote of our experience in the article ‘Abandoned by Government eHealth- Heart Patient Turns to Apple’.
Allan is not ‘just’ a heart patient with 8 cardiac bypass grafts and 4 stents; he is a fitness enthusiast, a person with a disability, an engineer using common solutions across this spectrum of human experience.
Here we see that the patient’s life journey extends beyond the staccato, episodic medical journey.
The purpose of the Digital Human Cardiac Coach is to guide people – most of whom are health illiterate – through conversations with reassurance and support. The guided conversations are the golden thread, the corpus: patterns of contextual conversations, modules of themes and topics that thread through the patient’s life journey.
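As a rough illustration of how such a corpus might be organised (the fields and example content here are hypothetical, not our actual implementation), each guided conversation is a co-designed module tied to a topic, an audience and a stage of the life journey:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of 'modules of themes and topics that thread through
# the patient's life journey'. Field names and content are illustrative only.

@dataclass
class GuidedConversation:
    topic: str                    # e.g. 'meds', 'activity', 'worry at night'
    audience: str                 # 'patient' or 'carer'
    life_stage: str               # 'in hospital', 'first week home', 'ongoing'
    prompts: list = field(default_factory=list)  # co-designed, pre-approved wording
    escalation_note: str = ""     # when to hand off to a human

corpus = [
    GuidedConversation(
        topic="meds",
        audience="patient",
        life_stage="first week home",
        prompts=["How are you going with the new tablets today?"],
        escalation_note="Hand off to the pharmacist if side effects are reported.",
    ),
]
```

The unit of design is the conversation module, governed by co-design and ethics, not a free-running generator.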
There are currently no lifelong *human* cardiac coaches that provide continuous support for heart patients on their heart health journeys.
What we are envisioning is a *digital human* companion that you meet in hospital and is there every step of the way throughout life.
And because of this focus on life-long support, the Digital Human Cardiac Coach or Digital Human Pharmacy Assistant should not be limited to ‘living’ in a pharmacy shopfront, but should live in your home and be with you wherever you are.
Very soon after the ‘Abandoned’ article, I was invited to contribute the chapter ‘Digital Humans in Healthcare’ in the book ‘Augmented Health(care): The End of the Beginning’, a collection by Dr Lucien Engelen.
In these and other writings, I spoke not only about the Digital Human Cardiac Coach but also about the Digital Human Pharmacy Assistant/Concierge, designed to have conversations with people across a broad range of topic areas, in hospital and beyond, at any time, whenever needed.
And the need for conversations doesn’t stop at the hospital door, at the doctor’s door, at the pharmacy door. Quite the opposite. There is a high level of anxiety and risk when a fragile heart patient is dispensed their meds, discharged, and 15 minutes later sent home. With no-one to talk to in the midnight hours, when worries are magnified, about even the most basic of questions.
So the need for support at any time for the heart patient and their carer, means that the Digital Human Pharmacy Assistant has to leave the pharmacy, and come home.
Don’t Tell Us to Go Back into a Physical Pharmacy
If COVID taught us anything, it is the risk of concentration points of sick people – as in pharmacies, doctors rooms, and hospitals.
And through the COVID years, we fortunately avoided these situations through self-imposed isolation and discipline. Telehealth was limited and largely failed us: astonishingly, it never really reached its full potential and was terminated by the government. The gravitational pull of vested interests is strong.
I described in earlier writings the concept of having the co-designed Digital Human Pharmacy Assistant in the pharmacy. But it’s no use having this restricted to the pharmacy if I am in isolation, unwell, working remotely or travelling. And visits to the pharmacist and doctor are inconvenient, frustrating and risky staccato events. An in-situ model will generate neither the data anticipated nor the benefit for the patient.
What we have experienced through the Apple health ecosystem is that the rich data of life is created between visits: in the home, in everyday situations, continuous, untethered. It is our vision that one day our personal Digital Human Cardiac Coach will be with us always, as part of and connected to the Apple health ecosystem.
Ask, then, what purpose the physical pharmacy serves.
And to demonstrate this, here is Allan, talking to the Digital Human Cardiac Coach about his meds and support, as he is recovering on the lounge at home. And me, his carer, outside in the park, talking to the Digital Human Cardiac Coach also about his meds.
Both of these contextual conversation use cases are absolutely essential to the ongoing care of a heart patient.
One conversation on the topic of ‘meds’ involves the heart patient himself talking to a female persona of the Digital Human Pharmacy Assistant. The other conversation, also on the topic of ‘meds’, is between me, his carer, and an Asian male persona of the Digital Human Cardiac Coach.
The interface is not defined in the limited terms of a particular face. And over time, it can become any interface the patient or carer chooses, be it AR, IoT, voice, metaverse, connected devices or gaming.
The thread connecting these different contexts and interfaces is the co-designed conversation on the particular topic, in this example, meds.
We have designed and developed a model comprising a corpus of guided conversations. Not a ‘large language model’ but a personalised conversation model.
It will be a long time – if ever – before a ChatGPT thing can do this.
And we have done this in our home studios. Through the isolation years of COVID. No start-up dynamics. No big health company overheads and KPIs. Just the gift of time, a lifetime, our own expertise, experience, and commoditised powerful technology.
That’s the disruption no one is really talking about. The different and unexpected vectors and origins through which disruption is coming.
Conversations Across Lifetimes Not with ChatGPT
Our vision of a lifelong coach, virtual and universal, logically extends from lifelong – to across lifetimes, intergenerational. And here we have pivoted to do this.
In my presentation at Singularity University Exponential Medicine in 2019, I spoke about ‘digital immortality’ and the intergenerational benefit of health conversations with past and future generations. I saw this as the ‘genome’ of your life health conversations.
From a world view with no lifelong *human* health coaches - a medical model tethered to place - tethered to appointment times - tethered to episodes.
To the creation of our co-designed Digital Human Cardiac Coach that can be with you anywhere, anytime. To our own immortal digital human genome of health conversations. Continuous health conversations untethered by location, untethered by time, and untethered by lifetime.
Allan and I don’t see it as creepy but as having a positive humanitarian dimension. We would like our grandsons to talk with us when we are gone, to ask questions, to share stories. To understand.
We are now going beyond building the modules of guided conversations for the Digital Human Cardiac Coach, to building our personalised health conversations for our daughters and grandsons: capturing the through-life environmental context which is so important to one’s own lifetime healthcare and to generations – as I spoke about in my XMed presentation. But this level of context is beyond the reach of ChatGPT.
I believe that our methods in how to do this are universally applicable.
This article has been about the enormous humanitarian potential of AI-powered digital humans in healthcare. But this is not a use case for ChatGPT.
In future articles, I will dig further into the dangers of the inevitable temptation to apply ChatGPT in government and healthcare servicing. Tech companies won’t talk of these extraordinary risks.
Want to know more? My inbox is FULL of messages from people around the world wanting to know where the story of digital humans started and the inside story.
Watch out for my book, coming soon, ‘Nadia: Politics | Bigotry | Artificial Intelligence’. The ‘Nadia’ book also tells of the genesis of the Digital Human Cardiac Coach.