February 26, 2024

The one decision that AI cannot predict

We often talk about personalized medicine; we almost never talk about personal death.

End-of-life decisions are among the most complicated and feared by both patients and caregivers. Although multiple sources indicate that people prefer to die at home, in developed countries they often die in hospital, frequently in acute care facilities. Several reasons have been advanced to explain this gap, including the underutilization of hospice services, due in part to delayed referrals. Healthcare professionals do not always initiate end-of-life conversations, whether because they are concerned about causing discomfort, worry about interfering with patient autonomy, or lack the training and skills to discuss these matters.

We associate several fears with dying. In my practice as a doctor who has worked in palliative care for years, I have encountered three main fears: fear of pain, fear of separation and fear of the unknown. Yet living wills, or advance directives, which could give patients some control over the process, are generally rare or insufficiently detailed, leaving family members with an incredibly difficult choice.

In addition to the significant emotional toll they bear, research has shown that surviving or surrogate decision makers can be inaccurate in their predictions of the dying patient’s preferences, possibly because these decisions affect them personally and are tied to their own belief systems and their view of their role as children or parents (the importance of the latter is evident from an Ann Arbor study).

Can we possibly spare family members or treating physicians from making these decisions by outsourcing them to automated systems? And if we can, should we?

AI for end-of-life decisions

Discussions about a “patient preference predictor” are not new, but they have recently been gaining traction in the medical community (such as these two excellent 2023 research papers from Switzerland and Germany), as rapidly evolving AI capabilities shift the debate from the hypothetical bioethical sphere to the concrete. Nevertheless, the field is still in development, and end-of-life AI algorithms have not yet been applied clinically.

Last year, researchers from Munich and Cambridge published a proof-of-concept study presenting a machine learning model that advises on a range of medical moral dilemmas: the Medical Ethics Advisor, or METHAD. The authors stated that they chose a specific moral construct, or set of principles, on which to train the algorithm. This is important to understand, and while it is admirable and necessary that they state it clearly in their article, it does not solve a fundamental problem with end-of-life ‘decision support systems’: on what set of values should such algorithms be based?

When training an algorithm, data scientists usually need a “ground truth” on which to base it: an objective, unambiguous standard. Consider an algorithm that diagnoses skin cancer from an image of a lesion; the “correct” answer is benign or malignant – in other words, defined labels on which we can train the algorithm. But for end-of-life decisions such as “do not attempt resuscitation” (as emphatically illustrated in the New England Journal of Medicine), what is the objective truth against which we train or measure the algorithm’s performance?
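
To make the idea of a “ground truth” concrete, here is a minimal, purely illustrative sketch (not taken from any of the studies mentioned here) of supervised training on labelled data, using synthetic stand-ins for lesion features; the point is simply that every training example carries a definite benign/malignant label, which end-of-life decisions lack.

```python
# Minimal, illustrative sketch of supervised training with a clear ground truth.
# The data are synthetic stand-ins for image-derived features of skin lesions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))          # hypothetical lesion features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # ground truth: 0 = benign, 1 = malignant

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Because every example has a definite label, performance is easy to measure.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
# For a "do not attempt resuscitation" recommendation there is no such label column.
```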

A possible answer would be to exclude any moral judgment and simply try to predict the patient’s own wishes: a personalized algorithm. Easier said than done. Predictive algorithms need data to base their predictions on, and in medicine, AI models are typically trained on large, comprehensive datasets with relevant fields of information. The problem is that we don’t know what is relevant. Presumably, beyond a person’s medical record, paramedical data such as demographics, socioeconomic status, religious beliefs, or spiritual practices could all be vital to predicting a patient’s end-of-life preferences. However, such detailed datasets are virtually non-existent. Nevertheless, recent developments in large language models (such as ChatGPT) allow us to explore data that we previously could not process.
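
For illustration only: if such a dataset did exist, a “patient preference predictor” pipeline might combine clinical and paramedical fields along these lines. Every column name below is invented, and the handful of rows exists only to show the schema, not to suggest that such data are available.

```python
# Hypothetical sketch only: no dataset like this currently exists.
# Field names are invented to show what "relevant fields" might mean.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    "age": [82, 67, 74, 90],
    "primary_diagnosis": ["heart failure", "metastatic cancer", "COPD", "dementia"],
    "religious_affiliation": ["none", "catholic", "jewish", "muslim"],
    "prior_hospice_contact": [0, 1, 0, 1],
    # The label we would like to predict; in reality it is rarely documented.
    "prefers_comfort_care": [1, 1, 0, 1],
})

features = df.drop(columns="prefers_comfort_care")
labels = df["prefers_comfort_care"]

preprocess = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"),
      ["primary_diagnosis", "religious_affiliation"])],
    remainder="passthrough",
)
model = Pipeline([("prep", preprocess), ("clf", RandomForestClassifier(random_state=0))])
model.fit(features, labels)  # with four rows this is only a schema demonstration
```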

If using retrospective data isn’t good enough, can we train end-of-life algorithms on hypothetical data instead? Imagine interviewing thousands of people about imaginary scenarios. Can we trust that their answers represent their true wishes? It can reasonably be argued that none of us can predict how we will react in real-life situations, making this approach unreliable.

Other challenges exist as well. If we decide to trust an end-of-life algorithm, what minimum accuracy threshold would we accept? Whatever the metric, we will have to present it openly to patients and doctors. It’s hard to imagine standing in front of a family at such a difficult moment and saying, “Your loved one is in critical condition and a decision has to be made. An algorithm predicts that your mother/son/wife would have chosen to…, but keep in mind that the algorithm is only right 87% of the time.” Does this really help, or does it create more problems, especially if the recommendation goes against the wishes of the family, or is given to people who are not tech-savvy and will have difficulty understanding the concept of algorithmic bias or inaccuracy?

This becomes even clearer when we consider the ‘black box’ nature of many machine learning algorithms, which prevents us from questioning the model and the reasoning behind its recommendations. While explainability is discussed in the broader context of AI, it is particularly relevant to ethical questions, where understanding the reasoning behind a recommendation can help us come to terms with it.
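
One partial mitigation, sketched here with invented feature names and synthetic data, is to pair any such model with a model-agnostic explanation method such as permutation importance, so that a recommendation can at least be interrogated rather than accepted on faith.

```python
# Illustrative sketch: inspecting which (hypothetical) features drive a prediction,
# using permutation importance as a simple, model-agnostic explanation method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["age", "num_hospitalizations", "religiosity_score", "lives_alone"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 1] - 0.5 * X[:, 2] > 0).astype(int)   # synthetic "preference" label

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features by how much shuffling them degrades the model's predictions.
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
# A clinician could then see which factors the model leans on, and challenge them.
```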

Few of us are ever prepared to make an end-of-life decision, even though death is the only certain and predictable event in any life. The more we own our decisions now, the less we will rely on AI to fill the void. Claiming our personal choice means we will never need a personalized algorithm.
