As AI becomes involved in clinical EM decisions, patient autonomy and shared decision making might suffer. If emergency physicians rely solely on AI-generated suggestions based only on objective data, they risk recommending treatments or interventions that conflict with the patient’s values and preferences.
ACEP Now: Vol 43 – No 01 – January 2024

An example of this is shared decision making regarding hospitalization for moderate-risk HEART score chest pain. Currently, the emergency physician may calculate the HEART score and then take the data to the patient for a discussion in which the patient can heavily influence their follow-up plan. Such shared decision making succeeds because the physician understands, and can share with the patient, how the data were applied and how the statistics and risks were generated. As AI models become more complex, clinicians may not be able to explain clearly why a recommendation is being made, and patients may no longer understand the basis for their emergency physicians’ recommendations well enough to make an informed decision.
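Part of what makes this shared decision making work is that the score itself is transparent: each of the five components (History, ECG, Age, Risk factors, Troponin) is worth 0–2 points, and the 0–10 total maps to a risk tier the physician can explain at the bedside. A minimal sketch of that calculation follows; it is illustrative only (function names are invented for this example, and the simplification is not clinical software):

```python
def heart_score(history, ecg, age, risk_factors, troponin):
    """Sum the five HEART components, each scored 0, 1, or 2
    by the clinician per the published criteria."""
    components = [history, ecg, age, risk_factors, troponin]
    if any(c not in (0, 1, 2) for c in components):
        raise ValueError("each HEART component scores 0, 1, or 2")
    return sum(components)

def risk_tier(score):
    """Map the 0-10 total to the conventional HEART risk tiers."""
    if score <= 3:
        return "low"
    if score <= 6:
        return "moderate"
    return "high"

# A moderately suspicious history (1), nonspecific ECG changes (1),
# age 45-64 (1), 1-2 risk factors (1), normal troponin (0):
total = heart_score(1, 1, 1, 1, 0)   # 4 -> moderate risk
```

Because every step here is inspectable, the physician can walk a patient through exactly why they landed in the moderate-risk tier — precisely the explainability that a complex AI model may not offer.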
Implementation of new technology within the medical field forces consideration of how patients and physicians will interact with it. A rarely discussed but vital ethical issue is that patients who prefer to interact with a human rather than an algorithm should retain the right to refuse AI’s application in their care, and emergency physicians must remain aware of that right. Emergency physicians must provide patients with sufficient information (e.g., about AI’s inclusion, consequences, and significance) so that they can decide whether to allow AI to be part of their care.6 Such consent necessarily requires that AI not be so embedded in the EM process that its use cannot be refused; patients must be able to challenge or refuse an AI-generated recommendation. This helps ensure that the humanistic nature of medicine prevails and that EM care is tailored to patient preferences and values.
AI’s role in patient-care decisions involving ethical dilemmas, including those at the end of life, is unclear and problematic. In the early stages of AI development, and for decades to come, trained professionals, usually emergency physicians, will need to provide counseling to patients and families. AI cannot replace physician input in the nuanced and complex ethical decisions that need to be made. However, AI may be able to help frame questions that guide physicians in determining therapies and predicting mortality. For example, in patients at high risk of death within six months, AI helped reduce the use of chemotherapy by three percent.7 A study of AI-triggered palliative-care decisions found higher use of palliative-care consultations and a reduced hospital readmission rate.8 AI will undoubtedly be useful in providing emergency physicians with ethical guidance, but it cannot make ethical decisions itself.
One Response to “Artificial Intelligence in the ED: Ethical Issues”
January 8, 2024
Todd B Taylor, MD, FACEP

Analysis presented in this article is germane to healthcare for the half of the world’s population that has access to it. For the other 4.5 billion people, AI may become their sole source for the full spectrum of healthcare services, including preventative, diagnostic, therapeutic, and behavioral health care. To withhold this emerging technology from those who might otherwise have none seems narrow-minded & unwarranted. Letting the “perfect be the enemy of the good,” a common consequence of American healthcare, need not be perpetuated across all populations.
To that end, one can imagine “proceduralists” (doing the necessary hands-on work) aided by AI, bringing healthcare to underserved & unserved individuals. Perhaps sooner rather than later, healthcare kiosks will be able to perform even sophisticated diagnostics & then deliver therapy (e.g., medication) even without the benefit of a human practitioner. Certainly some will suffer from an incorrect diagnosis or prescribed therapy. But that also happens on a regular basis in all sorts of healthcare settings today.
Technology continues to eliminate entire swaths of the services industry. Trucking & transportation will soon no longer require a human. AI will also give you that perfect haircut. Traditional grocery stores will soon be replaced by machine picked items, delivered to your door by autonomous vehicles in less than an hour.
As usual, healthcare will lag behind other industries, but technology will slowly chip away at services where AI advancements provide superior results. Diagnostic radiology & pathology are ripe for the picking.
Emergency Medicine may be further down that list, but automated triage & other parts of the ED process will make what we do now look like the typewriter . . . “type-what” said “Gen Alpha”. And, who knows what Gen Beta (2025-2039) will have never heard of. As a “Boomer” myself I’ll probably be dead by then, but hopefully not a victim of MedAI.