Unfortunately, our skepticism must contend with some ugly counterpoints. Different clinicians managing the same patient often make very different decisions, a phenomenon known as interpractitioner variability. Variability exists even within a single physician's practice: decisions can shift depending on the time left in a shift, fatigue from consecutive shifts worked, the recipient of the sign-out, whether the physician recently heard about a nasty medical malpractice lawsuit, or any number of factors that have no bearing on the actual probability of disease. That is to say nothing of well-documented gender and ethnic discrimination by clinicians. Sadly, our own decision-making is inherently flawed. In fact, one might argue it is far easier to program bias out of an algorithm than out of a human being.
Interpractitioner variability is one of the strongest forces pushing AI into the medical decision space, and the companies paying for those decisions are especially incentivized. Although efficient medical practice is a worthy goal, cost-cutting at the expense of accurate diagnoses or appropriate disposition must be avoided.
When the emergency physician follows an incorrect AI recommendation, we all know who is liable. We might try to defend ourselves by pointing out that the ECG interpretation read “normal,” but forgiveness for the clinician wanes quickly when the outcome is untoward. Conversely, disagreeing with the computer’s interpretation of “acute STEMI” can be even harder to defend when the physician proves to be wrong.
At least with an ECG interpretation we can typically see what is driving the interpretation and document our disagreement, but most algorithms are opaque in their operation. Deep-learning algorithms known as neural networks are notorious “black boxes,” where even the programmer is unsure how the computer derived the solution.8 And even when the mathematics used to reach the solution can be reviewed, understanding them well can practically require a master’s degree in computer science or AI.
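To make the “black box” problem concrete, here is a minimal sketch in Python using entirely synthetic data and hypothetical features (age, heart rate, troponin); it does not reflect any actual clinical product. A logistic regression yields one readable weight per input, while even a small neural network spreads its reasoning across hundreds of interacting parameters.

```python
# Minimal sketch of the "black box" problem, on synthetic data with
# hypothetical features; not any vendor's actual algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Columns: [age, heart_rate, troponin] (standardized, synthetic)
X = rng.normal(size=(500, 3))
y = (0.8 * X[:, 2] + 0.3 * X[:, 1]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

logit = LogisticRegression().fit(X, y)
mlp = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                    random_state=0).fit(X, y)

# The regression exposes one weight per feature; a reader can see that
# the third input (troponin) dominates the prediction.
print("logistic coefficients:", logit.coef_.round(2))

# The network's "explanation" is hundreds of interacting weights spread
# across layers; no single number says why a patient was flagged.
n_params = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
print("MLP parameters:", n_params)
```

Post-hoc explanation tools exist, but the network’s weights themselves never reduce to a rule a clinician could audit at the bedside.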
Several programs have been created, deployed, and even sold to other companies with no supporting published studies. Many deserve closer inspection, because even when testing in controlled settings shows excellent results, AI algorithms can be rife with correlation traps, biases, and self-fulfilling prophecies that cloak serious errors. Most clinicians currently lack sufficient understanding of these programs and their genesis, or of how to interpret their results, to unveil these errors during a clinical shift.
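One such trap, the self-fulfilling prophecy, can be sketched in a few lines (again with synthetic data and hypothetical variables): a model trained on historical admission decisions, rather than on true disease, dutifully learns the staffing habits baked into those decisions.

```python
# Minimal sketch of a self-fulfilling prophecy: training on a label that
# reflects past clinician behavior, not true disease. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
disease = rng.binomial(1, 0.2, n)       # ground truth, never shown to the model
heart_rate = 80 + 15 * disease + rng.normal(0, 10, n)
day_shift = rng.binomial(1, 0.5, n)     # has no causal link to disease
# Historical label: admission driven partly by disease, partly by habit --
# patients arriving on the day shift were simply admitted more often.
admitted = (disease + 0.4 * day_shift
            + rng.normal(0, 0.3, n) > 0.5).astype(int)

X = np.column_stack([heart_rate, day_shift])
model = LogisticRegression(max_iter=1000).fit(X, admitted)

# The model assigns real weight to day_shift: it has learned the habit, and
# deploying it would recommend more day-shift admissions, generating new
# data that "confirms" the pattern.
print("coefficients [heart_rate, day_shift]:", model.coef_.round(2))
```

A validation study scored against the same historical labels would rate this model highly, which is exactly how such errors stay cloaked.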
Leaning away from AI is the wrong response. The potential benefit of these programs is too great, and the quality of their predictions is improving too quickly, to expect we have more than six or seven years before they are an integral part of clinical practice. If patient-focused clinicians do not lead this era, others will. It is vitally important for experienced, practicing clinicians to be involved at all stages of algorithm development. Physicians need to ensure that recommendations are delivered in a way that makes their use or dismissal easily defensible, that correlations are not mistaken for causation, and that the results are both understandable and usable by the average clinician.