The good news first: Sensitivity for SAH in the validation was the same as in the derivation, effectively 100 percent. Applied as constructed, the Ottawa SAH Rule would virtually never miss a serious outcome in a patient with acute headache meeting the study's inclusion criteria. That said, in their pursuit of absolute sensitivity, the authors have also followed the breadcrumbs laid out by their statistical analysis to a somewhat inane conclusion: the only path to zero-miss is to evaluate virtually everyone. The specificity of the rule was 13.6 percent, capturing nearly all comers in pursuit of a small handful of true positives.
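The arithmetic behind "capturing almost all comers" can be sketched directly. The sensitivity and specificity below are the reported figures; the prevalence values are hypothetical placeholders for illustration, not numbers from the study.

```python
# Why 13.6% specificity implies near-universal workup:
# the rule flags all true SAH plus 86.4% of everyone else.
sensitivity = 1.0      # rule misses essentially no SAH
specificity = 0.136    # reported specificity of the Ottawa SAH Rule

def fraction_flagged(prevalence: float) -> float:
    """Fraction of eligible patients the rule flags for investigation."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos + false_pos

for p in (0.01, 0.05, 0.10):
    # hypothetical prevalence values, chosen only to show the trend
    print(f"prevalence {p:.0%}: rule flags {fraction_flagged(p):.1%}")
```

At any plausible prevalence, the rule recommends investigation in roughly 87 to 88 percent of patients.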
ACEP Now: Vol 37 – No 02 – February 2018

This is an example of a decision aid that, after seven years and thousands of patients, likely cannot be shown to be superior to physician judgment when explicitly studied. No direct comparison was performed, but underlying physician practice in these studies was to investigate, by either CT or lumbar puncture, in 85 to 90 percent of cases; the impact of this rule on testing rates would therefore be negligible. More concerning is the effect of a rule with such low specificity when it is applied outside the narrow inclusion criteria and high-prevalence settings of specific academic referral centers. It is possible, even likely, that misuse of these criteria would lead to many more patient evaluations than current clinical judgment, without any detectable advantage in patient-oriented outcomes.
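The concern about use in lower-prevalence settings can also be illustrated numerically. A minimal sketch, again using the reported sensitivity and specificity but hypothetical prevalence values, shows how positive predictive value collapses as prevalence falls:

```python
# How positive predictive value (PPV) degrades at low prevalence.
# Sensitivity/specificity are the reported figures; the prevalence
# values are hypothetical, chosen only to illustrate the trend.
sensitivity = 1.0
specificity = 0.136

def ppv(prevalence: float) -> float:
    """Probability that a rule-positive patient actually has SAH."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

for p in (0.075, 0.02, 0.005):
    print(f"prevalence {p:.1%}: PPV {ppv(p):.1%}")
```

In a population with one-tenth the prevalence of the derivation cohorts, nearly every rule-positive patient would be a false positive, which is the "many more evaluations without detectable advantage" scenario in concrete terms.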
A rule such as this is a prime example of why all decision aids should be tested in practice against physician judgment before their widespread use is encouraged. Given the past history of underwhelming performance of decision aids in direct comparison, this and countless other substitutions for clinician judgment should be viewed with skepticism rather than idolatry.
This should not suggest that decision aids can’t inform clinical judgment prior to formal testing, only that their limitations ought to be considered at the time of use. Decision aids are derived and tested in unavoidably limited populations, their outcomes are measured against flawed or incomplete gold standards, and the prioritization and weighting of elements in the statistical analysis may have profound effects on the final model. Then, even for those ultimately tested against physician judgment, the same generalizability considerations persist, along with the confounding questions of practice culture and environment and similarity to the clinicians involved.
The future of digital cognitive enhancement is bright, and computers may yet replace substantial portions of clinical decision making—but not today!
References
- Schriger DL, Elder JW, Cooper RJ. Structured clinical decision aids are seldom compared with subjective physician judgment, and are seldom superior. Ann Emerg Med. 2017;70(3):338-344.e3.
- Perry JJ, Sivilotti MLA, Sutherland J, et al. Validation of the Ottawa Subarachnoid Hemorrhage Rule in patients with acute headache. CMAJ. 2017;189(45):E1379-E1385.
- Perry JJ, Stiell IG, Sivilotti ML, et al. Clinical decision rules to rule out subarachnoid hemorrhage for acute headache. JAMA. 2013;310(12):1248-1255.