How Clinical Decision Support AI Can Dramatically Improve Patient Safety
Clinical decision support AI is poised to radically improve patient safety. But that potential hinges on the use of high-quality data. This article explores the key data challenges that must be addressed to fully leverage AI in clinical decision support and minimize harm.
Artificial intelligence (AI) holds plenty of promise for patient safety. But there’s a catch: To successfully improve patient safety, clinical decision support AI needs to be developed with high-quality, reliable data. If subpar data is used, AI models will not result in the desired patient safety improvements – and, worse yet, could cause significant harm.
Indeed, clinical decision support AI and other AI applications developed with unreliable data could lead to large numbers of patient injuries, according to a study published in Nature Medicine. Conversely, if an individual provider makes a decision based on poor data, the harm is likely to be much more contained. To make matters worse, when using clinical decision support AI, clinicians do not necessarily have the training to identify underlying glitches such as data bias, overfitting, or other software errors that might lead to less-than-optimal patient care. For example, such flaws could result in an incorrect medication dosage recommendation.
How Clinical Decision Support AI Can Impact Patient Safety
Clinical decision support AI and other AI models are poised to have a significant positive impact on patient safety. According to a literature review published in JMIR Medical Informatics and a study published in the American Journal of Infection Control, when developed and implemented correctly, AI in clinical decision support can enhance patient safety by improving:
- Error detection
- Patient stratification
- Drug management
- Diagnostic accuracy
- Infection control initiatives
Such improvements can elevate patient safety efforts in a variety of domains such as:
- Healthcare-associated infections
- Adverse drug events
- Venous thromboembolism
- Surgical complications
- Pressure ulcers
- Falls
- Decompensation
- Diagnostic errors
What’s more, AI can play a role in improving adherence to existing safety protocols. Consider the following: When a machine learning (ML) algorithm was developed to provide real-time hand hygiene alerts based on data from multiple types of sensors, compliance with best practices rose from 54% to 100%, according to a study in the Journal of Hospital Infection Control.
Exploring Data Challenges with Clinical Decision Support AI
While the possibilities are promising, the application of AI and ML to improve patient safety is an emerging field, and many algorithms have not yet been externally validated or tested prospectively. Indeed, algorithms may be limited in generalizability, and performance could be affected by the clinical context in which the solution is implemented, according to an article in Digital Medicine.
Perhaps most importantly, to achieve optimal results, AI and ML algorithms designed to reduce medical errors and improve patient safety should be developed using large databases that contain accurate information on errors.
While access to voluminous data is key, the potential of AI for clinical decision support also hinges on data quality, bias mitigation, and data privacy and security. For example, if an AI model is developed with data that does not represent certain groups, the results of care delivered with the assistance of clinical decision support AI will not be equitable, according to an original research article in Health Policy and Technology. In fact, a systematic review by the Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center found that algorithms developed on subpar data can exacerbate racial and ethnic disparities, while those developed on high-quality, inclusive data can potentially reduce disparities.
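As a deliberately simplified illustration, the Python sketch below compares each patient group's share of a training dataset with its share of the population the model is intended to serve. The group labels, column name, and reference proportions are assumptions made for illustration, not values from the studies cited above.

```python
import pandas as pd

# Illustrative reference shares for the population the model is meant to serve
REFERENCE_SHARES = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

def representation_gaps(train_df: pd.DataFrame, group_col: str = "patient_group") -> pd.Series:
    """Compare each group's share of the training data with its share of the
    served population; large negative gaps flag under-represented groups."""
    observed = train_df[group_col].value_counts(normalize=True)
    reference = pd.Series(REFERENCE_SHARES)
    return observed.reindex(reference.index, fill_value=0.0) - reference
```

Groups with large negative gaps would warrant additional data collection or reweighting before the model is trained.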
Data privacy and ethical use are additional challenges associated with the development and implementation of clinical decision support AI, according to the World Economic Forum.
How to Ensure Data Quality for Clinical Decision Support AI
To achieve the best clinical care and patient safety results, the quality of data needs to be a top concern as AI is developed and implemented.
To address quality concerns, Oliver Haase, a professor at the University of Applied Sciences in Konstanz, Germany, recommends the use of a data quality plan that includes the following components (a brief sketch of how such a plan might be codified follows the list):
- Key quality metrics for the data
- A standard procedure for adding new data
- The process for future, consistent data cleaning
- A process for continuous data quality monitoring
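As a rough illustration, the Python sketch below computes a handful of hypothetical quality metrics (completeness, plausibility of values, uniqueness of records) and flags any that fall below plan-defined thresholds. The column names, clinical ranges, and thresholds are illustrative assumptions, not recommendations from Haase.

```python
import pandas as pd

# Hypothetical quality thresholds from the data quality plan (illustrative only)
QUALITY_THRESHOLDS = {
    "completeness": 0.98,  # minimum share of non-missing values per column
    "validity": 0.99,      # minimum share of values inside plausible ranges
    "uniqueness": 1.00,    # no duplicate encounter records
}

# Assumed clinically plausible ranges for a few vital-sign columns
VALID_RANGES = {
    "heart_rate": (20, 300),        # beats per minute
    "systolic_bp": (40, 300),       # mmHg
    "temperature_c": (30.0, 45.0),  # degrees Celsius
}

def quality_metrics(df: pd.DataFrame) -> dict:
    """Compute the key quality metrics defined in the data quality plan."""
    completeness = 1.0 - df[list(VALID_RANGES)].isna().mean().max()
    validity = min(
        df[col].dropna().between(lo, hi).mean()
        for col, (lo, hi) in VALID_RANGES.items()
    )
    uniqueness = 1.0 - df.duplicated(subset=["encounter_id"]).mean()
    return {"completeness": completeness, "validity": validity, "uniqueness": uniqueness}

def monitor(df: pd.DataFrame) -> list[str]:
    """Continuous monitoring step: report metrics that fall below their thresholds."""
    metrics = quality_metrics(df)
    return [
        f"{name}: {value:.3f} is below threshold {QUALITY_THRESHOLDS[name]:.2f}"
        for name, value in metrics.items()
        if value < QUALITY_THRESHOLDS[name]
    ]
```

Running a check like this on every new batch of data operationalizes the last two components of the plan: consistent cleaning and continuous monitoring.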
Likewise, teams should create a data manipulation plan that describes each data processing step and implement that plan as reusable code.
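One straightforward way to make such a plan executable, sketched below, is to express each documented processing step as a small named function and chain the functions into a single pipeline, so the same transformations run identically during training, validation, and deployment. The specific steps and column names are illustrative assumptions.

```python
import pandas as pd

# Each documented step in the data manipulation plan becomes a named function.

def convert_units(df: pd.DataFrame) -> pd.DataFrame:
    """Convert temperature from Fahrenheit to Celsius (assumed source unit)."""
    out = df.copy()
    out["temperature_c"] = (out["temperature_f"] - 32) * 5 / 9
    return out.drop(columns=["temperature_f"])

def drop_implausible(df: pd.DataFrame) -> pd.DataFrame:
    """Remove rows with physiologically implausible heart rates."""
    return df[df["heart_rate"].between(20, 300)]

def impute_missing(df: pd.DataFrame) -> pd.DataFrame:
    """Fill missing lab values with the column median (a simple illustrative choice)."""
    out = df.copy()
    out["creatinine"] = out["creatinine"].fillna(out["creatinine"].median())
    return out

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """The full data manipulation plan as one reusable, composable pipeline."""
    return df.pipe(convert_units).pipe(drop_implausible).pipe(impute_missing)
```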
In addition, it is important to ensure that clinical decision support AI model goals are in alignment with specific patient safety goals, such as identifying patient decompensation or improving infection control, according to a perspective article published by AHRQ. If the AI model is not precisely aligned with such patient safety goals, it might either miss critical signs of risk or generate false alarms.
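One concrete way a team might enforce that alignment is to tune the model's alert threshold to a stated safety goal, for example requiring a minimum sensitivity for detecting decompensation, rather than accepting a default cutoff. The sketch below shows one possible approach; the 0.95 sensitivity target and variable names are placeholders, not values drawn from the AHRQ article.

```python
import numpy as np
from sklearn.metrics import roc_curve

def threshold_for_sensitivity(y_true, y_score, target_sensitivity=0.95):
    """Pick the lowest-false-alarm operating point that still reaches the
    target sensitivity, trading some extra alerts for fewer missed events."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    idx = int(np.argmax(tpr >= target_sensitivity))  # first point meeting the target
    return thresholds[idx], fpr[idx]

# Example usage on a held-out validation set (names are placeholders):
# threshold, false_alarm_rate = threshold_for_sensitivity(
#     y_val, model.predict_proba(X_val)[:, 1]
# )
```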
Addressing Common Data Concerns
Researchers and developers also need to mitigate the effects of data bias. To do this, they should routinely analyze model metrics to detect bias, edit input variables, and explore the use of synthetic data, which involves creating artificial data that mimic real patient data but without the inherent biases, according to a review published by AHRQ.
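As one simple example of routine bias analysis, the sketch below computes a model's sensitivity separately for each patient group so that performance gaps can be surfaced and investigated. The column and group names are illustrative; in practice this would be only one of several recurring checks.

```python
import pandas as pd
from sklearn.metrics import recall_score

def recall_by_group(preds: pd.DataFrame, group_col: str = "race_ethnicity") -> pd.Series:
    """Sensitivity (recall) computed separately for each patient group.
    Wide gaps between groups are one routine signal of data or model bias.
    Column names ('y_true', 'y_pred', group_col) are illustrative."""
    return pd.Series({
        group: recall_score(g["y_true"], g["y_pred"], zero_division=0)
        for group, g in preds.groupby(group_col)
    })

# Example: flag groups whose recall trails the best-performing group by more
# than five percentage points.
# recalls = recall_by_group(validation_predictions)
# flagged = recalls[recalls < recalls.max() - 0.05]
```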
Moreover, data-sharing practices for training and deploying clinical decision support AI must prioritize HIPAA compliance, encryption, and transparency to maintain patient trust and safety.
Because healthcare data contains personal and private patient information, sharing such data for AI model training and research purposes must be carried out with utmost caution and adherence to strict privacy and security measures, according to AHRQ.
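In that spirit, the sketch below shows a minimal pseudonymization step that might run before any data leave the source system: direct identifiers are dropped and the patient ID is replaced with a salted hash so records can still be linked for model training. This is an illustrative fragment only, not a substitute for formal HIPAA de-identification or the encryption and governance controls described above.

```python
import hashlib
import pandas as pd

# Direct identifiers to drop before sharing (illustrative, not a complete
# HIPAA Safe Harbor list)
DIRECT_IDENTIFIERS = ["name", "address", "phone", "email", "ssn"]

def pseudonymize(df: pd.DataFrame, secret_salt: str) -> pd.DataFrame:
    """Drop direct identifiers and replace the patient ID with a salted hash,
    preserving linkability for training without exposing the original ID."""
    out = df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])
    out["patient_id"] = out["patient_id"].map(
        lambda pid: hashlib.sha256(f"{secret_salt}{pid}".encode()).hexdigest()
    )
    return out
```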
Why New Data Could Make Clinical Decision Support AI Even Better
With data available today—especially laboratory information, imaging, and continuous vital sign data—it should be possible to reduce the frequency of many types of harm. However, even when these data are available, they are often unstructured, undocumented, or disputed.
High-quality, large, annotated databases will prove especially valuable for minimizing patient harm in the future. New types of data offer further opportunities to improve predictions, the first step toward developing preventive interventions that improve safety. These include data from the growing array of sensing technologies as well as information supplied directly by patients, genomic sequencing data, and social media.
These types of data are becoming more accessible over time for research and to drive innovation in clinical decision support AI.