Enabledoc’s Intervention Risk Management (IRM) Practices for Predictive Decision Support Interventions (PDSIs)
The Office of the National Coordinator for Health Information Technology (“ASTP/ONC”), through its Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (“HTI-1”) Final Rule, requires certified health information technology to provide or connect to Predictive Decision Support Interventions (“PDSIs”). The HTI-1 Final Rule requires that each PDSI be subject to intervention risk management (“IRM”) practices, which include risk analysis, risk mitigation, and governance. Enabledoc has established internal policies and procedures to ensure compliance with HTI-1 and the National Institute of Standards and Technology (“NIST”) AI Risk Management Framework (AI RMF 1.0). Enabledoc’s IRM practices center on ONC’s FAVES principles: fair, appropriate, valid, effective, and safe. Additionally, Enabledoc ensures that our AI functionality improves efficiency, effectiveness, reliability, and productivity for the targeted user while remaining robust, secure, and private.
Enabledoc’s IRM practices begin with leadership governance: policies and procedures that must be followed to minimize risk to patients, providers, Enabledoc, and the broader healthcare ecosystem. Our multistage AI product development process includes a comprehensive risk assessment to identify and track issues involving ethics, bias, accuracy, data privacy, equity, operations, and regulatory requirements. Our generative AI models are trained to accurately map clinical information onto custom clinical templates and to generate only clinical content grounded in information provided by the patient, staff, or provider. This data curation aims to mitigate bias, restrict output to the information provided, and limit harmful content, producing a quality clinical note.

Human reviewers help identify and correct problematic content and biases during the training process, and they craft examples of responses that are factual, respectful, and helpful. When risks are identified, they are tracked, and strategies to control and mitigate them are implemented to prevent incorrect or inappropriate content from being displayed. Enabledoc uses reinforcement learning guided by provider feedback to refine responses: reviewers rank and assess responses, allowing the model to learn what constitutes a helpful, accurate, and contextually appropriate answer. Any high-priority risk is reviewed by executive management, subject matter experts, and legal counsel. In all cases, Enabledoc errs on the side of minimizing risk and harm.
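To make the feedback-ranking step concrete, the following is a minimal sketch of how reviewer rankings can drive reward learning in the pairwise (Bradley-Terry) style commonly used for reinforcement learning from human feedback. It assumes each candidate response is reduced to a few numeric review features; the feature set, scoring function, and synthetic data are illustrative assumptions for this sketch, not Enabledoc’s production training pipeline.

    # Minimal sketch: learn a reward model from reviewer-ranked response pairs.
    # Feature names, scoring function, and data are hypothetical illustrations.
    import math
    import random

    # Each candidate response is reduced to a small feature vector, e.g.
    # (factuality, template fit, respectful tone) as judged by reviewers.
    FEATURES = 3

    def score(weights, features):
        """Scalar reward: a linear model over review features (hypothetical)."""
        return sum(w * f for w, f in zip(weights, features))

    def update(weights, preferred, rejected, lr=0.1):
        """One pairwise-ranking (Bradley-Terry) gradient step: raise the
        reward of the reviewer-preferred response above the rejected one."""
        margin = score(weights, preferred) - score(weights, rejected)
        p_agree = 1.0 / (1.0 + math.exp(-margin))  # model agrees with reviewer?
        grad_scale = lr * (1.0 - p_agree)
        return [w + grad_scale * (p - r)
                for w, p, r in zip(weights, preferred, rejected)]

    if __name__ == "__main__":
        random.seed(0)
        weights = [0.0] * FEATURES
        for _ in range(200):
            # Synthetic ranked pair: reviewers prefer higher-quality features.
            a = [random.random() for _ in range(FEATURES)]
            b = [random.random() for _ in range(FEATURES)]
            preferred, rejected = (a, b) if sum(a) > sum(b) else (b, a)
            weights = update(weights, preferred, rejected)
        print("learned reward weights:", [round(w, 3) for w in weights])

In this sketch, repeated reviewer comparisons push the reward weights toward the qualities reviewers consistently prefer, which is the mechanism by which ranked feedback shapes what the model treats as a helpful, accurate, and appropriate answer.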
Enabledoc maintains continuous user feedback and correction processes to keep AI model reliability consistent with the FAVES principles. All issues are tracked in an incident management system, and AI models are periodically audited and updated. Product Management provides continuous monitoring to identify new or evolving risks, and the model is updated based on new feedback data, improving accuracy and reducing the likelihood of generating misleading information over time.
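The sketch below illustrates one way such a monitoring loop could be structured: incidents derived from user feedback are logged, high-severity items are escalated for executive review, and a periodic audit checks whether the rate of unresolved accuracy issues warrants a model update. The class names, severity levels, and audit threshold are hypothetical, offered only to make the cycle concrete; they do not describe Enabledoc’s internal system.

    # Minimal sketch of a feedback-driven incident log with periodic audit.
    # All names and thresholds are hypothetical illustrations.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from enum import Enum

    class Severity(Enum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3  # high-priority: routed to executive/legal review

    @dataclass
    class Incident:
        description: str
        category: str          # e.g. "bias", "accuracy", "privacy"
        severity: Severity
        opened: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))
        resolved: bool = False

    class IncidentLog:
        """Tracks feedback-driven incidents and flags when a periodic
        audit should trigger a model review (hypothetical threshold)."""
        AUDIT_THRESHOLD = 0.05  # >5% accuracy incidents among recent notes

        def __init__(self):
            self.incidents: list[Incident] = []

        def report(self, incident: Incident) -> None:
            self.incidents.append(incident)
            if incident.severity is Severity.HIGH:
                print(f"ESCALATE to executive review: {incident.description}")

        def audit(self, notes_generated: int) -> bool:
            accuracy_issues = sum(
                1 for i in self.incidents
                if i.category == "accuracy" and not i.resolved)
            rate = accuracy_issues / max(notes_generated, 1)
            return rate > self.AUDIT_THRESHOLD  # True => schedule model update

    log = IncidentLog()
    log.report(Incident("Note attributed a symptom not stated by the patient",
                        "accuracy", Severity.HIGH))
    print("model review needed:", log.audit(notes_generated=10))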