Presentation
AI in Neurocritical Care: A Checklist for Trustworthiness
Description
Increasingly, clinicians will face the decision of whether to adopt an artificial intelligence (AI) or machine learning (ML) tool in routine neurocritical care practice. While AI/ML tools hold immense potential, trustworthiness is a primary concern, and that concern is magnified when caring for critically ill, vulnerable patients. Trust extends beyond mere functionality; it encompasses alignment with ethical considerations, transparency, and the avoidance of harmful biases.
This session, targeted at providers, nurses, and pharmacists, translates technical criteria into an approachable checklist for evaluating the trustworthiness of an AI/ML tool. Focusing on neurocritical care scenarios, we will first review 1) robustness, 2) interpretability, 3) fairness, and 4) evaluation metrics. Taking a deeper dive into interpretability, we discuss cutting-edge approaches for improving explainability.
Robustness refers to consistent performance, including in unseen scenarios. Interpretability is the ability to relate a result to an explainable rationale. Fairness involves quantifying and avoiding bias, both human and algorithmic. Evaluation techniques are approaches for learning how a tool performs in both common scenarios and outliers. We then discuss more innovative approaches that enable “counterfactual” reasoning to answer the “why” question, and “attribution” approaches that quantify the “importance” of individual features.
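For technically inclined readers, the minimal sketch below (not drawn from the session materials) illustrates how attribution and counterfactual reasoning might be probed on a toy model. The data, feature names, and model are hypothetical stand-ins for illustration only.

```python
# A minimal sketch, not part of the session materials: "attribution" via
# permutation importance and a simple "counterfactual" probe on a toy model.
# All data and feature names below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a neurocritical care cohort: three numeric features.
feature_names = ["age", "gcs", "lactate"]  # hypothetical features
X = rng.normal(size=(500, 3))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Attribution: how much does performance degrade when each feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")

# Counterfactual probe: would the prediction flip if one input changed?
patient = X_test[0].copy()
baseline = model.predict(patient.reshape(1, -1))[0]
patient[1] += 2.0  # hypothetically raise the GCS-like feature
flipped = model.predict(patient.reshape(1, -1))[0] != baseline
print("prediction flips:", flipped)
```

Permutation importance is one of several attribution techniques (SHAP and integrated gradients are others); it is shown here because it is model-agnostic and easy to reason about at the bedside level.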
This will establish a roadmap for both technical and non-technical audiences, guiding them through step-by-step assessment of AI/ML tools when reading the literature, discussing clinical implementation, and educating patients and colleagues. By promoting ethical, trustworthy, and transparent use of AI/ML tools, clinicians will be empowered to enhance outcomes while setting realistic expectations about each tool’s limitations.
Event Type
Breakout Session
Time
Thursday, October 17th, 9:15am - 9:35am PDT
Location
Harbor Ballroom A
Delivery, Quality and Safety
APP Practice
Diversity, Equity, and Inclusion
Global Neurocritical Care
Informatics
Patient Education
Provider Education Topics (e.g., fellowship training, competency assessment, etc.)
Introductory