Patients, Patients, Patients, Patients!

Exploring the Ethical Dimensions of AI in Patient Care

Fellow Healthcare Champions,

Are you overwhelmed by the fluff and hype around AI and unsure how to find meaningful information? We get it. As busy clinicians ourselves, we created this newsletter, "AI Grand Rounds," to provide clinically relevant AI information.

No matter who you are—a healthcare provider, inventor, investor, or curious reader—we PROMISE one thing: you will always find information that is relevant, meaningful, and balanced.

Join us on our journey to share clinically meaningful and relevant knowledge on healthcare AI.

Sincerely,

Your fellow physicians!


🚨 Pulse of Innovation 🚨

Breaking news in healthcare AI

From the AI Act to the AI Pact

The EU's AI regulatory framework balances innovation with consumer protection, ensuring AI systems are safe, trustworthy, and respect fundamental rights.

AI Act

  • Risk Classification (summarized in the sketch after this list):

    • Unacceptable Risk: Prohibited uses like social scoring by governments.

    • High Risk: Strict regulations for AI in critical sectors like healthcare and law enforcement.

    • Limited Risk: Transparency obligations (e.g., chatbots must disclose that the user is interacting with an AI, not a human).

    • Minimal Risk: Voluntary codes of conduct.

  • Trustworthy AI Requirements:

    • Focus on human oversight, robustness, privacy, transparency, non-discrimination, societal well-being, and accountability.

  • Governance and Implementation: Overseen by the European Artificial Intelligence Board.

  • Funding: €1 billion/year from Horizon Europe and Digital Europe.
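For readers who think in code, the tiered structure above can be captured as a small lookup table. The following is a deliberately simplified, hypothetical Python sketch; the example uses and obligation summaries are our own illustrations drawn from the bullets above, not legal text:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited uses, e.g., government social scoring
    HIGH = "high"                  # strict regulation, e.g., healthcare, law enforcement
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # voluntary codes of conduct

# Hypothetical triage examples; an illustration, not a legal determination.
EXAMPLE_USES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "ICU sepsis-prediction model": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: do not deploy.",
    RiskTier.HIGH: "Strict requirements: human oversight, robustness, transparency.",
    RiskTier.LIMITED: "Disclose that users are interacting with an AI, not a human.",
    RiskTier.MINIMAL: "Voluntary code of conduct.",
}

for use, tier in EXAMPLE_USES.items():
    print(f"{use}: {tier.value} risk -> {OBLIGATIONS[tier]}")
```

The takeaway for clinicians and builders: most clinical decision-support AI will land in the high-risk tier, carrying the strictest obligations.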

AI Pact

The AI Pact is a voluntary initiative by the European Commission to encourage early adoption of the measures outlined in the AI Act. By fostering collaboration among industry players, it aims to build trust and ensure compliance with forthcoming regulations.

  • Benefits for Participants:

    • Gain a first-mover advantage by preparing for compliance ahead of the legal deadlines.

    • Enhance visibility and credibility by demonstrating a commitment to trustworthy AI.

    • Share best practices and collaborate with other industry leaders.

  • The AI Pact has two pillars:

    • Ecosystem of Excellence

    • Ecosystem of Trust

🧑🏼‍🔬 Bench to Bedside 👨🏽‍🔬

Developments in healthcare AI research and innovations

Eye Tracking Insights into Physician Behavior with Explainable AI (XAI)

Objective: To use eye-tracking technology to study ICU physicians' interaction with explainable AI (XAI) in clinical decision-making.

Methods: Nineteen ICU physicians (13 male, 6 female) participated in the study, conducted in a clinical simulation suite with patient scenarios and both safe and unsafe AI recommendations. Physicians made prescription decisions before and after the AI recommendation was revealed, with four types of XAI presented simultaneously. Eye-tracking glasses measured visual attention through gaze time, fixation, and blink-rate metrics (a toy analysis sketch follows the results below).

  • Results

    1. Attention: Unsafe AI recommendations attracted significantly more gaze fixations (mean 962) than safe ones (mean 704), p = 0.002. There was no significant difference in attention to the four types of XAI during unsafe scenarios (mean 94 fixations) compared to safe scenarios (mean 75 fixations).

    2. Self-reports: Self-reported usefulness of XAI (mean rating 3.2 on a 0–4 scale) did not correlate with actual attention metrics.

    3. Clinical Practice Variation: There was no strong relationship between eye-tracking metrics and clinical practice variation. Mean practice variation was 217 mL/h for fluid prescriptions and 0.04 mcg/kg/min for vasopressor prescriptions.

    4. Fixation and Blink Rate: Mean fixation duration was lowest on the patient mannequin (135 ms), and physicians' mean blink rate was highest while viewing the AI screen (19.9 blinks per minute).

Pros and Cons: The study demonstrated increased attention to unsafe AI recommendations and the feasibility of using eye-tracking in high-fidelity simulations, providing objective behavioral data rather than relying on self-reports. However, the XAI's influence on behavior in unsafe scenarios was limited, and there was a discrepancy between self-reports and actual attention. The small sample size limited statistical power, and the simulation lacked real-world dynamics. In short, unsafe AI recommendations attracted more attention, but XAI did not significantly help physicians avoid unsafe decisions; more robust XAI systems and real-world research are needed.
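For readers curious how the headline comparison in Result 1 might be tested, here is a minimal Python sketch assuming per-physician fixation counts and a paired t-test. The numbers below are invented placeholders centered near the reported means (962 vs. 704); the study's actual data and statistical method may differ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_physicians = 19  # study sample size

# Hypothetical fixation counts per physician for unsafe vs. safe scenarios,
# simulated around the reported means; placeholders only, not study data.
unsafe = rng.normal(962, 150, n_physicians)
safe = rng.normal(704, 150, n_physicians)

# Paired comparison: each physician saw both scenario types.
t, p = stats.ttest_rel(unsafe, safe)
print(f"mean unsafe = {unsafe.mean():.0f}, mean safe = {safe.mean():.0f}")
print(f"paired t = {t:.2f}, p = {p:.4f}")
```

A paired design like this is what gives a small sample of 19 physicians enough power to detect the attention difference, even while other comparisons in the study stayed non-significant.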

🧑🏽‍⚕️ AI in Clinic 🏥

AI tools and applications in clinical practice

Credo AI: Tool for AI Governance

What Credo AI Does

Credo AI is a leader in the field of Responsible AI Governance, offering a comprehensive platform designed to streamline AI governance, risk management, and compliance for enterprises. The company's solutions help organizations adopt AI responsibly by ensuring the integrity, fairness, and compliance of AI/ML applications. Key services include:

  • AI Governance Platform: Provides tools for tracking, prioritizing, and controlling AI projects to ensure they are profitable, compliant, and safe.

  • AI Audit Services: Ensures the integrity and fairness of AI systems through detailed audits.

  • Educational Workshops: Offers expert-led workshops to empower teams to implement and scale AI governance within their organizations.

Similar Companies

Several companies offer similar AI governance, risk management, and compliance solutions. Notable competitors include:

  • Holistic AI: Specializes in AI governance, risk, and compliance across various sectors, including healthcare, financial services, and technology.

  • Armilla AI: Provides a governance platform to drive ethical decisions with transparency, helping clients validate and monitor AI models.

  • Arize: Focuses on AI observability and large language model (LLM) evaluation, offering tools for monitoring and troubleshooting machine learning models.

  • Fiddler AI: Offers a model performance management platform that includes model monitoring, explainable AI, and analytics.

🤖 Patient First, AI Second 🤖

Ethical and Regulatory Landscape of Healthcare AI

Is this AI application trustworthy? Dioptra can answer

The biggest question about AI applications in every field, and in healthcare in particular, is their trustworthiness. Trustworthy AI must be valid and reliable; safe, secure, and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed. Dioptra helps answer the trustworthiness question by surfacing AI risks and pathways to mitigate them.

The National Institute of Standards and Technology (NIST) developed Dioptra with the following primary use cases:

  • Model Testing:

    • 1st party - Assess AI models throughout the development lifecycle

    • 2nd party - Assess AI models during acquisition or in an evaluation lab environment

    • 3rd party - Assess AI models during auditing or compliance activities

  • Research: Aid trustworthy AI researchers in tracking experiments

  • Evaluations and Challenges: Provide a common platform and resources for participants

  • Red-Teaming: Expose models and resources to a red team in a controlled environment

Dioptra groups users into “Newcomer,” “Analyst,” “Researcher,” and “Developer” levels based on their expertise. It tests whether applications are:

  • Reproducible: Dioptra automatically creates snapshots of resources so experiments can be reproduced and validated.

  • Traceable: The entire history of experiments and their inputs is tracked

  • Extensible: Support for expanding functionality and importing existing Python packages via a plugin system

  • Interoperable: A type system promotes interoperability between plugins

  • Modular: New experiments can be composed from modular components in a simple YAML file (see the sketch after this list)

  • Secure: Dioptra provides user authentication, with access controls coming soon

  • Interactive: Users can interact with Dioptra via an intuitive web interface

  • Shareable and Reusable: Dioptra can be deployed in a multi-tenant environment so users can share and reuse components
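To give a feel for the “Extensible” and “Modular” points, here is a toy plugin registry in Python. This is a hypothetical sketch of the general pattern, not Dioptra's actual API; the task names, functions, and the list standing in for the YAML file are our own inventions:

```python
from typing import Callable, Dict

# Registry of reusable components, populated via a decorator.
REGISTRY: Dict[str, Callable] = {}

def task(name: str):
    """Register a function as a named, reusable experiment component."""
    def wrap(fn: Callable) -> Callable:
        REGISTRY[name] = fn
        return fn
    return wrap

@task("load_model")
def load_model(path: str):
    print(f"loading model from {path}")
    return {"model": path}

@task("attack")
def attack(state, method: str):
    print(f"running {method} attack against {state['model']}")
    return state

# Stand-in for the "simple YAML file": an ordered list of named steps.
experiment = [
    {"task": "load_model", "params": {"path": "model.onnx"}},
    {"task": "attack", "params": {"method": "fgsm"}},
]

state = None
for step in experiment:
    fn = REGISTRY[step["task"]]
    state = fn(**step["params"]) if state is None else fn(state, **step["params"])
```

A real Dioptra deployment layers the features listed above, such as resource snapshots, experiment tracking, and the web interface, on top of this kind of declarative composition.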

This could be an excellent tool for assessing the utility and security of the algorithms that produce the final actionable output.

Disclaimer: This newsletter contains opinions and speculations and is based solely on public information. It should not be considered medical, business, or investment advice. This newsletter's banner and other images are created for illustrative purposes only. All brand names, logos, and trademarks are the property of their respective owners. At the time of publication of this newsletter, the author has no business relationships, affiliations, or conflicts of interest with any of the companies mentioned except as noted. OPINIONS ARE PERSONAL AND NOT THOSE OF ANY AFFILIATED ORGANIZATIONS!
