No-BS news: good or bad, we talk about it!

AI companies fail too

Fellow Healthcare Champions,

Curious about healthcare AI but not sure where to start? We get it. As busy clinicians ourselves, we created our newsletter, "AI Grand Rounds," to share clinically meaningful and relevant knowledge on healthcare AI. Thanks for joining us on this journey!

Sincerely,

Your two fellow physicians!

Table of Contents

🚨 Pulse of Innovation 🚨

Breaking news in healthcare AI

Med-Gemini: A Groundbreaking Family of AI Models Revolutionizing Medical Diagnosis and Clinical Reasoning

  • Precision and Scope: Med-Gemini models achieved a diagnostic accuracy rate of 93% in medical image analysis and improved dialogue systems' understanding of patient interactions, surpassing previous AI models by a significant margin.

  • Innovative Features: The models employ techniques such as uncertainty estimation, which has reduced diagnostic error rates by up to 30%, showcasing their robustness in handling complex medical data.

  • Deployment Challenges: The Med-Gemini models face significant challenges in clinical implementation, including ensuring unbiased data interpretation and adherence to stringent healthcare privacy laws. The paper discusses a 15% improvement in mitigating bias through enhanced data curation techniques, emphasizing the need for rigorous validation to maintain patient safety and confidentiality.


🧑🏼‍🔬 Bench to Bedside👨🏽‍🔬

Developments in healthcare AI research and innovations

AI-powered CRISPR Gene Editors: On the Way to Precision

  • CRISPR technology can potentially address many health conditions in which genes, or the proteins they encode, are an underlying cause. However, current CRISPR gene editors lack precision and often behave unpredictably in target cells.

  • New research demonstrates the use of a protein large language model to identify a precision CRISPR editor with more predictable behavior in target human cells, potentially reducing unpredictable side effects.

👩‍⚕️AI in the Clinic 🏥

Real-world, practical uses of healthcare AI in the clinic

Atrial Fibrillation: A frontier to be conquered by AI

  • Atrial fibrillation is challenging to detect early, even with 2-week event monitoring. However, two platforms could help overcome this challenge.

  • The first is a deep learning model that improves ECG-based diagnosis from single-lead, at-home ECG recordings, with an accuracy of 80%. A recent Nature article details the research and its application.

  • The second is the Apple Watch, which recently received FDA approval for capturing historical atrial fibrillation data. These data can be used in clinical research to determine risk factors and study endpoints.

  • Combining these two platforms—ECG data from the Apple Watch and deep learning-based diagnosis—could enhance the early detection of atrial fibrillation.
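The deep learning model above learns its own representation from raw single-lead waveforms, but the classic hand-crafted signal it builds on is the "irregularly irregular" spacing of R peaks in atrial fibrillation. As a rough intuition only, here is a minimal sketch that flags high RR-interval variability; the function names and the 0.10 threshold are our own illustrative assumptions, not from the cited work:

```python
from statistics import mean, pstdev

def rr_irregularity(r_peak_times):
    """Coefficient of variation (std / mean) of the RR intervals
    between successive R peaks, given peak times in seconds."""
    rr = [b - a for a, b in zip(r_peak_times, r_peak_times[1:])]
    return pstdev(rr) / mean(rr)

def classify_rhythm(r_peak_times, threshold=0.10):
    """Flag a strip as possible AF when RR variability exceeds an
    illustrative threshold; real models learn far richer features."""
    if rr_irregularity(r_peak_times) > threshold:
        return "possible AF"
    return "likely sinus"

# Regular rhythm: an R peak every 0.8 s (75 bpm)
sinus = [0.0, 0.8, 1.6, 2.4, 3.2, 4.0]
# Irregularly irregular RR intervals, the hallmark of AF
af = [0.0, 0.55, 1.5, 1.9, 2.8, 3.1]
```

A toy rule like this is nowhere near the 80% accuracy reported for the published model, which is exactly why learned representations matter for noisy at-home recordings.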

🩺 Start-Up Stethoscope 💵 

Trending startups and technologies impacting clinical practice

All That Glitters Is Not Gold: Babylon AI

  • In this section, we have covered several start-ups that demonstrated product potential, scalability, and market reach, and raised millions in funding.

  • But things do not always go well. Here we look at Babylon AI, a health AI startup whose primary product was an AI-powered chatbot. It rose to fame, raised substantial funding, and secured a contract with the UK's NHS, yet failed rapidly.

    • Inaccurate diagnoses: There were instances where Babylon's AI incorrectly diagnosed patients with severe conditions, such as heart attacks or deep vein thrombosis, when the symptoms were actually indicative of less severe conditions.

    • Lack of robust feedback system: Babylon's AI did not have a robust feedback system in place to identify adverse outcomes or to improve its performance over time.

    • Misleading advertising: Babylon was accused of making exaggerated claims about the capabilities of its AI technology, which led to concerns about the company's corporate governance and transparency.

    • Data breach: In 2020, Babylon admitted to a data breach in which users were able to access video consultations of other patients.

    • Safety concerns: The UK's medical device regulator, the MHRA, expressed concerns about the safety of Babylon's chatbot, particularly regarding its ability to identify signs of serious illness.

    • Lack of transparency: Babylon faced criticism for its handling of safety concerns, including its attempts to suppress a critical report by the Care Quality Commission (CQC) and its aggressive response to critics.

🤖Patients First, AI Second🤖

Ethical and regulatory landscape of healthcare AI

Rishi Sunak promised to make AI safe. Big Tech’s not playing ball

  • Recall that in November 2023, leading AI labs committed to sharing their models with the UK AI Safety Institute (AISI) for testing before deployment. OpenAI, Anthropic, and Meta have failed to do so; only Google DeepMind has given the UK AISI pre-deployment access.

  • Anthropic released Claude 3 without any window for pre-release testing by the UK AISI. When asked for comment, Anthropic co-founder Jack Clark said, “Pre-deployment testing is a nice idea but very difficult to implement.”

  • These findings are concerning, and we should think twice before taking AI companies at their word. As clinicians, we are optimistic about AI, but we echo the sentiment that “AI is ALSO the biggest threat to healthcare.”

Disclaimer: This newsletter contains opinions and speculations and is based solely on public information. It should not be considered medical, business, or investment advice. The banner and other images included in this newsletter are created for illustrative purposes only. All brand names, logos, and trademarks are the property of their respective owners. At the time of publication of this newsletter, the author has no business relationships, affiliations, or conflicts of interest with any of the companies mentioned except as noted. ** OPINIONS ARE PERSONAL AND NOT THOSE OF ANY AFFILIATED ORGANIZATIONS!
