Agentic AI - New Era in Artificial Intelligence
What "WE" think about AI use in Healthcare
Fellow Healthcare Champions,
Are you overwhelmed by the fluff and hype around AI, unsure how to find information that actually matters? We get it. As busy clinicians ourselves, we created our newsletter, "AI Grand Rounds," to deliver clinically relevant AI information.
No matter who you are—a healthcare provider, inventor, investor, or curious reader—we PROMISE one thing: you will always find information that is relevant, meaningful, and balanced.
Join us on our journey to share clinically meaningful, relevant knowledge on healthcare AI.
Your fellow Physicians,
🚨 Pulse of Innovation 🚨
Breaking news in healthcare AI
Google enters an era of Agentic AI
Google is stepping into the era of AI agents with the launch of Gemini 2.0, an upgraded AI chatbot, and a limited rollout of Project Astra, a computer-vision-powered AI agent.
Gemini 2.0:
Better reasoning
Faster responses
More efficient at handling questions, coding and math
Project Astra:
A visual system that can identify objects, help you navigate the world, and tell you where you left your glasses.
Improved dialogue and multi-language support.
Integrates with Google Lens and Maps.
Remembers up to 10 minutes of interaction
Project Mariner:
An experimental new Chrome extension that can quite literally use your web browser for you.
Great for analyzing text, images, and graphs.
Still in testing, with issues in accuracy and speed.
The video below is worth 2 minutes of your time.
Now, how do we use this in healthcare? We have joined the trusted tester group. If you would like us to try a particular project, please send us an email; we would be glad to check it out and share our experience here!
Email us at [email protected] or just drop any of our editors a message on LinkedIn.
🧑🏼🔬 Bench to Bedside 👨🏽🔬
Developments in healthcare AI research and innovations
Sepsis ImmunoScore – Transforming Early Detection with AI
From Innovation to Implementation
Sepsis, a leading cause of mortality in hospitalized patients, often presents with diverse clinical manifestations, making early detection a persistent challenge.
Enter the Sepsis ImmunoScore, the first FDA-authorized AI diagnostic tool for sepsis. Developed with cutting-edge machine learning (ML), it bridges laboratory innovation and clinical utility by stratifying patient risk and aiding timely interventions.
Methodology
This multicenter study evaluated 3,457 adult patients suspected of infection across five U.S. hospitals (2017–2022). Using a supervised random forest ML algorithm, the Sepsis ImmunoScore incorporated 22 parameters, including vital signs, routine lab results, and biomarkers (CRP, PCT). Participants were divided into derivation (n=2366), internal validation (n=393), and external validation (n=698) cohorts.
The tool’s primary goal was to detect sepsis within 24 hours based on Sepsis-3 criteria. Secondary outcomes included in-hospital mortality, ICU admission, vasopressor use, and mechanical ventilation.
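The methodology above can be sketched in code. The following is a minimal illustration only, not the actual ImmunoScore model: the data are synthetic, and the hyperparameters, split sizes, and label rule are invented stand-ins; only the cohort size (3,457) and feature count (22) come from the study.

```python
# Sketch: a supervised random-forest risk model in the spirit of the
# study's design. All data and parameters below are synthetic/invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_features = 3457, 22  # cohort size and parameter count from the paper
X = rng.normal(size=(n_patients, n_features))
# Synthetic outcome (sepsis within 24 h), loosely driven by two "biomarker" columns
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n_patients) > 1.2).astype(int)

# Derivation vs. internal-validation split (external validation omitted here)
X_der, X_val, y_der, y_val = train_test_split(X, y, test_size=0.15, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_der, y_der)

risk = model.predict_proba(X_val)[:, 1]  # continuous risk score per patient
print(f"validation AUC: {roc_auc_score(y_val, risk):.2f}")
```

On synthetic data like this the AUC is meaningless clinically; the point is simply the workflow the paper describes: train on a derivation cohort, then report discrimination (AUC) on held-out validation cohorts.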
Key Results
Diagnostic Power: The tool demonstrated an AUC of 0.85 in the derivation cohort, with slightly lower but robust performance in validation cohorts (0.80–0.81).
Risk Stratification: Patients were categorized into four risk groups; the two extremes illustrate the spread:
• Low Risk: 3% sepsis prevalence, 0% mortality.
• Very High Risk: 69.7% prevalence, 18.2% mortality.
Prediction of Critical Outcomes: Risk levels correlated with higher ICU transfers, length of stay, vasopressor use, and mortality. For example, in the external validation cohort, ICU transfers rose from 4.7% (low risk) to 54.6% (very high risk).
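Mapping a continuous risk score into ordered bands, as the tool does, is straightforward to sketch. Note that the cutoff values below are hypothetical; the actual ImmunoScore thresholds are not given in this summary.

```python
# Sketch: binning a continuous risk score into four ordered risk bands.
# The cutoffs are invented for illustration only.
import bisect

CUTOFFS = [0.25, 0.50, 0.75]  # hypothetical band boundaries
BANDS = ["low", "medium", "high", "very high"]

def risk_band(score: float) -> str:
    """Return the ordinal risk band for a score in [0, 1]."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("risk score must lie in [0, 1]")
    # bisect_right finds how many cutoffs the score exceeds (or equals)
    return BANDS[bisect.bisect_right(CUTOFFS, score)]

print(risk_band(0.10))  # low
print(risk_band(0.90))  # very high
```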
Clinical Translation
Designed for seamless integration with electronic medical records (EMRs), the Sepsis ImmunoScore delivers real-time insights to clinicians. Its Bayesian framework allows dynamic risk stratification, empowering physicians to prioritize interventions such as antibiotics or ICU transfers.
Unlike passive sepsis-alert systems prone to false alarms, this tool requires an explicit clinical suspicion (e.g., a blood culture order), minimizing alert fatigue.
Limitations
Population Scope: The study was confined to five U.S. hospitals, which may limit generalizability.
Observational Design: Clinical impact on outcomes remains untested.
Data Gaps: Approximately 7.2% of patients were excluded due to incomplete inputs, highlighting the importance of consistent data availability.
From Bench to Bedside
The Sepsis ImmunoScore exemplifies how sophisticated AI can enhance bedside decision-making, transforming early sepsis detection. While still in its early stages, its robust design and clinical relevance mark it as a promising adjunct in the fight against sepsis—a testament to the power of AI in bridging laboratory insights with real-world care.
🧑🏽⚕️ AI in Clinic 🏥
Developments in healthcare AI research and innovations
Decoding AI in Clinics: What to Believe When Studies Disagree
AI-powered tools like Nuance’s Dragon Ambient eXperience (DAX) promise to transform clinical workflows, but two recent studies paint different pictures. One claims limited efficiency gains, while the other highlights improved clinician experiences. So, how do you figure out what to trust?
What the Studies Found
The NEJM AI Study:
• Scope: Tracked clinicians for 180 days to see if DAX reduced time in the EHR or improved financial metrics.
• Findings: No major efficiency improvements. However, some groups, like family medicine clinicians, saw small time savings.
• Takeaway: DAX needs better adoption strategies to deliver on its promises.
The JAMA Network Open Study:
• Scope: Surveyed clinicians 5 weeks after they started using DAX to measure satisfaction and perceived impact.
• Findings: Many users reported less time spent documenting and lower frustration with EHRs.
• Takeaway: Positive effects were more about user experience than measurable efficiency gains.
How to Choose Which to Trust
What Are You Looking For?
If you’re focused on big-picture efficiency or cost savings, the NEJM study is more relevant.
For understanding clinician satisfaction and burnout, JAMA’s insights hit closer to home.
How Rigorous Was the Research?
The NEJM study’s long-term data and statistical depth make it more reliable for system-wide decisions.
JAMA’s survey is shorter-term but still valuable for gauging how users feel about DAX.
What Makes a Study Strong?
The best studies are long-term, use large samples, and measure outcomes that matter to your goals. They should also be transparent about limitations and avoid bias.
The Bottom Line
Both studies offer useful insights, but they look at different aspects of DAX. NEJM shows where the tool needs work for broader efficiency gains. JAMA highlights how it can improve clinician experiences today. To really get the full picture, future research should combine both perspectives.
🤖 Patient First, AI Second 🤖
Ethical and Regulatory Landscape of Healthcare AI
What “WE” think about the use of AI in Healthcare: A Scientific Survey
A survey recently published in JAMA Network Open, conducted on more than 2,000 people from different demographic, social, and economic backgrounds, provides valuable insights into public perceptions of AI use in healthcare.
The survey's key finding was participants' strong desire to be notified when AI is used in their treatment or disease management.
Key Findings
Strong Desire for Notification: A significant majority of respondents (62.7%) strongly agreed that they should be informed about the use of AI in their healthcare.
Demographic Differences:
Gender: Women expressed a greater desire for notification than men.
Age: Older adults were more likely to favor notification than younger adults.
Race/Ethnicity: White respondents showed a higher preference for notification compared to Black or African American respondents.
Education: Individuals with higher levels of education were more likely to desire notification.
Comparison to Other Data Uses: The desire for notification about AI in healthcare was found to be higher than for other forms of data use, such as health information or biospecimens.
Implications
The findings underscore the importance of transparency and public trust in AI-driven healthcare. Healthcare organizations and policymakers should prioritize patient notification as a crucial step in ensuring ethical AI use.
However, the study also highlights the need for nuanced approaches to notification, considering the diverse needs and preferences of different demographic groups. Collaborative efforts involving the public, patients, and experts are essential to develop comprehensive strategies that promote transparency and build trust in AI-powered healthcare.
Disclaimer: This newsletter contains opinions and speculations and is based solely on public information. It should not be considered medical, business, or investment advice. This newsletter's banner and other images are created for illustrative purposes only. All brand names, logos, and trademarks are the property of their respective owners. At the time of publication of this newsletter, the author has no business relationships, affiliations, or conflicts of interest with any of the companies mentioned except as noted. ** OPINIONS ARE PERSONAL AND NOT THOSE OF ANY AFFILIATED ORGANIZATIONS!