FDA Seeks Public Input on Evaluating Real-World AI Medical Device Performance

Overview

The U.S. Food and Drug Administration (FDA) has issued a Request for Public Comment: Measuring and Evaluating Artificial Intelligence-enabled Medical Device Performance in the Real-World, to gather input on how best to measure and evaluate the real-world performance of artificial intelligence (AI)-enabled medical devices, including those that use generative AI (GenAI). This initiative underscores the agency’s commitment to ensuring that AI continues to advance healthcare innovation while maintaining patient safety, reliability, and clinical effectiveness.

Not a Policy or Guidance Document

It’s important to note that this document is not a draft or final guidance and does not propose policy changes. Instead, it aims to stimulate an open discussion among regulators, developers, healthcare professionals, and the public on how AI-enabled devices behave in dynamic, real-world environments. The FDA encourages stakeholders to share insights, experiences, and data-driven best practices that could shape future frameworks for real-world AI performance monitoring.

Why Real-World Evaluation Matters

AI technologies promise to revolutionize healthcare by improving diagnostics, streamlining workflows, and personalizing treatments. Yet, once deployed in hospitals and clinics, these systems often encounter changing data inputs, clinical practices, and patient demographics that can affect their performance.

The FDA points out that AI system performance can drift over time, a phenomenon variously described as data drift, concept drift, or model drift. Such shifts can degrade accuracy, introduce bias, or undermine reliability, especially when the AI is exposed to new patient populations or evolving clinical environments.

Traditional testing methods, like retrospective validation and static benchmarking, are not sufficient to capture real-world variability. Therefore, ongoing and systematic monitoring is essential to ensure that AI-enabled devices remain safe and effective throughout their lifecycle.
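To make the drift phenomenon concrete, here is a minimal, illustrative sketch (not a method from the FDA document) of how a monitoring pipeline might quantify data drift: comparing a deployed model's input distribution against its training-time baseline using a two-sample Kolmogorov-Smirnov statistic. The feature values, sample sizes, and alert threshold are all hypothetical assumptions for illustration.

```python
# Illustrative sketch, NOT an FDA-recommended procedure: quantify data drift
# by comparing a deployment-window feature distribution against the training
# baseline with a two-sample Kolmogorov-Smirnov statistic.
import random

def ks_statistic(sample_a, sample_b):
    """Maximum distance between the two samples' empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

random.seed(0)
# Hypothetical feature (e.g., a lab value): training baseline vs. a later
# deployment window drawn from a shifted patient population.
baseline = [random.gauss(100, 15) for _ in range(5000)]
deployed = [random.gauss(110, 15) for _ in range(5000)]

d = ks_statistic(baseline, deployed)
# The 0.1 cutoff is an arbitrary illustrative threshold, not guidance.
print(f"KS statistic: {d:.3f}",
      "-> investigate possible drift" if d > 0.1 else "-> within tolerance")
```

In practice, a real-world monitoring program would run checks like this continuously across many input features, which is exactly the kind of infrastructure the FDA's questions probe.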

Core Themes of the Request for Comment

The FDA’s request invites public feedback around six primary areas:

Performance Metrics and Indicators – FDA asks stakeholders to share which metrics and indicators they use to assess safety, effectiveness, and reliability in real-world clinical use and how these metrics are defined, prioritized, and measured over time.

Real-World Evaluation Methods and Infrastructure – The agency seeks examples of tools and processes currently used for post-deployment monitoring, including how organizations balance human expert review with automated AI performance tracking. Questions also focus on the technical and operational infrastructure that supports these evaluations.

Postmarket Data Sources and Quality Management – The FDA seeks feedback on data sources such as electronic health records (EHRs), device logs, or patient-reported outcomes. It also wants to understand how stakeholders tackle data quality, interoperability, and privacy challenges when aggregating real-world evidence.

Monitoring Triggers and Response Protocols – The FDA asks how organizations detect performance degradation, what events trigger deeper evaluation, and how they define thresholds for taking corrective action.

Human-AI Interaction and User Experience – Given that human behavior plays a crucial role in AI outcomes, the FDA is interested in how user training, clinical workflow design, and communication strategies affect the ongoing performance and safety of AI systems.

Best Practices and Implementation Barriers – Finally, the agency welcomes insights into implementation challenges, incentives, and privacy safeguards shaping successful real-world validation systems.
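As a concrete illustration of the "monitoring triggers and response protocols" theme above, the following hypothetical sketch tracks a rolling agreement rate between AI outputs and clinician review, and flags the device for deeper evaluation when the rate falls below a preset threshold. The class name, window size, and 90% threshold are illustrative assumptions, not anything proposed by the FDA.

```python
# Hypothetical sketch of a threshold-based monitoring trigger: escalate when
# rolling AI-clinician agreement drops below a preset level. Window size and
# threshold are illustrative choices, not FDA recommendations.
from collections import deque

class PerformanceMonitor:
    def __init__(self, window=100, threshold=0.90):
        self.window = deque(maxlen=window)  # rolling agree/disagree record
        self.threshold = threshold          # corrective-action trigger level

    def record(self, ai_output, clinician_label):
        self.window.append(ai_output == clinician_label)

    def agreement_rate(self):
        return sum(self.window) / len(self.window) if self.window else 1.0

    def needs_review(self):
        # Require a full window before triggering, to avoid noisy early alarms.
        return (len(self.window) == self.window.maxlen
                and self.agreement_rate() < self.threshold)

monitor = PerformanceMonitor(window=50, threshold=0.90)
for i in range(50):
    # Simulated case stream: clinician disagrees on every fifth case (80% agreement).
    monitor.record(ai_output=1, clinician_label=1 if i % 5 else 0)
print("Trigger deeper evaluation:", monitor.needs_review())  # -> True
```

Defining what counts as "agreement," how large the window should be, and where the threshold sits are precisely the judgment calls the FDA is asking stakeholders to describe.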

Building on Previous Discussions

This request builds upon insights from the FDA Digital Health Advisory Committee’s November 2024 meeting, which focused on maintaining the safety and effectiveness of AI-enabled devices post-deployment. That discussion emphasized the need for robust, real-world evaluation strategies to complement premarket testing and regulatory review.

How to Participate

Public comments are open through December 1, 2025, via Regulations.gov (Docket No. FDA-2025-N-4203). Responses can focus on any subset of the questions posed, based on each submitter’s expertise or experience.

Why This Matters

As AI becomes increasingly integrated into medical devices, continuous, evidence-based oversight is becoming more urgent. The FDA’s proactive approach invites the healthcare community to help shape future frameworks that balance innovation with patient safety.

This call for public input represents a significant step toward creating a more adaptive, learning-based regulatory environment that evolves alongside AI technologies transforming healthcare.