FDA SaMD Classification: Which Tier Is Your AI?

Understanding FDA Software as a Medical Device (SaMD) classification is critical for health AI companies. Here's how to determine your risk tier and what it means for your regulatory pathway.

If you're building AI that affects clinical decisions, you need to understand how the FDA thinks about Software as a Medical Device (SaMD). The classification determines your regulatory burden, and getting it wrong can cost you years.

What is SaMD?

Software as a Medical Device is software intended to be used for medical purposes without being part of a hardware medical device. That includes:

  • Clinical decision support tools
  • Diagnostic algorithms
  • Treatment recommendation systems
  • Patient monitoring software
  • Medical imaging analysis AI

The key question: Does your software's output inform or drive clinical decisions?

The SaMD Risk Framework

The FDA follows the International Medical Device Regulators Forum (IMDRF) framework, which classifies SaMD risk along two dimensions:

Dimension 1: State of Healthcare Situation

  • Critical: Life-threatening or results in irreversible harm. Examples: ICU monitoring, stroke detection.
  • Serious: Significant impact on health outcomes. Examples: cancer screening, cardiac analysis.
  • Non-serious: Minor impact, easily reversible. Examples: wellness tracking, appointment scheduling.

Dimension 2: Significance of Information to Healthcare Decision

  • Treat or diagnose: Software drives the clinical action. Examples: automated diagnosis, treatment selection.
  • Drive clinical management: Software informs the decision pathway. Examples: risk scores, prioritization algorithms.
  • Inform clinical management: Software provides supplementary information. Examples: data visualization, reference information.

The Risk Matrix

Combining these dimensions gives you a risk tier:

                 Treat/Diagnose   Drive Management   Inform Management
  Critical       IV (Highest)     III                II
  Serious        III              II                 I
  Non-serious    II               I                  I

Tier IV = Most stringent requirements (Class III device pathway).
Tier I = Least stringent (may not require premarket review).
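
Since the matrix is just a lookup, it's easy to make concrete in code. Here's a minimal Python sketch; the enum and function names are my own illustration, not anything the FDA publishes.

```python
from enum import Enum

class Situation(Enum):
    CRITICAL = "critical"
    SERIOUS = "serious"
    NON_SERIOUS = "non-serious"

class Significance(Enum):
    TREAT_OR_DIAGNOSE = "treat or diagnose"
    DRIVE_MANAGEMENT = "drive clinical management"
    INFORM_MANAGEMENT = "inform clinical management"

# IMDRF risk matrix from the table above: healthcare situation x
# significance of the information -> risk tier.
RISK_MATRIX = {
    Situation.CRITICAL:    {Significance.TREAT_OR_DIAGNOSE: "IV",
                            Significance.DRIVE_MANAGEMENT: "III",
                            Significance.INFORM_MANAGEMENT: "II"},
    Situation.SERIOUS:     {Significance.TREAT_OR_DIAGNOSE: "III",
                            Significance.DRIVE_MANAGEMENT: "II",
                            Significance.INFORM_MANAGEMENT: "I"},
    Situation.NON_SERIOUS: {Significance.TREAT_OR_DIAGNOSE: "II",
                            Significance.DRIVE_MANAGEMENT: "I",
                            Significance.INFORM_MANAGEMENT: "I"},
}

def samd_risk_tier(situation: Situation, significance: Significance) -> str:
    """Look up the SaMD risk tier (I-IV) for a situation/significance pair."""
    return RISK_MATRIX[situation][significance]

# Example: a critical situation where the software drives clinical
# management lands in Tier III.
assert samd_risk_tier(Situation.CRITICAL, Significance.DRIVE_MANAGEMENT) == "III"
```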

What This Means for Your Regulatory Pathway

Tier I: Low Risk

  • May be exempt from premarket notification
  • Still must follow Quality System Regulation
  • Documentation requirements still apply

Tier II: Moderate Risk

  • Likely 510(k) pathway
  • Need to demonstrate substantial equivalence to a predicate device
  • Clinical evidence requirements vary

Tier III: Higher Risk

  • 510(k) or De Novo pathway depending on novelty
  • More substantial clinical evidence required
  • Post-market surveillance expectations

Tier IV: Highest Risk

  • Typically Premarket Approval (PMA) pathway
  • Rigorous clinical trial requirements
  • Ongoing post-market requirements

The Autonomy Question

Here's where many AI companies get tripped up: the level of human oversight affects your classification.

If a physician reviews and confirms every AI output before action is taken, your software "informs" rather than "drives" or "treats." That can shift you one column to the right in the matrix, toward a lower tier.

But be careful. The FDA looks at intended use in practice, not just in your labeling. If your software is designed to be used autonomously (even if a human could theoretically review it), you may be classified at the higher level.

Case Studies

Case 1: Cardiac Risk Scoring Tool

A startup building an AI that analyzes ECG data to generate cardiac risk scores.

  • Healthcare situation: Serious (cardiac conditions significantly affect outcomes, but the tool flags for physician review rather than diagnosing)
  • Information significance: Drives clinical management (risk score determines follow-up pathway)
  • Result: Tier II, likely 510(k) with predicate device

Case 2: Radiology AI for Stroke Detection

An AI that analyzes CT scans to detect large vessel occlusion and pages the stroke team.

  • Healthcare situation: Critical (stroke outcomes depend on rapid treatment)
  • Information significance: Drives clinical management (directly triggers clinical pathway)
  • Result: Tier III, De Novo pathway (novel algorithm, no predicate)

Case 3: Mental Health Chatbot

A conversational AI that provides CBT-based exercises and mood tracking.

  • Healthcare situation: Non-serious (wellness, not treatment)
  • Information significance: Informs clinical management (supplementary to therapy)
  • Result: Tier I, may not require premarket review (but verify with FDA)
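
Running these three cases through the illustrative samd_risk_tier sketch from the matrix section reproduces the same tiers:

```python
# Reusing the illustrative Situation/Significance enums and
# samd_risk_tier() from the matrix sketch above.
cases = [
    ("Cardiac risk scoring",  Situation.SERIOUS,     Significance.DRIVE_MANAGEMENT),
    ("Stroke detection",      Situation.CRITICAL,    Significance.DRIVE_MANAGEMENT),
    ("Mental health chatbot", Situation.NON_SERIOUS, Significance.INFORM_MANAGEMENT),
]
for name, situation, significance in cases:
    print(f"{name}: Tier {samd_risk_tier(situation, significance)}")
# Cardiac risk scoring: Tier II
# Stroke detection: Tier III
# Mental health chatbot: Tier I
```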

Pre-Submission Meetings

If you're uncertain about your classification, the FDA offers Pre-Submission (Pre-Sub) meetings. These are invaluable for:

  • Confirming your risk classification
  • Understanding the evidence FDA expects
  • Clarifying intended use and labeling questions
  • Building a relationship with your review division

I strongly recommend a Pre-Sub for any Tier II or higher device. There is no FDA fee for a Pre-Sub; the $6,000-$10,000 you'll typically spend preparing one can save you years of misdirected development.

The Predetermined Change Control Plan

For AI/ML devices, the FDA now allows a Predetermined Change Control Plan (PCCP). This lets you define:

  • What changes you might make to the algorithm
  • How you'll validate those changes
  • Under what conditions changes don't require new submission

This is a game-changer for AI companies. Instead of resubmitting every time you improve your model, you can define an "envelope" of acceptable changes upfront.
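
To make the "envelope" idea concrete, here's a hypothetical sketch of the kinds of boundaries a PCCP might pin down, expressed as configuration. Every field name and threshold here is invented for illustration; an actual PCCP is a narrative document you negotiate with the FDA, not a schema.

```python
# Hypothetical PCCP "envelope" sketched as configuration. All field
# names and thresholds are illustrative, not an FDA-prescribed format.
PCCP_ENVELOPE = {
    "permitted_changes": [
        "retraining on additional data from already-described sources",
        "threshold tuning within the validated operating range",
    ],
    "validation_protocol": {
        "test_set": "locked, version-controlled holdout, never used in training",
        "min_sensitivity": 0.90,   # must meet or beat cleared performance
        "min_specificity": 0.85,
        "subgroup_analyses": ["age", "sex", "scanner_vendor"],
    },
    "requires_new_submission": [
        "new intended use or patient population",
        "change to the model architecture",
        "performance outside the validated bounds",
    ],
}

def change_within_envelope(sensitivity: float, specificity: float) -> bool:
    """Check a retrained model's metrics against the envelope bounds."""
    protocol = PCCP_ENVELOPE["validation_protocol"]
    return (sensitivity >= protocol["min_sensitivity"]
            and specificity >= protocol["min_specificity"])
```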

Common Mistakes

Mistake 1: Assuming "AI" Gets Special Treatment

The FDA doesn't have a separate pathway for AI. Your AI is software, and software that affects clinical decisions is a medical device. The same framework applies.

Mistake 2: Classifying Based on What You Hope, Not Reality

I've seen companies claim their diagnostic AI is "just informational" when the entire product design assumes autonomous use. The FDA will look at how the product is actually used, not how you label it.

Mistake 3: Ignoring State-Level Regulations

FDA clearance doesn't mean you're done. Some states have additional requirements for clinical decision support, and HIPAA applies regardless of FDA status.

Mistake 4: Not Building Audit Trails From Day One

FDA-regulated software needs traceability. If you can't demonstrate what version of your algorithm produced what output on what data, you'll struggle in regulatory review.
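
As a sketch of the bare minimum (the field names are mine; a real quality system defines its required records through design controls): log the exact algorithm version, a hash of the input, and the output for every inference, append-only.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version: str, input_bytes: bytes, output: dict) -> dict:
    """Build a minimal audit entry: which model, on what data, said what, when."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,                            # exact version
        "input_sha256": hashlib.sha256(input_bytes).hexdigest(),   # what data
        "output": output,                                          # what it said
    }

# Append one JSON line per inference to an immutable log.
entry = audit_record("ecg-risk-v2.3.1", b"<raw ECG payload>", {"risk_score": 0.82})
print(json.dumps(entry))
```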

What I Recommend

  1. Determine your classification early. It affects everything from architecture to clinical trial design.
  2. Build for the regulatory pathway. Audit trails, version control, validation protocols.
  3. Consider a Pre-Sub meeting, especially if you're uncertain about classification.
  4. Plan for the PCCP. Design your change management with continuous improvement in mind.
  5. Document everything. Regulatory submissions are documentation exercises.

If you're building AI in healthcare and need help navigating the FDA pathway, let's talk. I've been through this process from the inside at companies shipping FDA-cleared AI at scale.