biomedion Newsroom

Beyond the Fear: Making Sense of AI in GMP Environments

Written by Product Management | Aug 6, 2025 7:28:56 AM

How to navigate Annex 22's grey areas and unlock the potential hiding in your data archives

The Elephant in the Room

If you've been working in pharma quality lately, you've probably felt it too – that uncomfortable tension between excitement about AI's possibilities and the paralyzing fear of getting it wrong.

Annex 22 was supposed to help. Instead, it left us with more questions than answers. The document warns us repeatedly about "critical applications" of AI, but here's the thing: it never actually tells us what those are. The result? Teams are stuck. QA departments are saying no to everything. IT is afraid to experiment. And meanwhile, terabytes of valuable GMP data sit locked away in archives, untouched and unusable.

But what if we're overthinking this?

A Better Way to Think About Risk

Let's step back from the binary thinking that's got us stuck. AI isn't inherently dangerous or safe – it depends entirely on what role we give it.

Here's a framework that actually makes sense:

High Risk (Truly Critical)

  • AI that makes final decisions about batch release
  • Systems that automatically adjust manufacturing parameters
  • Any application where the AI acts without human oversight

Low Risk (Manageable)

  • AI that suggests possibilities while humans decide
  • Smart search tools that help find relevant information
  • Pattern detection systems that flag potential issues for review

The dividing line is simple: Who has the final say?

If a qualified person reviews, validates, and decides based on AI insights, we're in manageable territory. If the AI acts autonomously on critical processes, we need to be much more careful.


The Goldmine You're Already Sitting On

Most pharmaceutical companies have something incredible: decades of meticulously documented GMP data. Deviation reports, batch records, audit findings, CAPA investigations – it's all there.

The problem isn't lack of data. It's access.

Imagine being able to ask your quality system: "Show me all deviations similar to this one from the past five years." Or: "What patterns do you see in our stability failures?" Or even: "Which CAPAs actually worked for similar problems?"

This isn't science fiction. This is intelligent information retrieval – and it's exactly the kind of non-critical AI application that can transform how we work.

Real Examples That Make Sense

Smart Archive Search

Instead of spending hours manually searching through folders and databases, quality professionals could use AI-powered search that understands context, synonyms, and relationships between different types of records. The AI doesn't make judgments – it just helps humans find what they need faster.
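To make this concrete, here is an illustrative sketch – not any particular product's implementation – of how "find deviations similar to this one" can work at its core. It uses simple bag-of-words cosine similarity; production systems would use semantic embeddings and synonym handling, but the principle is identical: the tool ranks candidates, and the human decides what is relevant. The record IDs and texts below are made up.

```python
from collections import Counter
import math

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_similar(query: str, records: dict) -> list:
    """Rank archived records by textual similarity to a query deviation.
    Returns (record_id, score) pairs, best match first, for human review."""
    q = Counter(query.lower().split())
    scored = [(rid, cosine_similarity(q, Counter(text.lower().split())))
              for rid, text in records.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Hypothetical archive of deviation summaries
archive = {
    "DEV-2021-014": "temperature excursion in cold storage during shipment",
    "DEV-2022-087": "label misprint detected at packaging line inspection",
    "DEV-2023-102": "cold room temperature excursion overnight alarm failure",
}

results = rank_similar("temperature excursion cold storage", archive)
```

Note what the sketch does not do: it never classifies, closes, or dispositions anything. It returns a ranked list, and a qualified person takes it from there – which is precisely what keeps this kind of application in the low-risk category.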

Pattern Recognition in Historical Data

AI can analyze thousands of data points to identify trends that human reviewers might miss – correlations between environmental conditions and product quality, or recurring issues that span multiple sites. But the AI doesn't draw conclusions; it highlights patterns for expert evaluation.

Intelligent Document Navigation

When investigating a deviation, AI could automatically surface related previous investigations, relevant SOPs, and similar cases from other facilities. It's like having an incredibly knowledgeable colleague who remembers everything – but you still make all the decisions.

Where Annex 22 Missed the Mark

The guidance tries to be helpful but ends up being overly restrictive by default. The blanket warnings about generative AI, dynamic models, and probabilistic outputs make sense for truly critical applications – but they shouldn't apply across the board.

By not drawing clear lines, Annex 22 has inadvertently encouraged a "when in doubt, say no" approach. That's understandable from a regulatory perspective, but it's leaving enormous value on the table.

A Path Forward

The future of AI in pharma isn't about replacing human expertise – it's about amplifying it. The most promising applications aren't the ones that make decisions for us, but the ones that help us make better decisions ourselves.

We need to move beyond asking "Is this AI allowed?" and start asking "Does this keep humans in control while adding real value?"

When we can answer yes to both questions, we're probably in the right territory.

The Opportunity Ahead

Your quality data doesn't have to stay locked in digital filing cabinets. With the right approach – one that respects both regulatory requirements and practical needs – AI can help transform that archive into a living, searchable knowledge base.

The technology exists. The regulatory pathway is clearer than it first appeared. What we need now is the confidence to take the first step.

Ready to explore what's possible with your quality data? Let's have a conversation about turning your archives into assets – safely, compliantly, and with humans firmly in control.

References:

  1. 2025 EU GMP Draft Updates: Chapter 4, Annex 11, and Annex 22 – What’s Changing? https://gmpinsiders.com/2025-eu-gmp-draft-chapter-4-annex-11-annex-22/ 
  2. Stakeholders’ Consultation on EudraLex Volume 4 - Good Manufacturing Practice Guidelines: Chapter 4, Annex 11 and New Annex 22 https://health.ec.europa.eu/consultations/stakeholders-consultation-eudralex-volume-4-good-manufacturing-practice-guidelines-chapter-4-annex_en