Annex 22 was supposed to help. Instead, it left us with more questions than answers. The document warns us repeatedly about "critical applications" of AI, but here's the thing: it never actually tells us what those are. The result? Teams are stuck. QA departments are saying no to everything. IT is afraid to experiment. And meanwhile, terabytes of valuable GMP data sit locked away in archives, untouched and unusable.
But what if we're overthinking this?
Let's step back from the binary thinking that's got us stuck. AI isn't inherently dangerous or safe – it depends entirely on what role we give it.
Here's a framework that actually makes sense:
High Risk (Truly Critical): the AI acts autonomously on processes that directly affect product quality or patient safety, such as batch release or real-time process control.
Low Risk (Manageable): the AI retrieves, summarises, and highlights information, while a qualified person reviews the output and makes every decision.
The dividing line is simple: Who has the final say?
If a qualified person reviews, validates, and decides based on AI insights, we're in manageable territory. If the AI acts autonomously on critical processes, we need to be much more careful.
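What does "the human has the final say" look like in practice? Here's a minimal sketch in Python. Everything in it is invented for illustration – the record types, field names, and the example data are hypothetical stand-ins, not taken from any real quality system:

```python
from dataclasses import dataclass

@dataclass
class AISuggestion:
    """An AI output: a proposal, never a decision."""
    deviation_id: str
    suggested_classification: str  # e.g. "minor", "major", "critical"
    rationale: str

@dataclass
class QADecision:
    """The record of an accountable human decision."""
    deviation_id: str
    classification: str
    decided_by: str                    # a named qualified person, not the model
    ai_suggestion_seen: AISuggestion   # retained for the audit trail

def record_decision(suggestion: AISuggestion, reviewer: str,
                    approved_classification: str) -> QADecision:
    """Nothing enters the quality record until a named reviewer
    supplies the classification themselves; the AI's proposal is
    attached for traceability, not authority."""
    return QADecision(
        deviation_id=suggestion.deviation_id,
        classification=approved_classification,
        decided_by=reviewer,
        ai_suggestion_seen=suggestion,
    )

# Usage: the AI suggests "major"; the reviewer decides "critical".
suggestion = AISuggestion("DEV-2024-031", "major",
                          "Similar to three prior filter-integrity deviations")
decision = record_decision(suggestion, reviewer="J. Smith, QA",
                           approved_classification="critical")
print(decision.classification, "- decided by", decision.decided_by)
```

The shape is what matters here: the model's output is an input to a named person's decision, never a trigger for action, and the suggestion is kept alongside the decision for the audit trail.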
Most pharmaceutical companies have something incredible: decades of meticulously documented GMP data. Deviation reports, batch records, audit findings, CAPA investigations – it's all there.
The problem isn't lack of data. It's access.
Imagine being able to ask your quality system: "Show me all deviations similar to this one from the past five years." Or: "What patterns do you see in our stability failures?" Or even: "Which CAPAs actually worked for similar problems?"
This isn't science fiction. This is intelligent information retrieval – and it's exactly the kind of non-critical AI application that can transform how we work.
Smart Archive Search
Instead of spending hours manually searching through folders and databases, quality professionals could use AI-powered search that understands context, synonyms, and relationships between different types of records. The AI doesn't make judgments – it just helps humans find what they need faster.
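As a sketch of how that could work, here's semantic search over a handful of invented deviation summaries, using the open-source sentence-transformers library (any embedding model would serve the same role):

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

# Invented deviation summaries standing in for an archive export.
records = [
    "Deviation 2021-044: particulate contamination found on filling line 3",
    "Deviation 2019-102: foreign particles observed during visual inspection",
    "Deviation 2022-017: temperature excursion in cold storage unit B",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
record_embeddings = model.encode(records, convert_to_tensor=True)

# A natural-language question - no exact keyword match required.
query = "particle contamination during aseptic filling"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank the archive by semantic similarity to the question.
hits = util.semantic_search(query_embedding, record_embeddings, top_k=3)[0]
for hit in hits:
    print(f"{hit['score']:.2f}  {records[hit['corpus_id']]}")
```

The two particle-related records should rank above the temperature excursion even though their wording differs from the query – "particulate" and "foreign particles" match on meaning, not keywords. The ranking is all the system produces; a human reads the results.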
Pattern Recognition in Historical Data
AI can analyse thousands of data points to identify trends that human reviewers might miss – correlations between environmental conditions and product quality, or recurring issues that span multiple sites. But the AI doesn't draw conclusions; it highlights patterns for expert evaluation.
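For illustration, here's what "highlight patterns, don't draw conclusions" can look like with plain pandas. The column names, values, and the 0.7 threshold are all invented:

```python
import pandas as pd

# Hypothetical batch history: environmental readings alongside a
# quality outcome, as might be exported from a batch record system.
df = pd.DataFrame({
    "room_humidity_pct": [42, 55, 61, 38, 58, 63, 40, 60],
    "line_speed_upm":    [120, 118, 121, 119, 122, 120, 118, 121],
    "assay_result_pct":  [99.1, 97.8, 96.9, 99.4, 97.2, 96.5, 99.0, 96.8],
})

# Correlate every variable with the quality outcome.
correlations = df.corr()["assay_result_pct"].drop("assay_result_pct")

# Flag strong correlations for expert review - the script suggests
# where to look; a human decides whether it means anything.
flagged = correlations[correlations.abs() > 0.7]
print("Patterns worth a human look:")
print(flagged.sort_values())
```

The script never says why humidity might track assay results, or whether the relationship is causal, coincidental, or confounded. That evaluation stays with the experts.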
Intelligent Document Navigation
When investigating a deviation, AI could automatically surface related previous investigations, relevant SOPs, and similar cases from other facilities. It's like having an incredibly knowledgeable colleague who remembers everything – but you still make all the decisions.
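A rough sketch of the mechanism, with invented document titles. TF-IDF is used here as a deliberately simple stand-in; a production system would more likely use embeddings so that, say, "freeze drying" also matches "lyophilisation":

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: each entry is (document type, title/summary).
corpus = [
    ("SOP",           "SOP-014 Visual inspection of filled vials"),
    ("Investigation", "INV-2020-31 Cracked vials found after lyophilisation"),
    ("Investigation", "INV-2018-07 Vial breakage at depyrogenation tunnel exit"),
    ("CAPA",          "CAPA-2020-12 Adjusted tunnel cooling profile to reduce vial stress"),
]

new_deviation = "Several cracked vials discovered during visual inspection"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform([text for _, text in corpus])
query_vec = vectorizer.transform([new_deviation])

# Surface the most related prior records so the investigator starts
# with context instead of a blank search box.
scores = cosine_similarity(query_vec, doc_matrix)[0]
for (doc_type, text), score in sorted(zip(corpus, scores),
                                      key=lambda pair: pair[1], reverse=True):
    print(f"{score:.2f}  [{doc_type}]  {text}")
```

Again, the system only ranks and surfaces; the investigator reads, weighs, and decides.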
The guidance tries to be helpful but ends up being overly restrictive by default. The blanket warnings about generative AI, dynamic models, and probabilistic outputs make sense for truly critical applications – but they shouldn't apply across the board.
By not drawing clear lines, Annex 22 has inadvertently encouraged a "when in doubt, say no" approach. That's understandable from a regulatory perspective, but it's leaving enormous value on the table.
The future of AI in pharma isn't about replacing human expertise – it's about amplifying it. The most promising applications aren't the ones that make decisions for us, but the ones that help us make better decisions ourselves.
We need to move beyond asking "Is this AI allowed?" and start asking two better questions: "Does this keep humans in control?" and "Does it add real value?"
When we can answer yes to both questions, we're probably in the right territory.
Your quality data doesn't have to stay locked in digital filing cabinets. With the right approach – one that respects both regulatory requirements and practical needs – AI can help transform that archive into a living, searchable knowledge base.
The technology exists. The regulatory pathway is clearer than it first appeared. What we need now is the confidence to take the first step.
Ready to explore what's possible with your quality data? Let's have a conversation about turning your archives into assets – safely, compliantly, and with humans firmly in control.