
An evidence-based snapshot of how the industry is approaching AI in pharmacovigilance today, including where adoption stops and accountability remains firmly human.
Insights from the 2026 Inaugural Pharmacovigilance Trends Survey
If you’re responsible for pharmacovigilance today, you’re operating in a pressure zone that didn’t exist a few years ago.
AI capability is advancing quickly. Vendors are confident. Leadership wants efficiency. Volumes, sources, and expectations keep expanding.
At the same time, accountability has not shifted at all.
You are still the person expected to explain how decisions were made, why certain signals were prioritized (or not), and how patient safety was protected when something goes wrong. That reality frames every AI conversation in PV, whether it’s acknowledged or not.
The CIOMS report on Artificial Intelligence in Pharmacovigilance makes this explicit. AI can support PV activities, but responsibility remains human. Oversight must be meaningful, demonstrable, and sustained across the lifecycle of use.
This executive summary synthesises insights from responses to our inaugural PV Trends survey, reflecting current industry practice and sentiment around AI in pharmacovigilance.
Who answered, and why their perspective counts
The respondents to this survey are embedded in live PV systems. They include safety leaders, operational heads, and professionals responsible for signal management, governance, and compliance.
These are not speculative roles. These are people who carry inspection risk, regulatory scrutiny, and decision accountability.
That context matters: when responses express caution, boundaries, or uncertainty, they reflect responsibility, not reluctance.
Our honest view: In pharmacovigilance, caution from accountable professionals is often a sign of maturity.
The central learning: adoption with a hard stop
Across every section of the survey, one pattern is consistent.
AI is being adopted, but only up to the point where human judgment, regulatory defensibility, and accountability would be compromised.
In practice, this shows up clearly in the data. While over half of respondents (56%) report that they are already experimenting with or applying AI in pharmacovigilance, only 6% say AI is fully reshaping their PV operations, and 38% are not yet using AI at all.
Respondents are not questioning whether AI belongs in PV. That question has largely been settled. What remains unresolved is where the boundary sits between assistance and authority.
This boundary is not theoretical. It shows up repeatedly in how AI is being used today.
Our honest view: The industry is not moving slowly. It is drawing a line.
AI as assistant, not authority
Most respondents report wanting AI to support areas where scale challenges human teams, not replace judgment. Where AI or advanced analytics are already in use, common applications include signal surfacing, data triage, and operational support.
When asked where technology and AI should make the biggest impact, the most common responses were case processing automation (34%) and smarter signal detection (22%), followed by freeing up time for strategic work (19%).
These are supportive, preparatory use cases, not decision authority.
Confidence is high when AI is used to scan, sort, and prioritise information. Confidence drops when AI moves closer to deciding what matters, what is significant, or what action should be taken.
This distinction closely mirrors the CIOMS position: AI may assist, but it does not own decisions.
Our honest view: This is not a lack of ambition. It is an intentional separation between efficiency and accountability.
More signals, more pressure, not automatically more safety
One of the clearest operational impacts emerging from the survey is increased pressure on already stretched teams.
While respondents consistently point to AI and automation as a way to improve signal detection and case handling, the dominant frustration across PV today is not technology; it is capacity.
In this survey, 31% of respondents identify resource constraints as their single largest frustration, followed by lack of global alignment (25%) and compliance burden (22%).
This matters because AI-enabled approaches tend to increase signal volume and data visibility. Earlier visibility is valuable, but without sufficient expert capacity to validate, contextualise, and prioritise outputs, teams risk slower decisions, greater documentation burden, and inconsistent conclusions.
CIOMS explicitly cautions that increased sensitivity without appropriate governance can amplify noise and obscure true safety risks.
Our honest view: Safety does not improve simply because more signals are detected. It improves when signals can be understood, evaluated, and acted on in a timely and defensible way.
Governance is where confidence is gained or lost
When asked what limits wider AI adoption, respondents consistently highlight explainability, regulatory clarity, data quality, bias, and governance readiness.
These are often framed as barriers. In reality, they are the foundations of sustainable use.
The CIOMS guidance reinforces this by treating governance, transparency, and lifecycle oversight as essential conditions for AI use in PV, not optional safeguards.
Our honest view: Governance does not slow AI down. Poor governance does.
Human-on-the-Loop versus Human-out-of-the-Loop PV
The survey surfaces a fundamental choice facing the industry.
In Human-on-the-Loop PV, AI supports professionals by expanding visibility and reducing manual burden, while humans retain decision authority and accountability.
In Human-out-of-the-Loop PV, decisions are effectively delegated to systems that cannot explain their reasoning or be held responsible for outcomes.
Only one of these models aligns with regulatory reality.
CIOMS is clear that meaningful human oversight must be demonstrable. Oversight requires understanding, challenge, and the ability to intervene.
Our honest view: If humans cannot interrogate, override, and justify AI-supported outputs, oversight exists in name only.
What confident organisations are doing differently
Organisations that appear most confident in the survey share several common characteristics.
They define clear boundaries for AI use. They document human decision points explicitly. They invest in AI literacy so teams understand not just how tools work, but where they fail. And they align safety, quality, IT, and leadership around shared expectations.
These organisations are not avoiding AI. They are integrating it deliberately.
Our honest view: Confidence comes from clarity, not from complexity.
How to use this report
This executive summary is intended as a reference point, not a prescription.
You can use it to benchmark your current approach, frame leadership discussions, evaluate vendor claims, and align internal decisions with CIOMS principles.
It is designed to support clearer conversations about where AI helps, where it introduces risk, and where human judgment must remain explicit.
A living benchmark, not a snapshot
This survey will be rerun annually.
AI capability, regulatory expectations, and organisational confidence are all evolving. Tracking these shifts over time matters more than capturing any single moment.
By repeating this survey each year, we aim to build a longitudinal view of how AI in pharmacovigilance is maturing in practice, not just in ambition.
Our honest view: Progress is only meaningful if it can be observed, questioned, and measured responsibly.
Final reflection
AI will continue to advance. That trajectory is clear.
What remains uncertain is how well organisations will balance speed with accountability as the technology becomes more capable.
This survey suggests the industry understands what is at stake. The challenge now is translating that understanding into systems, processes, and decisions that regulators, clinicians, and patients can trust.
Questions worth asking now
If this summary resonates, it’s worth pausing to reflect on a few practical questions. These are not theoretical. They are the questions organisations end up answering implicitly, whether they choose to or not.
These questions are not about slowing innovation. They are about making sure innovation strengthens, rather than weakens, confidence and credibility.
If you found it difficult to answer more than one or two of these questions clearly, you are not behind. Most organisations are in the same position. But it is often a signal that intent has not yet been translated into something operational, defensible, and inspection-ready.
Turning insight into action
Understanding the landscape is one thing. Operationalising AI responsibly inside a live PV system is another.
Many organisations tell us they know what should happen, but struggle to translate that into workflows, governance models, validation approaches, and team readiness that hold up under real-world pressure.
If you’re exploring how to introduce or scale AI in pharmacovigilance and want support turning intent into something practical, defensible, and inspection-ready, we’re happy to help.
This might mean pressure-testing your current approach, helping you define clear boundaries for AI use, strengthening governance and oversight, or building a roadmap that fits your organisation’s reality.
There is no obligation and no sales pitch, just a conversation about what’s possible, what’s risky, and what makes sense for where you are today.
A PV department built by experts in science, regulations, and reality