

Clinical Operations Quality – What Is It and How Is It Ensured?

Posted by Steve Young on May 16, 2015 1:12:00 PM

What do we mean when we talk about “quality” in the context of conducting clinical research?  Quality can be a very elusive concept.  Fortunately, we have a documented set of rules and guidelines – under the umbrella of Good Clinical Practice (GCP) – that establish objective standards by which quality in clinical research is judged.  The volume of GCP regulations and related guidance is extensive, but virtually all of it is directed at the following two imperatives:

  1. Protecting the safety and well-being of trial subjects
  2. Ensuring that clinical trial data are credible and support effective evaluation of the research objectives.

In order to effectively assess and manage quality along these two imperatives, the following core questions can and should be evaluated both prior to and during the execution of every study:

  1. Is there any evidence of increased risk to – or actual poor management of – subject safety and well-being?
  2. Is there any evidence of increased risk – or actual damage – to the credibility and completeness of clinical trial data (i.e., “data quality”) that might prevent effective evaluation of the research objectives?

Achieving appropriate levels of quality along these two dimensions is a high-stakes proposition for clinical research executives.  Clinical development is very expensive and time-consuming, and sub-optimal quality that puts subjects at risk or undermines the ability to evaluate study results can be disastrous.  Poor quality uncovered too late may prevent sponsors from progressing their investigational product to the next stage of development, including submission for marketing approval.  Quality issues discovered during regulatory authority audits may result in long delays or even rejections of marketing approval.  Negative impacts can result even when issues are identified during an ongoing study, especially if not found early enough.  For example, issues resulting in high subject attrition rates or non-evaluable subjects may require a prolonged enrollment period with significant additional costs and delays.

It is clear, then, that an effective clinical operations quality management program is of paramount importance, from both an ethical and a business perspective.  Traditional approaches to operational quality management, which have been in place for several decades with little to no change, have included several core components: site monitoring, clinical data management, and various other medical/safety reviews.  These quality control methods have been marked by exhaustive manual reviews of data, both at investigative sites and remotely.  By far the most resource-intensive and costly of these has been site monitoring, which contributes up to 30% of the total cost of clinical research.  And the practice of 100% source data verification (SDV) has itself accounted for about half of the total site monitoring effort.

While costly, these traditional mechanisms have generally served their purpose in achieving requisite quality.  However, clinical development has come under increasing pressure over the past 15 years on a number of fronts. First, we have witnessed steadily increasing trial complexity, marked by the emergence of more burdensome and complex procedures for sites to follow and subjects to undergo.  This certainly increases risks related to protocol and GCP compliance, in addition to challenging the willingness of patients to enroll in clinical trials that present such a high level of burden or risk with uncertain benefit. Second, driven at least in part by negative public perceptions of product safety, there have been ever-increasing regulatory demands put on the quality of trial design and heightened scrutiny of study conduct. Third, all of these contribute to an increasing level of financial pressure that is driving organizations to seek more efficient ways to bring promising new products to market.

All of this upheaval poses a real and present danger to quality.  Fortunately, an opportunity to re-think our approach to clinical operations quality management has also emerged in recent years, enabled by the successful adoption of internet-based technologies for collecting clinical trial data and managing studies (e.g., EDC, ePRO, eCTMS, IRT, etc.) over the past decade.  This new eClinical landscape means that rich volumes of subject and site data and information are now available electronically to study teams in near real-time for viewing and processing.  Instead of needing to go to sites frequently to inspect relevant documentation and assess site behaviors and compliance, information can be reviewed and assessed remotely and centrally.  New quality management paradigms such as Quality by Design and Risk-based Monitoring (RBM) have taken center stage on the heels of this eClinical revolution, and promise a transformative improvement in both resource efficiency and quality.

Leveraging this opportunity does not come automatically, however.  The key to success lies in the ability to harness all of the incoming data and information in a way that enables effective ongoing assessment of study performance and quality.  This requires a combination of robust, clinically focused reporting capabilities and analytics that can turn raw information into actionable intelligence.

A perfect use case for these capabilities is what is known as Centralized Monitoring, which broadly refers to a centrally coordinated review of site- and study-level metrics and analytics that enables study teams to detect emerging quality risks across the sites in their study.  This allows for a more targeted use of site monitoring resources, directing them to the sites and issues requiring the most attention.  Higher levels of quality can be achieved as a result, with fewer resources.


Some organizations have already benefited significantly from this type of approach.  In one real example, a team utilized a central reporting capability to compute and assess the following key risk indicators – or KRIs – to detect potential quality issues at sites in a large phase 3 study:

  • Adverse Event (AE) Rate
  • Visit-to-eCRF Entry Cycle Time
  • Rate of Missed Assessments – for two subject assessments supporting the primary endpoint
  • Number of protocol deviations observed




The team was able to proactively identify a handful of sites that were deviating significantly from the study-wide trend on each KRI.
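This kind of KRI screen can be sketched in a few lines of Python.  Everything below is illustrative rather than taken from the study – the site names, adverse event rates, and the z-score threshold are all assumptions – but it captures the basic idea: compute each site's value for a metric, compare it to the study-wide distribution, and flag sites that deviate beyond a chosen threshold.

```python
from statistics import mean, stdev

# Hypothetical adverse event (AE) rates per site, e.g. events per subject-visit.
# These values are illustrative only, not data from the study described above.
site_ae_rate = {
    "Site 101": 0.42, "Site 102": 0.38, "Site 103": 0.05,
    "Site 104": 0.45, "Site 105": 0.40, "Site 106": 0.91,
    "Site 107": 0.37, "Site 108": 0.44,
}

def flag_outlier_sites(kri_by_site, z_threshold=1.5):
    """Return {site: z-score} for sites whose KRI value deviates from the
    study-wide mean by more than z_threshold standard deviations."""
    values = list(kri_by_site.values())
    mu, sigma = mean(values), stdev(values)
    return {site: (value - mu) / sigma
            for site, value in kri_by_site.items()
            if abs(value - mu) > z_threshold * sigma}

print(flag_outlier_sites(site_ae_rate))
```

Note that an unusually low value can be as much a signal as an unusually high one: in the example described below, a very low AE rate turned out to reflect under-reporting by the site rather than genuinely uneventful subjects.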



In one case, the site monitor investigated a site whose AE rate was very low, and discovered that the site had misunderstood the expectation that all subject events be reported, even those the site considered obviously unrelated to trial participation.  The team was also able to work more closely with sites that were delayed in eCRF data capture, which led to significant improvement in the overall average turnaround time – from over 15 days early in the study down to approximately 8 days later in the study.  Site engagement and compliance improved across the study as a direct result of leveraging these centralized analytics.

There is a tremendous opportunity for organizations today to improve quality in the conduct of their clinical studies, thereby increasing operational success and getting new treatments to patients faster.  It’s a win-win proposition for all stakeholders.  The key is in having the right tools in place to harness all of the rich study information now available, and turn it into actionable intelligence for your study teams!

I would like to acknowledge Comprehend (http://www.comprehend.com/) for their contribution to this blog and the use of their graphs and metrics.


Tags: Risk-Based Monitoring