
    OmniBlog

RBM Implementation - Top 3 Areas to Simplify

Posted by Steve Young on Sep 14, 2015 1:53:00 PM

Risk-based monitoring (RBM) presents a tremendous opportunity for the life sciences industry. The FDA and the EMA have both strongly endorsed RBM in guidance documents issued over the past couple of years, clearly signaling to the industry that this paradigm represents a superior approach to achieving quality in the conduct of clinical research. RBM also presents a tremendous opportunity in terms of resource efficiencies, particularly in the area of site monitoring. The following graph comes from an RBM value calculator developed at OmniComm and illustrates the significant value that can be realized for a typical phase 3 study – value derived not just from resource efficiency gains but from improved quality as well.

OmniComm recently conducted a survey (RBM Survey) across 29 clinical research organizations to evaluate their progress with RBM adoption. 76% of respondents indicated that they are already actively implementing risk-based monitoring or planning to initiate their first RBM studies during this year (2015). This represents a very significant advancement from even a year ago, and we expect this momentum to only increase over the next couple of years, especially as the ICH E6 guidance for GCP is currently being updated to incorporate risk-based monitoring principles.

The survey respondents were also asked what they deem as their top challenge to successful RBM implementation, and overwhelmingly the number one answer (63%) was perceived RBM complexity/burden. So what is the source of complexity that organizations are encountering? We see two drivers in particular. First, all of the growing interest in RBM has generated a plethora of advice – there are a lot of RBM “cooks in the kitchen”! And the available advice is not always consistent. So organizations and study teams are left filtering through all of that advice to determine what truly are best practices and what should be ignored.

Second, a lot of the methodology guidance that's coming out - while well-intentioned - represents a level of over-engineering that is unnecessary and more likely to confound an effective RBM program. We see three areas in particular where complexity and burden should be reduced.


Study Risk Assessment

The first of these is the Study Risk Assessment exercise, which is an important planning component of an effective RBM program. During the study planning phase, a cross-functional study team should convene, identify key operational risks based on the study design, and use that information to ensure that the operational quality management plans for the study are focused on eliminating or at least mitigating those risks. A current issue is that many organizations are trying to assess fifty, sixty, or even more discrete study attributes for risk identification, which ends up being a very burdensome, time-consuming process for study teams that are already extremely busy with numerous study planning and set-up activities. Assessing this many attributes introduces significant overlap in risk assessment coverage; paring the list down significantly (and thoughtfully) should still yield a very effective risk assessment process without miring the study team in a drawn-out exercise.

What's often exacerbating this situation is the actual risk scoring method being used. The FMEA - or failure modes and effects analysis - is a common one and represents a rather complex scoring method that comes across as arcane and confusing to study teams. Simpler scoring methods can be applied successfully and will be less confusing and time-consuming for your study teams.
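To make the contrast concrete, here is a minimal sketch comparing an FMEA-style Risk Priority Number with a simpler two-factor score. The factor names, scales, and example values are illustrative assumptions, not part of any specific RBM methodology:

```python
# Hypothetical illustration of two risk scoring methods.
# Scales and factor names are assumptions for demonstration only.

def fmea_rpn(likelihood: int, impact: int, detectability: int) -> int:
    """FMEA-style Risk Priority Number: three factors rated 1-10,
    multiplied together, yielding a score in the range 1-1000."""
    return likelihood * impact * detectability

def simple_score(likelihood: int, impact: int) -> int:
    """Simpler two-factor score: two factors rated 1-3,
    multiplied together, yielding a score in the range 1-9."""
    return likelihood * impact

# The same study attribute scored both ways:
print(fmea_rpn(7, 8, 4))   # 224 on a 1-1000 scale - hard to interpret
print(simple_score(3, 3))  # 9 on a 1-9 scale - immediately readable
```

The point is not the arithmetic but the cognitive load: a 1-1000 scale with three subjective inputs invites debate over precision that a 1-9 scale avoids.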

Targeted SDV Planning

A second important area warranting review is targeted or partial SDV planning and execution. One practice in particular that we believe needs challenging is the pre-study assignment of different SDV levels for each investigative site. Is it necessary to assign different SDV levels to sites at the start of each study, based on a pre-study assessment of data quality risk for each site? We’ve begun assembling evidence that strongly suggests that even less experienced sites – including sites in more research-naïve global regions – show similar or even superior levels of quality with respect to reliable data transcription, timely data entry, etc. Your study teams may again be wasting valuable time during study set-up, assessing and categorizing sites based on a perceived (not actual) level of data quality risk, and complicating the SDV plan by assigning different plans to each site. It is likely more efficient and effective to assign the same baseline SDV plan to all sites in your study, and simply allow the operational quality monitoring activities to determine when a given site may warrant additional SDV scrutiny.
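The baseline-plus-escalation approach described above can be sketched in a few lines. This is an illustrative model only - the SDV percentages, metric names, and thresholds are all assumptions, not OmniComm's actual algorithm:

```python
# Illustrative sketch: every site starts on the same baseline SDV plan;
# a site is escalated only when observed quality metrics warrant it.
# All percentages, field names, and thresholds below are hypothetical.

BASELINE_SDV_PCT = 20     # assumed baseline: verify 20% of source data
ESCALATED_SDV_PCT = 100   # assumed escalation: full SDV

def sdv_level(site_metrics: dict) -> int:
    """Return the SDV percentage for a site based on observed quality."""
    high_query_rate = site_metrics.get("query_rate", 0.0) > 0.15   # queries per data point
    slow_data_entry = site_metrics.get("entry_lag_days", 0) > 14   # days from visit to entry
    if high_query_rate or slow_data_entry:
        return ESCALATED_SDV_PCT
    return BASELINE_SDV_PCT

print(sdv_level({"query_rate": 0.05, "entry_lag_days": 3}))   # 20 - stays on baseline
print(sdv_level({"query_rate": 0.22, "entry_lag_days": 3}))   # 100 - escalated
```

The design choice here is that escalation is driven by actual observed data, replacing the pre-study categorization of sites by perceived risk.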

Key Risk Indicators (KRIs)

Let’s move finally to the planning and execution of key risk indicators (KRIs). These are the quality-related metrics your teams will configure and use centrally to identify sites that may be deviating significantly from an expected norm and therefore require special attention and remediation. So how many KRIs are sufficient to effectively oversee quality on any given study? Many organizations are trying to implement dozens of KRIs for each study. With that many, you will likely find them difficult to scale and maintain, and a source of increased confusion and effort for your study teams instead of a help. The reality is that – similar to our observation regarding the number of study attributes used in risk assessment – this many KRIs represents overlap and duplication in signal detection. The volume of “false signals” will inevitably increase, and rather than focusing study team resources on what matters, will more often send them chasing down non-issues, causing frustration for both them and the investigative sites. What's much more important than the quantity of KRIs is their quality. It is of critical importance to select and configure KRIs first to be as reliable as possible; i.e., effective at detecting real emerging issues and minimizing the occurrence of false signals. Second, KRIs must be configured for earliest possible detection; i.e., “leading” indicators vs. “lagging”. A quality issue identified well after the fact is not helpful, as the damage has likely already been done and become permanent.
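As a concrete illustration of the core KRI idea - flagging sites that deviate significantly from the study-wide norm - here is a minimal sketch using a z-score threshold. The metric (protocol deviation rate), site names, and threshold are assumptions for demonstration, not a prescribed configuration:

```python
# Illustrative KRI sketch: flag sites whose metric deviates from the
# study-wide norm by more than a z-score threshold. Metric, site IDs,
# and the 2.0 threshold are hypothetical choices for this example.
from statistics import mean, stdev

def flag_outlier_sites(metric_by_site: dict, z_threshold: float = 2.0) -> list:
    """Return site IDs whose KRI value lies more than z_threshold
    standard deviations from the study-wide mean."""
    values = list(metric_by_site.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no variation across sites, nothing to flag
    return [site for site, value in metric_by_site.items()
            if abs(value - mu) / sigma > z_threshold]

# Example: protocol deviation rate per site; site_10 is a clear outlier.
rates = {
    "site_01": 0.020, "site_02": 0.030, "site_03": 0.025, "site_04": 0.020,
    "site_05": 0.015, "site_06": 0.030, "site_07": 0.020, "site_08": 0.025,
    "site_09": 0.020, "site_10": 0.300,
}
print(flag_outlier_sites(rates))  # ['site_10']
```

A simple rule like this illustrates why KRI reliability matters: a threshold set too low floods the team with false signals, while one set too high turns the KRI into a lagging indicator.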

It is somewhat natural and expected that a level of over-engineering will take place as a new paradigm is taking shape. These challenges will inevitably be overcome, and those that recognize them earlier will benefit the most. RBM does not need to be complicated, and in fact its fundamental purpose is to drive better focus and greater efficiency (while improving quality). Simpler in this context does not mean we shouldn’t be thoughtful in our approach. All components of your RBM methodology and tools need to be carefully evaluated to ensure they are supporting the goal of more efficient, more effective quality management.


Tags: Risk-Based Monitoring