Table 3 Case study in multicenter research: challenges and lessons learned

From: Conducting multicenter research in healthcare simulation: Lessons learned from the INSPIRE network

In this case study, we reflect on our experiences conducting the CPRCARES study (Improving Cardiopulmonary Resuscitation with a CPR Feedback Device and Refresher Simulation), a prospective, multicenter, randomized study with a 2 × 2 factorial design conducted across 10 INSPIRE network sites [11].

The main objective of the study was to determine whether just-in-time CPR training before cardiac arrest, or real-time visual CPR feedback during cardiac arrest, improves the quality of chest compressions during a simulated cardiac arrest scenario.

Planning phase

  Defining the research question(s): We had two research questions we wanted to answer. We decided to conduct a study with a 2 × 2 factorial design, allowing us to answer both questions in a single study (a minimal allocation sketch follows this item).

  Lesson learned: Multicenter research allows for sufficient sample size to conduct factorial design studies.
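
  The value of the factorial design is that every participant contributes to both comparisons. As a minimal illustration, the hypothetical code below (not the CPRCARES allocation procedure; arm labels are assumed from the two interventions described above) uses block randomization to keep the four arms balanced within each site:

  ```python
  # Hypothetical sketch (not the CPRCARES allocation code): block
  # randomization into the four cells of a 2 x 2 factorial design, so that
  # each site contributes balanced numbers to every arm.
  import random

  ARMS = [
      ("just-in-time training", "real-time feedback"),
      ("just-in-time training", "no feedback"),
      ("no just-in-time training", "real-time feedback"),
      ("no just-in-time training", "no feedback"),
  ]

  def block_randomize(participant_ids, seed=None):
      """Assign participants to the four factorial arms in shuffled blocks of 4."""
      rng = random.Random(seed)
      assignments = {}
      for start in range(0, len(participant_ids), len(ARMS)):
          block = ARMS[:]  # one copy of each arm per block
          rng.shuffle(block)
          for pid, arm in zip(participant_ids[start:start + len(ARMS)], block):
              assignments[pid] = arm
      return assignments

  if __name__ == "__main__":
      ids = [f"P{i:03d}" for i in range(1, 9)]
      for pid, arm in block_randomize(ids, seed=42).items():
          print(pid, *arm)
  ```

  Blocking within each site keeps arm sizes balanced even if individual sites fall short of their recruitment targets.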

  Outcome measures: We wanted to collect CPR quality data from both the CPR feedback device and the mannequin. We needed a mannequin permitting chest compression depths greater than 5 cm (as CPR guidelines recommend a depth of 5–6 cm), so the manufacturer provided custom-made chest springs allowing a maximum compression depth of 7 cm. The chest springs were installed in mannequins across all 10 sites (a brief depth-summary sketch follows this item).

  Lesson learned: If the mannequin is to be used to collect data, it must have the appropriate functional fidelity.
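
  To illustrate why this functional fidelity mattered, the sketch below summarizes recorded compression depths against the 5–6 cm guideline band cited above; the depth values are invented for illustration:

  ```python
  # Hypothetical sketch: summarizing compression depths (cm) recorded by the
  # mannequin/feedback device against the 5-6 cm guideline band. The depth
  # values below are invented for illustration.
  depths_cm = [4.2, 5.1, 5.8, 6.3, 5.5, 4.9, 5.6, 7.0]

  in_target = [d for d in depths_cm if 5.0 <= d <= 6.0]
  print(f"Mean depth: {sum(depths_cm) / len(depths_cm):.1f} cm")
  print(f"Within 5-6 cm guideline: {len(in_target) / len(depths_cm):.0%}")

  # A mannequin whose chest bottoms out at 5 cm would record every adequate
  # compression as exactly 5 cm, censoring the data -- hence the custom
  # springs allowing depths up to 7 cm.
  ```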

  Pilot studies: We did not conduct a pilot study and instead used results from prior clinical studies to inform our power calculation (a sketch of this kind of calculation follows this item). Without a pilot study, we were less prepared for the multicenter study and, as a result, were forced to troubleshoot many issues during feasibility testing and recruitment that could have been avoided.

  Lesson learned: Pilot studies not only help to inform sample size/power calculations but also provide valuable experience to help inform the design of the multicenter research protocol.
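
  Below is a sketch of the kind of a priori power calculation that pilot or prior clinical data feed into; the effect size is an assumed placeholder (Cohen's d = 0.5), not a value taken from the CPRCARES study:

  ```python
  # Sketch of an a priori power calculation for a two-group comparison.
  # The effect size is an assumed placeholder, not a CPRCARES value.
  from statsmodels.stats.power import TTestIndPower

  n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                            alpha=0.05, power=0.80)
  print(f"Participants needed per group: {n_per_group:.0f}")  # ~64
  ```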

Project development phase

  Identify collaborators: Just-in-time CPR training was one of the interventions, so we excluded sites where just-in-time CPR training was already occurring. Unfortunately, this meant investigators from recruitment sites did not have prior experience with just-in-time CPR training. To address this issue, we invited collaborators with experience using just-in-time CPR training to help develop the protocol.

  Lesson learned: Inviting collaborators who have prior experience with the intervention is important for protocol development.

  Protocol development: We presented the proposed study at an INSPIRE annual meeting and received valuable feedback that was incorporated into the protocol. Unfortunately, the protocol revisions led to unexpected delays.

  Lesson learned: The research timeline should appropriately budget for time to revise the protocol after receiving feedback.

  Research operations manual: We trained our confederates thoroughly and measured their compliance with the tightly scripted confederate roles.

  Lesson learned: Confederate compliance with pre-scripted behaviors can be very high if they are trained in a rigorous manner.

  Grant preparation: Our study scenario was a case of cardiac arrest progressing from one rhythm to another. The simulated clinical environment was also standardized across sites. Reviewers questioned the generalizability of our findings across different institutions (where clinical environments differ) and across different patient presentations of cardiac arrest (i.e., different rhythms).

  Lesson learned: While standardization is a strength of simulation research, it may also be perceived as a limitation when it comes to generalizability.

  Ethics approval and subsite contracts: We submitted several ethics amendments for sub-study ideas that emerged during discussions. This led to a significant delay.

  Lesson learned: Ensure that all ideas for possible sub-studies have been discussed and incorporated into the research proposal prior to ethics submission.

  Clinical trial registration: We were asked to provide a clinical trial registration number upon submission of the manuscript for publication.

  Lesson learned: Ensure your study is prospectively registered in a clinical trial registry prior to initiation of recruitment.

  Manuscript oversight committee: We had planned for several sub-studies to be published as separate manuscripts. Writing groups were assigned by the manuscript oversight committee, which resulted in no conflicts between investigators over authorship order.

  Lesson learned: Transparency and clarity are key to preventing conflict between investigators over potential publications resulting from multicenter research projects.

  Feasibility testing: Despite feasibility testing, one or two sites still submitted videos with very poor audio quality, making it difficult to use those videos in certain analyses.

  Lesson learned: Have all sites test audio and video quality before each recruitment session.

Study execution phase

  Recruitment and enrollment: Some sites fell short of their recruitment quotas due to a lack of available participants. Ongoing local studies with related interventions and outcomes limited the number of possible participants at some sites.

  Lesson learned: Collect an inventory of ongoing related studies at all potential sites and consider these studies when estimating the size of the potential participant pool.

  Communications and decision-making: We had regular conference calls and an annual face-to-face meeting that helped keep the study on track. Unfortunately, these calls dropped off after the study was complete, making communication more challenging during the dissemination phase.

  Lesson learned: Continue regular conference calls (and consider face-to-face meetings) during the dissemination phase of the research.

  Quality assurance: We instituted centralized monitoring of videos in this study, allowing us to identify poor video quality at one site early in recruitment. This resulted in a fix that improved video quality for subsequent sessions.

  Lesson learned: Centralized monitoring of data and videos is a critical quality assurance measure.

  Data abstraction and analysis: We trained raters to use a tool to assess clinical performance by viewing videos of the simulated cardiac arrest. Videos were not available for rating until 6 months after the rater training, necessitating repeat rater training and re-calibration (an agreement-check sketch follows this item).

  Lesson learned: Timing of rater training is critical. Ideally, raters should be trained and calibrated immediately prior to rating performance.
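
  One simple calibration check is to measure agreement between trained raters on a shared set of videos and re-calibrate if agreement drops over time. Below is a minimal sketch using Cohen's kappa; the binary ratings are invented for illustration:

  ```python
  # Hypothetical calibration check: agreement between two trained video
  # raters on a shared set of cases, using Cohen's kappa. The ratings below
  # (1 = task performed correctly) are invented for illustration.
  from sklearn.metrics import cohen_kappa_score

  rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
  rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]

  kappa = cohen_kappa_score(rater_a, rater_b)
  print(f"Cohen's kappa: {kappa:.2f}")
  # Re-train and re-calibrate raters if agreement falls below the study's
  # pre-specified threshold before rating begins.
  ```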

Dissemination phase

  Presentation and publication: We aimed to publish the main study first and sub-studies shortly thereafter. One or two sub-studies were processed and accepted for publication quickly, nearly resulting in their publication before the main study.

  Lesson learned: Do not submit sub-studies for publication until the main study has been accepted for publication.

  Media: Media outlets in the USA and Canada took interest in our study, resulting in investigators from various recruitment sites giving interviews to local and regional media outlets.

  Lesson learned: Engage media at various recruitment sites to maximize dissemination.

  Translation to practice: We wanted our study to inform the evidence review that the International Liaison Committee on Resuscitation (ILCOR) was conducting for the 2015 resuscitation guidelines. Unfortunately, our study was published after the literature searches had been conducted. Immediately after our study was published, we contacted the author of the question related to just-in-time training to ensure our study was included in the review process.

  Lesson learned: Work with knowledge translation partners to determine their deadlines. Take knowledge translation efforts into consideration when developing research timelines.