Conducting multicenter research in healthcare simulation: Lessons learned from the INSPIRE network
Advances in Simulation volume 2, Article number: 6 (2017)
Simulation-based research has grown substantially over the past two decades; however, relatively few published simulation studies are multicenter in nature. Multicenter research confers many distinct advantages over single-center studies, including larger sample sizes for more generalizable findings, sharing resources amongst collaborative sites, and promoting networking. Well-executed multicenter studies are more likely to improve provider performance and/or have a positive impact on patient outcomes. In this manuscript, we offer a step-by-step guide to conducting multicenter, simulation-based research based upon our collective experience with the International Network for Simulation-based Pediatric Innovation, Research and Education (INSPIRE). Like multicenter clinical research, simulation-based multicenter research can be divided into four distinct phases. Each phase has specific differences when applied to simulation research: (1) Planning phase, to define the research question, systematically review the literature, identify outcome measures, and conduct pilot studies to ensure feasibility and estimate power; (2) Project Development phase, when the primary investigator identifies collaborators, develops the protocol and research operations manual, prepares grant applications, obtains ethical approval and executes subsite contracts, registers the study in a clinical trial registry, forms a manuscript oversight committee, and conducts feasibility testing and data validation at each site; (3) Study Execution phase, involving recruitment and enrollment of subjects, clear communication and decision-making, quality assurance measures and data abstraction, validation, and analysis; and (4) Dissemination phase, where the research team shares results via conference presentations, publications, traditional media, social media, and implements strategies for translating results to practice. 
With this manuscript, we provide a guide to conducting quantitative multicenter research with a focus on simulation-specific issues.
Simulation-based research (SBR) addresses either the impact of simulation as an educational intervention or its use as an investigative methodology to study clinically important questions. Despite the increase in published healthcare simulation research, relatively few studies are multicenter in nature. Many published single-center studies fail to make an impact on educational or clinical practice, or are never followed by a multicenter study. An increase in multicenter SBR has the potential to improve the level and quality of evidence to help to inform change that drives patient care and outcomes.
There are many important benefits of collaborative, multicenter research. Multicenter research allows analysis of questions that require larger sample sizes, while enabling comparison of effect between sites and providing insight related to generalizability of effect across institutions. Multicenter research promotes capacity, networking, and mentorship by bringing together investigators who share and leverage resources, expertise, and ideas [4,5,6,7]. Research networks can support multicenter collaborations by providing infrastructure, site investigators and content experts, and opportunities for dissemination of best practices to and beyond network members [4, 8, 9]. Collaborative research teams involving members from various professions or disciplines incorporate multiple perspectives that introduce new knowledge and concepts to improve healthcare. As a consequence of higher quality research, teams conducting multicenter studies are able to publish in higher impact peer-reviewed journals, contributing to a track record of success that leads to more effective dissemination and may support funding for future projects.
The International Network for Simulation-based Pediatric Innovation, Research and Education (INSPIRE) is the world’s largest simulation network focused on improving healthcare outcomes through collaborative, multicenter research. As INSPIRE network investigators, we have reflected on our past successes, challenges, and failures in conducting large, multicenter, simulation-based studies [9, 11–25]. In this manuscript, we provide a guide to conducting quantitative multicenter research with a focus on simulation-specific issues. We hope this guide will help facilitate collaboration and multicenter SBR that will positively impact patient care and clinical outcomes.
We identify four key phases to plan and conduct multicenter SBR: (1) Planning, (2) Project Development, (3) Study Execution, and (4) Dissemination (Fig. 1). While some of these phases and the steps within have been described for non-SBR related studies [3, 7, 26, 27], we share a unique perspective as simulation researchers by focusing on simulation-specific issues and related recommendations. For each step of the multicenter research process, we offer a simulation research pearl—a high-yield, simulation-relevant tip that will help investigators successfully complete their multicenter SBR project. Table 1 highlights the differences between clinical trials and simulation-based studies and provides practical tips for investigators to consider when conducting multicenter simulation-based research. To facilitate the research process, we provide a checklist for investigators (Table 2) to use as they progress through the various phases and steps of multicenter simulation-based research.
Defining the research question(s)
To identify an important research question, investigators can seek guidance from existing research networks that have identified knowledge gaps and developed consensus for the future of simulation research. For example, in 2011, the Society for Simulation in Healthcare and the Society in Europe for Simulation Applied to Medicine conducted an Utstein Style Meeting to set a research agenda for simulation-based healthcare education. The INSPIRE network also conducted a consensus building exercise to define six specific areas of focus to help advance the field of pediatric simulation. The results from consensus meetings serve as a roadmap for future multicenter research projects, while networks may provide feedback to help fine-tune the research question. Defining an objective that is feasible, ethical, and has the potential to positively impact learner and/or patient outcomes should drive the genesis of the research question. Table 3 offers a case study in multicenter research based on a recent INSPIRE multicenter study.
Simulation research pearl
Prior to proposing a research question, investigator(s) should complete a thorough review of the published literature within and outside of the simulation and/or healthcare domains. If a specific intervention has already been robustly studied in the clinical context, then there may be little reason to replicate the study in the simulated context.
Identifying outcome measures
Outcomes from simulation-based research have been described in the context of translational science, where T1 outcomes are those achieved in the simulation lab, T2 outcomes are those resulting in improved patient care practices, and T3 outcomes are those resulting in improvements in patient and public health. Investigators should strive to measure T2 and T3 outcomes, which can be facilitated by partnering with outcomes centers and/or clinical researchers who are familiar with the processes involved in collecting clinical data. When collecting T2 or T3 outcomes is not feasible or applicable, T1 outcomes (knowledge, skills, and/or attitudes) can be captured in the simulated environment [2, 31, 32] (Table 1). If T1 outcomes are measured, it is important that the measurement tools used have sufficient validity evidence.
Utilizing tools that lack validity evidence places the results of studies in question. When validated tools are unavailable, investigators have the option of either modifying a pre-existing tool or developing a new one; either way, research should be done to describe the validity evidence supporting the use of the tool as an outcome measure [34, 35]. For example, when planning to conduct a study evaluating the impact of scripted debriefing for novice facilitators of a pediatric advanced life support course, we decided to measure learner knowledge and team clinical performance as outcomes. Prior to the main study, we developed and conducted a validation study for both the multiple-choice test (i.e., knowledge) and the clinical performance tool (i.e., adherence to resuscitation protocols). Failure to invest time in gathering validation data for proposed outcome measures makes the results of the study difficult to interpret, with subsequent publication difficult to achieve.
Simulation research pearl
When simulation technology (e.g., mannequin or external device) is used to capture performance outcomes, investigators should ensure that the data captured is both reliable and accurate (Table 3). This may require standardized calibration of equipment across sites, collaboration with industry, and testing equipment at all sites prior to study implementation.
Pilot studies
Single-center pilot studies are the foundation for successful multicenter studies. Pilot studies help to inform potential modifications to the study question and study design, highlight challenges with protocol execution (e.g., obtaining consent, recruitment, data collection), and provide data to inform the power calculation of the sample size for the multicenter study [3, 27]. Rates of protocol non-adherence or subject withdrawal can be captured to refine the sample size estimate for the multicenter study. Lessons learned from pilot work should be integrated into the new multicenter study protocol, preventing potential issues from arising at that phase (Table 3). Single-institution pilot studies also help to identify the human and material resource needs that inform budgets. Sometimes, relevant pilot data can be found in the published literature; investigators should contact first authors to explore the potential for collaboration and synergy.
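To illustrate how pilot data feed the power calculation, the sketch below uses a standard normal-approximation sample size formula for a two-group comparison, inflated by the attrition rate observed in the pilot. The function name, defaults, and attrition adjustment are illustrative assumptions, not part of any specific INSPIRE protocol.

```python
import math
from statistics import NormalDist

def sample_size_per_group(pilot_mean_diff, pilot_sd,
                          alpha=0.05, power=0.80, attrition=0.10):
    """Two-group sample size via the normal approximation, inflated for
    the attrition/protocol non-adherence rate observed in the pilot."""
    z = NormalDist().inv_cdf
    effect = pilot_mean_diff / pilot_sd           # standardized effect (Cohen's d)
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect) ** 2
    return math.ceil(n / (1 - attrition))         # inflate for expected dropout
```

For example, a pilot suggesting a standardized effect of 0.5 yields roughly 70 participants per group after allowing for 10% attrition; halving the effect size roughly quadruples the required sample, which is often what makes a study multicenter in the first place.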
Simulation research pearl
Do not underestimate the value of conducting pilot studies. Pilot data strengthen grant applications, particularly for multicenter simulation studies, where investigators can highlight how pilot work has identified solutions to simulation-specific research issues (e.g., standardizing scenarios, blinding of reviewers, confederate training).
Project development phase
Identifying collaborators
Key collaborators for any multicenter study should have collective expertise in the relevant content area, clinical research, simulation research and/or education, study design, statistical analysis, technical matters (if appropriate), and knowledge translation. Team members may include principal and site investigators, research coordinators and assistants, a statistician, subject matter experts, simulation technicians, a psychometrician, trainee(s), and senior mentor(s). Involving trainees and junior faculty builds capacity for future studies, while mentors troubleshoot issues and provide career advice for early-career investigators. In the planning phase, the principal investigator clarifies roles, responsibilities, expectations, and the projected time commitment for each team member. The principal investigator works with site leads to determine existing resources at each site, identifies opportunities for matching funds from institutions, and clarifies budget requests for future grant applications.
Central to the success of a multicenter research team is the establishment and maintenance of trust amongst investigators. Research team members must feel comfortable sharing ideas without fear of others stealing those ideas and turning them into projects or grants of their own. Establishing an agreement of shared confidentiality when the research team is first formed helps to lay the foundation for a trusting bond between investigators.
Simulation research pearl
Invite collaborators from outside of the simulation community—they often provide perspective that can help enhance study design, clinical applicability, and generalizability.
Developing the study protocol
Protocol development is typically an iterative process involving feedback from diverse informed sources, revisions to the protocol, and consensus from investigators. The initial draft should be discussed in a meeting with collaborators and, if possible, external experts who are not part of the study team.
Next, the study team typically requires several more planning meetings (in person, web-based, or conference calls) to reach consensus before finalizing the study protocol. Using internet-based file sharing and collaboration tools can facilitate ongoing asynchronous dialogue amongst collaborators. Sometimes larger research teams run the risk of “analysis paralysis,” or spending too much time analyzing and debating the details of the study protocol without coming to consensus. If this occurs, it may help if a smaller core group of individuals develops the study protocol that is then circulated for final revision and approval by the entire research team.
Simulation research pearl
Investigators should present their study protocol to colleagues for peer review (Table 3). The INSPIRE network hosts a bi-annual meeting where investigators meet with collaborators and external experts to receive constructive feedback.
Research operations manual
The research operations manual provides a step-by-step guide, policies, and standard operating procedures for the execution of the study protocol. The manual should be organized to provide the necessary information for a collaborator at any site to recruit subjects and collect data, including study team members, study flow, inclusion and exclusion criteria, study methods, confederate training, and details regarding data collection and sharing. Listing team members with appropriate contact information ensures that collaborators have someone to contact for support and guidance.
Data management procedures are critical to a successful multicenter study. A centralized method of collecting data (e.g., online website, centralized database) allows for data to be entered remotely (i.e., either by collaborators and/or study participants), which is particularly important for multicenter research [40, 41]. Data security and privacy concerns are heightened if data can be entered or viewed by investigators across institutions. The principal investigator determines the level and degree of access for each team member. The manual also describes the unique identification system for each study participant (or team), while providing a means of identifying data across nested factors such as time points, recruitment sites, and intervention groups. Standard procedures for data abstraction, verifying the accuracy of data, and generation of data backups should be described within the manual.
The ability to isolate the independent variable by minimizing the influence of other variables is an advantage of simulation research; the challenge is how to accomplish this across multiple sites (Table 1). Participants should be oriented to the technology and the environment in a standardized fashion. When confederates (i.e., actors) are used in SBR, they should be trained and monitored to ensure consistent performance across sites (Table 3). In prior work, we described how the use of cue cards, online learning, and videos modeling ideal confederate behavior resulted in highly consistent confederate performance in a multicenter trial. The same type of simulator (e.g., task trainer, mannequin) should be used across all recruitment sites, and the simulated environment should be set up in the same manner to reduce potential confounders. Standardizing the simulation event, including clear learning objectives, use of adjuncts, and facilitator characteristics, should be discussed when preparing the protocol. All relevant elements of instructional design (e.g., duration, timing, frequency, clinical variation, assessment, adaptability, range of difficulty, adjuncts, integration, feedback/debriefing) need to be considered; for educational studies, these represent significant confounding variables that can be minimized through careful planning and feasibility testing.
Simulation research pearl
A detailed description of how the protocol is standardized across sites helps to minimize risk of bias. Recently published reporting guidelines describe important standardization elements, including participant orientation, simulator type, simulation environment, simulation event/scenario, instructional design, and feedback and/or debriefing [44,45,46,47].
Grant funding
Grant support is often necessary to conduct a multicenter research project. The project team identifies candidate funding agencies, which may vary depending upon research focus and country of origin. We recommend applying to multiple funding opportunities (if permitted by the funding agencies) to maximize the chances of securing funding. Smaller grants present opportunities to fund pilot work, validation work, or portions of the main study. Grants should include preliminary data from pilot studies and should be written to highlight the strengths of the research team and prior collective successes. Rejected grants with reviewer comments should not be viewed negatively; rather, they offer feedback that can improve future grant submissions.
When preparing a budget, the principal investigator should request institutional budgets from each site investigator (to inform the larger budget), identify opportunities for matching or in-kind funds from collaborating institutions, and determine which research positions will provide best value for money. We recommend prioritizing some funding to support network infrastructure, as money goes much further when allocated to site research coordinators/assistants than it does in purchasing equipment or protecting investigator time. Management of the overall budget and distribution of funds is typically the responsibility of the principal investigator (lead research site). Additional administrative support should be allocated to the lead research site in the budget to account for management of finances. Sometimes, enthusiasm at a site wanes, resulting in unfulfilled commitments. To manage this issue, one or two backup sites can be identified in the grant proposal. These sites assist with recruitment if other sites are falling short. The responsibilities of each institution should match the funding allocation with a transparent process of sharing the budget across centers.
Ultimately, some projects may either be left unfunded or underfunded. If so, the research team needs to determine if the project can be feasibly completed with existing infrastructure. In our experience, multicenter research projects can be completed with fairly modest budgets if the principal and site investigators are passionate, enthusiastic, and have the time and energy to fully commit to the project.
Simulation research pearl
Grant reviewers may criticize studies with simulation-based T1 outcomes. To highlight the clinical relevance of these studies, investigators should attempt to describe potential links between proposed outcomes, other accessible outcomes, and patient outcomes (Table 1). Doing so provides a chain of causality that may strengthen the rationale for the study.
Ethical approval and subsite contracts
Obtaining institutional review board (IRB) approval can be a rate-limiting step for a multicenter research study. IRBs at different institutions may have varying requirements and a range of review levels, from exempt to review by the full board. To reduce workload and streamline the IRB process, principal investigators should prepare and circulate a copy of the approved IRB submission with accompanying documentation (e.g., protocol, consent forms, demographic forms, assessment tools) that can be used as a template for other site investigators (Table 1). We have found that a presentation (via webinar or face to face) given by the principal investigator describing elements of the IRB submission helps engage collaborators who are preparing their own IRB submissions. Enlisting the assistance of an IRB staff member can be very helpful when navigating multicenter and particularly multinational IRBs.
Subsite contracts between the principal investigator's institution and site investigators' institutions are typically required if data and/or money is being transferred between sites. Contracts help to ensure participant (i.e., learner, provider, or patient) confidentiality across all sites and outline data sharing and financial agreements. Typically, contracts cannot be executed until ethical approval has been obtained at both sites. These legal contracts can take many months to complete and can cause lengthy delays in research. Obtaining IRB approval early in the planning phase and quickly moving on to subsite contracts can help keep teams on track.
Simulation research pearl
Multicenter research teams can manage the variability with IRBs across institutions by sharing IRB comments with collaborators so that issues can be addressed in future submissions or amendments. Some IRBs will allow for a ceded review for simulation studies, either on a case-by-case basis or as a pre-approved agreement.
Clinical trial registration
Clinical trial registries are “web-based databases providing researchers, journal editors, and reviewers detailed study information to help inform trial results”, with the express intent of helping to determine the degree of publication bias. While the International Committee of Medical Journal Editors (ICMJE) requires that clinical trials be registered prior to publication in their member journals, there is controversy over whether simulation-based studies should also meet this requirement (Table 1). The ICMJE definition suggests that registration is necessary for studies examining the effect of an intervention on patients (i.e., T2 or T3 level outcomes), but not necessary for studies examining the effect on providers alone (i.e., T1 outcomes). Our experience has been variable, with most top medical journals requesting clinical trial registration numbers prior to considering studies for publication.
Manuscript oversight committee
The manuscript oversight committee (MOC), typically comprised of three members, is tasked with ensuring academic rigor, transparency for authorship assignment, and managing conflicts of interest for potential publications. The MOC works with the primary investigator to list all potential publications and then generates writing teams for each publication in a fair and logical manner. Key authorship positions are allocated based upon projected degree of involvement and workload, while also providing opportunity for novice investigators to serve in key writing roles. Developing and sharing an MOC document during the planning phase of research provides investigators opportunity to give feedback on proposed writing teams and prevents conflict between team members (Table 3). See Additional file 1 for a sample MOC document.
Simulation research pearl
Simulation studies can be published in many different journal types (e.g., simulation journals, healthcare education journals, clinical journals). Research teams should consider the journal scope, focus, and audience when selecting a journal, matching these to the study topic, quality, and intended audience (Table 1).
Feasibility testing
As one of the final steps of the project development phase, feasibility testing serves as an important check to determine if all sites are properly prepared to execute the study protocol. During feasibility testing, all sites should conduct several study rehearsal sessions using volunteers who would not typically be eligible to participate in the study. The sessions can serve multiple purposes: to train research personnel and to test the technical aspects of the research, including mannequin operation, audio/video capture (if applicable), and data collection/sharing (Table 1). If audio and video from one or more angles is being used to capture outcomes, we recommend that each site send audio/video from feasibility testing sessions to the primary investigator for review and approval before commencing recruitment. Test data should be uploaded to the database to ensure that the data collection system is functional.
Simulation research pearl
The simulation environment across research sites may be variable. When reviewing videos from feasibility testing, pay close attention to the simulation environment to identify possible sources of variance across sites (Table 3).
Study execution phase
Recruitment and enrollment
Clear and concise inclusion and exclusion criteria ensure participant types are similar across all sites. For randomized controlled trials, block randomization ensures equal allocation of study groups within study sites. Ideally, the randomization process should occur centrally (e.g., randomization envelopes created at one site), thus eliminating the chance for randomization to occur differently between sites.
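A centrally generated, permuted-block allocation sequence, as described above, can be sketched as follows. The block size, arm labels, and fixed seed are illustrative assumptions; in practice, the lead site would generate one sequence per site and distribute it as sealed envelopes.

```python
import random

def block_randomized_sequence(n_blocks, block_size=4,
                              arms=("intervention", "control"), seed=2017):
    """Centrally generated permuted-block allocation list; each block is
    balanced across arms, keeping group sizes equal within every site."""
    rng = random.Random(seed)                 # fixed seed: reproducible central list
    per_arm = block_size // len(arms)
    sequence = []
    for _ in range(n_blocks):
        block = [arm for arm in arms for _ in range(per_arm)]
        rng.shuffle(block)                    # permute within the block only
        sequence.extend(block)
    return sequence
```

Because balance is enforced within each block rather than only overall, no site can drift far from a 1:1 allocation even if it stops recruiting partway through a block.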
Simulation research pearl
Individuals from various sites may have different expectations of participation in simulation-based research and may wonder how it differs from simulation-based education and/or assessment (Table 1). To avoid confusion and systematic bias, the consent process can be standardized by using a video to introduce the study, potential risks, and how the session(s) may differ from simulation-based education.
Communications and decision-making
Clear communication between the principal investigator and collaborators prevents challenges from arising and keeps the research project on the proposed timeline. A research steering committee comprised of the primary investigator, site co-investigators, and administrative support should meet regularly by conference call and annually in person to review progress and make key decisions. Individual sites will have their own research operations committees, comprised of research assistants, coordinators, and the site investigator, to discuss recruitment and to troubleshoot any local issues. Establishing a clear organizational structure, along with shared goals and expectations, creates a team-based research environment where individuals buy into their role as collaborative team members.
Sometimes, despite having an established organizational structure, site investigators may lose interest in the study or be pulled away by competing priorities. To manage this issue, the primary investigator must foster a collaborative spirit by building team cohesion (e.g., team dinners at conferences), celebrating successes (e.g., first participant recruited, presentations, publications), and providing positive feedback. Understanding the areas of expertise of site investigators allows the primary investigator to assign responsibilities that are most likely to fully engage each team member. We have found quarterly newsletters and site performance dashboards (e.g., to report recruitment numbers) to be effective motivating tools during the study execution phase of multicenter studies.
Simulation research pearl
Simulation fellowship training programs offer opportunity to engage trainees in multicenter research. A training committee comprised of the primary investigator, trainee(s), and their supervisor(s) should be established to oversee trainee progress and academic productivity.
Quality assurance
Primary investigators should work with the research team to implement a quality assurance plan to prevent, detect, and address problems as they arise. In the study execution phase, detection of problems via routine monitoring can be accomplished through centralized monitoring of data (e.g., automated data field screening), centralized review of videos, and site visits. Centralized monitoring of data allows for detection of missing or incorrectly entered data points.
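The automated data field screening mentioned above can be as simple as a script run against the pooled database. The sketch below flags missing required fields and out-of-range values; the record layout, field names, and ranges are illustrative assumptions.

```python
def screen_records(records, required, ranges):
    """Flag missing or out-of-range values in centrally pooled study data.
    `records` is a list of dicts; `ranges` maps field -> (low, high)."""
    problems = []
    for rec in records:
        for field in required:
            if rec.get(field) in (None, ""):
                problems.append((rec.get("subject_id"), field, "missing"))
        for field, (low, high) in ranges.items():
            value = rec.get(field)
            if value is not None and not low <= value <= high:
                problems.append((rec.get("subject_id"), field, "out of range"))
    return problems
```

Running such a screen on a schedule (e.g., weekly) lets the coordinating site contact a subsite about a data entry problem while the session is still fresh, rather than discovering it at analysis time.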
A schedule of site visits by the principal investigator or core study team should be developed, ideally timed to correspond with the initiation of recruitment and, if funding permits, the middle of the trial. The initial site visit may be used as an opportunity to train research staff and confederate actors (if applicable). The individual conducting the site visit reviews the research operations manual with the local study team, reviews recruitment and data entry procedures, and addresses any pressing concerns. Mid-trial site visits include the above plus on-site review of local study data to identify any possible issues. If errors or systemic issues are detected, the primary investigator may need to (a) revise the study protocol to prevent future errors, (b) retrain research staff, (c) conduct further audits, and (d) report on protocol violations in publications.
Simulation research pearl
Intermittent centralized review of videos can identify issues with video and audio quality, adherence to blinding of participants, or deviations in confederate behaviors that may affect study outcomes (Table 3).
Data abstraction and analysis
Rater orientation training is required when assessment tools are used to collect performance data [56, 57]. Rater orientation training ensures all raters have a shared understanding of the construct(s) being assessed and provides opportunity to calibrate raters immediately prior to data abstraction. If raters are expected to abstract data at multiple points during or after the study, then booster training (or re-training) should be offered to re-calibrate raters prior to each assessment time point (Tables 1 and 3). Online websites and centralized databases can be used to collect and assign videos to raters, who can then submit ratings in an asynchronous fashion.
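One common way to verify that calibration has worked is to have two raters score the same set of calibration videos and compute a chance-corrected agreement statistic such as Cohen's kappa. The sketch below is a minimal implementation for categorical ratings; the acceptable kappa threshold is study-specific and not prescribed here.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters' categorical scores on the same items:
    observed agreement corrected for agreement expected by chance."""
    assert len(ratings_a) == len(ratings_b), "raters must score the same items"
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)
```

A kappa near 1 indicates the raters are well calibrated; values near 0 mean agreement is no better than chance, signaling that booster training is needed before data abstraction proceeds.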
Multicenter SBR is prone to missing data. Missing data should be analyzed to determine if systematic biases exist (e.g., poor performances are not captured, data missing from one site only). In multicenter trials, some of the variation in the outcome may be at the institution level—this is explicitly true in cluster-randomized trials—which should be taken into account both to ensure proper analyses and as a phenomenon of considerable interest.
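A first screen for the site-level bias described above is simply to tabulate the fraction of missing outcome data per site before any formal modeling. The helper below assumes an illustrative record layout (a `site` key and an outcome field); it is a sketch, not a substitute for a proper multilevel analysis.

```python
from collections import defaultdict

def missingness_by_site(records, outcome_field="score"):
    """Fraction of records with a missing outcome, broken down by site,
    to spot patterns such as data missing from one site only."""
    counts = defaultdict(lambda: [0, 0])      # site -> [missing, total]
    for rec in records:
        site = rec["site"]
        counts[site][1] += 1
        if rec.get(outcome_field) is None:
            counts[site][0] += 1
    return {site: missing / total for site, (missing, total) in counts.items()}
```

If one site's missingness is markedly higher than the others', the team can investigate the local cause (e.g., a recording failure) before deciding on an imputation or sensitivity-analysis strategy.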
Simulation research pearl
When assigning videos to raters, investigators should attempt to avoid allocating videos of participants to raters from the same site. While raters may be blinded to the intervention, recognizing individuals who are their colleagues and/or trainees may introduce bias to the rating process.
Presentation and publication
Once data collection and analyses have taken place, the writing phase begins for professional scholarly output in the form of presentations and publications. Abstracts can be submitted for presentation at multiple conferences (if conference guidelines permit) to promote greater dissemination. Publications should be prepared according to the MOC document and formatted by following reporting guidelines for healthcare simulation research [44,45,46]. Following the steps outlined in this paper should offer various opportunities for publication, including systematic or narrative reviews (i.e., defining the research question), assessment tool validation studies (i.e., outcome measures), pilot studies, the main multicenter study, associated sub-studies, and educational content (if applicable). Fig. 2 offers an example of a series of publications resulting from work related to a multicenter study examining the impact of just-in-time lumbar puncture training [21, 23, 25, 58, 59]. Publishing the systematic or narrative review, validation study, and pilot study early allows for citation of this work in the main multicenter study paper. Similarly, the multicenter manuscript should be published ahead of associated sub-studies (Table 3). Sub-studies should be planned ahead of time to ensure they address novel objectives and report outcomes that do not completely overlap with those reported in the main study.
Simulation research pearl
Educational interventions developed for simulation-based educational studies can be disseminated through publication to facilitate implementation by educators (e.g., MedEdPORTAL) (Table 1).
Although embargo and copyright rules from journals may prevent disseminating findings prior to publication, the research team should carefully strategize how to best disseminate the knowledge through traditional media (e.g., television, newspaper), social media, webinars, podcasts, and/or blogs once results have been published. “Free Open Access Meducation” (FOAM) in the form of blogs, podcasts, and associated social media strategies has seen increasing uptake in the healthcare community as a means of disseminating new research to the masses [60,61,62].
Simulation research pearl
Engaging the editors of simulation-focused websites (e.g., www.simulationpodcast.com, www.debrief2learn.org) may facilitate dissemination in the form of blogs, podcasts, or online article reviews. These sites discuss new research and engage the simulation community in online discussion of recently published work.
Translation to practice
Dissemination of research is incomplete without engaging relevant stakeholders in efforts to translate results to practice. Collaboration with existing organizations that share similar goals can enhance dissemination (Table 3). For example, after completing a multicenter study on scripted debriefing, our research team collaborated with the American Heart Association to integrate a scripted debriefing tool into new instructor training materials for advanced life support courses. Similarly, procedural skills studies conducted by INSPIRE investigators have spawned a collaboration with Open Pediatrics, with the goal of producing procedural skills training kits for residents and practicing physicians. Lastly, sites within simulation networks provide an established and receptive dissemination conduit for uptake of new research findings, serving as a powerful knowledge translation vehicle for completed multicenter studies.
Simulation research pearl
Begin with the end in mind. Know ahead of time who your key stakeholders are and engage them in defining the research question, study design, and protocol development. This will help to maximize the likelihood of uptake and dissemination once the study is completed.
The conduct of high-quality multicenter simulation-based research is challenging. Success may be enhanced by following a stepwise approach including four distinct phases of multicenter research: Planning, Project Development, Study Execution, and Dissemination. While a stepwise approach offers structure to formalize the research process, multicenter collaboration is often not completely linear in nature. Deliberate, thoughtful, and collaborative decision-making occasionally requires cycling back to revisit a step or two in the research process. These mini feedback loops facilitate the maintenance of a shared mental model amongst investigators, which is a critical element of successful collaborative research. We hope this guidance will encourage investigators to conduct multicenter research, and in doing so, advance the rigor and quality of SBR.
ICMJE: International Committee of Medical Journal Editors
INSPIRE: International Network for Simulation-based Pediatric Innovation, Research and Education
IRB: Institutional review board
MOC: Manuscript oversight committee
Cheng A, Auerbach M, Hunt EA, et al. Designing and conducting simulation-based research. Pediatrics. 2014;133(6):1091–101.
Cook DA, Hatala R, Brydges R, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA. 2011;306(9):978–88.
Chung KC, Song JW, Group WS. A guide to organizing a multicenter clinical trial. Plast Reconstr Surg. 2010;126(2):515–23.
Schwartz A, Young R, Hicks PJ, Appd LF. Medical education practice-based research networks: facilitating collaborative research. Med Teach. 2016;38(1):64–74.
Payne S, Seymour J, Molassiotis A, et al. Benefits and challenges of collaborative research: lessons from supportive and palliative care. BMJ Support Palliat Care. 2011;1(1):5–11.
O'Sullivan PS, Stoddard HA, Kalishman S. Collaborative research in medical education: a discussion of theory and practice. Med Educ. 2010;44(12):1175–84.
Huggett KN, Gusic ME, Greenberg R, Ketterer JM. Twelve tips for conducting collaborative research in medical education. Med Teach. 2011;33(9):713–8.
Cheng A, Hunt EA, Donoghue A, et al. EXPRESS--Examining Pediatric Resuscitation Education Using Simulation and Scripting. The birth of an international pediatric simulation research collaborative--from concept to reality. Simul Healthc. 2011;6(1):34–41.
Hunt EA, Duval-Arnould J, Chime NO, et al. Building consensus for the future of paediatric simulation: a novel ‘KJ Reverse-Merlin’ methodology. BMJ Simul Technol Enhanced Learn. 2016:bmjstel-2015-000072.
Lattuca LR, Creamer EG. Learning as professional practice. New Dir Teach Learn. 2005;102:2–11.
Cheng A, Brown LL, Duff JP, et al. Improving cardiopulmonary resuscitation with a CPR feedback device and refresher simulations (CPR CARES Study): a randomized clinical trial. JAMA Pediatr. 2015;169(2):137–44.
Cheng A, Hunt EA, Donoghue A, et al. Examining pediatric resuscitation education using simulation and scripted debriefing: a multicenter randomized trial. JAMA Pediatr. 2013;167(6):528–36.
Cheng A, Hunt EA, Grant D, et al. Variability in quality of chest compressions provided during simulated cardiac arrest across nine pediatric institutions. Resuscitation. 2015;97:13–9.
Cheng A, Overly F, Kessler D, et al. Perception of CPR quality: influence of CPR feedback, just-in-time CPR training and provider role. Resuscitation. 2015;87:44–50.
Donoghue A, Ventre K, Boulet J, et al. Design, implementation, and psychometric analysis of a scoring instrument for simulated pediatric resuscitation: a report from the EXPRESS pediatric investigators. Simul Healthc. 2011;6(2):71–7.
Duff JP, Cheng A, Bahry LM, et al. Development and validation of a multiple choice examination assessing cognitive and behavioural knowledge of pediatric resuscitation: a report from the EXPRESS pediatric research collaborative. Resuscitation. 2013;84(3):365–8.
Auerbach M, Fein DM, Chang TP, et al. The correlation of workplace simulation-based assessments with interns' infant lumbar puncture success: a prospective, multicenter, observational study. Simul Healthc. 2016;11(2):126–33.
Auerbach M, Kessler DO, Patterson M. The use of in situ simulation to detect latent safety threats in paediatrics: a cross-sectional survey. BMJ Simul Technol Enhanced Learn. 2015:bmjstel-2015-000037.
Chang TP, Kessler D, McAninch B, et al. Script concordance testing: assessing residents' clinical decision-making skills for infant lumbar punctures. Acad Med. 2014;89(1):128–35.
Kessler DO, Arteaga G, Ching K, et al. Interns' success with clinical procedures in infants after simulation training. Pediatrics. 2013;131(3):e811–20.
Kessler DO, Auerbach M, Pusic M, Tunik MG, Foltin JC. A randomized trial of simulation-based deliberate practice for infant lumbar puncture skills. Simul Healthc. 2011;6(4):197–203.
Kessler DO, Walsh B, Whitfill T, et al. Disparities in adherence to pediatric sepsis guidelines across a spectrum of emergency departments: a multicenter, cross-sectional observational in situ simulation study. J Emerg Med. 2016;50(3):403–15. e3.
Kamdar G, Kessler DO, Tilt L, et al. Qualitative evaluation of just-in-time simulation-based learning: the learners' perspective. Simul Healthc. 2013;8(1):43–8.
Auerbach M, Whitfill T, Gawel M, et al. Differences in the quality of pediatric resuscitative care across a spectrum of emergency departments. JAMA Pediatr. 2016;170(10):987–94.
Kessler D, Pusic M, Chang TP, et al. Impact of just-in-time and just-in-place simulation on intern success with infant lumbar puncture. Pediatrics. 2015;135(5):e1237–46.
Aitken LM, Pelter MM, Carlson B, et al. Effective strategies for implementing a multicenter international clinical trial. J Nurs Scholarsh. 2008;40(2):101–8.
Irving SY, Curley MA. Challenges to conducting multicenter clinical research: ten points to consider. AACN Adv Crit Care. 2008;19(2):164–9.
Issenberg SB, Ringsted C, Ostergaard D, Dieckmann P. Setting a research agenda for simulation-based healthcare education: a synthesis of the outcome from an Utstein style meeting. Simul Healthc. 2011;6(3):155–67.
Hulley SB, Cummings SR, Browner WS. Designing clinical research: an epidemiologic approach. 2nd ed. Philadelphia: Lippincott Williams and Wilkins; 2001.
McGaghie WC, Draycott TJ, Dunn WF, Lopez CM, Stefanidis D. Evaluating the impact of simulation on translational patient outcomes. Simul Healthc. 2011;6(Suppl):S42–7.
Brydges R, Hatala R, Zendejas B, Erwin PJ, Cook DA. Linking simulation-based educational assessments and patient-related outcomes: a systematic review and meta-analysis. Acad Med. 2015;90(2):246–56.
Cheng A, Lang TR, Starr SR, Pusic M, Cook DA. Technology-enhanced simulation and pediatric education: a meta-analysis. Pediatrics. 2014;133(5):e1313–23.
Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: theory and application. Am J Med. 2006;119(2):166. e7-16.
Cook DA, Hatala R. Validation of educational assessments: a primer for simulation and beyond. Adv Simul. 2016;1:31.
Cook DA, Zendejas B, Hamstra SJ, Hatala R, Brydges R. What counts as validity evidence? Examples and prevalence in a systematic review of simulation-based assessment. Adv Health Sci Educ Theory Pract. 2014;19(2):233–50.
Lancaster GA, Dodd S, Williamson PR. Design and analysis of pilot studies: recommendations for good practice. J Eval Clin Pract. 2004;10(2):307–12.
Minnick A, Kleinpell RM, Micek W, Dudley D. The management of a multisite study. J Prof Nurs. 1996;12(1):7–15.
Boulos MN, Maramba I, Wheeler S. Wikis, blogs and podcasts: a new generation of Web-based tools for virtual collaborative clinical practice and education. BMC Med Educ. 2006;6:41.
Bowman A, Wyman JF, Peters J. The operations manual: a mechanism for improving the research process. Nurs Res. 2002;51(2):134–8.
Cheng A, Nadkarni V, Hunt EA, et al. A multifunctional online research portal for facilitation of simulation-based research: a report from the EXPRESS pediatric simulation research collaborative. Simul Healthc. 2011;6(4):239–43.
Avidan A, Weissman C, Sprung CL. An internet web site as a data collection platform for multicenter research. Anesth Analg. 2005;100(2):506–11.
Schmitt CP, Burchinal M. Data management practices for collaborative research. Front Psychiatry. 2011;2:47.
Adler MD, Overly FL, Nadkarni VM, et al. An approach to confederate training within the context of simulation-based research. Simul Healthc. 2016;11(5):357–62.
Cheng A, Kessler D, Mackinnon R, et al. Reporting guidelines for health care simulation research. Clin Simul Nurs. 2016;12(8):iii–xiii.
Cheng A, Kessler D, Mackinnon R, et al. Reporting guidelines for health care simulation research: extensions to the CONSORT and STROBE statements. BMJ Simul Technol Enhanced Learn. 2016:bmjstel-2016-000124.
Cheng A, Kessler D, Mackinnon R, et al. Reporting guidelines for health care simulation research: extensions to the CONSORT and STROBE statements. Adv Simul. 2016;1:25.
Cheng A, Kessler D, Mackinnon R, et al. Reporting guidelines for health care simulation research: extensions to the CONSORT and STROBE statements. Simul Healthc. 2016;11(4):238–48.
Chung KC, Shauver MJ. Fundamental principles of writing a successful grant proposal. J Hand Surg [Am]. 2008;33(4):566–72.
Cook DA, West CP. Perspective: reconsidering the focus on "outcomes research" in medical education: a cautionary note. Acad Med. 2013;88(2):162–7.
Marsolo K. Approaches to facilitate institutional review board approval of multicenter research studies. Med Care. 2012;50(Suppl):S77–81.
Greene SM, Geiger AM. A review finds that multicenter studies face substantial challenges but strategies exist to achieve Institutional Review Board approval. J Clin Epidemiol. 2006;59(8):784–90.
Winkler SJ, Witte E, Bierer BE. The Harvard Catalyst Common Reciprocal IRB Reliance Agreement: an innovative approach to multisite IRB review and oversight. Clin Transl Sci. 2015;8(1):57–66.
Cheng A, Raemer DB. Is clinical trial registration for simulation-based research necessary? Simul Healthc. 2014;9(6):350–2.
Greene SM, Hart G, Wagner EH. Measuring and improving performance in multicenter research consortia. J Natl Cancer Inst Monogr. 2005;35:26–32.
Knatterud GL, Rockhold FW, George SL, et al. Guidelines for quality assurance in multicenter trials: a position paper. Control Clin Trials. 1998;19(5):477–93.
Feldman M, Lazzara EH, Vanderbilt AA, DiazGranados D. Rater training to support high-stakes simulation-based assessments. J Contin Educ Health Prof. 2012;32(4):279–86.
Eppich W, Nannicelli AP, Seivert NP, et al. A rater training protocol to assess team performance. J Contin Educ Health Prof. 2015;35(2):83–90.
Braga MS, Tyler MD, Rhoads JM, et al. Effect of just-in-time simulation training on provider performance and patient outcomes for clinical procedures: a systematic review. BMJ Simul Technol Enhanced Learn. 2015:bmjstel-2015-000058.
Gerard JM, Kessler DO, Braun C, Mehta R, Scalzo AJ, Auerbach M. Validation of global rating scale and checklist instruments for the infant lumbar puncture procedure. Simul Healthc. 2013;8(3):148–54.
Thoma B, Poitras J, Penciner R, Sherbino J, Holroyd BR, Woods RA. Administration and leadership competencies: establishment of a national consensus for emergency medicine. CJEM. 2015;17(2):107–14.
Thoma B, Sanders JL, Lin M, Paterson QS, Steeg J, Chan TM. The social media index: measuring the impact of emergency medicine and critical care websites. West J Emerg Med. 2015;16(2):242–9.
Trueger NS, Thoma B, Hsu CH, Sullivan D, Peters L, Lin M. The altmetric score: a new measure for article-level dissemination and impact. Ann Emerg Med. 2015;66(5):549–53.
Cheng A, Rodgers DL, van der Jagt E, Eppich W, O'Donnell J. Evolution of the Pediatric Advanced Life Support course: enhanced learning with a new debriefing tool and Web-based module for Pediatric Advanced Life Support instructors. Pediatr Crit Care Med. 2012;13(5):589–95.
We would like to thank our colleagues and collaborators within the INSPIRE network whose dedication and commitment to multicenter, simulation-based research has informed this manuscript. The INSPIRE network receives infrastructure support from the Society for Simulation in Healthcare, the International Pediatric Simulation Society, RBaby Foundation, and B Line Medical. The primary investigator (AC) would like to acknowledge funding from the Alberta Children’s Hospital Research Institute, Alberta Children’s Hospital Foundation, and the Department of Pediatrics, Cumming School of Medicine to support the KidSIM-ASPIRE Simulation Research Program and his appointment as a Clinician Scientist, Alberta Children’s Hospital Research Institute.
AC, DK, and MA contributed to the intellectual concept and design of the manuscript, drafted the initial manuscript, revised the manuscript, and gave final approval of the version to be submitted for publication. RM, TPC, VN, EAH, JDA, YL, and MP contributed to the intellectual concept, edited and critically revised the manuscript for important intellectual content, and gave final approval of the version to be submitted for publication. All authors agree to be accountable for all aspects of the work.
The authors declare that they have no competing interests.
Cheng, A., Kessler, D., Mackinnon, R. et al. Conducting multicenter research in healthcare simulation: Lessons learned from the INSPIRE network. Adv Simul 2, 6 (2017). https://doi.org/10.1186/s41077-017-0039-0
- Knowledge Translation
- Advanced Life Support
- Primary Investigator
- Feasibility Testing
- Site Investigator