Open Access

Reporting guidelines for health care simulation research: extensions to the CONSORT and STROBE statements

  • Adam Cheng1,
  • David Kessler2,
  • Ralph Mackinnon3, 4,
  • Todd P. Chang5,
  • Vinay M. Nadkarni6,
  • Elizabeth A. Hunt7,
  • Jordan Duval-Arnould7,
  • Yiqun Lin8,
  • David A. Cook9,
  • Martin Pusic10,
  • Joshua Hui11,
  • David Moher12,
  • Matthias Egger13,
  • Marc Auerbach14 and
  • for the International Network for Simulation-based Pediatric Innovation, Research, and Education (INSPIRE) Reporting Guidelines Investigators
Advances in Simulation 2016;1:25

DOI: 10.1186/s41077-016-0025-y

Received: 23 March 2016

Accepted: 8 July 2016

Published: 25 July 2016

Abstract

Background

Simulation-based research (SBR) is rapidly expanding, but the quality of reporting needs improvement. For a reader to critically assess a study, the elements of the study need to be clearly reported. Our objective was to develop reporting guidelines for SBR by creating extensions to the Consolidated Standards of Reporting Trials (CONSORT) and Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statements.

Methods

An iterative multistep consensus-building process was used on the basis of the recommended steps for developing reporting guidelines. The consensus process involved the following: (1) developing a steering committee, (2) defining the scope of the reporting guidelines, (3) identifying a consensus panel, (4) generating a list of items for discussion via online premeeting survey, (5) conducting a consensus meeting, and (6) drafting reporting guidelines with an explanation and elaboration document.

Results

The following 11 extensions were recommended for CONSORT: item 1 (title/abstract), item 2 (background), item 5 (interventions), item 6 (outcomes), item 11 (blinding), item 12 (statistical methods), item 15 (baseline data), item 17 (outcomes/estimation), item 20 (limitations), item 21 (generalizability), and item 25 (funding). The following 10 extensions were recommended for STROBE: item 1 (title/abstract), item 2 (background/rationale), item 7 (variables), item 8 (data sources/measurement), item 12 (statistical methods), item 14 (descriptive data), item 16 (main results), item 19 (limitations), item 21 (generalizability), and item 22 (funding). An elaboration document was created to provide examples and explanation for each extension.

Conclusions

We have developed extensions for the CONSORT and STROBE Statements that can help improve the quality of reporting for SBR (Sim Healthcare 00:00-00, 2016).

Keywords

Simulation, Research, Reporting guidelines, Extension, Health care

Background

Simulation has seen growing use in health care as a “tool, device, and/or environment (that) mimics an aspect of clinical care” [1] to improve health care provider performance, health care processes, and ultimately patient outcomes [1–5]. The use of simulation in health care has been accompanied by an expanding body of simulation-based research (SBR) addressing both educational and clinical issues [6–15]. Broadly speaking, SBR can be broken down into 2 categories: (1) research addressing the efficacy of simulation as a training methodology (ie, simulation-based education as the subject of research) and (2) research using simulation as an investigative methodology (ie, simulation as the environment for research) [16, 17]. Many features of SBR overlap with traditional clinical or educational research. However, the use of simulation in research introduces a unique set of features that must be considered when designing the methodology and reported when publishing the study [16–19].

As has been shown in other fields of medicine [20], the quality of reporting in health professions education research is inconsistent and sometimes poor [1, 11, 21–23]. Systematic reviews in medical education have quantitatively documented missing elements in the abstracts and main texts of published reports, with particular deficits in the reporting of study design, definitions of independent and dependent variables, and study limitations [21–23]. In research specific to simulation for health care professions education, a systematic review noted that many studies fail to “clearly describe the context, instructional design, or outcomes” [1]. Another study found that only 3 % of studies incorporating debriefing in simulation education reported all the essential characteristics of debriefing [11]. Failure to adequately describe the key elements of a research study impairs the efforts of editors, reviewers, and readers to critically appraise strengths and weaknesses [24, 25] or to apply and replicate findings [26]. As such, incomplete reporting represents a limiting factor in the advancement of the field of simulation in health care.

Recognition of this problem in clinical research has led to the development of a growing number of reporting guidelines in medicine and other fields, including the Consolidated Standards of Reporting Trials (CONSORT) Statement for randomized trials [27–30], the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement for observational studies [31, 32], and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Statement [33–35], among more than 250 others [36]. Transparent reporting of research allows readers to clearly identify and understand “what was planned, what was done, what was found, and what conclusions were drawn” [31]. In addition to these statements, experts have encouraged [37] and published extensions to existing statements that focus on specific methodological approaches [38, 39] or clinical fields [40, 41]. In this study, we aimed to develop reporting guidelines for SBR by creating extensions to the CONSORT Statement and the STROBE Statement specific to the use of simulation in health care research. These reporting guidelines are meant to be used by authors submitting manuscripts involving SBR and to assist editors and journal reviewers when assessing the suitability of simulation-based studies for publication.

Methods

The study protocol was reviewed by Yale University Biomedical Institutional Review Board and was granted exempt status. We conducted a multistep consensus process on the basis of previously described steps for developing health research reporting guidelines [42]. These steps involved the following: (1) developing a steering committee, (2) defining the scope of the reporting guidelines, (3) identifying a consensus panel, (4) generating a list of items for discussion, (5) conducting a consensus meeting, and (6) drafting reporting guidelines and an explanation and elaboration document.

Development of the steering committee

A steering committee was formed consisting of 12 members with expertise in simulation-based education and research, medical education research, study design, statistics, epidemiology, and clinical medicine. The steering committee defined the scope of the reporting guidelines, identified participants for the consensus process, generated a premeeting survey, planned and conducted the consensus meeting, and ultimately drafted and refined the final version of the reporting guidelines and the explanation and elaboration document.

Defining the scope of the reporting guidelines

To clarify the scope of the reporting guideline extensions, we defined simulation as encompassing a diverse range of products including computer-based virtual reality simulators, high-fidelity and static mannequins, plastic models and task trainers, live animals, inert animal products, human cadavers, and standardized or simulated patients (ie, individuals trained to portray a patient). Our definition excluded research using computational simulation and mathematical modeling, because the guidelines were developed for research using human participants, either as learners or health care providers [1]. The steering committee determined to create reporting guidelines encompassing the following 2 categories of SBR: (1) studies evaluating simulation for educational use and (2) studies using simulation as investigative methodology [16]. We identified the CONSORT [28] and STROBE [31, 32] Statements as reflecting the current reporting standards in health care research and aimed to develop extensions of these 2 statements for quantitative SBR. The CONSORT Statement and extensions were developed for randomized trials, and the STROBE Statement and extensions were developed for observational studies (cohort, case–control, and cross-sectional study designs). Our guideline extensions are not intended for qualitative research, mixed methods research, or validation studies.

Identification of consensus panel participants

The steering committee aimed to identify a consensus group with a broad range of expertise in SBR, including experience in conducting single-center and multicenter simulation-based studies; expertise in educational research, statistics, clinical epidemiology, and research methodology; and varying clinical backgrounds. We invited the editors-in-chief and editorial board members of the following 3 health care simulation journals: Simulation in Healthcare, BMJ Simulation and Technology-Enhanced Learning, and Clinical Simulation in Nursing, and editorial board members from the following 2 medical education journals: Medical Education and Advances in Health Sciences Education. In total, 60 expert participants were invited to complete the online survey.

Generating a list of items for discussion

Before the consensus meeting, we administered an online premeeting survey (www.surveymonkey.com) to the expert participants to identify items in the CONSORT and STROBE Statements that required an extension for SBR. The survey included all items from both the CONSORT and STROBE Statements and was pilot tested among steering committee members before being posted online. Participants were asked to provide suggested wording for the items they identified as requiring an extension. Participants were also given the option of suggesting new simulation-specific items for both the CONSORT and STROBE Statements. On the basis of methods previously used to develop extensions to the CONSORT Statement [40], we used a cutoff of endorsement by at least one third of respondents to identify high-priority items for discussion during the consensus meeting.

Consensus meeting

A 5-h consensus conference was conducted in January 2015 in New Orleans, during the annual International Network for Simulation-Based Pediatric Innovation, Research and Education (INSPIRE) meeting. The initial 60 consensus panel participants, as well as INSPIRE network members (ie, clinicians, researchers, educators, psychologists, statisticians, and epidemiologists), were invited to attend the consensus conference. The INSPIRE network is the world’s largest health care simulation research network, with a proven track record of conducting rigorous simulation-based studies in health care [43–50].

The results of the online survey were circulated to the members of the steering committee, who were each assigned to review specific items from the CONSORT and STROBE Statements on the basis of their expertise. The consensus meeting started with a brief didactic presentation reviewing the CONSORT and STROBE Statements, followed by a description of the study objectives and consensus process. In small groups, each steering committee member led a discussion with 4 or 5 individuals tasked with determining whether a simulation-specific extension was required for their assigned items and, if so, with recommending wording for the extension. Consensus panel participants were evenly distributed among small groups and specifically assigned to review items on the basis of their area of expertise. High-priority items were discussed at length, but all other checklist items were also discussed in the small groups.

After small group discussion, the recommended simulation-specific extensions for both the CONSORT and STROBE Statements were presented to the entire group of participants. Each proposed extension was discussed before recommended wording was established. Minutes from the small and large group discussions were used to inform the development of the explanation and elaboration document [42].

Drafting reporting guidelines

The proposed extensions were circulated for comment among all meeting participants and consensus panel participants who could not attend the meeting. The steering committee used the comments to further refine the extension items. To evaluate these items in practice, 4 members of the steering committee independently pilot tested both the CONSORT and STROBE Statements with simulation-specific extensions. They used 2 published SBR studies (ie, one for each type of SBR), while ensuring that 1 study was a randomized trial and the other an observational study. Feedback from pilot testing informed further revisions. The final reporting guidelines with extensions were circulated to the steering committee 1 last time to ensure the final product accurately represented discussion during and after the consensus conference. An explanation and elaboration document was developed by the steering committee to provide further detail for each item requiring a simulation-specific extension [42].

Results

Premeeting survey

There was a 75 % response rate for the survey, with 45 of the 60 invited participants completing the entire survey; an additional 12 participants (20 %) partially completed the survey. Of the 57 participants who responded to the survey, 17 were medical journal editors or editorial board members, 24 had advanced degrees (Masters, PhD), including 16 with advanced degrees in medical education or educational psychology, 6 were nurses, 1 was a psychologist, and 54 were physicians (representing anesthesiology, critical care, emergency medicine, pediatrics, and surgery). Of the 3 invited participants who did not respond to the survey, 2 were physicians and 1 was a scientist. The results of the survey are described in Additional file 1: Supplemental Digital Content 1.

Consensus meeting

In total, 35 consensus panel participants who completed the premeeting survey attended the consensus conference. An additional 30 attendees were INSPIRE network members. Of the 65 total attendees at the consensus conference, 12 were medical journal editors or editorial board members, 18 had advanced degrees (Masters, PhD), 4 were nurses, 1 was a psychologist, and 60 were physicians (representing anesthesiology, critical care, emergency medicine, pediatrics, and surgery).

The following 11 simulation-specific extensions were recommended for the CONSORT Statement: item 1 (title and abstract), item 2 (background), item 5 (interventions), item 6 (outcomes), item 11 (blinding), item 12 (statistical methods), item 15 (baseline data), item 17 (outcomes and estimation), item 20 (limitations), item 21 (generalizability), and item 25 (funding). Participants agreed on the importance of describing the rationale for and design of the simulation-based intervention. Because many simulation-based studies use assessment tools as an outcome measure, participants thought that it was important to report the unit of analysis and evidence supporting the validity and reliability of the assessment tool(s) when available. In the discussion section, participants thought that it was important to describe the limitations of SBR and the generalizability of the simulation-based outcomes to clinical outcomes (when applicable). Participants also agreed that it was important to identify the simulator brand used in the study and if conflicts of interest for intellectual property existed among investigators. The group did not feel that modifications to the CONSORT flow diagram were required for SBR. See Table 1 for CONSORT extensions for SBR.
Table 1

Simulation-Based Research Extensions for the CONSORT Statement

Item

Item number

CONSORT description (Randomized Controlled Trials)

Extension for SBR

Title and abstract

1

a. Identification as a randomized trial in the title

b. Structured summary of trial design, methods, results, and conclusions

In abstract or key terms, the MeSH or searchable keyword term must have the word “simulation” or “simulated.”

Introduction

 Background

2

a. Scientific background and explanation of rationale

b. Specific objectives or hypotheses

Clarify whether simulation is subject of research or investigational method for research.

Methods

 Trial design

3

a. Description of trial design (such as parallel, factorial) including allocation ratio

b. Important changes to methods after trial commencement (such as eligibility criteria), with reasons

 

 Participants

4

a. Eligibility criteria for participants

b. Settings and locations where the data were collected

 

 Interventions

5

The interventions for each group with sufficient details to allow for replication, including how and when they were actually administered.

Describe the theoretical and/or conceptual rationale for the design of each intervention.

Clearly describe all simulation-specific exposures, potential confounders, and effect modifiers.

 Outcomes

6

a. Completely defined prespecified primary and secondary outcome measures, including how and when they were assessed

b. Any changes to trial outcomes after the trial commenced, with reasons

In describing the details of methods of assessment, include (when applicable) the setting, instrument, simulator type, timing in relation to the intervention, along with any methods used to enhance the quality of measurements.

Provide evidence to support the validity and reliability of assessment tools in this context (if available).

 Sample size/study size

7

a. How sample size was determined

b. When applicable, explanation of any interim analyses and stopping guidelines

 

 Randomization: sequence generation

8

a. Method used to generate the random allocation sequence

b. Type of randomization and details of any restriction (such as blocking and block size)

 

 Randomization: allocation concealment mechanism

9

Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned

 

 Randomization: implementation

10

Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions

 

 Blinding (masking)

11

a. If done, who was blinded after assignments to interventions (e.g., participants, care providers, those assessing outcomes) and how

b. If relevant, description of the similarity of interventions

Describe strategies to decrease risk of bias, when blinding is not possible.

 Statistical methods

12

a. Statistical methods used to compare groups for primary and secondary outcomes

b. Methods for additional analyses, such as subgroup analyses and adjusted analyses

Clearly indicate the unit of analysis (e.g., individual, team, system), identify repeated measures on subjects, and describe how these issues were addressed.

Results

 Participant flow (a diagram is strongly recommended)

13

a. For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analyzed for the primary outcome

b. For each group, losses and exclusions after randomization, together with reasons

 

 Recruitment

14

a. Dates defining the periods of recruitment and follow-up

b. Why the trial ended or was stopped

 

 Baseline data

15

A table showing baseline demographic and clinical characteristics of each group

In describing characteristics of study participants, include their previous experience with simulation and other relevant features as related to the intervention(s).

 Numbers analyzed

16

For each group, number of participants (denominator) included in each analysis and whether analysis was by original assigned groups

 

 Outcomes and estimation

17

a. For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95 % confidence interval)

b. For binary outcomes, presentation of both absolute and relative effect sizes is recommended

For assessments involving >1 rater, interrater reliability should be reported.

 Ancillary analyses

18

Results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing prespecified from exploratory

 

 Adverse events

19

All important harms or unintended effects in each group (for specific guidance, see CONSORT for harms)

 

Discussion

 Limitations

20

Trial limitations, addressing sources of potential bias, imprecision, and, if relevant, multiplicity of analyses

Specifically discuss the limitations of SBR.

 Generalizability

21

Generalizability (external validity, applicability) of the trial findings

Describe generalizability of simulation-based outcomes to patient-based outcomes (if applicable).

 Interpretation

22

Interpretation consistent with results, balancing benefits and harms, and considering other relevant evidence

 

Other information

 Registration

23

Registration number and name of trial registry

 

 Protocol

24

Where the full trial protocol can be accessed, if available

 

 Funding

25

Sources of funding and other support (such as supply of drugs), role of funders

List simulator brand and if conflict of interest for intellectual property exists.

MeSH, Medical Subject Headings

The following 10 extensions were drafted for the STROBE Statement: item 1 (title and abstract), item 2 (background/rationale), item 7 (variables), item 8 (data sources/measurement), item 12 (statistical methods), item 14 (descriptive data), item 16 (main results), item 19 (limitations), item 21 (generalizability), and item 22 (funding). A similar emphasis was placed on the importance of describing all simulation-specific exposures, confounders, and effect modifiers, as was discussed for the CONSORT Statement. Other extensions for the STROBE Statement fell under categories similar to those proposed for the CONSORT Statement. See Table 2 for STROBE extensions for SBR.
Table 2

Simulation-Based Research Extensions for the STROBE Statement

Item

Item number

STROBE description (Observational Studies)

Extension for SBR

Title and abstract

1

a. Indicate the study’s design with a commonly used term in the title or the abstract.

b. Provide in the abstract an informative and balanced summary of what was done and what was found.

In abstract or key terms, the MeSH or searchable keyword term must have the word “simulation” or “simulated.”

Introduction

 Background/rationale

2

Explain the scientific background and rationale for the investigation being reported.

Clarify whether simulation is subject of research or investigational method for research.

 Objectives

3

State specific objectives, including any prespecified hypotheses.

 

Methods

 Study design

4

Present key elements of study design early in the paper.

 

 Setting

5

Describe the setting, locations, and relevant dates, including periods of recruitment, exposure, follow-up, and data collection.

 

 Participants

6

a. Cohort study: give the eligibility criteria and the sources and methods of selection of participants. Describe methods of follow-up.

Case–control study: give the eligibility criteria and the sources and methods of case ascertainment and control selection. Give the rationale for the choice of cases and controls.

Cross-sectional study: give the eligibility criteria and the sources and methods of selection of participants.

b. Cohort study: for matched studies, give matching criteria and number of exposed and unexposed.

Case–control study: for matched studies, give matching criteria and the number of controls per case.

 

 Variables

7

Clearly define all outcomes, exposures, predictors, potential confounders, and effect modifiers.

Give diagnostic criteria, if applicable.

Describe the theoretical and/or conceptual rationale for the design of the intervention/exposure.

Describe the intervention/exposure with sufficient detail to permit replication.

Clearly describe all simulation-specific exposures, potential confounders, and effect modifiers.

 Data sources/measurement

8

For each variable of interest, give sources of data and details of methods of assessment (measurement).

Describe comparability of assessment methods if there is >1 group.

In describing the details of methods of assessment, include (when applicable) the setting, instrument, simulator type, timing in relation to the intervention, along with any methods used to enhance the quality of measurements.

Provide evidence to support the validity and reliability of assessment tools in this context (if available).

 Bias

9

Describe any efforts to address potential sources of bias.

 

 Study size

10

Explain how the study size was arrived at.

 

 Quantitative variables

11

Explain how quantitative variables were handled in the analyses. If applicable, describe which groupings were chosen and why.

 

 Statistical methods

12

a. Describe all statistical methods, including those used to control for confounding.

b. Describe any methods used to examine subgroups and interactions.

c. Explain how missing data were addressed.

d. Cohort study: if applicable, explain how loss to follow-up was addressed.

Case–control study: if applicable, explain how matching of cases and controls was addressed.

Cross-sectional study: if applicable, describe analytical methods taking account of sampling strategy.

e. Describe any sensitivity analyses.

Clearly indicate the unit of analysis (e.g., individual, team, system), identify repeated measures on subjects, and describe how these issues were addressed.

Results

 Participants

13

a. Report the numbers of individuals at each stage of the study (e.g., numbers potentially eligible, examined for eligibility, confirmed eligible, included in the study, completing follow-up, and analyzed).

b. Give reasons for nonparticipation at each stage.

c. Consider use of a flow diagram.

 

 Descriptive data

14

a. Give characteristics of study participants (e.g., demographic, clinical, social) and information on exposures and potential confounders.

b. Indicate the number of participants with missing data for each variable of interest.

c. Cohort study: summarize follow-up time (e.g., average and total amount).

In describing characteristics of study participants, include their previous experience with simulation and other relevant features as related to the intervention(s).

 Outcome data

15

Cohort study: report numbers of outcome events or summary measures over time.

Case–control study: report numbers in each exposure category or summary measures of exposure.

Cross-sectional study: report numbers of outcome events or summary measures.

 

 Main results

16

a. Give unadjusted estimates and, if applicable, confounder-adjusted estimates and their precision (e.g., 95 % confidence intervals).

Make clear which confounders were adjusted for and why they were included.

b. Report category boundaries when continuous variables were categorized.

c. If relevant, consider translating estimates of relative risk into absolute risk for a meaningful time period.

For assessments involving >1 rater, interrater reliability should be reported.

 Other analyses

17

Report other analyses done (e.g., analyses of subgroups and interactions and sensitivity analyses).

 

Discussion

 Key results

18

Summarize key results with reference to study objectives.

 

 Limitations

19

Discuss limitations of the study, taking into account sources of potential bias or imprecision. Discuss both direction and magnitude of any potential bias.

Specifically discuss the limitations of SBR.

 Interpretation

20

Give a cautious overall interpretation of results considering objectives, limitations, multiplicity of analyses, results from similar studies, and other relevant evidence.

 

 Generalizability

21

Discuss the generalizability (external validity) of the study results.

Describe generalizability of simulation-based outcomes to patient-based outcomes (if applicable).

Other information

 Funding

22

Give the source of funding and the role of the funders for the present study and, if applicable, for the original study on which the present article is based.

List simulator brand and if conflict of interest for intellectual property exists.

MeSH, Medical Subject Headings

For both the CONSORT and STROBE Statements, extensive discussion occurred in the consensus meeting related to the educational intervention and to controlling for simulation-specific variables that pose potential threats to the internal validity of simulation studies. A group of consensus panel participants with expertise in simulation-based education and instructional design used their knowledge of educational theory, existing educational research guidelines [51], and systematic reviews of SBR [1, 5–8, 11] to address this issue (Table 3). Table 3 offers an additional checklist of key elements specific to SBR, linked to item 5 (interventions) of the CONSORT Statement and item 7 (variables) of the STROBE Statement, that should be reported for all simulation studies, for both the intervention and control groups (if applicable).
Table 3

Key Elements to Report for Simulation-Based Research

Elements (a)

Subelements (b)

Descriptor

Participant orientation

Orientation to the simulator

Describe how participants were oriented to the simulator (e.g., method, content, duration).

 

Orientation to the environment

Describe how participants were oriented to the environment (e.g., method, content, duration).

Simulator type [16]

Simulator make and model

Describe the simulator make and model.

 

Simulator functionality

Describe functionality and/or technical specifications that are relevant to the research question. Describe modifications, if any. Describe limitations of the simulator.

Simulation environment [16]

Location

Describe where the simulation was conducted (e.g., in situ clinical environment, simulation center, etc.).

 

Equipment

Describe the nature of the equipment available (e.g., type, amount, location, size, etc.).

 

External stimuli

Describe any external stimuli (e.g., background noise).

Simulation event/scenario [16]

Event description

Describe if the event was programmed and/or scripted (e.g., orientation to event, scenario progression, triggers). If a scenario was used, the scenario script should be provided as an appendix.

 

Learning objectives

List the learning objectives and describe how they were incorporated into the event.

 

Group vs. individual practice

Describe if the simulation was conducted in groups or as individuals.

 

Use of adjuncts

Describe if adjuncts (e.g., moulage, media, props) were used.

 

Facilitator/operator characteristics

Describe experience (e.g., clinical, educational), training (e.g., fellowship, courses), profession.

 

Pilot testing

Describe if pilot testing was conducted (e.g., number, duration, frequency).

 

Actors/confederates/standardized/simulated patients [16]

Describe experience (e.g., clinical, educational), training (e.g., fellowship, courses), profession, sex. Describe various roles, including training, scripting, orientation, and compliance with roles.

Instructional design (for educational interventions) [53] or exposure (for simulation as investigative methodology) [16]

Duration

Describe the duration of the educational intervention. If the intervention involves more than one segment, describe the duration of each segment.

Timing

Describe the timing of the educational intervention relative to the time when assessment/data collection occurs (e.g., just-in-time training).

Frequency/repetitions

Describe how many repetitions were permitted and/or the frequency of training (e.g., deliberate practice).

Clinical variation

Describe the variation in clinical context (e.g., multiple different patient scenarios).

Standards/assessment

Describe predefined standards for participant performance (e.g., mastery learning) and how these standards were established.

Adaptability of intervention

Describe how the training was responsive to individual learner needs (e.g., individualized learning).

Range of difficulty

Describe the variation in difficulty or complexity of the task.

Nonsimulation interventions and adjuncts

Describe all other nonsimulation interventions (e.g., lecture, small group discussion) or educational adjuncts (e.g., educational video), how they were used, and when they were used relative to the simulation intervention.

Integration

Describe how the intervention was integrated into curriculum.

Feedback and/or debriefing [11]

Source

Describe the source of feedback (e.g., computer, simulator, facilitator).

Duration

Describe the amount of time spent.

Facilitator presence

Describe if a facilitator was present (yes/no), and if so, how many facilitators.

Facilitator characteristics

Describe experience (e.g., clinical, educational), training (e.g., fellowship, courses), profession, sex.

Content

Describe content (e.g., teamwork, clinical, technical skills, and/or inclusion of quantitative data, etc.).

Structure/method

Describe the method of debriefing/feedback and debriefing framework used (ie, phases).

Timing

Describe when the feedback and/or debriefing was conducted relative to the simulation event (e.g., terminal vs. concurrent).

Video

Describe if video was used (yes/no) and how it was used.

Scripting

Describe if a script was used (yes/no) and provide script details as an appendix.

(a) These elements may apply for the simulation intervention (e.g., randomized controlled trial or observational study with simulation as an educational intervention) or when simulation is the environment for research (e.g., randomized controlled trial or observational study using simulation as an investigative methodology). Elements should be described in sufficient detail to permit replication.

(b) Description is required only if applicable.

We modeled the explanation and elaboration document after similar documents published in conjunction with other reporting guidelines [28, 32]: for each item requiring a new extension, we provide a specific example coupled with the background and rationale for including that information. We encourage readers to refer to the explanation and elaboration document for further detail about the nature and type of recommended reporting for each new extension (see Additional file 2: Supplemental Digital Content 2).

Discussion

We have developed reporting guidelines for SBR by creating extensions to both the CONSORT [28] and STROBE [31] Statements. These new extensions were developed via a consensus-building process with multiple iterative steps involving an international group of experts with diverse backgrounds and expertise. By creating extensions to both the CONSORT and STROBE Statements that can be applied to studies in both categories of SBR, we have developed reporting guidelines that are applicable to most studies involving simulation in health care research. To further assist authors in reporting SBR studies, we have published an explanation and elaboration document as an appendix that provides specific examples and details for all the new simulation-specific extensions for both the CONSORT and STROBE Statements.

The CONSORT and STROBE Statements with accompanying SBR extensions are meant to serve as a guide to reporting. As with other CONSORT and STROBE Statements, the items are not meant to “prescribe the reporting in a rigid format,” but rather the “order and format for presenting information depend on author preferences, journal style, and the traditions of the research field.” [28, 31] We encourage authors to refer to the explanation and elaboration document that provides details regarding specific elements related to individual items that should be reported for SBR. The use of reporting guidelines can have positive effects on various health care simulation stakeholders, including funders of SBR and those applying for funding (ie, use as a template for grant applications), educators (ie, use as a training tool), and students (ie, use to develop protocols for coursework or research) [33]. The application of these reporting guidelines will help enhance quality of reporting for quantitative SBR and assist journal reviewers and editors when faced with assessing the strengths and weaknesses of simulation-based studies in health care [24, 52, 53]. We encourage journals publishing SBR to consider endorsing the simulation-specific extensions for the CONSORT and STROBE Statements and adding these to their “instructions for authors.”

Simulation-based research has several unique factors that prompted us to develop simulation-specific extensions for both the CONSORT and STROBE Statements. First, there are a wide variety of simulators and simulation modalities available for use in research [16]. This, coupled with a plethora of instructional design features in simulation-based educational research, makes describing the simulation intervention a critically important component of any educational study involving simulation (Table 3) [6, 8, 19]. Second, SBR provides an opportunity for the investigator to standardize the simulated environment and/or simulated patient condition. Standardization of the environment and patient condition allows the investigator to account for many of the potential threats to internal validity that are associated with simulation. Clear reporting of standardization strategies helps the reader understand how the independent variable was isolated (Table 3) [16]. Third, many simulation studies involve capturing outcomes from a variety of data sources (e.g., observation, video review, simulator data capture). When assessment instruments are used (e.g., expert raters assessing performance), it is imperative to discuss the psychometric properties of these instruments [5]. Existing guidelines fall short in this regard, and these new guidelines help address this issue. Lastly, simulation-based studies assessing outcomes in the simulated environment only (e.g., clinical performance) should attempt to provide evidence to support how the findings in the simulated environment translate to a valid representation of performance in the real clinical environment [3]. By doing so, authors help convey the relevance and importance of their findings.

Limitations

Our consensus process has several limitations. Although we had a 75 % response rate for our survey, an additional 20 % of participants only partially completed the survey. This may have introduced selection bias, although the survey represented only 1 step in our consensus-building process. We included a wide variety of experts in our consensus meeting, but many of them had a pediatric clinical background. We minimized this potential bias by ensuring that each breakout group had at least 1 expert participant with a background outside of pediatrics. Furthermore, the principles of SBR are common across specialties and professions, and INSPIRE network members represent researchers who are recognized internationally as leaders in SBR. We based our reporting guidelines on the CONSORT and STROBE guidelines developed by clinical researchers. Other guidelines could have been used as a starting point, such as the American Educational Research Association standards developed in 2006 [54]. Our logic was to start with reporting guidelines that were applicable to all types of research, thus providing us more flexibility in generating extensions for both types of SBR. Cross-checking against the American Educational Research Association guideline did not reveal areas that we might have missed. Although we tried to develop reporting guidelines for all types of SBR, we recognize that some specific types of research may require new items or different extensions. For example, studies designed to evaluate the validity of simulation-based assessments vary in their reporting requirements. The Standards for Reporting of Diagnostic Accuracy Statement addresses these points [55], and a recent review operationalized these standards and applied them to SBR [56]. Other reporting guidelines that might be amenable to simulation-specific extensions include the Consolidated Criteria for Reporting Qualitative Research [57] and the Standards for Quality Improvement Reporting Excellence [58] guidelines for reporting quality improvement studies. As the field of SBR grows, the simulation-specific extensions for the CONSORT and STROBE Statements may need to be revised or refined. We encourage authors, reviewers, and editors to visit our Web site (http://inspiresim.com/simreporting/) and provide feedback that will be used to inform subsequent revisions to these reporting guidelines.

Conclusions

The unique features of SBR highlight the importance of clear and concise reporting that helps readers understand how simulation was used in the research. Poor and inconsistent reporting makes it difficult for readers to interpret results and replicate interventions, and hence makes research less likely to inform change that will positively influence patient outcomes. Standardized reporting guidelines will serve as a guide for authors wishing to submit manuscripts for publication and, in doing so, will draw attention to the important elements of SBR and ultimately improve the quality of simulation studies conducted in the future.

Declarations

Acknowledgments

We would like to thank the Society for Simulation in Healthcare that provided funding to support the consensus meeting. The authors thank and acknowledge the contributions of the following individuals, comprising the INSPIRE Reporting Guidelines Investigators, who participated in the consensus-building process by either completing the premeeting survey, attending the consensus meeting, or both: Dylan Bould, MBChB, MRCP, FRCA, Med, University of Ottawa; Ryan Brydges, PhD, University of Toronto; Michael Devita, MD, FCCM, FACP, Harlem Hospital Center; Jonathan Duff, MD, MEd, University of Alberta; Sandeep Gangadharan, MD, Hofstra University School of Medicine; Sharon Griswold-Theodorson, MD, MPH, Drexel University College of Medicine; Pam Jeffries, PhD, RN, FAAN, ANEF, George Washington University; Lindsay Johnston, MD, Yale University School of Medicine; Suzan Kardong-Edgren, PhD, RN, ANEF, CHSE, Robert Morris University; Arielle Levy, MD, MEd, University of Montreal; Lori Lioce, DNP, FNP-BC, CHSE, FAANP, The University of Alabama in Huntsville; Marco Luchetti, MD, MSc, A. Manzoni General Hospital; Tensing Maa, MD, Ohio State University College of Medicine; William McGaghie, PhD, Northwestern University Feinberg School of Medicine; Taylor Sawyer, DO, MEd, University of Washington School of Medicine; Dimitrios Stefanidis, MD, PhD, FACS, Carolinas HealthCare System; Kathleen Ventre, MD, Children’s Hospital Colorado; Barbara Walsh, MD, University of Massachusetts School of Medicine; Mark Adler, MD, Feinberg School of Medicine, Northwestern University; Linda Brown, MD, MSCE, Alpert Medical School of Brown University; Aaron Calhoun, MD, University of Louisville; Aaron Donoghue, MD, MSCE, The Children’s Hospital of Philadelphia; Tim Draycott, MD, FRCOG, Southmead Hospital; Walter Eppich, MD, MEd, Feinberg School of Medicine, Northwestern University; Marcie Gawel, MSN, BSN, MS, Yale University; Stefan Gisin, MD, University Hospital Basel; Lou Halamek, MD, Stanford University; Rose Hatala, MD, MSc, University of British Columbia; Kim Leighton, PhD, RN, ANEF, DeVry Medical International’s Institute for Research and Clinical Strategy; Debra Nestel, PhD, Monash University; Mary Patterson, MD, MEd, Cincinnati Children’s Hospital; Jennifer Reid, MD, University of Washington School of Medicine; Elizabeth Sinz, MD, FCCM, Penn State University College of Medicine; G. 
Ulufer Sivrikaya, MD, Antalya Training and Research Hospital; Kimberly Stone, MD, MS, MA, University of Washington School of Medicine; Anne Marie Monachino, MSN, RN, CPN, Children’s Hospital of Philadelphia; Michaela Kolbe, PhD, University Hospital Zurich; Vincent Grant, MD, FRCPC, University of Calgary; Jack Boulet, PhD, Foundation for Advancement of International Medical Education and Research; David Gaba, MD, Stanford University School of Medicine; Peter Dieckmann, PhD, Dipl-Psych, Danish Institute for Medical Simulation; Jeffrey Groom, PhD, CRNA, Florida International University; Chris Kennedy, MD, University of Missouri Kansas City School of Medicine; Ralf Krage, MD, DEAA, VU University Medical Center; Leah Mallory, MD, The Barbara Bush Children’s Hospital at Maine Medical Center; Akira Nishisaki, MD, MSCE, The Children’s Hospital of Philadelphia; Denis Oriot, MD, PhD, University Hospital of Poitiers; Christine Park, MD, Feinberg School of Medicine, Northwestern University; Marcus Rall, MD, InPASS Institute for Patient Safety and Teamtraining; Nick Sevdalis, PhD, King’s College London; Nancy Tofil, MD, MEd, University of Alabama at Birmingham; Debra Weiner, MD, PhD, Boston Children’s Hospital; John Zhong, MD, University of Texas Southwestern Medical Center; Donna Moro-Sutherland, MD, Baylor College of Medicine; Dalit Eyal, DO, St. Christopher’s Hospital for Children; Sujatha Thyagarajan, DCH, FRCPCH, PediSTARS India; Barbara Ferdman, MD, University of Rochester Medical Center; Grace Arteaga, MD, FAAP, Mayo Clinic (Rochester); Tonya Thompson, MD, MA, The University of Arkansas for Medical Sciences; Kim Rutherford, MD, St. Christopher’s Hospital for Children; Frank Overly, MD, Alpert Medical School of Brown University; Jim Gerard, MD, Saint Louis University School of Medicine; Takanari Ikeyama, MD, Aichi Children’s Health and Medical Center; Angela Wratney, MD, MHSc, Children’s National Medical Center; Travis Whitfill, MPH, Yale University School of Medicine; Nnenna Chime, MD, MPH, Albert Einstein College of Medicine; John Rice, PhD(c), US Department of the Navy (retired); Tobias Everett, MBChB, FRCA, The Hospital for Sick Children; Wendy Van Ittersum, MD, Akron Children’s Hospital; Daniel Scherzer, MD, Nationwide Children’s Hospital; Elsa Vazquez Melendez, MD, FAAP, FACP, University of Illinois College of Medicine at Peoria; Chris Kennedy, MD, University of Missouri Kansas School of Medicine; Waseem Ostwani, MD, University of Michigan Health System; Zia Bismilla, MD, MEd, The Hospital for Sick Children; Pavan Zaveri, MD, MEd, Children’s National Health System; Anthony Scalzo, MD, FACMT, FAAP, FAACT, Saint Louis University School of Medicine; Daniel Lemke, MD, Baylor College of Medicine; Cara Doughty, MD, MEd, Baylor College of Medicine; Modupe Awonuga, MD, MPH, MRCP(UK), FRCPCH, FAAP, Michigan State University; Karambir Singh, MD, Johns Hopkins University School of Medicine; and Melinda Fiedor-Hamilton, MD, MSc, Children’s Hospital of Pittsburgh.

This work was supported by the Laerdal Foundation for Acute Medicine, which has previously provided infrastructure support for the INSPIRE network.

A.C. (study design, writing, editing, and reviewing of manuscript) is supported by KidSIM-ASPIRE Simulation Infrastructure Grant, Alberta Children’s Hospital Foundation, Alberta Children’s Hospital Research Institute, and the Department of Pediatrics, University of Calgary; V.M.N. (study design, writing, editing, and reviewing of manuscript) is supported by Endowed Chair, Critical Care Medicine, Children’s Hospital of Philadelphia, and the following research grants: AHRQ RO3HS021583; Nihon Kohden America Research Grant; NIH/NHLBI RO1HL114484; NIH U01 HL107681; NIH/NHLBI 1U01HL094345-01; and NIH/NINDS 5R01HL058669-10.

D.M. (data interpretation, writing, editing, and reviewing of manuscript) is funded by a University Research Chair. Nick Sevdalis (collaborator) is funded by the National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care South London at King’s College Hospital NHS Foundation Trust.

Authors’ contributions

The principal investigator, AC, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. AC, DK, RM, TPC, VMN, EAH, JD-A, DC, MP, JH, and MA participated in study design, the consensus-building process, drafting and revising the manuscript, and approving the final version of the manuscript for publication. YL, DM, and ME contributed to interpretation of data, critically revising the manuscript for intellectual content, and approving the final version of the manuscript. All authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy of the work are appropriately resolved. All authors read and approved the final manuscript.

Competing interests

The views expressed are those of the author(s) and not necessarily those of the NHS, the National Institute for Health Research, or the Department of Health. Sevdalis delivers safety and team skills training on a consultancy basis to hospitals in the United Kingdom and internationally via the London Safety and Training Solutions Ltd.

The remaining authors declare no conflict of interest.

Open Access. Advances in Simulation is pleased to co-publish this article jointly with Simulation in Healthcare. The article is published here under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Authors’ Affiliations

(1)
Section of Emergency Medicine, Department of Pediatrics, Alberta Children’s Hospital, University of Calgary KidSim-ASPIRE Research Program
(2)
Columbia University College of Physicians and Surgeons
(3)
Royal Manchester Children’s Hospital, Central Manchester University Hospitals NHS Foundation Trust
(4)
Department of Learning, Informatics, Management and Ethics, Karolinska Institute
(5)
Children’s Hospital Los Angeles, University of Southern California
(6)
The Children’s Hospital of Philadelphia, University of Pennsylvania Perelman School of Medicine
(7)
Johns Hopkins University School of Medicine
(8)
Alberta Children’s Hospital, Cumming School of Medicine, University of Calgary
(9)
Multidisciplinary Simulation Center, Mayo Clinic Online Learning, and Division of General Internal Medicine, Mayo Clinic College of Medicine
(10)
Institute for Innovations in Medical Education, Division of Education Quality and Analytics, NYU School of Medicine
(11)
Department of Emergency Medicine, David Geffen School of Medicine at UCLA
(12)
Ottawa Methods Centre, Clinical Epidemiology Program, Ottawa Hospital Research Institute
(13)
Institute of Social and Preventive Medicine, University of Bern
(14)
Department of Pediatrics, Section of Emergency Medicine, Yale University School of Medicine

References

  1. Cook DA, Hatala R, Brydges R, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA. 2011;306:978–88.
  2. Zendejas B, Brydges R, Wang AT, et al. Patient outcomes in simulation-based medical education: a systematic review. J Gen Intern Med. 2013;28:1078–89.
  3. Brydges R, Hatala R, Zendejas B, et al. Linking simulation-based educational assessments and patient-related outcomes: a systematic review and meta-analysis. Acad Med. 2015;90:246–56.
  4. Cheng A, Grant V, Auerbach M. Using simulation to improve patient safety: dawn of a new era. JAMA Pediatr. 2015;169:419–20.
  5. Cook DA. How much evidence does it take? A cumulative meta-analysis of outcomes of simulation-based education. Med Educ. 2014;48:750–60.
  6. McGaghie WC, Issenberg SB, Petrusa ER, et al. A critical review of simulation-based medical education research: 2003–2009. Med Educ. 2010;44:50–63.
  7. McGaghie WC, Issenberg SB, Cohen ER, et al. Translational educational research: a necessity for effective health-care improvement. Chest. 2012;142:1097–103.
  8. Issenberg SB, McGaghie WC, Petrusa ER, et al. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach. 2005;27:10–28.
  9. Cheng A, Lockey A, Bhanji F, et al. The use of high-fidelity manikins for advanced life support training – a systematic review and meta-analysis. Resuscitation. 2015;93:142–9.
  10. Cheng A, Lang T, Starr S, et al. Technology-enhanced simulation and pediatric education: a meta-analysis. Pediatrics. 2014;133:e1313–23.
  11. Cheng A, Eppich W, Grant V, et al. Debriefing for technology-enhanced simulation: a systematic review and meta-analysis. Med Educ. 2014;48:657–66.
  12. Ilgen JS, Sherbino J, Cook DA. Technology-enhanced simulation in emergency medicine: a systematic review and meta-analysis. Acad Emerg Med. 2013;20:117–27.
  13. Lorello GR, Cook DA, Johnson RL, et al. Simulation-based training in anaesthesiology: a systematic review and meta-analysis. Br J Anaesth. 2014;112:231–45.
  14. Zendejas B, Brydges R, Hamstra SJ, et al. State of the evidence on simulation-based training for laparoscopic surgery: a systematic review. Ann Surg. 2013;257:586–93.
  15. Dilaveri CA, Szostek JH, Wang AT, et al. Simulation training for breast and pelvic physical examination: a systematic review and meta-analysis. BJOG. 2013;120:1171–82.
  16. Cheng A, Auerbach M, Chang T, et al. Designing and conducting simulation-based research. Pediatrics. 2014;133:1091–101.
  17. LeBlanc VR, Manser T, Weinger MB, et al. The study of factors affecting human and systems performance in healthcare using simulation. Simul Healthc. 2011;6:S24–9.
  18. Raemer D, Anderson M, Cheng A, et al. Research regarding debriefing as part of the learning process. Simul Healthc. 2011;6:S52–7.
  19. Cook DA, Hamstra SJ, Brydges R, et al. Comparative effectiveness of instructional design features in simulation-based education: systematic review and meta-analysis. Med Teach. 2013;35:e867–98.
  20. Glasziou P, Altman DG, Bossuyt P, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014;383:267–76.
  21. Cook DA, Beckman TJ, Bordage G. A systematic review of titles and abstracts of experimental studies in medical education: many informative elements missing. Med Educ. 2007;41:1074–81.
  22. Cook DA, Beckman TJ, Bordage G. Quality of reporting of experimental studies in medical education: a systematic review. Med Educ. 2007;41:737–45.
  23. Cook DA, Levinson AJ, Garside S. Method and reporting quality in health professions education research: a systematic review. Med Educ. 2011;45:227–38.
  24. Jüni P, Altman DG, Egger M. Systematic reviews in health care: assessing the quality of controlled clinical trials. BMJ. 2001;323:42–6.
  25. Begg C, Cho M, Eastwood S, et al. Improving the quality of reporting of randomized controlled trials. The CONSORT statement. JAMA. 1996;276:637–9.
  26. Begley CG, Ioannidis JP. Reproducibility in science: improving the standard for basic and preclinical research. Circ Res. 2015;116:116–26.
  27. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. Lancet. 2001;357:1191–4.
  28. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 Explanation and Elaboration: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c869.
  29. Moher D, Altman DG, Schulz KF. Opportunities and challenges for improving the quality of reporting clinical research: CONSORT and beyond. CMAJ. 2004;171:349–50.
  30. Plint AC, Moher D, Morrison A, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust. 2006;185:263–7.
  31. Von Elm E, Altman DG, Egger M, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Ann Intern Med. 2007;147:573–7.
  32. Vandenbroucke JP, von Elm E, Altman DG, et al. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. PLoS Med. 2007;4:e297.
  33. Moher D, Shamseer L, Clarke M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1.
  34. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151:264–9.
  35. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Ann Intern Med. 2009;151:W65–94.
  36. Enhancing the Quality and Transparency of Health Research. EQUATOR Network library for health research reporting. Available at: http://www.equator-network.org/library/. Accessed 28 May 2015.
  37. Golub RM, Fontanarosa PB. Researchers, readers, and reporting guidelines: writing between the lines. JAMA. 2015;313:1625–6.
  38. Campbell MK, Elbourne DR, Altman DG, CONSORT group. CONSORT statement: extension to cluster randomised trials. BMJ. 2004;328:702–8.
  39. Piaggio G, Elbourne DR, Altman DG, et al. Reporting of noninferiority and equivalence randomized trials: an extension of the CONSORT statement. JAMA. 2006;295:1152–60.
  40. Boutron I, Moher D, Altman DG, et al. Methods and processes of the CONSORT group: example of an extension for trials assessing nonpharmacologic treatments. Ann Intern Med. 2008;148:W60–6.
  41. Little J, Higgins JP, Ioannidis JP, et al. Strengthening the reporting of genetic association studies (STREGA): an extension of the STROBE statement. Eur J Clin Invest. 2009;39:247–66.
  42. Moher D, Schulz KF, Simera I, et al. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217. doi:10.1371/journal.pmed.1000217.
  43. Cheng A, Hunt EA, Donoghue A, et al. Examining pediatric resuscitation education using simulation and scripted debriefing: a multicenter, randomized-controlled trial. JAMA Pediatr. 2013;167:528–36.
  44. Cheng A, Brown LL, Duff JP, et al. Improving cardiopulmonary resuscitation with a CPR feedback device and refresher simulations (CPR CARES Study): a randomized clinical trial. JAMA Pediatr. 2015;169(2):137–44.
  45. Cheng A, Overly F, Kessler D, et al. Perception of CPR quality: influence of CPR feedback, just-in-time training and provider role. Resuscitation. 2015;87:44–50.
  46. Kessler DO, Arteaga G, Ching K, et al. Interns’ success with clinical procedures in infants after simulation training. Pediatrics. 2013;131:e811–20.
  47. Gerard JM, Kessler DO, Braun C, et al. Validation of global rating scale and checklist instruments for the infant lumbar puncture procedure. Simul Healthc. 2013;8:148–54.
  48. Kessler D, Pusic M, Chang TP, et al. Impact of Just-in-Time and Just-in-Place simulation on intern success with infant lumbar puncture. Pediatrics. 2015;135:e1237–46.
  49. Chang TP, Kessler D, McAninch B, et al. Script concordance testing: assessing residents’ clinical decision-making skills for infant lumbar punctures. Acad Med. 2014;89:128–35.
  50. Haubner LY, Barry JS, Johnston LC, et al. Neonatal intubation performance: room for improvement in tertiary neonatal intensive care units. Resuscitation. 2013;84:1359–64.
  51. Common guidelines for education research and development. A report from the Institute of Education Sciences, US Department of Education and the National Science Foundation. Available at: http://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf13126. Accessed 10 Jan 2015.
  52. Cobo E, Cortes J, Ribera JM, et al. Effect of using reporting guidelines during peer review on quality of final manuscripts submitted to a biomedical journal: masked randomised trial. BMJ. 2011;343:d6783.
  53. Egger M, Schneider M, Davey Smith G. Spurious precision? Meta-analysis of observational studies. BMJ. 1998;316:140–4.
  54. American Educational Research Association. Standards for reporting on empirical social science research in AERA publications. Educ Res. 2006;35:33–40.
  55. Bossuyt PM, Reitsma JB, Bruns DE, et al, Standards for Reporting of Diagnostic Accuracy. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. BMJ. 2003;326:41–4.
  56. Cook DA, Brydges R, Zendejas B, et al. Technology-enhanced simulation to assess health professionals: a systematic review of validity evidence, research methods, and reporting quality. Acad Med. 2013;88:872–83.
  57. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.
  58. Davidoff F, Batalden P, Stevens D, et al. Publication guidelines for improvement studies in health care: evolution of the SQUIRE Project. Ann Intern Med. 2008;149(9):670–6.

Copyright

© Society for Simulation in Healthcare 2016
