|Item||Item number||STROBE description (Observational Studies)||Extension for SBR|
|Title and abstract||1||
a. Indicate the study’s design with a commonly used term in the title or the abstract.
b. Provide in the abstract an informative and balanced summary of what was done and what was found.
||In the abstract or key terms, the MeSH or searchable keyword term must include the word “simulation” or “simulated”.|
|Background/rationale||2||Explain the scientific background and rationale for the investigation being reported.||Clarify whether simulation is subject of research or investigational method for research.|
|Objectives||3||State specific objectives, including any prespecified hypotheses.|
|Study design||4||Present key elements of study design early in the paper.|
|Setting||5||Describe the setting, locations, and relevant dates, including periods of recruitment, exposure, follow-up, and data collection.|
|Participants||6||
a. Cohort study: give the eligibility criteria and the sources and methods of selection of participants. Describe methods of follow-up.
Case–control study: give the eligibility criteria and the sources and methods of case ascertainment and control selection. Give the rationale for the choice of cases and controls.
Cross-sectional study: give the eligibility criteria and the sources and methods of selection of participants.
b. Cohort study: for matched studies, give matching criteria and number of exposed and unexposed.
Case–control study: for matched studies, give matching criteria and the number of controls per case.|
|Variables||7||
Clearly define all outcomes, exposures, predictors, potential confounders, and effect modifiers.
Give diagnostic criteria, if applicable.
||Describe the theoretical and/or conceptual rationale for the design of the intervention/exposure.
Describe the intervention/exposure with sufficient detail to permit replication.
Clearly describe all simulation-specific exposures, potential confounders, and effect modifiers.|
|Data sources/measurement||8||
For each variable of interest, give sources of data and details of methods of assessment (measurement).
Describe comparability of assessment methods if there is >1 group.
||In describing the details of methods of assessment, include (when applicable) the setting, instrument, simulator type, and timing in relation to the intervention, along with any methods used to enhance the quality of measurements.
Provide evidence to support the validity and reliability of assessment tools in this context (if available).|
|Bias||9||Describe any efforts to address potential sources of bias.|
|Study size||10||Explain how the study size was arrived at.|
|Quantitative variables||11||Explain how quantitative variables were handled in the analyses. If applicable, describe which groupings were chosen and why.|
|Statistical methods||12||
a. Describe all statistical methods, including those used to control for confounding.
b. Describe any methods used to examine subgroups and interactions.
c. Explain how missing data were addressed.
d. Cohort study: if applicable, explain how loss to follow-up was addressed.
Case–control study: if applicable, explain how matching of cases and controls was addressed.
Cross-sectional study: if applicable, describe analytical methods taking account of sampling strategy.
e. Describe any sensitivity analyses.
||Clearly indicate the unit of analysis (e.g., individual, team, system), identify repeated measures on subjects, and describe how these issues were addressed.|
|Participants||13||
a. Report the numbers of individuals at each stage of the study (e.g., numbers potentially eligible, examined for eligibility, confirmed eligible, included in the study, completing follow-up, and analyzed).
b. Give reasons for nonparticipation at each stage.
c. Consider use of a flow diagram.|
|Descriptive data||14||
a. Give characteristics of study participants (e.g., demographic, clinical, social) and information on exposures and potential confounders.
b. Indicate the number of participants with missing data for each variable of interest.
c. Cohort study: summarize follow-up time (e.g., average and total amount).
||In describing characteristics of study participants, include their previous experience with simulation and other relevant features as related to the intervention(s).|
|Outcome data||15||
Cohort study: report numbers of outcome events or summary measures over time.
Case–control study: report numbers in each exposure category or summary measures of exposure.
Cross-sectional study: report numbers of outcome events or summary measures.|
|Main results||16||
a. Give unadjusted estimates and, if applicable, confounder-adjusted estimates and their precision (e.g., 95% confidence intervals). Make clear which confounders were adjusted for and why they were included.
b. Report category boundaries when continuous variables were categorized.
c. If relevant, consider translating estimates of relative risk into absolute risk for a meaningful time period.
||For assessments involving >1 rater, interrater reliability should be reported.|
|Other analyses||17||Report other analyses done (e.g., analyses of subgroups and interactions and sensitivity analyses).|
|Key results||18||Summarize key results with reference to study objectives.|
|Limitations||19||Discuss limitations of the study, taking into account sources of potential bias or imprecision. Discuss both direction and magnitude of any potential bias.||Specifically discuss the limitations of SBR.|
|Interpretation||20||Give a cautious overall interpretation of results considering objectives, limitations, multiplicity of analyses, results from similar studies, and other relevant evidence.|
|Generalizability||21||Discuss the generalizability (external validity) of the study results.||Describe generalizability of simulation-based outcomes to patient-based outcomes (if applicable).|
|Funding||22||Give the source of funding and the role of the funders for the present study and, if applicable, for the original study on which the present article is based.||List the simulator brand and state whether a conflict of interest involving intellectual property exists.|