Overview
In 2013, simulation centers at Mayo Clinic (Rochester, Minnesota) and Dartmouth-Hitchcock Medical Center (D-H; Lebanon, New Hampshire) formally established a collaborative network to share simulation-based training (SBT) courses across institutions. The current project evaluated this collaboration, with the long-term vision of continued sharing between D-H and Mayo Clinic and, eventually, with other institutions. Considerable time and resources went into building this collaboration, including preparing an infrastructure to inventory SBT courses, identifying content ready for sharing, and implementing the sharing of specific SBT courses between sites.
We evaluated the process of sharing four courses (two from each institution, each shared with the other) using quantitative and qualitative data, including estimates of the time spent developing and implementing each course, a survey of the instructors who implemented shared courses, and 1-on-1 instructor interviews. The Mayo Clinic Institutional Review Board and the Dartmouth-Hitchcock Committee for the Protection of Human Subjects deemed the study exempt.
Process and procedures for sharing
Each site assembled a local team of simulation leaders, SBT educators, and administrators who met regularly to review the curricula, identify courses that could be shared, and monitor the process and progress of sharing. A shareability matrix was developed to define course readiness for sharing and to give potential users information about the course. At each site, an individual with training and experience in simulation and education scored every available course using the parameters of frequency, maturity, and completeness on a scale of 1 to 5, with 1 being least ready and 5 being most ready. Based on this scoring, an inventory of courses ready for sharing was identified.
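To illustrate how such a shareability matrix might work in practice, the sketch below represents each course's ratings and flags courses for the shared inventory. The frequency, maturity, and completeness parameters and the 1-to-5 scale come from the rubric above; the course names, field layout, and readiness threshold are hypothetical assumptions, not values reported in the study.

```python
from dataclasses import dataclass

# Rubric parameters from the shareability matrix, each scored
# 1 (least ready) to 5 (most ready) by a trained rater at each site.
@dataclass
class ShareabilityScore:
    course: str
    frequency: int     # how often the course is run locally
    maturity: int      # how refined and stable the course is
    completeness: int  # how fully the materials document the course

    def mean_score(self) -> float:
        return (self.frequency + self.maturity + self.completeness) / 3

# Hypothetical scores; the readiness cutoff (mean >= 4) is an
# assumption for illustration only.
scores = [
    ShareabilityScore("Course A", frequency=5, maturity=4, completeness=5),
    ShareabilityScore("Course B", frequency=2, maturity=3, completeness=2),
]

inventory = [s.course for s in scores if s.mean_score() >= 4]
print(inventory)  # ['Course A']
```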
The project teams chose two courses from each institution to share. For the first exchange, each team identified an existing SBT course for the same clinical task, “Moderate Sedation” for nurses, to contrast different approaches to the same topic and to focus on the process of sharing. Each organization provided its course to the other to implement. For the second exchange, leaders at each site reviewed the other site’s inventory of shareable courses to identify an SBT course that would fulfill a known institutional need not already covered by their own institution’s training inventory. Mayo Clinic implemented the D-H “Stroke and Seizure” course for nurses, and D-H implemented the Mayo Clinic “Central Line Workshop” for physicians. Instructors at each site with experience in SBT and appropriate content expertise implemented each shared course. Institutions followed local best practices when course elements were not explicitly specified.
Teaching methods used in the courses included slideshow presentations, scenarios using high-fidelity patient simulators, supporting articles, procedural task trainers, quizzes, and electronic learning content. A web-based file-sharing application provided a centralized repository for individuals at each site to share and access SBT materials.
Outcomes and data collection
Participants providing outcomes for this study were course instructors at Mayo Clinic and D-H. Sample size calculations were not conducted because the sample was determined by the number of instructors involved. Outcomes focused on instructor experiences with the implementation of shared courses and on the faculty time needed to develop and implement SBT courses. We evaluated these outcomes after each course using 1-on-1 interviews and a written survey questionnaire.
Qualitative data were obtained from two sources. First, each questionnaire included open-ended questions regarding barriers, enabling factors, and suggestions for improvement. Second, each instructor participated in a 1-on-1 semi-structured interview with an investigator (E.A.L. or D.R.S.) using a flexible interview template. Interviewers asked about context-specific factors that made implementation easier or harder, important omissions in shared material, and suggestions for improvement. Interviews were transcribed for subsequent analysis.
The primary quantitative outcome was overall cost in terms of time. Each member of the implementation team and each instructor retrospectively estimated the total time spent organizing the exchange process, preparing to share a course, and implementing a shared course. Developers of original course plans and materials also retrospectively estimated the time spent creating the Central Line Workshop and Stroke and Seizure courses, but time estimates were not available for the Moderate Sedation courses. We did not convert time to monetary units because we anticipated variability in staff roles and salary ranges across institutions and because precise salary information was considered confidential by both institutions.
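As a simple illustration of how these retrospective estimates aggregate into the time-cost outcome, the sketch below tallies per-person hours by phase of the sharing process. All names and values are invented for illustration; the three phases mirror the organizing, preparing, and implementing activities described above.

```python
from collections import defaultdict

# Hypothetical retrospective time estimates (hours) reported by each
# team member, broken out by phase of the sharing process.
estimates = [
    ("instructor_1", "organizing", 4.0),
    ("instructor_1", "preparing", 10.5),
    ("instructor_2", "implementing", 8.0),
]

hours_by_phase = defaultdict(float)
for person, phase, hours in estimates:
    hours_by_phase[phase] += hours

total_hours = sum(hours_by_phase.values())
print(dict(hours_by_phase), total_hours)
```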
Other quantitative outcomes included instructor perceptions regarding efficiency, effectiveness, time required to develop and deliver the course, relevance of course materials to local needs, and barriers to implementation. We identified specific items to measure these outcomes through informal discussions with project team members and course instructors. Based on these items, we created a questionnaire that asked instructors to compare the shared course (and the experience of implementing it) with locally developed courses that they had led previously. We refined items for the final questionnaire through iterative review among the project leadership team and pilot testing with two simulation instructors not involved in the course-sharing implementation.
Data analysis
We performed a qualitative analysis of the interview transcripts and responses to open-ended survey questions with the intent of identifying specific changes to improve the sharing process. We used the constant comparative method [22] by first identifying strengths and weaknesses of the shared-course approach, and then contrasting these strengths and weaknesses to define key factors influencing the course-sharing process. We supplemented this analysis by identifying and grouping keywords in context, from which we formed thematic categories of comments. Finally, we integrated the key factors and thematic categories to create a grounded theory model explaining how sharing could be improved in future iterations [23–25].
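As an illustration of the keyword-in-context step, the sketch below pulls a window of words around each occurrence of a keyword so that similar comments can be grouped into thematic categories. The transcript fragment, keyword, and window size are hypothetical; only the general keyword-in-context technique is taken from the analysis described above.

```python
import re

def keyword_in_context(text: str, keyword: str, window: int = 5):
    """Return each occurrence of `keyword` with up to `window`
    words of surrounding context on each side."""
    keyword = keyword.lower()
    words = re.findall(r"\w+", text.lower())
    hits = []
    for i, w in enumerate(words):
        if w == keyword:
            lo, hi = max(0, i - window), i + window + 1
            hits.append(" ".join(words[lo:hi]))
    return hits

# Hypothetical transcript fragment and keyword.
transcript = ("The shared materials saved preparation time, but local "
              "equipment differences made setup time harder to predict.")
print(keyword_in_context(transcript, "time"))
```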
Course evaluation and study-specific survey data were reported in aggregate using both mean (SD) and median (interquartile range) because of the small sample and because the data did not follow a normal distribution.
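For reference, a minimal sketch of computing the reported summary statistics for a small sample, using only the Python standard library; the response values are hypothetical.

```python
import statistics

# Hypothetical instructor ratings from a small, non-normal sample.
responses = [2, 3, 3, 4, 5, 5, 5]

mean = statistics.mean(responses)
sd = statistics.stdev(responses)          # sample standard deviation
median = statistics.median(responses)
q1, q2, q3 = statistics.quantiles(responses, n=4)  # quartiles; IQR spans q1 to q3

print(f"mean (SD): {mean:.2f} ({sd:.2f})")
print(f"median (IQR): {median} ({q1}-{q3})")
```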