Although the mannequins used in healthcare simulations are highly sophisticated, the present results demonstrate that facilitators’ extra scenario information is essential for bridging the gap between the appearance of a sick patient and a human patient simulator. We argue that this information is important for participants to learn through iterative processes of assessment and action. Providing details such as the presence of a skin rash, the thickness of the meconium, or the abdominal status is crucial for enabling the teams’ assessment and decision-making.
We agree with Paige et al. [10] that the conceptualization of fidelity should be extended to include various forms of cueing, and that empirical research on this aspect of simulation practice is needed. Although our study focuses solely on the facilitators’ role, it provides further arguments for shifting emphasis from the properties of the simulator per se to session design and the facilitators’ actions in creating simulation activities as relevant representations of the intended clinical task. With reference to Hamstra et al. [7], we argue that this could contribute to functional task alignment by framing the scenario in accordance with the intended learning objectives. Without the extra information, the participants would not have responded to the simulations as instances of the intended problem, and there would have been insufficient signs for them to draw conclusions about how to continue. In line with Johnson [9], the facilitators contribute to reconstituting the simulation as a medical practice, which, in turn, can serve as a context for focusing on the relevant aspects of professional performance. However, this does not mean that the level of structural fidelity is irrelevant, as the properties of the simulator serve as a basis for what information facilitators provide. Moreover, the facilitators face the delicate task of transforming visual and sensory signs regarding bodily attributes into gestures and verbal messages, ensuring that the participants have enough clues to understand the case without interfering with their work. Our study draws attention to the intricate interplay between humans and technology and to how facilitators continuously adapt to the simulators’ shortcomings in order to keep the momentum of the scenario and enhance learner engagement.
In relation to what occasioned the facilitators to give extra scenario information, we found that the different methods offered different opportunities for facilitators to assess the teams’ need for information. When the facilitators were present in the room (methods 1 and 2) and positioned close to the team, they used the participants’ actions and the evolving condition of the imagined patient as the basis for delivering timely prompts and instructions. When the facilitator conveyed the information via a loudspeaker or an earpiece (methods 3 and 4), it was often the participants’ verbal reports on their actions, or their explicit questions, that prompted the facilitator to respond. In most cases, the facilitator responded after a delay, and the response was not as closely aligned with the team’s actions and the development of the scenario as when the facilitator was present in the room. However, practical limitations exist. The facilitator may be located in an adjacent operator room because, in addition to adding extra scenario information, he or she may be operating the simulator and acting as the patient’s voice. The complex tasks of educators in healthcare simulation [19, 20], and the increased workload when using low-fidelity compared with high-fidelity mannequins, imply that multitasking can also be a reason for less well-timed information [21].
The methods for providing extra scenario information had extensive implications for how the scenarios played out and for the opportunities for teamwork training. First, we observed an impact on communication patterns. Methods 1 and 2, with a facilitator present close to the action, tended to promote horizontal communication (i.e., communication between team members). Providing information through methods 3 and 4 tended to promote vertical communication (i.e., between a team member and the facilitator). In our cases, the learning goals were to train non-technical skills, such as interprofessional communication and decision-making [2], for which lengthy verbal information from the facilitator that hampers internal communication among team members might be particularly unwanted. In short, vertical communication tended to impair horizontal communication and to counteract the intended learning objectives of the simulation.
Second, the methods of providing information affected the workflow. Timely information in the form of brief prompts in methods 1 and 2 seldom disturbed the course of events. In contrast, information delivered through method 3 impeded the workflow: when the facilitator delivered lengthy verbal information slowly, the team had to pause and listen to the response. In method 4, the activities stopped for brief moments when the team had to ask for essential information. In both cases, the pace of the facilitator’s response interrupted the flow of the teamwork, regularly in method 3 and occasionally in method 4.
Third, the timing and language style of the extra scenario information delivered by the facilitators affected the pace of the teamwork. In particular, brief prompts delivered in a condensed, professional style, as in method 2, helped the team sustain a high pace adapted to the targeted clinical tasks. In contrast, delivering information slowly, step by step, so that the team had to wait for a response before continuing, slowed down the teamwork. However, we contend that the varying methods of delivering extra scenario information can serve different learning objectives. The participants’ level of experience and familiarity with simulation as a practice are factors to take into account when customizing instructions [8]. The clinical scenario also makes a difference: simulating the patient’s voice is an option for conveying some information when the patient is awake, but not when simulating unconscious patients or infants. A novice team may benefit from facilitation that helps them slow down and learn a procedure step by step, whereas the same type of instruction may disturb an experienced team by deviating from the objective of training them to deliver care under time pressure. Further, the various methods may suit different pedagogical agendas. If educators want to support teams during scenarios, for example by roleplaying a team member asking a relevant question, the presence of a facilitator is necessary. If another agenda is preferred, in which the participants’ experiences of solving the situation themselves are regarded as important for learning, having an instructor present in the room may be a disadvantage. This points to a need for educators in healthcare simulation to adapt instruction to varying demands and learning objectives to ensure high-quality simulation [5, 16, 22].
In sum, the timing of the information seemed important for the team to sustain engagement, team communication, workflow, and tempo. The language style of the supplied information appeared to have the potential to change the pace, as well as the focus, of the scenario. We conclude that interprofessional team training could benefit from these opportunities to optimize the momentum of the workflow.
One methodological limitation was that we did not have access to recordings of what was said via the earpiece, although we could draw conclusions from the participants’ behaviors. Another limitation was that the analysis relied solely on a limited sample of film clips from three Swedish simulation centers. There were, however, also important strengths. First, the collaborative analysis by the whole research group enabled the identification of new phenomena and relations, that is, the nature of extra scenario information, how it is conveyed, and its consequences. Second, these findings could be both elaborated and refined by anchoring the initial findings in the whole data set. However, establishing the stability of these phenomena in other settings, and the relations between the methods and their consequences, requires a larger data set and hypothesis-driven quantitative studies.