
Embracing informed learner self-assessment during debriefing: the art of plus-delta


The healthcare simulation field has no shortage of debriefing options. Some demand considerable skill, which serves as a barrier to more widespread implementation. The plus-delta approach to debriefing offers the advantages of conceptual simplicity and ease of implementation. Importantly, plus-delta promotes learners’ capacity for self-assessment, a skill vital for safe clinical practice and yet a notorious deficiency in professional practice. The plus-delta approach confers two benefits: promoting uptake of debriefing in time-limited settings by educators with both fundamental and advanced skills, and enhancing essential capacity for critical self-assessment informed by objective performance feedback. In this paper, we describe the role of plus-delta in debriefing, provide guidance for incorporating informed learner self-assessment into debriefings, and highlight four opportunities for improving the art of the plus-delta: (a) exploring the big picture vs. specific performance issues, (b) choosing between single- vs. double-barreled questions, (c) unpacking positive performance, and (d) managing perception mismatches.


The evolution of simulation-based education in healthcare has been accompanied by growth in the number of debriefing methods, frameworks, and conversational strategies [1,2,3,4,5,6]. Many debriefing methods demand considerable skill, which impedes effective implementation. The plus-delta approach to debriefing has multiple benefits: it is conceptually simple and easy to implement, while promoting learner capacity for self-assessment—a skill vital for safe clinical practice [2, 5, 7,8,9,10,11,12]. With plus-delta, facilitators engage learners in a self-assessment of their own performance [12], which in turn provides opportunity for individual and team reflexivity [13, 14]. Unfortunately, many facilitators lack awareness of the importance of learner self-assessment in promoting professional practice, resulting in an inability to maximize the impact of this approach or, in some cases, avoidance of the method altogether. We believe this straightforward approach can demystify the art of debriefing and promote its uptake, while concurrently capitalizing on the benefits of informed learner self-assessment. In this paper, we clarify the implementation of plus-delta and offer strategies to best execute the approach by clearly defining the role and benefits of learner self-assessment in debriefing.

This paper has several aims, structured in a step-wise manner to guide the reader through the background, rationale, and strategies for adopting learner self-assessment in debriefing. First, we define the plus-delta approach and describe its role in debriefing. Second, we argue for the importance of incorporating informed learner self-assessment into debriefings and map debriefing strategies to Ross’ four-stage model for fostering learning through self-assessment [15]. We then describe four opportunities for fine-tuning the art of the plus-delta, namely (1) using plus-delta for the big picture vs. specific performance issues, (2) single- vs. double-barreled questioning, (3) unpacking positive performance, and (4) managing perception mismatches. To close, we discuss how to incorporate various forms of informed learner self-assessment into debriefing.

What is plus-delta?

The plus-delta approach describes a debriefing strategy in which participants are asked to reflect on the entire simulation event (or portions thereof) and assess their individual and/or collective performance. When applying this approach, facilitators ask learners: “What went well and what would you do differently (or improve) next time?” [7, 9, 12]; “What did you do well, and what did not go well, and why?” [10]; “What was easy and what was challenging for you?” [5]; or other similar questions. Outside of healthcare, the US Army has adopted a version of this approach through a performance feedback method termed “After Action Review” [16, 17]. Following training, soldiers engage in a facilitated conversation to clarify what aspects of performance met pre-defined standards, and where there was opportunity for improvement [17]. The plus-delta approach, when coupled with feedback and teaching, can be used as the primary conversational strategy in a debriefing [7, 9,10,11] or used more selectively by blending it with other strategies (e.g., focused facilitation) depending on the learning context, amount of time available, and facilitator preferences (e.g., learner vs. instructor-centered debriefing) [12, 18]. Ideally, an effective plus-delta generates two lists of behaviors (i.e., things that the learners felt went well, and things that the learners felt could be improved), which then prompts further discussion, reflection, and/or learning during the debriefing. The true function of plus-delta is to conduct a learner self-assessment, the benefits and downsides of which have been extensively studied, debated, and described in the healthcare and education literature [19, 20].

Learner self-assessment for professional development

Although traditional notions highlight the importance of self-assessment for professional development, professionals are notoriously poor at assessing their own performance [19]. In a series of educational studies, participants self-assessed their performance after completing a wide range of tasks requiring humor, logical reasoning, and English grammar. These studies found that participants in the lowest-scoring quartile tended to overestimate their performance [21]. Similar patterns have been observed in healthcare providers. Physicians often fail to recognize knowledge deficits, with less experienced and/or poorer-performing clinicians demonstrating a tendency to overrate their knowledge and skills [19, 22,23,24,25,26]. Trainees exemplify this discrepancy, consistently overestimating their competency whether their performance is adequate or inadequate [22,23,24, 26]. Even experienced clinicians sometimes struggle to accurately assess their ability to integrate skills into clinical practice [19, 25].

Despite these inaccuracies, there are several important benefits of learner self-assessment. When self-assessments are accurate, additional learning can be gained from performing the act itself, thus allowing for skill development in the absence of expert assessment [27]. Learners who engage in self-assessment set higher goals and commit more effort to the acquisition of these goals, which equates to enhanced future performance [26, 27]. Objective feedback informed by specific performance standards amplifies the benefits of self-assessment [28,29,30,31].

Informed self-assessment describes the “set of processes through which individuals use external and internal data to generate an appraisal of their own abilities” [32]. Learners aware of specific benchmarked standards with access to objective data (i.e., external data) demonstrate improved self-assessment abilities compared to those who rely solely upon their own internal judgments (i.e., internal data) [29,30,31, 33,34,35]. Ross et al. proposed a four-stage model to foster learning through informed learner self-assessment that incorporates many of these key elements: (1) involve students in defining the criteria used to judge performance, (2) teach students how to apply the criteria, (3) give students feedback on their performance (informed by objective data) and self-assessments, and (4) help students develop action plans [15].

Learner self-assessment in debriefing

Critics may question the value of learner self-assessment during debriefing if clinicians struggle with providing accurate self-assessments of their own performance [19]. We argue that such criticism highlights why we should integrate learner self-assessment into debriefing; after all, without having learners self-assess, how will you know how they perceive their own performance? If learners overestimate their own performance, would you not want to know so that you could directly address this misperception? Failure to conduct a learner self-assessment during debriefing places the facilitator at risk of missing critical learner misperceptions that may be perpetuated if they are not addressed during the debriefing. Furthermore, the process of learner self-assessment promotes individual and team reflexivity, whereby group members actively “reflect upon … strategies, goals, processes, and outcomes to process key information and adapt accordingly” [14, 36]. Debriefing represents a form of post-action team reflexivity. The plus-delta approach triggers teams to evaluate their performance, which enhances team performance by promoting shared mental models, triggering adaptation, and crystallizing learning [13, 14]. For these reasons, we see a facilitated learner self-assessment as serving a distinct role in debriefing, which emphasizes the importance of being able to conduct a plus-delta during debriefing in a purposeful manner.

Thus, in simulation-based education, debriefing can both engage learners and enhance their capacity for self-assessment in a manner conducive to effective learning. Table 1 provides an overview of how Ross’ four-stage model can foster learning through self-assessment in debriefing [15]. Stage 1 can be achieved during the pre-briefing by having the facilitator review specific performance goals with students and/or introduce a performance checklist for the simulation event [30]. Debriefings offer the optimal venue for addressing stages 2, 3, and 4. To teach learners how to apply performance criteria (i.e., stage 2), facilitators should first conduct a plus-delta with learners and then use language that explicitly connects performance criteria with observed behaviors [15] when closing performance gaps. For example, one strategy would be to view videos of expert-modeled performance that demonstrates desired benchmarks [29]. To provide feedback on their self-assessments (i.e., stage 3), facilitators should close performance gaps by reviewing performance relative to specific standards (e.g., use of a performance checklist) [30, 31, 33] and generalize discussion to other clinical contexts (i.e., stage 4), both of which are tasks central to effective debriefings [2, 12, 37].

Table 1 Fostering learning through self-assessment in debriefing using Ross’ four-stage model

The art of the plus-delta

In this section, we introduce four specific considerations when implementing plus-delta, offered in the order of decision-making typically required of a facilitator during a debriefing.

Assessing the big picture vs. specific performance issues

As with other conversational strategies, selective use of plus-delta may be appropriate at various points in discussion depending on the debriefing focus. In a blended method of debriefing, we locate plus-delta during the analysis phase [12, 38]. At the beginning of the analysis phase, facilitators may use a plus-delta to obtain a learner assessment of the “big-picture”, or the entire clinical event (Fig. 1a). In doing this, facilitators identify the learner agenda and recognize perception mismatches early in the analysis phase, which in turn helps prioritize topics for the remainder of the debriefing [18]. Of course, a plus-delta at the beginning of the analysis phase is not always necessary or appropriate. For example, when a rich reactions phase allows identification of numerous topics for discussion, facilitators may forgo a plus-delta and dive directly into focused facilitation. Facilitators should tailor the use of plus-delta to debriefing context (i.e., what has already been discussed) and learner needs.

Fig. 1 Use of plus-delta for learner self-assessment of: a. The big picture or b. Specific performance issues

Alternatively, the plus-delta approach can be used as a tool to explore specific aspects of performance (Fig. 1b). A preview statement preceding the plus-delta question supports the use of the plus-delta approach to unpack specific learner behaviors. For example, as a preview to a plus-delta on how defibrillation was conducted during a simulated cardiac arrest event, the facilitator might say: “I’d like to spend some time discussing the task of defibrillation, and I’d like to get your take before I share mine.” The plus-delta question itself might sound like: “Reflecting on the three instances when you had to defibrillate the patient, can you share what was done really well, and what you would do differently next time?” Even when using plus-delta for this purpose, we encourage facilitators to keep in mind the need to identify and further explore perception mismatches as they arise.

Single- vs. double-barreled questioning

We see two main ways of approaching questioning when using plus-delta: single-barreled questioning (i.e., one-part question) and double-barreled questioning (i.e., two-part question). Single-barreled questioning involves asking the “plus” question first (e.g., “What aspects of your performance were done well?”), followed by reflective discussion of each of these points (Fig. 2a). Once the discussion of “plus” items is complete, facilitators then pose the “delta” question (e.g., “What aspects of your performance would you change next time?”), followed by facilitated discussion and group reflection. With double-barreled questioning, facilitators ask both the “plus” and “delta” questions back to back (e.g., “What aspects of your performance were done well, and what things would you do differently next time?”), thus leaving it to the learner group to determine what aspects of performance to explore during discussion (Fig. 2b).

Fig. 2 Phrasing of questions in plus-delta for: a. Single-barreled questioning or b. Double-barreled questioning

We see pros and cons to both approaches. Single-barreled questions are inherently limiting, conferring more control (of debriefing content) to the facilitator by asking a question with a narrower scope. If, for example, a facilitator is debriefing a team of novice learners who have just performed poorly, they may see value in having the learner group explore positive aspects of their performance first. In this case, posing the “plus” question with the single-barreled approach would serve that purpose. As a downside, this approach exerts more control over the content of discussion and may force the conversation in a direction misaligned with learner wishes, particularly when learner performance was sub-optimal (or vice versa). Double-barreled questions allow more freedom of response, placing the onus on learners to identify which aspects of performance, either “plus” or “delta” or both, to highlight during discussion. This approach often uncovers the learner agenda (i.e., the issues that are most important to the learners), which helps facilitators shape future discussion towards learner priorities [18]. However, double-barreled questioning risks focusing learner groups entirely on one part of the question (typically the “delta” part). In situations where learners focus on poor performance, a mentality of “bad is stronger than good” may overtake the debriefing, making it hard to shift gears despite potentially different preferences or perspectives [39]. In some cases, the facilitator may never get around to re-asking the “plus” part of the question, potentially leading to a debriefing that neglects positive aspects of performance.

Unpacking positive performance

Reflecting on our experiences teaching plus-delta to simulation educators around the world, we have discovered a tendency to focus on discussion of “delta” items at the expense of “plus” items. An inherent assumption drives this behavior, namely that learners derive more value from learning from poor performance than from good performance [39]. This concept, referred to in the psychology literature as negativity bias [40], is especially pronounced when learners feel there is an opportunity to adapt their performance [41], as in simulation. As educators, when we see healthcare teams excel during clinical scenarios, we assume that all team members appreciate that all aspects of the case were managed well and how they were able to collectively achieve those goals. This is a dangerous assumption. When one learner does something properly, the other learners do not automatically appreciate (a) what was done well, (b) how it was done, and (c) why it was important to be done in that fashion. Failure to explore aspects of positive performance represents missed learning opportunities during debriefing [42].

We support Dieckmann et al.’s assertion about the value in unpacking positive performance (i.e., “learning from success”) during debriefings [43] and believe that plus-delta facilitates this activity. Following up the “plus” question with additional probing questions to explore the “what,” “how,” and “why” aspects of performance will deepen learning. For example, in response to the question “What aspects of performance were done well?”, learners may say: “I really thought that Michael did a great job as the team leader – he was awesome!”. To unpack this further, the facilitator could ask: “Tell me more about what you liked about Michael’s leadership”, “What made Michael an effective leader?”, “How did Michael bring you together as a team?”, or “Why was it so important to have a strong leader?” (Fig. 3). Alternatively, a skilled facilitator may further deepen discussion through focused facilitation (e.g., advocacy inquiry [37, 44], circular questions [45]) to explore the underlying rationale for these behaviors [12] (Table 2). All of these approaches encourage learners to reflect deeply on one aspect of the team’s performance, thus ensuring that all learners can carry these positive behaviors through to their next clinical encounter.

Fig. 3 Steps for unpacking positive performance in plus-delta: 1. Initiating plus discussion. 2. Re-directing plus discussion. 3. Exploring specific behavior. 4. Exploring team dynamics

Table 2 Examples—language to manage perception mismatches in debriefing

Managing perception mismatches

One challenge facilitators face arises when their assessment of learner performance differs from the learners’ perception of their own performance. The plus-delta approach captures a small “biopsy” of learner insights. With just one or two questions, facilitators obtain an overview of how learners viewed their own performance, which they can quickly compare with their own personal assessment and/or pre-defined performance measures. In some instances, learners provide a self-assessment that does not agree with the facilitator’s assessment of their performance [19, 22, 23, 25, 46]. This becomes clear when one or more learners categorize behaviors in the “plus” column that the facilitator believes belong in the “delta” column, or vice versa. Here facilitators face a perception mismatch—namely, learners believe they have performed well, when in fact they have performed below the standard (or vice versa). Discordant assessments of performance amongst learners likewise highlight differences in perception that require further discussion. This is important because people tend to wrongly assume that others share their perceptions [47], which prevents them from discussing these differences explicitly. Reflecting on differences in perception allows team members to update team mental models, the knowledge structures that enable team members to build accurate explanations and expectations of a task [14, 48]. As such, facilitators should prioritize perception mismatches as key learning opportunities during debriefings. Perception mismatches also threaten psychologically safe learning environments: without the feeling that they can speak their mind, learners may withhold their self-assessment to protect themselves from feared criticism, or feel alone with, or even ashamed of, their individual perception [49].

To foster psychologically safe conversations when perception mismatches exist, we encourage facilitators to explicitly introduce the issue with a preview statement: “I’m hearing two slightly different perspectives on the way the team approached airway management. Let’s spend some time reflecting on how and why this unfolded….” A preview statement provides clarity and frames the upcoming portion of discussion for learners. Facilitators may subsequently pose additional probing questions to explore the “what,” “how,” and “why” of the performance, or they may use specific focused facilitation strategies (e.g., advocacy inquiry [37, 44] or circular questions [45]) to uncover the rationale driving certain learner behaviors (Table 2). Facilitators also help normalize differences in experience and explicitly appreciate shared self-assessments that seem to stand out or be in the minority. This intervention also helps manage group polarization (i.e., a shift towards talking about certain issues while neglecting others) [50]. Through these combined approaches, facilitators gather various perspectives, gain understanding about learners’ rationale for behavior, and work to close gaps in knowledge, skills, or teamwork that contributed to the perception mismatch.


The process of learner self-assessment enables performance improvement, lifelong learning, and most importantly, safe patient care. A genuine connection between the educator and learner fosters learning through the self-assessment process [26]. In debriefing, this connection can be built by ensuring a psychologically safe learning environment through implicit (e.g., body language, eye contact) and explicit strategies (e.g., validation, normalization) [49]. To maximize the benefit of this process, the facilitator should work towards optimizing accurate learner self-assessment.

In describing effective informed self-assessment practices, Epstein et al. highlight that the “power of self-assessment lies in … the integration of high-quality external and internal data” to assess performance [51]. Many debriefings rely heavily (or entirely) upon internal data, or learners’ “self-perception of their performance and emotional state” [31], which is subject to personal biases and is often flawed. The incorporation of external data sources (e.g., objective data, performance checklists, and video) into debriefing conversations can counter biases and misperceptions arising from internal data. Recently published guidelines from the American Heart Association recommend the inclusion of objective CPR data during post-event debriefings, as evidence suggests data-informed debriefing improves provider performance and survival outcomes from cardiac arrest [52]. The impact of using performance checklists as external data sources can be augmented if learners clearly understand these benchmarks, and if learners actively make judgments of their performance using these criteria [30, 53]. Introducing performance standards during the pre-briefing, coupled with a plus-delta approach supported by performance checklist review (relative to performance) during the debriefing, would enact this recommendation. Lastly, we see opportunities for the selective use of video as objective, external data to facilitate informed learner self-assessment during debriefing. Video review could potentially clarify misperceptions in performance, or serve to illustrate outstanding performance that meets or exceeds standards [29].

Learner self-assessment, while often fraught with inaccuracies, has clear benefits that can support learning during debriefing. Ross’ four-stage model provides a guiding framework for specific strategies that foster learning through self-assessment in simulation-based education [15]. Facilitators may further master the art of plus-delta by managing perception mismatches, selectively engaging learners in self-assessing performance at either the “big picture” level or for specific performance issues, thoughtfully using single- vs. double-barreled questions, and unpacking positive performance. In providing evidence and strategies for informed learner self-assessment, we hope facilitators will embrace and confidently implement the plus-delta approach to debriefing in a manner that further enhances learning outcomes.

Availability of data and materials

Not Applicable


  1. Kolbe M, Grande B, Spahn DR. Briefing and debriefing during simulation-based training and beyond: content, structure, attitude and setting. Best Pract Res Clin Anaesthesiol. 2015;29(1):87–96.

    Article  PubMed  Google Scholar 

  2. Sawyer T, Eppich W, Brett-Fleegler M, Grant V, Cheng A. More than one way to debrief a critical review of healthcare simulation debriefing methods. Simul Healthc. 2016;11(3):209–17.

    Article  PubMed  Google Scholar 

  3. Cheng A, Eppich W, Grant V, Sherbino J, Zendejas B, Cook DA. Debriefing for technology-enhanced simulation: a systematic review and meta-analysis. Med Educ. 2014;48(7):657–66.

    Article  PubMed  Google Scholar 

  4. Arafeh JM, Hansen SS, Nichols A. Debriefing in simulated-based learning: facilitating a reflective discussion. J Perinat Neonatal Nurs. 2010;24(4):302–9.

    Article  PubMed  Google Scholar 

  5. Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc. 2007;2(2):115–25.

    Article  PubMed  Google Scholar 

  6. Levett-Jones T, Lapkin S. A systematic review of the effectiveness of simulation debriefing in health professional education. Nurse Educ Today. 2014;34(6):e58–63.

    Article  PubMed  Google Scholar 

  7. Mullan PC, Wuestner E, Kerr TD, Christopher DP, Patel B. Implementation of an in situ qualitative debriefing tool for resuscitations. Resuscitation. 2013;84(7):946–51.

    Article  PubMed  Google Scholar 

  8. Sweberg T, Sen AI, Mullan PC, Cheng A, Knight L, del Castillo J, et al. Description of hot debriefings after in-hospital cardiac arrests in an international pediatric quality improvement collaborative. Resuscitation. 2018;128:181–7.

    Article  PubMed  Google Scholar 

  9. Zinns LE, Mullan PC, O'Connell KJ, Ryan LM, Wratney AT. An evaluation of a new debriefing framework: REFLECT. Pediatric Emergency Care. 2017;Publish Ahead of Print.

  10. Ahmed M, Arora S, Russ S, Darzi A, Vincent C, Sevdalis N. Operation debrief: a SHARP improvement in performance feedback in the operating room. Ann Surg. 2013;258(6):958–63.

    Article  PubMed  Google Scholar 

  11. Rose S, Cheng A. Charge nurse facilitated clinical debriefing in the emergency department. CJEM. 2018:1–5.

  12. Eppich W, Cheng A. Promoting Excellence and Reflective Learning in Simulation (PEARLS): development and rationale for a blended approach to health care simulation debriefing. Simul Healthc. 2015;10(2):106–15.

    Article  PubMed  Google Scholar 

  13. Schmutz J, Eppich W. When I say ... team reflexivity. Medical Education. 2019;53(6):545-6, When I say … team reflexivity, DOI:

  14. Schmutz J, Eppich WJ. Promoting learning and patient care through shared reflection: a conceptual framework for team reflexivity in health care. Acad Med. 2017;92(11):1555–63.

    Article  PubMed  Google Scholar 

  15. Ross JA, Rolheiser C, Hogaboam-Gray A. Student evaluation in co-operative learning: teacher cognitions. Teachers and Teaching. 2006;4(2):299–316.

    Article  Google Scholar 

  16. Sawyer TL, Deering S. Adaptation of the US Army’s After-Action Review for simulation debriefing in healthcare. Simul Healthc. 2013;8(6):388–97.

    Article  PubMed  Google Scholar 

  17. Morrison J, Meliza L. Foundations of the After Action Review Process. Special Report #42. US Army Research Institute for the Behavioral and: Social Sciences; 1999.

    Google Scholar 

  18. Cheng A, Morse KJ, Rudolph J, Arab AA, Runnacles J, Eppich W. Learner-centered debriefing for health care simulation education: lessons for faculty development. Simul Healthc. 2016;11(1):32–40.

    Article  PubMed  Google Scholar 

  19. Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA. 2006;296(9):1094–102.

    Article  CAS  PubMed  Google Scholar 

  20. Andrade HL. A critical review of research on student self-assessment. Frontiers in Education. 2019;4.

  21. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments. J Pers Soc Psychol. 1999;77(6):1121–34.

    Article  CAS  PubMed  Google Scholar 

  22. Hodges B, Regehr G, Martin DR. Difficulties in recognizing one’s own incompetence: novice physicians who are unskilled and unaware of it. Acad Med. 2001;76(Supplement):S87–9.

    Article  CAS  PubMed  Google Scholar 

  23. Abadel FT, Hattab AS. How does the medical graduates' self-assessment of their clinical competency differ from experts’ assessment? BMC Medical Education. 2013;13(1):24.

    Article  PubMed  PubMed Central  Google Scholar 

  24. Gordon MJ. A review of the validity and accuracy of self-assessments in health professions training. Acad Med. 1991;66(12):762–9.

    Article  CAS  PubMed  Google Scholar 

  25. Cheng A, Overly F, Kessler D, Nadkarni VM, Lin Y, Doan Q, et al. Perception of CPR quality: influence of CPR feedback, Just-in-Time CPR training and provider role. Resuscitation. 2015;87:44–50.

    Article  PubMed  Google Scholar 

  26. Pisklakov S, Rimal J, McGuirt S. Role of self-evaluation and self-assessment in medical student and resident education. British J Educ, Soc Behav Sci. 2014;4(1):1–9.

    Article  Google Scholar 

  27. Arnold L, Willoughby T, Calkins E. Self-evaluation in undergraduate medical education: a longitudinal perspective. J Med Educ. 1985;60(1):21–8.

    Article  CAS  PubMed  Google Scholar 

  28. Leaf D, Neighbor W, Schaad D, Scott C. A comparison of self-report and chart audit in studying resident physician assessment of cardiac risk factors. J Gen Intern Med . 1995;10(4):194–8.

    Article  CAS  PubMed  Google Scholar 

  29. Hawkins SC, Osborne A, Schofield SJ, Pournaras DJ, Chester JF. Improving the accuracy of self-assessment of practical clinical skills using video feedback--the importance of including benchmarks. Med Teach. 2012;34(4):279–84.

    Article  CAS  PubMed  Google Scholar 

  30. Sargeant J. How external performance standards inform self-assessment. Med Teach. 2012;34(4):267–8.

    Article  PubMed  Google Scholar 

  31. Sargeant J, Lockyer J, Mann K, Holmboe E, Silver I, Armson H, et al. Facilitated reflective performance feedback: developing an evidence- and theory-based model that builds relationship, explores reactions and content, and coaches for performance change (R2C2). Acad Med. 2015;90(12):1698–706.

    Article  PubMed  Google Scholar 

  32. Mann K, van der Vleuten C, Eva K, Armson H, Chesluk B, Dornan T, et al. Tensions in informed self-assessment: how the desire for feedback and reticence to collect and use it can conflict. Acad Med. 2011;86(9):1120–7.

    Article  PubMed  Google Scholar 

  33. Sargeant J, Eva KW, Armson H, Chesluk B, Dornan T, Holmboe E, et al. Features of assessment learners use to make informed self-assessments of clinical performance. Med Educ. 2011;45(6):636–47.

    Article  PubMed  Google Scholar 

  34. Colthart I, Bagnall G, Evans A, et al. The effectiveness of self-assessment on the identification of learner needs, learner activity, and impact on clinical practice: BEME Guide no. 10. Med Teach. 2009;30(2):124-145.

  35. Cheng A, Nadkarni VM, Mancini MB, Hunt EA, Sinz EH, Merchant RM, et al. Resuscitation education science: educational strategies to improve outcomes from cardiac arrest: a scientific statement from the American Heart Association. Circulation. 2018;138(6):e82–e122.

  36. West MA. Reflexivity, revolution and innovation in work teams. In: Beyerlein MM, Johnson DA, Beyerlein ST, editors. Product Development Teams. Stamford, CT: JAI Press; 2000. p. 1–29.

  37. Rudolph JW, Simon R, Rivard P, Dufresne RL, Raemer DB. Debriefing with good judgment: combining rigorous feedback with genuine inquiry. Anesthesiol Clin. 2007;25(2):361–76.

  38. Cheng A, Grant V, Robinson T, Catena H, Lachapelle K, Kim J, et al. The Promoting Excellence and Reflective Learning in Simulation (PEARLS) approach to health care debriefing: a faculty development guide. Clin Simul Nurs. 2016;12(10):419–28.

  39. Baumeister RF, Bratslavsky E, Finkenauer C, Vohs KD. Bad is stronger than good. Rev Gen Psychol. 2001;5(4):323–70.

  40. Vaish A, Grossmann T, Woodward A. Not all emotions are created equal: The negativity bias in social-emotional development. Psychol Bull. 2008;134(3):383–403.

  41. Muller-Pinzler L, Czekalla N, Mayer AV, et al. Negativity-bias in forming beliefs about own abilities. Sci Rep. 2019;9(1):14416.

  42. Hollnagel E. Safety-I and Safety-II: the past and future of safety management. Farnham, UK: Ashgate; 2014.

  43. Dieckmann P, Patterson M, Lahlou S, Mesman J, Nyström P, Krage R. Variation and adaptation: learning from success in patient safety-oriented simulation training. Adv Simul. 2017;2(1):1–14.

  44. Rudolph JW, Simon R, Dufresne RL, Raemer DB. There’s no such thing as “nonjudgmental” debriefing: a theory and method for debriefing with good judgment. Simul Healthc. 2006;1(1):49–55.

  45. Kolbe M, Marty A, Seelandt J, Grande B. How to debrief teamwork interactions: using circular questions to explore and change team interaction patterns. Adv Simul. 2016;1(1):1–8.

  46. Gordon MJ. A review of the validity and accuracy of self-assessments in health professions training. Acad Med. 1992;66:762–9.

  47. Ross L, Greene D, House P. The “false consensus effect”: an egocentric bias in social perception and attribution processes. J Exp Soc Psychol. 1977;13(3):279–301.

  48. Mohammed S, Ferzandi L, Hamilton K. Metaphor no more: a 15-year review of the team mental model construct. J Manag. 2010;36(4):876–910.

  49. Kolbe M, Eppich W, Rudolph J, Meguerdichian M, Catena H, Cripps A, et al. Managing psychological safety in debriefings: a dynamic balancing act. BMJ Simul Technol Enhanc Learn. 2020;6(3):164–71.

  50. Myers DG, Lamm H. The group polarization phenomenon. Psychol Bull. 1976;83(4):602–27.

  51. Epstein RM, Siegel DJ, Silberman J. Self-monitoring in clinical practice: a challenge for medical educators. J Contin Educ Health Prof. 2008;28(1):5–13.

  52. Cheng A, Magid DJ, Auerbach M, Bhanji F, Bigham B, Blewer AL, et al. Part 6: Resuscitation Education Science. 2020 American Heart Association Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. Circulation. 2020;142(suppl 2):S551–S79.

  53. Boud D. Enhancing learning through self-assessment. London: Kogan Page Limited; 1995.


Acknowledgements

We would like to dedicate this paper to our friend and colleague, Chad Epps, who left us too soon. Chad … you will be sorely missed, but your legacy will live on forever.


Funding

Not Applicable

Author information


Contributions

All authors contributed to the development and refinement of the content and drafting and revision of the manuscript and provided final approval of the manuscript as submitted.

Corresponding author

Correspondence to A. Cheng.

Ethics declarations

Ethics approval and consent to participate

Not Applicable

Consent for publication

Not Applicable

Competing interests

All authors are faculty for the Debriefing Academy, which runs debriefing courses for healthcare professionals. Michaela Kolbe is faculty at the Simulation Center of the University Hospital and the Debriefing Academy, both providing debriefing faculty development training. Walter Eppich receives salary support from the Center for Medical Simulation and the Debriefing Academy to teach on simulation educator courses; he also receives per diem honorarium from PAEDSIM e.V. to teach on simulation educator courses in Germany.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Cheng, A., Eppich, W., Epps, C. et al. Embracing informed learner self-assessment during debriefing: the art of plus-delta. Adv Simul 6, 22 (2021).
