Putting the “learning” in “pre-learning”: effects of a self-directed study hall on skill acquisition in a simulation-based central line insertion course

Abstract

Background

Opportunities to practice procedural skills in the clinical learning environment are decreasing, and faculty time to coach skills is limited, even in simulation-based training. Self-directed learning with hands-on practice early in a procedural skill course might help maximize the benefit of later faculty coaching and clinical experience. However, it may also lead to well-learned errors if learners lack critical guidance. The present study sought to investigate the effects of a hands-on, self-directed “study hall” for central line insertion among first-year residents.

Methods

Learner cohorts from before vs. after introduction of the study hall (n = 49) were compared on pre- and post-test performance of key procedural behaviors that were comparable across cohorts; all learners received traditional instructor-led training between tests.

Results

Study hall participants spent a median of 116 min in hands-on practice (range 57–175). They scored higher at pre-test (44% vs. 27%, p < .01; Cohen’s d = 0.95) and at post-test (80% vs. 72%, p = .02; Cohen’s d = 0.69). A dose–response relationship was also found: 2 h of study hall predicted a performance improvement roughly equivalent to that associated with four clinical observations or supervised insertions of central lines.

Conclusions

A self-directed, hands-on “study hall” supported improved procedural skill learning in the context of limited faculty availability. Potential additional benefits make the approach worth further experimentation and evaluation.

Background

Even as the Accreditation Council for Graduate Medical Education and others push for more objective measurement of learners’ preparedness for practice in healthcare [1], opportunities to learn and maintain procedural skills are under pressure from multiple directions. Learners needing practice opportunities find that procedures are increasingly performed by specialized proceduralist services and prehospital providers [2] or are becoming rarer as alternative treatments emerge [3]. Meanwhile, the increasing patient volume and pace of the clinical environment make it more challenging than ever for faculty and learners to set aside time for procedural instruction in the workplace [4]. It should thus not be surprising that learners often struggle to perform key procedures [5, 6]. Simulation is a powerful tool for learning [7], but it too requires scarce faculty time, both for instruction itself and for the professional development that simulation-based education demands.

Ideally, learning resources could help learners “move up” the learning curve more quickly, reserving faculty time for when it is most necessary. For instance, central line insertion is a complex procedure, but it is not clear that all aspects of it require expert instruction. Some learning involves simply orienting to the vast array of necessary equipment or practicing unfamiliar but straightforward maneuvers. If learners could gain foundational knowledge and skill beforehand, time with instructors could focus on task components not amenable to self-directed learning, such as the finer points of ultrasound/needle coordination. Simulation-based procedural training courses do often feature “pre-learning” assignments, such as journal article readings or multimedia lectures and demonstrations [8, 9]. However, because these are often relatively passive, their effects seem unlikely to be strong or durable [10]. Far less common, and somewhat controversial, is supporting learners in self-directed procedural practice during the “pre-learning” phase.

Learning science is unclear as to whether such early self-directed practice by novice learners enhances or inhibits learning. Several theories, including cognitive load theory, encourage maximum coaching early in the learning process [11, 12]. Findings also suggest that learners struggle to self-assess their learning [13] and thus may make poor use of independent practice opportunities. Consistent with this, a recent study found that expert feedback during early deliberate practice supported greater learning of endourologic skills than did feedback provided during a later deliberate practice session [14]. However, other frameworks encourage educators to provide more self-directed learning opportunities [15], and preliminary evidence suggests that for fundamental laparoscopic skills, learner self-directed practice on take-home “box trainers” has led to positive learning outcomes with minimal prior coaching [16]. As such, we wondered how incorporating more self-directed practice early in a procedural skill course would affect learning.

In this study, we investigated residents’ scores on simulation-based central line insertion assessments conducted both before and after a traditional instructor-led training session, comparing learners who did versus did not attend an initial self-directed practice “study hall.” We hypothesized that study hall would facilitate higher pre-test scores and greater learning gains from instructor-led training.

Methods

The study is quasi-experimental, with pre- and post-tests across two cohorts of first-year internal medicine residents from consecutive academic years. The initial cohort served as the control group; residents in the following academic year (the treatment group) participated in the study hall intervention prior to the three curricular components common to both groups: pre-test, instructor-led training, and post-test. The study was approved by the University of Kansas Medical Center Institutional Review Board.

Participants

Learner demographics are given in Table 1. Forty-nine of 51 learners consented to participate; all were first-year internal medicine residents in a mandatory central line insertion course. We focused analyses on first-year residents because performance scores for more senior residents would be confounded with prior simulation-based training experiences.

Table 1 Participant demographics

Measures

Each learner completed two central line insertion assessments (pre- and post-test) in a simulated environment. During each assessment, the learner had the opportunity to place a central line in a manikin situated in a simulated hospital room staged with equipment and supplies identical to those found in the local clinical environment. Study personnel served as the patient voice, and a chief resident was trained to play the role of non-sterile assistant, enabling learners to attempt a complete insertion from patient greeting through final confirmation of successful placement. After each pre- and post-test, the chief resident debriefed the learner on observed errors. Video and audio recordings were reviewed by trained observers using a scoring key designed by an interdisciplinary group of expert clinicians for the local health system; the key specified procedural steps and the observable behaviors associated with each (Additional file 1: Appendix A). To mitigate internal validity threats associated with quasi-experimental research, the research team convened to compare assessment and training particulars across the two cohorts, noting any inconsistencies that might compromise fair comparison. For this study, we reduced the data to comparable behaviors only. A sample of 20% of assessments was double coded to ensure reliability.

Additionally, a demographic survey administered to all participants included self-reported numbers of central lines inserted and observed prior to study hall. Videos of each learner’s study hall session were reviewed by the research team; time spent in hands-on practice was recorded as the time between completion of the study hall orientation activities and initiation of the exit survey.

Study procedures

Guidance was provided to support self-directed practice during the study hall sessions. A nonclinical proctor oriented learners to the individual learning stations, each including a simulator, ultrasound, line insertion equipment, and an iPad with a multimedia didactic and demonstration learning module. Learners were provided with a list of procedural steps, each demonstrated in institution-specific videos within the learning module. Between one and four learners attended any given session and were free to learn separately or collaboratively. The proctor was present to answer basic questions and address equipment issues. Participants were encouraged to practice for at least 2 h but were free to practice longer as desired and feasible.

Approximately 4 weeks later (or, for control participants, as their initial course experience), participants completed the pre-test. Later that week, they completed an approximately 2-h instructor-led training session with two faculty instructors and up to five learners. Finally, again later that week, they completed the post-test.

Analyses

We used generalizability theory to estimate measurement reliability and then compared pre- and post-test scores by condition to estimate the overall effect of the intervention. To investigate dose–response relationships, pre-test scores were regressed on study hall time as the main predictor of interest, controlling for each participant’s number of central line insertions either observed or performed under supervision in clinical practice. Post-test scores were similarly regressed on study hall time and number of observed or supervised insertions, along with pre-test scores. Finally, we explored which items showed the greatest differences in probability of correct performance between control and treatment participants.
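
As an illustration, the two dose–response models could be specified as below. This is a minimal sketch assuming a tidy per-learner data file; the file name and column names (study_hall_min, n_obs_sup_insertions, pretest, posttest) are hypothetical, not the authors’ actual variables.

```python
# Minimal sketch of the two dose-response regressions, assuming one row
# per learner. All column names and the file name are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("course_scores.csv")  # hypothetical data file

# Pre-test score regressed on study hall minutes, controlling for
# observed or supervised clinical insertions.
m_pre = smf.ols("pretest ~ study_hall_min + n_obs_sup_insertions", data=df).fit()

# Post-test score regressed on the same predictors plus pre-test score.
m_post = smf.ols(
    "posttest ~ study_hall_min + n_obs_sup_insertions + pretest", data=df
).fit()

print(m_pre.summary())
print(m_post.summary())
```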

Results

Descriptive statistics

Among the treatment group (omitting the three participants who were unable to attend), the median time spent in hands-on practice during study hall was 116 min (range 57–175 min).

Reliability

Generalizability analyses of double-coded scores (within a fully crossed, learners-by-items-by-raters model, with items fixed) showed rank-order reliability of 0.83 for an individual item and 0.97 for total scores.
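
For readers less familiar with generalizability theory, the standard relative (rank-order) generalizability coefficient for a fully crossed learners × items × raters design with items fixed takes the form below. This is the textbook formula, shown for orientation, not a report of the authors’ exact computation:

$$
E\rho^2 = \frac{\sigma^2_\tau}{\sigma^2_\tau + \sigma^2_\delta}, \qquad
\sigma^2_\tau = \sigma^2_p + \frac{\sigma^2_{pi}}{n_i}, \qquad
\sigma^2_\delta = \frac{\sigma^2_{pr}}{n_r} + \frac{\sigma^2_{pir,e}}{n_i\,n_r}
$$

where $\sigma^2_p$ is learner (person) variance, the remaining components are person-by-facet interactions and residual error, and $n_i$ and $n_r$ are the numbers of items and raters. Setting $n_i = 1$ would correspond to the single-item coefficient; using the full item set, to the total-score coefficient.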

Treatment–control differences

We found statistically significant differences between the control and treatment groups in both pre-test and post-test scores. First, Fig. 1 depicts the discernible difference in pre-test scores for learners in the control vs. treatment groups (unequal-variance t = 3.25, p < .01). Mean control and treatment group scores were 27% and 44%, respectively (SDs = 14% and 21%), a large effect of the study hall intervention (Cohen’s d = 0.95). Second, a discernible effect was also seen for post-test scores (t = 2.35, p = .02). Control and treatment group means were 72% and 80%, respectively (SDs = 10% and 12%), a medium effect (Cohen’s d = 0.69).
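
For concreteness, the comparison above (Welch’s unequal-variance t-test with a pooled-SD Cohen’s d) can be sketched as follows. The scores and group sizes below are simulated placeholders, not the study data.

```python
# Sketch of the group comparison: Welch's t-test plus Cohen's d.
# Scores below are simulated placeholders, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(0.27, 0.14, size=24)    # hypothetical control scores
treatment = rng.normal(0.44, 0.21, size=25)  # hypothetical treatment scores

# Welch's t-test does not assume equal group variances.
t, p = stats.ttest_ind(treatment, control, equal_var=False)

# Cohen's d from the pooled standard deviation.
n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treatment.std(ddof=1) ** 2
                     + (n2 - 1) * control.std(ddof=1) ** 2) / (n1 + n2 - 2))
d = (treatment.mean() - control.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```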

Fig. 1 Total correct scores on procedural steps at pre- and post-test, control vs. treatment

Dose–response effects: controlling for insertion experience

In the regression model predicting pre-test scores, minutes spent in study hall was a significant predictor (p = .03), while the number of observed or supervised insertions was not (p = .09). The predicted pre-test score for an average learner not attending study hall was 25%. Spending 120 min in study hall was predicted to improve that score by 12 percentage points (to 37%), roughly equivalent to the improvement predicted from four observations or supervised insertions in the clinical environment. In the regression model predicting post-test scores, only pre-test score was a significant predictor (b = 0.32, p < .01). For both regression models, diagnostics were favorable (i.e., significant model fit and low variance inflation factors).
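
The per-unit effects implied by the equivalence above can be checked with simple arithmetic; the coefficients below are back-calculated from the reported predictions, not taken from the fitted model.

```python
# Back-of-envelope check of the stated dose-response equivalence.
# Values are inferred from the reported predictions, not the model itself.
gain_points = 12                      # percentage points gained per 120 min
per_minute = gain_points / 120        # ~0.1 points per study-hall minute
per_insertion = gain_points / 4       # ~3 points per observation/supervised insertion
print(per_minute, per_insertion)      # 0.1 3.0
```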

Item analyses

Table 2 shows the percentage of learners who completed each step correctly on the pre-test and post-test for both the control and treatment groups, along with the “normalized gain or loss” for each item [17]. The normalized gain quantifies the proportion of the possible gain the treatment group achieved relative to the room for improvement in control group performance. For example, referring to Table 2, 11% of learners in the control group completed “prepare insertion kit” correctly on the pre-test. The maximum gain possible for the treatment group would therefore be an additional 89% (i.e., 100% − 11% = 89%). In actuality, 59% of learners in the treatment group completed the step correctly on the pre-test after having participated in study hall. Thus, the normalized gain of the treatment group over the control group for this item was 54% (i.e., (59% − 11%)/89%). All but two of the 21 items showed a normalized gain on the pre-test for learners who participated in the study hall.
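
To make the calculation concrete, the following minimal sketch (the function is ours, not the authors’ code) reproduces the worked example above:

```python
# Normalized gain: share of the control group's room-for-improvement
# actually achieved by the treatment group.
def normalized_gain(p_treatment: float, p_control: float) -> float:
    return (p_treatment - p_control) / (1.0 - p_control)

# "Prepare insertion kit" example: 59% treatment vs. 11% control correct.
print(f"{normalized_gain(0.59, 0.11):.0%}")  # 54%
```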

Table 2 Item analysis. Percent of learners correct by procedural step: control vs. treatment at pre- and post-test

Discussion

This study investigated the effects of a self-directed, hands-on “study hall” for central line insertion. Consistent with hypotheses, study hall participation was associated with considerable gains in both pre- and post-test scores. Regression analyses suggest that the effects persist even after controlling for line insertion experiences in the clinical environment, in the form of enhanced pre-test scores, which then predict enhanced post-test scores. The effect sizes compare favorably to learning associated with observation and supervised insertion in the clinical learning environment, suggesting the approach may be used to complement clinical experience.

One motivating factor in the design of the study hall intervention was to facilitate self-directed learning of basic procedural steps that can be accomplished without expert coaching, thereby preserving faculty time for deliberate practice of more advanced skills. Although a full comparative analysis of performance on individual procedural steps between the control and treatment groups is beyond the scope of this study, we did calculate the normalized gain (or loss), the proportion of the possible gain the treatment group achieved relative to the room for improvement in control group performance. The normalized gain was most positive for several basic procedural steps (e.g., prepare insertion kit, 54% of possible improvement at pre-test and 89% at post-test; clean area with chlorhexidine, 38% and 44%, respectively). This supports our vision for the progression of learning at each phase: “study hall” offers ample practice on content that does not require close instruction or coaching; formal training ensures sufficient (but necessarily limited) time for deliberate practice with expert feedback on more difficult-to-learn content; and the clinical learning environment, the most realistic setting but the least amenable to learner-adaptive deliberate practice, is where learners master the most complex task elements (e.g., patient variation).

One possible risk of self-directed, hands-on practice is that novices learn incorrect procedures. We did not see evidence of such negative learning overall: mean treatment group scores on both the pre- and post-tests were higher than mean control group scores, and those differences were statistically significant. Looking at individual procedural steps in Table 2, the average treatment group score exceeded the average control group score for 19 of 21 steps on the pre-test and 13 of 21 on the post-test. Further exploration of the variation in normalized gain or loss across procedural steps is an area ripe for investigation.

Several aspects of this intervention make us optimistic that it can show even greater positive learning effects. For one, this was the first offering of study hall at our institution, and we have since refined it to encourage more effective peer learning practices [18] and greater use of assessment for learning [19], consistent with the finding that guided self-direction is more effective than both unstructured self-direction and non-learner-directed practice [20]. Additionally, the study hall sessions include first-, second-, and third-year residents, which has led to impactful peer coaching, particularly when senior residents are paired with less experienced peers. Since learners partially structure their own practice, study hall also creates opportunities for feedback and coaching on learners’ self-regulated learning strategies, referred to as “second-order scaffolding” [21] or “preparation for future learning” [22]. For instance, an educator might prompt reflection when a learner opts to engage in little practice and then shows sub-par performance, or might applaud a learner who pushed themselves, made productive errors, and learned from them [23]. Finally, the intervention is scalable and convenient for learners. While the equipment required is substantial, it may be provided in a room with minimal support, without needing to align schedules between learners and faculty. Study hall also makes it easier for learners to space practice over multiple sessions, which can dramatically improve learning [24, 25].

The quasi-experimental nature of the study limits inferences somewhat, though we applied several statistical and logical controls to reduce validity threats [26]. Controlling for prior exposure to the central line course meant that we only investigated the study hall’s effects with first-year residents; however, we offer study hall to more senior residents as well, and it would be interesting to gauge its effects on their learning, to the extent that such effects can be disentangled from other factors related to their performance.

Several lines of follow-on research seem promising. First, it would be useful to model and maximize learner engagement in self-directed practice and to optimize learners’ practice strategies. One “high-tech” possibility in development is the use of intelligent sensing and tutoring technology to partially play the role a live coach might play in guiding learners [27, 28]. Similarly, “low-tech” peer coaching might help ensure more effective practice. Second, we are curious about the curricular impacts of adopting more guided self-directed learning of this nature, specifically whether it improves future learning behaviors broadly, outside the simulation center and/or beyond the specific procedure being learned. Third, we are interested in exploring the impact of self-directed, hands-on learning on the durability of learning, as well as experimenting with self-directed learning as a follow-up to instructor-led training.

Conclusions

Our initial evaluation of a self-directed “study hall” with high-fidelity practice opportunities for central line insertion suggests the approach can have powerful effects for learning. As the practice of simulation-based healthcare education grows, we anticipate that guided learner self-direction will play an increasing role in helping expand simulation’s reach and impact.

Availability of data and materials

The datasets used and/or analyzed for the current study are available from the corresponding author on reasonable request.

References

  1. Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system — rationale and benefits. N Engl J Med. 2012;366(11):1051–6.

  2. Gisondi MA, Regan L, Branzetti J, Hopson LR. More learners, finite resources, and the changing landscape of procedural training at the bedside. Acad Med. 2017; Publish Ahead of Print. Available from: http://journals.lww.com/academicmedicine/Abstract/publishahead/More_Learners,_Finite_Resources,_and_the_Changing.98041.aspx. Cited 2017 Dec 4.

  3. Kyser KL, Lu X, Santillan D, Santillan M, Caughey AB, Wilson MC, Cram P. Forceps delivery volumes in teaching and nonteaching hospitals: are volumes sufficient for physicians to acquire and maintain competence? Acad Med. 2014;89(1):71–6.

  4. Block SM, Sonnino RE, Bellini L. Defining “faculty” in academic medicine: responding to the challenges of a changing environment. Acad Med. 2015;90(3):279.

  5. Shafer S, Rooney D, Schumacher R, House JB. Lumbar punctures at an academic level 4 NICU: indications for a new curriculum. Teach Learn Med. 2015;27(2):205–7.

  6. Barsuk JH, Cohen ER, Feinglass J, McGaghie WC, Wayne DB. Residents’ procedural experience does not ensure competence: a research synthesis. J Grad Med Educ. 2017;9(2):201–8.

  7. McGaghie WC, Issenberg SB, Barsuk JH, Wayne DB. A critical review of simulation-based mastery learning with translational outcomes. Med Educ. 2014;48(4):375–85.

  8. Barsuk JH, Cohen ER, Feinglass J, McGaghie WC, Wayne DB. Use of simulation-based education to reduce catheter-related bloodstream infections. Arch Intern Med. 2009;169(15):1420–3.

  9. Cheung JJH, Koh J, Brett C, Bägli DJ, Kapralos B, Dubrowski A. Preparation with web-based observational practice improves efficiency of simulation-based mastery learning. Simul Healthc. 2016;11(5):316–22.

  10. Bjork RA. Memory and metamemory considerations in the training of human beings. In: Metcalfe J, Shimamura A, editors. Metacognition: knowing about knowing. Cambridge: MIT Press; 1994. p. 185–205. Available from: http://psycnet.apa.org/psycinfo/1994-97967-009. Cited 2012 Nov 30.

  11. Van Merrienboer JJG, Sweller J. Cognitive load theory in health professional education: design principles and strategies. Med Educ. 2010;44(1):85–93.

  12. Sawyer T, White M, Zaveri P, Chang T, Ades A, French H, et al. Learn, see, practice, prove, do, maintain: an evidence-based pedagogical framework for procedural skill training in medicine. Acad Med. 2015;90(8):1025–33.

  13. Eva KW, Cunnington JPW, Reiter HI, Keane DR, Norman GR. How can I know what I don’t know? Poor self assessment in a well-defined domain. Adv Health Sci Educ. 2004;9(3):211–24.

  14. Lee JY, McDougall EM, Lineberry M, Tekian A. Optimizing the timing of expert feedback during simulation-based spaced practice of endourologic skills. Simul Healthc. 2016;11(4):257.

  15. Cutrer WB, Miller B, Pusic MV, Mejicano G, Mangrulkar RS, Gruppen LD, et al. Fostering the development of master adaptive learners: a conceptual model to guide skill acquisition in medical education. Acad Med. 2016;92(1):70–5.

  16. Korndorffer JR, Bellows CF, Tekian A, Harris IB, Downing SM. Effective home laparoscopic simulation training: a preliminary evaluation of an improved training paradigm. Am J Surg. 2012;203(1):1–7.

  17. Hake R. Interactive-engagement versus traditional methods: a six-thousand-student survey of mechanics test data for introductory physics courses. Am J Phys. 1998;66(1):64–74.

  18. Bjerrum AS, Eika B, Charles P, Hilberg O. Dyad practice is efficient practice: a randomised bronchoscopy simulation study. Med Educ. 2014;48(7):705–12.

  19. Dannefer EF. Beyond assessment of learning toward assessment for learning: educating tomorrow’s physicians. Med Teach. 2013;35(7):560–3.

  20. Bell BS, Kozlowski SW. Adaptive guidance: enhancing self-regulation, knowledge, and performance in technology-based training. Pers Psychol. 2006;55(2):267–306.

  21. van Merriënboer JJG, Kirschner PA. Ten steps to complex learning: a systematic approach to four-component instructional design. 2nd ed. New York: Routledge; 2012. p. 344.

  22. Mylopoulos M, Brydges R, Woods NN, Manzone J, Schwartz DL. Preparation for future learning: a missing competency in health professions education? Med Educ. 2016;50(1):115–23.

  23. Keith N, Frese M. Self-regulation in error management training: emotion control and metacognition as mediators of performance effects. J Appl Psychol. 2005;90(4):677–91.

  24. Moulton CAE, Dubrowski A, MacRae H, Graham B, Grober E, Reznick R. Teaching surgical skills: what kind of practice makes perfect? A randomized, controlled trial. Ann Surg. 2006;244(3):400–9.

  25. Sullivan NJ, Duval-Arnould J, Twilley M, Smith SP, Aksamit D, Boone-Guercio P, et al. Simulation exercise to improve retention of cardiopulmonary resuscitation priorities for in-hospital cardiac arrests: a randomized controlled trial. Resuscitation. 2015;86(Supplement C):6–13.

  26. Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin; 2002.

  27. Mejia EI, Yudkowsky R, Bui J, Lineberry M, Luciano C. Overview of multimodality motion tracking for training of central venous catheter placement. In: Proceedings of the 2015 Winter Simulation Conference (WSC). 2015. p. 1666–77.

  28. Lineberry M, Dev P, Lane HC, Talbot TB. Learner-adaptive educational technology for simulation in healthcare: foundations and opportunities. Simul Healthc. 2018; Publish Ahead of Print. Cited 2018 Feb 27.

Acknowledgements

Thanks to all the course faculty, simulation education specialists, simulation delivery team, and administrative team at the Zamierowski Institute for Experiential Learning for their tremendous work in supporting the delivery of the course this research is based on.

Funding

This work was supported by the Zamierowski Institute for Experiential Learning at the University of Kansas Medical Center and the University of Kansas Health System. This project did not receive grant funds from agencies in the public, commercial, or not-for-profit sectors.

Author information

Contributions

ED, ML, LT, and JB contributed to study design, intervention design and execution, measurement tool development, data collection, and interpretation of findings. ML also contributed to data analysis. VS, JB, KE, MM, and WH all contributed to data collection. VS, MM, AA, and KE contributed to data analysis, and AA contributed to data interpretation. ED and ML contributed to drafting of manuscript, and all authors read and approved the final manuscript.

Corresponding author

Correspondence to Emily Diederich.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the University of Kansas Medical Center Institutional Review Board.

Consent for publication

All 49 participants whose data are included in this study consented to participate.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Appendix A.

Central line insertion assessment: Concordance of comparable behaviors and procedural steps for academic year 2016 vs. 2017.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Diederich, E., Lineberry, M., Schott, V. et al. Putting the “learning” in “pre-learning”: effects of a self-directed study hall on skill acquisition in a simulation-based central line insertion course. Adv Simul 8, 21 (2023). https://doi.org/10.1186/s41077-023-00261-4
