The aim of this study was to assess the benefit of the CPEP with respect to medical student competence and confidence.
Competency outcomes—OSCE data
The primary objective of this study was to assess whether an intensive 4-day simulation course would result in competency outcomes comparable to those achieved by a standard 6-week clinical rotation in cardiovascular and respiratory medicine. The intention was not to replace the clinical rotation but to enhance the students’ experience of it by providing them with a strong clinical platform. This comparison used the current assessable standard, the OSCE. The data collected from the OSCEs demonstrated that there was no statistically significant difference between the intervention group’s OSCE scores after the first week and the control group’s scores at the end of 6 weeks.
These data suggest that, at the completion of 4 days of intensive simulation-based education, the intervention group had attained the same level of knowledge and skills that the control group achieved over a 6–8-week period.
Confidence rating—survey data
The secondary objective was to assess students’ confidence in managing cardiovascular and respiratory problems. Self-reported confidence survey results provided a comparison of the ‘post-course’ (intervention) and the ‘end of rotation’ (control) groups. This demonstrated that students exposed to CPEP went into the remaining 5 weeks of their rotation with confidence levels similar to those reported by the control group at the end of their rotation. In the end of rotation survey, nearly twice as many CPEP students rated themselves as ‘usually or always confident’ compared to the control group, representing a further increase in confidence from the immediate post-course results.
Front-loading the rotation’s education for the intervention group gave the students an opportunity to build upon their knowledge, skills, and confidence over the remaining 5 weeks. In contrast, development in the control group was more gradual, and these students did not necessarily have a base upon which to build their practical experiences. This may have wider curriculum applications, with the option to concentrate key knowledge and skill components in the early phase of clinical rotations to optimise efficacy. This may be applicable with or without the use of simulation-based learning and is not limited to undergraduate medical teaching.
The CPEP course had originally been designed to prepare the students for clinical practice, rather than to pass an OSCE. In fact, only two of the 16 modules were directly related to clinical examination. These modules were presented as interactive sessions (‘how to learn how to do a cardiac/respiratory clinical examination’) and were designed to place the clinical relevance of an examination into a real-life context. The same examination skills and knowledge were provided to the control group at various points in their rotation. Most clinicians and educators would expect students to emerge from a cardiovascular/respiratory term with some core fundamental knowledge and skills in place.
At the conclusion of the CPEP course, all of the intervention students went into their rotation rating themselves as either ‘sometimes or usually confident’. This effect continued to the end of rotation survey, with all intervention responders rating themselves as ‘sometimes, usually or always confident’ at this point. Although it is possible that the CPEP students developed falsely elevated confidence levels through their involvement in the course, the OSCE results suggested that they were at an acceptable ‘end of rotation’ level at that point. In comparison, it is concerning that a number of control group students finished a key rotation with no confidence in their ability to practise as a doctor in this area. In fact, the control group consistently returned more ‘rarely or never confident’ responses than the intervention group throughout the end of rotation subgroup analysis. Anecdotal comments from the participants suggest that some of the intervention group used the CPEP course to become aware early in their rotation of what they did and did not know, closing the gap between perceived and actual knowledge.
There are aspects of the clinical rotation that are difficult to replicate in simulation and are equally hard to measure. These include the real-time observation of clinical practice over days or weeks and the social and communication challenges that exposure to real patients provides. Allowing students to actively manage undifferentiated or acutely unwell patients is problematic, both because more senior staff are expected to take over and because exposure to these types of patients is variable. Take the example of an acute myocardial infarction (AMI), a heart attack and the leading cause of death in Australia [21]. The expectation would be that a student would learn how to assess and manage this common and life-threatening condition during their undergraduate clinical placement on a cardiology ward. A proportion of the control group finished their rotations ‘rarely or never confident’ in their ability to either assess (11 % compared to 0 % in the intervention group) or manage (17 % compared to 0 % in the intervention group) an AMI (Fig. 4). CPEP did not specifically target the assessment and management of AMI; there were two simulation scenarios and an ECG-facilitated discussion during the course.
Despite the prevalence of AMI, it has become increasingly common for medical students to progress through a cardiology term without being exposed to this condition. Because the different phases of a patient’s journey occur in different settings, from initial diagnosis and treatment (general practice, ambulance or emergency department) to definitive management (thrombolysis or primary coronary angioplasty) and ongoing care on the ward, a student could easily go through their whole cardiology term without seeing an undifferentiated patient presenting with an AMI. On a cardiology ward, they are more likely to see an AMI patient after someone more senior has diagnosed and treated them. The students therefore miss the learning process of taking a history in real time from a patient with chest pain, seeing where all the interventions (aspirin, oxygen, heparin, angiography, etc.) fit in, and communicating with a sick patient and other staff simultaneously. Simulation in a course such as CPEP can provide medical students with exposure to common conditions presenting as undifferentiated cardinal symptoms (chest pain and breathlessness) that they can attempt to assess and manage. This type of learning prepares students to become clinically astute junior doctors who have been taught, via senior clinician modelling and feedback, how to approach sick patients with an unknown diagnosis rather than a defined clinical condition: practising being a doctor, not a medical student. A clinically orientated simulation course provides the student with the opportunity to ask questions of, and receive feedback from, clinical experts. Time is quarantined for both the educator and the student, without the need to compete with other students for patient or clinician time. Learning goals were not limited to specific clinical skills, and the students received feedback on teamwork, communication and other non-technical/professional skills. The key component of expert feedback should not be underestimated, as previous problem-based curriculum changes may result in improved communication skills but reduced understanding of disease [2, 3].
Simulation fits the criteria of a modern, integrated educational approach designed to meet the challenge of finding ‘other educational strategies that promote student-centred rather than teacher-centred learning, promote active student enquiry, stimulate analytical and knowledge organisation skills, and foster lifelong learning skills’ [5].
Limitations
The authors identified several limitations to this study that may have impacted upon the results.
The OSCE data may suggest that the assessment’s behavioural marking structure is compromised, as it assesses a student’s ability to undertake pattern recognition, with little, if any, weighting given to the student’s understanding of why they are performing a task. The OSCE did not measure whether the intervention and control groups had similar levels of understanding of why, and in what context, the physical examinations were undertaken. For logistical reasons, not all of the participants were assessed by the same examiners, which may have led to some score variation.
A pseudo-randomisation strategy was used due to a number of logistical constraints. Access to the medical students was controlled and organised by The University of Melbourne’s Clinical School at St Vincent’s Hospital. As a result, the research team at St Vincent’s Clinical Education and Simulation Centre did not make the rotation group allocations.
We were advised that student allocation to clinical groups follows a process designed to maintain group diversity. Clinical groups are seven to eight persons in size, gender balanced, and composed to represent three cohorts: direct school leavers, graduate students and international students.
Two issues may have affected the control groups in this study. Not all of the clinical rotations were exactly 6 weeks; therefore, control group OSCE testing did not always occur exactly at the 6-week mark. Where logistically possible, control groups were allocated OSCE testing times with the intervention group, on the Friday immediately following the 4-day CPEP. To achieve this, some control groups attended combined testing at weeks 5, 7 or 8.
Two of the six control groups involved in the study attended the end of rotation OSCE stations as an unmixed group (no intervention group participants). To minimise the impact of this potential bias on the OSCE examiners, all examiners were external to the CPEP and blinded as to which group was being examined. The examiners dealt with participants only by number at the OSCE station they had been allocated to run. No teaching staff from the CPEP were involved in any way in running the OSCE examination stations.
In addition, the research group felt that the OSCE results would have been more informative had testing been conducted pre-rotation/pre-intervention, at the completion of the intervention/first week of rotation, and at the completion of the rotational block; limited access to resources and to students prevented this. Testing at these three points for both the intervention and control groups may have provided further understanding of whether there were any differences in the rate of skill acquisition and retention as a result of the program. Although the data show that the CPEP group after 1 week obtained OSCE scores similar to those of the control group after a full rotation, it is not possible to say that the time to skill competency was reduced, as there was no comparison control group OSCE taken at 1 week into the rotation.
Strengths of this study include the validity of groups for comparison, the collection of quantitative as well as qualitative data and the application of the standard assessment tool for this target group—the OSCE.