Today, there are various methods for simulation-based surgical training in ophthalmology, including traditional wet lab training with animal models or hybrid animal/synthetic models, dry lab training on synthetic models, and virtual reality training simulators [7]. While clinical development is still ongoing and there are not yet any data that would allow for a validation study of the simulator training, the following descriptions of selected aspects of the training modules illustrate the potential of virtual reality simulators to teach these procedures efficiently.
Modularized learning
For the implant insertion procedure, the simulator curriculum covers only the parts with the highest impact on the clinical outcome, starting with the scleral dissection and ending with the actual insertion and placement of the PDS implant. This partial procedure is split into four separate training modules comprising (i) the scleral dissection, (ii) the laser ablation of the pars plana, (iii) the pars plana incision, and (iv) the actual implant insertion.
For the refill-exchange procedure, a separation into more than one training module is not appropriate. Instead, the procedure can be performed on different virtual patients representing varying levels of difficulty, controlled primarily by the visibility of the implant through the conjunctiva and by the haptic resistance during insertion.
Each training module can be started directly, enabling the user to train procedural steps out of order and focus their efforts on the tasks they consider most challenging. Furthermore, the user can restart a module at any point and thus instantaneously reset the scene for a new attempt. Out-of-order training and immediate restarts provide a significant benefit that cannot be attained when training on animal models and constitute crucial features, considering that training sessions for the trial investigators may last only 15–30 min.
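As a minimal illustration of how such out-of-order module access and instantaneous restarts can be organized in software, the following Python sketch keeps a pristine copy of each module's initial scene and rebuilds it on demand. The module names and scene fields are hypothetical and do not reflect the simulator's actual architecture.

```python
# Minimal sketch (not the simulator's actual API): keep an immutable initial
# scene per module and rebuild it on demand to support instant restarts.
import copy

class TrainingModule:
    def __init__(self, name, initial_scene):
        self.name = name
        self._initial_scene = initial_scene   # pristine scene description
        self.scene = None

    def start(self):
        # Deep-copy the pristine state so every attempt starts identically.
        self.scene = copy.deepcopy(self._initial_scene)

    def restart(self):
        # Discard the current attempt and reset the scene immediately.
        self.start()

# Hypothetical module list mirroring the four implant-insertion steps.
modules = {
    "scleral_dissection": TrainingModule("Scleral dissection", {"cut_depth_mm": 0.0}),
    "laser_ablation": TrainingModule("Laser ablation", {"ablated_fraction": 0.0}),
    "pars_plana_incision": TrainingModule("Pars plana incision", {"opened": False}),
    "implant_insertion": TrainingModule("Implant insertion", {"seated": False}),
}

# Any module can be launched directly, independent of procedural order.
modules["pars_plana_incision"].start()
modules["pars_plana_incision"].restart()
```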
Scleral dissection
The PDS implant needs to be seated within a scleral incision of a very specific target length that corresponds to the long axis of the implant, with a minimal margin for error. Furthermore, the incision needs to be created without harming the underlying pars plana tissue. Considering that the average thickness of the scleral tissue is only ~0.7 mm [8], the surgeon needs to navigate the blade within a very narrow corridor while compensating for the erratic forces resulting from the toughness of the scleral tissue.
To enable a real-time simulation of this process, the simulator employs a very fine and dynamic sub-triangulation of the scleral tissue adjacent to the expanding cut. The resulting resolution is sufficient for an accurate physical simulation model and the rendering of consistent haptic feedback. A significant effort was made to obtain an accurate graphical rendering of the fibrous scleral tissue, allowing the user to learn the correct interpretation of visual cues that indicate the incision depth within the scleral tissue.
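The following Python sketch illustrates one plausible form of such local refinement: triangles whose centroid lies close to the cut segment are subdivided 1-to-4. It is a simplified illustration only; the radius and the toy mesh are arbitrary, and mesh conformity with neighbouring triangles (which a production simulator must maintain) is ignored.

```python
# Illustrative sketch: local 1-to-4 refinement of triangles near a cut path,
# one plausible way to obtain the fine resolution needed for the physics and
# haptics around an expanding incision. T-junctions are not resolved here.
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to segment ab (all 3D numpy arrays)."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def refine_near_cut(vertices, triangles, cut_a, cut_b, radius, levels=2):
    """Subdivide triangles whose centroid lies within `radius` of the cut segment."""
    verts = [np.asarray(v, dtype=float) for v in vertices]
    tris = [tuple(t) for t in triangles]
    for _ in range(levels):
        new_tris = []
        for i0, i1, i2 in tris:
            centroid = (verts[i0] + verts[i1] + verts[i2]) / 3.0
            if point_segment_distance(centroid, cut_a, cut_b) < radius:
                # 1-to-4 split: insert edge midpoints and emit four children.
                m01 = len(verts); verts.append((verts[i0] + verts[i1]) / 2.0)
                m12 = len(verts); verts.append((verts[i1] + verts[i2]) / 2.0)
                m20 = len(verts); verts.append((verts[i2] + verts[i0]) / 2.0)
                new_tris += [(i0, m01, m20), (m01, i1, m12),
                             (m20, m12, i2), (m01, m12, m20)]
            else:
                new_tris.append((i0, i1, i2))
        tris = new_tris
    return verts, tris

# Toy example: one scleral patch triangle, cut running along the x-axis.
v = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
t = [(0, 1, 2)]
verts, tris = refine_near_cut(v, t, np.array([0., 0., 0.]),
                              np.array([1., 0., 0.]), radius=0.5, levels=2)
print(len(tris), "triangles after local refinement")
```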
Compared with other conceivable training methods, a major benefit of this approach is the ability of the VR simulator to compute and report the relevant metrics both in real time and in a posteriori analysis. The former is provided as a heads-up display (HUD) that allows the user to directly associate subtle visual cues with objective depth information and thus improve their ability to assess the incision geometry (see the first panel of Fig. 4). The HUD further allows the user to recognize common mistakes, such as rounded depth profiles at the incision corners, which are difficult to recognize even for expert observers. A posteriori evaluation of user performance includes information about tissue treatment as well as the latitudinal and longitudinal placement of the incision.
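The sketch below indicates the kind of a posteriori metrics described above: given a sampled depth profile along the incision (hypothetical values), it reports depth statistics against a target and flags rounded corners. The thresholds and the corner criterion are illustrative and not taken from the simulator.

```python
# Hedged sketch of a posteriori incision metrics: depth statistics against a
# target depth and a simple flag for rounded corners, i.e. ends that are
# noticeably shallower than the central portion of the cut.
def incision_metrics(depths_mm, target_depth_mm, corner_fraction=0.15, tolerance_mm=0.05):
    n = len(depths_mm)
    k = max(1, int(n * corner_fraction))      # samples counted as "corners"
    center = depths_mm[k:n - k]
    mean_center = sum(center) / len(center)
    mean_corners = (sum(depths_mm[:k]) + sum(depths_mm[-k:])) / (2 * k)
    return {
        "mean_depth_mm": sum(depths_mm) / n,
        "max_deviation_mm": max(abs(d - target_depth_mm) for d in depths_mm),
        "rounded_corners": mean_center - mean_corners > tolerance_mm,
    }

# Example: a cut that is on target in the middle but shallow at both ends.
profile = [0.20, 0.30, 0.42, 0.45, 0.45, 0.44, 0.45, 0.41, 0.28, 0.18]
print(incision_metrics(profile, target_depth_mm=0.45))
```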
Laser ablation of the pars plana
The fundamental challenge of this procedure is the correct application of the laser probe, which is used in a context and configuration not usually employed in vitreoretinal surgery. The ablation of the pars plana tissue needs to be applied as homogeneously as possible because the perforation of the tissue, which is achieved at the end-point of the procedure, may result in the prolapse of liquefied vitreous that would impede further treatment of any parts of the tissue that are not yet sufficiently ablated.
The VR simulator renders a highly accurate representation of the various visual indicators that reveal the current ablation state of the tissue. Using the instantaneous restart feature of the simulator, the user can experiment with different laser settings and quickly obtain an intuitive understanding of the coagulation technique. A further benefit of the VR environment is the simulator’s ability to display a heat map of the exposed pars plana tissue either as a HUD or as an overlay element (see Fig. 5).
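As an illustration of how such a heat map could be maintained, the following sketch accumulates Gaussian-shaped laser exposures into a 2D grid over the exposed tissue and maps the result to an overlay colour. The pulse shape, grid size, and colour mapping are assumptions and do not represent the simulator's thermal model.

```python
# Illustrative sketch (assumed parameters): laser exposure is accumulated
# into a 2D grid over the exposed pars plana surface and mapped to an
# overlay colour, roughly how a heat-map HUD or overlay could be built.
import numpy as np

def apply_laser_pulse(heat, center, sigma, energy):
    """Deposit one Gaussian-shaped laser application into the heat map."""
    h, w = heat.shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    heat += energy * np.exp(-d2 / (2.0 * sigma ** 2))

def heat_to_rgba(heat, max_heat):
    """Map accumulated exposure to a blue-to-red overlay with fixed alpha."""
    t = np.clip(heat / max_heat, 0.0, 1.0)
    rgba = np.zeros(heat.shape + (4,))
    rgba[..., 0] = t          # red grows with exposure
    rgba[..., 2] = 1.0 - t    # blue fades with exposure
    rgba[..., 3] = 0.5        # constant overlay transparency
    return rgba

heat = np.zeros((64, 64))
for spot in [(20, 20), (24, 20), (28, 20)]:   # three overlapping applications
    apply_laser_pulse(heat, spot, sigma=3.0, energy=1.0)
overlay = heat_to_rgba(heat, max_heat=2.0)
print("peak exposure:", heat.max().round(2))
```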
Refill-exchange
Potential complications during this procedure originate mostly from poor alignment of the syringe needle with the implant septum. Implementing a sufficiently accurate simulation model was a challenge due to the complex coupling of the relevant scene objects: the implant orientation is coupled to the adjacent scleral tissue, and the external forces that drive this interaction result from the penetration of the septum by the syringe needle or its collision with the implant flange. As the needle is flexible, the overall system exhibits distinctly non-linear dynamics. Hence, significant development effort was required to render the characteristic haptic effects that guide the user during needle insertion.
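A strongly simplified sketch of such a coupled haptic model is given below: an axial spring-damper resists septum penetration, a lateral spring approximates the flexing needle shaft, and a stiff term responds to contact with the implant flange. All coefficients, names, and the geometry are invented for illustration and are not the simulator's actual model.

```python
# Assumption-laden sketch of the needle/septum/flange coupling: the returned
# force could drive a haptic device. Units and coefficients are arbitrary.
import numpy as np

def needle_haptic_force(tip_pos, tip_vel, septum_center, septum_normal,
                        flange_radius=1.7, k_axial=40.0, c_axial=2.0,
                        k_lateral=15.0, k_flange=400.0):
    rel = tip_pos - septum_center
    n = septum_normal / np.linalg.norm(septum_normal)
    depth = -np.dot(rel, n)                  # >0 once the septum is penetrated
    lateral = rel - np.dot(rel, n) * n       # offset from the septum axis
    force = np.zeros(3)
    if depth > 0.0:
        # Axial spring-damper resisting penetration of the septum.
        force += (k_axial * depth - c_axial * np.dot(tip_vel, n)) * n
        # Lateral restoring force from the flexing needle shaft.
        force += -k_lateral * lateral
    if np.linalg.norm(lateral) > flange_radius and abs(depth) < 0.2:
        # Stiff, non-linear response when the tip strikes the implant flange.
        force += k_flange * abs(depth) * n
    return force

# Slightly misaligned insertion attempt (all quantities arbitrary).
f = needle_haptic_force(tip_pos=np.array([0.3, 0.0, -0.1]),
                        tip_vel=np.array([0.0, 0.0, -1.0]),
                        septum_center=np.zeros(3),
                        septum_normal=np.array([0.0, 0.0, 1.0]))
print("haptic force:", f.round(2))
```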
Early training sessions with the simulator quickly revealed that users tended to be overconfident about their ability to visually estimate the relative orientation of the implant and syringe. Exploiting VR technologies to help the user understand and correct false perceptions therefore became a priority among the training requirements. To this effect, the simulator supports users with abstract guidance elements augmenting the virtual scene: colored arrows and targeting markers materialize on request and guide the user into the correct position and orientation (see Fig. 2). A replay feature allows users to observe their own performance from a different perspective and recognize the nature of the cognitive biases causing a previously experienced misperception.
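The sketch below shows one way such guidance cues could be computed: the angular error between the needle axis and the septum normal, together with the lateral offset of the tip, determines the arrow direction and a simple colour cue. The thresholds and function names are hypothetical and not taken from the simulator.

```python
# Hedged sketch of a guidance overlay driver: misalignment angle and lateral
# offset are turned into an arrow vector and a traffic-light colour.
import numpy as np

def guidance_cue(needle_dir, needle_tip, septum_center, septum_normal,
                 angle_ok_deg=5.0, offset_ok_mm=0.3):
    n = septum_normal / np.linalg.norm(septum_normal)
    d = needle_dir / np.linalg.norm(needle_dir)
    # Angular misalignment between needle axis and septum normal.
    angle_deg = np.degrees(np.arccos(np.clip(abs(np.dot(d, n)), -1.0, 1.0)))
    # Lateral offset of the tip from the septum axis.
    rel = needle_tip - septum_center
    offset_vec = rel - np.dot(rel, n) * n
    offset_mm = np.linalg.norm(offset_vec)
    ok = angle_deg <= angle_ok_deg and offset_mm <= offset_ok_mm
    return {
        "arrow": -offset_vec,                      # direction to nudge the tip
        "angle_error_deg": round(float(angle_deg), 1),
        "offset_mm": round(float(offset_mm), 2),
        "color": "green" if ok else "red",
    }

cue = guidance_cue(needle_dir=np.array([0.1, 0.0, -1.0]),
                   needle_tip=np.array([0.5, 0.2, 1.0]),
                   septum_center=np.zeros(3),
                   septum_normal=np.array([0.0, 0.0, 1.0]))
print(cue)
```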