Augmented Reality for Surgical Robotic and Autonomous Systems

Novel surgical robots are among the most sought-after means of performing repetitive tasks accurately. Imaging technology has significantly changed the world of robotic surgery, especially for biopsies, the examination of complex vasculature for catheterization, and the visual estimation of target points for port placement. Image analysis of CT scans and X-rays is essential for identifying the correct position of an anatomical landmark such as a tumor or polyp. This information is at the core of most augmented reality systems, whose development starts with the reconstruction and localization of targets. Hence, the primary role of augmented reality (AR) applications in surgery is to visualize and guide a user towards a desired robot configuration with the help of intelligent computer vision algorithms.

Keywords: augmented reality (AR); machine learning (ML); navigation planning; robotic and autonomous systems (RAS); surgery

1. Introduction

Over the past couple of decades, significant advancements in the performance of robotic platforms have been achieved by researchers in the academic community, with the deployment of such robots soaring amidst the COVID-19 pandemic. Studies show that the high probability of a resurgence in COVID-19 cases necessitates cost-effective, self-deploying telepresence robots to ensure pathogen control worldwide [1]. According to Raje et al. [2], the healthcare robot market was worth over USD 9 billion in 2022, with the deployed fleet more than doubling compared to 2019. Today, robotic platforms such as the da Vinci system have significantly improved the way in which surgeons perform complex interventions, reducing the need for patient re-admission thanks to their minimally invasive nature. Novel surgical robots are now the most sought-after means of performing repetitive tasks accurately, and imaging technology has reshaped robotic surgery, particularly for biopsies, the examination of complex vasculature for catheterization, and the visual estimation of target points for port placement. Image analysis of CT scans and X-rays is essential for identifying the correct position of an anatomical landmark such as a tumor or polyp; this information is at the core of most AR systems, whose development starts with the reconstruction and localization of targets. Hence, the primary role of AR applications in surgery is to visualize and guide a user towards a desired robot configuration with the help of intelligent computer vision algorithms.
The application of such cutting-edge robotic technologies remains diverse across sectors, from military and manufacturing tasks to airborne and underwater operations, owing to their dexterity, ease of operation, high adaptability, and multi-functionality. The widespread demand for AR in surgery is the impetus for this work, with a core focus on the challenges encountered in its deployment in the existing literature, as well as the proposed solutions for counteracting these issues, emphasizing the precision of end-effector placement and feedback from control systems. The field of surgery has seen a quantum leap in the evolution of procedural ergonomics and the level of autonomy of robots during an intervention. Since the first robotic-assisted surgery was successfully performed on a neurological tumor in 1985 using a repurposed industrial robot, the PUMA 200 [3], scientists across the globe have recognized the need for increased precision in robotic arm positioning and orientation to relieve surgeons of long hours in operating theaters. From the AESOP robotic arm built by Computer Motion for laparoscopic camera positioning in 1993 to the da Vinci system cleared in 2000 and since used for countless segmentectomies of various organs, each platform has been improved in terms of hardware and software features, introducing 3D visualization of the inner anatomy to surgeons via a see-through display. Through this evolution, scientists have come to see AR as a blessing in the surgical setting, reducing the surgeon's cognitive load while performing complex procedures such as cardiothoracic, colorectal, head and neck, and urological resections.
The collaboration between AR and robotic and autonomous systems (RAS) is a breakthrough in minimally invasive robotic surgery, with the earliest publications on this principle dating back to the 2000s, by Wörn et al. [4]. More recently, the healthcare startup Novarad introduced an AR-based surgical navigation system called VisAR, which superimposes virtual organs with submillimeter accuracy [5]. Other startups, such as Proximie [6], have also emphasized the importance of AR for surgical guidance through their extensive work on virtual "scrubbing in" and the augmented visualization of end-effector operations. These platforms provide an incentive for surgical robot manufacturers to integrate similar collaborative software packages into their control systems to obviate the risk of hand tremors, improving the synergy of human–robot arm placement and enabling telepresence via microphone communication throughout a procedure. This type of collaboration remains in its early pilot stages, although it is of increasing relevance in addressing the gaps in non-contact surgery during the pre- and post-pandemic eras.

2. Hardware Components

2.1. Patient-to-Image Registration Devices

In surgical navigation systems, AR-based scenes require seamless interaction between the real world and the digital world, increasing the need for precise motion tracking via marker-based or location-based mechanisms [7]. Pose estimation via motion tracking often enables the user to manipulate tools accurately and position end-effectors geometrically for the cutting, outlining, and extraction of anatomical landmarks such as shoulder blades or internal organs. Location-based triggers may be used in conjunction with, but are not limited to, pose estimation sensors such as inertial measurement units (IMUs), which provide measurements of acceleration, magnetic field strength, and orientation angles. It is also possible to obtain accurate geographical locations of specific clinical personnel through AR screens such as smartphones, head-mounted displays (HMDs), and even smart glasses [8]. These markers provide a basis for the initial alignment of the virtual world to the real world, with respect to a generic reference frame in space, towards the target of interest.
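As an illustration of marker-based tracking, the following minimal sketch (assuming OpenCV, a square fiducial of known physical size, and corner pixel positions already returned by any marker detector; all numeric values are hypothetical) recovers the pose of the marker, and hence of the patient reference frame, relative to the camera:

```python
import numpy as np
import cv2

# Corner coordinates of a square fiducial marker (assumed 40 mm side),
# expressed in the marker's own reference frame.
MARKER_SIZE = 0.04  # metres
OBJECT_POINTS = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
], dtype=np.float32)

def estimate_marker_pose(image_corners, camera_matrix, dist_coeffs):
    """Estimate the rigid transform from the marker frame to the camera frame.

    image_corners: 4x2 array of detected pixel corners (from any marker detector).
    Returns a 3x3 rotation matrix and a translation vector, or None on failure.
    """
    ok, rvec, tvec = cv2.solvePnP(
        OBJECT_POINTS,
        np.asarray(image_corners, dtype=np.float32),
        camera_matrix,
        dist_coeffs,
        flags=cv2.SOLVEPNP_IPPE_SQUARE,  # planar-square solver suited to fiducials
    )
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)  # convert the axis-angle vector to a rotation matrix
    return R, tvec
```

The recovered pose can then serve as the initial alignment of the virtual scene to the real one, as described above.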
In contrast to marker-based AR calibration systems, which use pre-defined tracking markers such as QR codes to anchor virtual objects onto a real-world scene, markerless systems rely on more user-friendly referencing cues to position an object in space. They operate by exploiting different human skin textures, internal vessel structures, and geometrical features extracted from a patient's medical scans [9]. The user can prescribe the location of the model and navigate around the scene without necessarily disturbing the external aspects of their surroundings, collating relayed data from accelerometers; visual, haptic, and olfactory sensors; and GPS systems. Such AR models depend on computer vision algorithms such as convolutional neural networks (CNNs) to perceive target objects without fiducial markers, commonly trained using frameworks such as TensorFlow. The specific referencing points are passed through such neural networks in real time, such that the accurate positions of the user can be tested and validated in further experimental procedures (see Figure 1).
Figure 1. The tracking process during AR alignment between patient and device.
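As an illustration of the markerless approach, a small convolutional network can be trained to regress the pixel position of a single anatomical landmark from an image patch. The sketch below uses Keras; the architecture, patch size, and training data names are illustrative assumptions rather than a model drawn from the cited literature:

```python
import tensorflow as tf

def build_landmark_regressor(patch_size=128):
    """A small CNN that regresses the (x, y) pixel position of one anatomical
    landmark from a grayscale image patch (illustrative architecture only)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(patch_size, patch_size, 1)),
        tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(2),  # predicted (x, y) landmark coordinates
    ])

model = build_landmark_regressor()
model.compile(optimizer="adam", loss="mse")
# model.fit(patches, landmark_xy, ...)  # hypothetical annotated patient-scan patches
```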
A multitude of sensors are integrated into robotic platforms for the detection of precise locations in a surgical procedure, ranging from ultrasonic sensors [10], mechanical sensors [11], and electromagnetic (EM) sensors [11] to optical tracking sensors [12]. Today, the most acclaimed sensors for image-guided surgical navigation systems are optical and EM tracking sensors. AR display systems require an integrated camera tracking mechanism, which involves the registration of the head location and direction. This process can be performed using individual tracking sensors or combinations of them, with a wide range of applications in the clinical sector, e.g., devices such as Polaris and MiniBird (Ascension Technology Corp., Milton, VT, USA), which attach to the surgeon's head for accurate simultaneous localization and mapping (SLAM). SLAM is the process by which a surgical robotic tool constructs a collision-free map and simultaneously identifies its exact location within that map. It uses different filtering techniques, such as Kalman filters (KFs), particle filters, and graph-based SLAM. A range of ML algorithms [13] are used to develop a navigation structure in a discrete-time state-space framework: the unscented KF, which approximates the state distribution with a Gaussian random variable whose posterior mean and covariance are captured for propagation through the nonlinear system; the extended KF, which overcomes the linearity assumption for the next probability state; and sequential Monte Carlo algorithms, which filter through the estimation of trajectory samples. Other graphical SLAM techniques adopt a node-to-node graph formulation, where the back end enables robot pose correction in order to produce an independent topology of the robot, as explained in [14]. The most common SLAM algorithm used in surgery is visual SLAM, based on monocular and trinocular RGB-D camera images that use information from the surrounding surgical environment to track 3D landmarks through Bayesian methods, as cited in the literature [15].
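For illustration, the prediction–update cycle at the heart of these Kalman-filter-based estimators can be sketched for a simple constant-velocity tool-tracking model (the noise levels and update rate are assumptions); the extended and unscented variants cited above replace these linear steps with local linearization or sigma-point propagation:

```python
import numpy as np

dt = 0.02  # tracker update interval in seconds (assumed 50 Hz)

# State vector [x, y, z, vx, vy, vz] under a constant-velocity motion model.
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                    # position integrates velocity
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # only position is measured
Q = 1e-4 * np.eye(6)                          # process noise (tuning assumption)
R = 1e-3 * np.eye(3)                          # measurement noise (tuning assumption)

def kf_step(x, P, z):
    """One predict/update cycle: previous state x, covariance P, new 3D measurement z."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    innovation = z - H @ x_pred
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ innovation
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new
```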

2.2. Object Detection and AR Alignment for Robotic Surgery

Alongside the multitude of advances in other areas, such as dexterity and accurate image acquisition, commercial surgical robots are currently equipped with AR technology for the manipulation of resection tools. Their ability to visualize the patient-specific anatomy during affected tissue extraction allows them to work within safe workspace boundaries. While the precise mapping of medical images is difficult due to the constant deformation of tissue pre- and post-surgery, many research papers [16][17][18][19] are dedicated to exploring the possibility of decoupling virtual objects and their sensory stimuli from the real world using algorithmic approaches adapted from the deep learning (DL) literature. Amongst the most acclaimed methods, projection-based AR, marker-based AR, markerless AR, and superimposition AR are widely used in robotic platforms employed in the operating theater and remotely.

3. Software Integration

From the master–slave testbed to the operating theater, AR plays a pivotal role in the visualization of anatomical landmarks, particularly the ear, nose, and throat, as well as gastro-intestinal areas. AR-assisted robotic surgery has eased the surgeon's task by reducing hand tremors and loss of triangulation during complicated procedures. Studies show that the transition from invasive open surgery to indirect, perception-driven surgery has resulted in fewer cases of tissue perforation, blood loss, and post-operative trauma [20]. In contrast to open surgery, which involves the direct manipulation of tissue, image-guided operations enable the surgeon to map medical image information, virtual or otherwise, spatially onto real-world body structures. Usually, the virtual fixture is defined as the superposition of augmented sensory data upon the user's field of view (FoV) from a software-generated platform, which uses fiducial markers to register the location of an anatomical section in real time and space with respect to the user's real-time scene. The use of publicly available datasets obtained from cutting-edge technology, such as CT and magnetic resonance imaging (MRI), in such scenarios minimizes human error in data processing and hence improves the success rates of surgeries.
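To make this spatial mapping concrete, the following minimal sketch (with an assumed pinhole camera model; the matrices and distances are illustrative, not taken from any cited system) projects a landmark defined in preoperative CT coordinates into the surgeon's camera view so that it can be drawn as an overlay:

```python
import numpy as np

def project_landmark(p_ct, T_ct_to_cam, K):
    """Project a 3D landmark given in CT coordinates into pixel coordinates.

    p_ct: (3,) landmark position from the preoperative scan.
    T_ct_to_cam: 4x4 rigid transform produced by patient-to-image registration.
    K: 3x3 camera intrinsic matrix from calibration.
    """
    p_h = np.append(p_ct, 1.0)               # homogeneous coordinates
    p_cam = T_ct_to_cam @ p_h                # landmark in the camera frame
    u, v, w = K @ p_cam[:3]                  # pinhole projection
    return np.array([u / w, v / w])          # pixel position for the AR overlay

# Illustrative example: a target 30 cm in front of the optical centre.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4); T[2, 3] = 0.3
print(project_landmark(np.zeros(3), T, K))   # -> overlay at the image centre
```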

3.1. Patient-To-Image Registration

The preliminary steps in diagnosing the area of concern in a patient include the use of computer guidance software to visualize inner anatomical landmarks. The loss of direct feel of the inner anatomy, reduced depth perception due to the monocularity of cameras, and distorted images have been addressed by novel techniques such as the segmentation of tissue in medical scans and 3D modeling for an augmented 360-degree field of view [21]. In papers by Londono et al. [22] and Pfefferle et al. [23], case studies of kidney biopsies examine the development of AR systems for the superposition of holograms over experimental phantoms. Studies show that acquiring preoperative CT scans in the lateral decubitus position results in internal tissue deformation, in addition to discrepancies between preoperative and intraoperative scans. Accurate image-guided surgery depends greatly on the registration of preoperative medical scans with their intraoperative counterparts. During the procedure, aligned coordinate frames are mapped onto the output registered image. The need to compensate for the time lag during registration means that multiple time frames are required at different regions of interest to enhance the quality of the registered image.
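Point-based registration of this kind admits a closed-form solution. The following minimal sketch (assuming paired fiducial positions are already available in both coordinate frames; the function and variable names are illustrative) computes the standard SVD-based least-squares alignment and reports the resulting fiducial registration error:

```python
import numpy as np

def rigid_register(fiducials_ct, fiducials_patient):
    """Least-squares rigid transform mapping CT fiducials onto patient-space fiducials.

    Both inputs are Nx3 arrays with row-wise correspondence. Returns (R, t) such
    that R @ p_ct + t approximates p_patient, plus the fiducial registration error.
    """
    src = np.asarray(fiducials_ct, dtype=float)
    dst = np.asarray(fiducials_patient, dtype=float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)

    H = (src - src_c).T @ (dst - dst_c)       # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c

    fre = np.sqrt(np.mean(np.sum((dst - (src @ R.T + t)) ** 2, axis=1)))
    return R, t, fre                          # fre: root-mean-square fiducial error
```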
Usually, the preferred registration method depends on the type of robotic environment the surgeon is navigating, with feature-based registration attracting the most attention within the academic community. These methods are less computationally heavy and can effectively match fiducials between preoperative and intraoperative images, primarily through deformable surface registration. Because they rely solely on 2D parameters, the likelihood of obtaining highly accurate 3D information is low, driving the research community to develop novel sensing technologies for 3D marker tracking. Registration methods such as point-based, feature-based, segmentation-based, and fluoroscopy-based registration are widely used in the image processing of medical scans. The geometric transformations of deformable objects are computed using fiducial markers, which act as positioning cues and can be analyzed for fiducial localization errors (FLEs). In cases where images have varying gray levels, DL algorithms are able to segregate different features using similarity measures such as the sum of squared differences (SSD), the sum of absolute differences, correlation coefficients, and the mean squared difference (MSD). For real-time X-ray image processing, a contrast material such as barium or iodine is used to create stronger contrast differences for clinicians to analyze. The process of 2D-to-3D image registration involves the alignment of matching preoperative and intraoperative features, which can be reconstructed in AR and superposed over a live fluoroscopic image with respect to reference points in the image sequence (Figure 2).
Figure 2. CT scans of the lung with the corresponding 3D reconstruction and marker localization, used by surgeons to locate tumors, as indicated by the red marker.
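The intensity-based similarity measures listed above can be written compactly; a minimal sketch (assuming two already-resampled images of identical dimensions) is given below. A registration loop would evaluate one of these metrics for each candidate transform and keep the transform that minimizes SSD/MSD or maximizes the correlation coefficient:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two aligned image arrays."""
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

def msd(a, b):
    """Mean squared difference: SSD normalised by the number of voxels."""
    return ssd(a, b) / a.size

def correlation_coefficient(a, b):
    """Pearson correlation coefficient between the two intensity distributions."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.sum(a * b) /
                 (np.sqrt(np.sum(a ** 2)) * np.sqrt(np.sum(b ** 2))))
```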

3.2. Camera Calibration for Optimal Alignment

Automatic camera calibration and the corresponding image alignment in intraoperative ultrasound (US) are used to determine internal structural characteristics, such as the focal length, and the surface anatomy of different organs. Analysis, visualization, and pre-planning using registered medical images enable the development of patient-specific models of the relevant anatomy. The researchers in [24] created a cross-modality AR model to correct positioning shifts using lesion holograms generated during a CT image reconstruction process. A US transducer acquires two-dimensional scans of the site of interest, which are merged with magnetic tracking data to produce a 3D resultant scan in line with a CNN algorithm. This reduces the probability of false negatives appearing in the dataset, especially when mapping magnetically tracked ultrasound scans onto non-rigidly registered 3D scans to detect mismatches in deformation. Furthermore, this method is also used for needle guidance, as mentioned in [23], to predict trans-operative pathways during navigation, as well as to detect areas of lesion extraction in Unity3D via the collision avoidance system. The object-to-image registration is optimized by placing markers sufficiently far apart in a non-linear configuration, such that their combined center coincides with the projection of the target in the workspace.
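Intrinsic calibration itself commonly follows the standard checkerboard workflow; the sketch below (the image folder, checkerboard dimensions, and square size are assumptions, not values from [24]) recovers the focal lengths and lens distortion with OpenCV:

```python
import glob
import numpy as np
import cv2

PATTERN = (9, 6)     # inner corners of the calibration checkerboard (assumed)
SQUARE = 0.005       # checkerboard square size in metres (assumed 5 mm)

# 3D corner positions on the planar checkerboard (z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calibration_frames/*.png"):    # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]                 # (width, height)

if obj_points:
    # Recover the intrinsic matrix (including focal lengths) and distortion terms.
    rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("reprojection RMS error:", rms)
    print("focal lengths (px):", camera_matrix[0, 0], camera_matrix[1, 1])
```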

3.3. 3D Visualization using Direct Volume Rendering

The next steps in creating an AR model include image processing techniques such as direct volume rendering, which are used to remove outliers and delineators from raw DICOM data. A method proposed by Calhoun et al. [25] involves voxel contrast adjustment and multimodal volume registration of the voxels in CT images, replacing their existing density with a specific color and enhancing their contrast through thresholding performed by a transfer function. Manual intensity thresholding removes low-intensity artefacts and background noise from the image, readying it for rigid attachment to a virtual organ. A transparency function is applied to filter out extreme contrasts in anatomical or pathological 3D landmarks, and any blob-like contours detected can be used in the initial registration of CT scans via techniques such as topological structural analysis. The deformation properties of the organs are modeled using software such as OsiriX 12.0, 3D Slicer 5.2.2, or VR-Render IRCAD2010, and the high contrast applied to output images makes structures such as tumors, bones, and aneurysm-prone vessels more visible to the naked eye.
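A minimal sketch of the thresholding and transfer-function step (the Hounsfield-unit cut-offs and colours are illustrative assumptions, not those used in [25]) might look as follows:

```python
import numpy as np

def apply_transfer_function(ct_volume_hu):
    """Map a CT volume in Hounsfield units to RGBA voxels for direct volume rendering.

    Thresholds are illustrative: air and background noise are made fully transparent,
    soft tissue semi-transparent red, and bone nearly opaque white.
    """
    vol = ct_volume_hu.astype(float)
    rgba = np.zeros(vol.shape + (4,), dtype=float)

    soft = (vol > -300) & (vol < 300)        # approximate soft-tissue range
    bone = vol >= 300                        # approximate bone range

    rgba[soft] = [1.0, 0.2, 0.2, 0.15]       # faint red, mostly transparent
    rgba[bone] = [1.0, 1.0, 1.0, 0.95]       # bright, nearly opaque
    # Voxels below -300 HU (air, low-intensity artefacts) keep alpha = 0.
    return rgba
```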

3.4. Surface Rendering after Segmentation of Pre-Processed Data

Surface rendering techniques in [26] describe the conversion of anatomical structures into a mesh for delineation and segmentation. Tamadazte et al. [27] used the epipolar geometry principle to acquire images from the left and right stereovision cameras. The authors then used a point correspondence approach to resample and build a 3D triangular mesh from local data points in its neighborhood. The current techniques utilized in AR are developed in Unity3D and require patient-specific polygons, such as triangles, for rapid processing. Furthermore, the anatomical scenes detected using US transducers may be reconstructed using multi-view stereo (MVS), which analyzes pieces of tissue extracted from an area, remeshes them by warping the raw DICOM data, and displays them with appropriate textures using speeded-up robust features (SURF) methods [28]. In most cases, segmentation may cause the loss of essential information in the original volume data. Therefore, in the quest to improve the quality of segmented images, Pandey et al. [29] introduced a faster and more robust system for US-to-CT registration using shadow peak (SP) bone registration. In another study, by Hacihaliloglu et al. [30], similar bone surface segmentation techniques were used to determine the phase symmetry of bone fractures.
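For illustration, a triangular surface mesh can be extracted from a segmented volume with the marching cubes implementation in scikit-image; the iso-level, voxel spacing, and the synthetic lesion used here are assumptions rather than data from the cited studies:

```python
import numpy as np
from skimage import measure

def extract_surface_mesh(segmentation, spacing=(1.0, 1.0, 1.0)):
    """Convert a binary segmentation volume into a triangular surface mesh.

    segmentation: 3D array with 1 inside the structure and 0 outside.
    spacing: voxel size in millimetres, typically read from the DICOM header.
    Returns vertices (in mm), triangle indices, and normals, ready for export to a
    rendering engine such as Unity3D.
    """
    verts, faces, normals, _ = measure.marching_cubes(
        segmentation.astype(float), level=0.5, spacing=spacing)
    return verts, faces, normals

# Example on a synthetic sphere-shaped "lesion".
grid = np.mgrid[-32:32, -32:32, -32:32]
lesion = (np.sum(grid ** 2, axis=0) < 20 ** 2).astype(np.uint8)
verts, faces, _ = extract_surface_mesh(lesion, spacing=(0.8, 0.8, 0.8))
print(len(verts), "vertices,", len(faces), "triangles")
```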

3.5. Path Computational Framework for Navigation and Planning

In studies by El-Hariri et al. [31] and Hussain et al. [32], the use of tracking mechanisms for marker-based biopsy guidance has been widely commended and applied in surgery, such as that of the middle ear and the kidneys. Fiducial cues are registered to different locations on the patient's body using robust surface matching of sphere markers with the standard model, alongside laparoscopic video streams. Image-to-patient registration is performed by comparing the acquired live images to the available patient-to-image datasets, which is a crucial operation to eliminate errors during automatic correction, as explained by Wittmann et al. [33]. Leeming et al. [34] used proximity queries to detect internal changes in anatomy during the manipulation of a continuum robot for surgery around a bone cavity. A covariance tree is used in this case, as a live modeling algorithm, to maintain an explicit safety margin between the walls of an anatomical landmark during the maneuvering of surgical tools. For cases of minimally invasive surgery, precautionary measures such as CO2 insufflation of the patient's body and highlighting target locations with contrasting colors (for example, with indocyanine green, ICG) facilitate the surgeon's task, especially when performing cross-modality interventions with AR systems such as headsets. A study by Zhang et al. [35] explained the tracking mechanisms used in US procedures for intraoperative use; the probe was equipped with a HoloLens-tracked reference frame, which contained multiple reflective spheres, on an organ. In terms of biopsy needle tracking, Pratt et al. [36] introduced the concept of registered stylus guidance in line with a simulated 3D replica reconstructed from CT images of the torso. During preoperative surgical navigation, a calibrated probe is used to collect data from internal organs and send them to the 3D Slicer software over OpenIGTLink, while combining tracked data from the input instruments. The stylus tip is calibrated about a pivot and can be moved to various positions in the anatomical plane while being tracked over the probe reference frame using an iterative closest point (ICP)-based detection algorithm. Jiang et al. [35] proved that the projector view for puncture surgery also improves the efficiency of perception-based navigation, using superimposed markers to align the needle tip to a magenta circle. The researchers in the above study generated an accurate AR positioning method using optimization techniques such as the Gauss–Newton method and Lie algebra to produce an optimized projection matrix. Any projection is performed towards the target location of the body, hence reducing the probability of parallax errors, as shown by Wu et al. [37].
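The iterative closest point step mentioned above can be sketched in simplified point-to-point form (the parameter values and helper names are illustrative, not those of the cited systems): starting from an initial alignment, closest-point correspondences and a least-squares rigid transform are alternated until the mean residual stops improving:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflection solutions
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def icp(source, target, iterations=30, tolerance=1e-6):
    """Align a probed point cloud (source) to a reference surface cloud (target)."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    tree = cKDTree(tgt)
    prev_err = np.inf
    for _ in range(iterations):
        dist, idx = tree.query(src)          # closest-point correspondences
        R, t = best_fit_transform(src, tgt[idx])
        src = src @ R.T + t                  # apply the incremental transform
        err = dist.mean()
        if abs(prev_err - err) < tolerance:  # converged
            break
        prev_err = err
    return src, err
```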

References

  1. Chen, B.; Marvin, S.; While, A. Containing COVID-19 in China: AI and the robotic restructuring of future cities. Dialogues Hum. Geogr. 2020, 10, 238–241.
  2. Raje, S.; Reddy, N.; Jerbi, H.; Randhawa, P.; Tsaramirsis, G.; Shrivas, N.V.; Pavlopoulou, A.; Stojmenović, M.; Piromalis, D. Applications of Healthcare Robots in Combating the COVID-19 Pandemic. Appl. Bionics Biomech. 2021, 2021, 7099510.
  3. Leal Ghezzi, T.; Campos Corleta, O. 30 years of robotic surgery. World J. Surg. 2016, 40, 2550–2557.
  4. Wörn, H.; Mühling, J. Computer- and robot-based operation theatre of the future in cranio-facial surgery. Int. Congr. Ser. 2001, 1230, 753–759.
  5. VisAR: Augmented Reality Surgical Navigation. Available online: https://www.novarad.net/visar (accessed on 6 March 2022).
  6. Proximie: Saving Lives by Sharing the World’s Best Clinical Practice. Available online: https://www.proximie.com/ (accessed on 6 March 2022).
  7. Brito, P.Q.; Stoyanova, J. Marker versus markerless augmented reality. Which has more impact on users? Int. J. Hum. Comput. Interact. 2018, 34, 819–833.
  8. Estrada, J.; Paheding, S.; Yang, X.; Niyaz, Q. Deep-Learning-Incorporated Augmented Reality Application for Engineering Lab Training. Appl. Sci. 2022, 12, 5159.
  9. Rothberg, J.M.; Ralston, T.S.; Rothberg, A.G.; Martin, J.; Zahorian, J.S.; Alie, S.A.; Sanchez, N.J.; Chen, K.; Chen, C.; Thiele, K.; et al. Ultrasound-on-chip platform for medical imaging, analysis, and collective intelligence. Proc. Natl. Acad. Sci. USA 2021, 118, e2019339118.
  10. Alam, M.S.; Gunawan, T.; Morshidi, M.; Olanrewaju, R. Pose estimation algorithm for mobile augmented reality based on inertial sensor fusion. Int. J. Electr. Comput. Eng. 2022, 12, 3620–3631.
  11. Attivissimo, F.; Lanzolla, A.M.L.; Carlone, S.; Larizza, P.; Brunetti, G. A novel electromagnetic tracking system for surgery navigation. Comput. Assist. Surg. 2018, 23, 42–52.
  12. Lee, D.; Yu, H.W.; Kim, S.; Yoon, J.; Lee, K.; Chai, Y.J.; Choi, Y.J.; Koong, H.-J.; Lee, K.E.; Cho, H.S.; et al. Vision-based tracking system for augmented reality to localize recurrent laryngeal nerve during robotic thyroid surgery. Sci. Rep. 2020, 10, 8437.
  13. Scaradozzi, D.; Zingaretti, S.; Ferrari, A. Simultaneous localization and mapping (SLAM) robotics techniques: A possible application in surgery. Shanghai Chest 2018, 2, 5.
  14. Konolige, K.; Bowman, J.; Chen, J.D.; Mihelich, P.; Calonder, M.; Lepetit, V.; Fua, P. View-based maps. Int. J. Robot. Res. 2010, 29, 941–957.
  15. Cheein, F.A.; Lopez, N.; Soria, C.M.; di Sciascio, F.A.; Lobo Pereira, F.; Carelli, R. SLAM algorithm applied to robotics assistance for navigation in unknown environments. J. Neuroeng. Rehabil. 2010, 7, 10.
  16. Komorowski, J.; Rokita, P. Camera Pose Estimation from Sequence of Calibrated Images. arXiv 2018, arXiv:1809.11066.
  17. Ghasemi, Y.; Jeong, H.; Choi, S.H.; Park, K.B.; Lee, J.Y. Deep learning-based object detection in augmented reality: A systematic review. Comput. Ind. 2022, 139, 103661.
  18. Lee, T.; Jung, C.; Lee, K.; Seo, S. A study on recognizing multi-real world object and estimating 3D position in augmented reality. J. Supercomput. 2022, 78, 7509–7528.
  19. Portalés, C.; Gimeno, J.; Salvador, A.; García-Fadrique, A.; Casas-Yrurzum, S. Mixed Reality Annotation of Robotic-Assisted Surgery videos with real-time tracking and stereo matching. Comput. Graph. 2023, 110, 125–140.
  20. Ghaednia, H.; Fourman, M.S.; Lans, A.; Detels, K.; Dijkstra, H.; Lloyd, S.; Sweeney, A.; Oosterhoff, J.H.; Schwab, J.H. Augmented and virtual reality in spine surgery, current applications and future potentials. Spine J. 2021, 21, 1617–1625.
  21. Nachabe, R.; Strauss, K.; Schueler, B.; Bydon, M. Radiation dose and image quality comparison during spine surgery with two different, intraoperative 3D imaging navigation systems. J. Appl. Clin. Med. Phys. 2019, 20, 136–145.
  22. Londoño, M.C.; Danger, R.; Giral, M.; Soulillou, J.P.; Sánchez-Fueyo, A.; Brouard, S. A need for biomarkers of operational tolerance in liver and kidney transplantation. Am. J. Transplant. 2012, 12, 1370–1377.
  23. Pfefferle, M.; Shahub, S.; Shahedi, M.; Gahan, J.; Johnson, B.; Le, P.; Vargas, J.; Judson, B.O.; Alshara, Y.; Li, O.; et al. Renal biopsy under augmented reality guidance. In Proceedings of the SPIE Medical Imaging, Houston, TX, USA, 16 March 2020.
  24. Georgi, M.; Patel, S.; Tandon, D.; Gupta, A.; Light, A.; Nathan, A. How is the Digital Surgical Environment Evolving? The Role of Augmented Reality in Surgery and Surgical Training. Preprints.org 2021, 2021100048.
  25. Calhoun, V.D.; Adali, T.; Giuliani, N.R.; Pekar, J.J.; Kiehl, K.A.; Pearlson, G.D. Method for multimodal analysis of independent source differences in schizophrenia: Combining gray matter structural and auditory oddball functional data. Hum. Brain Mapp. 2006, 27, 47–62.
  26. Kronman, A.; Joskowicz, L. Image segmentation errors correction by mesh segmentation and deformation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2013; pp. 206–213.
  27. Tamadazte, B.; Voros, S.; Boschet, C.; Cinquin, P.; Fouard, C. Augmented 3-d view for laparoscopy surgery. In Workshop on Augmented Environments for Computer-Assisted Interventions; Springer: Berlin/Heidelberg, Germany, 2012; pp. 117–131.
  28. Wang, A.; Wang, Z.; Lv, D.; Fang, Z. Research on a novel non-rigid registration for medical image based on SURF and APSO. In Proceedings of the 2010 3rd International Congress on Image and Signal Processing, Yantai, China, 16–18 October 2010; IEEE: Piscataway, NJ, USA, 2010; Volume 6, pp. 2628–2633.
  29. Pandey, P.; Guy, P.; Hodgson, A.J.; Abugharbieh, R. Fast and automatic bone segmentation and registration of 3D ultrasound to CT for the full pelvic anatomy: A comparative study. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 1515–1524.
  30. Hacihaliloglu, I. Ultrasound imaging and segmentation of bone surfaces: A review. Technology 2017, 5, 74–80.
  31. El-Hariri, H.; Pandey, P.; Hodgson, A.J.; Garbi, R. Augmented reality visualisation for orthopaedic surgical guidance with pre-and intra-operative multimodal image data fusion. Healthc. Technol. Lett. 2018, 5, 189–193.
  32. Hussain, R.; Lalande, A.; Marroquin, R.; Guigou, C.; Grayeli, A.B. Video-based augmented reality combining CT-scan and instrument position data to microscope view in middle ear surgery. Sci. Rep. 2020, 10, 6767.
  33. Wittmann, W.; Wenger, T.; Zaminer, B.; Lueth, T.C. Automatic correction of registration errors in surgical navigation systems. IEEE Trans. Biomed. Eng. 2011, 58, 2922–2930.
  34. Zhang, Y.; Wang, K.; Jiang, J.; Tan, Q. Research on intraoperative organ motion tracking method based on fusion of inertial and electromagnetic navigation. IEEE Access 2021, 9, 49069–49081.
  35. Jiang, Z.; Gao, Z.; Chen, X.; Sun, W. Remote Haptic Collaboration for Virtual Training of Lumbar Puncture. J. Comput. 2013, 8, 3103–3110.
  36. Zeng, F.; Wei, F. Hole filling algorithm based on contours information. In Proceedings of the 2nd International Conference on Information Science and Engineering, Hangzhou, China, 4–6 December 2010.
  37. Wu, C.; Wan, J.W. Multigrid methods with newton-gauss-seidel smoothing and constraint preserving interpolation for obstacle problems. Numer. Math. Theory Methods Appl. 2015, 8, 199–219.