SIR 2025
General IR
Traditional Poster
Allan Thomas, PhD (he/him/his)
Assistant Professor
Mallinckrodt Institute of Radiology, Washington University School of Medicine, United States
Lunchi Guo, MS (he/him/his)
Student
Washington University in St Louis, United States
Dennis Trujillo, PhD
Senior Systems Manager
Mallinckrodt Institute of Radiology, United States
James R. Duncan, MD, PhD, FSIR
Professor
Mallinckrodt Institute of Radiology, United States
Purpose:
Accurate dosimetry for fluoroscopically guided interventional (FGI) procedures is hindered by limited knowledge of the specific anatomy that was irradiated. Current methods use radiation dose structured report (RDSR) data to estimate the patient anatomy exposed during each irradiation event [1]. We propose a deep learning algorithm that automatically matches 2D fluoroscopic images to the corresponding anatomical regions of computational phantoms, enabling more precise patient dose estimates.
Materials and Methods:
A phantom set developed by the University of Florida and the National Cancer Institute was used for this study. Simulated CT images were previously derived for the entire phantom library and are stored in Digital Imaging and Communications in Medicine (DICOM) CT format [2]. Our method involves two main steps: (1) creating 2D fluoroscopic images, and (2) developing a deep learning algorithm to predict anatomical coordinates from these images. For the first step, we used DeepDRR, a framework for fast and realistic simulation of fluoroscopy and digital radiography from CT scans, to generate 2D still images [3]. For the second step, we employed a Residual Neural Network (ResNet) architecture, known for its ability to train very deep networks effectively, to learn the mapping between 2D images and 3D anatomical coordinates [4].
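Conceptually, a digitally reconstructed radiograph (DRR) collapses a 3D attenuation volume into a 2D projection by integrating attenuation along each ray (Beer-Lambert law). The sketch below is a deliberately simplified parallel-beam version for illustration only; it is not DeepDRR's API, which performs physics-based GPU ray casting with scatter and noise modeling. All names here are hypothetical.

```python
import math

def simulate_parallel_drr(mu_volume, step_mm=1.0):
    """Minimal parallel-beam DRR sketch (illustrative, not DeepDRR).

    mu_volume: nested list indexed [row][col][depth] of linear
    attenuation coefficients (1/mm); rays travel along the depth axis.
    Returns a [row][col] image of transmitted intensity via
    Beer-Lambert: I = I0 * exp(-sum(mu * dx)), with I0 = 1.
    """
    image = []
    for plane in mu_volume:
        out_row = []
        for ray in plane:
            # Discrete line integral of attenuation along this ray.
            path_integral = sum(mu * step_mm for mu in ray)
            out_row.append(math.exp(-path_integral))
        image.append(out_row)
    return image

# Tiny uniform "water-like" volume: 2x2 detector, 3 voxels deep,
# mu = 0.02 /mm, 1 mm steps -> every pixel is exp(-0.06).
demo = simulate_parallel_drr([[[0.02] * 3] * 2] * 2)
```

A cone-beam geometry (as used in fluoroscopy) would instead trace diverging rays from a point source through the volume; the line-integral idea is the same.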
Results:
A 35-year-old female phantom was used for initial testing. Using DeepDRR, we generated 5,928 simulated fluoroscopic images from various positions with different field sizes. For the ResNet model, we set the allowable error tolerance to ±10 mm. The model achieved an accuracy of 91.33%, indicating a high level of precision in its predictions. The average Y offset was 2.87 mm, and the average Z offset was 4.53 mm.
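An accuracy defined by a ±10 mm tolerance amounts to the fraction of test images whose predicted coordinates fall within 10 mm of ground truth on each reported axis. A minimal sketch of such a metric, with hypothetical function names and made-up (Y, Z) coordinate pairs rather than the study's data:

```python
def within_tolerance_accuracy(pred, true, tol_mm=10.0):
    """Fraction of predictions whose per-axis offset is within ±tol_mm.

    pred, true: lists of (y_mm, z_mm) coordinate pairs.
    """
    hits = 0
    for (py, pz), (ty, tz) in zip(pred, true):
        if abs(py - ty) <= tol_mm and abs(pz - tz) <= tol_mm:
            hits += 1
    return hits / len(pred)

def mean_abs_offset(pred, true, axis):
    """Mean absolute offset (mm) along one axis (0 = Y, 1 = Z)."""
    return sum(abs(p[axis] - t[axis]) for p, t in zip(pred, true)) / len(pred)

# Illustrative data only: three predictions against a ground truth
# of (0, 0); the second misses the ±10 mm Y tolerance.
pred = [(0.0, 0.0), (12.0, 3.0), (5.0, -4.0)]
true = [(0.0, 0.0), (0.0, 0.0), (0.0, 0.0)]
acc = within_tolerance_accuracy(pred, true)   # 2 of 3 within tolerance
```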
Conclusion: These results demonstrate the potential of synthetic image generation and deep learning approaches to significantly improve the accuracy of patient dose estimates in FGI procedures. Future work will focus on expanding the simulated fluoroscopic image library to include images from additional phantoms, which will be used to continue training the ResNet model. Finally, we plan to train and test the model with images from actual fluoroscopic procedures to enhance the model's generalization and robustness across diverse clinical scenarios.