Deep Learning Based Image Reconstruction for Hybrid Optoacoustic and Ultrasound Imaging
Over the last decade, the Razansky lab has been instrumental in the development of multi-spectral optoacoustic tomography (MSOT), transforming this novel bio-imaging technology from the initial demonstration of technical feasibility, through the establishment of image reconstruction methodologies, all the way to clinical translation. The method is rapidly finding its place as a potent clinical imaging tool owing to its high sensitivity and molecular specificity as well as its non-invasive, real-time, high-resolution volumetric imaging capabilities deep in living biological tissues. Despite the great promise demonstrated in pilot clinical studies, human imaging with MSOT is afflicted by limited tomographic access to the region of interest, while significant constraints are further imposed on light deposition in deep tissues. This project aims at developing new artificial intelligence capabilities for improving the image quality and diagnostic capacity of MSOT images acquired with sub-optimal scanner configurations resulting from, e.g., application-related constraints or low-cost design considerations. In particular, we will devise machine learning approaches to enable an efficient and robust multimodal combination of MSOT with pulse-echo ultrasonography by training neural networks on high-resolution, high-quality datasets generated by dedicated, optimally designed scanner configurations. The trained models will be used to restore the quality of artifactual images produced by various sub-optimal scanner configurations with limited tomographic view or sparsely acquired data in typical clinical imaging scenarios. These advancements will help reduce inter-clinician variability and enable a more efficient, rapid, and objective analysis of large amounts of image data, thus relaxing the requirements for specialized training and facilitating the wider adoption of MSOT apparatus in primary care and other non-hospital settings.
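The limited-view problem at the heart of the project can be illustrated with a toy numpy sketch (this is an idealized model for intuition, not the lab's reconstruction pipeline): a single point absorber is imaged by a ring of detectors and reconstructed by simple back-projection, once with full tomographic coverage and once with a 90° detection arc typical of a handheld probe. All parameters (speed of sound, sampling rate, geometry) are illustrative assumptions.

```python
import numpy as np

# Toy limited-view demonstration: ideal point source, delta-pulse signals,
# naive back-projection. Parameters below are assumptions for the sketch.
c = 1500.0        # speed of sound in tissue [m/s]
fs = 20e6         # sampling rate [Hz]
n_samples = 1024  # samples per detector channel
R = 0.025         # detector ring radius [m]

grid = np.linspace(-0.01, 0.01, 101)   # 2 x 2 cm field of view
X, Y = np.meshgrid(grid, grid)
src = (grid[60], grid[35])             # point absorber position

def backproject(angles):
    """Back-project idealized delta signals from detectors at the given angles."""
    img = np.zeros_like(X)
    for th in angles:
        dx, dy = R * np.cos(th), R * np.sin(th)
        # ideal recorded signal: a single spike at the source time of flight
        sig = np.zeros(n_samples)
        sig[int(round(np.hypot(src[0] - dx, src[1] - dy) / c * fs))] = 1.0
        # smear each sample back over its iso-time-of-flight arc in the image
        idx = np.round(np.hypot(X - dx, Y - dy) / c * fs).astype(int)
        img += sig[np.clip(idx, 0, n_samples - 1)]
    return img

full = backproject(np.linspace(0, 2 * np.pi, 128, endpoint=False))  # full ring
limited = backproject(np.linspace(-np.pi / 4, np.pi / 4, 32))       # 90° arc
```

With full coverage all 128 back-projected arcs intersect only at the true source pixel, whereas the 90° arc leaves the source smeared along the directions the detectors cannot see; a learned model in this project would be trained to map such limited-view reconstructions toward the full-view target.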
PI / Partners
Multi-Scale Functional and Molecular Imaging (ETHZ & UZH)
Devising deep learning approaches to enable accurate reconstruction of 2D and 3D multi-spectral optoacoustic tomography (MSOT) images from artifactual data recorded by sub-optimal imaging systems.
Development of accurate automatic segmentation and image improvement approaches for multimodal hybrid optoacoustic ultrasound (OPUS) images.
Correcting common MSOT image artifacts present in images acquired under typical handheld clinical imaging scenarios.
Exploring data science approaches, operating on paired and unpaired data in both the acquired signal domain and the reconstructed image domain, for accurate scene reconstruction from limited-view input.
Exploring data science approaches for segmentation of structures of interest (e.g., blood vessels) that rely on weak annotations or on data from other domains (e.g., simulated data).
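For the signal-domain objective, supervised training requires paired examples of degraded and full-quality acquisitions. A minimal sketch, assuming a sinogram laid out as [detector channels x time samples] (a random array stands in for real data; the names, shapes, and degradation factors are illustrative, not the project's actual data format):

```python
import numpy as np

# Generate (degraded, full-view) training pairs by subsampling a full-view
# sinogram. The random array is a stand-in for a real acquisition.
rng = np.random.default_rng(0)
full_sinogram = rng.standard_normal((256, 1024))  # [detectors, samples]

def degrade(sino, keep_every=4, view_fraction=0.5):
    """Return sparse-sampling and limited-view copies of a full-view sinogram."""
    sparse = np.zeros_like(sino)
    sparse[::keep_every] = sino[::keep_every]      # keep every 4th channel
    limited_view = np.zeros_like(sino)
    n_keep = int(sino.shape[0] * view_fraction)
    limited_view[:n_keep] = sino[:n_keep]          # keep half the aperture
    return sparse, limited_view

sparse, limited_view = degrade(full_sinogram)
# (sparse, full_sinogram) and (limited_view, full_sinogram) then serve as
# input/target pairs for supervised training of a restoration network.
```

The same degradation operators, applied after reconstruction instead of before it, yield the image-domain pairs mentioned in the objective above; unpaired approaches would instead match the two pools distributionally.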
MSOT is a relatively new modality among medical imaging approaches. It has many desirable properties, such as real-time acquisition and high resolution. Image contrast arises from differences in the wavelength-dependent absorption properties of tissues, offering a new window into non-invasive imaging of tissue close to the surface with no known side effects on the imaged body (e.g., no ionizing radiation). One initially sought application of MSOT is the detection of cancerous tissue based on the oxygen consumption of cystic bodies. Another field of application is the assessment of lipid residue within vessels (e.g., the carotid artery).
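The multi-spectral contrast mechanism can be made concrete with a linear spectral-unmixing sketch: per-pixel multi-wavelength optoacoustic amplitudes are modeled as a weighted sum of chromophore absorption spectra, and least squares recovers the concentrations. The absorption values below are illustrative placeholders, not tabulated hemoglobin coefficients.

```python
import numpy as np

# Linear spectral unmixing: rows = wavelengths, columns = [HbO2, Hb].
# The coefficients are illustrative placeholders, NOT real hemoglobin spectra.
A = np.array([[2.0, 8.0],   # shorter wavelength: deoxy-Hb dominates
              [4.0, 4.0],   # near an isosbestic point: equal absorption
              [6.0, 2.0]])  # longer wavelength: oxy-Hb dominates

true_conc = np.array([0.7, 0.3])   # ground-truth per-pixel concentrations
measured = A @ true_conc           # noiseless multi-wavelength amplitudes

# Least-squares unmixing recovers the concentrations, from which blood
# oxygen saturation (sO2) follows -- the contrast behind e.g. tumor hypoxia.
conc, *_ = np.linalg.lstsq(A, measured, rcond=None)
sO2 = conc[0] / conc.sum()
```

In practice the same per-pixel fit is applied across the whole reconstructed image stack, with measured hemoglobin spectra in place of the placeholder matrix.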
Berkan Lafci, Elena Merčep, Stefan Morscher, Xosé Luís Deán-Ben, and Daniel Razansky. “Deep Learning for Automatic Segmentation of Hybrid Optoacoustic Ultrasound (OPUS) Images.” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 68, no. 3 (2020): 688-696. doi: 10.1109/TUFFC.2020.3022324
Berkan Lafci, Elena Merčep, Joaquin L. Herraiz, Xosé Luís Deán-Ben, and Daniel Razansky. “Noninvasive multiparametric characterization of mammary tumors with transmission-reflection optoacoustic ultrasound.” Neoplasia 22, no. 12 (2020): 770-777. doi: 10.1016/j.neo.2020.10.008
Neda Davoudi, Berkan Lafci, Ali Özbek, Xosé Luís Deán-Ben, and Daniel Razansky. “Deep learning of image- and time-domain data enhances the visibility of structures in optoacoustic tomography.” Optics Letters 46, no. 13 (2021): 3029-3032. doi: 10.1364/OL.424571
Berkan Lafci, Elena Merčep, Joaquin L. Herraiz, Xosé Luís Deán-Ben, and Daniel Razansky. “Transmission-reflection optoacoustic ultrasound (TROPUS) imaging of mammary tumors.” Photons Plus Ultrasound: Imaging and Sensing 11642 (2021): 192-197. doi: 10.1117/12.2577907
Yexing Hu, Berkan Lafci, Artur Luzgin, Hao Wang, Jan Klohs, Xosé Luís Deán-Ben, Ruiqing Ni, Daniel Razansky, and Wuwei Ren. “Deep learning facilitates fully automated brain image registration of optoacoustic tomography and magnetic resonance imaging.” arXiv preprint, arXiv:2109.01880 (2021).