Deep learning for TOMCAT imaging


The beamline for TOmographic Microscopy and Coherent rAdiology experimenTs (TOMCAT) at the Paul Scherrer Institut (PSI) enables synchrotron radiation X-ray tomographic microscopy (SRXTM). This technique makes it possible to capture the three-dimensional structure of animal tissue from organs such as the heart, brain, lungs, and bones at high speed and high resolution in a non-destructive manner. Such three-dimensional imaging can help clinicians, pathologists, and biomedical researchers obtain a deeper understanding of tissue and its functioning without going through the cumbersome conventional process of tissue fixing, drying, staining, paraffin blocking, and slicing, which in the end only provides two-dimensional images.

PSI, ETH Zurich, and the Swiss Data Science Center (SDSC) worked jointly on automatically analysing such three-dimensional images to ease the burden on clinicians and pathologists of distinguishing healthy tissue from diseased tissue. Beyond easing this burden, three-dimensional imaging and its automatic analysis can reveal new insights into the functioning of organs and their constitution. Together, the accelerated analysis of tissue and a better understanding of the organs will benefit patients suffering from maladies of vital organs.

The focus of DeepMicroia is on heart tissue, in particular that of rodents. The goal is to assess the health of the tissue by estimating the amount and nature of collagen fibres. This calls for pixel-wise segmentation of the three-dimensional image volumes. Since performing such segmentation manually is tedious and error-prone, the project develops automatic methods for it.

Starting Date / Status

December 2017


PI / Partners

Biomedical Image Computing (ETH Zürich)

X-Ray Tomography Group (PSI)

Read the article about this project on our blog:

Heart Tissue Analysis In a Heartbeat



The project aimed to improve the state of the art in micro-CT image analysis for studying tissue microstructure and disease-related alterations in the heart. This required segmenting hypertensive heart tissue into collagen, cells, and background. The specific goals were to:

  1. Create and improve segmentation and quantification models
  2. Improve robustness to artefacts
  3. Reduce need for annotations


Micron-scale CT images are essential for studying tissue microstructure and its alterations. Better automatic and semi-automatic tools would enable high-throughput, more accurate analysis at a much faster rate than the manual approaches that are currently the only widely used option.


SDSC contributed three solutions to the problem of finding collagen fibres in the heart tissue image volumes: a training-free image processing solution, a trained deep network solution, and a deep network solution that required minimal labelling. The video below shows what a segmented volume looks like (collagen fibres in red, cells in yellow, and empty space as background).

The first approach was based on an image processing technique: a Difference of Gaussians to identify potential fibres, followed by hysteresis-based region growing. This method could separate both collagen fibres and background regions with the help of eight manually tunable parameters. The code was ported from C to Python for ease of use, and a user interface was developed in C++ to make parameter tuning easier. The advantage of this method was that annotations were not required and the processing could be done on a regular laptop in a few seconds. The disadvantage was that the parameters had to be tuned manually for each case, and certain low-contrast regions of the volumes were hard to segment with this method.
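The core of such a pipeline can be sketched in a few lines of NumPy/SciPy. The sigmas and thresholds below are illustrative placeholders, not the project's actual values (the real pipeline exposed eight manually tuned parameters), and the implementation of hysteresis via connected components is one common way to realise region growing from strong seeds:

```python
# Sketch of a Difference-of-Gaussians detector with hysteresis-style
# region growing. All parameter values are illustrative assumptions.
import numpy as np
from scipy import ndimage


def dog_hysteresis(image, sigma_small=1.0, sigma_large=3.0, low=0.05, high=0.15):
    # Difference of Gaussians: band-pass response highlighting fibre-scale structures.
    dog = (ndimage.gaussian_filter(image, sigma_small)
           - ndimage.gaussian_filter(image, sigma_large))
    strong = dog > high            # confident seed pixels
    weak = dog > low               # permissive candidate pixels
    labels, _ = ndimage.label(weak)            # connected components of the weak mask
    keep = np.unique(labels[strong])           # components containing at least one seed
    keep = keep[keep != 0]                     # drop the background label
    return np.isin(labels, keep)               # grown binary mask


# Toy input: noise plus one bright fibre-like stripe.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.01, (64, 64))
img[30:34, 10:50] += 1.0
mask = dog_hysteresis(img)
```

The two-threshold scheme keeps low-contrast pixels only when they are connected to a high-confidence detection, which suppresses isolated noise responses.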

So, a second method was developed, based on the deep learning architecture UNet. It required training data, which was laboriously collected over a few stacks. Compared to the image processing method, the segmentation was judged by the experts to be of better quality. On the flip side, the method required annotated training data and the use of GPUs.
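The essential idea of a UNet is an encoder-decoder with skip connections, ending in per-pixel class logits (here three classes: collagen, cells, background). The tiny two-level PyTorch sketch below is a hypothetical illustration of that structure, not the project's actual architecture or hyperparameters:

```python
# Minimal two-level UNet-style network (illustrative; depths, channel
# counts, and training details are assumptions, not the project's code).
import torch
import torch.nn as nn


def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    def __init__(self, n_classes=3):   # collagen, cells, background
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)          # 16 skip + 16 upsampled channels
        self.head = nn.Conv2d(16, n_classes, 1)  # 1x1 conv to class logits

    def forward(self, x):
        e1 = self.enc1(x)                        # full-resolution features
        e2 = self.enc2(self.pool(e1))            # half-resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)                     # per-pixel class logits


logits = TinyUNet()(torch.zeros(1, 1, 64, 64))
```

The skip connection is what lets the network combine coarse context with fine spatial detail, which matters for thin structures such as collagen fibres.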

One issue faced throughout the project was that the annotations themselves were of poor quality. Obtaining good annotations was laborious and time-consuming despite the use of Ilastik as a tool to ease the task. So, a third method was developed that trained the UNet on a single diagonal slice, cutting the training data requirement by two orders of magnitude. Thanks to the isotropic nature of the image volumes, a diagonal slice contains similar structures to horizontal or vertical slices. This allowed labelling just one slice carefully instead of nearly 400 slices erroneously.
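Extracting such an oblique plane from a cubic volume is straightforward with NumPy fancy indexing; the 45-degree plane below is one possible choice (the function name, axis convention, and slice orientation are assumptions for illustration). Because the voxels are isotropic, structures in this plane are statistically similar to those in axial slices, so one careful annotation can stand in for many:

```python
# Illustrative extraction of a 45-degree diagonal slice from an
# isotropic volume with axes ordered (z, y, x). Names are hypothetical.
import numpy as np


def diagonal_slice(volume):
    """Return the plane x == z of a cubic volume, one row per z position."""
    n = volume.shape[0]
    idx = np.arange(n)
    return volume[idx, :, idx]   # shape (n, n_y)


vol = np.arange(4 * 4 * 4).reshape(4, 4, 4)
sl = diagonal_slice(vol)
```

Pairing the diagonal index on the first and last axes picks one pixel column per depth, yielding a 2D image that sweeps obliquely through the whole volume.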


[1] Bredell, G., Tanner, C. and Konukoglu, E., 2018. Iterative Interaction Training for Segmentation Editing Networks. In Int. Workshop on Machine Learning in Medical Imaging (pp. 363-370). Springer.

[2] Dejea, H., Tanner, C., Achanta, R., Stampanoni, M., Perez-Cruz, F., Konukoglu, E. and Bonnin, A., 2019. Synchrotron X-Ray Phase Contrast Imaging and Deep Neural Networks for Cardiac Collagen Quantification in Hypertensive Rat Model. In Int. Conference on Functional Imaging and Modeling of the Heart. Springer.