CoDAGANs

Abstract

Distinct digitization techniques for biomedical images yield different visual patterns across samples from many radiological exams. These differences may hamper the use of data-driven Machine Learning approaches for inference over these images, such as Deep Learning methods. Another important difficulty in this field is the lack of labeled data, even though in many cases there is an abundance of unlabeled data available. An important step toward improving the generalization capabilities of these methods is therefore to perform Unsupervised and Semi-Supervised Domain Adaptation between different datasets of biomedical images. To tackle this problem, in this work we propose an Unsupervised and Semi-Supervised Domain Adaptation method for dense labeling tasks in biomedical images that uses Generative Adversarial Networks for Unsupervised Image-to-Image Translation. We merge these unsupervised Deep Neural Networks with well-known supervised deep semantic segmentation architectures in order to create a semi-supervised method capable of learning from both unlabeled and labeled data, whenever labeling is available. We compare our method across several domains, datasets, and segmentation tasks against traditional baselines from the Transfer Learning literature, such as unsupervised feature-space distance-based methods and pretrained models used both with and without fine-tuning. We perform quantitative and qualitative analyses of the proposed method and the baselines in the distinct scenarios considered in our experimental evaluation. The proposed method shows consistently and significantly better results than the baselines in scarce labeled data scenarios, achieving Jaccard values greater than 0.9 in most tasks. Fully Unsupervised Domain Adaptation results were observed to be close to those of the Fully Supervised Domain Adaptation used in the traditional procedure of fine-tuning pretrained Deep Neural Networks.
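
To give a concrete picture of how the pieces fit together, the sketch below illustrates one possible training step in the spirit of the method described above: a MUNIT/UNIT-style encoder-decoder translates images between datasets, while a supervised segmentation network operates on the shared (isomorphic) representation and is only supervised when labels exist. This is an illustrative sketch, not the official implementation: the tiny Encoder, Decoder, and Segmenter modules, the append_onehot conditioning helper, the loss weighting, and the hyperparameters are all simplifying assumptions, and MUNIT's adversarial, style, and cycle-consistency terms are omitted.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    N_DATASETS = 3  # number of datasets adapted jointly (hypothetical value)

    def append_onehot(x, dataset_idx):
        # Concatenate a one-hot dataset code as extra input channels.
        b, _, h, w = x.shape
        code = torch.zeros(b, N_DATASETS, h, w, device=x.device)
        code[:, dataset_idx] = 1.0
        return torch.cat([x, code], dim=1)

    class Encoder(nn.Module):
        # Stand-in for the MUNIT/UNIT-style encoder producing the shared representation.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1 + N_DATASETS, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        # Stand-in for the MUNIT/UNIT-style decoder/generator.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(32 + N_DATASETS, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())
        def forward(self, z):
            return self.net(z)

    class Segmenter(nn.Module):
        # Stand-in for the supervised segmentation network (a U-Net in the paper).
        def __init__(self, n_classes=2):
            super().__init__()
            self.net = nn.Conv2d(32, n_classes, 1)
        def forward(self, z):
            return self.net(z)

    E, G, S = Encoder(), Decoder(), Segmenter()
    opt = torch.optim.Adam(
        list(E.parameters()) + list(G.parameters()) + list(S.parameters()), lr=1e-4)

    def train_step(x_i, i, j, y_i=None):
        # One step for a batch x_i from dataset i, translated towards dataset j.
        z_i = E(append_onehot(x_i, i))      # shared ("isomorphic") representation
        rec_i = G(append_onehot(z_i, i))    # reconstruction within dataset i
        x_i2j = G(append_onehot(z_i, j))    # translation i -> j (adversarial terms omitted)
        z_i2j = E(append_onehot(x_i2j, j))  # re-encode the translated image
        loss = F.l1_loss(rec_i, x_i)        # image reconstruction term
        if y_i is not None:                 # semi-supervised: labels may be missing
            loss = loss + F.cross_entropy(S(z_i), y_i)    # segment the original representation
            loss = loss + F.cross_entropy(S(z_i2j), y_i)  # segment the translated representation
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # Example usage with random tensors (1-channel 128x128 images, binary masks):
    x = torch.rand(2, 1, 128, 128)
    y = torch.randint(0, 2, (2, 128, 128))
    print(train_step(x, 0, 1, y))   # labeled batch
    print(train_step(x, 1, 2))      # unlabeled batch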

CoDAGAN Overview

(Figure: overview of the CoDAGAN architecture.)

Official Implementation

Qualitative Assessment

Below are segmentation predictions from our method across several distinct experiments, datasets, and image domains. All images, labels, and ground truths from our main experiments can be found in our Google Drive folder.

Lungs
Qualitative lung segmentation results (Image, Ground Truths, Pretrained U-Net, D2D, and CoDAGAN predictions) are shown for the JSRT, OpenIST, Shenzhen, Montgomery, Chest X-Ray 8, PadChest, NLMCXR, and OCT CXR datasets.
Heart
Qualitative heart segmentation results (Image, Ground Truths, Pretrained U-Net, D2D, and CoDAGAN predictions) are shown for the JSRT, OpenIST, Shenzhen, Montgomery, Chest X-Ray 8, PadChest, and NLMCXR datasets.
Clavicles
Qualitative clavicle segmentation results (Image, Ground Truths, Pretrained U-Net, D2D, and CoDAGAN predictions) are shown for the JSRT, OpenIST, Shenzhen, Montgomery, Chest X-Ray 8, PadChest, and NLMCXR datasets.
Pectoral Muscle
Qualitative pectoral muscle segmentation results (Image, Ground Truths, Pretrained U-Net, D2D, and CoDAGAN predictions) are shown for the INbreast, MIAS, DDSM B/C, DDSM A, BCDR, and LAPIMO datasets.
Breast Region
Qualitative breast region segmentation results (Image, Ground Truths, Pretrained U-Net, D2D, and CoDAGAN predictions) are shown for the INbreast, MIAS, DDSM B/C, DDSM A, BCDR, and LAPIMO datasets.
Teeth
Qualitative teeth segmentation results (Image, Ground Truths, Pretrained U-Net, D2D, and CoDAGAN predictions) are shown for two samples each from the IvisionLab and Panoramic X-ray datasets.
Mandible
Qualitative mandible segmentation results (Image, Ground Truths, Pretrained U-Net, D2D, and CoDAGAN predictions) are shown for two samples each from the Panoramic X-ray and IvisionLab datasets.

Referencing

If you use any of the models or code from this website, please be sure to cite this research's paper [1]; the base implementation of MUNIT/UNIT [2] and its paper [3]; and, if needed, the PyTorch semantic segmentation framework [4]:
    
[1] "Conditional Domain Adaptation GANs for Biomedical Image Segmentation". Oliveira, Hugo; Ferreira, Edemir; and dos Santos Jefersson. arXiv (2018).
 
[2] https://github.com/nvlabs/MUNIT
 
[3] "Multimodal Unsupervised Image-to-Image Translation". Huang, Xun; Liu, Ming-Yu; Belongie, Serge; Kautz, Jan. 2018. https://arxiv.org/abs/1804.04732.
 

Contact

If you have any doubts regarding the paper, methodology, or code, please contact oliveirahugo [at] dcc.ufmg.br or jefersson [at] dcc.ufmg.br.