Hi! I'm Olivier Petit, PhD in Deep Learning and Computer Vision.

Introduction


After getting my MSc and engineering degree in computer science and machine learning at INSA Rouen, I joined the CEDRIC laboratory at le CNAM in Paris to study Deep Learning and Computer Vision under the supervision of Nicolas Thome. I work directly with Visible Patient, which develops software for professionals in the medical field. Thanks to that partnership, I received my PhD in December 2021 in deep learning applied to medical images. I'm very proud and happy to bring Artificial Intelligence to medical applications.

Activity

Publications


2021

Olivier Petit.
Thesis "Semantic Segmentation of 3D Medical Images with Deep Learning"
[PDF]

STIPPLE graphical abstract

Olivier Petit, Nicolas Thome and Luc Soler.
"3D Spatial Priors for Semi-Supervised Organ Segmentation with Deep Convolutional Neural Networks"
IJCARS 2021 [PDF]

Purpose: Fully Convolutional Networks (FCNs) are the most popular models for medical image segmentation. However, they do not explicitly integrate spatial organ positions, which can be crucial for proper labeling in challenging contexts.

Methods: In this work, we propose a method that combines a model representing prior probabilities of an organ position in 3D with visual FCN predictions by means of a generalized prior-driven prediction function. The prior is also used in a self-labeling process to handle low-data regimes, in order to improve the quality of the pseudo-label selection.

Results: Experiments carried out on CT scans from the public TCIA pancreas segmentation dataset reveal that the resulting STIPPLE model can significantly increase performances compared to the FCN baseline, especially with few training images. We also show that STIPPLE outperforms state-of-the-art semi-supervised segmentation methods by leveraging the spatial prior information.

Conclusion: STIPPLE provides a segmentation method that is effective with few labeled examples, which is crucial in the medical domain. It offers an intuitive way to incorporate absolute position information by mimicking expert annotators.
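
To give a concrete feel for the prior-driven prediction function, here is a minimal PyTorch sketch that fuses FCN class probabilities with a precomputed 3D spatial prior. The variable names, the simple multiplicative fusion, and the toy shapes are illustrative assumptions, not the exact formulation used in STIPPLE.

import torch
import torch.nn.functional as F

def prior_driven_prediction(logits, prior, eps=1e-8):
    """Fuse FCN class scores with a 3D spatial prior (illustrative sketch).

    logits: (B, C, D, H, W) raw scores from the segmentation network.
    prior:  (C, D, H, W) per-class probability that each organ occupies a voxel,
            assumed to be estimated offline from training masks.
    Returns fused per-voxel class probabilities of shape (B, C, D, H, W).
    """
    visual = F.softmax(logits, dim=1)                       # network belief
    fused = visual * prior.unsqueeze(0)                     # weight by the spatial prior
    return fused / (fused.sum(dim=1, keepdim=True) + eps)   # renormalize over classes

# Toy usage: 2 classes (background / organ) on a tiny 3D volume.
logits = torch.randn(1, 2, 8, 16, 16)
prior = torch.rand(2, 8, 16, 16)
prior = prior / prior.sum(dim=0, keepdim=True)
probs = prior_driven_prediction(logits, prior)
print(probs.shape)   # torch.Size([1, 2, 8, 16, 16])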

U-Transformer graphical abstract

Olivier Petit, Nicolas Thome, Clement Rambour, Loic Themyr, Toby Collins and Luc Soler.
"U-Net Transformer: Self and Cross Attention for Medical Image Segmentation"
MICCAI 2021 workshop MLMI [PDF] [arXiv]

Medical image segmentation remains particularly challenging for complex and low-contrast anatomical structures. In this paper, we introduce the U-Transformer network, which combines a U-shaped architecture for image segmentation with self- and cross-attention from Transformers. U-Transformer overcomes the inability of U-Nets to model long-range contextual interactions and spatial dependencies, which are arguably crucial for accurate segmentation in challenging contexts. To this end, attention mechanisms are incorporated at two main levels: a self-attention module leverages global interactions between encoder features, while cross-attention in the skip connections allows a fine spatial recovery in the U-Net decoder by filtering out non-semantic features. Experiments on two abdominal CT-image datasets show the large performance gain brought by U-Transformer compared to U-Net and local Attention U-Nets. We also highlight the importance of using both self- and cross-attention, and the interpretability features brought by U-Transformer.
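
As a rough illustration of the cross-attention idea in the skip connections, the PyTorch sketch below lets decoder features attend to encoder features at the same resolution. The module layout, dimensions, and names are simplifying assumptions (2D, a single attention layer), not the exact U-Transformer blocks.

import torch
import torch.nn as nn

class CrossAttentionSkip(nn.Module):
    """Skip connection where decoder features attend to encoder features.

    The decoder feature map provides the queries; the encoder feature map
    provides keys and values, so the skip path is filtered by global context
    instead of being concatenated as-is.
    """
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, decoder_feat, encoder_feat):
        b, c, h, w = decoder_feat.shape
        q = decoder_feat.flatten(2).transpose(1, 2)    # (B, H*W, C)
        kv = encoder_feat.flatten(2).transpose(1, 2)   # (B, H*W, C)
        out, _ = self.attn(q, kv, kv)                  # queries from decoder, keys/values from encoder
        out = self.norm(out + q)                       # residual connection + normalization
        return out.transpose(1, 2).reshape(b, c, h, w)

# Toy usage on one skip level.
skip = CrossAttentionSkip(channels=64)
dec = torch.randn(2, 64, 32, 32)
enc = torch.randn(2, 64, 32, 32)
print(skip(dec, enc).shape)   # torch.Size([2, 64, 32, 32])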
INERRANT graphical abstract

Olivier Petit, Nicolas Thome and Luc Soler.
"Iterative Confidence Relabeling with Deep ConvNets for Organ Segmentation with Partial Labels"
Computerized Medical Imaging and Graphics, 2021 [PDF]

Training deep ConvNets requires large labeled datasets. However, collecting pixel-level labels for medical image segmentation is very expensive and requires a high level of expertise. In addition, most existing segmentation masks provided by clinical experts focus on specific anatomical structures. In this paper, we propose a method dedicated to handling such partially labeled medical image datasets. We propose a strategy to identify pixels for which labels are correct, and to train Fully Convolutional Neural Networks with a multi-label loss adapted to this context. In addition, we introduce an iterative confidence self-training approach inspired by curriculum learning to relabel missing pixel labels. It relies on selecting the most confident predictions with a specifically designed confidence network that learns an uncertainty measure, which is leveraged in our relabeling process. Our approach, INERRANT for Iterative coNfidencE Relabeling of paRtial ANnoTations, is thoroughly evaluated on two public datasets (TCIA and LiTS), and one internal dataset with seven abdominal organ classes. We show that INERRANT robustly deals with partial labels, performing similarly to a model trained on all labels even for large missing label proportions. We also highlight the importance of our iterative learning scheme and the proposed confidence measure for optimal performance. Finally, we show a practical use case where a limited number of completely labeled data are enriched by publicly available but partially labeled data.
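
A minimal sketch of what a multi-label loss restricted to trusted pixels could look like in PyTorch. The per-class binary formulation, tensor names, and masking scheme are assumptions made for illustration, not the exact INERRANT loss.

import torch
import torch.nn.functional as F

def partial_label_bce(logits, targets, known_mask):
    """Multi-label segmentation loss computed only on pixels with trusted labels.

    logits:     (B, C, H, W) one score per organ class (sigmoid per class, so
                classes can be supervised independently).
    targets:    (B, C, H, W) binary masks; meaningless where the label is unknown.
    known_mask: (B, C, H, W) 1 where the annotation is considered correct,
                0 where it is missing or ambiguous.
    """
    per_pixel = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    per_pixel = per_pixel * known_mask                      # ignore unknown pixels
    return per_pixel.sum() / known_mask.sum().clamp(min=1)

# Toy usage: 3 organ classes, roughly half of the annotations missing.
logits = torch.randn(2, 3, 64, 64)
targets = torch.randint(0, 2, (2, 3, 64, 64)).float()
known = (torch.rand(2, 3, 64, 64) > 0.5).float()
print(partial_label_bce(logits, targets, known))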
2019
MIDL Prior graphical abstract

Olivier Petit, Nicolas Thome and Luc Soler.
"Biasing Deep ConvNets for Semantic Segmentation of Medical Images with a Prior-driven Prediction Function"
MIDL 2019, extended abstract [PDF]

2018
Illustration for the paper on missing annotations

Olivier Petit, Nicolas Thome, Arnaud Charnoz, Alexandre Hostettler and Luc Soler.
"Handling Missing Annotations for Semantic Segmentation with Deep ConvNets."
MICCAI 2018 workshop DLMIA [PDF]

Annotation of medical images for semantic segmentation is a very time-consuming and difficult task. Moreover, clinical experts often focus on specific anatomical structures and thus produce partially annotated images. In this paper, we introduce SMILE, a new deep convolutional neural network which addresses the issue of learning with incomplete ground truth. SMILE aims to identify ambiguous labels in order to ignore them during training and avoid propagating incorrect or noisy information. A second contribution is SMILEr, which uses SMILE as initialization for automatically relabeling missing annotations using a curriculum strategy. Experiments on 3 organ classes (liver, stomach, pancreas) show the relevance of the proposed approach for semantic segmentation: with 70% of missing annotations, SMILEr performs similarly to a baseline trained with complete ground truth annotations.
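
To make the curriculum-style relabeling step more concrete, here is a small PyTorch sketch that fills missing annotations with the model's most confident predictions. The confidence threshold, its schedule, and the function names are hypothetical choices for illustration, not the exact SMILEr procedure.

import torch

def relabel_missing(logits, targets, known_mask, threshold=0.95):
    """Fill missing annotations with confident pseudo-labels (illustrative sketch).

    logits:     (B, C, H, W) current model scores (sigmoid per class).
    targets:    (B, C, H, W) existing binary annotations.
    known_mask: (B, C, H, W) 1 where the label is trusted, 0 where missing.
    threshold:  confidence required to accept a pseudo-label; lowering it over
                training rounds gives a simple curriculum.
    Returns updated (targets, known_mask).
    """
    probs = torch.sigmoid(logits)
    # Accept only very confident predictions, and only where the label is missing.
    confident = ((probs > threshold) | (probs < 1 - threshold)) & (known_mask == 0)
    new_targets = torch.where(confident, (probs > 0.5).float(), targets)
    new_known = torch.clamp(known_mask + confident.float(), max=1.0)
    return new_targets, new_known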