In this paper, a new object-based method to estimate noise in magnitude MR images is proposed. The main advantage of this object-based method is its robustness to background artefacts such as ghosting. The proposed method is based on the adaptation of the Median Absolute Deviation (MAD) estimator in the wavelet domain for Rician noise. The MAD is a robust and efficient estimator initially proposed to estimate Gaussian noise. In this work, the adaptation of the MAD operator for Rician noise is performed by using only the wavelet coefficients corresponding to the object and by correcting the estimation with an iterative scheme based on the SNR of the image. During the evaluation, a comparison of the proposed method with several state-of-the-art methods is performed. A quantitative validation on a synthetic phantom with and without artefacts is presented. A new validation framework is proposed to perform quantitative validation on real data. The impact of the accuracy of noise estimation on the performance of a denoising filter is also studied. The results obtained on synthetic images show the accuracy and the robustness of the proposed method. In the validation on real data, the proposed method obtained very competitive results compared to the methods under study.
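As a rough illustration of the estimator above, the following sketch (Python, assuming NumPy and PyWavelets are available) computes the classical Gaussian MAD estimate restricted to object-domain wavelet coefficients; the iterative SNR-based Rician correction described in the abstract is not reproduced here, and the mask handling is simplified.

```python
# Minimal sketch: Gaussian MAD noise estimate from object-only wavelet coefficients.
# The Rician correction of the actual method would be applied on top of this value.
import numpy as np
import pywt

def mad_sigma_object(image, object_mask, wavelet="db1"):
    # Single-level 2D wavelet transform; the diagonal (HH) sub-band carries mostly noise.
    _, (_, _, cD) = pywt.dwt2(image, wavelet)
    # Crude downsampling of the binary object mask to the sub-band grid.
    mask = object_mask[::2, ::2][:cD.shape[0], :cD.shape[1]]
    coeffs = cD[mask > 0]
    # Classical MAD estimator for Gaussian noise.
    return np.median(np.abs(coeffs)) / 0.6745
```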
The automatic assessment of hippocampus volume is an important tool in the study of several neurodegenerative diseases such as Alzheimer's disease. Specifically, the measurement of hippocampal subfield properties is of great interest since it can reveal earlier pathological changes in the brain. However, segmentation of these subfields is very difficult due to their complex structure and the need for manually labeled high-resolution magnetic resonance images. In this work, we present a novel pipeline for automatic hippocampus subfield segmentation based on a deeply supervised convolutional neural network. Results of the proposed method are shown for two available hippocampus subfield delineation protocols. The method has been compared to other state-of-the-art methods, showing improved results in terms of accuracy and execution time.
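For readers unfamiliar with deep supervision, the sketch below (PyTorch-style, purely illustrative) shows the general idea of combining segmentation losses from auxiliary heads at several decoder resolutions; it is not the architecture or loss used in the pipeline above, and the weights are arbitrary.

```python
import torch
import torch.nn.functional as F

def deeply_supervised_loss(logits_per_scale, target, weights=(1.0, 0.5, 0.25)):
    # logits_per_scale: list of (N, C, D, H, W) tensors at decreasing resolutions.
    # target: (N, D, H, W) integer label volume at full resolution.
    total = 0.0
    for logits, w in zip(logits_per_scale, weights):
        # Downsample the labels with nearest-neighbour interpolation to match each head.
        t = F.interpolate(target.float().unsqueeze(1), size=logits.shape[2:], mode="nearest")
        total = total + w * F.cross_entropy(logits, t.squeeze(1).long())
    return total
```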
Whole brain segmentation using deep learning (DL) is a very challenging task since the number of anatomical labels is very high compared to the number of available training images. To address this problem, previous DL methods proposed to use a global convolutional neural network (CNN) or a few independent CNNs. In this paper, we present a novel ensemble method based on a large number of CNNs processing different overlapping brain areas. Inspired by parliamentary decision-making systems, we propose a framework called AssemblyNet, made of two "assemblies" of U-Nets. Such a parliamentary system is capable of dealing with complex decisions and reaching a consensus quickly. AssemblyNet introduces sharing of knowledge among neighboring U-Nets, an "amendment" procedure made by the second assembly at higher resolution to refine the decision taken by the first one, and a final decision obtained by majority voting. When using the same 45 training images, AssemblyNet outperforms the global U-Net by 28% in terms of the Dice metric, patch-based joint label fusion by 15%, and SLANT-27 by 10%. Finally, AssemblyNet demonstrates a high capacity to deal with limited training data and achieves whole brain segmentation in practical training and testing times.
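The final majority-voting step can be illustrated with the minimal sketch below (Python/NumPy, not the actual AssemblyNet code): label maps predicted on overlapping sub-volumes are accumulated as votes and the most voted label wins at each voxel; the U-Nets, their knowledge sharing and the amendment stage are omitted.

```python
import numpy as np

def majority_vote(volume_shape, predictions, num_labels):
    # predictions: iterable of (slices, label_patch) pairs, where `slices` is a tuple
    # of slice objects locating the sub-volume and `label_patch` its predicted labels.
    votes = np.zeros(volume_shape + (num_labels,), dtype=np.int32)
    for slices, label_patch in predictions:
        votes[slices] += np.eye(num_labels, dtype=np.int32)[label_patch]  # one-hot votes
    return votes.argmax(axis=-1)  # most voted label at each voxel
```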
The hippocampus is a brain structure that is involved in several cognitive functions such as memory and learning. It is a structure of great interest in the study of the healthy and diseased brain due to its relationship to several neurodegenerative pathologies. In this work, we propose a novel patch-based method that uses an ensemble of boosted neural networks to perform hippocampus subfield segmentation on multimodal MRI. This new method minimizes both random and systematic errors through an overcomplete autocontext patch-based labeling combined with a novel boosting strategy. The proposed method works well not only on high-resolution MRI but also on standard-resolution images after super-resolution. Finally, the proposed method was compared with similar state-of-the-art methods, showing better results in terms of both accuracy and efficiency.
This paper proposes a novel method for automatic MRI denoising that exploits recent advances in deep learning feature regression and the self-similarity properties of MR images. The proposed method is a two-stage approach. In the first stage, an overcomplete patch-based convolutional neural network blindly removes the noise, without specific estimation of the local noise variance, to produce a preliminary estimate of the noise-free image. The second stage uses this preliminary denoised image as a guide image within a rotationally invariant non-local means filter to robustly denoise the original noisy image. The proposed approach has been compared with related state-of-the-art methods and showed competitive results in all the studied cases while being much faster than comparable filters. We present a denoising method that can be blindly applied to any type of MR image since it can automatically deal with both stationary and spatially varying noise patterns.
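A toy 2D sketch of the guided second stage is given below (Python/NumPy, illustrative parameters): patch similarities are measured on the guide image (here, the preliminary CNN-denoised estimate) while the averaging is applied to the noisy image; the rotational invariance, 3D processing and optimizations of the actual filter are omitted.

```python
import numpy as np

def guided_nlm(noisy, guide, patch=3, search=5, h=0.05):
    noisy = np.asarray(noisy, dtype=float)
    guide = np.asarray(guide, dtype=float)
    pr, sr = patch // 2, search // 2
    pad = pr + sr
    noisy_p = np.pad(noisy, pad, mode="reflect")
    guide_p = np.pad(guide, pad, mode="reflect")
    out = np.zeros(noisy.shape, dtype=float)
    for i in range(noisy.shape[0]):
        for j in range(noisy.shape[1]):
            ci, cj = i + pad, j + pad
            ref = guide_p[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]  # patch on the guide
            num, den = 0.0, 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = guide_p[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / (h ** 2))  # similarity weight
                    num += w * noisy_p[ni, nj]  # average the *noisy* intensities
                    den += w
            out[i, j] = num / den
    return out
```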
In this paper, we present an innovative MRI-based method for Alzheimer’s Disease (AD) detection and mild cognitive impairment (MCI) prognosis, using lifespan trajectories of brain structures. After a full screening of the most discriminant structures between AD and normal aging based on MRI volumetric analysis of 3032 subjects, we propose a novel Hippocampal-Amygdalo-Ventricular Alzheimer score (HAVAs) based on normative lifespan models and AD lifespan models. During a validation on three external datasets of 1039 subjects, our approach showed very accurate detection (AUC ≥ 94%) of patients with AD compared to control subjects and accurate discrimination (AUC = 78%) between progressive MCI and stable MCI (over a 3-year follow-up). Compared to normative modelling and recent state-of-the-art deep learning methods, our method demonstrated better classification performance. Moreover, the simplicity of HAVAs makes it fully understandable and thus well-suited for clinical practice or future ph...
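Purely as an illustration of how a volume can be scored against two lifespan models, the sketch below (Python, hypothetical) places a measured volume between a normative trajectory and an AD trajectory evaluated at the subject's age; the model functions and the combination rule are placeholders and do not correspond to the published HAVAs definition.

```python
def trajectory_score(volume, age, normative_model, ad_model):
    # normative_model / ad_model: callables returning the expected volume at a given
    # age under healthy aging and under the AD trajectory (hypothetical placeholders).
    v_norm = normative_model(age)
    v_ad = ad_model(age)
    # 0 -> lies on the normative curve, 1 -> lies on the AD curve (clamped in between).
    s = (v_norm - volume) / (v_norm - v_ad)
    return min(max(s, 0.0), 1.0)
```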
Recently, segmentation methods based on Convolutional Neural Networks (CNNs) have shown promising performance in automatic Multiple Sclerosis (MS) lesion segmentation. These techniques have even outperformed human experts in controlled evaluation conditions. However, state-of-the-art approaches trained to perform well on highly controlled datasets fail to generalize on clinical data from unseen datasets. Instead of proposing another improvement of the segmentation accuracy, we propose a novel method robust to domain shift and performing well on unseen datasets, called DeepLesionBrain (DLB). This generalization property results from three main contributions. First, DLB is based on a large ensemble of compact 3D CNNs. This ensemble strategy ensures a robust prediction despite the risk of generalization failure of some individual networks. Second, DLB includes a new image quality data augmentation to reduce dependency on training data specificity (e.g., acquisition protocol). Finally, to le...
Automatic and accurate methods to estimate normalized regional brain volumes from MRI data are valuable tools which may help to obtain an objective diagnosis and follow-up of many neurological diseases. To estimate such regional brain volumes, the Intracranial Cavity Volume (ICV) is often used for normalization. However, the high variability of brain shape and size due to normal inter-subject variability, normal changes occurring over the lifespan, and abnormal changes due to disease makes the ICV estimation problem challenging. In this paper, we present a new approach to perform ICV extraction based on the use of a library of pre-labeled brain images to capture the large variability of brain shapes. To this end, an improved non-local label fusion scheme based on the BEaST technique is proposed to increase the accuracy of the ICV estimation. The proposed method is compared with recent state-of-the-art methods and the results demonstrate an improved performance both in terms of accuracy...
Recently, segmentation methods based on Convolutional Neural Networks (CNNs) have shown promising performance in automatic Multiple Sclerosis (MS) lesion segmentation. These techniques have even outperformed human experts in controlled evaluation conditions such as the Longitudinal MS Lesion Segmentation Challenge (ISBI Challenge). However, state-of-the-art approaches trained to perform well on highly controlled datasets fail to generalize on clinical data from unseen datasets. Instead of proposing another improvement of the segmentation accuracy, we propose a novel method robust to domain shift and performing well on unseen datasets, called DeepLesionBrain (DLB). This generalization property results from three main contributions. First, DLB is based on a large group of compact 3D CNNs. This spatially distributed strategy ensures a robust prediction despite the risk of generalization failure of some individual networks. Second, DLB includes a new image quality data augmentation to reduce dependency on training data specificity (e.g., acquisition protocol). Finally, to learn a more generalizable representation of MS lesions, we propose hierarchical specialization learning (HSL). HSL is performed by pre-training a generic network over the whole brain, before using its weights as initialization for locally specialized networks. In this way, DLB learns both generic features extracted at the global image level and specific features extracted at the local image level. DLB generalization was validated in cross-dataset experiments on MSSEG'16, the ISBI challenge, and in-house datasets. During the experiments, DLB showed higher segmentation accuracy, better segmentation consistency and greater generalization performance compared to state-of-the-art methods. Therefore, DLB offers a robust framework well-suited for clinical practice.
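The hierarchical specialization idea can be sketched as follows (PyTorch-style, illustrative): a generic network pre-trained on the whole brain initializes each locally specialized network before region-wise fine-tuning; the actual DLB architecture, training procedure and data handling are not reproduced, and `fine_tune` is a hypothetical helper supplied by the caller.

```python
import copy
import torch.nn as nn

def specialize(generic_net: nn.Module, region_loaders, fine_tune):
    # generic_net: network pre-trained on whole-brain data.
    # region_loaders: one data loader per spatial region of the brain.
    # fine_tune: user-supplied training routine (hypothetical helper).
    local_nets = []
    for loader in region_loaders:
        net = copy.deepcopy(generic_net)  # start from the generic weights
        fine_tune(net, loader)            # specialize on one region
        local_nets.append(net)
    return local_nets
```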
Medical Image Computing and Computer Assisted Intervention – MICCAI 2021, 2021
Semi-supervised learning (SSL) uses unlabeled data to compensate for the scarcity of annotated images and the lack of method generalization to unseen domains, two common problems in medical segmentation tasks. In this work, we propose POPCORN, a novel method combining consistency regularization and pseudo-labeling designed for image segmentation. The proposed framework uses high-level regularization to constrain our segmentation model to use similar latent features for images with similar segmentations. POPCORN estimates a proximity graph to select data from the easiest samples to the more difficult ones, in order to ensure accurate pseudo-labeling and to limit confirmation bias. Applied to multiple sclerosis lesion segmentation, our method demonstrates competitive results compared to other state-of-the-art SSL strategies.
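The easy-to-difficult selection can be illustrated with the minimal sketch below (Python/NumPy, not the actual POPCORN implementation): unlabeled samples are ranked by the distance of their latent features to the nearest labeled sample, and the closest ones are pseudo-labeled first; the proximity graph construction and the consistency regularization are omitted.

```python
import numpy as np

def rank_unlabeled(labeled_feats, unlabeled_feats):
    # labeled_feats: (n_labeled, d) latent features of labeled images.
    # unlabeled_feats: (n_unlabeled, d) latent features of unlabeled images.
    d = np.linalg.norm(unlabeled_feats[:, None, :] - labeled_feats[None, :, :], axis=-1)
    nearest = d.min(axis=1)          # distance to the closest labeled sample
    return np.argsort(nearest)       # easiest (closest) samples first
```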
In this paper, we present a new tool for white matter lesion segmentation called lesionBrain. Our method is based on a three-stage strategy including multimodal patch-based segmentation, patch-based regularization of the probability map, and patch-based error correction using an ensemble of shallow neural networks. Its robustness and accuracy have been evaluated on the MSSEG 2016 challenge datasets. During our validation, the performance obtained by lesionBrain was competitive compared to recent deep learning methods. Moreover, lesionBrain proposes automatic lesion categorization according to location. Finally, complementary information on gray matter atrophy is included in the generated report. LesionBrain follows a software-as-a-service model with full open access.
Whole brain segmentation using deep learning (DL) is a very challenging task since the number of anatomical labels is very high compared to the number of available training images. To address this problem, previous DL methods proposed to use a single convolutional neural network (CNN) or a few independent CNNs. In this paper, we present a novel ensemble method based on a large number of CNNs processing different overlapping brain areas. Inspired by parliamentary decision-making systems, we propose a framework called AssemblyNet, made of two "assemblies" of U-Nets. Such a parliamentary system is capable of dealing with complex decisions and unseen problems, and of reaching a consensus quickly. AssemblyNet introduces sharing of knowledge among neighboring U-Nets, an "amendment" procedure made by the second assembly at higher resolution to refine the decision taken by the first one, and a final decision obtained by majority voting. During our validation, AssemblyNet showed competitive performance compared to state-of-the-art methods such as U-Net, joint label fusion and SLANT. Moreover, we investigated the scan-rescan consistency and the robustness to disease effects of our method. These experiments demonstrated the reliability of AssemblyNet. Finally, we showed the benefit of using semi-supervised learning to improve the performance of our method.
Affine registration of one or several brain image(s) onto a common reference space is a necessary prerequisite for many image processing tasks, such as brain segmentation or functional analysis. Manual assessment of registration quality is a tedious and time-consuming task, especially in studies comprising a large amount of data. Automated and reliable quality control (QC) therefore becomes mandatory. Moreover, the computation time of the QC must also be compatible with the processing of massive datasets. Therefore, automated deep neural network approaches have emerged as a method of choice to automatically assess registration quality. In the current study, a compact 3D convolutional neural network, referred to as RegQCNET, is introduced to quantitatively predict the amplitude of an affine registration mismatch between a registered image and a reference template. This quantitative estimation of registration error is expressed using the metric unit system. Therefore, a meaningful task-specific...
Objective: The cerebellum is involved in cognitive processing and emotion control. Cerebellar alterations could explain symptoms of schizophrenia spectrum disorder (SZ) and bipolar disorder (BD). In addition, the literature suggests that lithium might influence cerebellar anatomy. Our aim was to study cerebellar anatomy in SZ and BD, and to investigate the effect of lithium. Methods: Participants from 7 centers worldwide underwent a 3T MRI. We included 182 patients with SZ, 144 patients with BD, and 322 controls. We automatically segmented the cerebellum using the CERES pipeline. All outputs were visually inspected. Results: Patients with SZ showed a smaller global cerebellar gray matter volume compared to controls, with most of the changes located in the cognitive part of the cerebellum (Crus II and lobule VIIb). This decrease was present in the subgroup of patients with recent-onset SZ. We did not find any alterations in the cerebellum in patients with BD. However, patients medicated with lithiu...
Numerous studies have proposed biomarkers based on magnetic resonance imaging (MRI) to detect and predict the risk of evolution toward Alzheimer’s disease (AD). While anatomical MRI captures structural alterations, studies demonstrated the ability of diffusion MRI to capture microstructural modifications at an earlier stage. Several methods have focused on hippocampus structure to detect AD. To date, the patch-based grading framework provides the best biomarker based on the hippocampus. However, this structure is complex since the hippocampus is divided into several heterogeneous subfields not equally impacted by AD. Former in-vivo imaging studies only investigated structural alterations of these subfields using volumetric measurements and microstructural modifications with mean diffusivity measurements. The aim of our work is to study the efficiency of hippocampal subfields compared to the whole hippocampus structure with a multimodal patch-based framework that enables to c...
Whether hippocampal subfields are differentially vulnerable at the earliest stages of multiple sclerosis (MS) and how this impacts memory performance is a current topic of debate. We prospectively included 56 persons with clinically isolated syndrome (CIS) suggestive of MS in a 1-year longitudinal study, together with 55 matched healthy controls at baseline. Participants were tested for memory performance and scanned with 3 T MRI to assess the volume of 5 distinct hippocampal subfields using automatic segmentation techniques. At baseline, CA4/dentate gyrus was the only hippocampal subfield with a volume significantly smaller than controls (p < .01). After one year, CA4/dentate gyrus atrophy worsened (-6.4%, p < .0001) and significant CA1 atrophy appeared (both in the stratum-pyramidale and the stratum radiatum-lacunosum-moleculare, -5.6%, p < .001 and -6.2%, p < .01, respectively). CA4/dentate gyrus volume at baseline predicted CA1 volume one year after CIS (R2 = 0.44 t...
The human cerebellum is involved in language, motor tasks and cognitive processes such as attention or emotional processing. Therefore, an automatic and accurate segmentation method is highly desirable to measure and understand the cerebellum's role in normal and pathological brain development. In this work, we propose a patch-based multi-atlas segmentation tool called CERES (CEREbellum Segmentation) that is able to automatically parcellate the cerebellum lobules. The proposed method works with standard-resolution magnetic resonance T1-weighted images and uses the Optimized PatchMatch algorithm to speed up the patch matching process. The proposed method was compared with related recent state-of-the-art methods, showing competitive results in both accuracy (average Dice of 0.7729) and execution time (around 5 minutes).
In structural and functional MRI studies, there is a need for robust and accurate automatic segmentation of various brain structures. We present a comparison study of three automatic segmentation methods based on the new T1-weighted MR sequence called MP2RAGE, which has superior soft tissue contrast. Automatic segmentations of the thalamus and hippocampus are compared to manual segmentations. In addition, we qualitatively evaluate the segmentations when warped to co-registered maps of the fractional anisotropy (FA) of water diffusion. Compared to manual segmentation, the best results were obtained with a patch-based segmentation method (volBrain) using a library of images from the same scanner (local), followed by volBrain using an external library (external), FSL and Freesurfer. The qualitative evaluation showed that volBrain local and volBrain external produced almost no segmentation errors when overlaid on FA maps, while both FSL and Freesurfer segmentations were found to overlap with white matter tracts. These results underline the importance of applying accurate and robust segmentation methods and demonstrate the superiority of patch-based methods over more conventional methods.
Automatic segmentation methods are important tools for the quantitative analysis of Magnetic Resonance Images (MRI). Recently, patch-based label fusion approaches have demonstrated state-of-the-art segmentation accuracy. In this paper, we introduce a new patch-based label fusion framework to perform segmentation of anatomical structures. The proposed approach uses an Optimized PAtchMatch Label fusion (OPAL) strategy that drastically reduces the computation time required for the search of similar patches. The reduced computation time of OPAL opens the way for new strategies and facilitates processing on large databases. In this paper, we investigate new perspectives offered by OPAL by introducing a new multi-scale and multi-feature framework. During our validation on hippocampus segmentation, we use two datasets: young
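As a generic illustration of patch-based label fusion (not OPAL itself), the sketch below (Python/NumPy) weights each candidate label by the intensity similarity between the target patch and the corresponding template patch, then takes a weighted vote; the optimized PatchMatch search, multi-scale and multi-feature components are omitted, and `h` is an illustrative smoothing parameter.

```python
import numpy as np

def fuse_label(target_patch, template_patches, template_labels, h=0.1):
    # template_patches: candidate patches from the template library.
    # template_labels: label of the central voxel of each candidate patch.
    w = np.array([np.exp(-np.mean((target_patch - p) ** 2) / h**2)
                  for p in template_patches])
    labels = np.asarray(template_labels)
    candidates = np.unique(labels)
    scores = np.array([w[labels == c].sum() for c in candidates])  # weighted vote
    return candidates[scores.argmax()]
```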