An algorithm for learning shape and appearance models without annotations

Med Image Anal. 2019 Jul;55:197-215. doi: 10.1016/j.media.2019.04.008. Epub 2019 Apr 30.

Abstract

This paper presents a framework for automatically learning shape and appearance models for medical (and certain other) images. The algorithm was developed with the aim of eventually enabling distributed privacy-preserving analysis of brain image data, such that shared information (shape and appearance basis functions) may be passed across sites, whereas latent variables that encode individual images remain secure within each site. These latent variables are proposed as features for privacy-preserving data mining applications. The approach is demonstrated qualitatively on the KDEF dataset of 2D face images, showing that it can align images that traditionally require shape and appearance models trained using manually annotated data (manually defined landmarks etc.). It is applied to the MNIST dataset of handwritten digits to show its potential for machine learning applications, particularly when training data is limited. The model is able to handle "missing data", which allows it to be cross-validated according to how well it can predict left-out voxels. The suitability of the derived features for classifying individuals into patient groups was assessed by applying it to a dataset of over 1900 segmented T1-weighted MR images, which included images from the COBRE and ABIDE datasets.
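The abstract's key idea is a split between shared model parameters (shape and appearance basis functions, exchangeable across sites) and private per-image latent variables (kept within each site). The paper's actual model is built on diffeomorphisms and geodesic shooting, but as a loose analogy only, a simple linear appearance model illustrates this shared-basis/private-code split; all names and dimensions below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic "images": 50 samples of 8x8 = 64 voxels,
# generated from 3 assumed ground-truth basis functions plus noise.
true_basis = rng.normal(size=(3, 64))
codes = rng.normal(size=(50, 3))
images = codes @ true_basis + 0.01 * rng.normal(size=(50, 64))

# Shared information: mean image and K basis functions (here via SVD).
# In a distributed setting, only these would pass across sites.
mean = images.mean(axis=0)
U, s, Vt = np.linalg.svd(images - mean, full_matrices=False)
K = 3
basis = Vt[:K]                       # shared basis functions, shape (K, 64)

# Private information: one latent code per image, retained at each site.
latents = (images - mean) @ basis.T  # latent variables, shape (50, K)

# Shared basis + private codes together approximate each image.
recon = mean + latents @ basis
err = np.max(np.abs(recon - images))
```

The point of the sketch is the information flow, not the model class: `basis` plays the role of the shared basis functions, while `latents` plays the role of the secure per-image features proposed for privacy-preserving data mining.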

Keywords: Appearance model; Diffeomorphisms; Geodesic shooting; Latent variables; Machine learning; Shape model.

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Brain / diagnostic imaging*
  • Face / diagnostic imaging*
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Imaging, Three-Dimensional / methods
  • Machine Learning*
  • Magnetic Resonance Imaging / methods*