Clustering and Shifting of Regional Appearance for
Deformable Model Segmentation

Medical Image Display & Analysis Group

Abstract: Automated medical image segmentation is a challenging task that benefits from effective image appearance models. An appearance model describes the grey-level intensity information relative to the object being segmented. Previous models that compare the target against a single template image, or that assume correspondence only at a very small scale, fail to capture the variability seen across target cases. In this dissertation I present novel appearance models that address these deficiencies, and I show their efficacy in segmentation via deformable models.
    The models developed here use clustering and shifting of object-relative appearance to capture the true variability in image appearance; all of them learn their parameters from training sets of previously segmented images. The first model clusters cross-boundary intensity profiles from the training set to determine profile types, and then builds a template of optimal types that reflects the various edge characteristics seen around the boundary (an illustrative sketch of this clustering step appears after the abstract). The second model clusters local regional image descriptors to determine large-scale regions relative to the boundary; it then partitions the object boundary according to region type and captures the intensity variability per region type. The third and fourth models allow the image model to shift along the boundary, reflecting the variable regional conformations seen in training.
    I evaluate the appearance models by measuring their efficacy in segmentation of the kidney, bladder, and prostate in abdominal and male pelvis CT. I compare the automatic segmentations produced with these models against expert manual segmentations of the target cases and against automatic segmentations produced with previous models (a sketch of one common comparison metric also follows the abstract).
    (Advisor: Stephen M. Pizer)
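
To make the profile-clustering idea from the first model concrete, here is a minimal sketch, not the dissertation's implementation: it assumes intensity profiles have already been sampled along boundary normals in each training case, and the function name, array shapes, and number of clusters are all illustrative. K-means groups the profiles into types, and the modal type at each boundary point forms a simple per-point template of profile types.

```python
# Minimal sketch of profile-type clustering (illustrative only; the
# dissertation's actual sampling and clustering details may differ).
import numpy as np
from sklearn.cluster import KMeans

def cluster_profiles(profiles, n_types=5, random_state=0):
    """Cluster cross-boundary intensity profiles into profile types.

    profiles: array of shape (n_cases, n_points, n_samples) holding one 1-D
              intensity profile per boundary point per training case.
    Returns the fitted KMeans model and, for each boundary point, the
    profile type seen most often across the training cases.
    """
    n_cases, n_points, n_samples = profiles.shape
    flat = profiles.reshape(-1, n_samples)               # one row per profile
    km = KMeans(n_clusters=n_types, n_init=10, random_state=random_state)
    labels = km.fit_predict(flat).reshape(n_cases, n_points)

    # Template of per-point "optimal" types: the modal cluster label.
    template = np.array([np.bincount(labels[:, p], minlength=n_types).argmax()
                         for p in range(n_points)])
    return km, template
```

In a deformable-model setting, such a template would indicate which learned edge character to expect at each boundary point during fitting, rather than assuming a single edge model everywhere.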
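The abstract does not name the metric used to compare automatic and expert manual segmentations, so as one common possibility (an assumption, not the dissertation's stated protocol) the sketch below scores the two with the Dice volume overlap on binary voxel masks.

```python
# Hedged example: Dice overlap between two binary voxel masks; the metric
# choice is an assumption, as the abstract does not specify one.
import numpy as np

def dice_coefficient(auto_mask, manual_mask):
    """Dice overlap 2|A & B| / (|A| + |B|) between binary voxel masks."""
    a = np.asarray(auto_mask, dtype=bool)
    b = np.asarray(manual_mask, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A value of 1 indicates perfect voxel-wise agreement and 0 indicates no overlap, so higher Dice scores for the new models over previous ones would support the evaluation claim above.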

The PDF of the dissertation.

A PPT of the defense talk.