Programming and experiments by Anurag Vaidya '21
Abstract: Magnetic Resonance Imaging (MRI) encompasses a set of powerful imaging
techniques for understanding brain structure and diagnosing pathology. Various MRI
sequences, including T1- and T2-weighted, provide rich, complementary information. However,
significant equipment costs and long acquisition times have inhibited uptake of this
critical technology, adversely impacting health equity globally. To reduce the costs
associated with brain MRI, we present pTransGAN, a generative adversarial
network (GAN) capable of translating both healthy and unhealthy T1 scans into T2 scans,
thereby obviating T2 acquisition. Extending prior GAN-based image translation, we show
that adding non-adversarial perceptual losses, such as style and content losses,
improves the translations: the generated images are sharper and the model is more
robust. Additionally, previous studies have trained separate models for healthy and
unhealthy brain MRI. Here, we therefore also present a novel simultaneous training
protocol that allows pTransGAN to train concurrently on healthy and unhealthy data
sampled from two open brain MRI datasets. As measured by
novel metrics that closely match perceptual similarity of human observers, our
simultaneously trained pTransGAN model outperforms the models individually
trained on just healthy or unhealthy data. These encouraging results should be further
validated with independent paired and unpaired clinical datasets.
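The abstract does not specify how the style and content losses are computed, but such perceptual losses are commonly defined over feature maps: content loss as a per-element feature difference, and style loss as a difference between Gram matrices of the features. The sketch below is a minimal NumPy illustration under those common definitions; the function names and the choice of L1/L2 distances are assumptions, not details taken from the paper.

```python
import numpy as np

def gram_matrix(feats):
    """Gram matrix of a (C, H, W) feature map.

    Captures channel co-activation statistics, which style losses
    typically compare between generated and target images.
    """
    c, h, w = feats.shape
    f = feats.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)

def perceptual_losses(feats_gen, feats_real):
    """Content and style losses between two (C, H, W) feature maps.

    Distances (L1 for content, squared L2 for style) are illustrative
    choices; the paper may weight or define these differently.
    """
    # Content loss: element-wise feature difference (preserves structure).
    content = float(np.mean(np.abs(feats_gen - feats_real)))
    # Style loss: Gram-matrix difference (matches texture statistics).
    style = float(np.mean((gram_matrix(feats_gen) - gram_matrix(feats_real)) ** 2))
    return content, style
```

In a full GAN these terms would be added, with tuning weights, to the adversarial generator loss; identical feature maps give zero for both losses.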
Paper (pdf, preprint)
Poster (pdf)
This work is based on Anurag's Bucknell Honors Thesis.
Anurag was awarded the Miller Prize for best Honors Thesis in 2021.