Abstract
Magnetic Resonance Imaging (MRI) is a crucial tool in clinical
diagnostics, with T1-weighted (T1) and T2-weighted (T2) sequences providing complementary tissue contrast. Acquiring
high-quality T2-weighted MRI, especially for infant brains, presents
challenges due to lengthy acquisition times, motion artifacts, and
scanner variability. This study introduces the Adaptive Dual Domain
U-Net, a novel 3D U-Net architecture enhanced with dynamic channel
alignment for synthesizing T2-weighted MRI from T1-weighted inputs.
The proposed model addresses domain variability, integrates
explainability through Captum-based attribution tools, and employs
patch-based training for efficient memory utilization and high-resolution reconstruction.
Quantitative evaluations on the iSeg-2019 dataset demonstrate
superior performance over baseline methods on key metrics, including
Mean Squared Error (MSE), Structural Similarity Index (SSIM), and R².
Qualitative results highlight the model’s ability to generate
structurally accurate and clinically interpretable synthetic
T2-weighted images, making it a robust tool for both clinical and research applications.
Authors
Param Ahir¹, Mehul Parikh²
¹Gujarat Technological University, India; ²L. D. College of Engineering, India
Keywords
Magnetic Resonance Imaging, Deep Learning, Medical Imaging, Cross-Modality MRI Synthesis, Infant Brain MRI, 3D U-Net