Abstract:
Super-resolution is a technique that improves the quality of an image. Because super-resolution images have a higher density of pixels, they contain more detail, which is valuable in clinical diagnosis, astronomy, and biometry. Medical images should be of high resolution so that fine-grained details can be perceived and tumours diagnosed at an early stage. If the images have low resolution, doctors cannot diagnose tumours or segment malignant regions. Acquiring high-resolution scans directly also carries costs, both in equipment and in patient health: for patients at high risk of cancerous cells, continuous radiation exposure can activate these cells. The goal is therefore to enhance low-quality scans computationally. This thesis emphasises single image super-resolution (SISR). Earlier techniques for super-resolution were based on interpolation and reconstruction; however, these techniques lose image information such as edges and boundaries. More recently, deep learning has been introduced to extract enhancement-relevant features from related images automatically. For this purpose, Convolutional Neural Networks (CNNs) are employed owing to their excellent results in similar domains, and these CNN-based approaches have achieved encouraging results. Transfer learning has been used successfully for image recognition and human action detection. This research work first examines in depth the representations learned by models incorporating transfer learning and the fusion of different models. EDSR and WDSR are the winners of the NTIRE challenges for super-resolution of natural images. These models, together with SRGAN, are evaluated with and without transfer learning. The proposed approach unleashes the integrated potential of these three models in the medical domain, achieving higher PSNR and SSIM than contemporary models.
With transfer learning, EDSR, WDSR, and SRGAN achieved PSNR values of 40.38, 39.02, and 38.15 and SSIM values of 0.964, 0.963, and 0.953, respectively. The fusion of these three models yields a PSNR of 37.84 and an SSIM of 0.964. The results show that the proposed approaches outperform state-of-the-art methods by learning super-resolution features on the NAMIC dataset, and they highlight the importance of transfer learning.
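For reference, the PSNR figures above follow the standard definition, 10 log10(MAX^2 / MSE), where MSE is the mean squared error between the reference and reconstructed images and MAX is the peak pixel intensity. A minimal sketch in pure Python (the function name and the flat-list image representation are illustrative, not taken from the thesis):

```python
import math

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio (in dB) between two equally sized
    images, given as flat sequences of pixel intensities in [0, max_val]."""
    # Mean squared error between corresponding pixels
    mse = sum((r - s) ** 2 for r, s in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        # Identical images: PSNR is unbounded
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

For example, two 8-bit images differing by one intensity level at every pixel give a PSNR of about 48.13 dB; higher PSNR indicates a reconstruction closer to the reference.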