Transfer learning is a trending concept in computer vision based on the transfer of knowledge between a source and a target domain. In the distant domain problem addressed in this paper, the source and target domains are totally unrelated but share similar visual structures, thereby infusing explainability into transfer learning. We specifically focus on the pulmonary nodule detection problem, in which the task is to distinguish image patches that contain lung nodules from those that do not. The Gestalt principle of similarity states that the human mind tends to group visually similar structures together based on object attributes such as shape and color. This is the central idea behind our work, which trains a deep convolutional neural network on commonly found natural scene images containing visual structures similar in appearance to cropped images of pulmonary nodules from computed tomography (CT) scans, for the purpose of cancer diagnosis. Though some structural differences may exist, these are imperceptible to the human eye, since it focuses on "what it wants to see." Our transfer learning module comprises a deep convolutional autoencoder (CAE) that is pre-trained on a source domain consisting of a small and selective subset of only two objects, flowers and rivers, chosen by a vote of human annotators as visually correlating with images of lung nodules and non-nodules, respectively. Our work thus presents a mechanism to make the learning process both human-interactive and explainable. Deep tuning our network on images from the benchmark Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI) database yields higher classification scores than the state of the art.
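The two-stage pipeline described in the abstract — pre-training a convolutional autoencoder on natural scene images (flowers as nodule-like, rivers as non-nodule-like), then reusing its encoder for nodule/non-nodule classification on CT patches — can be sketched roughly as below. The layer sizes, 64×64 patch size, and class head are illustrative assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

# Stage 1: convolutional autoencoder (CAE), to be pre-trained on natural
# scene images from the source domain (flowers ~ nodules, rivers ~ non-nodules).
class CAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stage 2: transfer the pre-trained encoder and "deep tune" it, together
# with a classifier head, on LIDC/IDRI CT patches (nodule vs. non-nodule).
class NoduleClassifier(nn.Module):
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder                # weights initialized from the CAE
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),       # two classes: nodule / non-nodule
        )

    def forward(self, x):
        return self.head(self.encoder(x))

cae = CAE()
# ... pre-train `cae` with a reconstruction (e.g. MSE) loss on flower/river images ...
clf = NoduleClassifier(cae.encoder)
logits = clf(torch.randn(4, 1, 64, 64))       # a batch of 4 grayscale CT patches
print(logits.shape)                           # torch.Size([4, 2])
```

The key design point conveyed by the abstract is that the encoder's initialization comes from reconstructing visually similar natural images rather than medical data, which is what makes the transfer both "distant-domain" and explainable to a human observer.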