Diagnosis of Gastric Cancer via Classification of Tongue Images Using Deep Convolutional Networks
Subject Area: Image Processing
Elham Gholami 1, Seyed Reza Kamel Tabbakh 2, Maryam Khairabadi 3
1 - Department of Computer Engineering, Neyshabur Branch, Islamic Azad University, Neyshabur, Iran
2 - Department of Computer Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran
3 - Department of Computer Engineering, Neyshabur Branch, Islamic Azad University, Neyshabur, Iran
Keywords: Gastric Cancer, Deep Convolutional Networks, Image Classification, Fine-grained Recognition
Abstract:
Gastric cancer is the second most common cancer worldwide and is responsible for a large number of deaths. A major challenge with this disease is the lack of early and accurate detection: in clinical practice, gastric cancer is diagnosed through numerous tests and imaging procedures, which are costly and time-consuming, so physicians are seeking a cost-effective and time-efficient alternative. One such alternative is traditional Chinese medicine, in which diseases are diagnosed by observing changes in the tongue; the appearance and color of different regions of the tongue are among its key diagnostic signs. In this study, we present a method that localizes the tongue surface regardless of the subject's pose in the image. If the facial components, especially the mouth, are localized correctly, the components that are most discriminative across the dataset can be used, which is favorable in terms of time and space complexity. Moreover, given an accurate estimate of these components, the most informative features can be extracted from them, allowing the best possible accuracy to be achieved. Feature extraction in this study is performed with deep convolutional neural networks, and the proposed model is then trained with the random forest algorithm and evaluated against standard criteria. Experimental results show an average classification accuracy of approximately 73.78%, demonstrating the superiority of the proposed method over competing methods.
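The abstract outlines a three-stage pipeline: mouth/tongue localization, CNN feature extraction, and random forest classification. Below is a minimal illustrative sketch of such a pipeline; since the abstract does not specify the exact components, everything here is an assumption: OpenCV's Haar face detector with a lower-face heuristic stands in for the paper's localization step, torchvision's ResNet-18 stands in for the paper's feature extractor, and the file names, labels, and hyperparameters are placeholders, not the authors' configuration.

```python
# Sketch only: Haar face detection + lower-face crop approximates tongue
# localization; a pretrained ResNet-18 provides features; scikit-learn's
# random forest performs the final classification.
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_tongue_region(path):
    """Crude localization: detect the largest face and keep its lower
    third, where a protruded tongue appears. Returns an RGB crop."""
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError(f"no face detected in {path}")
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    crop = img[y + 2 * h // 3 : y + h, x : x + w]
    return cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)

# Pretrained backbone with its classification head removed -> 512-d features.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(paths):
    """Return an (N, 512) array of CNN features for the localized crops."""
    feats = []
    with torch.no_grad():
        for p in paths:
            crop = preprocess(crop_tongue_region(p)).unsqueeze(0)
            feats.append(backbone(crop).squeeze(0).numpy())
    return np.stack(feats)

# Hypothetical file names and labels; replace with the real tongue dataset.
image_paths = ["subject_001.jpg", "subject_002.jpg"]
labels = np.array([0, 1])  # e.g. 0 = healthy, 1 = gastric cancer (illustrative)

X = extract_features(image_paths)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

Using a fixed CNN purely as a feature extractor and training only the random forest keeps the method cheap on small medical datasets, which matches the abstract's emphasis on time and space efficiency.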