Comparison of the Faster RCNN and RetinaNet Deep Networks for Vehicle Detection in Adverse Weather
Subject area: Electrical and Computer Engineering
Yaser Jamshidi 1, Razieh Sadat Okhovat 2
1 - Faculty of Engineering, University of Science and Culture, Iran
2 - Faculty of Engineering, University of Science and Culture, Iran
Keywords: object detection, vehicle detection, deep learning, intelligent transportation systems, image processing in adverse weather
Abstract:
Vehicle detection and tracking play an important role in self-driving cars and intelligent transportation systems. Adverse weather conditions such as heavy snow, fog, rain, and dust reduce camera visibility, creating dangerous limitations and degrading the performance of the detection algorithms used in traffic monitoring systems and autonomous driving applications. In this paper, the Faster RCNN deep object detection network with a ResNet50 backbone and the RetinaNet network are applied, and the accuracy of these two networks for vehicle detection in adverse weather is evaluated. The dataset used is DAWN, which contains real-world images collected under various types of adverse weather conditions. The results show that, in the best case, the presented method increases detection accuracy from 0.2% to 75%, with the largest accuracy gain occurring in rainy conditions. All processing was carried out in Python on Google Colab.
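Detection accuracy in comparisons like this is conventionally measured by matching predicted boxes to ground-truth boxes via intersection-over-union (IoU). The paper does not publish its evaluation code, so the following is only a minimal illustrative sketch of the standard IoU criterion; the box coordinates and the 0.5 threshold are hypothetical examples, not values taken from the paper.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A prediction is typically counted as a correct detection when its IoU
# with a ground-truth box exceeds a threshold (0.5 is a common convention).
pred = (10, 10, 50, 50)   # hypothetical predicted box
gt = (12, 12, 48, 52)     # hypothetical ground-truth box
print(iou(pred, gt) > 0.5)  # True for this strongly overlapping pair
```

Per-class precision at a fixed IoU threshold, averaged over classes and thresholds, is what underlies mean average precision (mAP), the usual headline metric for detectors such as Faster RCNN and RetinaNet.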