A Fast and Lightweight Network for Road Lane Detection Using the MobileNet Architecture and Different Cost Functions
Subject area: Electrical and Computer Engineering
Pejman Goudarzi 1, Milad Heydari 2, Mehdi Hosseinpour 3
1 - Information Technology Research Faculty, ICT Research Institute
2 - Faculty of Information Technology and Computer Engineering, Azarbaijan Shahid Madani University
3 - Information Technology Research Faculty, ICT Research Institute
Keywords: lane detection, self-driving car, MobileNet, deep learning
Abstract:
A lane detection system in self-driving cars can be used to estimate the vehicle's position relative to other vehicles and to assess the risk of lane departure or even a collision. This paper presents a fast and lightweight lane detection approach for images captured by a camera mounted behind the vehicle's windshield. Most existing methods treat lane detection as pixel-level classification. Despite their high detection accuracy, these methods suffer from two weaknesses: high computational complexity and a lack of attention to the global contextual information that is characteristic of lanes (as a result, they fail to detect lanes that are occluded by obstacles). The proposed method instead uses row-based selection, checking for the presence of lane markings in each row. In addition, adopting the MobileNet architecture yields good results with fewer learnable parameters, and using three different functions as cost functions, each with a different objective, produces excellent results by taking the global contextual information characteristic of lanes into account alongside local information. Experiments on the TuSimple video image dataset show that the proposed approach performs well in terms of accuracy and, especially, in terms of speed.
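To make the row-based formulation concrete, the sketch below outlines one possible implementation in PyTorch: a MobileNet backbone (torchvision's mobilenet_v3_small is used here as a stand-in for the paper's MobileNet variant) feeds a head that, for every row anchor and lane, classifies which horizontal grid cell contains the lane marking, and the training objective combines a per-row classification loss with two structural regularizers. The grid sizes (num_cells, num_rows, num_lanes), the choice of similarity and shape terms, and the weights w_sim and w_shape are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal, illustrative sketch of a row-anchor lane detector on a MobileNet
# backbone. NOT the authors' exact network; sizes and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v3_small


class RowAnchorLaneNet(nn.Module):
    def __init__(self, num_cells=100, num_rows=56, num_lanes=4):
        super().__init__()
        self.num_cells, self.num_rows, self.num_lanes = num_cells, num_rows, num_lanes
        # Lightweight backbone: MobileNetV3-Small feature extractor (few parameters).
        self.backbone = mobilenet_v3_small(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Head: for every (row anchor, lane) pair, predict one of num_cells + 1 classes;
        # the extra class means "no lane marking in this row".
        self.head = nn.Linear(576, (num_cells + 1) * num_rows * num_lanes)

    def forward(self, x):
        feat = self.pool(self.backbone(x)).flatten(1)
        logits = self.head(feat)
        # Shape: (batch, num_cells + 1, num_rows, num_lanes)
        return logits.view(-1, self.num_cells + 1, self.num_rows, self.num_lanes)


def total_loss(logits, target, w_sim=0.5, w_shape=0.5):
    """Combine three objectives: a local per-row classification loss plus two
    global structural terms (assumed here; the paper's exact terms may differ)."""
    # 1) Row-wise classification loss (local information).
    cls = F.cross_entropy(logits, target)
    probs = F.softmax(logits[:, :-1, :, :], dim=1)            # drop the "no lane" class
    # 2) Similarity loss: adjacent row anchors should give similar distributions,
    #    encouraging lane continuity (global structure).
    sim = (probs[:, :, 1:, :] - probs[:, :, :-1, :]).abs().mean()
    # 3) Shape loss: the expected cell index should vary smoothly across rows
    #    (penalizes second-order differences, favouring locally straight lanes).
    idx = torch.arange(probs.size(1), device=logits.device).view(1, -1, 1, 1).float()
    loc = (probs * idx).sum(dim=1)                             # expected column per row/lane
    shape = (loc[:, 2:, :] - 2 * loc[:, 1:-1, :] + loc[:, :-2, :]).abs().mean()
    return cls + w_sim * sim + w_shape * shape
```

With TuSimple-style annotations, target would hold, for each of the 56 row anchors and up to 4 lanes, the index of the occupied grid cell (or num_cells for "no lane"); since the head only classifies rows rather than every pixel, inference reduces to a single global pooling and one linear layer on top of the backbone, which is what makes this formulation fast.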
[1] S. T. Ying, T. S. Jeng, and V. Chan, "HSI color model based lane-marking detection," in Proc. IEEE Intelligent Transportation Systems Conf., pp. 1168-1172, Toronto, Canada, 17-20 Sept. 2006.
[2] H. Y. Cheng, B. S. Jeng, P. T. Tseng, and K. C. Fan, "Lane detection with moving vehicles in the traffic scenes," IEEE Trans. on Intelligent Transportation Systems, vol. 7, no. 4, pp. 571-582, Dec. 2006.
[3] M. A. Sotelo, F. J. Rodriguez, L. Magdalena, L. M. Bergasa, and L. Boquete, "A color vision-based lane tracking system for autonomous driving on unmarked roads," Autonomous Robots, vol. 16, pp. 95-116, 2004.
[4] P. C. Wu, C. Y. Chang, and C. H. J. P. R. Lin, "Lane-mark extraction for automobiles under complex conditions," Pattern Recognition, vol. 47, no. 8, pp. 2756-2767, Aug. 2014.
[5] C. Mu and X. Ma, "Lane detection based on object segmentation and piecewise fitting," Indonesian J. of Electrical Engineering and Computer Science, vol. 12, no. 5, pp. 3491-3500, May 2014.
[6] J. Niu, J. Lu, M. Xu, P. Lv, and X. Zhao, "Robust lane detection using two-stage feature extraction with curve fitting," Pattern Recognition, vol. 59, pp. 225-233, Nov. 2016.
[7] J. C. McCall and M. M. Trivedi, "Video-based lane estimation and tracking for driver assistance: survey, system, and evaluation," IEEE Trans. on Intelligent Transportation Systems, vol. 7, no. 1, pp. 20-37, Mar. 2006.
[8] P. Y. Hsiao, C. W. Yeh, S. S. Huang, and L. C. Fu, "A portable vision-based real-time lane departure warning system: day and night," IEEE Trans. on Vehicular Technology, vol. 58, no. 4, pp. 2089-2094, May 2008.
[9] M. Fu, X. Wang, H. Ma, Y. Yang, and M. Wang, "Multi-lanes detection based on panoramic camera," in Proc. 11th IEEE Int. Conf. on Control & Automation, ICCA'14, pp. 655-660, Taichung, Taiwan, 18-20 Jun. 2014.
[10] J. G. Kim, J. H. Yoo, and J. C. Koo, "Road and lane detection using stereo camera," in Proc. IEEE Int. Conf. on Big Data and Smart Computing, BigComp'18, pp. 649-652, Shanghai, China, 15-17 Jan. 2018.
[11] S. P. Narote, P. N. Bhujbal, A. S. Narote, and D. M. Dhane, "A review of recent advances in lane detection and departure warning system," Pattern Recognition, vol. 73, pp. 216-234, Jan. 2018.
[12] J. Tang, S. Li, and P. Liu, "A review of lane detection methods based on deep learning," Pattern Recognition, vol. 111, Article ID: 107623, Mar. 2021.
[13] X. Wu, D. Sahoo, and S. C. Hoi, "Recent advances in deep learning for object detection," Neurocomputing, vol. 396, pp. 39-64, 5 Jul. 2020.
[14] Z. Q. Zhao, P. Zheng, S. T. Xu, and X. Wu, "Object detection with deep learning: a review," IEEE Trans. on Neural Networks and Learning Systems, vol. 30, no. 11, pp. 3212-3232, Nov. 2019.
[15] D. Neven, B. D. Brabandere, S. Georgoulis, M. Proesmans, and L. V. Gool, "Towards end-to-end lane detection: an instance segmentation approach," in Proc. IEEE Intelligent Vehicles Symp. (IV), pp. 286-291, Changshu, China, 26-30 Jun. 2018.
[16] M. Ghafoorian, C. Nugteren, N. Baka, O. Booij, and M. Hofmann, "EL-GAN: embedding loss driven generative adversarial networks for lane detection," in Proc. of the European Conf. on Computer Vision Workshops, 14 pp., 2018.
[17] L. C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, "DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834-848, Apr. 2017.
[18] Z. Qin, H. Wang, and X. Li, "Ultra fast structure-aware deep lane detection," in Proc. of the Computer Vision-ECCV, pp. 276-291, 2020.
[19] X. Pan, J. Shi, P. Luo, X. Wang, and X. Tang, "Spatial as deep: spatial CNN for traffic scene understanding," in Proc. of the AAAI Conf. on Artificial Intelligence, pp. 7276-7283, New Orleans, LA, USA, 2-7 Feb. 2018.
[20] Y. Hou, Z. Ma, C. Liu, and C. C. Loy, "Learning lightweight lane detection CNNs by self attention distillation," in Proc. of the IEEE/CVF Int. Conf. on Computer Vision, pp. 1013-1021, Seoul, Korea (South), 27 Oct.-2 Nov. 2019.
[21] J. Kim and M. Lee, "Robust lane detection based on convolutional neural network and random sample consensus," in Proc. of the Int. Conf. on Neural Information Processing, pp. 454-461, Montreal, Canada, 8-13 Dec. 2014.
[22] A. Howard, et al., "Searching for mobilenetv3," in Proc. of the IEEE/CVF Int. Conf. on Computer Vision, pp. 1314-1324, Seoul, Korea (South), 27 Oct.-2 Nov. 2019.
[23] A. G. Howard, et al., "Mobilenets: efficient convolutional neural networks for mobile vision applications," arXiv preprint arXiv:1704.04861, 2017.
[24] TuSimple, Lane Detection Benchmark, 2017. Available: https://github.com/TuSimple/tusimple-benchmark
[25] D. P. Kingma and J. Ba, "Adam: a method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[26] I. Loshchilov and F. Hutter, "SGDR: stochastic gradient descent with warm restarts," arXiv preprint arXiv:1608.03983, 2016.