Multi-task environment perception algorithm for autonomous driving
Affiliation:

1. School of Computer and Information Engineering, Shanghai Polytechnic University, Shanghai 201209, China; 2. School of Intelligent Manufacturing and Control Engineering, Shanghai Polytechnic University, Shanghai 201209, China

CLC Number: TP183


Abstract:

    To address the low object detection accuracy in complex driving scenarios, which falls short of the requirements of autonomous driving, an efficient network model, MEPNet, based on YOLOP is proposed. MEPNet handles three tasks simultaneously: vehicle detection, drivable area segmentation, and lane detection. First, YOLOv7 is used as the main structure to balance accuracy and real-time performance. Second, the FRFB module is designed to enlarge receptive fields and enhance the feature extraction capability of the network, and a small object detection layer added to the head of the detection network effectively alleviates interference caused by vehicle occlusion and overlap. Finally, CARAFE is used as the upsampling operator to accurately locate object contours while preserving semantic information in images. Experimental results show that the algorithm achieves an inference speed of 42.5 fps; compared with the baseline YOLOP, it improves the mAP50 and Recall of vehicle detection by 6.8% and 6.3%, and the accuracy and IoU of lane detection by 6% and 1%, respectively, while the mIoU of drivable area segmentation reaches 92.5%, a significant improvement in performance. Furthermore, MEPNet-s is further designed to accomplish four-task detection while still meeting the accuracy and real-time requirements of autonomous driving.
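    The CARAFE operator mentioned above upsamples by reassembling each output location from a local neighborhood of the input feature map, weighted by a content-aware, softmax-normalized kernel. The following is a minimal NumPy sketch of that reassembly step only, not the paper's implementation: the learned kernel-prediction branch is replaced here by externally supplied softmax-normalized kernels, and all function and variable names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def carafe_reassemble(feat, kernels, scale=2, k=3):
    """Content-aware reassembly.

    feat:    input feature map, shape (C, H, W)
    kernels: one softmax-normalized k*k kernel per output location,
             shape (H*scale, W*scale, k*k)
    returns: upsampled feature map, shape (C, H*scale, W*scale)
    """
    C, H, W = feat.shape
    pad = k // 2
    padded = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    out = np.zeros((C, H * scale, W * scale))
    for i2 in range(H * scale):
        for j2 in range(W * scale):
            # each output pixel maps back to a source location in feat
            i, j = i2 // scale, j2 // scale
            patch = padded[:, i:i + k, j:j + k].reshape(C, -1)  # (C, k*k)
            out[:, i2, j2] = patch @ kernels[i2, j2]
    return out

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
# stand-in for the learned kernel-prediction module: random logits -> softmax
kernels = softmax(rng.standard_normal((16, 16, 9)), axis=-1)
up = carafe_reassemble(feat, kernels)  # shape (4, 16, 16)
```

    Because each kernel sums to one, every output value is a convex combination of its source neighborhood, which is what lets CARAFE sharpen contours without introducing values outside the local feature range.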

History
  • Online: March 27, 2024