Research on deep learning-based methods for highway traffic accident detection

CLC Number: TN911.73; TP391.41; U491.3

Fund Project: Supported by the National Natural Science Foundation of China (62101147), the Guangxi Natural Science Foundation (2020GXNSFAA159146), the Guangxi Innovation-Driven Development Special Project (AA21077008), and the Key Laboratory Fund of the Ministry of Education (CRKL190108)


Author:
Affiliation:

1. School of Information and Communication, Guilin University of Electronic Technology, Guilin 541004, China; 2. National and Local Joint Engineering Research Center for Satellite Navigation, Positioning and Location Services, Guilin University of Electronic Technology, Guilin 541004, China


    Abstract:

    Existing single-stage deep models for traffic accident detection suffer from high false alarm rates and heavy computational redundancy in highway scenarios, severely limiting their practical deployment. To address these issues, this paper proposes a two-stage traffic accident detection method tailored for highways, following a "stationary vehicle filtering + appearance-based recognition" strategy. In the first stage, YOLO11 and Bot-SORT are combined to detect and track vehicles, and inter-frame speed analysis is used to identify stationary vehicles as potential accident candidates. In the second stage, an improved model named YOLO-EA performs appearance-based detection exclusively on the stationary vehicles, combined with a multi-frame voting mechanism to enhance stability and robustness. Built upon the YOLO11 architecture, YOLO-EA incorporates an EAS-Stem module and an AWD-Conv module: the former enhances edge and contour extraction at the input stage, while the latter improves downsampling efficiency, retaining critical features at a lower computational cost. Experimental results show that YOLO-EA improves Precision, mAP@0.5, and mAP@0.5:0.95 by 10.9%, 3.4%, and 2.8%, respectively, while reducing the parameter count by 21%. On the constructed accident video dataset, the proposed method achieves an accident recognition rate of 81.25% and reduces the false alarm rate by 24.46% compared with single-stage detection strategies. The method strikes a favorable balance between accuracy and inference efficiency, demonstrating strong potential for real-world deployment.
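The first-stage decision logic described in the abstract, filtering tracked vehicles by inter-frame speed and then stabilizing the per-track accident decision with a multi-frame vote, can be sketched roughly as follows. This is a minimal illustration that assumes per-frame tracker output of (track ID, centroid, per-frame accident flag); the class name, thresholds, and window size are invented for the example and are not the paper's actual parameters.

```python
from collections import defaultdict, deque

# Illustrative parameters (assumptions, not values from the paper).
STATIONARY_PX = 2.0    # max centroid movement (pixels/frame) to count as stationary
VOTE_WINDOW = 15       # number of recent frames considered in the vote
VOTE_RATIO = 0.6       # fraction of positive frames needed to confirm an accident

class StationaryVoter:
    """Flags stationary tracks and confirms accidents by majority vote."""

    def __init__(self):
        self.last_pos = {}                                       # track_id -> (x, y)
        self.votes = defaultdict(lambda: deque(maxlen=VOTE_WINDOW))

    def update(self, detections):
        """detections: iterable of (track_id, cx, cy, accident_flag).

        accident_flag stands in for the second-stage appearance
        classifier's per-frame output. Returns the set of track IDs
        confirmed as accidents on this frame.
        """
        confirmed = set()
        for track_id, cx, cy, accident_flag in detections:
            # Inter-frame displacement of the track's centroid.
            px, py = self.last_pos.get(track_id, (cx, cy))
            speed = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
            self.last_pos[track_id] = (cx, cy)
            # Only stationary candidates contribute positive votes;
            # a moving vehicle contributes a negative vote.
            is_candidate = speed <= STATIONARY_PX
            self.votes[track_id].append(is_candidate and accident_flag)
            window = self.votes[track_id]
            if len(window) == VOTE_WINDOW and sum(window) >= VOTE_RATIO * VOTE_WINDOW:
                confirmed.add(track_id)
        return confirmed
```

In use, `update` would be called once per video frame with the tracker's output; the voting window suppresses one-off false positives from any single frame, which is the stability property the multi-frame vote is meant to provide.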

Cite this article:

凌锐, 闫坤, 梁宏宇, 韦焯淇, 郝航勃. Research on deep learning-based methods for highway traffic accident detection[J]. Electronic Measurement Technology, 2026, 49(6): 29-38.


History:
  • Online publication date: 2026-05-13
