Artificial intelligence for automated identification and condition assessment of traffic control devices

Published: Issue 32 (2025)
Section: Transport technology
Pages: 328–337
Keywords: road; traffic sign inspection; computer vision; deep learning; condition assessment; intelligent transportation systems; asset management.
How to cite this article: Roman Smolyanyuk, Serhii Khalin. Artificial intelligence for automated identification and condition assessment of traffic control devices. Dorogi і mosti [Roads and bridges]. Kyiv, 2025. Issue 32. P. 328–337 [in Ukrainian].

Authors

Roman Smolyanyuk
Kharkiv National Automobile and Highway University (KHNADU), Kharkiv, Ukraine
https://orcid.org/0000-0001-7087-7834

Serhii Khalin
Kharkiv National Automobile and Highway University (KHNADU), Kharkiv, Ukraine
https://orcid.org/0009-0009-6061-8498

Summary

Introduction. Traffic control devices, primarily traffic signs and road markings, make the roadway environment intelligible to drivers and have a major impact on safety. Manual inspections remain the benchmark for compliance, but they are labour-intensive, costly, and episodic, which creates long lag times between deterioration and remediation. Recent advances in computer vision and deep learning enable automated pipelines that detect, classify and assess the condition of signs using video and images, optionally supported by photometric measurements of retroreflectivity.

Problem statement. Despite high accuracy on public benchmarks, deep models degrade in real‑world edge cases: fading, dirt, graffiti, occlusions by foliage, snow or fog, and strong off‑axis viewpoints. Moreover, condition assessment requires quantified metrics — colour difference, glyph legibility and contrast, geometric deformation and retroreflectivity — rather than a coarse «good / damaged» label.
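
For instance, the colour-difference component of such an assessment is typically scored with the CIEDE2000 formula. The sketch below is a minimal illustration, assuming scikit-image is available and that the observed sign patch and its nominal (as-manufactured) colour are given as RGB arrays in [0, 1]; the reference colour used here is a hypothetical value, not one taken from the study:

```python
import numpy as np
from skimage import color  # pip install scikit-image

def mean_color_difference(patch_rgb: np.ndarray, reference_rgb: np.ndarray) -> float:
    """Mean CIEDE2000 colour difference between an observed sign patch and
    its nominal colour; both inputs are float RGB images in [0, 1]."""
    lab_patch = color.rgb2lab(patch_rgb)
    lab_reference = color.rgb2lab(reference_rgb)
    # deltaE_ciede2000 compares (L*, a*, b*) triples along the last axis
    return float(np.mean(color.deltaE_ciede2000(lab_patch, lab_reference)))

# Hypothetical usage: compare a (simulated) faded patch against a nominal signal red
patch = np.random.default_rng(0).uniform(size=(32, 32, 3))
reference = np.broadcast_to(np.array([0.72, 0.11, 0.13]), patch.shape)
print(f"mean dE00: {mean_color_difference(patch, reference):.2f}")
```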

Purpose. To consolidate approaches for automated inspection, compare their strengths and limitations under realistic constraints, and outline an edge–cloud architecture that minimizes manual effort while meeting regulatory tolerances.

Materials and methods. We consider a spectrum of methods — from color/shape rules and classical hand-crafted features (HOG-SVM, SIFT-SVM) to single-stage object detectors (YOLOv8, SSD), two-stage detectors (Faster R-CNN, Mask R-CNN), and multi-task as well as multimodal approaches that incorporate depth maps (LiDAR/stereo). For quantitative condition assessment, we describe conversion to the CIELAB color space after Gray-World or Shades-of-Grey white balancing, the use of physics-guided networks to estimate retroreflectivity, and the evaluation metrics employed (mAP for recognition, RMSE for condition regression, FPS on Jetson-NX).
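
As a minimal sketch of the colour-normalization step described above, the following assumes OpenCV and NumPy and a BGR uint8 crop of a detected sign; the Minkowski exponent p = 6 is the value conventionally used for Shades-of-Grey (p = 1 reduces to Gray-World), not a parameter reported in the study:

```python
import cv2  # pip install opencv-python
import numpy as np

def shades_of_grey(bgr: np.ndarray, p: float = 6.0) -> np.ndarray:
    """Shades-of-Grey white balancing: estimate the illuminant per channel
    as the Minkowski p-norm, then scale channels to equalize it."""
    img = bgr.astype(np.float32) / 255.0
    illuminant = np.power(np.mean(np.power(img, p), axis=(0, 1)), 1.0 / p)
    gain = illuminant.mean() / np.maximum(illuminant, 1e-6)
    return np.clip(img * gain, 0.0, 1.0)

def patch_to_cielab(bgr_patch: np.ndarray) -> np.ndarray:
    """White-balance a detected-sign crop, then convert it to CIELAB."""
    balanced = shades_of_grey(bgr_patch)
    # float32 input in [0, 1] yields L* in [0, 100], a*/b* in roughly [-127, 127]
    return cv2.cvtColor(balanced, cv2.COLOR_BGR2LAB)
```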

Results. Based on aggregated data, single- and two-stage deep detectors deliver mAP of 0.95–0.97, while multi-task/multimodal pipelines achieve the lowest error in condition estimation (RMSE 0.05–0.08). On edge devices, 18–35 fps is attainable (architecture-dependent), enabling on-device processing with subsequent offloading of candidate frames for heavy segmentation in the cloud. The proposed architecture combines a lightweight on-board YOLOv8-Nano detector (~28 fps) with cloud modules for segmentation and photometric analysis; contrastive pretraining on 20,000 unlabeled patches reduces labeling needs by ~60%, and an inexpensive solid-state LiDAR improves damage-class accuracy and enables tilt/roll measurement with ±2° precision.
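
The edge side of this split can be kept very thin. The following is a hypothetical outline using the Ultralytics YOLOv8 API; the 0.5 offload threshold and the offload_for_segmentation() hook are illustrative placeholders rather than interfaces from the study:

```python
from ultralytics import YOLO  # pip install ultralytics

model = YOLO("yolov8n.pt")  # lightweight "nano" variant for on-board inference

def offload_for_segmentation(frame, boxes) -> None:
    """Placeholder uplink: enqueue the frame and candidate boxes for
    cloud-side segmentation and photometric analysis."""
    ...  # e.g., publish to a message queue or write to an upload buffer

def process_frame(frame, conf_offload: float = 0.5):
    """Detect sign candidates on-device; offload only promising frames."""
    result = model(frame, verbose=False)[0]
    candidates = [box for box in result.boxes if float(box.conf) >= conf_offload]
    if candidates:
        offload_for_segmentation(frame, candidates)
    return candidates
```

Heavier modules (segmentation, CIELAB photometry) then run asynchronously in the cloud on the offloaded frames only, which is what keeps the on-board compute budget within reach of embedded hardware.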

Conclusions. AI‑assisted inspection substantially increases the frequency and objectivity of assessments, shortens maintenance cycles and creates the basis for data‑driven asset management. Future priorities include expanding open datasets with authentic degradation patterns, improving physics‑guided networks for direct retroreflectivity estimation, and conducting longitudinal studies to quantify safety and economic benefits.

References

  1. FHWA. Sign Retroreflectivity – Definitions and Units (R_A, cd/lux/m²). URL: https://highways.dot.gov/safety/other/visibility/workshops-nighttime-visibility-traffic-signs-summary-workshop-findings-7 (Last accessed: 08.09.2025) [in English].
  2. Khalilikhah M., et al. Using Stationary Image-Based Method for Evaluation of Traffic Sign Condition. Case Studies in Transport Policy, 2016. URL: https://www.sciencedirect.com/science/article/pii/S2046043016300429 (Last accessed: 08.09.2025) [in English].
  3. Balali V., et al. Image-Based Remote Measurement of Retro-Reflectivity of Traffic Signs (Daytime HDR → Nighttime). Mineta Transportation Institute Report 1878, 2021. URL: https://transweb.sjsu.edu/sites/default/files/1878-Balili-Remote-Measurement-Road-Reflectivity.pdf (Last accessed: 08.09.2025) [in English].
  4. Oregon DOT (ODOT). Retroreflective Signs — Final Report. 2018. URL: https://www.oregon.gov/odot/Programs/ResearchDocuments/SPR799LidarRetroFinalReport.pdf (Last accessed: 08.09.2025) [in English].
  5. Maerz N. H., et al. Automated Mobile Highway Sign Visibility Measurement Using Nighttime Video. TRB, 2002. URL: https://www.mst.edu/~norbert/pdf/trb02_maerz_sign.pdf (Last accessed: 08.09.2025) [in English].
  6. Cai H. Y., Li L. J. Measuring Light and Geometry Data of Roadway Environments using HDR Photogrammetry. Journal of Transportation Technologies, 2012. URL: https://scispace.com/pdf/measuring-light-and-geometry-data-of-roadway-environments-1axhsejfvc.pdf (Last accessed: 08.09.2025) [in English].
  7. Finlayson G. D., Trezzi E. Shades of Gray and Colour Constancy. IS&T Color Imaging Conference, 2004. URL: https://library.imaging.org/admin/apis/public/api/ist/website/downloadArticle/cic/12/1/art00008 (Last accessed: 08.09.2025) [in English].
  8. van de Weijer J., Gevers T., Gijsenij A. Grey-Edge Hypothesis and Color Constancy. Technical Report CR-2542, INRIA/LEAR, 2007. URL: https://lear.inrialpes.fr/people/vandeweijer/papers/cr2542.pdf (Last accessed: 08.09.2025) [in English].
  9. Sharma G., Wu W., Dalal E. N. The CIEDE2000 color-difference formula: Implementation notes… Color Research & Application, 2005. 30(1). 21–30. DOI: https://doi.org/10.1002/col.20070. URL: https://onlinelibrary.wiley.com/doi/10.1002/col.20070 (Last accessed: 08.09.2025) [in English].
  10. State Agency for Restoration and Development of Infrastructure of Ukraine. Report on the Execution of the Passport of the Budget Program for 2023 (KPKVK 3111020). Kyiv, 2024. URL: https://restoration.gov.ua/4489/finansovo-ekonomichna_diialnist/59452/59540.pdf (Last accessed: 09.10.2025) [in Ukrainian].
  11. Lim X. R., Lee C. P., Lim K. M., Ong T. S., Alqahtani A., Ali M. Recent Advances in Traffic Sign Recognition: Approaches and Datasets. Sensors, 2023. 23(10):4674. DOI: https://doi.org/10.3390/s23104674. URL: https://www.mdpi.com/1424-8220/23/10/4674 (Last accessed: 08.10.2025) [in English].
  12. Chen H., Li X., Zhou W., et al. Computational methods for automatic traffic signs detection and recognition: a review. Array, 2024. URL: https://www.sciencedirect.com/science/article/pii/S2590123024017961 (Last accessed: 08.09.2025) [in English].
  13. Houben S., Stallkamp J., Salmen J., Schlipsing M., Igel C. Detection of Traffic Signs in Real-World Images: The German Traffic Sign Detection Benchmark. Proc. IJCNN, 2013. 715–722. DOI: https://doi.org/10.1109/IJCNN.2013.6706807 [in English].
  14. Neuhold G., Ollmann T., Rota Bulo S., Kontschieder P. The Mapillary Vistas Dataset for Semantic Understanding of Street Scenes. ICCV, 2017. URL: https://openaccess.thecvf.com/content_ICCV_2017/papers/Neuhold_The_Mapillary_Vistas_ICCV_2017_paper.pdf (Last accessed: 08.09.2025) [in English].
  15. Lin T.-Y., Dollár P., Girshick R., He K., Hariharan B., Belongie S. Feature Pyramid Networks for Object Detection. CVPR, 2017. URL: https://openaccess.thecvf.com/content_cvpr_2017/papers/Lin_Feature_Pyramid_Networks_CVPR_2017_paper.pdf (Last accessed: 08.10.2025) [in English].
  16. He K., Gkioxari G., Dollár P., Girshick R. Mask R-CNN. arXiv:1703.06870, 2017. URL: https://arxiv.org/pdf/1703.06870 (Last accessed: 08.09.2025) [in English].
  17. Zhang H., Wu C., Zhang Z., et al. ResNeSt: Split-Attention Networks. arXiv:2004.08955, 2020. URL: https://arxiv.org/abs/2004.08955 (Last accessed: 08.09.2025) [in English].
  18. Li S., Zhang H., Wang B., et al. A Small Object Detection Algorithm for Traffic Signs Based on the Improved YOLOv7. Sensors, 2023. URL: https://pmc.ncbi.nlm.nih.gov/articles/PMC10459082/ (Last accessed: 08.09.2025) [in English].
  19. Huang X., et al. Adaptive colour calibration for traffic‑sign inspection. IET Intelligent Transport Systems, 2023 [in English].
  20. Ronneberger O., Fischer P., Brox T. U‑Net: Convolutional Networks for Biomedical Image Segmentation. arXiv:1505.04597, 2015. URL: https://arxiv.org/abs/1505.04597 (Last accessed: 08.09.2025) [in English].
  21. DSTU 4100:2021. Road signs. General technical conditions. Kyiv: SE «UkrNDNC», 2021 [in Ukrainian].
  22. Zhang F., et al. Extracting Traffic Signage by Combining Point Clouds and Images. Remote Sensing, 2023. URL: https://pmc.ncbi.nlm.nih.gov/articles/PMC9964076/ (Last accessed: 08.10.2025) [in English].
  23. Ahn Y., Munjy R., Li Z. Traffic Sign Extraction from Mobile LiDAR Point Cloud. ROSA P / NTL, 2024. URL: https://rosap.ntl.bts.gov/view/dot/75693 (Last accessed: 08.09.2025) [in English].
  24. Cheng Y.-T., et al. Image-Aided LiDAR Extraction, Classification, and Mapping. Remote Sensing, 2024. URL: https://www.mdpi.com/2072-4292/16/10/1668 (Last accessed: 08.09.2025) [in English].
  25. Müller S., et al. Multimodal damage detection for road‑side assets. ITS World Congress, 2024 [in English].
  26. Tomita K., et al. A Review of Infrared Thermography for Delamination Detection. Sensors, 2022. URL: https://pmc.ncbi.nlm.nih.gov/articles/PMC8779359/ (Last accessed: 08.09.2025) [in English].