A new benchmark model for the automated recognition of a wide range of heavy construction equipment

SenseableSpace/Detection-Heavy-Equipment-Construction-Benchmark

Official Python Implementation

This work was conducted by Yejin Shin, Yujin Choi, Jaeseung Won, Taehoon Hong, and Choongwan Koo (Corresponding Author) | Paper

Affiliation: Construction Engineering & Management Lab, Incheon National University (INUCEM).

Cite This:

Shin, Y., Choi, Y., Won, J., Hong, T., and Koo, C. (2024). "A new benchmark model for the automated detection and classification of a wide range of heavy construction equipment." Journal of Management in Engineering, 40(2), 04023069, https://doi.org/10.1061/JMENEA.MEENG-5630.

Abstract

The integration of computer vision technology into construction sites poses various challenges due to the complexity of the environment. Prior studies on computer vision related to heavy construction equipment have primarily focused on the limited range of equipment types provided in standard databases, such as the Microsoft Common Objects in Context (MS COCO) dataset. This conventional approach has limitations in capturing the diverse working conditions and dynamic environments encountered on real construction sites. To overcome these challenges, this study proposed a new benchmark model for the automated detection and classification of a wide range of heavy construction equipment (i.e., nine representative types) commonly used on construction sites, using a deep convolutional neural network. This study was conducted in four steps: (i) data collection and preparation; (ii) data transformation; (iii) model training; and (iv) model validation. The proposed YOLOv5l (large; YOLOv5 with a larger network) model demonstrated high reliability, achieving a "mean Average Precision (mAP)_0.5:0.95" of 90.26%. This study makes a significant contribution to the domain of construction engineering and management by providing a more efficient and systematic management system to proactively prevent heavy equipment-related safety accidents under the diverse working conditions and dynamic environments encountered at construction sites. Moreover, the proposed approach can be extended to integrate advanced techniques such as case-based reasoning, digital twin, and blockchain, allowing for automated activity recognition under various occlusions, carbon-emissions monitoring and diagnostics of heavy equipment, and a robust real-time construction management system with enhanced security.
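The "mAP_0.5:0.95" metric reported above is the COCO-style mean average precision: AP is computed at each Intersection-over-Union (IoU) threshold from 0.50 to 0.95 in steps of 0.05, then averaged. A minimal sketch of those two ingredients (the boxes and AP values below are illustrative, not from the paper):

```python
# Sketch of COCO-style "mAP_0.5:0.95": average AP over ten IoU thresholds.
# Boxes are (x1, y1, x2, y2); all values here are illustrative placeholders.

def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# IoU thresholds 0.50, 0.55, ..., 0.95 used by the 0.5:0.95 convention.
thresholds = [0.50 + 0.05 * i for i in range(10)]

def map_50_95(ap_per_threshold):
    """Average per-threshold AP values into a single mAP_0.5:0.95 score."""
    return sum(ap_per_threshold) / len(ap_per_threshold)
```

A detection counts as a true positive at a given threshold only if its IoU with a ground-truth box meets that threshold, which is why the averaged score rewards tight localization more than plain mAP_0.5.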

Keywords

Construction site; Heavy equipment; Benchmark model; Object detection and classification; Computer vision; Field applicability

Research Framework

Figure 1. Research framework


Results

Figure 2. Inference with the image dataset - confidence scores by equipment class (YOLOv5s, YOLOv5m, and YOLOv5l models)


Code Definition

| Category | Description |
| --- | --- |
| `WebCrawling.ipynb` | Automatically collects a set of authentic images of a wide range of heavy construction equipment from the Google website. |
| `YOLOv5_open.ipynb` | Develops the proposed vision-based classifiers for the automated recognition of a wide range of heavy construction equipment. |
