CN116543327A - Method, device, computer equipment and storage medium for identifying work types of operators - Google Patents

Method, device, computer equipment and storage medium for identifying work types of operators

Info

Publication number
CN116543327A
CN116543327A (application number CN202210089320.3A)
Authority
CN
China
Prior art keywords
feature
image
characteristic
operator
work
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210089320.3A
Other languages
Chinese (zh)
Inventor
武晓敏
段志伟
刘明
阳化
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Glodon Co Ltd
Original Assignee
Glodon Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Glodon Co Ltd
Priority to CN202210089320.3A
Publication of CN116543327A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method, a device, computer equipment and a storage medium for identifying the work types of operators. The method comprises the following steps: performing target detection on an input image, and determining the position information of each operator in the input image based on the detection result; cropping feature association region images from the input image based on the operator position information, each feature association region image containing both operator features and construction operation features; and identifying the plurality of feature association region images according to a preset identification rule to determine the work type of the operator in each image. Because the feature association region images are cropped around the detected personnel positions and feature detection is performed on the images to be identified that are screened from them, the work type is determined by specifically detecting features of the associated region, which gives the method outstanding advantages such as higher identification accuracy and a better identification effect.

Description

Method, device, computer equipment and storage medium for identifying work types of operators
Technical Field
The invention relates to the technical field of work type identification, and in particular to a method, a device, computer equipment and a storage medium for identifying the work types of operators.
Background
In smart construction site scenarios, the working conditions of a construction site often need to be reported back remotely and in real time. These conditions include, but are not limited to, which trade is working on the current working surface, which procedure is being carried out, and whether the workers are wearing safety equipment such as helmets, reflective vests and safety harness buckles. In particular, with a large number of workers on a construction site, accurately determining the work type of each operator is important. Traditional work type identification schemes recognize the worker's current action and infer the work type from it. Although this approach can identify the work type, different trades frequently perform similar or even identical actions: a worker erecting a wall formwork or tying wall reinforcement bars may both be standing, while a worker tying floor slab reinforcement bars or laying water pipes may both be squatting. As a result, traditional schemes suffer from low identification accuracy, frequent misidentification, and an inability to identify the work type when an unknown action occurs.
Disclosure of Invention
In order to solve the problems of low identification accuracy, frequent misidentification and narrow applicability in existing worker work type identification schemes, the invention provides a method, a device, computer equipment and a storage medium for identifying the work types of operators, so as to improve the identification accuracy and the identification effect.
To achieve the above technical object, the present invention provides a method for identifying the work types of operators, which may include, but is not limited to, at least one of the following steps.
Performing target detection on an input image, and determining the position information of the operator in the input image based on the target detection result.
Cropping a feature association region image from the input image based on the position information of the operator, wherein the feature association region image has operator features and construction operation features.
Identifying the plurality of feature association region images according to a preset identification rule, and determining the work type of the operator in each feature association region image.
To achieve the above technical object, the present invention also provides a device for identifying the work types of operators, which includes, but is not limited to, a target detection unit, an image cropping unit and a work type determining unit.
The target detection unit is used for performing target detection on the input image and determining the position information of the operator in the input image based on the target detection result.
The image cropping unit is used for cropping a feature association region image from the input image based on the position information of the operator, wherein the feature association region image has operator features and construction operation features.
The work type determining unit is used for identifying the plurality of feature association region images according to a preset identification rule and determining the work type of the operator in each feature association region image.
To achieve the above object, the present invention also provides a computer device, which may include a memory and a processor, where the memory stores computer readable instructions that, when executed by the processor, cause the processor to perform the steps of the method for identifying the work types of operators according to any embodiment of the present invention.
To achieve the above object, the present invention also provides a storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the method for identifying the work types of operators according to any embodiment of the present invention.
The beneficial effects of the invention are as follows:
According to the invention, feature association region images are cropped based on the position information of the personnel, and feature detection is performed on the images to be identified that are screened from those region images so as to determine the work type. The purpose of work type identification is therefore achieved by specifically detecting features of the associated region, which yields outstanding advantages such as higher identification accuracy and a better identification effect. The scheme of the invention imposes very few restrictions on the scene: it can effectively identify the work types of operators even when several trades work at the same time, with high identification accuracy. Compared with existing complex and cumbersome action recognition schemes, the method is easier to implement, its implementation cost is lower, and the user experience and satisfaction are better.
Drawings
FIG. 1 is a schematic flow diagram of a method for identifying the work types of operators in one or more embodiments of the present invention.
FIG. 2 is a schematic diagram of the implementation flow of the first embodiment for identifying the work type of an operator in one or more embodiments of the present invention.
FIG. 3 is a schematic diagram of the implementation flow of the second embodiment for identifying the work type of an operator in one or more embodiments of the present invention.
FIG. 4 is a schematic training flow diagram of the person detection module in one or more embodiments of the invention.
FIG. 5 is a schematic training flow diagram of the coarse classification module in one or more embodiments of the invention.
FIG. 6 is a schematic training flow diagram of the working-face feature detection module in one or more embodiments of the invention.
FIG. 7 is a schematic composition diagram of a device for identifying the work types of operators in one or more embodiments of the present invention.
FIG. 8 is a schematic diagram of the internal structure of a computer device in one or more embodiments of the invention.
Detailed Description
The invention provides a method, a device, computer equipment and a storage medium for identifying the work types of operators, which are explained and illustrated below with reference to the accompanying drawings.
The invention provides a simple and effective method for identifying the work type of an operator based on the construction operation features around the operator, which solves the problems of low identification accuracy, frequent misidentification and inability to identify unknown actions in traditional work type identification schemes.
As shown in fig. 1, and in conjunction with fig. 2, one or more embodiments of the present invention provide a method for identifying the work types of operators; the method may include, but is not limited to, one or more of the following steps, described in detail below.
Step 100, performing target detection on the input image, and determining the position information of the operator in the input image based on the target detection result. It should be understood that the number of operators in the input image of this embodiment is at least one, that is, the invention can identify the work types of a plurality of operators at the same time. Therefore, the technical scheme provided by the invention is not limited to specific scenes, and supports work type identification even when several procedures and several working surfaces are being worked on simultaneously.
Specifically, performing target detection on the input image in the embodiment of the present invention includes: receiving an acquired working-surface image, where the invention takes the large acquired working-surface image as the input source (Input); the input image is obtained by performing size transformation (resize) and normalization on the working-surface image. This embodiment can convert the working-surface image to a fixed size, for example 640×640, by the size transformation, but is not limited thereto; the input image (Img) can then be obtained by converting the pixel values of the working-surface image from the range 0 to 255 to floating-point values in the range 0 to 1 by normalization, after which human-body (operator) target detection is performed on the input image.
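For illustration, the preprocessing described above can be sketched as follows; this is a minimal sketch assuming OpenCV and NumPy are used, with the fixed size of 640×640 following the example given in this embodiment.

```python
import cv2
import numpy as np

def preprocess(work_surface_image: np.ndarray, size: int = 640) -> np.ndarray:
    """Resize the captured working-surface image to a fixed size and normalize
    pixel values from 0-255 to floating-point values in the range [0, 1]."""
    resized = cv2.resize(work_surface_image, (size, size))
    return resized.astype(np.float32) / 255.0
```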
It should be understood that the working surface in the embodiment of the present invention refers to the movable space that a given trade requires when working on a building product; this space is called the working surface of that trade. For the work types involved in the invention, taking the standard-floor construction sequence in building construction as an example, each procedure can correspond to one work type, which can include, for example and without limitation, a line snapping work type, a wall reinforcement tying work type, a wall formwork erection work type, a wall concrete work type, a curing (maintenance) work type, a wall formwork removal work type, a floor slab formwork erection work type, a floor slab reinforcement tying work type, a floor slab concrete pouring work type, and the like. Based on effective detection of the human-body target, the technical scheme of the invention is not limited by external factors such as the shooting angle and shooting position of the image acquisition device.
As shown in fig. 2, this embodiment performs target detection on the input image through a person detection module to detect the operator position information. In this embodiment, the position information of an operator is represented by a position frame, and the specific position {bbox(xmin, ymin, xmax, ymax)} of each operator on the working surface in the input image is detected by the person detection module, where xmin and ymin are the x and y coordinates of the upper-left corner of the position frame, and xmax and ymax are the x and y coordinates of the lower-right corner of the position frame.
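As an illustration of this position-frame convention, the sketch below draws the detected frames on the input image; the detection values and the use of OpenCV for drawing are assumptions, and the person detection call itself is omitted.

```python
import cv2

# One position frame per detected operator: bbox = (xmin, ymin, xmax, ymax),
# where (xmin, ymin) is the upper-left corner and (xmax, ymax) the lower-right
# corner of the frame. The values below are placeholders.
detections = [{"bbox": (120, 80, 260, 400)}, {"bbox": (500, 90, 630, 380)}]

def draw_position_frames(input_image, detections):
    """Draw the detected operator position frames on a copy of the input image."""
    vis = input_image.copy()
    for det in detections:
        xmin, ymin, xmax, ymax = det["bbox"]
        cv2.rectangle(vis, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)
    return vis
```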
As shown in fig. 4, the person detection module in this embodiment is a deep learning (DL) model for detecting and locating the positions of people in the large working-surface image, and this embodiment trains the deep learning model in the following manner, taking the trained model as the person detection module.
Firstly, images of working surfaces (containing operators) are collected or acquired, and the positions of the operators are annotated; secondly, the annotated images are used as the training set of the deep learning model, and data preprocessing is performed on the training set to convert the images into a file format readable during training; thirdly, the pre-trained deep learning network is trained with the training set. In this embodiment, the model is continuously trained with a mature detection framework and tuned until it converges, so as to export the optimal model as the person detection module.
Step 200, cropping a feature association region image from the input image based on the position information of the operator, wherein the feature association region image has operator features and construction operation features. The feature association region of the invention contains not only the operator but also the area around the operator, thereby associating the operator with that area.
Specifically, in the embodiment of the present invention, cropping (crop) a feature association region image from the input image based on the position information of an operator can include: determining a first feature region on the input image based on the position information of the operator, and performing expansion processing on the first feature region to obtain a second feature region containing the first feature region; the image within the second feature region is then cropped from the input image to obtain the feature association region image. This embodiment obtains the position information of each operator from the person detection module and performs the outward expansion, mainly extending toward both ends from the upper-left and lower-right corner positions. Through the expansion processing, the cropped feature association region image contains not only the operator features but also the construction operation features.
Performing the expansion processing on the first feature region in the embodiment of the invention comprises: acquiring the first coordinate information of the first feature region, determining second coordinate information based on the first coordinate information and preset expansion coefficients, and determining the second feature region from the second coordinate information. Specifically, this embodiment can compute the differences dx and dy between the upper-left and lower-right corner coordinates in the first coordinate information, multiply dx and dy by the preset expansion coefficients ratio_x and ratio_y respectively to obtain the size of the second feature region, and enlarge the first feature region to that size so as to determine the second coordinate information {Rbbox(xmin', ymin', xmax', ymax')} and thereby obtain the second feature region. If the expanded region exceeds the image boundary, any expanded pixel coordinate smaller than 0 is forcibly set to 0, and any expanded coordinate larger than the width or height of the original image is forcibly set to the width or height of the original image, so as to obtain the feature association region {Rbbox(xmin', ymin', xmax', ymax')} of the elements and materials related to each worker. In other words, this embodiment obtains the positions of the four corner points of the image to be cropped, (xmin', ymin'), (xmax', ymax'), (xmin', ymax'), (xmax', ymin'), so that the feature association region image Img' is cropped accurately.
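The expansion and clipping described above can be sketched as follows; expanding symmetrically from the frame centre toward both corners and the coefficients ratio_x = ratio_y = 1.5 are illustrative assumptions, while the clipping to the original image borders follows the description.

```python
import numpy as np

def expand_and_crop(img: np.ndarray, bbox, ratio_x: float = 1.5, ratio_y: float = 1.5):
    """Expand a detected position frame (xmin, ymin, xmax, ymax) by the preset
    expansion coefficients and crop the feature association region image Img'.
    Coordinates outside the original image are clipped to its width and height."""
    h, w = img.shape[:2]
    xmin, ymin, xmax, ymax = bbox
    dx, dy = xmax - xmin, ymax - ymin                  # size of the first feature region
    cx, cy = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0  # frame centre
    new_dx, new_dy = dx * ratio_x, dy * ratio_y        # size of the second feature region
    xmin_p = int(max(cx - new_dx / 2.0, 0))
    ymin_p = int(max(cy - new_dy / 2.0, 0))
    xmax_p = int(min(cx + new_dx / 2.0, w))
    ymax_p = int(min(cy + new_dy / 2.0, h))
    crop = img[ymin_p:ymax_p, xmin_p:xmax_p]
    return (xmin_p, ymin_p, xmax_p, ymax_p), crop
```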
Based on the above technical scheme, the feature association region image is cropped by means of outward expansion, so that the region image contains construction operation features in addition to operator features. Thanks to the outward expansion, the invention is also applicable when the operator stands at the edge of a working surface, such as a wall formwork working surface or a wall reinforcement tying working surface, thereby removing the assumption that the operator stands on the current construction working surface and making the method applicable to more scenes.
Step 300, identifying the plurality of feature association region images according to a preset identification rule, and determining the work type of the operator in each feature association region image. The invention can realize this process through two embodiments, which are described below.
As shown in fig. 2, the invention identifies the plurality of feature association region images according to a preset identification rule and determines the work type of the operator in each feature association region image; the first embodiment may include, but is not limited to, step S10 and step S20.
Step S10, screening the plurality of feature association region images according to the operator features to determine the images to be identified. By screening the feature association regions, the invention associates the operator features with the construction operation features more effectively, thereby improving the subsequent work type identification effect.
Screening the plurality of feature association region images according to the operator features can include: performing two-class classification on the plurality of feature association region images according to the operator features, and taking the feature association region images corresponding to the busy operator feature as the images to be identified; the operator features include a busy feature and an idle feature. By performing binary classification on each feature association region image, the invention predicts whether the operator in the region is busy or idle, so as to screen out the feature association region images containing busy operators; the degree of association between a busy operator and the surrounding area is higher, which supports accurate identification of the work type.
As shown in fig. 2, the embodiment of the present invention recognizes whether the operator contained in the feature association region thumbnail is busy or idle through the coarse classification module. The coarse classification module is a classifier that classifies the feature association region image Img' and outputs a preliminary screening result; the embodiment of the invention performs subsequent fine classification on the preliminary screening result, i.e. the feature association region thumbnails containing busy operators. In addition, the embodiment of the invention can also perform image preprocessing, such as size transformation and normalization, before the image is sent to the coarse classification module, so that the preprocessed image meets the input requirement of the coarse classification module.
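A minimal sketch of the busy/idle prediction performed by the coarse classification module is given below; the network architecture is not specified in the description, so the input size, the preprocessing and the class ordering [idle, busy] are assumptions, with PyTorch used for illustration.

```python
import torch
import torchvision.transforms as T
from PIL import Image

# Preprocessing assumed to match the coarse classification module's input
# requirement (input size and scaling are placeholders).
to_input = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),            # scales pixel values to [0, 1]
])

def is_busy(model: torch.nn.Module, region_image: Image.Image) -> bool:
    """Return True when the coarse classification module predicts that the
    operator in the feature association region image is busy (constructing)."""
    model.eval()
    with torch.no_grad():
        logits = model(to_input(region_image).unsqueeze(0))  # shape (1, 2): [idle, busy]
    return bool(logits.argmax(dim=1).item() == 1)
```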
As shown in fig. 5, the coarse classification module in this embodiment is a deep learning model for coarse classification of the local feature association region thumbnails of operators; the model mainly outputs two classes, one being idle (the current worker is resting and not constructing) and the other being busy (the current worker is constructing).
Firstly, outward expansion is applied to the output of the person detection module and a series of thumbnails are cropped out as the main training data set of the current model; the data are divided into the two categories, busy and idle, annotated accordingly, and the annotated data set is used as the model training set. This embodiment can perform data preprocessing on the training set to convert the images into a file format readable during training. Secondly, the embodiment of the invention trains the classification model using a CNN (Convolutional Neural Network) classification model.
This embodiment continuously trains the classification model and tunes it until the model converges, so as to export the optimal model (the coarse classification module). Based on this technical scheme, by screening out the feature association region images corresponding to the busy operator feature, the invention achieves targeted identification of busy operators, so it is applicable when several work types are being carried out simultaneously; even for several different work types, the invention can still achieve very high identification accuracy with a very low probability of misidentification, and is applicable to more scenes.
Step S20, determining the work type of the operator by performing feature detection on the construction operation features in the image to be identified. The invention identifies the work type of the operator through the surrounding construction operation features associated with the operator.
In the embodiment of the invention, determining the work type of the operator by performing feature detection on the construction operation features in the image to be identified includes: determining the feature label (Label) information of the image to be identified by performing feature detection on the construction operation features in the image to be identified, and taking the work type associated with the feature label information as the work type of the operator. Based on the fine local features of the construction working surface, the invention performs fine identification of the operator's work type on the basis of the feature regions, thereby realizing a fine-grained method for identifying work types on the construction working surface based on associated feature regions, with the advantages of stronger feature discrimination and a very low probability of misidentification.
More specifically, determining the feature label information of the image to be identified by performing feature detection on the construction operation features in the image to be identified may include: performing feature detection on the construction operation features in the image to be identified to obtain work type element features; and determining the feature label information matched with the work type element features. The invention characterizes the work type of the operator through feature labels, so that the work type is identified by recognizing work type element features.
As shown in fig. 2, the embodiment of the invention can detect whether the feature association region thumbnail contains the element features corresponding to a given work type and, when it does, output a working-face feature label; the working-face feature label can include, but is not limited to, a rebar tying label, a formwork label, a concrete pouring label, a line snapping label, a water pipe label and the like, so as to determine the work type information of the operator. In this embodiment, the working-face feature detection module outputs the label and assigns it to the corresponding operator, so the module can output the work type label information corresponding to the person in the feature association region together with the person's position.
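To illustrate how a working-face feature label determines the work type, a hypothetical label-to-work-type mapping is sketched below; the label set follows the examples above, while the exact identifiers and the mapping values are illustrative assumptions.

```python
from typing import List, Optional

# Hypothetical mapping from working-face feature labels to work types.
LABEL_TO_WORK_TYPE = {
    "rebar_tying":      "reinforcement tying work type",
    "formwork":         "formwork erection work type",
    "concrete_pouring": "concrete pouring work type",
    "line_snapping":    "line snapping work type",
    "water_pipe":       "water pipe installation work type",
}

def work_type_from_labels(labels: List[str]) -> Optional[str]:
    """Return the work type associated with the first recognized feature label,
    or None if no label matches."""
    for label in labels:
        if label in LABEL_TO_WORK_TYPE:
            return LABEL_TO_WORK_TYPE[label]
    return None
```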
As shown in fig. 6, the working-face feature detection module of this embodiment is a deep learning model used to further determine whether element or material features related to a given procedure exist in a thumbnail that the coarse classification module has labelled as busy, and to identify from the specific local features what work is being performed, thereby determining the work type of the operator through the corresponding label.
Firstly, busy worker images are detected by the person detection module, the output of the person detection module is expanded outward, and a series of thumbnails are cropped out as the training set of the current deep learning model; they are annotated with the material element features related to the work in question, for example the main element features of formwork work include black or red boards, and the main element features of rebar tying work include black steel bars; as many features as possible are annotated, and the annotated images are used as the training set of the current deep learning model. This embodiment can perform data preprocessing on the training set to convert the images into a file format readable during training. Secondly, the pre-trained deep learning network is trained with the training set; the deep learning network in the embodiment of the invention is the YOLOv5 framework, specifically YOLOv5-m. In this embodiment, the model is continuously trained with the mature detection framework and tuned until it converges, so as to export the optimal model (the working-face feature detection module).
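A hedged sketch of running the trained working-face feature detection module is given below, assuming the YOLOv5-m model was trained with the ultralytics/yolov5 repository and exported as a custom weight file; the weight file name is a placeholder.

```python
import torch

# Load a custom-trained YOLOv5-m working-face feature detector through torch.hub;
# "worksurface_yolov5m.pt" stands in for the exported optimal model.
model = torch.hub.load("ultralytics/yolov5", "custom", path="worksurface_yolov5m.pt")

def detect_work_features(region_image):
    """Detect work type element features (e.g. rebar, formwork panels) in a
    feature association region image and return (label, confidence) pairs."""
    results = model(region_image)
    detections = results.pandas().xyxy[0]   # columns: xmin, ymin, xmax, ymax, confidence, class, name
    return list(zip(detections["name"], detections["confidence"]))
```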
Compared with the prior art, the invention does not need to recognize the worker's actions, but determines the work type of the operator by performing feature detection on the images to be identified that are screened from the feature association region images. Therefore, the implementation of the invention is not affected by the operator's actions or behaviours, and unknown construction actions or behaviours during the work do not need to be considered; in other words, the invention achieves a behaviour-independent work type identification scheme. In addition, the sample size required by the invention is smaller, while it still achieves a high work type identification rate.
As shown in fig. 3, the invention identifies the plurality of feature association region images according to a preset identification rule and determines the work type of the operator in each feature association region image; the second embodiment may include, but is not limited to, step S11, step S21 and step S31.
Step S11, performing feature recognition on the feature association region image through a first feature recognition module to obtain a first recognition result; and performing feature recognition on the feature association region image through a second feature recognition module to obtain a second recognition result. The embodiment of the invention can use the two feature recognition modules to recognize the feature association region image in parallel, that is, the feature association region thumbnail is detected simultaneously and the first recognition result Res1 and the second recognition result Res2 are output respectively. The first feature recognition module and the second feature recognition module in this embodiment are arranged in parallel; the first feature recognition module can be implemented by the coarse classification module of the first embodiment (with more labels and label categories), and the second feature recognition module can be implemented by the working-face feature detection module of the first embodiment, which are not described again here.
Step S21, determining the feature label information of the feature association region image according to the first recognition result and the second recognition result, so as to determine the work type of the operator using the feature label information.
Specifically, determining the feature label information of the feature association region image according to the first recognition result and the second recognition result includes: when the confidence of the first recognition result is greater than a first confidence threshold T1, the confidence of the second recognition result is greater than a second confidence threshold T2, and the first recognition result and the second recognition result corresponding to the same feature association region image have consistent predicted label information, this embodiment uses the consistent predicted label information as the feature label information of that feature association region image.
Taking the case where the first feature recognition module and the second feature recognition module are two different working-face feature detection modules as an example, when the confidence of the first recognition result Res1 is greater than the first confidence threshold T1, the confidence of the second recognition result Res2 is greater than the second confidence threshold T2, and the predicted label information of Res1 is the same as that of Res2, the same predicted label information is used as the feature label information of the feature association region image. The two different working-face feature detection modules can be obtained by training the same training model or by training different training models.
Taking the case where the first feature recognition module and the second feature recognition module are the coarse classification module and the working-face feature detection module respectively as an example, if the confidence of the first recognition result Res1 output by the coarse classification module is greater than the first confidence threshold T1, the confidence of the second recognition result Res2 output by the working-face feature detection module is greater than the second confidence threshold T2, and the predicted label information in Res1 is consistent with the predicted label information in Res2 (for example, the similarity of the two predicted labels is greater than a set value), the embodiment of the present invention can take the predicted label information in Res2 output by the working-face feature detection module as the feature label information of the feature association region image.
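The fusion rule of the second embodiment can be sketched as follows; the threshold values T1 and T2 are placeholders, and strict label equality is used here as the consistency test, whereas the description also allows a similarity above a set value.

```python
from typing import Optional, Tuple

T1, T2 = 0.6, 0.5   # illustrative first and second confidence thresholds

def fuse(res1: Tuple[str, float], res2: Tuple[str, float],
         t1: float = T1, t2: float = T2) -> Optional[str]:
    """Return the feature label of a feature association region image when both
    recognition results are confident enough and their predicted labels agree;
    otherwise return None and no work type is assigned for this image."""
    label1, conf1 = res1
    label2, conf2 = res2
    if conf1 > t1 and conf2 > t2 and label1 == label2:
        return label2   # the consistent predicted label (taken from Res2)
    return None
```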
Step S31, the work type associated with the feature label information is used as the work type of the operator.
As shown in fig. 7, based on the same technical concept as the method for identifying the work types of operators, one or more embodiments of the present invention also provide a device for identifying the work types of operators.
The device for identifying the work types of operators provided by the invention comprises a target detection unit, an image cropping unit and a work type determining unit, which are described below.
The target detection unit is used for performing target detection on the input image and determining the position information of the operator in the input image based on the target detection result.
Optionally, the target detection unit of the embodiment of the present invention may be configured to receive the acquired working-surface image and to obtain the input image by performing size transformation and normalization on the working-surface image; the target detection unit is specifically used for human-body target detection on the input image.
The image cropping unit is used for cropping a feature association region image from the input image based on the position information of the operator, wherein the feature association region image has operator features and construction operation features.
Optionally, the image cropping unit of the embodiment of the present invention is configured to determine a first feature region on the input image based on the position information of the operator, and to perform expansion processing on the first feature region so as to obtain a second feature region containing the first feature region; the image cropping unit is further configured to crop the image within the second feature region from the input image to obtain the feature association region image.
Specifically, the image cropping unit of the embodiment of the present invention can be used to acquire the first coordinate information of the first feature region, to determine the second coordinate information based on the first coordinate information and the preset expansion coefficients, and to determine the second feature region using the second coordinate information.
The work type determining unit in one embodiment of the present invention may include, but is not limited to, an image screening unit and a feature detection unit.
The image screening unit is used for screening the plurality of feature association region images according to the operator features to determine the images to be identified.
Optionally, the image screening unit in the embodiment of the present invention is configured to perform two-class classification on the plurality of feature association region images according to the operator features, and to take the feature association region images corresponding to the busy operator feature as the images to be identified.
The operator features include a busy feature and an idle feature.
The feature detection unit is used for determining the work type of the operator by performing feature detection on the construction operation features in the image to be identified.
Optionally, the feature detection unit of the embodiment of the invention may be used to determine the feature label information of the image to be identified by performing feature detection on the construction operation features in the image to be identified, and to take the work type associated with the feature label information as the work type of the operator.
Specifically, the feature detection unit of the embodiment of the invention can be used to perform feature detection on the construction operation features in the image to be identified to obtain the work type element features, and to determine the feature label information matched with the work type element features.
The work type determining unit in another embodiment of the present invention may include, but is not limited to, a first feature recognition module, a second feature recognition module and a work type determining module.
The first feature recognition module is used for performing feature recognition on the feature association region image to obtain a first recognition result; the second feature recognition module is used for performing feature recognition on the feature association region image to obtain a second recognition result; wherein the first feature recognition module and the second feature recognition module are arranged in parallel.
The work type determining module is used for determining the feature label information of the feature association region image according to the first recognition result and the second recognition result, and for taking the work type associated with the feature label information as the work type of the operator. Specifically, when the confidence of the first recognition result is greater than the first confidence threshold, the confidence of the second recognition result is greater than the second confidence threshold, and the first recognition result and the second recognition result corresponding to the same feature association region image have consistent predicted label information, the work type determining module of this embodiment takes the consistent predicted label information as the feature label information of the feature association region image.
As shown in fig. 8, based on the same technical concept as the method for identifying the work types of operators, one or more embodiments of the present invention can also provide a computer device, which includes a memory and a processor, the memory storing computer readable instructions that, when executed by the processor, cause the processor to perform the steps of the method for identifying the work types of operators in any embodiment of the present invention. The detailed implementation process of the method is described above and will not be repeated here.
As shown in fig. 8, based on the same inventive concept, one or more embodiments of the present invention can also provide a storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the method for identifying the work types of operators in any embodiment of the present invention. The detailed implementation process of the method is described above and will not be repeated here.
Logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable storage medium for use by or in connection with an instruction execution system, apparatus or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus or device and execute them. For the purposes of this description, a "computer-readable storage medium" can be any means that can contain, store, communicate, propagate or transport the program for use by or in connection with the instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable storage medium may even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with appropriate combinational logic gates, programmable gate arrays (PGA), field programmable gate arrays (FPGA), and the like.
In the description of the present specification, a description referring to the terms "present embodiment," "one embodiment," "some embodiments," "example," "specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
The above description is only of the preferred embodiments of the present invention, and is not intended to limit the invention, but any modifications, equivalents, and simple improvements made within the spirit of the present invention should be included in the scope of the present invention.

Claims (13)

1. A method for identifying the work type of an operator, comprising:
performing target detection on an input image, and determining position information of an operator in the input image based on a target detection result;
cropping a feature association region image from the input image based on the position information of the operator, wherein the feature association region image has operator features and construction operation features;
and identifying a plurality of feature association region images according to a preset identification rule, and determining the work type of the operator in each feature association region image.
2. The method for identifying the work type of an operator according to claim 1, wherein the cropping a feature association region image from the input image based on the position information of the operator comprises:
determining a first feature area on the input image based on the position information of the operator;
performing expansion processing on the first characteristic region to obtain a second characteristic region containing the first characteristic region;
and cropping the image within the second feature region from the input image to obtain the feature association region image.
3. The method of claim 2, wherein the performing the expansion process on the first feature region includes:
acquiring first coordinate information of a first characteristic region;
determining second coordinate information based on the first coordinate information and a preset expansion coefficient;
and determining a second characteristic region by using the second coordinate information.
4. The method for identifying the work type of the operator according to claim 1, wherein the identifying a plurality of feature association region images according to a preset identification rule, and determining the work type of the operator in the feature association region images, comprises:
screening the multiple feature association area images according to the features of the operators to determine the images to be identified;
and determining the work type of the operator by carrying out feature detection on the construction operation features in the image to be identified.
5. The method of claim 4, wherein the screening the plurality of feature-related region images according to the operator features comprises:
classifying the plurality of feature association area images according to the features of the operators, and taking the feature association area images corresponding to the busy features of the operators as images to be identified;
the worker features include a person busy feature and a person idle feature.
6. The method for identifying a worker task of claim 4, wherein the determining a worker task by means of feature detection of a construction task feature in the image to be identified comprises:
feature label information of the image to be identified is determined by feature detection of construction operation features in the image to be identified;
and using the work type associated with the feature label information as the work type of the operator.
7. The method of claim 6, wherein determining the feature tag information of the image to be identified by feature detection of construction work features in the image to be identified comprises:
performing feature detection on construction operation features in the image to be identified to obtain work type element features;
and determining feature label information matched with the work type element features.
8. The method for identifying the work type of the operator according to claim 1, wherein the identifying a plurality of feature association region images according to a preset identification rule, and determining the work type of the operator in the feature association region images, comprises:
performing feature recognition on the feature association region image through a first feature recognition module to obtain a first recognition result; performing feature recognition on the feature association region image through a second feature recognition module to obtain a second recognition result;
the first feature recognition module and the second feature recognition module are arranged in parallel;
determining feature tag information of a feature association region image according to the first recognition result and the second recognition result;
and using the work type associated with the feature label information as the work type of the operator.
9. The method of claim 8, wherein the determining feature tag information of a feature-related region image based on the first recognition result and the second recognition result comprises:
when the confidence of the first recognition result is greater than a first confidence threshold, the confidence of the second recognition result is greater than a second confidence threshold, and the first recognition result and the second recognition result corresponding to the same feature association region image have consistent predicted label information,
taking the consistent predicted label information as the feature label information of the feature association region image.
10. The method of claim 1, wherein said performing object detection on the input image comprises:
receiving the collected working surface image;
obtaining an input image by performing size transformation and normalization processing on the working surface image;
and detecting the human body target of the input image.
11. A device for identifying the work type of an operator, comprising:
the target detection unit is used for carrying out target detection on the input image and determining the position information of the operator in the input image based on a target detection result;
an image capturing unit configured to capture a feature-related region image from the input image based on position information of the operator, the feature-related region image having an operator feature and a construction operation feature;
and the work type determining unit is used for identifying a plurality of characteristic association region images according to a preset identification rule and determining the work type of the operator in the characteristic association region images.
12. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the method for identifying the work type of an operator according to any one of claims 1 to 10.
13. A storage medium storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the method for identifying the work type of an operator according to any one of claims 1 to 10.
CN202210089320.3A 2022-01-25 2022-01-25 Method, device, computer equipment and storage medium for identifying work types of operators Pending CN116543327A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210089320.3A CN116543327A (en) 2022-01-25 2022-01-25 Method, device, computer equipment and storage medium for identifying work types of operators

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210089320.3A CN116543327A (en) 2022-01-25 2022-01-25 Method, device, computer equipment and storage medium for identifying work types of operators

Publications (1)

Publication Number Publication Date
CN116543327A (en) 2023-08-04

Family

ID=87444097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210089320.3A Pending CN116543327A (en) 2022-01-25 2022-01-25 Method, device, computer equipment and storage medium for identifying work types of operators

Country Status (1)

Country Link
CN (1) CN116543327A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116994210A (en) * 2023-09-20 2023-11-03 中国水利水电第七工程局有限公司 Tunnel constructor identification method, device and system
CN116994210B (en) * 2023-09-20 2023-12-22 中国水利水电第七工程局有限公司 Tunnel constructor identification method, device and system

Similar Documents

Publication Publication Date Title
CN110197203B (en) Bridge pavement crack classification and identification method based on width learning neural network
CN109858367B (en) Visual automatic detection method and system for worker through supporting unsafe behaviors
CN110569856B (en) Sample labeling method and device, and damage category identification method and device
WO2021051885A1 (en) Target labeling method and apparatus
CN110781839A (en) Sliding window-based small and medium target identification method in large-size image
WO2020238256A1 (en) Weak segmentation-based damage detection method and device
CN115995056A (en) Automatic bridge disease identification method based on deep learning
CN109886928A (en) A kind of target cell labeling method, device, storage medium and terminal device
CN112347985A (en) Material type detection method and device
CN114882213A (en) Animal weight prediction estimation system based on image recognition
CN112633118A (en) Text information extraction method, equipment and storage medium
CN110969610A (en) Power equipment infrared chart identification method and system based on deep learning
CN114332004A (en) Method and device for detecting surface defects of ceramic tiles, electronic equipment and storage medium
CN103699876A (en) Method and device for identifying vehicle number based on linear array CCD (Charge Coupled Device) images
CN116543327A (en) Method, device, computer equipment and storage medium for identifying work types of operators
CN116152219A (en) Concrete member damage detection and assessment method and system thereof
Vijayan et al. A survey on surface crack detection in concretes using traditional, image processing, machine learning, and deep learning techniques
CN112001320B (en) Gate detection method based on video
CN112016514B (en) Traffic sign recognition method, device, equipment and storage medium
CN109190489A (en) A kind of abnormal face detecting method based on reparation autocoder residual error
CN112001336A (en) Pedestrian boundary crossing alarm method, device, equipment and system
CN112329770B (en) Instrument scale identification method and device
CN109325557B (en) Data intelligence acquisition method based on computer visual image identification
CN116110127A (en) Multi-linkage gas station cashing behavior recognition system
CN115937684A (en) Building construction progress identification method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination