WO2021184966A1 - ID photo detection method and apparatus, electronic device, and storage medium - Google Patents
ID photo detection method and apparatus, electronic device, and storage medium
- Publication number
- WO2021184966A1 · PCT/CN2021/073878 · CN2021073878W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- detected
- photo
- shoulder
- position information
- information
- Prior art date: 2020-03-16
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/40—Document-oriented image-based pattern recognition
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Definitions
- This application relates to the technical field of electronic equipment, and more specifically, to a method, device, electronic equipment, and storage medium for detecting ID photos.
- In view of the above problems, this application proposes an ID photo detection method and apparatus, an electronic device, and a storage medium to solve the above problems.
- In a first aspect, an embodiment of the present application provides an ID photo detection method. The method includes: obtaining an ID photo to be detected; inputting the ID photo to be detected into a trained human body feature point detection model and obtaining the detection information output by the trained human body feature point detection model, where the detection information includes the position information of at least two feature points of a part to be detected in the ID photo to be detected; and determining, based on the position information of the at least two feature points of the part to be detected, whether the part to be detected is level in the ID photo to be detected.
- In a second aspect, an embodiment of the present application provides an ID photo detection apparatus. The apparatus includes: an ID photo acquisition module for obtaining an ID photo to be detected; a detection information acquisition module for inputting the ID photo to be detected into a trained human body feature point detection model and obtaining the detection information output by the trained human body feature point detection model, where the detection information includes the position information of at least two feature points of a part to be detected in the ID photo to be detected; and an ID photo detection module for determining, based on the position information of the at least two feature points of the part to be detected, whether the part to be detected is level in the ID photo to be detected.
- In a third aspect, an embodiment of the present application provides an electronic device including a memory and a processor, where the memory is coupled to the processor and stores instructions that, when executed by the processor, cause the processor to perform the above method.
- In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code that can be invoked by a processor to execute the above method.
- FIG. 1 shows a schematic flowchart of an ID photo detection method provided by an embodiment of the present application;
- FIG. 2 shows a schematic flowchart of an ID photo detection method provided by another embodiment of the present application;
- FIG. 3 shows a schematic flowchart of an ID photo detection method provided by still another embodiment of the present application;
- FIG. 4 shows a schematic flowchart of an ID photo detection method provided by yet another embodiment of the present application;
- FIG. 5 shows a block diagram of an ID photo detection device provided by an embodiment of the present application;
- FIG. 6 shows a block diagram of an electronic device used to execute the ID photo detection method according to an embodiment of the present application;
- FIG. 7 shows a storage unit for storing or carrying program code implementing the ID photo detection method according to an embodiment of the present application.
- At present, users need ID photos on many occasions. An electronic device can generate or obtain an ID photo and check whether it is compliant. However, current compliance checks on ID photos rely on traditional image processing methods, which suffer from low recognition accuracy and frequent misjudgment.
- In view of the above problems, the inventor, through long-term research, proposed the ID photo detection method, apparatus, electronic device, and storage medium provided by the embodiments of this application: the ID photo to be detected is detected by a trained human body feature point detection model, and whether the part to be detected is level is determined based on the detection information, so as to improve the detection accuracy and efficiency of ID photos. The specific ID photo detection method is described in detail in the following embodiments.
- FIG. 1 shows a schematic flow chart of a method for detecting ID photos according to an embodiment of the present application.
- the ID photo detection method is used to detect the ID photo to be detected through a trained human feature point detection model, and determine whether the part to be detected is level based on the detection information, so as to improve the detection accuracy and detection efficiency of the ID photo.
- In a specific embodiment, the ID photo detection method is applied to the ID photo detection device 200 shown in FIG. 5 and the electronic device 100 (FIG. 6) equipped with the ID photo detection device 200. The following takes an electronic device as an example to describe the specific process of this embodiment. The electronic device applied in this embodiment may be a smartphone, a tablet computer, a wearable electronic device, etc., which is not limited here.
- the process shown in Fig. 1 will be described in detail below.
- the ID photo detection method may specifically include the following steps:
- Step S110 Obtain the ID photo to be detected.
- the electronic device can obtain the ID photo as the ID photo to be detected.
- In some embodiments, the electronic device may collect a user image through a camera and generate the ID photo to be detected based on the collected user image.
- the electronic device can collect user images through the front camera, and generate the ID photo to be detected based on the user image collected by the front camera.
- the electronic device may collect user images through a rear camera, and generate a document photo to be detected based on the user image collected by the rear camera.
- the electronic device may collect a user image by rotating the camera, and generate the ID photo to be detected based on the user image collected by the rotating camera.
- the electronic device may collect a user image through a sliding camera, and generate the ID photo to be detected based on the user image collected by the sliding camera.
- the foregoing manner is only an enumeration of this embodiment, and the electronic device may also collect user images in other manners, which are not limited herein.
- the electronic device can obtain the ID photo to be detected from the photo album.
- the ID photos to be detected obtained by the electronic device from the photo album may include: ID photos pre-collected by a camera and stored in a local photo album, ID photos pre-downloaded from the network and stored in a local photo album, etc., which are not limited here.
- the electronic device can download the ID photo to be detected from the network.
- The ways in which the electronic device downloads the ID photo to be detected from the network may include: ID photos downloaded from a server via a wireless network, ID photos downloaded from a server via a data network, ID photos obtained from other electronic devices via a wireless network, ID photos obtained from other electronic devices via a data network, etc., which are not limited here.
- Step S120 Input the ID photo to be detected into the trained human body feature point detection model, and obtain the detection information output by the trained human body feature point detection model, where the detection information includes the position information of at least two feature points of the part to be detected in the ID photo to be detected.
- In this embodiment, after the ID photo to be detected is obtained, it can be input into the trained human body feature point detection model, and the detection information output by the model can be obtained, where the detection information may include the position information of at least two feature points of the part to be detected in the ID photo to be detected. The trained human body feature point detection model is obtained through machine learning. Specifically, a training data set is first collected, in which the attributes or features of one type of data are distinguished from those of another type. A neural network is then trained and modeled on the collected training data set according to a preset algorithm, so that a rule is summarized from the training data set and the trained human body feature point detection model is obtained. In this embodiment, the training data set may be, for example, multiple ID photos together with the position information of at least two feature points of the part to be detected in each ID photo, and the neural network may be, for example, the openpose human body feature point model, which is not limited here.
- In some embodiments, the trained human body feature point detection model may be pre-trained and then stored on a server that is in communication with the electronic device. On this basis, after the electronic device obtains the ID photo to be detected, it can send an instruction over the network to the model stored on the server to instruct the model to read, over the network, the ID photo to be detected obtained by the electronic device, or the electronic device can send the ID photo to be detected over the network to the model stored on the server. Storing the trained human body feature point detection model on a server reduces the occupation of the electronic device's storage space and reduces the impact on its normal operation. When the trained human body feature point detection model is stored on the server, it may be obtained by training an openpose human body feature point detection model that uses a backbone network as its model structure.
- In some embodiments, the trained human body feature point detection model may be pre-trained and then stored locally on the electronic device. On this basis, after acquiring the ID photo to be detected, the electronic device can directly call the locally stored model; for example, it can directly send an instruction to the model to instruct it to read the ID photo to be detected from a target storage area, or it can directly input the ID photo to be detected into the locally stored model. This effectively avoids network factors slowing down the delivery of the ID photo to be detected to the trained model, increases the speed at which the model obtains the ID photo to be detected, and improves the user experience. When the trained human body feature point detection model is stored on the electronic device, it may be obtained by training an openpose human body feature point detection model that uses mobilenetv2 as its model structure.
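- The two deployment options above can be illustrated with a short sketch. The following Python snippet is a minimal, hypothetical dispatcher, not the application's actual implementation: `local_model`, its `predict` method, and `server_url` are assumed placeholder names for an on-device model (e.g. the mobilenetv2-based variant) and a server-hosted model (e.g. the backbone-based variant).

```python
import json
import urllib.request
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

Point = Tuple[float, float]  # (x, y) position of one feature point


@dataclass
class DetectionInfo:
    """Detection information: named feature points of the part to be detected."""
    keypoints: Dict[str, Point]


def detect_keypoints(photo_bytes: bytes,
                     local_model=None,
                     server_url: Optional[str] = None) -> DetectionInfo:
    """Run the trained human body feature point detection model on an ID photo.

    `local_model` stands for a model stored on the device; `server_url` stands
    for a model hosted on a server. Both are placeholders for whatever the real
    deployment provides.
    """
    if local_model is not None:
        # On-device inference avoids network latency.
        return DetectionInfo(keypoints=local_model.predict(photo_bytes))
    if server_url is not None:
        # Server-side inference: send the photo and parse the returned keypoints.
        req = urllib.request.Request(
            server_url, data=photo_bytes,
            headers={"Content-Type": "application/octet-stream"})
        with urllib.request.urlopen(req) as resp:
            return DetectionInfo(
                keypoints={k: tuple(v) for k, v in json.load(resp).items()})
    raise ValueError("No model available: provide a local model or a server URL")
```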
- In some embodiments, the part to be detected may be a pre-designated body part of the user that can be presented in the ID photo to be detected. For example, the part to be detected may be the user's pre-designated shoulder, the user's pre-designated eyes, the user's pre-designated eyebrows, or the user's pre-designated ears that can be shown in the ID photo to be detected, etc., which is not limited here.
- In some embodiments, the acquired detection information includes the position information of at least two feature points of the part to be detected; that is, the position information of two, three, four, or more feature points of the part to be detected may be acquired. For example, when the part to be detected is the shoulder, the detection information may include at least two feature points of the shoulder; when the part to be detected is the eyes, the detection information may include at least two feature points of the eyes; when the part to be detected is the eyebrows, the detection information may include at least two feature points of the eyebrows; and when the part to be detected is the ears, the detection information may include at least two feature points of the ears, etc., which is not limited here.
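- As an illustration only, the mapping between the part to be detected and the feature points carried in the detection information might be organized as below; the keypoint names are assumptions for the sketch, not labels defined by the application or by any particular model.

```python
# Illustrative only: which feature points the detection information might carry
# for each candidate part to be detected. The names are assumptions.
EXPECTED_KEYPOINTS = {
    "shoulder": ["shoulder_center", "shoulder_first_side", "shoulder_second_side"],
    "eyes":     ["left_eye", "right_eye"],
    "eyebrows": ["left_eyebrow", "right_eyebrow"],
    "ears":     ["left_ear", "right_ear"],
}


def has_enough_points(detection: dict, part: str) -> bool:
    """The method needs at least two feature points of the part to be detected."""
    found = [name for name in EXPECTED_KEYPOINTS[part] if name in detection]
    return len(found) >= 2
```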
- Step S130 Based on the position information of at least two characteristic points of the to-be-detected part, determine whether the to-be-detected part is level in the to-be-detected ID photo.
- In this embodiment, when the position information of at least two feature points of the part to be detected output by the trained human body feature point detection model is obtained, whether the part to be detected is level in the ID photo to be detected may be determined based on that position information.
- In some embodiments, after the position information of at least two feature points of the part to be detected is obtained, the positional relationship of those feature points may be calculated from the position information, and whether the part to be detected is level in the ID photo to be detected may be determined based on that positional relationship. For example, when the position information of at least two feature points of the shoulder is obtained, the positional relationship of the shoulder feature points may be calculated from their position information, and whether the shoulder is level in the ID photo may be determined based on that relationship.
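- A minimal sketch of such a levelness check from two feature points is shown below, assuming (x, y) pixel coordinates with y increasing downward and an illustrative tilt tolerance of 3 degrees (the tolerance value is an assumption, not taken from the application).

```python
import math


def is_level(p1, p2, max_tilt_deg: float = 3.0) -> bool:
    """Judge whether the segment between two feature points is approximately
    horizontal. p1 and p2 are (x, y) pixel coordinates; max_tilt_deg is an
    assumed tolerance."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    if dx == 0:
        return False  # the two points are vertically aligned, clearly not level
    tilt = abs(math.degrees(math.atan2(dy, dx)))
    tilt = min(tilt, 180.0 - tilt)  # direction of the segment does not matter
    return tilt <= max_tilt_deg


# Example: two shoulder-side points that are almost on the same horizontal line.
print(is_level((120, 300), (360, 304)))  # True for a small tilt
```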
- In the ID photo detection method provided by this embodiment of the application, the ID photo to be detected is obtained and input into a trained human body feature point detection model, and the detection information output by the model is obtained, where the detection information includes the position information of at least two feature points of the part to be detected in the ID photo to be detected. Based on that position information, it is determined whether the part to be detected is level in the ID photo to be detected. The ID photo to be detected is thus checked by a trained human body feature point detection model, and whether the part to be detected is level is determined from the detection information, which improves the detection accuracy and efficiency for ID photos.
- FIG. 2 shows a schematic flow chart of the ID photo detection method provided by another embodiment of the present application.
- the process shown in Fig. 2 will be described in detail below.
- the ID photo detection method may specifically include the following steps:
- Step S210 Obtain the ID photo to be detected.
- For the specific description of step S210, please refer to step S110; it is not repeated here.
- Step S220 Input the ID photo to be detected into the trained human body feature point detection model, and obtain the detection information output by the trained human body feature point detection model, where the detection information includes the position information of the shoulder center, the position information of the first side of the shoulder, and the position information of the second side of the shoulder in the ID photo to be detected.
- In this embodiment, the part to be detected includes the shoulder, and the position information of at least two feature points of the part to be detected includes the position information of the shoulder center, the position information of the first side of the shoulder, and the position information of the second side of the shoulder. Therefore, after the ID photo to be detected is input into the trained human body feature point detection model, the position information of the shoulder center, the first side of the shoulder, and the second side of the shoulder output by the model can be obtained.
- In some embodiments, when the part to be detected is the shoulder, the position information of at least two feature points of the part to be detected may instead include: the position information of the shoulder center and of the first side of the shoulder, the position information of the shoulder center and of the second side of the shoulder, or the position information of the first side and of the second side of the shoulder, etc., which is not limited here.
- Step S230 Based on the position information of the shoulder center, the position information of the first side of the shoulder, and the position information of the second side of the shoulder, obtain the positional relationship between the shoulder center, the first side of the shoulder, and the second side of the shoulder.
- In this embodiment, after the position information of the shoulder center, the first side of the shoulder, and the second side of the shoulder is obtained, the positional relationship between the three can be obtained from that position information.
- In some embodiments, the relative offset distance and relative offset angle of the shoulder center with respect to the first side of the shoulder, of the shoulder center with respect to the second side of the shoulder, and of the first side of the shoulder with respect to the second side of the shoulder may be obtained from the position information, and the positional relationship between the shoulder center, the first side of the shoulder, and the second side of the shoulder may be obtained from these relative offset distances and angles.
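- The relative offset distances and angles mentioned above could be computed as in the following sketch; the dictionary keys and the coordinate convention (pixel coordinates, y increasing downward) are illustrative assumptions.

```python
import math


def relative_offset(a, b):
    """Relative offset of point b with respect to point a:
    (distance, angle in degrees measured from the horizontal axis)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))


def shoulder_relationship(center, first_side, second_side):
    """Collect the pairwise offsets described in step S230.
    The keys are illustrative names, not terms defined by the application."""
    return {
        "center_vs_first":  relative_offset(first_side, center),
        "center_vs_second": relative_offset(second_side, center),
        "first_vs_second":  relative_offset(second_side, first_side),
    }


# Example with assumed pixel coordinates.
rel = shoulder_relationship(center=(240, 300),
                            first_side=(120, 306),
                            second_side=(360, 303))
print(rel)
```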
- Step S240 Based on the position relationship, determine whether the shoulder is level in the document photo to be detected.
- In some embodiments, after the positional relationship between the shoulder center, the first side of the shoulder, and the second side of the shoulder is obtained, it can be determined from that relationship whether the three are on one horizontal line. When the shoulder center, the first side of the shoulder, and the second side of the shoulder are determined to be on one horizontal line, the shoulder can be determined to be level in the ID photo to be detected; when they are determined not to be on one horizontal line, the shoulder can be determined not to be level in the ID photo to be detected.
- In some embodiments, it can be determined from the positional relationship whether the line connecting the shoulder center, the first side of the shoulder, and the second side of the shoulder is parallel or close to parallel to the horizontal. When the connecting line is parallel or close to parallel to the horizontal, the shoulder can be determined to be level in the ID photo to be detected; when it is not, the shoulder can be determined not to be level in the ID photo to be detected.
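- One hedged way to express "parallel or close to parallel to the horizontal" for the three shoulder points is to compare their vertical spread with the shoulder width, as in the sketch below; the 5% ratio is an assumed threshold, not a value specified by the application.

```python
def shoulders_level(center, first_side, second_side, max_ratio: float = 0.05) -> bool:
    """Treat the three shoulder points as level when their vertical spread is
    small compared with the shoulder width. The 5% ratio is an assumption used
    for illustration only."""
    ys = [center[1], first_side[1], second_side[1]]
    width = abs(first_side[0] - second_side[0]) or 1.0  # avoid division by zero
    return (max(ys) - min(ys)) / width <= max_ratio


# Example: a 4-pixel spread over a 240-pixel shoulder width counts as level.
print(shoulders_level((240, 300), (120, 304), (360, 301)))  # True
```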
- Step S250 When it is determined that the part to be detected is not level in the ID photo to be detected, it is determined that the ID photo to be detected is not compliant, and prompt information is output, where the prompt information is used to prompt the user to change the posture of the part to be detected.
- An ID photo is generally required to be a photo of the user taken in a standard shooting posture; that is, in a photo used as an ID photo, the user's shoulders are generally required to be level, the two ears level, the two eyes level, the two eyebrows level, and so on.
- Therefore, in this embodiment, when it is determined that the part to be detected is not level in the ID photo to be detected, the ID photo to be detected can be considered non-compliant, and prompt information is output to remind the user to change the posture of the part to be detected based on the prompt information and re-capture a compliant ID photo. For example, when it is determined that the shoulder is not level in the ID photo to be detected, the photo can be considered non-compliant, and prompt information is output to remind the user to change the posture of the shoulder and re-capture a compliant ID photo. As another example, when it is determined that the two ears are not level in the ID photo to be detected, the photo can be considered non-compliant, and prompt information is output to remind the user to change the posture of the two ears (the head) and re-capture a compliant ID photo.
- the prompt information output by the electronic device may include voice prompt information, text prompt information, picture prompt information, flash prompt information, etc., which are not limited herein.
- the prompt information output by the electronic device may only include the non-compliance information of the ID photo to be detected, or it can include the non-compliance information of the ID photo to be detected and information guiding the change of the posture of the part to be detected, which is not limited here.
- Step S260 When it is determined that the part to be detected is level in the ID photo to be detected, it is determined that the ID photo to be detected is compliant, and the ID photo to be detected is output.
- In this embodiment, when it is determined that the part to be detected is level in the ID photo to be detected, the ID photo can be considered compliant and is output for use by the user. For example, when it is determined that the shoulder is level in the ID photo to be detected, the photo can be considered compliant and is output. As another example, when it is determined that the two ears are level in the ID photo to be detected, the photo can be considered compliant and is output. As yet another example, when it is determined that both the shoulder and the two ears are level in the ID photo, the photo can be considered compliant and is output; when either the shoulder or the two ears is determined not to be level in the ID photo, the photo can be considered non-compliant and prompt information is output.
- In the ID photo detection method provided by this further embodiment of the application, the ID photo to be detected is obtained and input into a trained human body feature point detection model, and the detection information output by the model is obtained, where the detection information includes the position information of the shoulder center, the first side of the shoulder, and the second side of the shoulder in the ID photo to be detected. The positional relationship between the shoulder center, the first side of the shoulder, and the second side of the shoulder is obtained from that position information, and whether the shoulder is level in the ID photo to be detected is determined from the positional relationship. When the part to be detected is determined not to be level in the ID photo to be detected, the photo is determined to be non-compliant and prompt information is output, where the prompt information is used to prompt the user to change the posture of the part to be detected; when the part to be detected is determined to be level, the photo is determined to be compliant and is output. Compared with the ID photo detection method shown in FIG. 1, this embodiment additionally checks whether the shoulder in the ID photo to be detected is level, to improve the compliance of ID photos. In addition, this embodiment outputs prompt information when the part to be detected is not level in the ID photo to be detected, to remind the user to correct the posture in time and improve the quality of the ID photo, and outputs the ID photo to be detected when the part to be detected is level, so that the user obtains a compliant ID photo.
- FIG. 3 shows a schematic flow chart of a method for detecting ID photos according to another embodiment of the present application.
- the ID photo detection method may specifically include the following steps:
- Step S310 Obtain the ID photo to be detected.
- Step S320 Input the ID photo to be detected into the trained human body feature point detection model, and obtain the detection information output by the trained human body feature point detection model, where the detection information includes the position information of at least two feature points of the part to be detected in the ID photo to be detected.
- For the specific description of step S310 to step S320, please refer to step S110 to step S120; it is not repeated here.
- Step S330 Based on the position information of the at least two feature points of the part to be detected, obtain the coordinate information of the at least two feature points of the part to be detected in the document photo to be detected.
- In this embodiment, when the position information of at least two feature points of the part to be detected output by the trained human body feature point detection model is obtained, the coordinate information of those feature points in the ID photo to be detected can be obtained from that position information.
- In some embodiments, an image coordinate system can be established on the ID photo to be detected, where the origin of the image coordinate system may be the lower left corner, the upper left corner, the lower right corner, the upper right corner, or the center of the ID photo to be detected, etc., which is not limited here. After the position information of the at least two feature points of the part to be detected is obtained, it can be mapped into the image coordinate system established on the ID photo to be detected, and the coordinates of the feature points in that image coordinate system are taken as the coordinate information of the at least two feature points of the part to be detected in the ID photo to be detected.
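- A possible mapping from the model's position information into a chosen image coordinate system is sketched below. It assumes the model outputs normalized coordinates with the origin at the top-left corner and y increasing downward; that input convention is an assumption, since the application does not fix one.

```python
def to_image_coordinates(point, photo_width, photo_height, origin="top_left"):
    """Map a normalized feature-point position (x, y in [0, 1], origin assumed
    at the top-left, y growing downward) into pixel coordinates of the ID photo
    for the chosen image-coordinate-system origin."""
    x_px = point[0] * photo_width
    y_px = point[1] * photo_height
    if origin == "top_left":
        return (x_px, y_px)
    if origin == "bottom_left":
        return (x_px, photo_height - y_px)
    if origin == "center":
        return (x_px - photo_width / 2.0, photo_height / 2.0 - y_px)
    raise ValueError(f"unsupported origin: {origin}")


# Example: a point at 30% width, 40% height of a 600x800 photo, bottom-left origin.
print(to_image_coordinates((0.3, 0.4), 600, 800, origin="bottom_left"))  # (180.0, 480.0)
```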
- Step S340 Based on the coordinate information of the at least two feature points of the part to be detected in the document photo to be detected, it is determined whether the part to be detected is level in the document photo to be detected.
- In this embodiment, after the coordinate information of the at least two feature points of the part to be detected in the ID photo to be detected is obtained, whether the part to be detected is level in the ID photo to be detected can be determined from that coordinate information.
- In some embodiments, after the coordinate information of the at least two feature points of the part to be detected in the ID photo to be detected is obtained, it can be checked whether the ordinates of those feature points in the ID photo to be detected are the same. When the ordinates of the at least two feature points are detected to be the same, the part to be detected can be determined to be level in the ID photo to be detected; when they are detected to be different, the part to be detected can be determined not to be level in the ID photo to be detected.
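- Because real detector output rarely yields exactly equal ordinates, a practical version of this check would allow a small tolerance, as in the sketch below; the 4-pixel tolerance is an assumption.

```python
def same_ordinate(points, tolerance_px: float = 4.0) -> bool:
    """Return True when the ordinates (y coordinates) of the given feature
    points are the same within an assumed pixel tolerance; strict equality
    would almost never hold for real detector output."""
    ys = [p[1] for p in points]
    return max(ys) - min(ys) <= tolerance_px


# The part to be detected is judged level when its feature points share an ordinate.
print(same_ordinate([(120, 301), (240, 300), (360, 303)]))  # True within 4 px
```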
- Step S350 When it is determined that the part to be detected is not level in the document photo to be detected, it is determined that the document photo to be detected is not compliant, and a prompt message is output, where the prompt information is used to prompt change to be detected The posture of the part.
- Step S360 When it is determined that the to-be-detected part is level in the to-be-detected ID photo, it is determined that the to-be-detected ID photo is compliant, and the to-be-detected ID photo is output.
- For the specific description of step S350 to step S360, please refer to step S250 to step S260; it is not repeated here.
- In the ID photo detection method provided by this further embodiment of the application, the ID photo to be detected is obtained and input into a trained human body feature point detection model, and the detection information output by the model is obtained, where the detection information includes the position information of at least two feature points of the part to be detected in the ID photo to be detected. The coordinate information of those feature points in the ID photo to be detected is obtained from their position information, and whether the part to be detected is level in the ID photo to be detected is determined from that coordinate information. When the part to be detected is determined not to be level in the ID photo to be detected, the photo is determined to be non-compliant and prompt information is output, where the prompt information is used to prompt the user to change the posture of the part to be detected; when the part to be detected is determined to be level, the photo is determined to be compliant and is output. Compared with the ID photo detection method shown in FIG. 1, this embodiment additionally obtains, from the position information of the at least two feature points of the part to be detected, their coordinate information in the ID photo to be detected, and checks whether the part to be detected is level based on that coordinate information, to improve detection precision. In addition, this embodiment outputs prompt information when the part to be detected is not level in the ID photo to be detected, to remind the user to correct the posture in time and improve the quality of the ID photo, and outputs the ID photo to be detected when the part to be detected is level, so that the user obtains a compliant ID photo.
- FIG. 4 shows a schematic flow chart of a method for detecting ID photos according to another embodiment of the present application.
- the ID photo detection method may specifically include the following steps:
- Step S410 Obtain a training data set, where the training data set includes a plurality of ID photos and the position information of at least two feature points of the parts to be detected in each ID photo.
- With respect to the trained human body feature point detection model in the foregoing embodiments, this embodiment further includes a method for training the model. The training of the human body feature point detection model may be performed in advance based on an acquired training data set, so that each subsequent detection of an ID photo to be detected can be performed with the trained model, without having to train the model every time an ID photo is to be detected.
- In some embodiments, a training data set may be obtained, where the training data set includes a plurality of ID photos and the position information of at least two feature points of the part to be detected in each of the plurality of ID photos.
- For example, the training data set may include a plurality of ID photos in which the respective positions of the shoulder center, the first side of the shoulder, and the second side of the shoulder are annotated.
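- An annotated training sample of this kind might be represented as follows; the field and keypoint names are illustrative assumptions, and the coordinates are made-up example values rather than real annotations.

```python
from dataclasses import dataclass
from typing import Dict, Tuple


@dataclass
class TrainingSample:
    """One annotated ID photo: the image path plus the labelled positions of
    the feature points of the part to be detected. Field names are illustrative."""
    photo_path: str
    keypoints: Dict[str, Tuple[float, float]]  # name -> (x, y) in pixels


# A training set pairing each ID photo with its annotated shoulder points.
training_set = [
    TrainingSample("photos/0001.jpg", {"shoulder_center": (240, 300),
                                       "shoulder_first_side": (120, 305),
                                       "shoulder_second_side": (360, 302)}),
    TrainingSample("photos/0002.jpg", {"shoulder_center": (250, 310),
                                       "shoulder_first_side": (130, 340),
                                       "shoulder_second_side": (370, 296)}),
]
```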
- Step S420 Based on the training data set, train the openpose human body feature point model with each ID photo as input data and the position information of at least two feature points of the part to be detected in each ID photo as output data, to obtain the trained human body key point detection model.
- In this embodiment, the plurality of ID photos in the training data set and the position information of at least two feature points of the part to be detected in each ID photo can be input into the openpose human body feature point model for training, to obtain the trained human body feature point detection model. It is understandable that each ID photo and the position information of the at least two feature points of the part to be detected in that ID photo can be input into the openpose human body feature point model as a corresponding pair, so that the model is trained to obtain the trained human body feature point detection model.
- In some embodiments, the accuracy of the trained human body feature point detection model can also be verified, by determining whether the information the model outputs for given input data meets a preset requirement. When the output of the trained model for the input data does not meet the preset requirement, the training data set can be re-collected to train the openpose human body feature point model again, or additional training data sets can be obtained to calibrate the trained human body feature point detection model, which is not limited here.
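- The following is a schematic, PyTorch-style stand-in for the training and verification described above. It uses a toy coordinate regressor rather than a real openpose network (which predicts heatmaps and part affinity fields), so the model, thresholds, and training loop are assumptions meant only to illustrate the supervised pairing of ID photos with annotated feature-point positions.

```python
import torch
from torch import nn


class TinyKeypointRegressor(nn.Module):
    """Toy stand-in for a keypoint model: predicts (x, y) for each feature point."""

    def __init__(self, num_points: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, num_points * 2),  # (x, y) per feature point
        )

    def forward(self, x):
        return self.net(x)


def train(model, images, targets, epochs: int = 10, lr: float = 1e-3):
    """images: (N, 3, H, W) tensor; targets: (N, num_points*2) normalized coords."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        opt.step()
    return model


def meets_preset_requirement(model, images, targets, max_err: float = 0.05) -> bool:
    """Rough stand-in for the accuracy verification step: mean coordinate error
    on held-out data must stay below an assumed threshold."""
    with torch.no_grad():
        return (model(images) - targets).abs().mean().item() <= max_err
```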
- Step S430 Obtain the ID photo to be detected.
- Step S440 Input the ID photo to be detected into the trained human body feature point detection model, and obtain the detection information output by the trained human body feature point detection model, where the detection information includes the position information of at least two feature points of the part to be detected in the ID photo to be detected.
- Step S450 Based on the position information of at least two characteristic points of the to-be-detected part, determine whether the to-be-detected part is level in the to-be-detected ID photo.
- For the specific description of step S430 to step S450, please refer to step S110 to step S130; it is not repeated here.
- Step S460 When it is determined that the part to be detected is not level in the ID photo to be detected, it is determined that the ID photo to be detected is not compliant, and prompt information is output, where the prompt information is used to prompt the user to change the posture of the part to be detected.
- Step S470 When it is determined that the part to be detected is level in the document photo to be detected, it is determined that the document photo to be detected is compliant, and the document photo to be detected is output.
- For the specific description of step S460 to step S470, please refer to step S250 to step S260; it is not repeated here.
- In the ID photo detection method provided by this further embodiment of the application, a training data set is obtained, where the training data set includes a plurality of ID photos and the position information of at least two feature points of the part to be detected in each ID photo. Based on the training data set, the openpose human body feature point model is trained with each ID photo as input data and the position information of the at least two feature points of the part to be detected in each ID photo as output data, to obtain the trained human body key point detection model. The ID photo to be detected is then obtained and input into the trained human body feature point detection model, and the detection information output by the model is obtained, where the detection information includes the position information of at least two feature points of the part to be detected in the ID photo to be detected. Based on that position information, it is determined whether the part to be detected is level in the ID photo to be detected. When the part to be detected is determined not to be level, the photo is determined to be non-compliant and prompt information is output, where the prompt information is used to prompt the user to change the posture of the part to be detected; when the part to be detected is determined to be level, the photo is determined to be compliant and is output. Compared with the ID photo detection method shown in FIG. 1, this embodiment trains the openpose human body feature point model on an acquired training data set to obtain the trained human body key point detection model, improving the accuracy of judging whether the part to be detected is level in the image to be detected. In addition, this embodiment outputs prompt information when the part to be detected is not level in the ID photo to be detected, to remind the user to correct the posture in time and improve the quality of the ID photo, and outputs the ID photo to be detected when the part to be detected is level, so that the user obtains a compliant ID photo.
- FIG. 5 shows a block diagram of the ID photo detection device 200 provided by an embodiment of the present application.
- the ID photo detection device 200 includes: ID photo acquisition module 210, detection information acquisition module 220, and ID photo detection module 230, where:
- the ID photo acquisition module 210 is used to obtain the ID photo to be tested.
- the detection information acquisition module 220 is configured to input the to-be-detected ID photo into a trained human body feature point detection model, and obtain detection information output by the trained human body feature point detection model, wherein the detection information includes all The position information of at least two characteristic points of the part to be detected in the document photo to be detected.
- the ID photo detection module 230 is configured to determine whether the to-be-detected location is level in the to-be-detected ID photo based on the position information of at least two characteristic points of the to-be-detected location.
- the part to be detected includes a shoulder
- the position information of at least two feature points of the part to be detected includes position information of the center of the shoulder, position information of the first side of the shoulder, and position information of the second side of the shoulder.
- the ID photo detection module 230 includes: a position relationship acquisition sub-module and a first ID photo detection sub-module, where:
- The position relationship acquisition sub-module is used to acquire the positional relationship between the shoulder center, the first side of the shoulder, and the second side of the shoulder based on the position information of the shoulder center, the position information of the first side of the shoulder, and the position information of the second side of the shoulder.
- the first ID photo detection sub-module is configured to determine whether the shoulder is level in the ID photo to be detected based on the position relationship.
- the ID photo detection module 230 includes: a coordinate information acquisition sub-module and a second ID photo detection sub-module, wherein:
- the coordinate information acquisition sub-module is configured to acquire the coordinate information of the at least two feature points of the to-be-detected location in the to-be-detected ID photo based on the position information of the at least two feature points of the to-be-detected location.
- the second ID photo detection sub-module is configured to determine whether the to-be-detected location is level in the to-be-detected ID photo based on the coordinate information of at least two feature points of the to-be-detected location in the to-be-detected ID photo .
- the ID photo detection device 200 further includes: a prompt information output module, wherein:
- the prompt information output module is configured to, when it is determined that the part to be detected is not level in the to-be-detected credential photo, determine that the to-be-detected credential photo is not compliant, and output prompt information, wherein the prompt information is used for Prompt to change the posture of the part to be detected.
- the certificate photo detection device 200 further includes: a certificate photo output module to be detected, wherein:
- The to-be-detected ID photo output module is used to determine, when it is determined that the part to be detected is level in the ID photo to be detected, that the ID photo to be detected is compliant, and to output the ID photo to be detected.
- the ID photo detection device 200 further includes: a training data set acquisition module and a model training module, wherein:
- the training data set acquisition module is used to acquire a training data set, the training data set includes a plurality of ID photos and the position information of at least two feature points of the parts to be detected in each ID photo.
- The model training module is configured to, based on the training data set, train the openpose human body feature point model with each ID photo as input data and the position information of at least two feature points of the part to be detected in each ID photo as output data, to obtain the trained human body key point detection model.
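- Purely as an illustration of how the modules listed above fit together, the sketch below composes them in code; the class and method names mirror the module names but are hypothetical, not the actual implementation of the device 200.

```python
class IDPhotoDetectionDevice:
    """Illustrative composition of the modules described above; not the
    actual implementation of device 200."""

    def __init__(self, keypoint_model, level_checker):
        self.keypoint_model = keypoint_model  # detection information acquisition module
        self.level_checker = level_checker    # ID photo detection module

    def acquire_photo(self, source) -> bytes:  # ID photo acquisition module
        return source.read()

    def check(self, source) -> bool:
        photo = self.acquire_photo(source)
        keypoints = self.keypoint_model.predict(photo)  # at least two feature points
        if self.level_checker(keypoints):
            return True  # compliant: the ID photo can be output
        # prompt information output module: prompt the user to adjust the posture
        print("Please adjust the posture of the part to be detected")
        return False
```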
- the coupling between the modules may be electrical, mechanical or other forms of coupling.
- the functional modules in the various embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
- the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules.
- FIG. 6 shows a structural block diagram of an electronic device 100 according to an embodiment of the present application.
- the electronic device 100 may be an electronic device capable of running application programs, such as a smart phone, a tablet computer, or an e-book.
- The electronic device 100 in this application may include one or more of the following components: a processor 110, a memory 120, and one or more application programs, where the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, and the one or more programs are configured to execute the method described in the foregoing method embodiments.
- the processor 110 may include one or more processing cores.
- The processor 110 uses various interfaces and lines to connect the various parts of the entire electronic device 100, and performs the various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. Optionally, the processor 110 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA).
- the processor 110 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like.
- the CPU mainly processes the operating system, user interface, and application programs; the GPU is used for rendering and drawing the content to be displayed; the modem is used for processing wireless communication. It can be understood that the above-mentioned modem may not be integrated into the processor 110, but may be implemented by a communication chip alone.
- The memory 120 may include random access memory (RAM) or read-only memory (ROM).
- the memory 120 may be used to store instructions, programs, codes, code sets or instruction sets.
- The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, an image playback function, etc.), instructions for implementing the following method embodiments, etc., and the data storage area may store data created by the electronic device 100 during use (such as a phone book, audio and video data, and chat records).
- FIG. 7 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
- the computer-readable medium 300 stores program code, and the program code can be invoked by a processor to execute the method described in the foregoing method embodiment.
- the computer-readable storage medium 300 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
- the computer-readable storage medium 300 includes a non-transitory computer-readable storage medium.
- the computer-readable storage medium 300 has storage space for the program code 310 for executing any method steps in the above-mentioned methods. These program codes can be read from or written into one or more computer program products.
- For example, the program code 310 may be compressed in a suitable form.
- In summary, the ID photo detection method, apparatus, electronic device, and storage medium provided in the embodiments of the application obtain the ID photo to be detected, input it into a trained human body feature point detection model, and obtain the detection information output by the model, where the detection information includes the position information of at least two feature points of the part to be detected in the ID photo to be detected. Based on that position information, it is determined whether the part to be detected is level in the ID photo to be detected. The ID photo to be detected is thus checked by the trained human body feature point detection model, and whether the part to be detected is level is determined from the detection information, which improves the detection accuracy and efficiency for ID photos.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
This application discloses an ID photo detection method and apparatus, an electronic device, and a storage medium, relating to the technical field of electronic devices. The method includes: obtaining an ID photo to be detected; inputting the ID photo to be detected into a trained human body feature point detection model and obtaining the detection information output by the model, where the detection information includes the position information of at least two feature points of a part to be detected in the ID photo to be detected; and determining, based on the position information of the at least two feature points of the part to be detected, whether the part to be detected is level in the ID photo to be detected. The ID photo detection method, apparatus, electronic device, and storage medium provided by the embodiments of this application detect the ID photo to be detected with a trained human body feature point detection model and determine from the detection information whether the part to be detected is level, so as to improve the detection accuracy and efficiency of ID photos.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese application No. CN202010183509.X filed on March 16, 2020, the entire contents of which are hereby incorporated herein by reference for all purposes.
This application relates to the technical field of electronic devices, and more specifically, to an ID photo detection method and apparatus, an electronic device, and a storage medium.
With the development of science and technology, electronic devices are used more and more widely, offer more and more functions, and have become one of the necessities of people's daily life. With the development of electronic device technology, more and more electronic devices can support generating ID photos and checking ID photos for compliance.
SUMMARY OF THE INVENTION
In view of the above problems, this application proposes an ID photo detection method and apparatus, an electronic device, and a storage medium to solve the above problems.
In a first aspect, an embodiment of the present application provides an ID photo detection method. The method includes: obtaining an ID photo to be detected; inputting the ID photo to be detected into a trained human body feature point detection model and obtaining the detection information output by the trained human body feature point detection model, where the detection information includes the position information of at least two feature points of a part to be detected in the ID photo to be detected; and determining, based on the position information of the at least two feature points of the part to be detected, whether the part to be detected is level in the ID photo to be detected.
In a second aspect, an embodiment of the present application provides an ID photo detection apparatus. The apparatus includes: an ID photo acquisition module for obtaining an ID photo to be detected; a detection information acquisition module for inputting the ID photo to be detected into a trained human body feature point detection model and obtaining the detection information output by the trained human body feature point detection model, where the detection information includes the position information of at least two feature points of a part to be detected in the ID photo to be detected; and an ID photo detection module for determining, based on the position information of the at least two feature points of the part to be detected, whether the part to be detected is level in the ID photo to be detected.
In a third aspect, an embodiment of the present application provides an electronic device including a memory and a processor, where the memory is coupled to the processor and stores instructions that, when executed by the processor, cause the processor to perform the above method.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code that can be invoked by a processor to execute the above method.
In order to explain the technical solutions in the embodiments of this application more clearly, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of this application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a schematic flowchart of an ID photo detection method provided by an embodiment of the present application;
FIG. 2 shows a schematic flowchart of an ID photo detection method provided by another embodiment of the present application;
FIG. 3 shows a schematic flowchart of an ID photo detection method provided by still another embodiment of the present application;
FIG. 4 shows a schematic flowchart of an ID photo detection method provided by yet another embodiment of the present application;
FIG. 5 shows a block diagram of an ID photo detection device provided by an embodiment of the present application;
FIG. 6 shows a block diagram of an electronic device used to execute the ID photo detection method according to an embodiment of the present application;
FIG. 7 shows a storage unit according to an embodiment of the present application for storing or carrying program code implementing the ID photo detection method according to an embodiment of the present application.
In order to enable those skilled in the art to better understand the solutions of this application, the technical solutions in the embodiments of this application will be described clearly and completely below in conjunction with the drawings in the embodiments of this application.
目前,用户在很多场合都会用到证件照。其中,电子设备可以生成或者获取证件照,并对证件照是否合规进行检测判断,但是,目前电子设备对证件照进行合规检测的方法是通过传统的图像处理方式来完成,会面临识别准确率不高,误判情况严重的问题。
针对上述问题,发明人经过长期的研究发现,并提出了本申请实施例提供的证件照检测方法、装置、电子设备以及存储介质,通过已训练的人体特征点检测模型对待检测证件照进行检测,并基于检测信息确定待检测部位是否水平,以提升证件照的检测准确性和检测效率。其中,具体地证件照检测方法在后续的实施例中进行详细的说明。
请参阅图1,图1示出了本申请一个实施例提供的证件照检测方法的流程示意图。所述证件照检测方法用于通过已训练的人体特征点检测模型对待检测证件照进行检测,并基于检测信息确定待检测部位是否水平,以提升证件照的检测准确性和检测效率。在具体的实施例中,所述证件照检测方法应用于如图5所示的证件照检测装置200以及配置有证件照检测装置200的电子设备100(图6)。下面将以电子设备为例,说明本实施例的具体流程,当然,可以理解的,本实施例所应用的电子设备可以为智能手机、平板电脑、穿戴式电子设备等,在此不做限定。下面将针对图1所示的流程进行详细的阐述,所述证件照检测方法具体可以包括以下步骤:
步骤S110:获取待检测证件照。
在本实施例中,电子设备可以获取证件照作为待检测证件照。
在一些实施方式中,电子设备可以通过摄像头采集用户图像,并基于采集到的用户图像生成待检测证件照。作为一种方式,电子设备可以通过前置摄像头采集用户图像,并基于前置摄像头采集到的用户图像生成待检测证件照。作为又一种方式,电子设备可以通过后置摄像头采集用户图像,并基于后置摄像 头采集到的用户图像生成待检测证件照。作为再一种方式,电子设备可以通过转动摄像头采集用户图像,并基于转动摄像头采集到的用户图像生成待检测证件照。作为另一种方式,电子设备可以通过滑动摄像头采集用户图像,并基于滑动摄像头采集到的用户图像生成待检测证件照。当然,上述方式仅为本实施例的列举,电子设备还可以通过其他方式采集用户图像,在此不做限定。
在一些实施方式中,电子设备可以从相册获取待检测证件照。其中,电子设备从相册获取的待检测证件照可以包括:预先通过摄像头采集后存储在本地相册的证件照,预先从网络下载后存储在本地相册的证件照等,在此不做限定。
在一些实施方式中,电子设备可以从网络下载待检测证件照。其中,电子设备从网络下载待检测证件照的方式可以包括:通过无线网络从服务器下载的证件照,通过数据网络从服务器下载的证件照,通过无线网络从其他电子设备获取的证件照,通过数据网络从其他电子设备获取的证件照等,在此不做限定。
步骤S120:将所述待检测证件照输入已训练的人体特征点检测模型,并获取所述已训练的人体特征点检测模型输出的检测信息,其中,所述检测信息包括所述待检测证件照中的待检测部位的至少两个特征点的位置信息。
在本实施例中,在获取待检测证件照后,可以将待检测证件照输入已训练的人体特征点检测模型中,并获取该已训练的人体特征点检测模型输出的检测信息,其中,该已训练的人体特征点检测模型输出的检测信息可以包括:待检测证件照中的待检测部位的至少两个特征点的位置信息。其中,该已训练的人体特征点检测模型是通过机器学习获得的,具体地,首先采集训练数据集,其中,训练数据集中的一类数据的属性或特征区别于另一类数据,然后通过将采集的训练数据集按照预设的算法对神经网络进行训练建模,从而基于该训练数据集总结出规律,得到已训练的人体特征点检测模型。于本实施例中,该训练数据集例如可以是多个证件照以及每个证件照中的待检测部位的至少两个特征点的位置信息,该神经网络例如可以是openpose人体特征点模型等,在此不做限定。
在一些实施方式中,该已训练的人体特征点检测模型可以预先训练完成后存储在与电子设备通信连接的服务器。基于此,电子设备在获取到待检测证件照后,可以通过网络发送指令至存储在服务器的已训练的人体特征点检测模型,以指示该已训练的人体特征点检测模型通过网络读取电子设备获取的待检测证件照,或者电子设备可以通过网络将待检测证件照发送至存储在服务器的已训练的人体特征点检测模型,从而通过将已训练的人体特征点检测模型存储在服务器的方式,减少对电子设备的存储空间的占用,降低对电子设备正常运行的影响。其中,当已训练的人体特征点检测模型存储在服务器时,该已训练的人体特征点检测模型可以是基于以backbone网络作为模型结构的openpose人体特征点检测模型训练获得。
在一些实施方式中,该已训练的人体特征点检测模型可以预先训练完成后存储在电子设备本地。基于此,电子设备获取待检测证件照后,可以直接在本地调用该已训练的人体特征点检测模型,例如,可以直接发送指令至人体特征点检测模型,以指示该已训练的人体特征点检测模型在目标存储区域 读取该待检测证件照,或者电子设备可以直接将该待检测证件照输入存储在本地的已训练的人体特征点检测模型,从而有效避免由于网络因素的影响降低待检测证件照输入已训练的人体特征点检测模型的速度,以提升已训练的人体特征点检测模型获取待检测证件照的速度,提升用户体验。其中,当已训练的人体特征点检测模型存储在电子设备时,该已训练的人体特征点检测模型可以是基于以mobilenetv2作为模型结构的openpose人体特征点检测模型训练获得。
在一些实施方式中,待检测部位可以是预先指定的可以在待检测证件照中呈现出来的用户的身体部位。例如,该待检测部位可以是预先指定的可以在待检测证件照中呈现出来的用户的肩膀、该待检测部位可以是预先指定的可以在待检测证件照中呈现出来的用户的眼睛、该待检测部位可以是预先指定的可以在待检测证件照中呈现出来的用户的眉毛、该待检测部位可以是预先指定的可以在待检测证件照中呈现出来的用户的耳朵等,在此不做限定。
在一些实施方式中,所获取的检测信息包括待检测部位的至少两个特征点的位置信息,即可以获取待检测部位的两个特征点的位置信息、三个特征点的位置信息、四个特征点的位置信息等。例如,当待检测部位为肩膀时,则检测信息可以包括肩膀的至少两个特征点,当待检测部位为眼睛时,则检测信息可以包括眼睛的至少两个特征点,当待检测部位为眉毛时,则检测信息可以包括眉毛的至少两个特征点,当待检测部位为耳朵时,则检测信息可以包括耳朵的至少两个特征点等,在此不做限定。
步骤S130:基于所述待检测部位的至少两个特征点的位置信息,确定所述待检测部位在所述待检测证件照中是否水平。
在本实施例中,在获得已训练的人体特征点检测模型输出的待检测部位的至少两个特征点的位置信息时,可以基于该待检测部位的至少两个特征点的位置信息,确定待检测部位在待检测证件照中是否水平。
在一些实施方式中,在获得已训练的人体特征点检测模型输出的待检测部位的至少两个特征点的位置信息,可以基于待检测部位的至少两个特征点的位置信息,计算获得待检测部位的至少两个特征点的位置关系,基于待检测部位的至少两个特征点的位置关系,确定待检测部位在待检测证件照中是否水平。例如,在获得已训练的人体特征点检测模型输出的肩膀的至少两个特征点的位置信息时,可以基于肩膀的至少两个特征点的位置信息,计算获得肩膀的至少两个特征点的位置关系,基于肩膀的至少两个特征点的位置关系,确定肩膀在证件照中是否水平。
本申请一个实施例提供的证件照检测方法,获取待检测证件照,将待检测证件照输入已训练的人体特征点检测模型,并获取已训练的人体特征点检测模型输出的检测信息,其中,检测信息包括待检测证件照中的待检测部位的至少两个特征点的位置信息,基于待检测部位的至少两个特征点的位置信息,确定待检测部位在待检测证件照中是否水平,从而通过已训练的人体特征点检测模型对待检测证件照进行检测,并基于检测信息确定待检测部位是否水平,以提升证件照的检测准确性和检测效率。
请参阅图2,图2示出了本申请又一个实施例提供的证件照检测方法的流程示意图。下面将针对图2所示的流程进行详细的阐述,所述证件照检测方法具体可以包括以下步骤:
步骤S210:获取待检测证件照。
其中,步骤S210的具体描述请参阅步骤S110,在此不再赘述。
步骤S220:将所述待检测证件照输入已训练的人体特征点检测模型,并获取所述已训练的人体特征点检测模型输出的检测信息,其中,所述检测信息包括所述待检测证件照中的肩膀中心的位置信息、肩膀第一侧的位置信息以及肩膀第二侧的位置信息。
在本实施例中,待检测部位包括肩膀,待检测部位的至少两个特征点的位置信息包括肩膀中心的位置信息、肩膀第一侧的位置信息以及肩膀第二侧的位置信息,因此,在本实施例中,在将待检测证件照输入已训练的人体特征点检测模型后,可以获取该已训练的人体特征点检测模型输出的肩膀中心的位置信息、肩膀第一侧的位置信息以及肩膀第二侧的位置信息。
在一些实施方式中,当待检测部位为肩膀时,该待检测部位的至少两个特征点的位置信息还可以包括:肩膀中心的位置信息和肩膀第一侧的位置信息、肩膀中心的位置信息和肩膀第二侧的位置信息、肩膀第一侧的位置信息和肩膀第二侧的位置信息等,在此不做限定。
步骤S230:基于所述肩膀中心的位置信息、所述肩膀第一侧的位置信息以及所述肩膀第二侧的位置信息,获取所述肩膀中心、所述肩膀第一侧以及所述肩膀第二侧的位置关系。
在本实施例中,在获取肩膀中心的位置信息、肩膀第一侧的位置信息以及肩膀第二侧的位置信息后,可以基于肩膀中心的位置信息、肩膀第一侧的位置信息以及肩膀第二侧的位置信息,获取该肩膀中心、肩膀第一侧以及肩膀第二侧的位置关系。在一些实施方式中,可以基于肩膀中心的位置信息、肩膀第一侧的位置信息以及肩膀第二侧的位置信息,获取肩膀中心相对肩膀第一侧的相对偏移距离和相对偏移角度、肩膀中心相对肩膀第二侧的相对偏移距离和相对偏移角度、以及肩膀第一侧相对肩膀第二侧的相对偏移距离和相对偏移角度,并基于获取到的相对偏移距离和相对偏移角度,获取肩膀中心、肩膀第一侧以及肩膀第二侧的位置关系。
步骤S240:基于所述位置关系,确定所述肩膀在所述待检测证件照中是否水平。
在本实施例中,在获得肩膀中心、肩膀第一侧以及肩膀第二侧的位置关系后,可以基于该位置关系,确定肩膀在待检测证件照中是否水平。
在一些实施方式中,在获得肩膀中心、肩膀第一侧以及肩膀第二侧的位置关系后,可以基于该位置关系,确定肩膀中心、肩膀第一侧以及肩膀第二侧是否在一条水平线上,当确定肩膀中心、肩膀第一侧以及肩膀第二侧在一条水平线上时,可以确定肩膀在待检测证件照中水平,当确定肩膀中心、肩膀第一侧以及肩膀第二侧不在一条水平线上时,可以确定肩膀在待检测证件照中不水平。
在一些实施方式中,在获得肩膀中心、肩膀第一侧以及肩膀第二侧的位置 关系后,可以基于该位置关系,确定肩膀中心、肩膀第一侧以及肩膀第二侧的连线是否与水平线平行或者接近于平行,当确定肩膀中心、肩膀第一侧以及肩膀第二侧的连线与水平线平行或者接近于平行时,可以确定肩膀在待检测证件照中水平,当肩膀中心、肩膀第一侧以及肩膀第二侧的连线不与水平线平行或者接近于平行时,可以确定肩膀在待检测证件照中不水平。
Step S250: when it is determined that the part to be detected is not level in the ID photo to be detected, determine that the ID photo is non-compliant and output prompt information, where the prompt information is used to prompt a change of the posture of the part to be detected.
An ID photo is generally required to be captured with the user in a standard photographing posture, i.e., in a photo used as an ID photo the user's shoulders, ears, eyes, and eyebrows are generally all required to be level.
Therefore, in this embodiment, when the part to be detected is determined to be not level in the ID photo, the photo may be regarded as non-compliant, and prompt information may be output to remind the user to change the posture of that part and recapture a compliant ID photo. For example, when the shoulders are determined to be not level, the photo may be regarded as non-compliant and prompt information may be output reminding the user to adjust the posture of the shoulders and retake the photo. As another example, when the two ears are determined to be not level, the photo may be regarded as non-compliant and prompt information may be output reminding the user to adjust the posture of the ears (i.e., the head) and retake the photo.
In some embodiments, the prompt information output by the electronic device may include voice prompts, text prompts, picture prompts, flash prompts, and so on, without limitation here. The prompt information may contain only the fact that the ID photo is non-compliant, or it may additionally contain guidance on how to change the posture of the part to be detected; this is not limited here.
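Purely as an illustration of how such prompt information might be assembled, the sketch below builds a message for the non-level part; the wording, the channel flags, and the function name are assumptions rather than details from the application.

```python
def build_prompt(part_name):
    """Assemble non-compliance prompt information for the given part."""
    return {
        "compliant": False,
        "text": f"Photo not accepted: please keep your {part_name} level and retake the photo.",
        "voice": True,   # also read the text prompt aloud
        "flash": False,  # a flash cue could be used on devices without a screen
    }
```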
Step S260: when it is determined that the part to be detected is level in the ID photo to be detected, determine that the ID photo is compliant and output the ID photo.
In this embodiment, when the part to be detected is determined to be level in the ID photo, the photo may be regarded as compliant and output for the user to use. For example, when the shoulders are determined to be level, the photo may be regarded as compliant and output. As another example, when the two ears are determined to be level, the photo may be regarded as compliant and output. As yet another example, when both the shoulders and the two ears are level in the photo, the photo may be regarded as compliant and output; when either the shoulders or the ears are not level, the photo may be regarded as non-compliant and prompt information may be output.
In the ID photo detection method provided by this further embodiment, an ID photo to be detected is obtained and input into a trained human body feature point detection model; the detection information output by the model is obtained, including the position information of the shoulder center, the first shoulder side, and the second shoulder side; the positional relationship among the three is obtained from that information; whether the shoulders are level in the photo is determined from the positional relationship; when the part to be detected is not level, the photo is determined to be non-compliant and prompt information prompting a change of posture is output; and when the part is level, the photo is determined to be compliant and is output. Compared with the method shown in FIG. 1, this embodiment additionally checks whether the shoulders in the ID photo are level, improving the compliance of the resulting ID photo. In addition, this embodiment outputs prompt information when the part is not level, reminding the user to correct their posture promptly and improving the quality of the ID photo, and outputs the ID photo when the part is level, so that the user obtains a compliant ID photo.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of an ID photo detection method provided by a further embodiment of this application. The flow shown in FIG. 3 is described in detail below; the method may specifically include the following steps:
Step S310: obtain an ID photo to be detected.
Step S320: input the ID photo to be detected into a trained human body feature point detection model, and obtain the detection information output by the trained model, where the detection information includes the position information of at least two feature points of the part to be detected in the ID photo.
For the specific description of steps S310 to S320, refer to steps S110 to S120; it is not repeated here.
Step S330: based on the position information of the at least two feature points of the part to be detected, obtain the coordinate information of those feature points in the ID photo to be detected.
In this embodiment, once the position information of the at least two feature points output by the trained model is obtained, the coordinate information of those feature points in the ID photo can be obtained from that position information.
In some embodiments, an image coordinate system may be established on the ID photo to be detected, with its origin at the lower-left corner, the upper-left corner, the lower-right corner, the upper-right corner, or the center of the photo, without limitation here. After the position information of the at least two feature points is obtained, it may be mapped into the image coordinate system established on the photo, and the coordinates of the feature points in that coordinate system may be taken as their coordinate information in the ID photo, as sketched below.
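The following sketch maps feature-point positions into such an image coordinate system; it assumes, only for illustration, that the model reports positions normalized to [0, 1], and it shows three of the origin choices the description allows.

```python
def to_image_coordinates(normalized_points, width, height, origin="top_left"):
    """Map feature-point positions into an image coordinate system on the photo."""
    coords = []
    for nx, ny in normalized_points:
        x, y = nx * width, ny * height          # top-left origin, y grows downward
        if origin == "bottom_left":
            y = height - y                      # flip so y grows upward
        elif origin == "center":
            x, y = x - width / 2.0, y - height / 2.0
        coords.append((x, y))
    return coords
```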
Step S340: based on the coordinate information of the at least two feature points of the part to be detected in the ID photo, determine whether the part is level in the ID photo to be detected.
In this embodiment, after the coordinate information of the at least two feature points in the ID photo is obtained, whether the part to be detected is level in the photo can be determined from that coordinate information.
In some embodiments, after the coordinate information is obtained, it may be checked whether the vertical coordinates of the at least two feature points in the ID photo are the same. When the vertical coordinates are the same, the part to be detected is determined to be level in the photo; when they are not the same, the part is determined to be not level.
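A minimal sketch of that vertical-coordinate check follows; the small pixel tolerance is an assumption added because a strict equality test is brittle against single-pixel jitter in model output, and is not a value taken from the application.

```python
def same_vertical_coordinate(points, tolerance_px=2):
    """Check whether the feature points share (approximately) the same y coordinate."""
    ys = [y for _, y in points]
    return max(ys) - min(ys) <= tolerance_px
```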
Step S350: when it is determined that the part to be detected is not level in the ID photo to be detected, determine that the ID photo is non-compliant and output prompt information, where the prompt information is used to prompt a change of the posture of the part to be detected.
Step S360: when it is determined that the part to be detected is level in the ID photo to be detected, determine that the ID photo is compliant and output the ID photo.
For the specific description of steps S350 to S360, refer to steps S250 to S260; it is not repeated here.
In the ID photo detection method provided by this further embodiment, an ID photo to be detected is obtained and input into a trained human body feature point detection model; the detection information output by the model, including the position information of at least two feature points of the part to be detected, is obtained; the coordinate information of those feature points in the photo is obtained from the position information; whether the part is level in the photo is determined from the coordinate information; when the part is not level, the photo is determined to be non-compliant and prompt information prompting a change of posture is output; and when the part is level, the photo is determined to be compliant and is output. Compared with the method shown in FIG. 1, this embodiment further converts the position information of the feature points into coordinate information in the ID photo and uses the coordinates to check whether the part is level, improving detection precision. In addition, this embodiment outputs prompt information when the part is not level, reminding the user to correct their posture promptly and improving the quality of the ID photo, and outputs the ID photo when the part is level, so that the user obtains a compliant ID photo.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of an ID photo detection method provided by yet another embodiment of this application. The flow shown in FIG. 4 is described in detail below; the method may specifically include the following steps:
Step S410: obtain a training data set, where the training data set includes multiple ID photos and the position information of at least two feature points of the part to be detected in each ID photo.
For the trained human body feature point detection model of the preceding embodiments, this embodiment also provides a method of training that model. The training may be performed in advance on the obtained training data set; each subsequent detection of an ID photo can then use the already trained model, without the model having to be retrained every time an ID photo is detected.
In some embodiments, a training data set may be obtained that includes multiple ID photos and, for each of them, the position information of at least two feature points of the part to be detected. For example, the training data set may include multiple ID photos in which the shoulder center, the first shoulder side, and the second shoulder side are each annotated with their position information.
Step S420: based on the training data set, take each ID photo as input data and the position information of the at least two feature points of the part to be detected in that photo as output data, train the OpenPose human body feature point model, and obtain a trained human body key point detection model.
In some embodiments, after the training data set is obtained, the multiple ID photos and the position information of the at least two feature points in each photo may be input into the OpenPose human body feature point model for training, so as to obtain the trained human body feature point detection model. It can be understood that each ID photo and the position information of the feature points of the part to be detected in that photo form a one-to-one pair, and these pairs are fed into the OpenPose human body feature point model for training, thereby obtaining the trained model.
In some embodiments, after the trained human body feature point detection model is obtained, its accuracy may additionally be verified by judging whether the model's output for given input data meets preset requirements. When the output does not meet the preset requirements, the training data set may be re-collected to retrain the OpenPose human body feature point model, or further training data sets may be obtained to correct the trained model; this is not limited here.
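The sketch below is a minimal supervised keypoint-regression training loop and is not the application's actual OpenPose training pipeline: it regresses three shoulder points (center, first side, second side) from annotated ID photos with a MobileNetV2 backbone. The use of torch/torchvision, the tensor shapes, and the hyperparameters are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import mobilenet_v2


def build_model(num_keypoints=3):
    backbone = mobilenet_v2(weights=None)
    # Replace the classification head with a regression head: (x, y) per keypoint.
    backbone.classifier = nn.Linear(backbone.last_channel, num_keypoints * 2)
    return backbone


def train(photos, keypoints, epochs=10, lr=1e-3):
    """photos: float tensor [N, 3, 224, 224]; keypoints: float tensor [N, 6]."""
    model = build_model()
    loader = DataLoader(TensorDataset(photos, keypoints), batch_size=16, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for batch_photos, batch_points in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(batch_photos), batch_points)
            loss.backward()
            optimizer.step()
    return model
```

A held-out subset of the annotated photos could then be used for the verification step described above, for example by comparing the regression error against a preset requirement.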
Step S430: obtain an ID photo to be detected.
Step S440: input the ID photo to be detected into the trained human body feature point detection model, and obtain the detection information output by the trained model, where the detection information includes the position information of at least two feature points of the part to be detected in the ID photo.
Step S450: based on the position information of the at least two feature points of the part to be detected, determine whether the part is level in the ID photo to be detected.
For the specific description of steps S430 to S450, refer to steps S110 to S130; it is not repeated here.
Step S460: when it is determined that the part to be detected is not level in the ID photo to be detected, determine that the ID photo is non-compliant and output prompt information, where the prompt information is used to prompt a change of the posture of the part to be detected.
Step S470: when it is determined that the part to be detected is level in the ID photo to be detected, determine that the ID photo is compliant and output the ID photo.
For the specific description of steps S460 to S470, refer to steps S250 to S260; it is not repeated here.
In the ID photo detection method provided by this further embodiment, a training data set is obtained that includes multiple ID photos and the position information of at least two feature points of the part to be detected in each photo; based on this data set, each photo is taken as input data and the corresponding position information as output data, the OpenPose human body feature point model is trained, and a trained human body key point detection model is obtained. An ID photo to be detected is then obtained and input into the trained model; the detection information output by the model, including the position information of at least two feature points of the part to be detected, is obtained; and whether the part is level in the photo is determined from that information. When the part is not level, the photo is determined to be non-compliant and prompt information prompting a change of posture is output; when the part is level, the photo is determined to be compliant and is output. Compared with the method shown in FIG. 1, this embodiment additionally trains the OpenPose human body feature point model on an obtained training data set to obtain the trained human body key point detection model, improving the accuracy of the judgment as to whether the part to be detected in the image is level. In addition, this embodiment outputs prompt information when the part is not level, reminding the user to correct their posture promptly and improving the quality of the ID photo, and outputs the ID photo when the part is level, so that the user obtains a compliant ID photo.
Referring to FIG. 5, FIG. 5 is a block diagram of an ID photo detection apparatus 200 provided by an embodiment of this application. The block diagram shown in FIG. 5 is described below. The ID photo detection apparatus 200 includes an ID photo acquisition module 210, a detection information acquisition module 220, and an ID photo detection module 230, where:
The ID photo acquisition module 210 is configured to obtain an ID photo to be detected.
The detection information acquisition module 220 is configured to input the ID photo to be detected into a trained human body feature point detection model and obtain the detection information output by the trained model, where the detection information includes the position information of at least two feature points of the part to be detected in the ID photo.
The ID photo detection module 230 is configured to determine, based on the position information of the at least two feature points of the part to be detected, whether the part is level in the ID photo to be detected.
Further, the part to be detected includes the shoulders, and the position information of the at least two feature points includes the position information of the shoulder center, the first shoulder side, and the second shoulder side. The ID photo detection module 230 includes a positional relationship acquisition sub-module and a first ID photo detection sub-module, where:
The positional relationship acquisition sub-module is configured to obtain, based on the position information of the shoulder center, the first shoulder side, and the second shoulder side, the positional relationship among the shoulder center, the first shoulder side, and the second shoulder side.
The first ID photo detection sub-module is configured to determine, based on the positional relationship, whether the shoulders are level in the ID photo to be detected.
Further, the ID photo detection module 230 includes a coordinate information acquisition sub-module and a second ID photo detection sub-module, where:
The coordinate information acquisition sub-module is configured to obtain, based on the position information of the at least two feature points of the part to be detected, the coordinate information of those feature points in the ID photo to be detected.
The second ID photo detection sub-module is configured to determine, based on the coordinate information of the at least two feature points in the ID photo, whether the part is level in the ID photo to be detected.
Further, the ID photo detection apparatus 200 also includes a prompt information output module, where:
The prompt information output module is configured to determine, when the part to be detected is determined to be not level in the ID photo, that the ID photo is non-compliant and to output prompt information, where the prompt information is used to prompt a change of the posture of the part to be detected.
Further, the ID photo detection apparatus 200 also includes a to-be-detected ID photo output module, where:
The to-be-detected ID photo output module is configured to determine, when the part to be detected is determined to be level in the ID photo, that the ID photo is compliant and to output the ID photo.
Further, the ID photo detection apparatus 200 also includes a training data set acquisition module and a model training module, where:
The training data set acquisition module is configured to obtain a training data set including multiple ID photos and the position information of at least two feature points of the part to be detected in each ID photo.
The model training module is configured to, based on the training data set, take each ID photo as input data and the position information of the at least two feature points in that photo as output data, train the OpenPose human body feature point model, and obtain a trained human body key point detection model.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the apparatus and modules described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in this application, the coupling between modules may be electrical, mechanical, or of another form.
In addition, the functional modules in the embodiments of this application may be integrated into one processing module, or each module may exist physically on its own, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Referring to FIG. 6, a structural block diagram of an electronic device 100 provided by an embodiment of this application is shown. The electronic device 100 may be a smartphone, a tablet computer, an e-book reader, or another electronic device capable of running application programs. The electronic device 100 in this application may include one or more of the following components: a processor 110, a memory 120, and one or more application programs, where the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, and the one or more programs are configured to perform the methods described in the foregoing method embodiments.
The processor 110 may include one or more processing cores. The processor 110 connects the various parts of the electronic device 100 through various interfaces and lines, and performs the functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by invoking data stored in the memory 120. Optionally, the processor 110 may be implemented in at least one hardware form among digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 110 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like, where the CPU mainly handles the operating system, user interface, and application programs; the GPU is responsible for rendering and drawing the content to be displayed; and the modem handles wireless communication. It can be understood that the modem may also be implemented as a separate communication chip rather than being integrated into the processor 110.
The memory 120 may include random access memory (RAM) or read-only memory (ROM). The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, or an image playback function), and instructions for implementing the foregoing method embodiments, and the data storage area may store data created by the electronic device 100 during use (such as a phone book, audio and video data, and chat records).
Referring to FIG. 7, a structural block diagram of a computer-readable storage medium provided by an embodiment of this application is shown. Program code is stored in the computer-readable medium 300, and the program code may be invoked by a processor to perform the methods described in the foregoing method embodiments.
The computer-readable storage medium 300 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 300 includes a non-transitory computer-readable storage medium. The computer-readable storage medium 300 has storage space for program code 310 that performs any of the method steps described above. The program code may be read from, or written to, one or more computer program products. The program code 310 may, for example, be compressed in an appropriate form.
In summary, the ID photo detection method, apparatus, electronic device, and storage medium provided by the embodiments of this application obtain an ID photo to be detected, input it into a trained human body feature point detection model, and obtain the detection information output by the model, where the detection information includes the position information of at least two feature points of the part to be detected in the photo; based on that position information, whether the part is level in the photo is determined. The ID photo is thus examined by the trained model and the levelness of the part is decided from the detection information, improving both the accuracy and the efficiency of ID photo detection.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements for some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.
Claims (20)
- An ID photo detection method, characterized in that the method comprises: obtaining an ID photo to be detected; inputting the ID photo to be detected into a trained human body feature point detection model, and obtaining detection information output by the trained human body feature point detection model, wherein the detection information comprises position information of at least two feature points of a part to be detected in the ID photo to be detected; and determining, based on the position information of the at least two feature points of the part to be detected, whether the part to be detected is level in the ID photo to be detected.
- The method according to claim 1, characterized in that determining, based on the position information of the at least two feature points of the part to be detected, whether the part to be detected is level in the ID photo to be detected comprises: calculating, based on the position information of the at least two feature points, the positional relationship of the at least two feature points; and determining, based on that positional relationship, whether the part to be detected is level in the ID photo to be detected.
- The method according to claim 1, characterized in that the part to be detected comprises shoulders, the position information of the at least two feature points comprises position information of a shoulder center, position information of a first shoulder side, and position information of a second shoulder side, and determining whether the part to be detected is level in the ID photo to be detected comprises: obtaining, based on the position information of the shoulder center, the first shoulder side, and the second shoulder side, the positional relationship among the shoulder center, the first shoulder side, and the second shoulder side; and determining, based on the positional relationship, whether the shoulders are level in the ID photo to be detected.
- The method according to claim 3, characterized in that obtaining the positional relationship among the shoulder center, the first shoulder side, and the second shoulder side comprises: obtaining, based on the position information of the shoulder center, the first shoulder side, and the second shoulder side, the relative offset distance and relative offset angle of the shoulder center with respect to the first shoulder side, and the relative offset distance and relative offset angle of the shoulder center with respect to the second shoulder side; and obtaining, based on these relative offset distances and relative offset angles, the positional relationship among the shoulder center, the first shoulder side, and the second shoulder side.
- The method according to claim 3 or 4, characterized in that determining, based on the positional relationship, whether the shoulders are level in the ID photo to be detected comprises: determining, based on the positional relationship, whether the shoulder center, the first shoulder side, and the second shoulder side lie on one horizontal line; when they lie on one horizontal line, determining that the shoulders are level in the ID photo to be detected; and when they do not lie on one horizontal line, determining that the shoulders are not level in the ID photo to be detected.
- The method according to claim 1, characterized in that determining, based on the position information of the at least two feature points of the part to be detected, whether the part to be detected is level in the ID photo to be detected comprises: obtaining, based on the position information of the at least two feature points, the coordinate information of the at least two feature points in the ID photo to be detected; and determining, based on that coordinate information, whether the part to be detected is level in the ID photo to be detected.
- The method according to claim 6, characterized in that an image coordinate system is established on the ID photo to be detected, and obtaining the coordinate information of the at least two feature points in the ID photo to be detected comprises: mapping, based on the position information, the at least two feature points into the image coordinate system; and taking the coordinates of the at least two feature points in the image coordinate system as their coordinate information in the ID photo to be detected.
- The method according to claim 6 or 7, characterized in that determining, based on the coordinate information of the at least two feature points in the ID photo to be detected, whether the part to be detected is level comprises: detecting, based on that coordinate information, whether the vertical coordinates of the at least two feature points in the ID photo to be detected are the same; when the vertical coordinates are the same, determining that the part to be detected is level in the ID photo to be detected; and when the vertical coordinates are not the same, determining that the part to be detected is not level in the ID photo to be detected.
- The method according to claim 1, characterized in that the trained human body feature point detection model is stored in an electronic device, and the trained human body feature point detection model is obtained by training an OpenPose human body feature point model that uses MobileNetV2 as its model structure.
- The method according to any one of claims 1 to 9, characterized in that, after determining whether the part to be detected is level in the ID photo to be detected, the method further comprises: when it is determined that the part to be detected is not level in the ID photo to be detected, determining that the ID photo to be detected is non-compliant and outputting prompt information, wherein the prompt information is used to prompt a change of the posture of the part to be detected.
- The method according to claim 10, characterized in that the prompt information comprises at least one of voice prompt information, text prompt information, picture prompt information, and flash prompt information.
- The method according to claim 10 or 11, characterized in that the prompt information comprises information that the ID photo to be detected is non-compliant and information guiding a change of the posture of the part to be detected.
- The method according to any one of claims 1 to 12, characterized in that, after determining whether the part to be detected is level in the ID photo to be detected, the method further comprises: when it is determined that the part to be detected is level in the ID photo to be detected, determining that the ID photo to be detected is compliant and outputting the ID photo to be detected.
- The method according to any one of claims 1 to 12, characterized in that, before obtaining the ID photo to be detected, the method further comprises: obtaining a training data set comprising multiple ID photos and the position information of at least two feature points of the part to be detected in each ID photo; and, based on the training data set, taking each ID photo as input data and the position information of the at least two feature points of the part to be detected in that photo as output data, training the OpenPose human body feature point model to obtain the trained human body key point detection model.
- The method according to any one of claims 1 to 14, characterized in that the trained human body key point detection model is stored on a server connected to the electronic device, and inputting the ID photo to be detected into the trained human body feature point detection model comprises: sending an instruction over a network to the trained human body feature point detection model stored on the server, instructing the trained model to read the ID photo to be detected over the network; or sending the ID photo to be detected over a network to the trained human body feature point detection model stored on the server.
- The method according to any one of claims 1 to 14, characterized in that the trained human body key point detection model is stored locally on the electronic device, and inputting the ID photo to be detected into the trained human body feature point detection model comprises: invoking the trained human body feature point detection model stored locally on the electronic device, instructing the trained model to read the ID photo to be detected; or inputting the ID photo to be detected into the trained human body feature point detection model stored locally on the electronic device.
- The method according to any one of claims 1 to 16, characterized in that the ID photo to be detected is captured by a camera of the electronic device, obtained from an album of the electronic device, or obtained by downloading from a network.
- An ID photo detection apparatus, characterized in that the apparatus comprises: an ID photo acquisition module configured to obtain an ID photo to be detected; a detection information acquisition module configured to input the ID photo to be detected into a trained human body feature point detection model and obtain detection information output by the trained model, wherein the detection information comprises position information of at least two feature points of a part to be detected in the ID photo to be detected; and an ID photo detection module configured to determine, based on the position information of the at least two feature points of the part to be detected, whether the part to be detected is level in the ID photo to be detected.
- An electronic device, characterized by comprising a memory and a processor, the memory being coupled to the processor and storing instructions which, when executed by the processor, cause the processor to perform the method according to any one of claims 1 to 17.
- A computer-readable storage medium, characterized in that program code is stored in the computer-readable storage medium, and the program code can be invoked by a processor to perform the method according to any one of claims 1 to 17.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010183509.XA CN111401242B (zh) | 2020-03-16 | 2020-03-16 | 证件照检测方法、装置、电子设备以及存储介质 |
CN202010183509.X | 2020-03-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021184966A1 true WO2021184966A1 (zh) | 2021-09-23 |
Family
ID=71431046
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/073878 WO2021184966A1 (zh) | 2020-03-16 | 2021-01-27 | 证件照检测方法、装置、电子设备以及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111401242B (zh) |
WO (1) | WO2021184966A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114494673A (zh) * | 2022-01-21 | 2022-05-13 | 南昌大学 | 一种基于数字图像处理和深度学习的标准证件照采集方法 |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111401242B (zh) * | 2020-03-16 | 2023-07-25 | Oppo广东移动通信有限公司 | 证件照检测方法、装置、电子设备以及存储介质 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101216881A (zh) * | 2007-12-28 | 2008-07-09 | 北京中星微电子有限公司 | 一种图像自动获取方法和装置 |
CN101216884A (zh) * | 2007-12-29 | 2008-07-09 | 北京中星微电子有限公司 | 一种人脸认证的方法及系统 |
JP2010140100A (ja) * | 2008-12-09 | 2010-06-24 | Yurakusha:Kk | 顔パターン分析システム |
CN103914676A (zh) * | 2012-12-30 | 2014-07-09 | 杭州朗和科技有限公司 | 一种在人脸识别中使用的方法和装置 |
CN106887091A (zh) * | 2017-04-27 | 2017-06-23 | 石家庄瀚申网络科技有限公司 | 一种出国、出境证件自助办理的方法及其系统和装置 |
CN109740513A (zh) * | 2018-12-29 | 2019-05-10 | 青岛小鸟看看科技有限公司 | 一种动作行为分析方法和装置 |
CN111401242A (zh) * | 2020-03-16 | 2020-07-10 | Oppo广东移动通信有限公司 | 证件照检测方法、装置、电子设备以及存储介质 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8811686B2 (en) * | 2011-08-19 | 2014-08-19 | Adobe Systems Incorporated | Methods and apparatus for automated portrait retouching using facial feature localization |
CN105046246B (zh) * | 2015-08-31 | 2018-10-09 | 广州市幸福网络技术有限公司 | 可进行人像姿势拍摄提示的证照相机及人像姿势检测方法 |
CN110852150B (zh) * | 2019-09-25 | 2022-12-20 | 珠海格力电器股份有限公司 | 一种人脸验证方法、系统、设备及计算机可读存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN111401242B (zh) | 2023-07-25 |
CN111401242A (zh) | 2020-07-10 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21770659; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 21770659; Country of ref document: EP; Kind code of ref document: A1