CN112733575B - Image processing method and apparatus, electronic device, and storage medium
- Publication number: CN112733575B (application number CN201910974869.9A)
- Authority: CN (China)
- Prior art keywords: target, score, scores, target object, image
- Legal status: Active
Classifications
- G06V40/171: Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
- G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06V10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
- G06V40/172: Human faces; classification, e.g. identification
Abstract
Embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a storage medium. The method includes: performing image acquisition on a target object to obtain a plurality of continuous first frame images; acquiring a plurality of second frame images containing the target object when it is determined, based on the plurality of continuous first frame images, that the image acquisition state is in a stable state; respectively obtaining scores of a target area in the five sense organs of the target object in each second frame image, the scores being used for indicating the aesthetic degree of the target area; when score changes of the obtained scores reach a change threshold, adjusting each score respectively so that the score changes of the scores satisfy a change condition; determining a target score corresponding to the target area based on the adjusted scores; and presenting, based on the target score, an evaluation result corresponding to the target object in a graphical interface containing the target object.
Description
Technical Field
Embodiments of the present disclosure relate to computer technologies, and in particular to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of communication technologies and terminal devices, terminal devices such as mobile phones and tablet computers have become an indispensable part of people's work and life, and as terminal devices grow increasingly popular, interactive applications based on them have become a main channel for communication and entertainment. In the related art, an interactive application can recognize a user's face and score the aesthetic degree of the user's five sense organs; during scoring, however, there are problems of face loss and unstable scoring.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including:
performing image acquisition on a target object to obtain a plurality of continuous first frame images;
acquiring a plurality of second frame images containing the target object when it is determined, based on the plurality of continuous first frame images, that the image acquisition state is in a stable state;
respectively obtaining scores of a target area in the five sense organs of the target object in each second frame image, the scores being used for indicating the aesthetic degree of the target area;
when score changes of the obtained scores reach a change threshold, adjusting each score respectively so that the score changes of the scores satisfy a change condition;
determining a target score corresponding to the target area based on the adjusted scores;
and presenting, based on the target score, an evaluation result corresponding to the target object in a graphical interface containing the target object.
In the above solution, the determining, based on the plurality of continuous first frame images, that the image acquisition state is in a stable state includes:
performing face recognition on each first frame image respectively to obtain an object in each first frame image;
and determining that the image acquisition state is in a stable state when it is determined, based on the object in each first frame image, that each first frame image contains the target object.
In the above solution, the obtaining scores of the target area in the five sense organs of the target object in each second frame image includes:
performing image segmentation on each second frame image respectively to obtain an image corresponding to the target area in each second frame image;
and inputting the image corresponding to the target area in each second frame image into a neural network model to obtain the score of the target area in the five sense organs of the target object in each second frame image.
In the above solution, the obtaining scores of the target area in the five sense organs of the target object in each second frame image includes:
performing similarity matching between each second frame image and a preset image to respectively obtain a similarity score between the target area in the five sense organs of the target object and the corresponding area in the preset image.
In the above solution, the adjusting each score respectively includes:
obtaining a mapping relationship between scores and adjusted scores;
and mapping each score based on the mapping relationship to determine the adjusted score corresponding to each score.
In the above solution, the determining a target score corresponding to the target area based on the adjusted scores includes:
determining an average value of the adjusted scores to obtain the target score corresponding to the target area.
In the above solution, the presenting, based on the target score, an evaluation result corresponding to the target object in a graphical interface containing the target object includes:
when the target score reaches a score threshold, presenting, in the graphical interface containing the target object, a first special effect corresponding to the target area, the aesthetic degree of the target area indicated by the first special effect being adapted to the target score.
In the above solution, the presenting, based on the target score, an evaluation result corresponding to the target object in a graphical interface containing the target object includes:
respectively obtaining target scores of the areas other than the target area in the five sense organs of the target object;
determining the sum of the target score of the target area and the target scores of the areas other than the target area;
and when the determined sum of the target scores reaches a score threshold, presenting, in the graphical interface containing the target object, a second special effect corresponding to the five sense organs of the target object as a whole, the aesthetic degree of the five sense organs as a whole indicated by the second special effect being adapted to the target scores.
In the above solution, the presenting, based on the target score, an evaluation result corresponding to the target object in a graphical interface containing the target object includes:
obtaining a target score of at least one area other than the target area in the five sense organs of the target object;
and presenting, based on the target score of the target area and the target score of the at least one area, a third special effect corresponding to each area whose target score reaches a score threshold in the graphical interface containing the target object, the third special effect being used for indicating that the aesthetic degree of the area whose target score reaches the score threshold is adapted to the target score.
In the above solution, the presenting, based on the target score, an evaluation result corresponding to the target object in a graphical interface containing the target object includes:
obtaining target scores of the areas other than the target area in the five sense organs of the target object;
comparing the target score of the target area with the target scores of the other areas to determine the area with the highest target score;
and presenting, in the graphical interface containing the target object, a fourth special effect corresponding to the area with the highest target score, the aesthetic degree of the area with the highest target score indicated by the fourth special effect being adapted to its target score.
In a second aspect, an embodiment of the present disclosure provides an image processing apparatus, including:
a first acquisition unit, configured to perform image acquisition on a target object to obtain a plurality of continuous first frame images;
a second acquisition unit, configured to acquire a plurality of second frame images containing the target object when it is determined, based on the plurality of continuous first frame images, that the image acquisition state is in a stable state;
a scoring unit, configured to respectively obtain scores of a target area in the five sense organs of the target object in each second frame image, the scores being used for indicating the aesthetic degree of the target area;
an adjustment unit, configured to respectively adjust each score when score changes of the obtained scores reach a change threshold, so that the score changes of the scores satisfy a change condition;
a determining unit, configured to determine a target score corresponding to the target area based on the adjusted scores;
and a presentation unit, configured to present, based on the target score, an evaluation result corresponding to the target object in a graphical interface containing the target object.
In the above solution, the second acquisition unit is further configured to perform face recognition on each first frame image respectively to obtain an object in each first frame image;
and determine that the image acquisition state is in a stable state when it is determined, based on the object in each first frame image, that each first frame image contains the target object.
In the above solution, the scoring unit is further configured to perform image segmentation on each second frame image respectively to obtain an image corresponding to the target area in each second frame image;
and input the image corresponding to the target area in each second frame image into a neural network model to obtain the score of the target area in the five sense organs of the target object in each second frame image.
In the above solution, the scoring unit is further configured to perform similarity matching between each second frame image and a preset image to respectively obtain a similarity score between the target area in the five sense organs of the target object in each second frame image and the corresponding area in the preset image.
In the above solution, the adjustment unit is further configured to obtain a mapping relationship between scores and adjusted scores;
and map each score based on the mapping relationship to determine the adjusted score corresponding to each score.
In the above solution, the determining unit is further configured to determine an average value of the adjusted scores to obtain the target score corresponding to the target area.
In the above solution, the presentation unit is further configured to present, in the graphical interface containing the target object, a first special effect corresponding to the target area when the target score reaches a score threshold, the aesthetic degree of the target area indicated by the first special effect being adapted to the target score.
In the above solution, the presentation unit is further configured to respectively obtain target scores of the areas other than the target area in the five sense organs of the target object;
determine the sum of the target score of the target area and the target scores of the areas other than the target area;
and when the determined sum of the target scores reaches a score threshold, present, in the graphical interface containing the target object, a second special effect corresponding to the five sense organs of the target object as a whole, the aesthetic degree of the five sense organs as a whole indicated by the second special effect being adapted to the target scores.
In the above solution, the presentation unit is further configured to obtain a target score of at least one area other than the target area in the five sense organs of the target object;
and present, based on the target score of the target area and the target score of the at least one area, a third special effect corresponding to each area whose target score reaches a score threshold in the graphical interface containing the target object, the third special effect being used for indicating that the aesthetic degree of the area whose target score reaches the score threshold is adapted to the target score.
In the above solution, the presentation unit is further configured to obtain target scores of the areas other than the target area in the five sense organs of the target object;
compare the target score of the target area with the target scores of the other areas to determine the area with the highest target score;
and present, in the graphical interface containing the target object, a fourth special effect corresponding to the area with the highest target score, the aesthetic degree of the area with the highest target score indicated by the fourth special effect being adapted to its target score.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
a memory, configured to store executable instructions;
and a processor, configured to implement the image processing method provided by the embodiments of the present disclosure when executing the executable instructions.
In a fourth aspect, an embodiment of the present disclosure provides a storage medium storing executable instructions which, when executed, implement the image processing method provided by the embodiments of the present disclosure.
Embodiments of the present disclosure have the following beneficial effects:
1) A plurality of second frame images containing the target object are acquired only when it is determined, based on the plurality of continuous first frame images, that the image acquisition state is in a stable state, and the scores of the target area in the five sense organs of the target object in each second frame image are then obtained; because image acquisition is determined to be in a stable state, the problem of face loss during scoring is solved.
2) When the score changes of the obtained scores reach the change threshold, each score is adjusted respectively so that the score changes satisfy the change condition; adjusting the scores when they fluctuate widely solves the problem of unstable scoring and improves the stability of the output result.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic architecture diagram of an image processing system provided by an embodiment of the present disclosure;
Fig. 2 is a schematic structural diagram of an electronic device 20 provided by an embodiment of the present disclosure;
Fig. 3 is a flowchart of an image processing method provided by an embodiment of the present disclosure;
Fig. 4 is an interface schematic diagram of face feature point detection provided by an embodiment of the present disclosure;
Fig. 5 is an interface schematic diagram of a first special effect presentation provided by an embodiment of the present disclosure;
Fig. 6 is an interface schematic diagram of a second special effect presentation provided by an embodiment of the present disclosure;
Fig. 7 is an interface schematic diagram of a third special effect presentation provided by an embodiment of the present disclosure;
Fig. 8 is an interface schematic diagram of a fourth special effect presentation provided by an embodiment of the present disclosure;
Fig. 9 is a flowchart of an image processing method provided by an embodiment of the present disclosure;
Fig. 10 is a flowchart of an image processing method provided by an embodiment of the present disclosure;
Fig. 11 is a schematic diagram of the composition structure of an image processing apparatus provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" or "an" and to "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Referring next to fig. 1, fig. 1 is a schematic architecture diagram of an image processing system provided by an embodiment of the present disclosure. To support an exemplary application, a terminal 400 (including a terminal 400-1 with a graphical interface 410-1 and a terminal 400-2 with a graphical interface 410-2) is connected to a server 200 through a network 300; the network 300 may be a wide area network, a local area network, or a combination of the two, and uses wireless links to implement data transmission.
The terminal 400 (e.g., the terminal 400-1) is configured to perform image acquisition on a target object to obtain a plurality of continuous first frame images; and, when it is determined, based on the plurality of continuous first frame images, that the image acquisition state is in a stable state, acquire a plurality of second frame images containing the target object and transmit them to the server 200.
The server 200 is configured to score the target area in the five sense organs of the target object in each second frame image, and send the scores of the target area in the five sense organs of the target object in each second frame image to the terminal 400; a score is used for indicating the aesthetic degree of the target area.
The terminal 400 (e.g., the terminal 400-1) is further configured to, when determining that the score changes of the obtained scores reach the change threshold, adjust each score so that the score changes of the scores satisfy the change condition; determine a target score corresponding to the target area based on the adjusted scores; and present, based on the target score, the evaluation result corresponding to the target object in a graphical interface containing the target object.
In some embodiments, a client is provided on the terminal 400 (e.g., the terminal 400-1), and the terminal presents the evaluation result corresponding to the target object through this client. The client performs image acquisition on the target object to obtain a plurality of continuous first frame images; acquires a plurality of second frame images containing the target object when it is determined, based on the plurality of continuous first frame images, that the image acquisition state is in a stable state; respectively obtains scores of the target area in the five sense organs of the target object in each second frame image, the scores being used for indicating the aesthetic degree of the target area; when the score changes of the obtained scores reach the change threshold, respectively adjusts each score so that the score changes satisfy the change condition; determines a target score corresponding to the target area based on the adjusted scores; and presents, based on the target score, the evaluation result corresponding to the target object in a graphical interface containing the target object.
Referring now to fig. 2, fig. 2 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present disclosure. The electronic device may be any of various terminals, including mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs) and in-vehicle terminals (e.g., in-vehicle navigation terminals), and fixed terminals such as digital televisions (TVs) and desktop computers. The electronic device shown in fig. 2 is merely an example and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in fig. 2, the electronic device 20 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 210 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 220 or a program loaded from a storage means 280 into a random access memory (RAM) 230. The RAM 230 also stores various programs and data required for the operation of the electronic device 20. The processing device 210, the ROM 220 and the RAM 230 are connected to each other by a bus 240. An input/output (I/O) interface 250 is also connected to the bus 240.
In general, the following devices may be connected to the I/O interface 250: input devices 260 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 270 including, for example, a liquid crystal display (LCD), speakers, vibrators, etc.; storage devices 280 including, for example, magnetic tape, hard disk, etc.; and a communication device 290. The communication device 290 may allow the electronic device 20 to communicate wirelessly or by wire with other devices to exchange data. While fig. 2 shows the electronic device 20 with various means, it should be understood that not all of the illustrated means are required to be implemented or provided; more or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described by the provided flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 290, or from the storage device 280, or from the ROM 220. When the computer program is executed by the processing apparatus 210, the functions in the image processing method of the embodiment of the present disclosure are performed.
It should be noted that the computer readable medium described above in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the disclosed embodiments, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus or device. A computer readable signal medium, by contrast, may comprise a data signal propagated in baseband or as part of a carrier wave, with computer readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic or optical signals, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate or transport a program for use by or in connection with an instruction execution system, apparatus or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including electrical wiring, optical fiber cable, radio frequency (RF), or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device 20; or may exist alone without being assembled into the electronic device 20.
The computer readable medium carries one or more programs which, when executed by the electronic device 20, cause the electronic device to perform the image processing method provided by the embodiments of the present disclosure.
Computer program code for carrying out operations in embodiments of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) and a wide area network (WAN), or may be connected to an external computer (e.g., through the internet using an internet service provider).
The flowcharts and block diagrams provided by the embodiments of the present disclosure illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The name of the unit does not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit for image acquisition of the target object".
Fig. 3 is a flowchart of an image processing method provided by an embodiment of the present disclosure. Referring to fig. 3, the image processing method of the embodiment of the present disclosure includes:
Step 301: The terminal performs image acquisition on a target object to obtain a plurality of continuous first frame images.
In practical applications, a client, such as an instant messaging client, a microblog client or a short video client, is provided on the terminal, and the user can click a video shooting button on the client's user interface to trigger a video shooting instruction, so that the terminal invokes an image acquisition sensor, such as a camera, to acquire images of the target object. It should be noted that the target object of the video shooting is the user being photographed, and there may be one or more such users; the number of first frame images acquired may be preset, for example, five continuous first frame images may be acquired.
Step 302: When it is determined, based on the plurality of continuous first frame images, that the image acquisition state is in a stable state, a plurality of second frame images containing the target object are acquired.
Here, the stable state refers to the case where no face loss occurs across the plurality of continuous first frame images. If, among the acquired continuous first frame images, one frame contains one face while the next contains none, the face has been lost and the image acquisition state is not stable; likewise, if one frame contains face A while the next contains face B, the face has been lost and the image acquisition state is not stable.
In some embodiments, the terminal may determine that the image acquisition state is in a stable state as follows: the terminal performs face recognition on each first frame image respectively to obtain the object in each first frame image; and determines that the image acquisition state is in a stable state when it is determined, based on the object in each first frame image, that each first frame image contains the target object.
In actual implementation, face recognition technologies such as the iOS built-in face recognition, OpenCV face recognition, Face++, SenseTime and Tencent YouTu face recognition may be adopted. The face in each first frame image can be identified through such a face recognition technology, and the object in each first frame image determined accordingly.
It should be noted that, if the image acquisition state is not stable, the terminal may continue to perform image acquisition on the target object until the state becomes stable. For example, if the image acquisition state is determined to be stable when five continuous first frame images all contain the target object, the terminal keeps acquiring images of the target object until five continuous first frame images all contain it, and then acquires the plurality of second frame images containing the target object.
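As an illustration of this stability check, a minimal sketch follows; the window size, the `detect_faces` helper and the face identifiers are assumptions for illustration, not part of the disclosure:

```python
def is_stable(frames, detect_faces, target_id, window=5):
    """Return True when the last `window` consecutive frames all contain the target face.

    `detect_faces(frame)` is assumed to return the set of face identifiers
    found in a frame, e.g. from any off-the-shelf recognizer.
    """
    if len(frames) < window:
        return False
    # The acquisition state is stable only if no recent frame loses the target face.
    return all(target_id in detect_faces(frame) for frame in frames[-window:])
```

Only once this predicate holds does the terminal go on to collect the second frame images that are actually scored.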
Step 303: Scores of the target area in the five sense organs of the target object in each second frame image are obtained respectively.
Here, a score is used to indicate the aesthetic degree of the target area; a higher score means a higher aesthetic degree of the corresponding part of the target object's five sense organs. The target area in the five sense organs of the target object may be the image area corresponding to any part of the five sense organs, such as the area corresponding to the nose or the area corresponding to the eyes. In practical applications, the terminal may determine the image area corresponding to a target part of the five sense organs as the target area and score this target area.
In some embodiments, the terminal may determine the image area corresponding to a target part of the five sense organs of the target object based on facial feature points. A feature point is a point in the image that reflects local features of the object (such as color, shape and texture features) and is generally a set of multiple pixels; taking a face image as an example, feature points may be eye feature points, mouth feature points or nose feature points.
In actual implementation, the terminal detects feature points in the second frame image, identifies the feature points belonging to the target part of the five sense organs, and forms the image area corresponding to that part from these feature points. As shown in fig. 4, which is an interface schematic diagram of face feature point detection provided by an embodiment of the present disclosure, the target part of the five sense organs is the nose, and the dashed frame is the image area determined by the feature points of the nose; this image area is the target area.
In some embodiments, the terminal may obtain the score of the target area in the five sense organs of the target object in each second frame image through a trained neural network model (such as a recurrent neural network (RNN)): performing image segmentation on each second frame image respectively to obtain the image corresponding to the target area in each second frame image; and inputting the image corresponding to the target area in each second frame image into the neural network model to obtain the score of the target area in the five sense organs of the target object in each second frame image.
In actual implementation, the terminal feeds the segmented images corresponding to the target areas into the input layer of the neural network; after passing through the hidden layers, the output layer outputs the score of the target area in the five sense organs of the target object in each second frame image. The higher the score, the higher the aesthetic degree of the target area in the five sense organs.
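The segmentation-then-scoring step might be sketched as follows; the `segment_region` helper and the `model` object with a `predict` method are illustrative assumptions, since the disclosure does not prescribe a specific framework:

```python
import numpy as np

def score_frames(second_frames, segment_region, model):
    """Score the target facial area in each second frame image.

    `segment_region(frame)` is assumed to crop the image of the target
    area (e.g. the nose) out of a frame; `model.predict` is assumed to
    map a batch of crops to per-frame aesthetic scores.
    """
    crops = np.stack([segment_region(frame) for frame in second_frames])
    return model.predict(crops)  # one score per second frame image
```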
In practical applications, the neural network model is usually trained by taking, as samples, pictures of target areas of five sense organs widely considered the most attractive. Regarding the training of the neural network model, the terminal may train it as follows:
initializing the input layer, intermediate layers and output layer of the neural network model; constructing a training sample set containing images corresponding to target areas of scored five sense organs, each image annotated with its score; and, taking the image corresponding to the target area in the five sense organs as input and the score of the corresponding image as output, updating the model parameters of the neural network model according to its loss function.
The terminal then inputs, for example, the image corresponding to the mouth area of the target object's five sense organs into the input layer of the trained neural network model; after the hidden layers, the output layer outputs the score of the mouth area, and the higher the score, the higher the aesthetic degree of the mouth area in the five sense organs.
In some embodiments, the terminal may determine the score of the target area in the five sense organs by matching the similarity between each second frame image and a preset image: performing similarity matching between each second frame image and the preset image to respectively obtain the similarity score between the target area in the five sense organs of the target object in each second frame image and the corresponding area in the preset image.
In actual implementation, the terminal performs image segmentation on each second frame image to obtain the image corresponding to the target area in the five sense organs of the target object in each second frame image, and then matches each such image against the image of the corresponding part of a face widely recognized as beautiful, obtaining the similarity score between the target area in each second frame image and that reference area. For example, matching the image corresponding to the eye area of the target object's five sense organs against an eye image recognized as most beautiful may yield a similarity score of 85%. The higher the similarity score, the higher the aesthetic degree of the corresponding part of the five sense organs.
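One plausible way to realize such a similarity score is normalized cross-correlation between the segmented area and the reference image; the metric choice here is an assumption for illustration, as the disclosure does not fix one:

```python
import cv2

def similarity_score(region_img, reference_img):
    """Similarity between a segmented facial area and a preset reference image."""
    # Bring the crop to the reference's size so the comparison is well-defined.
    resized = cv2.resize(region_img, (reference_img.shape[1], reference_img.shape[0]))
    # Normalized cross-correlation: values near 1 mean the areas look alike.
    result = cv2.matchTemplate(resized, reference_img, cv2.TM_CCOEFF_NORMED)
    return float(result.max())
```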
Step 304: When the score changes of the obtained scores reach the change threshold, each score is adjusted respectively so that the score changes of the scores satisfy the change condition.
Here, the score change refers to the difference between the minimum and the maximum of the plurality of scores; the larger the score change, the less stable the scores, and each score needs to be adjusted to reduce the score change, that is, to stabilize the scores.
In some embodiments, the terminal may adjust the scores as follows: obtaining the mapping relationship between scores and adjusted scores; and mapping each score based on the mapping relationship to determine the adjusted score corresponding to each score.
In actual implementation, the mapping relationship between scores and adjusted scores is preset; that is, scores on a large range are mapped onto a small range. For example, percentile scores may be mapped to a ten-point scale: assuming the obtained scores are 20, 77 and 79, the adjusted scores according to the mapping are 2, 8 and 8; the score change is thus reduced from 59 to 6, and the adjusted scores are more stable.
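A minimal sketch of this mapping, together with the averaging of step 305 below, follows; rounding to the nearest whole point is an assumption, since the patent only requires some preset score-to-adjusted-score mapping:

```python
def adjust_scores(scores, scale=10):
    """Map percentile scores onto a coarser ten-point scale."""
    return [round(score / scale) for score in scores]

def target_score(adjusted_scores):
    """Average the adjusted scores to get the target score (step 305)."""
    return sum(adjusted_scores) / len(adjusted_scores)

# Reproduces the example in the text: 20, 77, 79 -> 2, 8, 8 -> target score 6.
assert adjust_scores([20, 77, 79]) == [2, 8, 8]
assert target_score([2, 8, 8]) == 6
```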
Step 305: A target score corresponding to the target area is determined based on the adjusted scores.
Here, the target score corresponding to the target area is determined based on the plurality of adjusted scores. In some embodiments, the terminal determines the average value of the adjusted scores to obtain the target score corresponding to the target area. For example, if the adjusted scores are 2, 8 and 8, the target score is 6.
Step 306: Based on the target score, the evaluation result corresponding to the target object is presented in a graphical interface containing the target object.
Here, the terminal may directly present the target score of the target object, or may present, based on the target score, a special effect corresponding to the target object in the graphical interface containing the target object, so as to improve the entertainment value of the client.
In some embodiments, the presenting, based on the target score, an evaluation result corresponding to the target object in a graphical interface containing the target object may be implemented as follows: when the target score reaches a score threshold, presenting a first special effect corresponding to the target area in the graphical interface containing the target object, the aesthetic degree of the target area indicated by the first special effect being adapted to the target score.
In actual implementation, when the target score reaches the score threshold, the aesthetic degree of the target area is very high; a special effect can therefore be triggered, so that the first special effect corresponding to the target area is presented in the graphical interface containing the target object, which boosts the user's confidence.
For example, if the score threshold is 8 and the score obtained for the nose area is 9, the target object's nose is very good-looking, and the first special effect corresponding to the nose is presented. Fig. 5 is an interface schematic diagram of the first special effect presentation provided by an embodiment of the present disclosure; as shown in fig. 5, a curve frame is presented around the nose area to highlight the target area, and the words "proud nose bridge" are presented in the overhead area.
In some embodiments, the presenting, based on the target score, an evaluation result corresponding to the target object in a graphical interface containing the target object may be implemented as follows: respectively obtaining target scores of the areas other than the target area in the five sense organs of the target object; determining the sum of the target score of the target area and the target scores of the areas other than the target area; and when the determined sum of the target scores reaches a score threshold, presenting, in the graphical interface containing the target object, a second special effect corresponding to the five sense organs of the target object as a whole, the aesthetic degree of the five sense organs as a whole indicated by the second special effect being adapted to the target scores.
Here, in addition to the target score of the target area, the target scores of the other areas in the five sense organs are also obtained; that is, target scores can be obtained for the eyebrows, eyes, nose, mouth and ears. The target score of each other area is obtained in the same manner as the target score of the target area.
In actual implementation, the sum of the target scores of all the areas in the five sense organs is obtained and used to indicate the aesthetic degree of the target object's five sense organs as a whole. When the determined sum of the target scores reaches the score threshold, the five sense organs as a whole are very good-looking, so a special effect is triggered, and the second special effect corresponding to the five sense organs as a whole is presented in the graphical interface containing the target object.
For example, suppose the target scores obtained for the areas in the five sense organs sum to 41 points and the score threshold is 40 points; the sum of the target scores reaches the score threshold, so the second special effect corresponding to the five sense organs as a whole is presented. Fig. 6 is an interface schematic diagram of the second special effect presentation provided by an embodiment of the present disclosure; as shown in fig. 6, a crown and the words "flourishing beauty" are presented in the overhead area, improving the user experience.
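A sketch of this whole-face trigger follows; the per-area scores and the dictionary form are illustrative assumptions (the text above only states that the sum is 41 against a threshold of 40):

```python
def overall_effect_triggered(area_scores, threshold=40):
    """Fire the second special effect when the summed target scores reach the threshold."""
    return sum(area_scores.values()) >= threshold

# Hypothetical per-area target scores summing to 41, as in the example above.
area_scores = {"eyebrows": 8, "eyes": 8, "nose": 8, "mouth": 9, "ears": 8}
print(overall_effect_triggered(area_scores))  # True, since 41 >= 40
```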
In some embodiments, the presenting, based on the target score, an evaluation result corresponding to the target object in a graphical interface containing the target object may be implemented as follows: obtaining a target score of at least one area other than the target area in the five sense organs of the target object; and presenting, based on the target score of the target area and the target score of the at least one area, a third special effect corresponding to each area whose target score reaches a score threshold in the graphical interface containing the target object, the third special effect being used for indicating that the aesthetic degree of the area whose target score reaches the score threshold is adapted to the target score.
Here, in addition to the target score of the target area, the target score of at least one other area in the five sense organs is obtained, and when the aesthetic degrees of multiple areas reach the score threshold, the third special effect corresponding to these areas is presented.
For example, if the target area is the nose, the target score of the mouth is also obtained, and when the scores of both the nose and mouth areas reach the score threshold, a third special effect corresponding to the nose and mouth is presented. Fig. 7 is an interface schematic diagram of the third special effect presentation provided by an embodiment of the present disclosure; referring to fig. 7, dashed frames are presented around the nose and mouth areas to highlight the objects of the special effect, and the words "proud nose bridge" and "white teeth, red lips", adapted to the target scores of the nose and mouth, are presented at the same time.
In some embodiments, the presenting, based on the target score, an evaluation result corresponding to the target object in a graphical interface containing the target object may be implemented as follows: obtaining target scores of the areas other than the target area in the five sense organs of the target object; comparing the target score of the target area with the target scores of the other areas to determine the area with the highest target score; and presenting, in the graphical interface containing the target object, a fourth special effect corresponding to the area with the highest target score, the aesthetic degree of the area with the highest target score indicated by the fourth special effect being adapted to its target score.
Here, the target scores of all the areas in the five sense organs are obtained and compared to determine the area with the highest target score, that is, the most beautiful area in the five sense organs of the target object, and the fourth special effect corresponding to that area is presented, so that the user learns the most beautiful part of his or her five sense organs.
For example, suppose the target scores obtained for the areas in the five sense organs are: eyebrows (8 points), eyes (7 points), nose (6 points), mouth (9 points) and ears (5 points); the most beautiful part of the target object's five sense organs is then the mouth, and the fourth special effect corresponding to the mouth is presented. Fig. 8 is an interface schematic diagram of the fourth special effect presentation provided by an embodiment of the present disclosure; as shown in fig. 8, a dashed frame is presented around the mouth area to highlight the most beautiful area in the five sense organs, and the words "most beautiful of the five sense organs: the mouth" are presented at the top of the head, so that the user learns the most beautiful part of the five sense organs.
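Selecting that area is a simple arg-max over the per-area target scores; the scores below are the ones from the example, while the dictionary form is an assumption for illustration:

```python
def most_beautiful_area(area_scores):
    """Return the area of the five sense organs with the highest target score
    (the area the fourth special effect is attached to)."""
    return max(area_scores, key=area_scores.get)

area_scores = {"eyebrows": 8, "eyes": 7, "nose": 6, "mouth": 9, "ears": 5}
print(most_beautiful_area(area_scores))  # "mouth", matching the example above
```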
By applying the embodiments of the present disclosure, on the one hand, the plurality of second frame images containing the target object are acquired only when it is determined, based on the plurality of continuous first frame images, that the image acquisition state is in a stable state, and the scores of the target area in the five sense organs of the target object in each second frame image are then obtained; because image acquisition is determined to be in a stable state, the problem of face loss during scoring is solved. On the other hand, when the score changes of the obtained scores reach the change threshold, each score is adjusted respectively so that the score changes satisfy the change condition; adjusting the scores when they fluctuate widely solves the problem of unstable scoring and improves the stability of the output result.
The image processing method provided by the embodiments of the present disclosure is described below taking the presentation of a special effect corresponding to the target area as an example; the method can be implemented by a client provided on the terminal. Fig. 9 is a flowchart of an image processing method provided by an embodiment of the present disclosure. Referring to fig. 9, the image processing method of the embodiment of the present disclosure includes:
Step 401: The client performs image acquisition on the target object to obtain a plurality of continuous first frame images.
Here, in practical applications, the client may be a social network client, such as a short video client or an instant messaging client, or an image processing client, such as a beauty camera client. The user triggers a shooting instruction by clicking a shooting button on the client, so that the client performs image acquisition on the target object.
Step 402: and respectively carrying out face recognition on each first frame image, and determining the object in each first frame image.
In practical implementation, the client may perform Face recognition on each first frame image by using Face recognition technologies such as iOS self-contained Face recognition, openCV Face recognition, face++, sensetime, tengzhen optimal image Face recognition, and the like.
Step 403: and acquiring a plurality of second frame images containing the target object when the first frame images are determined to contain the target object based on the object in the first frame images.
For example, if the target object is user a and the objects in each first frame image are user a, a plurality of second frame images including user a are acquired.
Step 404: and respectively carrying out image segmentation on each second frame image to obtain images corresponding to the nose area of the target object in each second frame image.
Here, when each first frame image includes the target image, it is indicated that the face is not lost, the image acquisition state is in a stable state, and a plurality of second frame images including the target object are acquired.
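A minimal sketch of this stability determination, assuming face recognition yields, for every first frame image, the set of objects recognized in it (the function name and data layout are assumptions for illustration):

```python
def acquisition_is_stable(objects_per_frame, target_object):
    # Stable only if the target object was recognized in every one of the
    # consecutive first frame images, i.e. the face was never lost.
    return all(target_object in objects for objects in objects_per_frame)

# Example: the target "user A" appears in all three first frame images.
frames = [{"user A"}, {"user A", "user B"}, {"user A"}]
print(acquisition_is_stable(frames, "user A"))  # True
```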
Step 405: and inputting the images corresponding to the nose areas of the target objects in the second frame images into a neural network model to obtain the scores of the nose areas of the target objects in the second frame images.
Here, the score is used to indicate the aesthetic degree of the nose, and the higher the score, the better the nose is.
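The disclosure does not fix an architecture for the neural network model, so the following PyTorch sketch is purely illustrative: a tiny convolutional scorer that maps a segmented nose crop to a score on a 100-point scale.

```python
import torch
import torch.nn as nn

class NoseScorer(nn.Module):
    """Hypothetical stand-in for the trained scoring model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        # Squash the output into [0, 100]; higher means a more beautiful nose.
        x = self.features(x).flatten(1)
        return torch.sigmoid(self.head(x)) * 100

scorer = NoseScorer().eval()
with torch.no_grad():
    crop = torch.rand(1, 3, 64, 64)   # one segmented nose region as a tensor
    score = scorer(crop).item()       # per-frame score on a 100-point scale
```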
Step 406: and when the score changes of the obtained scores reach the change threshold, obtaining the mapping relation between the scores and the adjusted scores.
Here, the score change means a score change from a minimum score to a maximum score among the plurality of scores.
Step 407: and mapping each score based on the mapping relation to determine an adjusted score corresponding to each score.
In practice, the percentile score may be mapped to a ten-to-ten score, e.g., 80 points, and mapped to an adjusted score of 8 points.
Step 408: and determining the average value of the adjusted scores to obtain the target score of the corresponding nose area.
Step 409: when the target score reaches the score threshold, the characters of 'the nose bridge of the person' are presented in the image interface containing the target object.
For example, as shown in fig. 5, in the image interface including the target object, the overhead area of the target object presents the word "proud nose bridge".
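Step 409 amounts to a conditional text overlay. Below is a minimal sketch with Pillow, assuming the frame is available as a PIL image; the score threshold of 8 points and the label position are illustrative assumptions.

```python
from PIL import Image, ImageDraw

def present_nose_label(frame, target_score, score_threshold=8.0):
    """Overlay the "proud nose bridge" words near the head area."""
    if target_score >= score_threshold:
        draw = ImageDraw.Draw(frame)
        # Position near the top of the image, roughly above the head.
        draw.text((frame.width // 2 - 60, 10), "proud nose bridge", fill="white")
    return frame

frame = Image.new("RGB", (480, 640))        # placeholder camera frame
present_nose_label(frame, target_score=8.3)  # label is drawn
```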
The image processing method provided by the embodiments of the present disclosure is described below, taking the presentation of an overall special effect corresponding to the five sense organs as an example; the method can be implemented by a client on a terminal cooperating with a server. Fig. 10 is a flowchart of an image processing method according to an embodiment of the disclosure. Referring to Fig. 10, the image processing method of an embodiment of the present disclosure includes:
step 501: and the client acquires images of the target object to obtain a plurality of continuous first frame images.
Here, in practical application, the client may be a social network client, such as a short video client, an instant messaging client, or an image processing client, such as a beauty camera client. And the target user triggers a shooting instruction by clicking a shooting button on the client so that the client can acquire images of the target object.
Step 502: and the client side respectively carries out face recognition on each first frame image and determines the object in each first frame image.
In practical implementation, the client may perform Face recognition on each first frame image by using Face recognition technologies such as iOS self-contained Face recognition, openCV Face recognition, face++, sensetime, tengzhen optimal image Face recognition, and the like.
Step 503: and the client acquires a plurality of second frame images containing the target object when determining that each first frame image contains the target object based on the object in each first frame image.
Here, when each first frame image includes the target image, it is indicated that the face is not lost, the image acquisition state is in a stable state, and a plurality of second frame images including the target object are acquired.
Step 504: the client transmits the plurality of second frame images to the server.
Step 505: and the server performs image segmentation on the second frame images aiming at each second frame image to respectively obtain images corresponding to each region of the five sense organs of the target object in the second frame images.
Here, each partial region of the five sense organs includes: eyebrow area, eye area, nose area, mouth area, and ear area.
Step 506: the server performs the following operations for each second frame image: and matching the similarity between each region of the five sense organs of the target object in the second frame image and the image corresponding to each region of the five sense organs recognized to be the most beautiful, and obtaining the score of each region of the five sense organs of the target object in each second frame image.
Here, the five sense organs of the target object each have a plurality of scores.
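Since the disclosure does not specify the similarity measure, the sketch below uses colour-histogram correlation from OpenCV as a purely illustrative stand-in for matching a facial region against its "recognized most beautiful" reference image.

```python
import cv2

def similarity_score(region_bgr, reference_bgr):
    """Score one facial region against its reference on a 100-point scale."""
    h1 = cv2.calcHist([region_bgr], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    h2 = cv2.calcHist([reference_bgr], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    cv2.normalize(h1, h1)
    cv2.normalize(h2, h2)
    corr = cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)  # correlation in [-1, 1]
    return (corr + 1.0) * 50.0                          # rescale to [0, 100]
```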
Step 507: and the server sends the scores of the areas of the five sense organs of the target object in each second frame image to the client.
Step 508: the client side respectively executes the following operations for each region of the five sense organs of the target object: and when the score changes of the scores of the regions of the five sense organs reach the change threshold value, respectively adjusting the scores corresponding to the regions.
Here, the score change refers to a score change from a minimum value to a maximum value among the plurality of scores, and the score change of the plurality of scores satisfies a change condition by adjusting each score corresponding to the region. In actual implementation, the scores corresponding to the regions may be adjusted based on the mapping relationship.
Step 509: and the client acquires the average value of the adjusted scores corresponding to the region to obtain the target score of the region.
Step 510: the client obtains the sum of the target scores of the regions of the five sense organs based on the target scores of the regions of the five sense organs.
Here, the sum of the target scores of the respective areas in the five sense organs is used to indicate the aesthetic degree of the entire five sense organs of the target object.
Step 511: when the client determines that the sum of the target scores reaches a preset threshold, the characters of 'flourishing beauty' are presented in the image interface containing the target object.
For example, when the sum of the target scores reaches a preset threshold, it is indicated that the overall five sense organs of the target object are very good, and as shown in fig. 6, in the image interface including the target object, the top-of-head area of the target object presents the words of "flourishing beauty".
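Steps 510 and 511 reduce to summing the per-region target scores and comparing against the preset threshold; the scores and the threshold below are illustrative ten-point-scale values, not values from the disclosure.

```python
# Hypothetical per-region target scores on a 10-point scale.
region_scores = {"eyebrows": 8.2, "eyes": 7.5, "nose": 6.9, "mouth": 9.1, "ears": 7.0}

total = sum(region_scores.values())  # aesthetic degree of the five sense organs as a whole
if total >= 38.0:                    # assumed preset threshold
    print("flourishing beauty")      # second special effect shown over the head area
```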
The description now continues with the software implementation of the image processing apparatus provided by the embodiments of the present disclosure. Fig. 11 is a schematic diagram of the composition structure of an image processing apparatus according to an embodiment of the present disclosure. Referring to Fig. 11, the image processing apparatus 60 of an embodiment of the present disclosure includes:
a first acquisition unit 61, configured to acquire images of a target object, so as to obtain a plurality of continuous first frame images;
A second acquisition unit 62, configured to acquire a plurality of second frame images including the target object when determining that the image acquisition state is in a stable state based on the plurality of continuous first frame images;
a scoring unit 63, configured to obtain scores of target areas in the five sense organs of the target object in each of the second frame images, where the scores are used to indicate aesthetic degrees of the target areas;
an adjusting unit 64, configured to adjust each score when determining that the obtained score changes of the scores reach a change threshold, so that the score changes of the scores satisfy a change condition;
A determining unit 65, configured to determine a target score corresponding to the target region based on each of the scores after adjustment;
and a presenting unit 66, configured to present, based on the target score, an evaluation result corresponding to the target object in a graphical interface including the target object.
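For orientation only, the apparatus of Fig. 11 can be pictured as a single class with one method per unit; the sketch below is a skeleton whose method names are assumptions and whose behaviour is elided.

```python
class ImageProcessingApparatus:
    """Skeleton mirroring apparatus 60 of Fig. 11; all bodies are placeholders."""
    def acquire_first_frames(self, target_object): ...   # first acquisition unit 61
    def acquire_second_frames(self, first_frames): ...   # second acquisition unit 62
    def score_target_area(self, second_frames): ...      # scoring unit 63
    def adjust_scores(self, scores): ...                 # adjusting unit 64
    def determine_target_score(self, adjusted): ...      # determining unit 65
    def present_result(self, target_score, frame): ...   # presenting unit 66
```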
In some embodiments, the second acquisition unit 62 is further configured to perform face recognition on each of the first frame images to obtain an object in each of the first frame images;
And determining that the image acquisition state is in a stable state when the first frame images are determined to contain the target object based on the object in the first frame images.
In some embodiments, the scoring unit 63 is further configured to perform image segmentation on each of the second frame images to obtain an image corresponding to the target area in each of the second frame images;
And inputting the images corresponding to the target areas in the second frame images into a neural network model to obtain the scores of the target areas in the five sense organs of the target object in the second frame images.
In some embodiments, the scoring unit 63 is further configured to perform similarity matching between each of the second frame images and a preset image, so as to obtain a similarity score between a target region in the five sense organs of the target object and a corresponding region in the preset image in each of the second frame images.
In some embodiments, the adjusting unit 64 is further configured to obtain a mapping relationship between the score and the adjusted score;
And mapping each score based on the mapping relation to determine an adjusted score corresponding to each score.
In some embodiments, the determining unit 65 is further configured to determine, based on the adjusted scores, an average value of the adjusted scores, and obtain a target score corresponding to the target area.
In some embodiments, the presenting unit 66 is further configured to present, in a graphical interface including the target object, a first special effect corresponding to the target area when the target score reaches the score threshold, where the aesthetic degree of the target area indicated by the first special effect is adapted to the target score.
In some embodiments, the presenting unit 66 is further configured to obtain target scores of respective areas other than the target area in the five sense organs of the target object;
Determining a target score of the target area and a sum of target scores of areas except the target area;
And when the determined sum of the target scores reaches a score threshold, presenting a second special effect corresponding to the whole of the five sense organs of the target object in a graphical interface containing the target object, wherein the aesthetic degree of the whole of the five sense organs of the target object indicated by the second special effect is matched with the target score.
In some embodiments, the presenting unit 66 is further configured to obtain a target score of at least one region other than the target region in the five sense organs of the target object;
And based on the target score of the target region and the target score of the at least one region, presenting a third special effect corresponding to the region of which the target score reaches a score threshold in a graphical interface containing the target object, wherein the third special effect is used for indicating that the aesthetic degree of the region of which the target score reaches the score threshold is matched with the target score.
In some embodiments, the presenting unit 66 is further configured to obtain a target score of each region other than the target region in the five sense organs of the target object;
determining a region with the highest target score based on comparing the target scores of the target regions and the target scores of the regions;
And in the graphical interface containing the target object, a fourth special effect corresponding to the area with the highest target score is presented, and the aesthetic degree of the area with the highest target score indicated by the fourth special effect is matched with the target score.
According to one or more embodiments of the present disclosure, there is provided an image processing method including:
Image acquisition is carried out on a target object to obtain a plurality of continuous first frame images;
acquiring a plurality of second frame images containing the target object when the image acquisition state is determined to be in a stable state based on the plurality of continuous first frame images;
respectively obtaining scores of target areas in the five sense organs of the target object in each second frame image, wherein the scores are used for indicating the aesthetic degree of the target areas;
When the obtained score changes of the scores reach the change threshold value, respectively adjusting each score so that the score changes of the scores meet the change condition;
Determining a target score corresponding to the target region based on the adjusted scores;
and based on the target scores, presenting the evaluation results corresponding to the target objects in a graphical interface containing the target objects.
According to one or more embodiments of the present disclosure, there is provided the above image processing method, the determining that the image acquisition state is in a stable state based on the plurality of continuous first frame images, including:
respectively carrying out face recognition on each first frame image to obtain an object in each first frame image;
And determining that the image acquisition state is in a stable state when the first frame images are determined to contain the target object based on the object in the first frame images.
According to one or more embodiments of the present disclosure, there is provided the above image processing method, wherein the obtaining the score of the target area in the five sense organs of the target object in each of the second frame images includes:
Respectively carrying out image segmentation on each second frame image to obtain images corresponding to target areas in each second frame image;
And inputting the images corresponding to the target areas in the second frame images into a neural network model to obtain the scores of the target areas in the five sense organs of the target object in the second frame images.
According to one or more embodiments of the present disclosure, there is provided the above image processing method, wherein the obtaining the score of the target area in the five sense organs of the target object in each of the second frame images includes:
and performing similarity matching on the second frame images and a preset image to respectively obtain similarity scores of target areas in the five sense organs of the target object and corresponding areas in the preset image.
According to one or more embodiments of the present disclosure, there is provided the above image processing method, wherein the adjusting each score includes:
Obtaining the mapping relation between the score and the adjusted score;
And mapping each score based on the mapping relation to determine an adjusted score corresponding to each score.
According to one or more embodiments of the present disclosure, there is provided the above image processing method, wherein the determining a target score corresponding to the target region based on each of the scores after adjustment includes:
and determining an average value of the adjusted scores based on the scores, and obtaining a target score corresponding to the target area.
According to one or more embodiments of the present disclosure, there is provided the above image processing method, wherein the presenting, in a graphical interface including the target object, an evaluation result corresponding to the target object based on the target score includes:
And when the target score reaches a score threshold, presenting a first special effect corresponding to the target area in a graphical interface containing the target object, wherein the aesthetic degree of the target area indicated by the first special effect is matched with the target score.
According to one or more embodiments of the present disclosure, there is provided the above image processing method, wherein the presenting, in a graphical interface including the target object, an evaluation result corresponding to the target object based on the target score includes:
respectively obtaining target scores of all areas except the target area in the five sense organs of the target object;
Determining a target score of the target area and a sum of target scores of areas except the target area;
And when the determined sum of the target scores reaches a score threshold, presenting a second special effect corresponding to the whole of the five sense organs of the target object in a graphical interface containing the target object, wherein the aesthetic degree of the whole of the five sense organs of the target object indicated by the second special effect is matched with the target score.
According to one or more embodiments of the present disclosure, there is provided the above image processing method, wherein the presenting, in a graphical interface including the target object, an evaluation result corresponding to the target object based on the target score includes:
obtaining a target score of at least one region outside the target region in the five sense organs of the target object;
And based on the target score of the target region and the target score of the at least one region, presenting a third special effect corresponding to the region of which the target score reaches a score threshold in a graphical interface containing the target object, wherein the third special effect is used for indicating that the aesthetic degree of the region of which the target score reaches the score threshold is matched with the target score.
According to one or more embodiments of the present disclosure, there is provided the above image processing method, wherein the presenting, in a graphical interface including the target object, an evaluation result corresponding to the target object based on the target score includes:
Obtaining target scores of all areas except the target area in the five sense organs of the target object;
Comparing the target scores of the target areas with the target scores of the areas to determine the area with the highest target score;
And in the graphical interface containing the target object, a fourth special effect corresponding to the area with the highest target score is presented, and the aesthetic degree of the area with the highest target score indicated by the fourth special effect is matched with the target score.
According to one or more embodiments of the present disclosure, there is provided an image processing apparatus including:
The first acquisition unit is used for acquiring images of the target object to obtain a plurality of continuous first frame images;
The second acquisition unit is used for acquiring a plurality of second frame images containing the target object when the image acquisition state is determined to be in a stable state based on the plurality of continuous first frame images;
The scoring unit is used for respectively acquiring scores of target areas in the five sense organs of the target object in each second frame image, wherein the scores are used for indicating the aesthetic degree of the target areas;
The adjustment unit is used for respectively adjusting each score when the obtained score changes of the scores reach the change threshold value, so that the score changes of the scores meet the change condition;
a determining unit configured to determine a target score corresponding to the target region based on the adjusted scores;
and the presentation unit is used for presenting the evaluation result corresponding to the target object in a graphical interface containing the target object based on the target score.
According to one or more embodiments of the present disclosure, there is provided an electronic device including:
A memory for storing executable instructions;
and the processor is used for realizing the image processing method provided by the embodiment of the disclosure when executing the executable instructions.
According to one or more embodiments of the present disclosure, there is provided a storage medium storing executable instructions that, when executed, are configured to implement the image processing method provided by the embodiments of the present disclosure.
The foregoing description is merely illustrative of the embodiments of the present disclosure and the technical principles employed. Persons skilled in the art will appreciate that the scope of the disclosure is not limited to the specific combinations of the features described above, but also covers other embodiments formed by any combination of those features or their equivalents without departing from the spirit of the disclosure, for example embodiments in which the features described above are replaced with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.
Claims (18)
1. An image processing method, the method comprising:
Image acquisition is carried out on a target object to obtain a plurality of continuous first frame images;
acquiring a plurality of second frame images containing the target object when the image acquisition state is determined to be in a stable state based on the plurality of continuous first frame images;
respectively obtaining scores of target areas in the five sense organs of the target object in each second frame image, wherein the scores are used for indicating the aesthetic degree of the target areas;
When the obtained score changes of the scores reach the change threshold value, respectively adjusting each score so that the score changes of the scores meet the change condition;
Determining a target score corresponding to the target region based on the adjusted scores;
Presenting an evaluation result corresponding to the target object in a graphical interface containing the target object based on the target score;
wherein said adjusting each score separately comprises:
Obtaining the mapping relation between the score and the adjusted score;
And mapping each score based on the mapping relation, and mapping the score of a first range to the score of a second range to determine an adjusted score corresponding to each score, wherein the first range is larger than the second range.
2. The method of claim 1, wherein the determining that the image acquisition state is in a steady state based on the plurality of consecutive first frame images comprises:
respectively carrying out face recognition on each first frame image to obtain an object in each first frame image;
And determining that the image acquisition state is in a stable state when the first frame images are determined to contain the target object based on the object in the first frame images.
3. The method according to claim 1, wherein the obtaining the score of the target area in the five sense organs of the target object in each of the second frame images includes:
Respectively carrying out image segmentation on each second frame image to obtain images corresponding to target areas in each second frame image;
And inputting the images corresponding to the target areas in the second frame images into a neural network model to obtain the scores of the target areas in the five sense organs of the target object in the second frame images.
4. The method according to claim 1, wherein the obtaining the score of the target area in the five sense organs of the target object in each of the second frame images includes:
and performing similarity matching on the second frame images and a preset image to respectively obtain similarity scores of target areas in the five sense organs of the target object and corresponding areas in the preset image.
5. The method of claim 1, wherein determining a target score corresponding to the target region based on each of the adjusted scores comprises:
and determining an average value of the adjusted scores based on the scores, and obtaining a target score corresponding to the target area.
6. The method of claim 1, wherein presenting, based on the target score, an evaluation result corresponding to the target object in a graphical interface containing the target object, comprises:
And when the target score reaches a score threshold, presenting a first special effect corresponding to the target area in a graphical interface containing the target object, wherein the aesthetic degree of the target area indicated by the first special effect is matched with the target score.
7. The method of claim 1, wherein presenting, based on the target score, an evaluation result corresponding to the target object in a graphical interface containing the target object, comprises:
respectively obtaining target scores of all areas except the target area in the five sense organs of the target object;
Determining a target score of the target area and a sum of target scores of areas except the target area;
And when the determined sum of the target scores reaches a score threshold, presenting a second special effect corresponding to the whole of the five sense organs of the target object in a graphical interface containing the target object, wherein the aesthetic degree of the whole of the five sense organs of the target object indicated by the second special effect is matched with the target score.
8. The method of claim 1, wherein presenting, based on the target score, an evaluation result corresponding to the target object in a graphical interface containing the target object, comprises:
obtaining a target score of at least one region outside the target region in the five sense organs of the target object;
And based on the target score of the target region and the target score of the at least one region, presenting a third special effect corresponding to the region of which the target score reaches a score threshold in a graphical interface containing the target object, wherein the third special effect is used for indicating that the aesthetic degree of the region of which the target score reaches the score threshold is matched with the target score.
9. The method of claim 1, wherein presenting, based on the target score, an evaluation result corresponding to the target object in a graphical interface containing the target object, comprises:
Obtaining target scores of all areas except the target area in the five sense organs of the target object;
Comparing the target scores of the target areas with the target scores of the areas to determine the area with the highest target score;
And in the graphical interface containing the target object, a fourth special effect corresponding to the area with the highest target score is presented, and the aesthetic degree of the area with the highest target score indicated by the fourth special effect is matched with the target score.
10. An image processing apparatus, characterized in that the apparatus comprises:
The first acquisition unit is used for acquiring images of the target object to obtain a plurality of continuous first frame images;
The second acquisition unit is used for acquiring a plurality of second frame images containing the target object when the image acquisition state is determined to be in a stable state based on the plurality of continuous first frame images;
The scoring unit is used for respectively acquiring scores of target areas in the five sense organs of the target object in each second frame image, wherein the scores are used for indicating the aesthetic degree of the target areas;
The adjustment unit is used for respectively adjusting each score when the obtained score changes of the scores reach the change threshold value, so that the score changes of the scores meet the change condition;
a determining unit configured to determine a target score corresponding to the target region based on the adjusted scores;
a presentation unit, configured to present, based on the target score, an evaluation result corresponding to the target object in a graphical interface that includes the target object;
the adjusting unit is further used for obtaining the mapping relation between the score and the adjusted score;
And mapping each score based on the mapping relation, and mapping the score of a first range to the score of a second range to determine an adjusted score corresponding to each score, wherein the first range is larger than the second range.
11. The apparatus of claim 10, wherein,
The second acquisition unit is further used for carrying out face recognition on each first frame image respectively to obtain objects in each first frame image;
And determining that the image acquisition state is in a stable state when the first frame images are determined to contain the target object based on the object in the first frame images.
12. The apparatus of claim 10, wherein,
The scoring unit is further configured to perform image segmentation on each of the second frame images to obtain images corresponding to the target areas in each of the second frame images;
And inputting the images corresponding to the target areas in the second frame images into a neural network model to obtain the scores of the target areas in the five sense organs of the target object in the second frame images.
13. The apparatus of claim 10, wherein,
The scoring unit is further configured to perform similarity matching on each second frame image and a preset image, so as to obtain similarity scores of a target region in the five sense organs of the target object and a corresponding region in the preset image in each second frame image.
14. The apparatus of claim 10, wherein,
The determining unit is further configured to determine an average value of the adjusted scores based on the adjusted scores, and obtain a target score corresponding to the target area.
15. The apparatus of claim 10, wherein,
And the presentation unit is further used for presenting a first special effect corresponding to the target area in a graphical interface containing the target object when the target score reaches a score threshold value, and the aesthetic degree of the target area indicated by the first special effect is matched with the target score.
16. The apparatus of claim 10, wherein,
The presentation unit is further used for respectively acquiring target scores of all areas except the target area in the five sense organs of the target object;
Determining a target score of the target area and a sum of target scores of areas except the target area;
And when the determined sum of the target scores reaches a score threshold, presenting a second special effect corresponding to the whole of the five sense organs of the target object in a graphical interface containing the target object, wherein the aesthetic degree of the whole of the five sense organs of the target object indicated by the second special effect is matched with the target score.
17. An electronic device, the electronic device comprising:
A memory for storing executable instructions;
A processor for implementing the image processing method according to any one of claims 1 to 9 when executing said executable instructions.
18. A non-transitory computer readable storage medium storing executable instructions which, when executed, are operable to implement the image processing method of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910974869.9A CN112733575B (en) | 2019-10-14 | 2019-10-14 | Image processing method, device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112733575A CN112733575A (en) | 2021-04-30 |
CN112733575B true CN112733575B (en) | 2024-07-19 |
Family
ID=75588568
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910974869.9A Active CN112733575B (en) | 2019-10-14 | 2019-10-14 | Image processing method, device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112733575B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113326775B (en) * | 2021-05-31 | 2023-12-29 | Oppo广东移动通信有限公司 | Image processing method and device, terminal and readable storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107295252A (en) * | 2017-06-16 | 2017-10-24 | 广东欧珀移动通信有限公司 | Focusing area display methods, device and terminal device |
KR101832791B1 (en) * | 2017-02-17 | 2018-02-28 | (주)유플러스시스템 | Hybrid computer scoring system and method based on image for increasing reliability and accuracy |
CN110188652A (en) * | 2019-05-24 | 2019-08-30 | 北京字节跳动网络技术有限公司 | Processing method, device, terminal and the storage medium of facial image |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102622588B (en) * | 2012-03-08 | 2013-10-09 | 无锡中科奥森科技有限公司 | Dual-certification face anti-counterfeit method and device |
CN106503614B (en) * | 2016-09-14 | 2020-01-17 | 厦门黑镜科技有限公司 | Photo obtaining method and device |
CN107194817B (en) * | 2017-03-29 | 2023-06-23 | 腾讯科技(深圳)有限公司 | User social information display method and device and computer equipment |
CN109214298B (en) * | 2018-08-09 | 2021-06-08 | 盈盈(杭州)网络技术有限公司 | Asian female color value scoring model method based on deep convolutional network |
Also Published As
Publication number | Publication date |
---|---|
CN112733575A (en) | 2021-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11887231B2 (en) | Avatar animation system | |
US11158102B2 (en) | Method and apparatus for processing information | |
US11455830B2 (en) | Face recognition method and apparatus, electronic device, and storage medium | |
CN109902659B (en) | Method and apparatus for processing human body image | |
CN111476871B (en) | Method and device for generating video | |
CN109993150B (en) | Method and device for identifying age | |
CN111696176B (en) | Image processing method, image processing device, electronic equipment and computer readable medium | |
US11409794B2 (en) | Image deformation control method and device and hardware device | |
US11734804B2 (en) | Face image processing method and apparatus, electronic device, and storage medium | |
US11922721B2 (en) | Information display method, device and storage medium for superimposing material on image | |
CN112183173B (en) | Image processing method, device and storage medium | |
WO2020248900A1 (en) | Panoramic video processing method and apparatus, and storage medium | |
CN109600559B (en) | Video special effect adding method and device, terminal equipment and storage medium | |
CN112785669B (en) | Virtual image synthesis method, device, equipment and storage medium | |
CN112560540A (en) | Beautiful makeup putting-on recommendation method and device | |
CN116229311B (en) | Video processing method, device and storage medium | |
CN112733575B (en) | Image processing method, device, electronic equipment and storage medium | |
CN110059739B (en) | Image synthesis method, image synthesis device, electronic equipment and computer-readable storage medium | |
CN111507143B (en) | Expression image effect generation method and device and electronic equipment | |
CN112307323A (en) | Information pushing method and device | |
CN110619602A (en) | Image generation method and device, electronic equipment and storage medium | |
CN110264431A (en) | Video beautification method, device and electronic equipment | |
CN109816791A (en) | Method and apparatus for generating information | |
CN112053450B (en) | Text display method and device, electronic equipment and storage medium | |
WO2021121291A1 (en) | Image processing method and apparatus, electronic device and computer-readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TG01 | Patent term adjustment |