CN109376582B - Interactive face cartoonization method based on a generative adversarial network - Google Patents

Publication number: CN109376582B
Application number: CN201811024740.3A
Other versions: CN109376582A (Chinese, zh)
Inventors: 李宏亮, 梁小娟, 邓志康, 颜海强, 尹康, 袁欢
Assignee (current and original): University of Electronic Science and Technology of China
Legal status: Active

Classifications

    • G06V40/171: Human faces; local features and components; facial parts (feature extraction, face representation)
    • G06V40/172: Human faces; classification, e.g. identification
    • G06V10/44: Local feature extraction by analysis of parts of the pattern (edges, contours, connectivity analysis)
    • G06F18/24: Pattern recognition; classification techniques
    • G06N3/045: Neural networks; combinations of networks
    • G06N3/084: Learning methods; backpropagation, e.g. using gradient descent


Abstract

The invention discloses an interactive face cartoonization method based on a generative adversarial network. The method first applies interactive segmentation to the image to be processed to obtain eye-and-eyebrow, mouth-and-nose, hair, and face images; the eye-and-eyebrow, mouth-and-nose, and hair images are then fed into three trained generation models (for eyes-and-eyebrows, mouth-and-nose, and hair respectively), which output the corresponding cartoon facial-feature images. A cartoon face is obtained directly by cartoonizing the face image. Finally, the facial features are composited onto the cartoon face and the hair layer is superimposed to obtain the final cartoon image. By combining the strengths of the interactive approach and of generative adversarial networks, the invention segments out the hair, face shape, and facial features interactively, eliminating differences caused by varying backgrounds among training samples, and performs style conversion on each part with a generative adversarial network, thereby preserving fine details such as the corners of the eyes and mouth as much as possible.

Description

Interactive face cartoonization method based on a generative adversarial network
Technical Field
The invention belongs to the technical field of image processing and computer vision, and relates to an interactive face cartoonization method based on a generative adversarial network.
Background
In recent years, with the rapid development of mobile networks and smartphones, and in particular the vigorous growth of the mobile internet, digital entertainment has scaled up as an industry thanks to its particular advantages. Face cartooning technology has emerged in this context: it uses image processing and computer programs to generate cartoon results automatically, greatly reducing the workload of artists and improving cartooning efficiency.
At present, face cartoonization methods fall mainly into interactive methods and unsupervised learning methods. Interactive methods extract face contour lines through human-computer interaction and then artificially exaggerate the salient regions of the facial features to produce personalized cartoons. Unsupervised learning methods mainly use machine learning or deep learning to convert a face image into a cartoon. Among the machine-learning-based methods, one class applies deformation or color transformation to the pixels of the input face image to obtain a certain cartoon effect; the other class is sample-learning-based: it extracts features of the whole face image, learns the mapping from face images to style images, and then generates the cartoon face. Deep learning, as a branch of machine learning, builds neural networks on machine-learning ideas, captures higher-level semantic features, has stronger learning capability, and is widely applied to object detection, classification, segmentation, image generation, and other fields.
A generative adversarial network is also a kind of deep neural network; it is essentially the combination of a generative model and a discriminative model. The generative model learns from training samples the conditional probability distribution from input to output, i.e. the generation relation, while the discriminative model directly learns from data a decision or prediction function, i.e. a decision or prediction of the result. A generative adversarial network combines the two models in a competitive relationship: the generative model converts input noise or images into a style image, and the discriminative model judges whether a generated image is real or fake, scoring real samples as true and generated images as false. Training is a mutual game: the generative model tries to fool the discriminative network as much as possible, while the discriminative network tries to detect fake images as much as possible. On this principle, generative adversarial networks are widely applied to image generation and style transfer; compared with ordinary generative models they produce clearer images, but details remain insufficient, and they rely on a large number of training samples of similar style.
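The generator/discriminator game described above can be illustrated with a minimal numerical sketch (NumPy only; the scores and the binary cross-entropy form are standard GAN ingredients used for illustration, not values taken from the patent):

```python
import numpy as np

def bce(pred, target, eps=1e-12):
    """Binary cross-entropy between discriminator scores and labels."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

# Hypothetical discriminator scores on a batch: real samples should score
# near 1, generated (fake) samples near 0.
d_real = np.array([0.9, 0.8, 0.95])   # D's scores on real images
d_fake = np.array([0.1, 0.2, 0.05])   # D's scores on generated images

# Discriminator objective: label real as 1, fake as 0.
loss_d = bce(d_real, np.ones(3)) + bce(d_fake, np.zeros(3))

# Generator objective: fool D, i.e. push D's scores on fakes toward 1.
loss_g = bce(d_fake, np.ones(3))

assert loss_d < loss_g  # here D separates the batches well, so G's loss is large
```

As the generator improves, `d_fake` rises, driving `loss_g` down and `loss_d` up; the game equilibrates when the discriminator can no longer tell the two apart.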
Disclosure of Invention
The invention aims to address the problems of the prior art: general cartoonization conversion is affected by complex backgrounds, and the quality of the generated image is poor. To apply the trained model to images outside the data set, the invention performs interactive segmentation before cartoonizing the face portrait to obtain the person's hair, face contour, and facial features, converts each part to cartoon style separately to improve the handling of details, and finally composites the parts into a complete cartoon image.
The interactive face cartoonization method based on a generative adversarial network disclosed by the invention comprises the following steps:
step S1: based on one-to-one corresponding real face images and cartoon face images, train cartoon generation models for hair, eyes-and-eyebrows, and mouth-and-nose with generative adversarial networks:
carry out size normalization on the real face images and the cartoon face images; apply interactive segmentation to all the images to obtain an eye-and-eyebrow database, a mouth-and-nose database, and a hair database;
construct three generative adversarial networks, each comprising two pairs of generators and discriminators: generator G_A generates cartoon face images from real face images, and generator G_B generates real face images from cartoon images;
during training, the generated images output by the two generators are added as positive samples to each other's positive training sample sets, and training cycles until the discrimination accuracy meets the accuracy requirement; the generator G_A of each trained network is then taken as a cartoon generation model, yielding cartoon generation models for hair, eyes-and-eyebrows, and mouth-and-nose;
the final loss function of each cartoon generation model is L_G = loss_{G_A} + loss_{G_B} + λ_1 L_cyc(G_A, G_B) + λ_2 L_idt, where loss_{G_A} and loss_{G_B} denote the final loss functions of generators G_A and G_B respectively, L_cyc(G_A, G_B) denotes the cycle loss function, and L_idt denotes the reconstruction loss function;
where L_idt is specifically:
L_idt = E_{y~pdata(y)}[||decoder_B(encoder_B(y)) − y||] + E_{x~pdata(x)}[||decoder_A(encoder_A(x)) − x||],
where x denotes the input image of generator G_A, i.e. a real face image, and y denotes the input image of generator G_B, i.e. a cartoon face image;
encoder_B(·) denotes the encoder extracting cartoon-image features from the image in parentheses, and encoder_A(·) the encoder extracting real-image features;
decoder_B(·) denotes the decoding part of generator G_B, and decoder_A(·) the decoding part of generator G_A;
E[·] denotes the mean; the subscript x~pdata(x) (resp. y~pdata(y)) indicates that image x (resp. y) is drawn from the real-image distribution pdata(x) (resp. the cartoon-image distribution pdata(y));
step S2: apply interactive segmentation to the real face image to be processed to obtain eye-and-eyebrow, mouth-and-nose, hair, and face-contour images; feed the eye-and-eyebrow, mouth-and-nose, and hair images into the corresponding generation models, which output the corresponding cartoon images: cartoon eyes-and-eyebrows, cartoon mouth-and-nose, and cartoon hair;
fill the face-contour image with the average skin-color pixel value to obtain the cartoon face;
finally, composite the generated cartoon eyes-and-eyebrows and cartoon mouth-and-nose onto the cartoon face, and superimpose the cartoon hair to obtain the final cartoon image.
The interactive segmentation in the invention is specifically:
mark the hair region and the background region with lines of different colors, and mark the positions of the two eyes and the lips with points of a further color (the eye and lip markers share one color, distinct from the hair and background colors);
take the region covered by the hair-marker color as foreground and everything else as background to segment out the hair image;
take the connected region containing the two eye marker points as foreground and everything else as background to segment out the face, obtaining the face-contour image (face image);
then crop out the left and right eye-and-eyebrow images and the lip image around the eye and lip marker points.
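A minimal sketch of how such color scribbles can be turned into segmentation seeds and landmark coordinates (NumPy only; the 4x4 toy layout, the exact palette, and the `color_mask` helper are illustrative assumptions, not the patent's implementation; a real system would feed these seeds into an interactive segmenter such as graph-cut):

```python
import numpy as np

# Toy 4x4 RGB "scribble" layer: blue strokes mark hair, green marks
# background, red dots mark eyes and lips. The layout is hypothetical.
BLUE, GREEN, RED = (0, 0, 255), (0, 255, 0), (255, 0, 0)

scribble = np.zeros((4, 4, 3), dtype=np.uint8)
scribble[0, :] = BLUE    # hair strokes along the top row
scribble[3, :] = GREEN   # background strokes along the bottom row
scribble[1, 1] = RED     # left-eye dot
scribble[1, 2] = RED     # right-eye dot
scribble[2, 1] = RED     # lip dot

def color_mask(img, color):
    """Boolean mask of pixels exactly matching a scribble color."""
    return np.all(img == np.array(color, dtype=np.uint8), axis=-1)

hair_seed = color_mask(scribble, BLUE)              # foreground seeds for hair
bg_seed = color_mask(scribble, GREEN)               # background seeds
landmarks = np.argwhere(color_mask(scribble, RED))  # eye/lip point coordinates

assert hair_seed.sum() == 4 and bg_seed.sum() == 4
assert len(landmarks) == 3  # two eyes and one lip point
```

The `landmarks` coordinates are what the method later uses to crop the eye-and-eyebrow and lip patches.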
In summary, owing to the adopted technical scheme, the invention has the following beneficial effects: by exploiting the strengths of both the interactive approach and generative adversarial networks, the invention obtains the hair, face shape, and facial features through interactive segmentation, eliminating differences among training samples caused by varying backgrounds, and performs style conversion on each part with a generative adversarial network, thereby preserving fine details such as the corners of the eyes and mouth as much as possible.
Drawings
FIG. 1: flow chart of the method of the invention.
FIG. 2: structure diagram of the generative adversarial network of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following embodiments and accompanying drawings.
The invention provides an interactive face cartoonization method based on a generative adversarial network. It exploits the strengths of the interactive approach and of generative adversarial networks: the person's hair, face shape, and facial features are obtained by interactive segmentation, eliminating differences caused by varying backgrounds among training samples; each part is style-converted by a generative adversarial network, preserving fine details such as the corners of the eyes and mouth as much as possible; finally the parts are composited into the face cartoon.
First, a cartoon database is constructed, comprising: an eye-and-eyebrow database (only the left or right side need be collected; the symmetric counterpart is obtained by flipping), a mouth-and-nose database, and a hair database.
In this embodiment, to obtain better results, cartoon portraits in a single style drawn by a professional artist for 100 volunteers are used as the cartoon counterparts of the face images.
All face images and cartoon portraits are first cropped to the same size, for example 256 x 256. Then all images are segmented with a classical interactive segmentation method, labeling hair, background, and eyes at the same time, as follows:
1. Mark the hair region (blue) and the background region (green) with thick lines of different colors, and mark the eye and lip positions with red dots.
2. Take the blue region as foreground and everything else as background to segment out the hair.
3. Take the connected region of the red dots as foreground and everything else as background to segment out the face, obtaining the face contour; fill it with the average skin-color pixel value to serve as the subsequent cartoon face.
4. Crop out the left and right eye-and-eyebrow parts and the lip part by the positions of the red dots.
5. Assemble the parts obtained above into the following data sets for subsequent training: an eye-and-eyebrow database (the right eye flipped to match), a mouth-and-nose database, and a hair database.
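Step 3's skin fill reduces to averaging the pixels inside the face contour and painting the contour region with that mean; a sketch with toy values (NumPy; the pixel values and mask are hypothetical):

```python
import numpy as np

# Hypothetical skin pixels sampled inside the segmented face contour (RGB).
skin_pixels = np.array([[200, 150, 120],
                        [210, 160, 130],
                        [190, 140, 110]], dtype=float)
mean_skin = skin_pixels.mean(axis=0)  # per-channel average skin color

# Paint the face-contour region of a blank canvas with the mean color.
canvas = np.zeros((2, 2, 3))
face_mask = np.array([[True, False],
                      [True, True]])  # toy face-contour mask
canvas[face_mask] = mean_skin

assert np.allclose(mean_skin, [200.0, 150.0, 120.0])
assert np.allclose(canvas[0, 1], [0, 0, 0])  # outside the contour untouched
```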
After the databases are obtained, cartoon conversion is performed with generative adversarial networks. Three generative adversarial networks are set up for the three databases, denoted G_eye, G_mouth, and G_hair respectively. The training procedure is identical for all three; taking the eye-and-eyebrow images as an example, the specific method is as follows:
The network structure of the invention comprises two generators and two discriminators, realizing face-to-cartoon and cartoon-to-face conversion respectively; each generator adopts an encoder-decoder structure. The network architecture is shown in FIG. 2. The network can be interpreted as follows: for two image styles A and B, Encoder A extracts features of A-style images, Decoder A converts features into an A-style image, Encoder B extracts features of B-style images, and Decoder B generates a B-style image from features.
First, an eye-and-eyebrow image x is input into the generative adversarial network and passed through Encoder A -> Decoder B (i.e. G_B) to obtain the generated image fake_y; the generated image fake_y and a real cartoon image y are then both input into discriminator D_B, which outputs 1 for a real image and 0 otherwise. The loss function is designed accordingly as follows, and D_B's parameters are updated by back-propagating the minimized loss. Here G_B(x), i.e. fake_y, denotes the cartoon image produced by the generator; G_A(y), i.e. fake_x, denotes the real image produced by the generator; E[·] denotes the mean, with the subscript distinguishing the image distribution.
loss_{D_B} = −E_{y~pdata(y)}[log D_B(y)] − E_{x~pdata(x)}[log(1 − D_B(G_B(x)))]   (1)
Similarly, the real cartoon image y is input into the generative adversarial network through Encoder B -> Decoder A (i.e. G_A) to obtain the generated image fake_x; the generated image and a real eye-and-eyebrow image are both input into discriminator D_A, the loss function is computed, and D_A's parameters are updated by back-propagating the minimized loss. (Here G_A(y), i.e. fake_x, denotes the real eye-and-eyebrow image produced by the generator, and G_B(x), i.e. fake_y, denotes the cartoon eye-and-eyebrow image produced by the generator.) The loss function is:
loss_{D_A} = −E_{x~pdata(x)}[log D_A(x)] − E_{y~pdata(y)}[log(1 − D_A(G_A(y)))]   (2)
As the formulas show, at each iteration the training-sample data serve as positive samples and the generated images as negative samples; to reach equilibrium, the negative samples come to resemble the positive ones as closely as possible. However, although such a generated image may resemble the real image overall, the final result easily becomes very blurry while the discriminator still cannot tell it apart. The invention therefore keeps the image generated in the previous round, denoted fake′; each epoch (one pass of training) updates this data and at the same time counts it into the loss function of the next epoch (where D denotes D_A or D_B):
loss′_D = loss_D + E[log(fake′)]   (3)
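Keeping previously generated images for the discriminator is commonly implemented as a small history buffer, as in CycleGAN's image pool; below is a pure-Python sketch of that idea. The patent keeps only the previous round's image, while this buffer generalizes it; the buffer size and the 0.5 swap probability are conventional choices, not values stated in the patent:

```python
import random

class FakePool:
    """History buffer of generated images: the discriminator is shown a
    mix of fresh fakes and stale ones (the patent's fake')."""
    def __init__(self, size=50):
        self.size = size
        self.items = []

    def query(self, fake):
        # Fill the buffer first; afterwards, with probability 0.5 hand the
        # discriminator an old fake and store the new one in its place.
        if len(self.items) < self.size:
            self.items.append(fake)
            return fake
        if random.random() < 0.5:
            idx = random.randrange(self.size)
            old, self.items[idx] = self.items[idx], fake
            return old
        return fake

random.seed(0)
pool = FakePool(size=2)
shown = [pool.query(i) for i in range(5)]
assert shown[:2] == [0, 1]   # buffer fills with the first fakes
assert len(pool.items) == 2  # and never grows past its size
```

Training the discriminator on such stale fakes dampens oscillation, which is the same motivation the patent gives for folding fake′ into the next epoch's loss.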
The generators are trained and updated according to the discriminators' outputs; to make the generated image as close as possible to a real one, the generated image should be trained as a positive sample, so the generator loss functions can be expressed as:
loss_{G_B} = −E_{x~pdata(x)}[log D_B(G_B(x))]   (4)
loss_{G_A} = −E_{y~pdata(y)}[log D_A(G_A(y))]   (5)
To bring the generated image still closer to the real image, the invention also uses the cycle loss function from CycleGAN:
L_cyc(G_A, G_B) = E_{y~pdata(y)}[||G_B(G_A(y)) − y||] + E_{x~pdata(x)}[||G_A(G_B(x)) − x||]   (6)
The cycle loss function is described in "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks".
Meanwhile, the invention adds a reconstruction loss: any image, after passing through the encoder of its own style, should be restorable to the original image by the decoder of the same style, which demonstrates that the features are meaningful. The loss function is:
L_idt = E_{y~pdata(y)}[||decoder_B(encoder_B(y)) − y||] + E_{x~pdata(x)}[||decoder_A(encoder_A(x)) − x||]   (7)
where encoder_B(·) denotes the encoder extracting cartoon-image features from the image in parentheses, and encoder_A(·) the encoder extracting real-image features; decoder_B(·) denotes the decoding part of generator G_B, and decoder_A(·) the decoding part of generator G_A. That is, decoder_B(encoder_B(y)) means the original image y passes through the encoder of its type and then the decoder of the same type, and the result should match the original image as closely as possible, i.e. restore it.
Thus the final generator loss can be expressed as (where λ_1 and λ_2 are parameters set for training):
L_G = loss_{G_A} + loss_{G_B} + λ_1 L_cyc(G_A, G_B) + λ_2 L_idt   (8)
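How the final generator loss combines its adversarial, cycle, and reconstruction terms can be checked with a toy numerical sketch (NumPy; the stand-in linear "generators", the adversarial-loss placeholders, and the λ values are all illustrative assumptions, not values from the patent):

```python
import numpy as np

def l1(a, b):
    """Mean absolute error, standing in for the || . || terms."""
    return float(np.mean(np.abs(a - b)))

# Toy inverse "generators": G_ab maps style A -> B, G_ba maps B -> A.
# Being exact inverses, the cycle term vanishes.
G_ab = lambda v: 2.0 * v
G_ba = lambda v: 0.5 * v

x = np.array([1.0, 2.0, 3.0])  # A-style (real) samples
y = np.array([4.0, 6.0])       # B-style (cartoon) samples

L_cyc = l1(G_ba(G_ab(x)), x) + l1(G_ab(G_ba(y)), y)  # cycle-consistency term

# Toy same-style encoder/decoder pair: decoding undoes encoding, so the
# reconstruction term also vanishes.
enc = lambda v: v - 1.0
dec = lambda v: v + 1.0
L_idt = l1(dec(enc(y)), y) + l1(dec(enc(x)), x)

loss_GA, loss_GB = 0.7, 0.9  # placeholder adversarial generator losses
lam1, lam2 = 10.0, 5.0       # placeholder weights

L_G = loss_GA + loss_GB + lam1 * L_cyc + lam2 * L_idt  # combined objective
assert L_cyc == 0.0 and L_idt == 0.0
assert abs(L_G - 1.6) < 1e-12
```

With perfect inverses both consistency terms are zero and the total reduces to the adversarial parts; any mismatch in the round trips would be amplified by λ_1 and λ_2.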
Based on the three cartoon generation models constructed as above, face cartoonization is performed in real time on the face image to be processed, yielding three cartoon images: cartoon eyes-and-eyebrows, cartoon mouth-and-nose, and cartoon hair; these are composited according to their recorded positions to obtain the final cartoon image.
Examples
Step S1: and constructing a training set.
Step S101: cutting the original data into uniform size 256 × 256;
step S102: performing interactive segmentation treatment, marking a hair area by using a blue thick line, marking a background area by using a green thick line, and representing the left eye, the right eye and the lips by using red dots;
step S103: the eyebrow image and lip image are clipped according to the red dot coordinates (square clipping), and the upper left coordinate position and the clipped image size are recorded. Normalizing the size to 64 x 64 to form a training database;
step S104: obtaining a face contour according to the face segmentation image, and filling skin color to obtain a cartoon face image; .
Step S105: filling the background of the segmented hair image into white to form a hair training set;
Step S2: train the generative adversarial networks to obtain three generation models.
The corresponding generative adversarial network is trained with each of the three data sets, yielding three generation models.
Step S3: firstly, carrying out interactive segmentation processing on an image to be processed to obtain eyebrow, mouth and nose, hair and a face image, respectively inputting the eyebrow, the mouth and nose and the hair into three corresponding generated models, and outputting a corresponding cartoon five sense organ image; directly obtaining a cartoon face based on cartoon processing of the face image;
And restoring the generated cartoon image into the size during cutting, synthesizing the facial features on the cartoon face according to the coordinate position, and superimposing a hair effect to obtain the final cartoon image.
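The final compositing order (features onto the skin-filled face, hair superimposed last) amounts to a masked overlay; a sketch with single-channel toy images and a hypothetical hair mask (NumPy):

```python
import numpy as np

def overlay(base, layer, mask):
    """Where mask is True take the layer pixel, elsewhere keep the base;
    applying the hair layer last puts hair on top of face and features."""
    out = base.copy()
    out[mask] = layer[mask]
    return out

face = np.full((4, 4), 7)   # cartoon face filled with the mean skin value
hair = np.full((4, 4), 1)   # generated cartoon hair layer
hair_mask = np.zeros((4, 4), dtype=bool)
hair_mask[0, :] = True      # hair occupies the top row in this toy example

result = overlay(face, hair, hair_mask)
assert (result[0] == 1).all() and (result[1:] == 7).all()
```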
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may, unless expressly stated otherwise, be replaced by an alternative feature serving the same, an equivalent, or a similar purpose; all of the disclosed features, or all of the method or process steps, may be combined in any way, except for mutually exclusive features and/or steps.

Claims (2)

1. An interactive face cartoonization method based on a generative adversarial network, characterized by comprising the following steps:
step S1: based on one-to-one corresponding real face images and cartoon face images, train cartoon generation models for hair, eyes-and-eyebrows, and mouth-and-nose with generative adversarial networks:
carry out size normalization on the real face images and the cartoon face images; apply interactive segmentation to all the images to obtain an eye-and-eyebrow database, a mouth-and-nose database, and a hair database; the cartoon face images are obtained by taking same-style cartoon portraits drawn by a professional artist as the cartoon counterparts of the real face images;
construct three generative adversarial networks, each comprising two pairs of generators and discriminators: generator G_A generates cartoon face images from real face images, and generator G_B generates real face images from cartoon images; the input of discriminator D_A is the cartoon face image G_A(x) generated by generator G_A together with a cartoon face image y; the input of discriminator D_B is the real face image G_B(y) generated by generator G_B together with a real face image x;
during training, the generated images output by the two generators are added as positive samples to each other's positive training sample sets, and training cycles until the discrimination accuracy meets the accuracy requirement; the generator G_A of each trained network is then taken as a cartoon generation model, yielding cartoon generation models for hair, eyes-and-eyebrows, and mouth-and-nose;
during training, the loss functions of the discriminators are:
loss_{D_A} = −E_{y~pdata(y)}[log D_A(y)] − E_{x~pdata(x)}[log(1 − D_A(G_A(x)))]
loss_{D_B} = −E_{x~pdata(x)}[log D_B(x)] − E_{y~pdata(y)}[log(1 − D_B(G_B(y)))]
and the image generated in the previous round is kept during training, denoted fake′; each epoch updates this data and at the same time counts it into the loss function of the next epoch: loss′_D = loss_D + E[log(fake′)], where D denotes discriminator D_A or D_B, an epoch denotes one pass of training, and E[·] denotes taking the mean;
the generators are trained and updated according to the outputs of the discriminators; during training, the generated images output by the two generators are added as positive samples to each other's positive training sample sets, cycling until the discrimination accuracy meets the accuracy requirement; the generator G_A of each trained network is then taken as a cartoon generation model, yielding cartoon generation models for hair, eyes-and-eyebrows, and mouth-and-nose;
the loss functions of the generators during training are:
loss_{G_A} = −E_{x~pdata(x)}[log D_A(G_A(x))]
loss_{G_B} = −E_{y~pdata(y)}[log D_B(G_B(y))]
the final loss function of each cartoon generation model is L_G = loss_{G_A} + loss_{G_B} + λ_1 L_cyc(G_A, G_B) + λ_2 L_idt, where loss_{G_A} and loss_{G_B} denote the final loss functions of generators G_A and G_B respectively, L_cyc(G_A, G_B) denotes the cycle loss function, and L_idt denotes the reconstruction loss function;
the reconstruction loss function L_idt is specifically:
L_idt = E_{y~pdata(y)}[||decoder_B(encoder_B(y)) − y||] + E_{x~pdata(x)}[||decoder_A(encoder_A(x)) − x||],
where x denotes the input image of generator G_A, i.e. a real face image, and y denotes the input image of generator G_B, i.e. a cartoon face image;
encoder_B(·) denotes the encoder extracting cartoon-image features from the image in parentheses, and encoder_A(·) the encoder extracting real-image features;
decoder_B(·) denotes the decoding part of generator G_B, and decoder_A(·) the decoding part of generator G_A;
E[·] denotes the mean; the subscript x~pdata(x) (resp. y~pdata(y)) indicates that image x (resp. y) is drawn from the real-image distribution pdata(x) (resp. the cartoon-image distribution pdata(y));
step S2: apply interactive segmentation to the real face image to be processed to obtain eye-and-eyebrow, mouth-and-nose, hair, and face-contour images; feed the eye-and-eyebrow, mouth-and-nose, and hair images into the corresponding generation models, which output the corresponding cartoon images: cartoon eyes-and-eyebrows, cartoon mouth-and-nose, and cartoon hair;
fill the face-contour image with the average skin-color pixel value to obtain the cartoon face;
composite the generated cartoon eyes-and-eyebrows and cartoon mouth-and-nose onto the cartoon face, and superimpose the cartoon hair to obtain the final cartoon image.
2. The method of claim 1, wherein the interactive segmentation is specifically:
mark the hair region and the background region with lines of different colors, and mark the positions of the two eyes and the lips with points of a further color (the eye and lip markers share one color, distinct from the hair and background colors);
take the region covered by the hair-marker color as foreground and everything else as background to segment out the hair image;
take the connected region containing the two eye marker points as foreground and everything else as background to segment out the face, obtaining the face-contour image;
then crop out the left and right eye-and-eyebrow images and the lip image around the eye and lip marker points.
CN201811024740.3A 2018-09-04 2018-09-04 Interactive face cartoonization method based on a generative adversarial network Active CN109376582B (en)

Priority Applications (1)

CN201811024740.3A (priority date 2018-09-04, filing date 2018-09-04): Interactive face cartoonization method based on a generative adversarial network

Publications (2)

CN109376582A, published 2019-02-22
CN109376582B, granted 2022-07-29

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109886216B (en) * 2019-02-26 2023-07-18 华南理工大学 Expression recognition method, device and medium based on VR scene face image restoration
CN110070483B (en) * 2019-03-26 2023-10-20 中山大学 Portrait cartoon method based on generation type countermeasure network
CN110097086B (en) * 2019-04-03 2023-07-18 平安科技(深圳)有限公司 Image generation model training method, image generation method, device, equipment and storage medium
CN109902767B (en) * 2019-04-11 2021-03-23 网易(杭州)网络有限公司 Model training method, image processing device, model training apparatus, image processing apparatus, and computer-readable medium
CN110335332B (en) * 2019-05-14 2023-04-18 杭州火烧云科技有限公司 Automatic drawing method of human face cartoon
CN110288677B (en) * 2019-05-21 2021-06-15 北京大学 Pedestrian image generation method and device based on deformable structure
CN110232722B (en) * 2019-06-13 2023-08-04 腾讯科技(深圳)有限公司 Image processing method and device
CN110517200B (en) * 2019-08-28 2022-04-12 厦门美图之家科技有限公司 Method, device and equipment for obtaining facial sketch and storage medium
CN110675353A (en) * 2019-08-31 2020-01-10 电子科技大学 Selective segmentation image synthesis method based on conditional generative adversarial network
CN110689546A (en) * 2019-09-25 2020-01-14 北京字节跳动网络技术有限公司 Method, device and equipment for generating personalized head portrait and storage medium
CN111046763B (en) * 2019-11-29 2024-03-29 广州久邦世纪科技有限公司 Portrait cartoon method and device
CN111246176A (en) * 2020-01-20 2020-06-05 北京中科晶上科技股份有限公司 Video transmission method for realizing banding
CN111275784B (en) * 2020-01-20 2023-06-13 北京百度网讯科技有限公司 Method and device for generating image
CN113223128B (en) * 2020-02-04 2022-09-13 北京百度网讯科技有限公司 Method and apparatus for generating image
CN113327191B (en) * 2020-02-29 2024-06-21 华为技术有限公司 Face image synthesis method and device
CN111369428B (en) * 2020-03-09 2023-07-21 北京百度网讯科技有限公司 Virtual head portrait generation method and device
CN111445484B (en) * 2020-04-01 2022-08-02 华中科技大学 Image-level labeling-based industrial image abnormal area pixel level segmentation method
CN111444881B (en) * 2020-04-13 2020-12-25 中国人民解放军国防科技大学 Fake face video detection method and device
CN113486688A (en) * 2020-05-27 2021-10-08 海信集团有限公司 Face recognition method and intelligent device
CN111709875B (en) * 2020-06-16 2023-11-14 北京百度网讯科技有限公司 Image processing method, device, electronic equipment and storage medium
CN111488865B (en) * 2020-06-28 2020-10-27 腾讯科技(深圳)有限公司 Image optimization method and device, computer storage medium and electronic equipment
CN112132208B (en) * 2020-09-18 2023-07-14 北京奇艺世纪科技有限公司 Image conversion model generation method and device, electronic equipment and storage medium
CN112215927B (en) * 2020-09-18 2023-06-23 腾讯科技(深圳)有限公司 Face video synthesis method, device, equipment and medium
CN112132922B (en) * 2020-09-24 2024-10-15 扬州大学 Method for cartoon image and video in online class
CN112258387A (en) * 2020-10-30 2021-01-22 北京航空航天大学 Image conversion system and method for generating cartoon portrait based on face photo
CN112465936A (en) * 2020-12-04 2021-03-09 深圳市优必选科技股份有限公司 Portrait cartoon method, device, robot and storage medium
CN112561782B (en) * 2020-12-15 2023-01-03 哈尔滨工程大学 Method for improving reality degree of simulation picture of offshore scene
CN112561786B (en) * 2020-12-22 2024-11-08 作业帮教育科技(北京)有限公司 Online live broadcast method and device based on image cartoon and electronic equipment
CN112926554B (en) * 2021-04-27 2022-08-16 南京甄视智能科技有限公司 Construction of training data set of portrait cartoon stylized model and model generation
CN113409187B (en) * 2021-06-30 2023-08-15 深圳万兴软件有限公司 Cartoon style image conversion method, device, computer equipment and storage medium
CN113706645A (en) * 2021-06-30 2021-11-26 酷栈(宁波)创意科技有限公司 Information processing method for landscape painting
CN113343931B (en) * 2021-07-05 2024-07-26 Oppo广东移动通信有限公司 Training method for generating countermeasure network, image vision correction method and device
CN113436280A (en) * 2021-07-26 2021-09-24 韦丽珠 Image design system based on information acquisition
CN113570689B (en) * 2021-07-28 2024-03-01 杭州网易云音乐科技有限公司 Portrait cartoon method, device, medium and computing equipment
CN114170065B (en) * 2021-10-21 2024-08-02 河南科技大学 Cartoon loss-based cartoon-like method for generating countermeasure network
CN113822798B (en) * 2021-11-25 2022-02-18 北京市商汤科技开发有限公司 Method and device for training generation countermeasure network, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108038823A (en) * 2017-12-06 2018-05-15 厦门美图之家科技有限公司 Training method for an image transformation network model, image transformation method, and computing device
CN108182657A (en) * 2018-01-26 2018-06-19 深圳市唯特视科技有限公司 Face image conversion method based on cycle generative adversarial network
CN108229349A (en) * 2017-12-21 2018-06-29 中国科学院自动化研究所 Reticulate pattern facial image identification device
CN108446667A (en) * 2018-04-04 2018-08-24 北京航空航天大学 Facial expression recognition method and device based on generative adversarial network data augmentation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8437514B2 (en) * 2007-10-02 2013-05-07 Microsoft Corporation Cartoon face generation
CN103456010B (en) * 2013-09-02 2016-03-30 电子科技大学 Face cartoon generation method based on feature point localization
CN107577985B (en) * 2017-07-18 2019-10-15 南京邮电大学 Implementation method of face avatar cartoonization based on cycle generative adversarial network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108038823A (en) * 2017-12-06 2018-05-15 厦门美图之家科技有限公司 Training method for an image transformation network model, image transformation method, and computing device
CN108229349A (en) * 2017-12-21 2018-06-29 中国科学院自动化研究所 Reticulate pattern facial image identification device
CN108182657A (en) * 2018-01-26 2018-06-19 深圳市唯特视科技有限公司 Face image conversion method based on cycle generative adversarial network
CN108446667A (en) * 2018-04-04 2018-08-24 北京航空航天大学 Facial expression recognition method and device based on generative adversarial network data augmentation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Auto-painter: Cartoon Image Generation from Sketch by Using Conditional Generative Adversarial Networks; Yifan Liu et al.; https://arxiv.org/pdf/1705.01908.pdf; 2017-05-08; 1-12 *
Automatic cartoon generation method based on a portrait dictionary set; Sun Jingjing et al.; Journal of System Simulation; 2015-04-30; Vol. 27, No. 4; 682-688 *

Also Published As

Publication number Publication date
CN109376582A (en) 2019-02-22

Similar Documents

Publication Publication Date Title
CN109376582B (en) Interactive face cartoon method based on generation of confrontation network
CN108229278B (en) Face image processing method and device and electronic equipment
CN103456010B (en) Face cartoon generation method based on feature point localization
US11587288B2 (en) Methods and systems for constructing facial position map
US11562536B2 (en) Methods and systems for personalized 3D head model deformation
CN112950661A (en) Face cartoon generation method based on attention-guided generative adversarial network
JP7462120B2 (en) Method, system and computer program for extracting color from two-dimensional (2D) facial images
CN113807265B (en) Diversified human face image synthesis method and system
CN110796593A (en) Image processing method, device, medium and electronic equipment based on artificial intelligence
CN111046763A (en) Portrait cartoon method and device
US11417053B1 (en) Methods and systems for forming personalized 3D head and facial models
CN113034355B (en) Portrait image double-chin removing method based on deep learning
CN110853119A (en) Robust reference picture-based makeup migration method
CN114120389A (en) Network training and video frame processing method, device, equipment and storage medium
CN113486944A (en) Face fusion method, device, equipment and storage medium
CN110110603A (en) Multi-modal lip-reading method based on facial physiological information
Shen et al. Sd-nerf: Towards lifelike talking head animation via spatially-adaptive dual-driven nerfs
CN111275778B (en) Face simple drawing generation method and device
CN111612687A (en) Automatic face image makeup method
CN113763498A (en) Portrait simple-stroke region self-adaptive color matching method and system for industrial manufacturing
CN113947520A (en) Face makeup transfer method based on generative adversarial network
CN114862716A (en) Image enhancement method, device and equipment for face image and storage medium
CN113781372A (en) Deep learning-based opera facial makeup generation method and system
Tao et al. Face Recognition Based Beauty Algorithm in Smart City Applications
CN106097373B (en) Smiling face synthesis method based on branched sparse component analysis model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant