CN110428473B - Color image graying method based on an auxiliary-variable generative adversarial network - Google Patents

Color image graying method based on an auxiliary-variable generative adversarial network

Info

Publication number
CN110428473B
CN110428473B (application CN201910529133.0A)
Authority
CN
China
Prior art keywords
image
network
color image
color
grayed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910529133.0A
Other languages
Chinese (zh)
Other versions
CN110428473A (en)
Inventor
刘且根
李婧源
周瑾洁
何卓楠
李嘉晨
全聪
谢文军
王玉皞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang University
Original Assignee
Nanchang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang University
Priority to CN201910529133.0A
Publication of CN110428473A
Application granted
Publication of CN110428473B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics


Abstract

The invention provides a color image graying method based on a generative adversarial network with auxiliary variables, comprising the following steps. Step A: check whether the input image is a color image; if so, gray it using the gradient correlation similarity decolorization (GcsDecolor) algorithm, and copy the grayed image to obtain three grayed images serving as reference images for the generative adversarial network. Step B: design an auxiliary-variable-based generative adversarial network (AV-GAN) and train it. Step C: test color images with the trained AV-GAN network to obtain the final grayscale images. The invention offers high computational efficiency for color-to-gray conversion, preserves the salient features of the color image, maintains color ordering in the grayscale image, and better reflects the structural similarity between the color image and the grayscale image.

Description

Color image graying method based on an auxiliary-variable generative adversarial network
Technical Field
The invention belongs to the technical field of computer vision, relates to the application of color image graying technology, and in particular to a color image graying method based on an auxiliary-variable generative adversarial network.
Background
Today, with the rapid development of digital media and science and technology, color image technology is used widely, yet grayscale images remain active in many areas owing to their small data volume and ease of manipulation. First, graying has economic advantages: to save printing costs, many textbooks, published papers, and most newspapers favor grayscale output, which is inexpensive and offers sharp contrast. Graying is also of great help to the color-blind: color-vision-impaired users can choose different platforms according to their needs, which mitigates, to some extent, the negative effects of being unable to distinguish colors. Second, some image processing techniques operate more simply and conveniently on grayscale images; for example, in widely studied fields such as pattern recognition and machine vision, representing a color image quickly by a small grayscale image during preprocessing speeds up subsequent algorithms and can greatly improve their overall effectiveness. Finally, graying also has artistic applications: black-and-white photography, which produces grayscale images, continues to be pursued by photographers. Research on color image graying therefore has important significance and application value.
The goal of color-to-grayscale conversion is to map a 3D vector to a 1D scalar. This is essentially a dimensionality-reduction process and inevitably suffers information loss. Many decolorization methods have therefore been proposed from the viewpoint of human perception. Conventional methods can be roughly divided into two categories: local adjustment methods and global adjustment methods. In the first category, the color-to-grayscale mapping of a pixel typically varies spatially, depending on the local color distribution. For example, Bala and Eschbach propose preserving color edges by adding the high-frequency components of chrominance to the luminance channel. Neumann et al. reconstruct the grayscale image from the color image gradient, measuring color and brightness contrast as contrast in a gradient color space. Smith et al. decompose the image into several frequency components and adjust the combination weights using the color channels. Although such methods have the advantage of retaining local features, regions of constant color may not be mapped uniformly.
Global algorithms fall mainly into two types: transformation-based dimensionality reduction, and optimization based on color difference values (pixel color contrast). PCA-based dimensionality reduction is the main representative of the first type. For the second type, the idea is to exploit both the luminance information and the color-contrast information of the pixels when constructing the mapping function from the color image to the grayscale image, mapping the differing color-contrast information of neighboring regions of the original color image into the grayscale image as far as possible, thereby increasing the contrast of the grayscale image. After the mapping objective function is constructed, a corresponding optimization problem is formed and solved to obtain the grayscale image closest to the target brightness values. Because the mapping function is flexible, different mapping functions can be constructed for different purposes; after the color information is mapped to grayscale, identical colors in different regions of the original image may map to the same gray value, and different colors in different regions may also map to the same gray value. The main purpose is to distinguish the features between adjacent pixels of different colors in the color image; the final result depends on the colors of the pixels and their neighborhood information.
These existing methods have two disadvantages: limited robustness and high computational cost. To address these difficulties, some researchers have reconsidered the traditional simple RGB2GRAY model. In particular, it assumes that the grayscale output is a linear combination of the RGB channels of a color image, i.e.
g = w_r I_r + w_g I_g + w_b I_b
where I_r, I_g, I_b respectively denote the RGB color channel components and w_r, w_g, w_b the channel weights. In the rgb2gray function of classic Matlab, the weights are fixed for all images. Recently, researchers have attempted to select the channel weights adaptively under certain measures. Lu et al. discretize the solution space of the linear parametric model into 66 candidates and then pick the candidate achieving the highest energy value as the best solution; this is currently the fastest algorithm. Liu et al. propose a gradient correlation similarity (Gcs) model between each channel of the input color image and the generated grayscale image, which better reflects the preservation of perceptible features and the color ordering of the conversion from color to gray. They solve the linear parametric model by a discrete search, selecting the candidate image that optimizes the Gcs measure.
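For reference, the fixed-weight linear model used by Matlab's rgb2gray can be sketched in a few lines of NumPy; the weights 0.2989/0.5870/0.1140 are the standard BT.601 luma coefficients, and the adaptive methods discussed above replace them with per-image weights:

```python
import numpy as np

def linear_gray(img, w=(0.2989, 0.5870, 0.1140)):
    """Grayscale as a linear combination of the RGB channels.

    img: H x W x 3 float array; w: the three channel weights.
    """
    w = np.asarray(w, dtype=np.float64)
    assert img.ndim == 3 and img.shape[2] == 3, "expected an RGB image"
    return img @ w  # g = w_r*I_r + w_g*I_g + w_b*I_b

# A pure red image maps to the red weight everywhere.
red = np.zeros((2, 2, 3)); red[..., 0] = 1.0
g = linear_gray(red)
```

Adaptive decolorization methods such as those of Lu et al. and Liu et al. keep this linear form but search over the weight vector per image.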
Disclosure of Invention
The present invention aims to provide a color image graying method based on an auxiliary-variable generative adversarial network, so as to solve the problems identified in the background art.
In order to achieve this purpose, the invention provides the following technical scheme: a color image graying method based on an auxiliary-variable generative adversarial network, comprising the following steps:
Step A: check whether the input image is a color image; if so, gray it using the gradient correlation similarity decolorization (GcsDecolor) algorithm, and copy the grayed image to obtain three grayed images serving as reference images for the generative adversarial network.
Step B: design an auxiliary-variable-based generative adversarial network (AV-GAN) and train the AV-GAN network.
Step C: test the color image through the trained AV-GAN network to obtain the final grayscale image.
Further, the step A is as follows:
Assume the input color image is in RGB format, where R, G, and B denote the RGB channels; gray it with the GcsDecolor algorithm to obtain a grayed image.
Using a first-order multivariate polynomial function over c = {r, g, b} and constraining the sum of the weights to 1, compute the overall pixel similarity between the gradient magnitudes of each channel of the original color image and those of the resulting grayscale image, i.e.:
[equation images in the original: the Gcs overall pixel-similarity measure between the gradient magnitudes of each color channel and the grayscale result]
Next, gradient correlation is adopted to describe structure retention: the similarity between the obtained grayscale image and the original image is computed in each channel of the RGB space to obtain three grayed channels; the three grayed images are then summed to obtain the final grayed image; finally, the grayed image is copied to obtain three identical grayed images, which serve as references for the discriminator of the generative adversarial network.
Further, the step B is as follows:
Construct an auxiliary-variable-based generative adversarial network (AV-GAN) and train it, taking the three channels R, G, and B of a color image as the network input. The AV-GAN network comprises a generator and a discriminator; the generator consists of 14 convolutional layers and several activation layers. The convolutional layers operate pixel-wise, minimizing the distance between two images. Let F(x_i; θ) be the output of the ConvNet model for the i-th training example; the training loss is defined as:
ℓ_i = (1/n) Σ_p ‖ F(x_i; θ)(p) − y_i(p) ‖², where y_i denotes the i-th reference grayed image
where p represents each pixel and n represents the total number of pixels in the image; its overall goal can be expressed as:
L(θ) = (1/N) Σ_{i=1..N} ℓ_i
where N represents the total number of training examples; this loss function takes the average of the per-example losses as the quantity to be minimized.
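As a sketch of the training loss just described (assuming a squared per-pixel distance, which the text does not state explicitly), the per-image loss and its average over N examples can be written as:

```python
import numpy as np

def pixel_loss(pred, target):
    """Per-image pixel-wise loss: mean squared distance over the n pixels."""
    n = pred.size
    return float(np.sum((pred - target) ** 2) / n)

def overall_loss(preds, targets):
    """Overall objective: average of the per-image losses over N examples."""
    return sum(pixel_loss(p, t) for p, t in zip(preds, targets)) / len(preds)

# Two toy "images": one fully wrong (loss 1.0), one perfect (loss 0.0).
preds = [np.ones((2, 2)), np.zeros((2, 2))]
targets = [np.zeros((2, 2)), np.zeros((2, 2))]
L = overall_loss(preds, targets)  # (1.0 + 0.0) / 2
```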
The input to the discriminator is the grayscale picture produced when a color picture passes through the generator. The grayscale picture is generated as follows: an implicit dimensionality-reduction technique achieves the color-to-gray transition by adding auxiliary variables and constraining them through sample training, involving three features:
(1) the input and output are gradient domains;
(2) through the auxiliary-variable technique, the number of input channels equals the number of output channels;
(3) the L1 norm is used to overcome the deficiency in gradient magnitude;
the network loss function used is:
Σ_{c ∈ {R, G, B}} ‖ ∇Y_c − ∇I_c ‖_1, where ∇ denotes the image gradient and Y the generated output with channels replicated to match the input
where I = {I_R, I_G, I_B}; auxiliary variables are added on top of this loss function, and local image processing is then performed to obtain the grayscale image.
The discriminator consists of 11 coding layers, similar in structure to the generator's encoder. Each coding layer consists of a convolution with stride greater than 1, batch normalization, and leaky ReLU activation; the last layer is activated by a sigmoid and returns a number between 0 and 1 interpreted as the probability that the input is real. The three grayed images from step A serve as the judgment references: if the input is judged real, 1 is returned; otherwise, 0 is returned.
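The two activations named above behave as follows; the leaky-ReLU slope of 0.2 is an assumed value, not one given in the patent:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    """Leaky ReLU: passes positives through, scales negatives by `slope`."""
    return np.where(x > 0, x, slope * x)

def sigmoid(x):
    """Final-layer activation: squashes a real-valued score into (0, 1),
    read as the probability that the input is real."""
    return 1.0 / (1.0 + np.exp(-x))

score = sigmoid(np.array([0.0]))    # an undecided discriminator outputs 0.5
neg = leaky_relu(np.array([-1.0]))  # negative inputs are attenuated, not zeroed
```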
For the AV-GAN network, both the generator and the discriminator are conditioned on the input x, with q_g parameterizing the generator and q_d parameterizing the discriminator. The minimax objective function is:
min_{q_g} max_{q_d} E_{x,y}[ log D(x, y; q_d) ] + E_x[ log(1 − D(x, G(x; q_g); q_d)) ]
Taking into account the L1 difference between the input x and the output y of the generator in a state that ensures stable operation, in each iteration the discriminator maximizes the above objective with respect to q_d, and the generator is minimized as follows:
min_{q_g} E_x[ log(1 − D(x, G(x; q_g); q_d)) ] + λ E_{x,y}[ ‖ y − G(x; q_g) ‖_1 ]
the network is trained according to the method, and the AV-GAN network is obtained.
Compared with the prior art, the invention has the beneficial effects that:
according to the color image graying method based on the auxiliary variable confrontation generation network (AV-GAN), the importance degree and the similarity of three subspaces of a color image are deeply researched, the reference information of the three subspaces is fully considered, and the three subspaces pass through the trained AV-GAN network to obtain a more accurate grayed image. The invention saves the remarkable characteristics of the color image by using the normalized correlation, not only ensures that the color image can be still distinguished in the gray level image, but also can keep the color sequencing; and the structural similarity between the color image and the gray image can be better reflected, and higher color image gray-scale capability is obtained. The color image graying method can improve the speed of image graying processing, ensure the precision of the image graying processing, be suitable for different scenes and prevent the problem of processing failure caused by the change of external factors.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of the AV-GAN network framework of the present invention;
FIG. 3 is a diagram of the generator framework in the AV-GAN network of the present invention;
FIG. 4 is a diagram of the discriminator framework in the AV-GAN network of the present invention;
FIG. 5 shows the final grayscale images obtained by testing the Set5 dataset with the AV-GAN network;
FIG. 6 is a comparison of color images from the Cadik dataset after different graying methods.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention more apparent, the invention is described in further detail below with reference to the accompanying drawings and embodiments. The embodiments described herein serve only to explain the technical solution of the invention and are not intended to limit it.
A color image graying algorithm of the AV-GAN network according to the present invention will now be described with reference to fig. 1.
In step A, the specific implementation is as follows:
First, the training data are generated: color pictures from the BSDS300 database are selected as input;
Second, the three-channel image is grayed using the gradient correlation similarity decolorization (GcsDecolor) algorithm to obtain a three-channel grayed image.
Assume the input color image is in RGB format, where R, G, and B denote the RGB channels. To shorten data processing time, the image is downsampled to half its original resolution. The RGB values of the pixels are obtained by reading the value at each pixel center and are stored in an array. To keep features identifiable in the color-to-gray conversion, the distance between the pixel differences of the input color image and those of the obtained gray image is minimized. That is, assuming the input color image is in RGB format, with indices R, G, B denoting the RGB channels, let
δ_{x,y} be a color contrast with a signed value representing the difference of the color pair (x, y), and let g_x − g_y denote the grayscale difference between the pixels; the energy function based on the classical L2 norm is then:
E(g) = Σ_{(x,y) ∈ P} ( g_x − g_y − δ_{x,y} )²
P represents a pool of pixel pairs containing both local and non-local candidates. The array of RGB values is randomly permuted and the original array values are subtracted to obtain reference values; the minimum difference is selected, the bit values corresponding to R, G, and B are chosen according to this minimum, the graying coefficients for R, G, and B are determined, and the grayed image is obtained. By integrating the differences of long-distance pixels into the energy function, the model makes good use of both nearest-neighbor pixels and long-range contrast regions.
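The L2 energy over the pixel-pair pool can be sketched directly; the pool `pairs` and the contrasts `delta` below are hypothetical toy values:

```python
import numpy as np

def decolor_energy(g, delta, pairs):
    """E(g) = sum over (x, y) in the pool of (g_x - g_y - delta_{x,y})^2,
    where delta holds the signed color contrast of each pair."""
    return sum((g[x] - g[y] - d) ** 2 for (x, y), d in zip(pairs, delta))

g = np.array([0.8, 0.3, 0.5])        # candidate grayscale values (flattened)
pairs = [(0, 1), (1, 2)]             # a tiny pool of local pixel pairs
delta = [0.5, -0.2]                  # signed color contrasts of the pairs
E = decolor_energy(g, delta, pairs)  # contrasts are matched, so E is near zero
```

Minimizing this energy over candidate grayscale images yields the output whose pairwise differences best reproduce the color contrasts.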
A first-order multivariate polynomial function over c = {r, g, b} is used, with the sum of the weights constrained to 1. The overall pixel similarity between the gradient magnitudes of each channel of the original color image and those of the obtained grayscale image is computed, i.e.:
[equation images in the original: the Gcs overall pixel-similarity measure between the gradient magnitudes of each color channel and the grayscale result]
Next, gradient correlation, rather than the common gradient error, is used to describe structure retention: the similarity between the obtained grayscale image and the original image is computed in each channel of the RGB space to obtain three grayed channels, and the three grayed images are summed to obtain the final grayed image. Finally, the grayed image is copied to obtain three identical grayed images, which serve as references for the discriminator of the generative adversarial network.
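The flow of step A, from the color check to the three replicated reference images, can be sketched as follows; the fixed channel weights inside are only a stand-in for the GcsDecolor optimization, which this sketch does not implement:

```python
import numpy as np

def step_a(img):
    """Step A: verify the input is a color image, gray it, and replicate the
    result three times as references for the discriminator."""
    if img.ndim != 3 or img.shape[2] != 3:
        raise ValueError("input is not a color (RGB) image")
    # Stand-in for GcsDecolor: a fixed linear combination of the channels.
    gray = img @ np.array([0.299, 0.587, 0.114])
    # Three identical copies of the grayed image serve as the references.
    return np.stack([gray, gray, gray], axis=-1)

refs = step_a(np.ones((4, 4, 3)))
```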
In step B:
An auxiliary-variable-based generative adversarial network (AV-GAN) is constructed and trained, taking the three channels R, G, and B of a color image as the network input. The AV-GAN network comprises a generator and a discriminator; the generator consists of 14 convolutional layers and several activation layers. The convolutional layers operate pixel-wise, minimizing the distance between two images. Let F(x_i; θ) be the output of the ConvNet model for the i-th training example; the training loss is defined as:
ℓ_i = (1/n) Σ_p ‖ F(x_i; θ)(p) − y_i(p) ‖², where y_i denotes the i-th reference grayed image
where p denotes each pixel and n denotes the total number of pixels in the image. Its overall goal can be expressed as:
L(θ) = (1/N) Σ_{i=1..N} ℓ_i
where N represents the total number of training examples. This loss function takes the average of the per-example losses as the quantity to be minimized.
The input to the discriminator is the grayscale picture produced when a color picture passes through the generator. The grayscale picture is generated as follows: an implicit dimensionality-reduction technique achieves the color-to-gray transition by adding auxiliary variables and constraining them through sample training, involving three features:
(1) the input and output are gradient domains;
(2) through the auxiliary-variable technique, the number of input channels equals the number of output channels;
(3) the L1 norm is used to overcome the deficiency in gradient magnitude;
the network loss function used is:
Σ_{c ∈ {R, G, B}} ‖ ∇Y_c − ∇I_c ‖_1, where ∇ denotes the image gradient and Y the generated output with channels replicated to match the input
where I = {I_R, I_G, I_B}; auxiliary variables are added on top of this loss function, and local image processing is then performed to obtain the grayscale image.
The discriminator consists of 11 coding layers, similar in structure to the generator's encoder; each coding layer consists of a convolution with stride greater than 1, batch normalization, and leaky ReLU activation. The last layer is activated by a sigmoid and returns a number between 0 and 1 interpreted as the probability that the input is real.
For the AV-GAN network, both the generator and the discriminator are conditioned on the input x, with q_g parameterizing the generator and q_d parameterizing the discriminator. The minimax objective function is:
min_{q_g} max_{q_d} E_{x,y}[ log D(x, y; q_d) ] + E_x[ log(1 − D(x, G(x; q_g); q_d)) ]
In a state that ensures stable operation, we consider the L1 difference between the input x and the output y of the generator; in each iteration the discriminator maximizes the above objective with respect to q_d, and the generator is minimized as follows:
min_{q_g} E_x[ log(1 − D(x, G(x; q_g); q_d)) ] + λ E_{x,y}[ ‖ y − G(x; q_g) ‖_1 ]
the network is trained according to the method, and the AV-GAN network is obtained.
In step C, the color image is tested with the trained AV-GAN network to obtain the final grayscale image.
As shown in fig. 6, the method of the present invention (d) was qualitatively analyzed on the Cadik dataset and compared with the Gcs2 algorithm (a), the Gooch algorithm (b), and the Smith algorithm (c). The Gooch and Smith algorithms do not adequately account for salient stimuli and produce flat results on some images. The Gcs2 method uses normalized correlation and preserves the salient features of color images. The algorithm of the invention not only keeps colors distinguishable in the grayscale image but also preserves the required color ordering.
The experimental results show that the AV-GAN-based color image graying algorithm makes the number of input channels equal to the number of output channels, obtains more detailed grayed images, and uses a generative adversarial network to enhance the graying effect.
The foregoing merely describes preferred embodiments of the invention in some detail and should not be construed as limiting its scope. For those skilled in the art, various changes, modifications, and substitutions can be made without departing from the spirit of the present invention, and all of these fall within the protection scope of the invention. The protection scope of this patent shall therefore be subject to the appended claims.

Claims (2)

1. A color image graying method based on an auxiliary-variable generative adversarial network, characterized in that the method comprises the following steps:
step A: checking whether the input image is a color image; if so, graying it using the gradient correlation similarity decolorization GcsDecolor algorithm, and copying the grayed image to obtain three grayed images serving as reference images for the generative adversarial network;
step B: designing an auxiliary-variable-based generative adversarial network AV-GAN and training the AV-GAN network;
constructing a generative adversarial network AV-GAN based on auxiliary variables and training it, taking the three channels R, G, and B of a color image as the network input, wherein the AV-GAN network comprises a generator and a discriminator, the generator comprising 14 convolutional layers and a plurality of activation layers; the convolutional layers operate pixel-wise, minimizing the distance between two images; letting F(x_i; θ) be the output of the ConvNet model for the i-th training example, the training loss is defined as:
ℓ_i = (1/n) Σ_p ‖ F(x_i; θ)(p) − y_i(p) ‖², where y_i denotes the i-th reference grayed image
where p represents each pixel and n represents the total number of pixels in the image; its overall goal can be expressed as:
L(θ) = (1/N) Σ_{i=1..N} ℓ_i
wherein N represents the total number of training examples; this loss function takes the average of the per-example losses as the quantity to be minimized;
the input to the discriminator is the grayscale picture produced when a color picture passes through the generator; the discriminator consists of 11 coding layers, similar in structure to the generator's encoder, each coding layer consisting of a convolution with stride greater than 1, batch normalization, and leaky ReLU activation; the last layer is activated by a sigmoid and returns a number between 0 and 1 interpreted as the probability that the input is real; the three grayed images from step A serve as the judgment references: if the input is judged real, 1 is returned; otherwise, 0 is returned;
for the AV-GAN network, both the generator and the discriminator are conditioned on the input x, with q_g parameterizing the generator and q_d parameterizing the discriminator, wherein the minimax objective function is:
min_{q_g} max_{q_d} E_{x,y}[ log D(x, y; q_d) ] + E_x[ log(1 − D(x, G(x; q_g); q_d)) ]
taking into account the L1 difference between the input x and the output y of the generator in a state that ensures stable operation, in each iteration the discriminator maximizes the above objective with respect to q_d, and the generator is minimized as follows:
min_{q_g} E_x[ log(1 − D(x, G(x; q_g); q_d)) ] + λ E_{x,y}[ ‖ y − G(x; q_g) ‖_1 ]
training the network according to the method to obtain an AV-GAN network;
step C: testing the color image through the trained AV-GAN network to obtain the final grayscale image.
2. The color image graying method based on an auxiliary-variable generative adversarial network according to claim 1, characterized in that step A is as follows:
assuming the input color image is in RGB format, where R, G, and B denote the RGB channels, graying it with the GcsDecolor algorithm to obtain a grayed image;
using a first-order multivariate polynomial function over c = {r, g, b} and constraining the sum of the weights to 1, computing the overall pixel similarity between the gradient magnitudes of each channel of the original color image and those of the resulting grayscale image, i.e.:
[equation images in the original: the Gcs overall pixel-similarity measure between the gradient magnitudes of each color channel and the grayscale result]
next, gradient correlation is adopted to describe structure retention: the similarity between the obtained grayscale image and the original image is computed in each channel of the RGB space to obtain three grayed channels; the three grayed images are then summed to obtain the final grayed image; finally, the grayed image is copied to obtain three identical grayed images, which serve as references for the discriminator of the generative adversarial network.
CN201910529133.0A 2019-06-18 2019-06-18 Color image graying method based on an auxiliary-variable generative adversarial network Active CN110428473B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910529133.0A CN110428473B (en) 2019-06-18 2019-06-18 Color image graying method based on an auxiliary-variable generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910529133.0A CN110428473B (en) 2019-06-18 2019-06-18 Color image graying method based on an auxiliary-variable generative adversarial network

Publications (2)

Publication Number Publication Date
CN110428473A CN110428473A (en) 2019-11-08
CN110428473B true CN110428473B (en) 2022-06-14

Family

ID=68408659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910529133.0A Active CN110428473B (en) 2019-06-18 2019-06-18 Color image graying method based on an auxiliary-variable generative adversarial network

Country Status (1)

Country Link
CN (1) CN110428473B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI741437B (en) * 2019-12-09 2021-10-01 財團法人資訊工業策進會 Image analysis apparatus and image analysis method
CN111696026B (en) * 2020-05-06 2023-06-23 华南理工大学 Reversible gray scale graph algorithm and computing equipment based on L0 regular term
CN113450272B (en) * 2021-06-11 2024-04-16 广州方图科技有限公司 Image enhancement method based on sinusoidal variation and application thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023268A (en) * 2016-05-30 2016-10-12 南昌大学 Color image graying method based on two-step parameter subspace optimization
CN107862293A (en) * 2017-09-14 2018-03-30 北京航空航天大学 Radar based on confrontation generation network generates colored semantic image system and method
CN109635774A (en) * 2018-12-21 2019-04-16 中山大学 A kind of human face synthesizing method based on generation confrontation network
CN109635511A (en) * 2019-01-16 2019-04-16 哈尔滨工业大学 A kind of high-rise residential areas forced-ventilated schemes generation design method generating confrontation network based on condition

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10586310B2 (en) * 2017-04-06 2020-03-10 Pixar Denoising Monte Carlo renderings using generative adversarial neural networks
US10803347B2 (en) * 2017-12-01 2020-10-13 The University Of Chicago Image transformation with a hybrid autoencoder and generative adversarial network machine learning architecture


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Fast-Converging Conditional Generative Adversarial Networks for Image Synthesis"; Chengcheng Li et al.; 2018 25th IEEE International Conference on Image Processing (ICIP); 2018-09-06; pp. 2132-2136 *
"GcsDecolor: Gradient Correlation Similarity for Efficient Contrast Preserving Decolorization"; Qiegen Liu et al.; IEEE Transactions on Image Processing; 2015-09-30; vol. 24, no. 9; pp. 2889-2904 *
"Research on Multi-Attribute Face Generation and Auxiliary Recognition Based on Generative Adversarial Networks"; 万里鹏; China Masters' Theses Full-text Database, Information Science and Technology; 2019-01-15; pp. I138-2878 *

Also Published As

Publication number Publication date
CN110428473A (en) 2019-11-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant