
Error when running the program #3

Open
Devnary opened this issue Aug 5, 2021 · 2 comments

@Devnary

Devnary commented Aug 5, 2021

I ran the code normally with 878 images of the CelebAMask-HQ dataset.
30k images would take too long on my CPU; I don't have a GPU.

Traceback (most recent call last):
  File "PGGAN-Tensorflow.py", line 1253, in <module>
    WGAN_GP_train_d_step(generator, discriminator, image, alpha_tensor,
ValueError: in user code:

    PGGAN-Tensorflow.py:1142 WGAN_GP_train_d_step  *
        fake_mixed_pred = discriminator([fake_image_mixed, alpha], training=True)
    PGGAN-Tensorflow.py:292 call  *
        y = tf.reshape(inputs, [group_size, -1, s[1], s[2], s[3]])   # [GMHWC] Split minibatch into M groups of size G.
ValueError: Dimension size must be evenly divisible by 32768 but is 114688 for '{{node model_1/minibatch_stddev/Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](model_1/conv2d_up_channel/compute_weights/conv2d_3/LeakyRelu, model_1/minibatch_stddev/Reshape/shape)' with input shapes: [14,4,4,512], [5] and with input tensors computed as partial shapes: input[1] = [4,?,4,4,512].
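The two numbers in the error message can be reconstructed from the shapes it reports. A minimal sketch of that arithmetic (values taken from the traceback above, with group_size=4 as the layer default):

```python
# Reconstructing the arithmetic in the error message: the tensor being
# reshaped has shape [14, 4, 4, 512], and MinibatchSTDDEV tries to split
# the batch dimension into groups of group_size=4.
batch, h, w, c = 14, 4, 4, 512
group_size = 4

elements = batch * h * w * c            # 114688, the "is" value in the error
per_group = group_size * h * w * c      # 32768, the "divisible by" value

# The reshape only works when the batch size is divisible by group_size,
# and 14 is not divisible by 4.
print(elements, per_group, elements % per_group)
```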

Versions

  • Python 3.8
  • Tensorflow 2.5.0
  • numpy 1.19.4
  • matplotlib 3.4.2
@henry32144
Owner

Hi,

I think the problem is your batch_size setting: the minibatch must be divisible by (or smaller than) the group_size defined in MinibatchSTDDEV, and the default group_size here is 4.

class MinibatchSTDDEV(tf.keras.layers.Layer):
    """
    Reference from official pggan implementation
    https://github.com/tkarras/progressive_growing_of_gans/blob/master/networks.py

    Arguments:
      group_size: an integer, minibatch must be divisible by (or smaller than) group_size.
    """
    def __init__(self, group_size=4):
        super(MinibatchSTDDEV, self).__init__()
        self.group_size = group_size

    def call(self, inputs):
        group_size = tf.minimum(self.group_size, tf.shape(inputs)[0])     # Minibatch must be divisible by (or smaller than) group_size.
        s = inputs.shape                                                  # [NHWC]  Input shape.
        y = tf.reshape(inputs, [group_size, -1, s[1], s[2], s[3]])        # [GMHWC] Split minibatch into M groups of size G -- the line that fails.
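One way to satisfy this constraint is to make sure every batch reaching the discriminator has a size divisible by group_size, e.g. via tf.data's `dataset.batch(batch_size, drop_remainder=True)`. A plain-Python sketch of the effect (the helper name is ours, not from this repo):

```python
# Mimic tf.data batching so the arithmetic is visible: with drop_remainder
# the final partial batch is discarded, so every batch size stays equal to
# batch_size and therefore divisible by group_size.
def batch_sizes(num_images, batch_size, drop_remainder=True):
    sizes = [batch_size] * (num_images // batch_size)
    last = num_images % batch_size
    if last and not drop_remainder:
        sizes.append(last)      # ragged final batch
    return sizes

group_size = 4

# The reporter's settings: 878 images, batch_size 16.
sizes = batch_sizes(878, 16, drop_remainder=True)
assert all(s % group_size == 0 for s in sizes)

# Without drop_remainder the final batch has 878 % 16 = 14 images, which is
# not divisible by group_size=4 -- the reshape error in the traceback above.
ragged = batch_sizes(878, 16, drop_remainder=False)
print(ragged[-1])
```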

By the way, only 878 images may not be enough to train this model, especially as it grows the resolution. You could use Google Colab to train the model. It does limit GPU usage, but you can save the model regularly; just remember to download it to your PC (otherwise it will disappear when you disconnect from Colab).

I have a sample of this notebook on the Colab. You may try this.
https://colab.research.google.com/drive/1SdfNdom68koJLdhl3wumjOOvPgfdBJV9?usp=sharing#scrollTo=LOsY-eRGT2wt

@Devnary
Author

Devnary commented Aug 6, 2021

Hi, well I didn't change anything in the code.
The batch_size is 16 and the group_size is 4, so the batch size itself is divisible by the group size.
I noticed that the number of images changes something: with 878 images the last batch only has 878 % 16 = 14 images, which matches the [14, 4, 4, 512] shape in the error.

And thx, the Colab works 👍
