
EEGAN_pretrained_doesn't work #6

Open
ajitaru opened this issue Nov 17, 2020 · 7 comments

Comments


ajitaru commented Nov 17, 2020

Is there a new pretrained EEGAN weights file?
I got the following error message:
Key generator/conv_m2/weight not found in checkpoint

System:
tensorflow 1.10.0
python 3.6
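
When a restore fails with a missing key like this, a quick way to localize the mismatch is to diff the graph's variable names against the checkpoint's keys. A minimal sketch, assuming the two name lists have already been collected (in TF1, e.g. from `tf.global_variables()` and `tf.train.NewCheckpointReader(path).get_variable_to_shape_map()`); the example lists below are hypothetical, built from names mentioned in this thread:

```python
def diff_restore_names(graph_names, ckpt_keys):
    """Return (missing_from_ckpt, unused_in_graph): names the graph wants but
    the checkpoint lacks, and keys the checkpoint has but the graph never asks for."""
    graph, ckpt = set(graph_names), set(ckpt_keys)
    return sorted(graph - ckpt), sorted(ckpt - graph)

# Toy illustration using names from this issue (hypothetical lists):
graph_names = ["generator/conv_m2/weight", "generator/conv_e8/weight"]
ckpt_keys = ["generator/block1ex2/block1_1ex1/ud1/weight",
             "generator/conv_e8/weight"]
missing, unused = diff_restore_names(graph_names, ckpt_keys)
print(missing)  # ['generator/conv_m2/weight'] -- the key the restorer complains about
```

A non-empty list on either side usually means the test-time graph was built from a different model definition than the one that produced the checkpoint.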


paugalles commented Jan 27, 2021

Same problem here.
I also trained for a few epochs and tested with my own trained weights, and got the same error.


paugalles commented Jan 28, 2021

@kuihua could you please double-check? I think you might have pushed the wrong version of the file. Is that possible?

It seems TESTGAN.py is wrong: the layer named conv_m2 does not exist in the trained weights. In place of that layer, this whole block appears to be missing:

            res_in = x_f
            # frame
            for i in range(3):
                with tf.variable_scope('block{}ex2'.format(i+1)):
                    x1=x2=x3=x_f
                    for j in range(3):
                        with tf.variable_scope('block{}_{}ex1'.format(i+1,j+1)):
                            with tf.variable_scope('ud1'):
                                a1 = lrelu(deconv_layer(x1, [3, 3, 64, 64], [self.batch_size, self.weight, self.height, 64], 1))
                                #a1 = batch_normalize(a1, is_training)
                            with tf.variable_scope('ud2'):
                                b1 = lrelu(deconv_layer(x2, [3, 3, 64, 64], [self.batch_size, self.weight, self.height, 64], 1))
                                #b1 = batch_normalize(b1, is_training)
                            with tf.variable_scope('ud3'):
                                c1 = lrelu(deconv_layer(x3, [3, 3, 64, 64], [self.batch_size, self.weight, self.height, 64], 1))
                                #c1 = batch_normalize(c1, is_training)
                            sum = tf.concat([a1,b1,c1],3)
                            #sum = batch_normalize(sum, is_training)
                            with tf.variable_scope('ud4'):
                                x1 = lrelu(deconv_layer(tf.concat([sum,x1],3), [1, 1, 64, 256], [self.batch_size, self.weight, self.height, 64], 1))
                                #x1 = batch_normalize(x1, is_training)
                            with tf.variable_scope('ud5'):
                                x2 = lrelu(deconv_layer(tf.concat([sum,x2],3), [1, 1, 64, 256], [self.batch_size, self.weight, self.height, 64], 1))
                                #x2 = batch_normalize(x2, is_training)
                            with tf.variable_scope('ud6'):
                                x3 = lrelu(deconv_layer(tf.concat([sum,x3],3), [1, 1, 64, 256], [self.batch_size, self.weight, self.height, 64], 1))
                                #x3 = batch_normalize(x3, is_training)
                    with tf.variable_scope('ud7'):
                        block_out = lrelu(deconv_layer(tf.concat([x1, x2, x3],3), [3, 3, 64, 192], [self.batch_size, self.weight, self.height, 64], 1))
                    #x = x1+x2+x3+x
                    x_f+=block_out
            with tf.variable_scope('conv_e8'):
                x_f = conv_layer(x_f, [3, 3, 64, 256], 1)
                x_f = lrelu(x_f)
            #res_in = x_f
            # mask

Then, in test.py, comment out these problematic lines:

            fake = sess.run(
                [model.ZConv_VDSR],
                feed_dict={x: input_, is_training: False})

and replace them with pass.

These fixes seem to work for me; I hope they are correct.
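
For context on why this changes the checkpoint keys: TF1 builds a variable's checkpoint key by joining the enclosing tf.variable_scope names with "/". A pure-Python sketch (no TensorFlow required) of the names involved here, assuming the outer scope is `generator` as the error message indicates:

```python
def scope_path(*scopes):
    """Join nested variable_scope names the way TF1 prefixes checkpoint keys."""
    return "/".join(scopes)

# A checkpoint trained with the block above plausibly contains keys such as:
trained_key = scope_path("generator", "block1ex2", "block1_1ex1", "ud1", "weight")

# while the shipped TESTGAN.py asks the restorer for:
missing_key = scope_path("generator", "conv_m2", "weight")

print(trained_key)  # generator/block1ex2/block1_1ex1/ud1/weight
print(missing_key)  # generator/conv_m2/weight
```

Since conv_m2 never appears among the scopes in the replacement block, its key cannot exist in a checkpoint trained with that code, which matches the restore error.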

@hqmf8104

> [quotes @paugalles's fix above in full]

Hi! Thanks for this. This is a bit of a stupid question, but where do you need to put the pretrained weights? Also, are you saying that:

        with tf.variable_scope('conv_m2'):
            x_f = conv_layer(x_f, [3, 3, 64, 256], 1)
            x_f = lrelu(x_f)
        res_in = x_f

should be replaced by the whole block you posted above?

@kuijiang94 (Owner)

I am sorry for this issue. The model.ZConv_VDSR should be [model.imitation_sr, model.base_sr, model.frame_sr]. I have corrected it.

@hqmf8104

> I am sorry for this issue. the model.ZConv_VDSR should be [model.imitation_sr, model.base_sr, model.frame_sr]. I have corrected it.

I'm afraid this generated the following error for me:

File "test.py", line 67, in
[model.model.imitation_sr, model.base_sr, model.frame_sr],
AttributeError: 'Model' object has no attribute 'model'
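
For what it's worth, this AttributeError points at a duplicated `model.` prefix in the pasted line rather than at the maintainer's fix itself. A minimal stand-in sketch (the class and its attribute values are placeholders, not the repo's real tensors):

```python
class Model:
    """Stand-in for the repo's Model class; attributes are placeholder strings."""
    imitation_sr = "imitation_sr_op"
    base_sr = "base_sr_op"
    frame_sr = "frame_sr_op"

model = Model()
# "'Model' object has no attribute 'model'" comes from writing
# model.model.imitation_sr; the fetch list should reference model directly:
fetches = [model.imitation_sr, model.base_sr, model.frame_sr]
print(fetches)  # ['imitation_sr_op', 'base_sr_op', 'frame_sr_op']
```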

kuijiang94 (Owner) commented Feb 22, 2021 via email


zensenlon commented Jun 23, 2021

> Same problem here.
> I have also trained for some epochs and used my trained weights for testing. Got the same problem

Did you solve that? I also got this problem, but hqmf8104's solution doesn't seem to work for me.
