2025-02-18 15:13:14.431495: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable
TF_ENABLE_ONEDNN_OPTS=0
2025-02-18 15:13:15.373157: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable
TF_ENABLE_ONEDNN_OPTS=0
Using TensorFlow backend.
D:\SRGAN-master\SRGAN-master\srgan_env\lib\site-packages\tensorlayerx\__init__.py:47: UserWarning: The version of the backend you have installed does not match the specified backend version and may not work, please install version tensorflow 2.4.0.
warnings.warn("The version of the backend you have installed does not match the specified backend version "
[TLX] [!] samples exists ...
[TLX] [!] models exists ...
[TLX] Conv2d conv2d_1: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv2d_2: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
2025-02-18 15:13:22.182368: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
[TLX] BatchNorm batchnorm2d_1: momentum: 0.900000 epsilon: 0.000010 act: ReLU is_train: True
[TLX] Conv2d conv2d_3: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_2: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_4: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_3: momentum: 0.900000 epsilon: 0.000010 act: ReLU is_train: True
[TLX] Conv2d conv2d_5: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_4: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_6: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_5: momentum: 0.900000 epsilon: 0.000010 act: ReLU is_train: True
[TLX] Conv2d conv2d_7: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_6: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_8: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_7: momentum: 0.900000 epsilon: 0.000010 act: ReLU is_train: True
[TLX] Conv2d conv2d_9: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_8: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_10: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_9: momentum: 0.900000 epsilon: 0.000010 act: ReLU is_train: True
[TLX] Conv2d conv2d_11: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_10: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_12: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_11: momentum: 0.900000 epsilon: 0.000010 act: ReLU is_train: True
[TLX] Conv2d conv2d_13: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_12: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_14: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_13: momentum: 0.900000 epsilon: 0.000010 act: ReLU is_train: True
[TLX] Conv2d conv2d_15: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_14: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_16: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_15: momentum: 0.900000 epsilon: 0.000010 act: ReLU is_train: True
[TLX] Conv2d conv2d_17: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_16: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_18: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_17: momentum: 0.900000 epsilon: 0.000010 act: ReLU is_train: True
[TLX] Conv2d conv2d_19: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_18: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_20: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_19: momentum: 0.900000 epsilon: 0.000010 act: ReLU is_train: True
[TLX] Conv2d conv2d_21: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_20: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_22: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_21: momentum: 0.900000 epsilon: 0.000010 act: ReLU is_train: True
[TLX] Conv2d conv2d_23: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_22: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_24: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_23: momentum: 0.900000 epsilon: 0.000010 act: ReLU is_train: True
[TLX] Conv2d conv2d_25: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_24: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_26: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_25: momentum: 0.900000 epsilon: 0.000010 act: ReLU is_train: True
[TLX] Conv2d conv2d_27: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_26: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_28: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_27: momentum: 0.900000 epsilon: 0.000010 act: ReLU is_train: True
[TLX] Conv2d conv2d_29: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_28: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_30: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_29: momentum: 0.900000 epsilon: 0.000010 act: ReLU is_train: True
[TLX] Conv2d conv2d_31: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_30: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_32: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_31: momentum: 0.900000 epsilon: 0.000010 act: ReLU is_train: True
[TLX] Conv2d conv2d_33: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_32: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_34: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_33: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_35: out_channels : 256 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] SubpixelConv2d subpixelconv2d_1: scale: 2 act: ReLU
[TLX] Conv2d conv2d_36: out_channels : 256 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] SubpixelConv2d subpixelconv2d_2: scale: 2 act: ReLU
[TLX] Conv2d conv2d_37: out_channels : 3 kernel_size: (1, 1) stride: (1, 1) pad: SAME act: Tanh
[TLX] Conv2d conv2d_38: out_channels : 64 kernel_size: (4, 4) stride: (2, 2) pad: SAME act: LeakyReLU
[TLX] Conv2d conv2d_39: out_channels : 128 kernel_size: (4, 4) stride: (2, 2) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_34: momentum: 0.900000 epsilon: 0.000010 act: LeakyReLU is_train: True
[TLX] Conv2d conv2d_40: out_channels : 256 kernel_size: (4, 4) stride: (2, 2) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_35: momentum: 0.900000 epsilon: 0.000010 act: LeakyReLU is_train: True
[TLX] Conv2d conv2d_41: out_channels : 512 kernel_size: (4, 4) stride: (2, 2) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_36: momentum: 0.900000 epsilon: 0.000010 act: LeakyReLU is_train: True
[TLX] Conv2d conv2d_42: out_channels : 1024 kernel_size: (4, 4) stride: (2, 2) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_37: momentum: 0.900000 epsilon: 0.000010 act: LeakyReLU is_train: True
[TLX] Conv2d conv2d_43: out_channels : 2048 kernel_size: (4, 4) stride: (2, 2) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_38: momentum: 0.900000 epsilon: 0.000010 act: LeakyReLU is_train: True
[TLX] Conv2d conv2d_44: out_channels : 1024 kernel_size: (1, 1) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_39: momentum: 0.900000 epsilon: 0.000010 act: LeakyReLU is_train: True
[TLX] Conv2d conv2d_45: out_channels : 512 kernel_size: (1, 1) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_40: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Conv2d conv2d_46: out_channels : 128 kernel_size: (1, 1) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_41: momentum: 0.900000 epsilon: 0.000010 act: LeakyReLU is_train: True
[TLX] Conv2d conv2d_47: out_channels : 128 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_42: momentum: 0.900000 epsilon: 0.000010 act: LeakyReLU is_train: True
[TLX] Conv2d conv2d_48: out_channels : 512 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: No Activation
[TLX] BatchNorm batchnorm2d_43: momentum: 0.900000 epsilon: 0.000010 act: No Activation is_train: True
[TLX] Elementwise elementwise_1: fn: add act: LeakyReLU
[TLX] Flatten flatten_1:
[TLX] Linear linear_1: 1 No Activation
[TLX] Conv2d conv1_1: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv1_2: out_channels : 64 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] MaxPool2d pool1: kernel_size: (2, 2) stride: (2, 2) padding: SAME return_mask: False
[TLX] Conv2d conv2_1: out_channels : 128 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv2_2: out_channels : 128 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] MaxPool2d pool2: kernel_size: (2, 2) stride: (2, 2) padding: SAME return_mask: False
[TLX] Conv2d conv3_1: out_channels : 256 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv3_2: out_channels : 256 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv3_3: out_channels : 256 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv3_4: out_channels : 256 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] MaxPool2d pool3: kernel_size: (2, 2) stride: (2, 2) padding: SAME return_mask: False
[TLX] Conv2d conv4_1: out_channels : 512 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv4_2: out_channels : 512 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv4_3: out_channels : 512 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] Conv2d conv4_4: out_channels : 512 kernel_size: (3, 3) stride: (1, 1) pad: SAME act: ReLU
[TLX] MaxPool2d pool4: kernel_size: (2, 2) stride: (2, 2) padding: SAME return_mask: False
[TLX] Restore pre-trained weights
[TLX] Loading (3, 3, 3, 64) in conv1_1
[TLX] Loading (64,) in conv1_1
[TLX] Loading (3, 3, 64, 64) in conv1_2
[TLX] Loading (64,) in conv1_2
[TLX] Loading (3, 3, 64, 128) in conv2_1
[TLX] Loading (128,) in conv2_1
[TLX] Loading (3, 3, 128, 128) in conv2_2
[TLX] Loading (128,) in conv2_2
[TLX] Loading (3, 3, 128, 256) in conv3_1
[TLX] Loading (256,) in conv3_1
[TLX] Loading (3, 3, 256, 256) in conv3_2
[TLX] Loading (256,) in conv3_2
[TLX] Loading (3, 3, 256, 256) in conv3_3
[TLX] Loading (256,) in conv3_3
[TLX] Loading (3, 3, 256, 256) in conv3_4
[TLX] Loading (256,) in conv3_4
[TLX] Loading (3, 3, 256, 512) in conv4_1
[TLX] Loading (512,) in conv4_1
[TLX] Loading (3, 3, 512, 512) in conv4_2
[TLX] Loading (512,) in conv4_2
[TLX] Loading (3, 3, 512, 512) in conv4_3
[TLX] Loading (512,) in conv4_3
[TLX] Loading (3, 3, 512, 512) in conv4_4
[TLX] Loading (512,) in conv4_4
[TLX] Input _inputlayer_1: (16, 3, 96, 96)
Traceback (most recent call last):
File "D:\SRGAN-master\SRGAN-master\train.py", line 115, in <module>
G.init_build(tlx.nn.Input(shape=(16, 3, 96, 96))) # NHWC format
File "D:\SRGAN-master\SRGAN-master\srgan_env\lib\site-packages\tensorlayerx\nn\core\core_tensorflow.py", line 634, in init_build
self.forward(*inputs, **kwargs)
File "D:\SRGAN-master\SRGAN-master\srgan.py", line 71, in forward
x = self.subpiexlconv1(x) # data_format argument removed
File "D:\SRGAN-master\SRGAN-master\srgan_env\lib\site-packages\tensorlayerx\nn\core\core_tensorflow.py", line 173, in call
output = self.forward(inputs, *args, **kwargs)
File "D:\SRGAN-master\SRGAN-master\srgan_env\lib\site-packages\tensorlayerx\nn\layers\convolution\super_resolution.py", line 184, in forward
outputs = self.depth_to_space(inputs, data_format="NHWC")
TypeError: call() got an unexpected keyword argument 'data_format'
How can I solve this error? Thank you.
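A few observations that may help (untested sketches, not a verified fix for your exact tensorlayerx version). The traceback shows that `SubpixelConv2d.forward` in `tensorlayerx\nn\layers\convolution\super_resolution.py` (line 184) passes `data_format="NHWC"` to a wrapped op whose `__call__` does not accept that keyword, so one commonly reported workaround is to edit that line to call the op without the keyword, e.g. `outputs = self.depth_to_space(inputs)`. Separately, note that the input shape `(16, 3, 96, 96)` is channels-first (NCHW) even though the comment says NHWC; if the layers expect NHWC, the input should be `(16, 96, 96, 3)`. For reference, the subpixel layer itself only performs a depth-to-space (pixel shuffle) rearrangement, which this minimal NumPy sketch reproduces for NHWC tensors:

```python
import numpy as np

def depth_to_space(x, scale):
    """NHWC depth-to-space (pixel shuffle): (N, H, W, C*r*r) -> (N, H*r, W*r, C).

    Matches the channel ordering used by tf.nn.depth_to_space with NHWC data:
    input channel index (i*r + j)*C_out + c maps to spatial offset (i, j).
    """
    n, h, w, c = x.shape
    r = scale
    assert c % (r * r) == 0, "channel count must be divisible by scale**2"
    c_out = c // (r * r)
    # Split the channel axis into (row offset, col offset, output channel).
    x = x.reshape(n, h, w, r, r, c_out)
    # Interleave the offsets with the spatial axes: (n, h, i, w, j, c_out).
    x = x.transpose(0, 1, 3, 2, 4, 5)
    return x.reshape(n, h * r, w * r, c_out)

# Example: a (1, 2, 2, 4) tensor upscaled by 2 becomes (1, 4, 4, 1).
x = np.arange(16, dtype=np.float32).reshape(1, 2, 2, 4)
y = depth_to_space(x, 2)
print(y.shape)  # (1, 4, 4, 1)
```

If editing the installed package is undesirable, the same rearrangement could be dropped into the model's `forward` in place of the failing `subpiexlconv1` call (using the backend's own `tf.nn.depth_to_space(x, 2)` on an NHWC tensor), keeping the surrounding convolutions unchanged.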