✨ Add GANomaly #70
Conversation
Nice 🎁
anomalib/models/ganomaly/config.yaml (Outdated)

```yaml
tiling:
  apply: false
  tile_size: null
  stride: null
  remove_border_count: 0
  use_random_tiling: false
  random_tile_count: 16
```
I think tiling should be enabled by default. The encoder and decoder networks in this model can handle images of at most 64x64 pixels, so we need to use either 32x32 or 64x64 windows.
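For example, enabling tiling with non-overlapping 64x64 windows might look like this (a sketch using the same keys as the quoted config; the chosen values are suggestions, not part of the PR):

```yaml
tiling:
  apply: true        # enable tiling by default
  tile_size: 64      # encoder/decoder handle at most 64x64 inputs
  stride: 64         # non-overlapping tiles; 32 would also fit the model
  remove_border_count: 0
  use_random_tiling: false
  random_tile_count: 16
```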
I'll probably approve after the tiling changes are added
Thanks, overall the PR looks good. Just some comments on the structure of the encoder and decoder modules, but this could be a matter of taste. Feel free to ignore them if you disagree.
```python
self.model.add_module(
    f"initial-conv-{num_input_channels}-{n_features}",
    nn.Conv2d(num_input_channels, n_features, kernel_size=4, stride=2, padding=1, bias=False),
)
self.model.add_module(f"initial-relu-{n_features}", nn.LeakyReLU(0.2, inplace=True))
```
I find this module a bit hard to read. Maybe instead of collecting all the layers in the `self.model` attribute, you could separate them into multiple modules and then call them in the `forward` method? Something like:

```python
def __init__(self):
    self.initial_conv = nn.Conv2d(...)
    self.initial_relu = nn.LeakyReLU(...)
    self.extra_layers = nn.Sequential(...)
    self.pyramid_features = nn.Sequential(...)
    ...

def forward(self, x):
    x = self.initial_conv(x)
    x = self.initial_relu(x)
    x = self.extra_layers(x)
    x = self.pyramid_features(x)
    ...
```

I always like to be able to understand the data flow in a torch module by just looking at the `forward` method.
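A self-contained version of that layout might look as follows. The layer sizes and the pyramid block here are illustrative assumptions, not the PR's actual architecture:

```python
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Encoder with one named attribute per stage, so forward() shows the data flow."""

    def __init__(self, num_input_channels: int = 3, n_features: int = 64) -> None:
        super().__init__()
        self.initial_conv = nn.Conv2d(
            num_input_channels, n_features, kernel_size=4, stride=2, padding=1, bias=False
        )
        self.initial_relu = nn.LeakyReLU(0.2, inplace=True)
        # left empty here; extra conv blocks could be appended if configured
        self.extra_layers = nn.Sequential()
        # one illustrative downsampling block standing in for the full pyramid
        self.pyramid_features = nn.Sequential(
            nn.Conv2d(n_features, n_features * 2, kernel_size=4, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(n_features * 2),
            nn.LeakyReLU(0.2, inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.initial_conv(x)      # 64x64 -> 32x32
        x = self.initial_relu(x)
        x = self.extra_layers(x)
        x = self.pyramid_features(x)  # 32x32 -> 16x16
        return x
```

With this layout, printing the module and reading `forward` both describe the same four-stage pipeline, which is the readability benefit being suggested.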
```python
self.model.add_module(
    f"initial-{latent_vec_size}-{n_input_features}-convt",
    nn.ConvTranspose2d(latent_vec_size, n_input_features, kernel_size=4, stride=1, padding=0, bias=False),
)
self.model.add_module(f"initial-{n_input_features}-batchnorm", nn.BatchNorm2d(n_input_features))
self.model.add_module(f"initial-{n_input_features}-relu", nn.ReLU(True))
```
Same as in `Encoder`: I think it would be good to split `self.model` up into smaller chunks.
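Applied to the decoder's first stage, the split could look like this minimal sketch (the latent and feature sizes are illustrative assumptions, not the PR's actual values):

```python
import torch
import torch.nn as nn


class Decoder(nn.Module):
    """Decoder whose initial convt/batchnorm/relu trio is one named block."""

    def __init__(self, latent_vec_size: int = 100, n_input_features: int = 64) -> None:
        super().__init__()
        self.latent_input = nn.Sequential(
            nn.ConvTranspose2d(
                latent_vec_size, n_input_features, kernel_size=4, stride=1, padding=0, bias=False
            ),
            nn.BatchNorm2d(n_input_features),
            nn.ReLU(True),
        )

    def forward(self, input_tensor: torch.Tensor) -> torch.Tensor:
        # further stages (inverse pyramid, extra layers, final convt) would follow here
        return self.latent_input(input_tensor)  # (N, latent, 1, 1) -> (N, features, 4, 4)
```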
@@ -0,0 +1,227 @@

```python
"""Torch models defining encoder, decoder, Generator and Discriminator."""
```
Reminder: we need to check whether this code duplicates anything from the DCGAN repo. If it does, we should either acknowledge the authors (depending on the license) or write our own implementation.
🔥🔥🔥
Thanks for the changes. Just a minor comment.
"""Return latent vectors.""" | ||
|
||
output = self.input_layers(input_tensor) | ||
if len(self.extra_layers) > 0: |
I don't think this check is needed. If the sequential module is empty, calling it will just return the unaltered input.
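The claim is easy to verify in isolation: an empty `nn.Sequential` acts as the identity, so the guard around the call can be dropped.

```python
import torch
import torch.nn as nn

extra_layers = nn.Sequential()  # no submodules registered
x = torch.randn(2, 3, 8, 8)

# Calling the empty Sequential returns the input tensor unchanged,
# so `output = self.extra_layers(output)` is safe without a length check.
assert torch.equal(extra_layers(x), x)
```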
Thanks! I didn't know this. I have updated the PR.
"""Return generated image.""" | ||
output = self.latent_input(input_tensor) | ||
output = self.inverse_pyramid(output) | ||
if len(self.extra_layers) > 0: |
Same as above.
Description
Add GANomaly algorithm.
Checklist: