
n_bins value #17

Open
talesa opened this issue Oct 5, 2020 · 4 comments

@talesa

talesa commented Oct 5, 2020

I think there might be a tiny mistake in the dequantization process at the moment.

I think that https://github.com/rosinality/glow-pytorch/blob/master/train.py#L99 should be
n_bins = 2. ** args.n_bits - 1.
rather than
n_bins = 2. ** args.n_bits
since, as far as I understand, the minimum difference between the input levels/bins, (a[1:] - a[:-1]).min(), should equal 1 / n_bins. In the following snippet (run after image, _ = next(dataset) on line 109 of train.py, https://github.com/rosinality/glow-pytorch/blob/master/train.py#L109):

In[1]: a = torch.unique(image.reshape(-1))

In[2]: (a[1:]-a[:-1]).min().item()
Out[2]: 0.003921568393707275

In[3]: 1/255.
Out[3]: 0.00392156862745098

In[4]: 1/256.
Out[4]: 0.00390625
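
The REPL check above can be reproduced without loading the dataset: an 8-bit image stores integer levels 0..255, and the usual float conversion divides by 255, so adjacent levels are spaced 1/(2^n_bits - 1) apart, not 1/2^n_bits. A minimal plain-Python sketch (no torch, names illustrative):

```python
n_bits = 8
n_bins_proposed = 2.0 ** n_bits - 1.0  # 255: actual spacing of the stored levels
n_bins_current = 2.0 ** n_bits         # 256: value currently used in train.py

# What a uint8 -> float conversion (e.g. torchvision's ToTensor) produces:
levels = [v / 255.0 for v in range(256)]
min_gap = min(b - a for a, b in zip(levels, levels[1:]))

print(abs(min_gap - 1.0 / n_bins_proposed) < 1e-12)  # True  (matches 1/255)
print(abs(min_gap - 1.0 / n_bins_current) < 1e-12)   # False (1/256 is off)
```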

Also, it's a bit confusing that n_bits defaults to 5, whereas the default bit depth for CelebA is 8; I'd change the default to 8.

@rosinality
Owner

Yes, it seems that the treatment of n_bits is problematic compared to the official implementation. (I don't know how I missed it.)

I used n_bits = 5 because the official implementation used it for celeba-hq.

@talesa
Author

talesa commented Oct 6, 2020

> I used n_bits = 5 because the official implementation used it for celeba-hq.

It seems to me that this implementation only uses the 8-bit version of the dataset (the default, if I'm not mistaken), as it doesn't seem to reduce the bit depth of the input data like https://github.com/openai/glow/blob/654ddd0ddd976526824455074aa1eaaa92d095d8/model.py#L153-L158 does. Correct me if I'm wrong somewhere; I don't know the openai/glow repo well.
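
For context, the bit-reduction step in question drops the low-order bits of each 8-bit pixel before scaling. A plain-Python sketch of that idea (function name and signature are illustrative, not from either repo):

```python
def reduce_bits(pixel_values, n_bits=5):
    """Reduce 8-bit integer pixels to n_bits levels, then scale to
    roughly [-0.5, 0.5), mirroring the preprocessing referenced above."""
    n_bins = 2 ** n_bits
    out = []
    for x in pixel_values:               # x is an integer in 0..255
        if n_bits < 8:
            x = x // 2 ** (8 - n_bits)   # keep only the top n_bits bits
        out.append(x / n_bins - 0.5)     # center around zero
    return out

print(reduce_bits([0, 255], n_bits=5))  # → [-0.5, 0.46875]
```

With n_bits = 5, pixel 255 maps to level 31 of 32, i.e. 31/32 - 0.5 = 0.46875; without this step, training at n_bits < 8 would use a n_bins inconsistent with the data's actual quantization.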

@rosinality
Owner

I had forgotten to add it. Anyway, 97081ff will resolve the issue.

@talesa
Author

talesa commented Oct 7, 2020

Thanks a lot!
I'm sorry I didn't create a pull request straight away myself; I wanted to check it with you first!
