n_bins value #17

Comments
Yes, it seems like that is the treatment for the dataset I have used.
It seems to me that this implementation is only using the 8-bit version of the dataset (the default, if I'm not mistaken), as it doesn't seem to decrease the number of bits of the input data like in https://github.com/openai/glow/blob/654ddd0ddd976526824455074aa1eaaa92d095d8/model.py#L153-L158. Correct me if I'm wrong somewhere; I don't know the openai/glow repo much.
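For reference, here is a minimal sketch of the kind of bit-depth reduction being referred to (my own illustration, not the exact code from either repository; it assumes 8-bit images stored as float tensors with values in [0, 255]):

```python
import torch

def reduce_bit_depth(image, n_bits=5):
    # Assumes `image` holds 8-bit pixel values in [0, 255] as floats.
    n_bins = 2.0 ** n_bits
    if n_bits < 8:
        # Drop the least-significant bits so only 2 ** n_bits distinct levels remain.
        image = torch.floor(image / 2 ** (8 - n_bits))
    # Rescale to roughly [-0.5, 0.5] before adding uniform dequantization noise.
    return image / n_bins - 0.5
```

With `n_bits = 5` this leaves 32 distinct pixel levels instead of 256.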
I had forgotten to add it. Anyway, 97081ff will resolve the issue.
Thanks a lot!
I think there might be a tiny mistake in the dequantization process at the moment. I think that https://github.com/rosinality/glow-pytorch/blob/master/train.py#L99 should be

`n_bins = 2. ** args.n_bits - 1.`

rather than

`n_bins = 2. ** args.n_bits`

since, as far as I understand, in the following code snippet the minimum difference between the input levels/bin values,

`(a[1:] - a[:-1]).min()`

should be the same as `1 / n_bins` (run after `image, _ = next(dataset)` on line 109 in `train.py`, https://github.com/rosinality/glow-pytorch/blob/master/train.py#L109).

Also, it's a bit confusing that by default `n_bits` is set to 5, whereas by default `n_bits` for CelebA is 8; I'd change it to 8.
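To make that check concrete, here is a rough sketch of how the spacing between levels can be inspected (my own illustration, not the repository's code; it assumes the batch comes from torchvision's `ToTensor`, i.e. 8-bit pixels mapped to `k / 255` in `[0, 1]`, and that `a` denotes the sorted unique pixel values):

```python
import torch

# Stand-in for a batch produced by the data loader (8-bit pixels scaled by ToTensor).
image = torch.randint(0, 256, (16, 3, 64, 64)).float() / 255

a = torch.unique(image)                 # unique pixel levels, returned sorted
min_spacing = (a[1:] - a[:-1]).min()
print(min_spacing.item())               # 1 / 255 == 1 / (2 ** 8 - 1) for 8-bit data,
                                        # which is what motivates the `- 1.` suggestion
```

Assuming the fix in 97081ff applies the openai/glow-style preprocessing, with `n_bits = 5` the analogous spacing becomes `1 / 32`, which matches `1 / n_bins` when `n_bins = 2. ** args.n_bits`.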