I was running into issues trying to re-create the original paper, and stumbled upon this repository.
I was able to re-create the results with the Caffe pretrained model (which expects images in the [0, 255] range), but got drastically different results with PyTorch's pretrained model (which expects images in the [0, 1] range). I noticed this snippet in your repository (pytorch-neural-style-transfer/utils/utils.py, lines 43 to 49 at f5650de):
I applied that same transformation and got results comparable to the original paper. I'm somewhat confused about why this works, though: if PyTorch's vgg19 was trained on millions of images in the [0, 1] range, wouldn't it just interpret anything above 1 as pure white?
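For concreteness, here is a minimal sketch of the kind of transformation being discussed (my reading of it, not a verbatim copy of the repository's code): rescale a [0, 1] image tensor to [0, 255] and subtract the ImageNet channel means expressed on the [0, 255] scale, which is the preprocessing the original Caffe VGG weights expect.

```python
import torch

# ImageNet channel means on the [0, 255] scale (standard values; the exact
# constants used by the repository may differ slightly).
IMAGENET_MEAN_255 = torch.tensor([123.675, 116.28, 103.53]).view(3, 1, 1)

def to_caffe_range(img_01: torch.Tensor) -> torch.Tensor:
    """Map a CHW float tensor in [0, 1] to the mean-centered [0, 255] range."""
    return img_01 * 255.0 - IMAGENET_MEAN_255
```

A pure-white [0, 1] image maps to roughly 131 / 139 / 151 per channel after centering, i.e. far outside the range the PyTorch-normalized model nominally sees.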
I have it on my backlog to try to make it work on the [0, 1] range, since that feels more natural for PyTorch models: as you said, they were pre-trained on [0, 1]-range imagery, in contrast with those old Caffe models.
What I did, because I was puzzled by the same thing, was pass a [0, 255]-range image (a dog photo, say) into VGG and check whether the classification came out correct. It did. My hypothesis is that this is a kind of scale symmetry: the network's predictions largely survive a positive rescaling of the input, so VGG can still classify correctly on [0, 255]-range inputs.
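The sanity check described above can be sketched as follows (this is my own illustration, not the repository's test code). As a partial explanation of the "symmetry" hypothesis: a network built only from bias-free convolutions, ReLUs, and max-pooling is exactly equivariant to positive input scaling, since ReLU(a·x) = a·ReLU(x) for a > 0; VGG's layers do have biases, so for the real model the argmax is only approximately preserved, which matches the observation that classification often still comes out right.

```python
import torch

def top1_agrees(model: torch.nn.Module, img_01: torch.Tensor) -> bool:
    """Check whether the model's top-1 class is the same for a [0, 1]-range
    image and its [0, 255]-scaled version."""
    with torch.no_grad():
        a = model(img_01).argmax(dim=1)
        b = model(img_01 * 255.0).argmax(dim=1)
    return bool((a == b).all())

# With torchvision one would run, e.g. (not executed here to avoid the
# pretrained-weight download):
#   from torchvision.models import vgg19
#   model = vgg19(pretrained=True).eval()
#   top1_agrees(model, dog_image_01)   # dog_image_01: 1x3x224x224 in [0, 1]
```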
It should work for the [0, 1] range too; I'd just need a bit more experimentation. If you figure it out before me, please feel free to open a PR and notify me.