
Virtual Batch Normalization #8

Open
sahiliitm opened this issue May 10, 2017 · 3 comments

@sahiliitm

sahiliitm commented May 10, 2017

If I understand the code correctly, it uses virtual batch normalization only for the inputs and not for the intermediate layers.

Is this how it was done in the Atari experiments to obtain the results stated in the paper?

Also, what was the network architecture used for the Atari domain?
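For context, a minimal NumPy sketch of virtual batch normalization in the spirit of Salimans et al. (2016): statistics come from a fixed reference batch rather than the current minibatch. This is an illustration, not code from this repo, and it omits the refinement where the current example is mixed into the reference statistics.

```python
import numpy as np

def virtual_batch_norm(x, ref_batch, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize activations x with the mean/variance of a fixed reference
    # batch, so each sample's output does not depend on the rest of its
    # minibatch. Applying this at every layer (not only the inputs) is
    # what the question above is about.
    mu = ref_batch.mean(axis=0)
    var = ref_batch.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta
```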

@joyousrabbit

@sahiliitm Hello, in the paper it is stated: "We used the same preprocessing and feedforward CNN architecture used by (Mnih et al., 2016)". So it should be the standard small Atari net: two convolutional layers followed by a fully connected layer.
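For concreteness, a sketch of that architecture as described in Mnih et al. (2013), which the A3C paper reuses. The PyTorch framing and the class name are mine, not from this repo:

```python
import torch.nn as nn

class AtariNet(nn.Module):
    """Sketch of the small Atari net: conv-conv-fc (Mnih et al., 2013)."""

    def __init__(self, num_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=8, stride=4),   # 4x84x84 -> 16x20x20
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2),  # 16x20x20 -> 32x9x9
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 256),
            nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, x):
        return self.net(x)
```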

@PatrykChrabaszcz

PatrykChrabaszcz commented May 18, 2017

Hi @sahiliitm

Could you point me to the code where you see the virtual batch normalization implementation?
I thought the only trace of it is:

```python
@property
def needs_ref_batch(self):
    return False
```

which suggests it is currently not implemented.
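If that reading is right, `needs_ref_batch` is a capability flag that no policy currently sets. A purely hypothetical sketch of what a policy honoring it might look like (names and methods invented for illustration, not from this repo):

```python
import numpy as np

class VBNPolicy:
    # Hypothetical policy that declares it wants a reference batch.
    @property
    def needs_ref_batch(self):
        return True

    def set_ref_batch(self, ref_batch):
        # Freeze normalization statistics from a fixed batch of
        # observations collected once, e.g. with random actions.
        self.ref_mean = ref_batch.mean(axis=0)
        self.ref_std = ref_batch.std(axis=0) + 1e-8

    def normalize(self, obs):
        return (obs - self.ref_mean) / self.ref_std
```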

@louiskirsch

In the code here we have just z-normalization of the inputs, no virtual batch norm.
There are also no hyperparameters for Atari. OpenAI, please be more reproducible! :-)
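For comparison, a minimal sketch of what input-only z-normalization amounts to (running statistics over raw observations, nothing per layer); the names are illustrative:

```python
import numpy as np

def z_normalize(obs, running_mean, running_std, eps=1e-8):
    # Plain z-normalization of raw observations using running statistics.
    # There is no reference batch and no per-layer normalization here,
    # i.e. this is not virtual batch norm.
    return (obs - running_mean) / (running_std + eps)
```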
