Unable to reproduce the results on Criteo. #2

Open
cjxnn opened this issue Jan 8, 2019 · 2 comments


cjxnn commented Jan 8, 2019

We are running the authors' PIN code on Criteo and Avazu. We can reproduce the 78.72% AUC on Avazu, but we only reach 80.18% AUC on Criteo. If we instead use a different embedding size for each field, we get 80.21% AUC on Criteo, but that is not the setting claimed in the paper. Could the authors clarify this?

@Atomu2014
Owner

Hi,

I think you can uncomment lines 285 and 574 of tf_main.py and tune "eval_level" and "decay", e.g., eval_level = 5, decay = 0.8.

Learning rate decay can alleviate overfitting, but that code is commented out because it is incompatible with the distributed version. I used learning rate decay with every model on Criteo, and it showed improvements. I think I forgot to mention this Criteo-specific setting in the paper.
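For readers unfamiliar with the schedule: multiplying the learning rate by a constant factor after each evaluation round is standard exponential decay. A minimal sketch in plain Python (this is an illustration, not the actual code in tf_main.py; the function name and the interpretation of "decay per eval round" are assumptions):

```python
def decayed_lr(base_lr, decay, eval_round):
    """Exponential learning-rate decay (illustrative sketch):
    multiply the base rate by `decay` once per completed eval round."""
    return base_lr * decay ** eval_round

# With the values suggested above (decay = 0.8) and a hypothetical
# base rate of 1e-3, the schedule shrinks geometrically:
schedule = [decayed_lr(1e-3, 0.8, r) for r in range(4)]
# round 0: 1e-3, round 1: 8e-4, round 2: 6.4e-4, round 3: 5.12e-4
```

With eval_level = 5 the model is evaluated five times per epoch, so the decay above would be applied several times within a single epoch.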

Besides, Table 6 compares models under restricted settings. You can find better results for the corresponding models in Table 9, which were carefully tuned with grid search.

Willing to provide further help~

@Atomu2014
Owner

Be careful that learning rate decay is not fully implemented in the distributed version.
