We are trying the authors' PIN code on Criteo and Avazu. We can reproduce the AUC score of 78.72% on Avazu, but we can only achieve an AUC score of 80.18% on Criteo. However, if we use a different embedding size for each field, we get an AUC score of 80.21% on Criteo, which is not the setting claimed in the paper. Could the authors clarify this issue?
I think you can uncomment lines 285 and 574 of tf_main.py and tune "eval_level" and "decay", e.g., eval_level = 5, decay = 0.8.
Learning rate decay can alleviate overfitting, but that code is commented out because it is incompatible with the distributed version. I used learning rate decay on every model on Criteo, which showed improvements. I think I forgot to mention this Criteo-specific setting in the paper.
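For readers who cannot uncomment the original code, here is a minimal sketch of the kind of step-wise learning-rate decay described above: every `eval_level` evaluation rounds, the learning rate is multiplied by `decay` (e.g. 0.8). The function name and structure are illustrative assumptions, not the actual tf_main.py implementation.

```python
def decayed_lr(base_lr, eval_round, eval_level=5, decay=0.8):
    """Step-wise decay: multiply base_lr by `decay` once
    every `eval_level` evaluation rounds (illustrative sketch,
    not the actual tf_main.py code)."""
    return base_lr * decay ** (eval_round // eval_level)

# e.g. with base_lr = 1e-3, eval_level = 5, decay = 0.8:
# rounds 0-4 use 1e-3, rounds 5-9 use 8e-4, rounds 10-14 use 6.4e-4, ...
```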
Besides, Table 6 compares models under restricted settings. You can find better results for the corresponding models in Table 9, which were carefully tuned with grid search.