Stopping criterion of ALS #9
Comments
It usually doesn't hurt to run more iterations. For instance, this post ran this code and computed P@5 at each iteration, and found that it converged after around 15 iterations or so (which is the default here). I'm going to add some evaluation code here (calculating training loss, and maybe MAP) at some point, but I don't think that using it for early termination is worthwhile: metrics like MAP or P@K can be slow to calculate because we have to sort the predictions. While the training itself can exploit the sparsity of the dataset, sorting means there is a cost even for missing items in the validation phase. (The paper you linked to is on the explicit case, where you can safely ignore missing items.)
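To make that cost concrete, here is a minimal sketch of P@K over a dense score matrix (the function name and shapes are my own illustration, not code from this repo). Even using `argpartition` to avoid a full sort, the top-K selection still has to touch every candidate item per user, including the unobserved ones that a sparsity-aware training sweep never visits:

```python
import numpy as np

def precision_at_k(scores, held_out, k=5):
    """Mean P@K: the fraction of each user's top-k scored items that
    appear in the held-out validation set.

    scores:   (n_users, n_items) float array of predicted scores
    held_out: (n_users, n_items) boolean mask of validation items
    """
    # argpartition selects the k largest scores per user without a full
    # sort, but it still scans all n_items columns for every user.
    topk = np.argpartition(-scores, k, axis=1)[:, :k]
    hits = np.take_along_axis(held_out, topk, axis=1)
    return hits.mean()

scores = np.array([[5., 4., 3., 2., 1., 0.],
                   [0., 1., 2., 3., 4., 5.]])
held_out = np.zeros((2, 6), dtype=bool)
held_out[0, [0, 2]] = True   # user 0: top-2 is {0, 1}, one hit
held_out[1, [4, 5]] = True   # user 1: top-2 is {4, 5}, two hits
print(precision_at_k(scores, held_out, k=2))  # 0.75
```

The contrast with training is the point: an ALS sweep costs roughly proportional to the number of *observed* entries, while ranking metrics pay for the full user-by-item grid.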
I added code to calculate the loss at each iteration here da1a7fa
Is it a good idea to stop ALS based on some criterion (RMSE, etc.) computed on a validation dataset? The paper uses probe datasets as the validation set: once the RMSE is less than 1e-9, they stop the iteration.
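The idea in the question can be sketched as follows: a small explicit-feedback ALS (the setting of the linked paper) that solves each user's and item's factors on observed entries only, and stops once the validation RMSE stops improving by more than a tolerance. This is an illustrative implementation under my own assumptions (function name, masks, and the improvement-based criterion are hypothetical), not code from this library:

```python
import numpy as np

def als_early_stop(R, train_mask, val_mask, k=2, reg=0.1,
                   max_iters=50, tol=1e-4, seed=0):
    """ALS on observed entries of R, stopping when the validation
    RMSE improves by less than `tol` between iterations."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = rng.normal(scale=0.1, size=(m, k))
    V = rng.normal(scale=0.1, size=(n, k))
    I = np.eye(k)

    def rmse(mask):
        err = (R - U @ V.T)[mask]
        return np.sqrt(np.mean(err ** 2))

    prev, history = np.inf, []
    for _ in range(max_iters):
        # Fix V, solve a k-by-k ridge system per user on observed items.
        for u in range(m):
            obs = train_mask[u]
            if obs.any():
                Vo = V[obs]
                U[u] = np.linalg.solve(Vo.T @ Vo + reg * I,
                                       Vo.T @ R[u, obs])
        # Fix U, solve per item on observing users.
        for i in range(n):
            obs = train_mask[:, i]
            if obs.any():
                Uo = U[obs]
                V[i] = np.linalg.solve(Uo.T @ Uo + reg * I,
                                       Uo.T @ R[obs, i])
        val = rmse(val_mask)
        history.append(val)
        if prev - val < tol:   # improvement below tolerance: stop
            break
        prev = val
    return U, V, history
```

Note this checks *improvement* in validation RMSE rather than an absolute threshold like 1e-9, which only makes sense when the data can actually be fit that closely; an improvement-based criterion is the more common choice for a probe set.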