
[Feature request / help] Evaluate on different dataset to training dataset #2290

Closed
1 task done
Peter-Devine opened this issue Jan 22, 2024 · 1 comment · Fixed by #4691
Labels
solved This problem has been already solved

Comments

@Peter-Devine
Reminder

  • I have read the README and searched the existing issues.

Reproduction

I would like to know if there is an option to evaluate the model on a different dataset from the training dataset.

For example, I would like to train on MADLAD and evaluate my model on UltraChat-200K at every evaluation epoch. Is this possible?

If not, could this be implemented in a future release?

Thanks!
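A minimal sketch of what such a setup could look like as a LLaMA-Factory-style YAML config. The `eval_dataset` key is the feature being requested here (it did not exist at the time of this issue; the fix landed in #4691), and the dataset names `madlad` and `ultrachat_200k` are hypothetical entries assumed to be registered in `dataset_info.json` — exact key names and values may differ from the shipped implementation.

```yaml
# Hypothetical config sketch: train on one dataset, evaluate on another.
# Key names follow LLaMA-Factory's YAML convention; treat as illustrative only.
model_name_or_path: meta-llama/Llama-2-7b-hf
stage: sft
do_train: true
finetuning_type: lora
template: default

dataset: madlad                # training dataset (assumed registered in dataset_info.json)
eval_dataset: ultrachat_200k   # separate evaluation dataset (the requested feature)

output_dir: saves/sft-demo
per_device_train_batch_size: 1
num_train_epochs: 3
eval_strategy: epoch           # evaluate on eval_dataset at the end of every epoch
```

With a split like this, the trainer would compute eval loss on `ultrachat_200k` at each epoch boundary while gradient updates come only from `madlad`.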

Expected behavior

No response

System Info

No response

Others

No response

@hiyouga hiyouga added pending This problem is yet to be addressed in-progress The related features are in the progress labels Jan 22, 2024
@Katehuuh
Contributor

Katehuuh commented Apr 3, 2024

> I would like to know if there is an option to evaluate the model on a different dataset

Similar to request #2616

@hiyouga hiyouga removed the in-progress The related features are in the progress label Apr 25, 2024
@hiyouga hiyouga mentioned this issue Jul 14, 2024
2 tasks
@hiyouga hiyouga added solved This problem has been already solved and removed pending This problem is yet to be addressed labels Jul 14, 2024