
Benchmarking error handling #177

Closed
katxiao opened this issue Jan 17, 2023 · 0 comments · Fixed by #178
katxiao commented Jan 17, 2023

Problem Description

If a benchmarking run fails for a (synthesizer, dataset) combination, we should catch the error and continue benchmarking the remaining (synthesizer, dataset) combinations. For the failed runs, we can display NaN in the results table for those rows. If a detailed_results_folder is provided, we can write the error to the detailed_results_folder.
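The flow described above could be sketched as follows. This is a minimal sketch, not the project's actual implementation: `run_single_benchmark`, the column names, and the error-file naming are all assumptions made for illustration.

```python
import os
import traceback
import pandas as pd


def benchmark_all(synthesizers, datasets, run_single_benchmark,
                  detailed_results_folder=None):
    """Run every (synthesizer, dataset) combination, catching per-run errors."""
    rows = []
    for synthesizer in synthesizers:
        for dataset in datasets:
            try:
                score = run_single_benchmark(synthesizer, dataset)
            except Exception:
                # On failure, record NaN for this row and keep going.
                score = float('nan')
                if detailed_results_folder is not None:
                    # Optionally persist the traceback for later inspection.
                    path = os.path.join(
                        detailed_results_folder,
                        f'{synthesizer}_{dataset}_error.txt')
                    with open(path, 'w') as error_file:
                        error_file.write(traceback.format_exc())
            rows.append({
                'Synthesizer': synthesizer,
                'Dataset': dataset,
                'Score': score,
            })
    return pd.DataFrame(rows)
```

With this shape, one failing synthesizer only blanks out its own rows, and the rest of the results table is still produced.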

Expected behavior

If there are errors, we should display NaN in the results table for the entries that could not be calculated.

[Screenshot: results table showing NaN entries for the failed runs]

@katxiao katxiao self-assigned this Jan 17, 2023
@katxiao katxiao added this to the 0.6.0 milestone Jan 18, 2023