🐞 Fix inferencer in Gradio #332
Conversation
    Returns:
        Namespace: Parsed command-line arguments.
    """
    parser = ArgumentParser()
    parser.add_argument("--config", type=Path, required=True, help="Path to a model config file")
Maybe we could keep `config` to be consistent with other entrypoints?
I wasn't sure what to call it. We can either pass the model config or just the model name now. If I switch back to `config`, should I drop calling the inferencer with just the model name?
`--config` would be in line with the new PL CLI, so I would prefer that.

> If I switch back to config should I drop calling inferencer with just model name?

Not sure if I get this part.
What I mean is that initially, with the `config` parameter, we had to pass the YAML file to the inferencer. I changed it to `model` so that we can pass either the YAML file or only the model name. So, I wasn't sure what to call this parameter. If I change it back to `config`, then passing just the model name might not match the parameter name. In that case we could drop passing only the model name and keep the YAML file as the only option. That might be better in some sense, as it forces people to ensure that their training config matches the config they use for inference. Otherwise, given just the model name, the inferencer might pick up the default config, which might not match the config that was actually used for training.
I agree. Passing only `config` would ensure the right config file is passed. Otherwise, it would be the default config, which may not be the same as the one used to train the model.
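A minimal sketch of the `--config`-only approach discussed above (the helper name `get_args` and the `argv` parameter are illustrative, not taken from the PR):

```python
from argparse import ArgumentParser, Namespace
from pathlib import Path


def get_args(argv=None) -> Namespace:
    """Parse CLI arguments for the Gradio inference entrypoint.

    Keeping ``--config`` as the only way to select a model forces the
    caller to pass the same config file that was used for training,
    instead of silently falling back to a default config.
    """
    parser = ArgumentParser()
    parser.add_argument("--config", type=Path, required=True, help="Path to a model config file")
    parser.add_argument("--weight_path", type=Path, required=True, help="Path to a model weights")
    parser.add_argument("--meta_data", type=Path, required=False, help="Path to JSON file containing the metadata.")
    return parser.parse_args(argv)
```

Because `--config` is `required=True`, calling the script without it fails fast rather than inferring a default config from a model name.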
tools/inference_gradio.py
Outdated
-    inferencer = OpenVINOInferencer(
-        config=config, path=weight_path, meta_data_path=meta_data
-    )
+    inferencer = OpenVINOInferencer(config=config, path=weight_path, meta_data_path=meta_data_path)
same as above
Sorry for being pedantic, but one final comment :)
tools/inference_gradio.py
Outdated
parser.add_argument("--config", type=Path, required=True, help="Path to a model config file")
parser.add_argument("--weight_path", type=Path, required=True, help="Path to a model weights")
parser.add_argument("--meta_data", type=Path, required=False, help="Path to JSON file containing the metadata.")
There is a bit of inconsistency in naming. In fact, this is the case for the other entrypoints as well. I think we should stick to one of the following:
- `config`, `weights`, `meta_data`
- `config_path`, `weight_path`, `meta_data_path`
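The second option could look like the sketch below; the argument names are illustrative of the proposed convention, not the final API:

```python
from argparse import ArgumentParser
from pathlib import Path

# Sketch of the `_path`-suffix convention: every path-valued flag gets
# the same suffix, so the CLI reads consistently across entrypoints.
parser = ArgumentParser()
parser.add_argument("--config_path", type=Path, required=True, help="Path to a model config file")
parser.add_argument("--weight_path", type=Path, required=True, help="Path to the model weights")
parser.add_argument("--meta_data_path", type=Path, required=False, help="Path to a JSON metadata file")

args = parser.parse_args(["--config_path", "cfg.yaml", "--weight_path", "model.bin"])
```

With matching flag and variable names, calls like `OpenVINOInferencer(config=config, path=weight_path, meta_data_path=meta_data_path)` stay consistent with the parser, avoiding the `meta_data` vs. `meta_data_path` mismatch fixed in this PR.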
That's alright. The whole point of the review is to ensure code quality.
Description
Passes the correct parameters to the inferencers.
Fixes Gradio Error #296
Changes
Checklist