Adjust serving notebook to account for underlying shape changes #631
Conversation
Those two changes are a workaround to make everything line up while we sort out shapes and ragged support in T4R, but should be okay for now AFAIK. We use padding in the NVT workflow, and capture the `sparse_max` list lengths in the schema, and the serving notebook then runs successfully.
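For context, the idea behind the padding step is illustrated below in a plain-Python sketch. This is not the actual NVTabular workflow code; the column names and `sparse_max` values are made-up examples, and the real workflow does the padding via NVT ops. It just shows the effect that the serving notebook later relies on: every list column ends up with a fixed length that can be read back out of the schema.

```python
# Illustrative sketch only -- not the actual NVTabular workflow code.
# Shows the effect of padding ragged list features to the fixed lengths
# recorded in `sparse_max` (hypothetical example values below).

sparse_max = {"item_id-list": 20, "category-list": 20}

def pad_list(values, max_length, pad_value=0):
    """Truncate or right-pad a single list feature to `max_length`."""
    values = list(values)[:max_length]
    return values + [pad_value] * (max_length - len(values))

# Example batch: ragged lists of differing lengths per row.
batch = {
    "item_id-list": [[1, 2, 3], [4, 5]],
    "category-list": [[7], [8, 9, 10]],
}

padded = {
    col: [pad_list(row, sparse_max[col]) for row in rows]
    for col, rows in batch.items()
}
# Every row of a column now has exactly sparse_max[col] values, so the
# model can treat the inputs as dense, fixed-shape tensors at serving time.
```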
Documentation preview: https://nvidia-merlin.github.io/Transformers4Rec/review/pr-631
Line #1. `for col_name, col_schema in input_schema.column_schemas.items():`
So this cell is needed to tell the Ensemble API that the model does not consume ragged inputs, since the value count min and max are the same? Would `input_schema` not print out the required properties without executing this cell?
Yeah, that's right. This is a manual step to capture the info from the `sparse_max` dictionary in the schemas, which helps the serving code know the shapes of the inputs we ended up with after padding in the Workflow.
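For anyone reading along, the manual cell being discussed looks roughly like the sketch below. It assumes a `with_properties` helper on the column schema (merlin-core) and example `sparse_max` values; the real notebook cell may set these properties slightly differently. The key point is that setting the value-count min equal to max marks the list columns as fixed-length rather than ragged.

```python
# Rough sketch of the manual schema-adjustment cell; `with_properties`
# and the exact "value_count" property layout are assumptions, and the
# sparse_max values here are just example numbers.
from merlin.schema import Schema

sparse_max = {"item_id-list": 20, "category-list": 20}

adjusted_columns = []
for col_name, col_schema in input_schema.column_schemas.items():
    if col_name in sparse_max:
        length = sparse_max[col_name]
        # min == max signals to downstream consumers (the Ensemble API)
        # that this list column always has exactly `length` values,
        # i.e. it is padded and not ragged.
        col_schema = col_schema.with_properties(
            {"value_count": {"min": length, "max": length}}
        )
    adjusted_columns.append(col_schema)

input_schema = Schema(adjusted_columns)
```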
In the longer term, we'll probably want to re-evaluate this and see if we can migrate from `sparse_max` to capturing shapes in the model schemas (which you're already working on the first part of).
Thanks for that. LGTM. I only added one comment, out of curiosity. Thanks!