When using the ONNX C API through the Rust bindings, we rely in more than one place on the input model containing the shape of the input and output tensors. This is fine in `set_input`, where the guest provides the shape of the input tensor (if the model doesn't contain it, we can just use the one supplied by the guest), but it is prone to failing in `compute` and `get_output`, where we have to store the output tensors and then return them to the guest in a unidimensional buffer, without any information about their shape.
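For the `set_input` case, the fallback amounts to something like the sketch below; `model_dims` and `guest_shape` are hypothetical placeholders, not the identifiers actually used in the bindings:

```rust
// Hypothetical sketch of the set_input fallback: prefer the shape declared in
// the model, but fall back to the shape the guest supplied when the model is
// missing dimensions. Names here are placeholders.
fn resolve_input_shape(
    model_dims: Option<&[Option<usize>]>,
    guest_shape: &[usize],
) -> Vec<usize> {
    match model_dims {
        // Only trust the model's shape if every dimension is concrete.
        Some(dims) if dims.iter().all(|d| d.is_some()) => {
            dims.iter().map(|d| d.unwrap()).collect()
        }
        // Otherwise use the shape supplied by the guest.
        _ => guest_shape.to_vec(),
    }
}
```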
Currently, in `compute`, we have to `unwrap` on the output's `dimensions()`, which could very well be `None`:
```rust
let shape = outputs
    .get(index)
    .unwrap()
    .dimensions()
    .map(|d| d.unwrap())
    .collect::<Vec<usize>>();
```
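One way to avoid the panic would be to propagate an error when the model doesn't declare concrete output dimensions (or, alternatively, to read the shape from the tensor actually produced at runtime). A rough sketch, assuming the surrounding function returns a `Result` and using `anyhow` purely for illustration:

```rust
// Sketch only: return an error instead of panicking when the model does not
// declare concrete output dimensions. The use of `anyhow` is illustrative; the
// real code would use whatever error type the bindings already define.
let shape: Vec<usize> = outputs
    .get(index)
    .ok_or_else(|| anyhow::anyhow!("no output with index {}", index))?
    .dimensions()
    .map(|d| d.ok_or_else(|| anyhow::anyhow!("output {} has an unknown dimension", index)))
    .collect::<Result<Vec<usize>, anyhow::Error>>()?;
```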
Update: we can handle this gracefully within the Tract runtime; we still need some more experimentation on how to do it properly when using the native ONNX runtime.
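With Tract this works because the tensors returned by `run()` carry their own shape, so the output shape doesn't have to come from the model's metadata at all. A minimal sketch, assuming the `tract-onnx` prelude API (paths and input shapes are placeholders, and exact signatures vary between versions):

```rust
// Minimal sketch using tract-onnx: the output shape is read from the computed
// tensor itself, so a model without declared output dimensions still works.
use tract_onnx::prelude::*;

fn infer() -> TractResult<()> {
    // Models with dynamic input dimensions may additionally need a
    // with_input_fact(...) call before optimizing.
    let model = tract_onnx::onnx()
        .model_for_path("model.onnx")?
        .into_optimized()?
        .into_runnable()?;

    let input: Tensor = tract_ndarray::Array4::<f32>::zeros((1, 3, 224, 224)).into();
    let outputs = model.run(tvec!(input.into()))?;

    // The computed tensor knows its own shape, regardless of the model's metadata.
    let shape: Vec<usize> = outputs[0].shape().to_vec();
    println!("output shape: {:?}", shape);
    Ok(())
}
```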