Hello,

the odometry message currently published by the driver doesn't include any covariance estimates. This means that when using downstream fusion algorithms (like robot_localization), the covariances have to be estimated experimentally somehow (in addition to the issue discussed here: #29).
The SDK docs for the proto API, however, do specify a pose covariance message here, but I didn't find any use of it in the ros2 driver code. Does the robot output any valid estimates in this message, and would it be possible to acquire the data and pass it into the ROS odometry message?
Especially when using the vision-based odometry from the robot, I think it would be more precise to have the covariance come directly from the internal algorithm than from second-hand experimental estimates.
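For context on what "pass it into the ROS odometry message" would mean in practice: nav_msgs/Odometry stores its pose covariance as a flat, row-major 6x6 matrix over (x, y, z, rot_x, rot_y, rot_z). Below is a minimal sketch of how a 6x6 covariance from the robot could be copied over, assuming the SDK ever exposes one; the helper name is made up and not part of the driver.

```python
# Sketch only: map a hypothetical 6x6 covariance from the robot into
# the Odometry message's flat, row-major pose.covariance field.
import numpy as np
from nav_msgs.msg import Odometry


def fill_pose_covariance(odom: Odometry, cov_6x6: np.ndarray) -> Odometry:
    """Copy a 6x6 covariance matrix (x, y, z, rot_x, rot_y, rot_z) into odom."""
    assert cov_6x6.shape == (6, 6)
    # pose.covariance is a fixed-size float64[36] array in row-major order.
    for i, value in enumerate(cov_6x6.flatten()):
        odom.pose.covariance[i] = float(value)
    return odom
```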
Looking at the SDK docs, it looks like the only places this covariance message is exposed are graph nav waypoint detection and fiducial detection. I unfortunately don't think it's currently possible to get the covariance of the robot's odometry pose itself through the SDK. This could be a good issue to raise with Boston Dynamics customer support, as I agree it would be best to have the covariance come directly from the robot.
Thanks for the response. Just to follow up, I opened a support ticket with them, but got a negative response. So, sadly, I don't think that this is going to be a thing anytime soon.
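In case it helps anyone hitting the same limitation, here is a rough sketch of the experimental workaround we ended up with: a small relay node that injects hand-tuned covariances into the driver's odometry before robot_localization consumes it. The topic names and variance values below are placeholders, not anything provided by the driver, and would need tuning for your setup.

```python
# Hypothetical workaround: republish the driver's odometry with fixed,
# experimentally tuned covariances so robot_localization has something to use.
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry

# Hand-tuned variances for (x, y, z, rot_x, rot_y, rot_z); placeholder values.
POSE_VARIANCES = [0.01, 0.01, 0.01, 0.005, 0.005, 0.005]
TWIST_VARIANCES = [0.02, 0.02, 0.02, 0.01, 0.01, 0.01]


class CovarianceRelay(Node):
    def __init__(self):
        super().__init__('odometry_covariance_relay')
        self.pub = self.create_publisher(Odometry, 'odometry/with_covariance', 10)
        self.sub = self.create_subscription(Odometry, 'odometry', self.callback, 10)

    def callback(self, msg: Odometry):
        # Write the variances onto the diagonals of the 6x6 row-major matrices.
        for i in range(6):
            msg.pose.covariance[i * 6 + i] = POSE_VARIANCES[i]
            msg.twist.covariance[i * 6 + i] = TWIST_VARIANCES[i]
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(CovarianceRelay())


if __name__ == '__main__':
    main()
```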