Simultaneous Tracking and Mapping? #4
Sure, I will add a demo for online tracking and mapping later this week. The demo code already performs simultaneous pose estimation and mapping, just over a small video clip. But I can make an additional demo showing how the code can be used for a full video sequence.
Hi, I just added a new demo showing how DeepV2D can be used as a SLAM system on NYU.
Cool~ Thanks!
Hi @zachteed, I modified your SLAM code a bit for a KITTI sequence, but it seems that the SLAM system cannot recover the absolute scale of the translation (I need to do global scale alignment for evaluation). I wonder whether the SLAM system can predict camera poses at absolute scale, or whether you applied some scaling factor during training. For the KITTI sequence input, I scaled and cropped the images following the data preparation code, and I also used the KITTI config file and the KITTI pre-trained model. Thank you; I look forward to your response.
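For reference, the "global scale alignment" mentioned here is commonly computed as the median ratio of translation magnitudes between ground truth and prediction. A minimal sketch (the helper name `align_scale` is hypothetical and not part of the DeepV2D code):

```python
import numpy as np

def align_scale(pred_t, gt_t):
    """Estimate a single global scale aligning predicted translations to ground truth.

    pred_t, gt_t: (N, 3) arrays of per-frame translation vectors.
    Returns the scale factor and the rescaled predictions.
    """
    pred_norms = np.linalg.norm(pred_t, axis=1)
    gt_norms = np.linalg.norm(gt_t, axis=1)
    # median ratio is robust to a few outlier frames; guard against zero norms
    scale = np.median(gt_norms / np.maximum(pred_norms, 1e-8))
    return scale, pred_t * scale
```

If the estimated scale is consistent across sequences, the system is likely off by a fixed unit convention rather than failing to recover scale.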
Hi, you should be able to recover the absolute scale of translation on the KITTI dataset. You may need to scale the outputs by 10, because the output units on KITTI are 0.1 meters.
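The fix described above can be sketched as follows. The factor of 10 comes directly from this reply; the helper `scale_outputs` is hypothetical, not part of the DeepV2D API:

```python
import numpy as np

# Per the reply above, KITTI outputs need a x10 scale to obtain meters.
KITTI_SCALE = 10.0

def scale_outputs(poses, depths, scale=KITTI_SCALE):
    """Rescale pose translations and depth maps to metric units.

    poses: (N, 4, 4) camera pose matrices; depths: (N, H, W) depth maps.
    Rotations are unitless and left unchanged; inputs are not modified in place.
    """
    poses = poses.copy()
    poses[:, :3, 3] *= scale  # only the translation column carries units
    return poses, depths * scale
```

Applying the same factor to both translations and depths keeps the reconstruction geometrically consistent, since depth and baseline share the same unit.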
Thanks!
Hi, I wonder if I also need to scale the output when using the NYU pre-trained models. I am testing it on some sequences from the TUM RGB-D dataset, but it seems that the scale is not correct.
Hi, thanks for the great work. I wonder if you could provide demo code that performs tracking (camera pose estimation) and mapping (depth estimation) simultaneously.