
Simultaneous Tracking and Mapping? #4

Open
Yuliang-Zou opened this issue Jun 10, 2019 · 7 comments

Comments

@Yuliang-Zou

Hi, thanks for the great work. Could you provide demo code that performs tracking (camera pose estimation) and mapping (depth estimation) simultaneously?

@zachteed
Collaborator

Sure, I will add a demo for online tracking and mapping later this week. The demo code already performs simultaneous pose estimation and mapping, just over a small video clip, but I can add another demo showing how the code can be used on a full video sequence.

@zachteed
Collaborator

Hi, I just added a new demo showing how DeepV2D can be used as a SLAM system on NYU.

@Yuliang-Zou
Author

Cool~ Thanks!

@Yuliang-Zou
Author

Hi @zachteed , I modified your SLAM code a bit for a KITTI sequence, but the SLAM system does not seem to recover the absolute scale of the translation (I need to do global scale alignment for evaluation). Can the SLAM system predict camera poses at absolute scale, or did you apply some scaling factor during training?

For the KITTI sequence input, I scaled and cropped the images following the data preparation code, and I used the KITTI config file and the KITTI pre-trained model.

Thank you, and I look forward to your response.
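For context, the global scale alignment mentioned here is usually a least-squares fit of a single scale factor between the estimated and ground-truth camera positions, as in standard monocular odometry evaluation. A minimal sketch, assuming `t_est` and `t_gt` are N×3 arrays of time-aligned camera positions (`align_scale` is a hypothetical helper, not part of the DeepV2D code):

```python
import numpy as np

def align_scale(t_est, t_gt):
    """Least-squares scale s minimizing ||s * t_est - t_gt||^2."""
    t_est = np.asarray(t_est, dtype=float)
    t_gt = np.asarray(t_gt, dtype=float)
    # Closed-form solution: s = <t_est, t_gt> / <t_est, t_est>
    return np.sum(t_est * t_gt) / np.sum(t_est * t_est)

# Toy example: the estimate is the ground truth shrunk by 10x,
# so the recovered scale factor is close to 10.
t_gt = np.array([[0.0, 0.0, 1.0],
                 [0.0, 0.0, 2.0],
                 [0.5, 0.0, 3.0]])
t_est = t_gt / 10.0
print(align_scale(t_est, t_gt))
```

If the estimated trajectory is already at metric scale, the recovered factor should be close to 1; a consistently different factor (e.g. 10) suggests a unit mismatch rather than genuine scale drift.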

@zachteed
Collaborator

Hi, you should be able to recover the absolute scale of translation on the KITTI dataset. You may need to scale the outputs by 10, because the output units on KITTI are 0.1 meters.
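A minimal sketch of the rescaling step described above, assuming `depth` is an H×W depth map and `poses` is a stack of 4×4 camera pose matrices (`to_meters` and `KITTI_UNIT_SCALE` are hypothetical names for illustration, not part of the DeepV2D code):

```python
import numpy as np

KITTI_UNIT_SCALE = 10.0  # per the comment above, KITTI outputs are in units of 0.1 m

def to_meters(depth, poses, scale=KITTI_UNIT_SCALE):
    """Convert depths and pose translations into meters.

    Only the translation column of each 4x4 pose is scaled;
    rotations are dimensionless and left untouched.
    """
    depth_m = depth * scale
    poses_m = poses.copy()
    poses_m[..., :3, 3] *= scale
    return depth_m, poses_m
```

Usage: `depth_m, poses_m = to_meters(depth, poses)` after running inference, before comparing against metric ground truth.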

@Yuliang-Zou
Author

Thanks!

@Yuliang-Zou
Author

Hi, I wonder if I also need to scale the output when using the NYU pre-trained models. I am testing them on some sequences from the TUM RGB-D dataset, but the scale seems incorrect.

@Yuliang-Zou Yuliang-Zou reopened this Oct 23, 2019