Hi, thanks for sharing your code! I am testing it on the NYUv2 dataset. ^ _ ^
I ran your code in the 'preprocess_data/superpixel/' folder and generated a superpixel image like this
Then I converted that image to a .bin file using your code.
I generated the semantic segmentation with an FCN, for which there is a model trained on this dataset.
The segmentation image looks like this
Then I converted the network output to a .bin file with this code:
net.blobs['prob'].data[0].transpose([1,2,0]).flatten().tofile('/home/x/bin/nyu'+ str(i) + '.bin')
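For completeness, the full export loop I ran looks roughly like this (a sketch of my local setup: the prototxt/weights paths, image list, and mean values are placeholders, and the preprocessing must match whatever the FCN was trained with):

import glob
import numpy as np
import caffe

# Placeholder paths for my local setup.
net = caffe.Net('fcn_deploy.prototxt', 'fcn_nyu.caffemodel', caffe.TEST)
image_paths = sorted(glob.glob('/home/x/rgb/*.png'))

for i, path in enumerate(image_paths):
    # caffe.io.load_image gives HxWx3 RGB in [0,1]; convert to
    # BGR, scale back to [0,255], and subtract the training mean.
    img = caffe.io.load_image(path)
    img = img[:, :, ::-1] * 255.0
    img -= np.array([104.0, 117.0, 123.0])  # mean values depend on the model
    net.blobs['data'].reshape(1, 3, img.shape[0], img.shape[1])
    net.blobs['data'].data[0] = img.transpose(2, 0, 1)
    net.forward()
    # 'prob' is num_classes x H x W; save it as H x W x num_classes float32
    # so each pixel's class probabilities are contiguous in the file.
    net.blobs['prob'].data[0].transpose([1, 2, 0]).flatten().tofile(
        '/home/x/bin/nyu' + str(i) + '.bin')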
And I obtained the camera poses from ORB-SLAM2.
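Specifically, I converted ORB-SLAM2's TUM-format trajectory (each line: timestamp tx ty tz qx qy qz qw) into one row-major 3x4 camera-to-world matrix per line, which is the pose format I understood the mapper to expect; if that assumption is wrong it could explain the bad result. My conversion sketch:

import numpy as np

def quat_to_rot(qx, qy, qz, qw):
    # Standard unit-quaternion to rotation-matrix conversion.
    n = np.sqrt(qx*qx + qy*qy + qz*qz + qw*qw)
    qx, qy, qz, qw = qx/n, qy/n, qz/n, qw/n
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)]])

with open('KeyFrameTrajectory.txt') as fin, open('poses.txt', 'w') as fout:
    for line in fin:
        t, tx, ty, tz, qx, qy, qz, qw = map(float, line.split())
        # 3x4 camera-to-world matrix: [R | t], written row-major.
        T = np.hstack([quat_to_rot(qx, qy, qz, qw),
                       np.array([[tx], [ty], [tz]])])
        fout.write(' '.join('%.9f' % v for v in T.flatten()) + '\n')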
Then I ran mapping_3d; there were no warnings or errors.
However, the result seems to be wrong. -_-
The images produced in the 'crf_3d_reproj' folder seem to be wrong, like this
As a result, the semantic 3D mapping looks bad, like this
Could you please help me and tell me what the possible reasons might be?
Thanks!
Hi, thanks for testing. I cannot pinpoint the root cause right now. Some parameters are tuned for outdoor KITTI and may not suit small-scale indoor scenes, for example the grid resolution and size, and the CRF position kernel std and weight. Before tuning parameters, make sure the calibration and image size in the main file are correct. Then experiment step by step: first verify that the geometric 3D mapping alone is good; then run without CRF optimization (label back-projection only); then with CRF but without the hierarchical part; finally with the hierarchical CRF. I also looked through the code and seem to have found a bug in the main file: I set curr_transToWolrd(2,3)=1.0 when saving images. Could you try commenting that line out as well?
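For example, a quick way to sanity-check one of the probability .bin files (a sketch: adjust height, width, and num_classes to your model; this assumes NYUv2's 640x480 resolution and the raw float32 HxWxC layout used above):

import numpy as np

height, width, num_classes = 480, 640, 40  # num_classes depends on your FCN
probs = np.fromfile('/home/x/bin/nyu0.bin', dtype=np.float32)
assert probs.size == height * width * num_classes, \
    'got %d floats, expected %d' % (probs.size, height * width * num_classes)
# If the layout is correct, per-pixel probabilities should sum to ~1.
probs = probs.reshape(height, width, num_classes)
print(probs.sum(axis=2).mean())  # should be close to 1.0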