Hi,
Thanks for sharing this great work.
I'm wondering: where does the nyu_train.tfrecords file (https://github.com/princeton-vl/DeepV2D#nyuv2-1) come from?
It seems there are 13,776 examples, each with 9 RGB images, 1 depth image, and smaller fields such as intrinsics and poses.
The file is about 138GB, but NYU Depth V2 is more like 400GB, which surprises me (even accounting for different encodings). Could this file have been built from NYU Depth V1, which is about 90GB? And is this the file used in the experiments reported in the paper?
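For what it's worth, a back-of-envelope calculation suggests 138GB is roughly consistent with storing the frames uncompressed, which would explain why the tfrecords file is smaller than the full compressed NYU Depth V2 release. This sketch assumes NYU's standard 640x480 resolution, uint8 RGB, and float32 depth (the actual record layout may differ):

```python
# Rough size estimate for nyu_train.tfrecords, assuming (hypothetically)
# uncompressed 640x480 frames: uint8 RGB and float32 depth.
H, W = 480, 640
rgb_bytes = H * W * 3            # one uint8 RGB frame
depth_bytes = H * W * 4          # one float32 depth map (assumption)
per_example = 9 * rgb_bytes + depth_bytes  # intrinsics/poses are negligible
total_gb = 13776 * per_example / 1e9
print(f"{total_gb:.1f} GB")      # ~131 GB, in the ballpark of the observed 138 GB
```

So the file size alone doesn't necessarily point to NYU Depth V1; it could simply reflect a subsampled set of V2 sequences stored without JPEG/PNG compression.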