
ICCP2021: Depth from Defocus with Learned Optics for Imaging and Occlusion-aware Depth Estimation


Project

This is the code repository for Depth from Defocus with Learned Optics for Imaging and Occlusion-aware Depth Estimation (ICCP 2021).

Conda environment

Run the following commands to create the conda environment and install the remaining dependency.

conda create --name learned_defocus python=3.8 kornia pytorch-lightning=1.0.2 cudatoolkit=11.0 pytorch=1.7 \
  numpy scipy numba scikit-image torchvision matplotlib opencv pytest openexr-python -c pytorch -c conda-forge -y
pip install git+https://github.com/cheind/[email protected] --no-deps

Dataset for training

Download the datasets from SceneFlow and DualPixel. To complete the sparse depth map, the central view of the DualPixel dataset is filled with a Python port of the NYU Depth V2 toolbox. After downloading the datasets, place them under the data/training_data directory. The dataset paths can be changed in dataset/dualpixel.py and dataset/sceneflow.py.
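The depth-completion step above can be approximated with a simple nearest-neighbor fill. This is only a sketch: the repository ports the NYU Depth V2 toolbox (a colorization-based fill), and `fill_sparse_depth` is a hypothetical helper name, not a function from this codebase.

```python
import numpy as np
from scipy import ndimage


def fill_sparse_depth(depth, invalid=0.0):
    """Fill invalid pixels of a sparse depth map with the nearest valid value.

    Hypothetical helper: a nearest-neighbor stand-in for the NYU Depth V2
    colorization-based fill used by the repository.
    """
    mask = depth == invalid  # True where depth is missing
    # For every pixel, indices of the nearest valid (mask == False) pixel.
    _, idx = ndimage.distance_transform_edt(mask, return_indices=True)
    return depth[tuple(idx)]
```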

How to run the training code

python snapshotdepth_trainer.py \
  --gpus 4 --batch_sz 3 --distributed_backend ddp  --max_epochs 100  --optimize_optics  --psfjitter  --replace_sampler_ddp False

Checkpoint and captured data

An example trained checkpoint and a captured image from the fabricated DOE are available on Google Drive (checkpoint and image).

How to run the inference code on real captured data

Download the captured image and the checkpoint, and place them in the data directory.

python run_trained_snapshotdepth_on_captured_images.py \
  --ckpt_path data/checkpoint.ckpt \
  --captimg_path data/captured_data/outdoor1_predemosaic.tif 

This inference code runs on the CPU.
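The captured file is a pre-demosaic (Bayer raw) image; the repository lists pytorch-debayer among its dependencies for this step. As a minimal illustration only, here is a half-resolution demosaic assuming an RGGB pattern (the sensor's actual pattern is an assumption, and `demosaic_rggb_nn` is a hypothetical helper):

```python
import numpy as np


def demosaic_rggb_nn(raw):
    """Half-resolution demosaic of an RGGB Bayer raw image.

    Illustration only; the repository uses pytorch-debayer, and RGGB
    is an assumed Bayer pattern.
    """
    rgb = np.empty((raw.shape[0] // 2, raw.shape[1] // 2, 3), dtype=float)
    rgb[..., 0] = raw[0::2, 0::2]                            # red sites
    rgb[..., 1] = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0  # mean of the two greens
    rgb[..., 2] = raw[1::2, 1::2]                            # blue sites
    return rgb
```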

Example input and output:

(Figures: example input, estimated image, and estimated depth.)

Raw data for the fabricated DOE

The design of the fabricated DOE is available here. The unit is meters, and the pixel size is 1 μm.
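Since the height map is stored in meters, it maps to a phase profile via φ = 2π (n − 1) h / λ. A hedged sketch: the wavelength and refractive index below are assumed example values (fused silica near 550 nm), not parameters taken from the paper, and `doe_phase_delay` is a hypothetical helper.

```python
import numpy as np


def doe_phase_delay(height_m, wavelength_m=550e-9, n_refractive=1.46):
    """Phase delay (radians) induced by a DOE height map given in meters.

    Assumed example values: 550 nm wavelength, refractive index 1.46.
    """
    return 2.0 * np.pi * (n_refractive - 1.0) * height_m / wavelength_m
```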

Citation

Hayato Ikoma, Cindy M. Nguyen, Christopher A. Metzler, Yifan Peng, Gordon Wetzstein, Depth from Defocus with Learned Optics for Imaging and Occlusion-aware Depth Estimation, IEEE International Conference on Computational Photography (ICCP), 2021

@article{Ikoma:2021,
  author  = {Hayato Ikoma and Cindy M. Nguyen and Christopher A. Metzler and Yifan Peng and Gordon Wetzstein},
  title   = {Depth from Defocus with Learned Optics for Imaging and Occlusion-aware Depth Estimation},
  journal = {IEEE International Conference on Computational Photography (ICCP)},
  year    = {2021}
}

Contact

Please direct questions to [email protected].

Acknowledgement

We thank the developers of the open-source software used in our project, including PyTorch, PyTorch Lightning, NumPy, SciPy, Kornia, pytorch-debayer, Matplotlib, OpenCV, and Fiji.
