As mentioned in issue #9 "DAVIS datafiles uncomplete":
"datafiles.tar in the provided Google Drive download link contains only triangulation data.
There are no JPEGImages/1080p and Annotations/1080p folders that python ./scripts/preprocess/davis/generate_frame_midas.py refers to."
So I manually downloaded the missing data from https://data.vision.ee.ethz.ch/csergi/share/davis/DAVIS-2017-Unsupervised-trainval-Full-Resolution.zip
After that, the structure looked as follows (see the sanity-check sketch after the tree):
datafiles
└── DAVIS
    ├── Annotations    --- missing in the supplied download links, downloaded manually from the DAVIS dataset
    │   └── 1080p
    │       ├── dog
    │       └── train
    ├── JPEGImages     --- missing in the supplied download links, downloaded manually from the DAVIS dataset
    │   └── 1080p
    │       ├── dog
    │       └── train
    └── triangulation  --- data from the supplied link
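Before rerunning the preprocessing, the layout can be verified with a small sanity check. This is only a sketch assuming the tree above; check_davis_layout.py, DATA_ROOT and the hard-coded track names are my own choices, not part of the repository:

# check_davis_layout.py -- quick sanity check of the folder layout shown above
# (DATA_ROOT and the track names are assumptions; adjust them to your checkout)
from pathlib import Path

DATA_ROOT = Path("./datafiles/DAVIS")
TRACKS = ["dog", "train"]

expected = [DATA_ROOT / "triangulation"]
for sub in ("JPEGImages", "Annotations"):
    for track in TRACKS:
        expected.append(DATA_ROOT / sub / "1080p" / track)

for path in expected:
    print(("OK      " if path.is_dir() else "MISSING ") + str(path))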
Only after that could I successfully perform all of the steps suggested in "DAVIS data preparation" (see the sketch after this list):
Run python ./scripts/preprocess/davis/generate_frame_midas.py.
Run python ./scripts/preprocess/davis/generate_flows.py
Run python ./scripts/preprocess/davis/generate_sequence_midas.py
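For convenience, the three steps can also be chained so the run stops at the first failure. This is just a sketch assuming the commands are executed from the repository root with the active conda environment; run_davis_preprocess.py is not part of the repository:

# run_davis_preprocess.py -- run the three DAVIS preprocessing scripts in order,
# aborting on the first non-zero exit code (assumes the repository root as cwd)
import subprocess
import sys

steps = [
    "./scripts/preprocess/davis/generate_frame_midas.py",
    "./scripts/preprocess/davis/generate_flows.py",
    "./scripts/preprocess/davis/generate_sequence_midas.py",
]

for script in steps:
    print("Running " + script + " ...")
    result = subprocess.run([sys.executable, script])
    if result.returncode != 0:
        sys.exit(script + " failed with exit code " + str(result.returncode))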
However, I still couldn't reproduce the presented results when running: bash ./experiments/davis/train_sequence.sh 0 --track_id dog
Output & Stacktrace:
D:\dynamic-video-depth-main>bash ./experiments/davis/train_sequence.sh 0 --track_id dog
python train.py --net scene_flow_motion_field --dataset davis_sequence --track_id train --log_time --epoch_batches 2000 --epoch 20 --lr 1e-6 --html_logger --vali_batches 150 --batch_size 1 --optim adam --vis_batches_vali 4 --vis_every_vali 1 --vis_every_train 1 --vis_batches_train 5 --vis_at_start --tensorboard --gpu 0 --save_net 1 --workers 4 --one_way --loss_type l1 --l1_mul 0 --acc_mul 1 --disp_mul 1 --warm_sf 5 --scene_lr_mul 1000 --repeat 1 --flow_mul 1 --sf_mag_div 100 --time_dependent --gaps 1,2,4,6,8 --midas --use_disp --logdir './checkpoints/davis/sequence/' --suffix 'track_{track_id}_{loss_type}_wreg_{warm_reg}_acc_{acc_mul}_disp_{disp_mul}_flowmul_{flow_mul}_time_{time_dependent}_CNN_{use_cnn}_gap_{gaps}_Midas_{midas}_ud_{use_disp}' --test_template './experiments/davis/test_cmd.txt' --force_overwrite --track_id dog
File "train.py", line 106
str_warning, f'ignoring the gpu set up in opt: {opt.gpu}. Will use all gpus in each node.')
^
SyntaxError: invalid syntax
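Since line 106 of train.py uses an f-string, a SyntaxError there usually means the file was parsed by a Python interpreter older than 3.6, for example a system Python that bash on Windows picks up instead of the conda environment's Python 3.7.10. This is only a guess on my side; a quick way to see which interpreter the shell actually resolves (which_python.py is a hypothetical helper, not part of the repository) is:

# which_python.py -- run it with the same `python` command that
# train_sequence.sh invokes, to see which interpreter is actually used
import sys

print("executable:", sys.executable)
print("version   :", sys.version)
if sys.version_info < (3, 6):
    # f-strings (as on train.py line 106) require Python >= 3.6
    print("This interpreter does not support f-strings; the SyntaxError above is expected.")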
I also noticed that no ./checkpoints folder was created.
A similar issue was mentioned in issue #8 "SyntaxError: invalid syntax".
Specs:
Windows 10
Anaconda: conda 4.11.0
Python 3.7.10
GPU: Quadro M6000 (12 GB)
All specified dependencies, including RAFT, are installed.