An adapted PyTorch implementation of the video-to-command model described in the paper
"Translating Videos to Commands for Robotic Manipulation with Deep Recurrent Neural Networks" (ICRA 2018). Check the authors' original implementation in TensorFlow.
- PyTorch (tested on 1.3)
- TorchVision
- numpy
- PIL
- coco-caption (a modified version is used to support Python 3)
- OpenCV (optional, only needed if you extract features from your own videos)
The video2command model is an encoder-decoder neural network that learns to generate a short sentence which can be used to command a robot to perform various manipulation tasks. The architecture of the network is shown below:
Compared to the architecture used in the original implementation, the implementation here takes more inspiration from the seq2seq architecture: the final state of the video encoder is injected directly into the command decoder as its initial state. This yields a 2~3% improvement in the BLEU-1 to BLEU-4 scores.
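A minimal sketch of this encoder-decoder idea with seq2seq-style state injection is given below. It is not the exact code in this repository; all layer sizes, class names, and the single-layer LSTM choice are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VideoEncoder(nn.Module):
    """LSTM over per-frame CNN features; its final state summarizes the clip."""
    def __init__(self, feature_dim=2048, hidden_dim=512):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)

    def forward(self, clip_features):
        # clip_features: (batch, num_frames, feature_dim), e.g. ResNet50 features
        _, (h_n, c_n) = self.lstm(clip_features)
        return h_n, c_n

class CommandDecoder(nn.Module):
    """LSTM language model initialized with the encoder's final state."""
    def __init__(self, vocab_size, embed_dim=512, hidden_dim=512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)

    def forward(self, captions, encoder_state):
        # captions: (batch, seq_len) token ids; encoder_state: (h_n, c_n)
        embedded = self.embedding(captions)
        outputs, _ = self.lstm(embedded, encoder_state)  # state injection
        return self.fc(outputs)  # (batch, seq_len, vocab_size) logits

# Example forward pass with dummy data
encoder, decoder = VideoEncoder(), CommandDecoder(vocab_size=1000)
features = torch.randn(4, 30, 2048)         # 4 clips, 30 frames each
captions = torch.randint(0, 1000, (4, 10))  # 4 commands, 10 tokens each
logits = decoder(captions, encoder(features))
print(logits.shape)  # torch.Size([4, 10, 1000])
```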
To repeat the video2command experiment:

- Clone the repository.
- Download the IIT-V2C dataset, extract it, and set up the directory path as `datasets/IIT-V2C`.
- For CNN features, two options are provided:
    - Use the pre-extracted ResNet50 features provided by the original author.
    - Perform the feature extraction yourself: first run `avi2frames.py` under the folder `experiments/experiment_IIT-V2C` to convert all videos into images, download the `*.pth` weights for ResNet50 converted from Caffe, then run `extract_features.py` under the same folder (a rough sketch of this step is shown after this list).
    - Note that the author's pre-extracted features seem to be of better quality and can lead to 1~2% higher metric scores.
- To begin training, run `train_iit-v2c.py`.
- For evaluation, first run `evaluate_iit-v2c.py` to generate predictions from all saved checkpoints, then run `cocoeval_iit-v2c.py` to calculate scores for the predictions (see the scoring example after this list).
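If you choose to extract the features yourself, the per-frame extraction could look roughly like the sketch below. The paths, frame file format, and the use of torchvision's ImageNet-pretrained ResNet50 (rather than the Caffe-converted weights mentioned above) are assumptions for illustration; see `avi2frames.py` and `extract_features.py` for the actual pipeline.

```python
import glob
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# ResNet50 with the classification head removed -> 2048-d pooled features.
# NOTE: torchvision ImageNet weights, not the Caffe-converted *.pth weights.
resnet = models.resnet50(pretrained=True)
resnet.fc = nn.Identity()
resnet = resnet.eval().to(device)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_clip_features(frame_dir):
    """Return a (num_frames, 2048) tensor of ResNet50 features for one clip."""
    frames = sorted(glob.glob(frame_dir + '/*.png'))  # assumed frame format
    batch = torch.stack([preprocess(Image.open(f).convert('RGB')) for f in frames])
    with torch.no_grad():
        features = resnet(batch.to(device))
    return features.cpu()

# features = extract_clip_features('datasets/IIT-V2C/images/some_clip')  # hypothetical path
```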
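For the scoring step, the coco-caption scorers are typically invoked as in the minimal example below. The import path assumes the modified Python 3 coco-caption package is importable as `pycocoevalcap`, and the sentences are made up; `cocoeval_iit-v2c.py` handles the real prediction files and the full set of metrics.

```python
from pycocoevalcap.bleu.bleu import Bleu

# Both dicts map an example id to a list of sentences.
gts = {0: ['pick up the bottle'], 1: ['pour water into the cup']}  # ground truth
res = {0: ['pick up the bottle'], 1: ['pour the water']}           # predictions

score, _ = Bleu(4).compute_score(gts, res)
print(score)  # [BLEU-1, BLEU-2, BLEU-3, BLEU-4]
```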
If you find this repository useful, please give it a star. Please open an issue if you find any potential bugs in the code.
Some references that helped my implementation: