The Google RefExp dataset is a collection of text descriptions of objects in images, built on top of the publicly available MS-COCO dataset. Whereas the image captions in MS-COCO apply to the entire image, this dataset focuses on text descriptions that allow one to uniquely identify a single object or region within an image. For more details, see the paper: Generation and Comprehension of Unambiguous Object Descriptions
The green dot marks the object being referred to. The sentences were written by humans so that each one uniquely describes the chosen object.
- python 2.7 (needs the numpy, scipy, matplotlib, and PIL packages, all included in Anaconda; see the quick check below)
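To confirm that these packages are available in your environment, a quick check like the following is enough (nothing here is specific to this toolbox):

```python
# Quick sanity check that the required packages are importable
# (all of them ship with Anaconda).
import numpy
import scipy
import matplotlib
import PIL

print('All required packages found.')
```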
To set up the toolbox, run:

```
cd $YOUR_PATH_TO_THIS_TOOLBOX
python setup.py
```
Running the setup.py script will do the following five things:
- Download Google Refexp Data
- Download and compile the COCO toolbox
- Download COCO annotations
- Download COCO images
- Align the Google Refexp Data with COCO annotations
At each step you will be prompted whether you would like to skip it. Note that the MS COCO images (13GB) and annotations (158MB) are large, so downloading all of them can take a while.
You can also download the Google Refexp data directly from this link.
If you have already worked with MS COCO and do not want two copies of it, you can create a symbolic link from external to your existing COCO toolbox, e.g.:
```
cd $YOUR_PATH_TO_THIS_TOOLBOX
ln -sf $YOUR_PATH_TO_COCO_TOOLBOX ./external/coco
```
Please make sure the following are in the expected locations:
- the compiled PythonAPI at external/coco/PythonAPI
- the annotation file at external/coco/annotations/instances_train2014.json
- the COCO images at external/coco/images/train2014/
You can create symbolic links if you have already downloaded the data and compiled the COCO toolbox.
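For instance, here is a minimal Python sketch that links an existing COCO download into the expected layout (COCO_ROOT is a placeholder for your own path; the equivalent ln -s commands work just as well):

```python
import os

# Placeholder: point this at your existing COCO download, which should
# contain the annotations/ and images/ directories listed above.
COCO_ROOT = '/path/to/your/coco'
# Run this from the root of this toolbox.
TOOLBOX_ROOT = os.getcwd()

# Make sure the parent directory exists before creating the links.
parent = os.path.join(TOOLBOX_ROOT, 'external', 'coco')
if not os.path.isdir(parent):
    os.makedirs(parent)

links = [
    (os.path.join(COCO_ROOT, 'annotations'), os.path.join(parent, 'annotations')),
    (os.path.join(COCO_ROOT, 'images'), os.path.join(parent, 'images')),
]
for src, dst in links:
    if not os.path.exists(dst):
        os.symlink(src, dst)
        print('Linked %s -> %s' % (dst, src))
```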
Then run setup.py to download the Google Refexp data and compile this toolbox. You can skip steps 2, 3, 4.
For visualization and utility functions, please see google_refexp_dataset_demo.ipynb.
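If you just want a quick look at the raw data outside the notebook, you can load the COCO-aligned annotation file directly. A minimal sketch (the filename below is an assumption; use whichever *_coco_aligned.json file setup.py placed in the data directory):

```python
import json

# Assumption: adjust this path to the *_coco_aligned.json file that
# setup.py downloaded and aligned for you.
DATASET_PATH = 'google_refexp_train_201511_coco_aligned.json'

with open(DATASET_PATH) as f:
    data = json.load(f)

# Inspect the overall structure before writing any task-specific code.
print('Top-level keys: %s' % sorted(data.keys()))
for key, value in data.items():
    if isinstance(value, (list, dict)):
        print('%s: %d entries' % (key, len(value)))
```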
For automatic and Amazon Mechanical Turk (AMT) evaluation of the comprehension and generation tasks, please see google_refexp_eval_demo.ipynb. The expected output format for a comprehension/generation algorithm is described in ./evaluation/format_comprehension_eval.md and ./evaluation/format_generation_eval.md.
We also provide two sample outputs for reference. For the comprehension task, the naive baseline randomly shuffles the region candidates (./evaluation/sample_results/sample_results_comprehension.json). For the generation task, the naive baseline simply outputs the class name of the object (./evaluation/sample_results/sample_results_generation.json).
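The fastest way to see what a valid results file looks like is to peek at one of these samples, e.g.:

```python
import json

# Print one entry of the provided comprehension baseline so you can match
# its structure in your own results file.
path = './evaluation/sample_results/sample_results_comprehension.json'
with open(path) as f:
    results = json.load(f)

first = results[0] if isinstance(results, list) else results
print(json.dumps(first, indent=2))
```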
If you are not familiar with AMT evaluations, please see this tutorial. The interface and APIs provided by this toolbox group 5 evaluations into one HIT. In our experiments, paying 2 cents per HIT yielded reasonable results.
If you find the dataset and toolbox useful in your research, please consider citing:
```
@inproceedings{mao2016generation,
  title={Generation and Comprehension of Unambiguous Object Descriptions},
  author={Mao, Junhua and Huang, Jonathan and Toshev, Alexander and Camburu, Oana and Yuille, Alan and Murphy, Kevin},
  booktitle={CVPR},
  year={2016}
}
```
This data is released by Google under the following license:
This work is licensed under a Creative Commons Attribution 4.0 International License.