This repository contains the code for TRANS, the benchmark introduced in our paper for explicitly studying the stop and go behaviors of pedestrians in urban traffic. TRANS is built on top of several existing autonomous driving datasets annotated with pedestrian walking behaviors (see Table I), so it includes transition samples from diverse traffic scenarios behind a unified interface.
```bash
# To clone the repository using HTTPS
git clone https://github.com/vita-epfl/pedestrian-transition-dataset.git
```
The project is written and tested with Python 3.8. The interface also requires several external libraries; all required packages are listed in `requirements.txt`. To install the dependencies, run:

```bash
pip install -r requirements.txt
```
Please ensure all expected modules are properly installed before using the interface.
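As a quick sanity check, a minimal sketch like the following (using the standard `pkg_resources` helper; not part of this repository) verifies that everything listed in `requirements.txt` is installed:

```python
# Minimal dependency check (not part of the repository): raises an error
# if any package from requirements.txt is missing or version-conflicting.
import pkg_resources

with open('requirements.txt') as f:
    requirements = [line.strip() for line in f
                    if line.strip() and not line.startswith('#')]
pkg_resources.require(requirements)
print('All dependencies satisfied.')
```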
Currently we augment three existing self-driving datasets:
- Joint Attention in Autonomous Driving (JAAD) Dataset
- Pedestrian Intention Estimation (PIE) Dataset
- Trajectory Inference using Targeted Action priors Network (TITAN) Dataset
For JAAD:
- Download the videos and annotations from the official page.
- Use the provided scripts to extract images from the videos.
- Use JAAD's interface to generate the complete annotations in the form of a Python dictionary. More precisely, use `generate_database()` to obtain all JAAD annotations in dictionary form as a `.pkl` file (see the sketch after this list). Please rename the `.pkl` file as `JAAD_DATA.pkl` and place it in `DATA/annotations/JAAD/anns`.
- Train / val / test split: link.
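The annotation step above might look like the following (a sketch only: the `jaad_data` module and its arguments come from the official JAAD repository, so verify the exact names there):

```python
# Sketch of generating the JAAD annotation dictionary; module and argument
# names are assumed from the official JAAD interface and may differ.
import pickle
from jaad_data import JAAD  # assumed import from the official JAAD repo

jaad = JAAD(data_path='path/to/JAAD')  # root folder of the JAAD dataset
db = jaad.generate_database()          # nested dict with all annotations

# Save the dictionary to the location expected by TRANS.
with open('DATA/annotations/JAAD/anns/JAAD_DATA.pkl', 'wb') as f:
    pickle.dump(db, f)
```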
For PIE:
- Download the videos and annotations from the official page.
- Use the provided scripts to extract images from the videos.
- Use PIE's interface to generate the complete annotations in the form of a Python dictionary. In detail, please use `generate_database()` to obtain all PIE annotations in dictionary form as a `.pkl` file (analogous to the JAAD sketch above). Rename the `.pkl` file as `PIE_DATA.pkl` and place it in `DATA/annotations/PIE/anns`.
- Train / val / test split: link
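The PIE preparation mirrors the JAAD one (again a sketch: the `pie_data` module name and arguments follow the official PIE repository and should be double-checked there):

```python
# Sketch of generating the PIE annotation dictionary; names assumed
# from the official PIE interface.
import pickle
from pie_data import PIE  # assumed import from the official PIE repo

pie = PIE(data_path='path/to/PIE')
db = pie.generate_database()

with open('DATA/annotations/PIE/anns/PIE_DATA.pkl', 'wb') as f:
    pickle.dump(db, f)
```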
For TITAN: to obtain the dataset, please refer to this page and contact the authors directly.
After downloading the data, please place the images, annotations, and video split ids in `DATA`. The expected structure:
```
├── DATA/
│   ├── annotations/
│   │   ├── JAAD/
│   │   │   ├── anns/
│   │   │   │   └── JAAD_DATA.pkl        # see Data-preparation-download-JAAD
│   │   │   └── splits/
│   │   │       ├── all_videos/
│   │   │       │   ├── train.txt
│   │   │       │   ├── val.txt
│   │   │       │   └── test.txt
│   │   │       ├── default/
│   │   │       │   ├── train.txt
│   │   │       │   ├── val.txt
│   │   │       │   └── test.txt
│   │   │       └── high_visibility/
│   │   │           ├── train.txt
│   │   │           ├── val.txt
│   │   │           └── test.txt
│   │   ├── PIE/
│   │   │   └── anns/
│   │   │       └── PIE_DATA.pkl         # see Data-preparation-download-PIE
│   │   └── TITAN/
│   │       ├── anns/
│   │       │   └── clip_x.csv
│   │       └── splits/
│   │           ├── train_set.txt
│   │           ├── val_set.txt
│   │           └── test_set.txt
│   └── images/
│       ├── JAAD/
│       │   └── video_xxxx/              # 346 videos
│       ├── PIE/
│       │   └── set_0x/                  # 6 sets
│       │       └── video_xxxx/
│       └── TITAN/
│           └── clip_xxx/                # 786 clips
└── (+ files and folders containing the raw data)
```
Note: the benchmark works with an arbitrary subset of the supported datasets; simply provide only the paths to the ones you want to use.
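To confirm that the layout above is in place, a small check like this can help (a sketch only; trim the list if you use only a subset of the datasets):

```python
# Sanity-check the expected DATA layout from the tree above
# (not part of the repository; adjust the list for dataset subsets).
from pathlib import Path

expected = [
    'DATA/annotations/JAAD/anns/JAAD_DATA.pkl',
    'DATA/annotations/PIE/anns/PIE_DATA.pkl',
    'DATA/annotations/TITAN/splits/train_set.txt',
    'DATA/images/JAAD',
    'DATA/images/PIE',
    'DATA/images/TITAN',
]
for p in map(Path, expected):
    print(('OK     ' if p.exists() else 'MISSING'), p)
```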
At the heart of TRANS is the `TransDataset` class. `TransDataset` generates transition samples from the original annotations of JAAD, PIE, and TITAN. Using the attributes of `TransDataset`, the user can conveniently extract the frame sequences related to the stop & go of pedestrians. Each extracted sample has a unique id specifying the source dataset, transition type, data split, and sample index. We provide a customized dataloader for loading samples according to the user's needs.
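As an illustration only (the import path, constructor arguments, and method names below are assumptions; `example.ipynb` documents the real interface):

```python
# Hypothetical usage sketch of TransDataset; everything except the class
# name is an assumption -- consult example.ipynb for the actual API.
from src.dataset.trans.data import TransDataset  # assumed module path

anns_paths = {'JAAD': 'DATA/annotations/JAAD/anns/JAAD_DATA.pkl'}  # assumed format
trans = TransDataset(anns_paths, image_set='train', verbose=True)  # verbose prints dataset statistics

samples = trans.extract_trans_frame(mode='GO')  # assumed method: frames around "go" transitions
for sample_id in list(samples)[:3]:
    # the id encodes source dataset, transition type, split, and index
    print(sample_id)
```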
For basic usage, please check the example in `example.ipynb`.
All unique pedestrians in the original datasets can be categorized as Walk, Stand, Stop, or Go. Overall, TRANS contains 1,008 go events and 1,138 stop events. For more detailed analysis, please refer to the paper. You can explore the statistics of the different datasets using the verbose feature, as in the sketch above.
TRANS serves as the primary data source for the project "Pedestrian Stop and Go Forecasting" at VITA. The interface is still under development, and we expect more relevant datasets to be integrated into this benchmark in the future. Please leave remarks if you encounter any problems using the interface or have suggestions for improving its usability.
If you use this project in your research, please cite the corresponding paper:
```bibtex
@article{guo2022pedestrian,
  title={Pedestrian Stop and Go Forecasting with Hybrid Feature Fusion},
  author={Guo, Dongxu and Mordan, Taylor and Alahi, Alexandre},
  journal={arXiv preprint arXiv:2203.02489},
  year={2022}
}
```