This dataset contains object tags (bounding boxes) for right and left hands on a piano background. Its purpose is to enable hand detection in videos of piano playing.
The `versions` directory includes all of the available versions of this dataset. They can also be found under the Releases tab on GitHub.
We fine-tuned a "Faster RCNN Inception v2 COCO" model from the official TensorFlow repository. Note that our model does not include any tracking; every video frame is processed independently.
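As a rough sketch of what per-frame inference looks like with such a model, the snippet below loads a frozen TF1 Object Detection API graph and runs it on each frame of a video. The file name `frozen_inference_graph.pb`, the video path, and the tensor names are assumptions based on the standard TF Object Detection API export format, not something taken from this repo:

```python
# Minimal per-frame inference sketch (TF1 Object Detection API export).
# Paths and tensor names are assumptions; adjust to the released checkpoint.
import cv2
import numpy as np
import tensorflow as tf

PATH_TO_GRAPH = "frozen_inference_graph.pb"  # assumed export name

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(PATH_TO_GRAPH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

cap = cv2.VideoCapture("piano.mp4")  # any piano-playing video
with tf.compat.v1.Session(graph=graph) as sess:
    image_tensor = graph.get_tensor_by_name("image_tensor:0")
    boxes = graph.get_tensor_by_name("detection_boxes:0")
    scores = graph.get_tensor_by_name("detection_scores:0")
    classes = graph.get_tensor_by_name("detection_classes:0")
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Each frame is processed independently -- there is no tracking.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        b, s, c = sess.run(
            [boxes, scores, classes],
            feed_dict={image_tensor: np.expand_dims(rgb, 0)})
        # ... keep/draw detections whose score passes a threshold ...
cap.release()
```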
Easy Video - Alan Walker - Faded
(You might need to open this GIF in a new tab)
Harder Video - Liszt - La Campanella
(You might need to open this GIF in a new tab)
We include the training script, a checkpoint, and a simple Python file as a usage example. PRO TIP: if you train a model, make sure there is no "horizontal flip" augmentation, since mirroring the image turns left hands into right hands and vice versa. If you know there are at most 2 hands in a frame, you can heuristically fix frames where both hands are tagged as left or both as right (see the sketch below).
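A minimal sketch of that heuristic follows. It assumes a pianist's-point-of-view camera, so the pianist's left hand is the leftmost box in the image (flip the comparison for a mirrored view); the detection dict layout is our own convention, not the repo's:

```python
def fix_hand_labels(detections):
    """Relabel detections when both hands received the same class.

    detections: list of dicts with 'box' as normalized
    (ymin, xmin, ymax, xmax) coordinates and 'label' in
    {'left', 'right'}. Assumes at most two hands per frame and a
    pianist's-point-of-view camera (left hand is the leftmost box).
    """
    if len(detections) != 2:
        return detections
    a, b = detections
    if a["label"] == b["label"]:
        # Sort by the horizontal center of each box, then reassign.
        left, right = sorted(
            (a, b), key=lambda d: (d["box"][1] + d["box"][3]) / 2)
        left["label"], right["label"] = "left", "right"
    return detections
```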
- From `videos`, execute `download.sh`, which downloads 200+ YouTube videos.
- From `videos`, execute `extract_frames.py`, which creates a top-level `frames` directory with random frames from the videos (see the sketch after this list).
- Using labelImg, tag the `frames` directory into `annotations`.
- From `versions`, execute `generate_version.py` after changing the version number.
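To illustrate the frame-extraction step, here is a minimal sketch using OpenCV. The file layout, the output path, and the number of frames sampled per video are assumptions; the actual `extract_frames.py` may work differently:

```python
# Hypothetical sketch of the frame-extraction step; the real
# extract_frames.py may differ. Assumes videos/ holds .mp4 files.
import os
import random
import cv2

VIDEO_DIR = "videos"
FRAME_DIR = "frames"       # top-level frames directory (assumed path)
FRAMES_PER_VIDEO = 10      # assumed sampling count

os.makedirs(FRAME_DIR, exist_ok=True)
for name in os.listdir(VIDEO_DIR):
    if not name.endswith(".mp4"):
        continue
    cap = cv2.VideoCapture(os.path.join(VIDEO_DIR, name))
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Sample random frame indices from this video.
    for i in sorted(random.sample(range(total),
                                  min(FRAMES_PER_VIDEO, total))):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)
        ok, frame = cap.read()
        if ok:
            stem = os.path.splitext(name)[0]
            cv2.imwrite(os.path.join(FRAME_DIR, f"{stem}_{i}.jpg"), frame)
    cap.release()
```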