DeepPrior is a simple Deep Learning approach that predicts the 3D joint locations of a hand given a depth map. Since its publication in early 2015, it has been outperformed by several impressive works. Here we show that with simple improvements — adding ResNet layers, data augmentation, and better initial hand localization — we achieve performance similar to or better than more sophisticated recent methods on the three main benchmarks (NYU, ICVL, MSRA), while keeping the simplicity of the original method.
Results
Material
Poster: ICCVW’17 poster
Our results: Each line is the estimated hand pose for one frame. The pose is parametrized by the locations of the joints in (u, v, d) coordinates, i.e. image coordinates and depth. The coordinates of the joints are stored sequentially, one joint after another.
- ICVL dataset of D. Tang: ICCVW’17 DeepPrior++
- NYU dataset of J. Tompson: ICCVW’17 DeepPrior++
- MSRA dataset of X. Sun: ICCVW’17 DeepPrior++
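The result files listed above can be loaded with a few lines of code. This is a minimal sketch, assuming each line contains whitespace-separated floats (the joint count differs between datasets, so it is inferred from the line length); the function name is ours, not part of the released package:

```python
import numpy as np

def load_poses(path, n_coords=3):
    """Load estimated hand poses from a result file.

    Assumes one frame per line, joints stored sequentially as
    (u, v, d) triplets, i.e. image coordinates plus depth.
    Returns an array of shape (num_frames, num_joints, 3).
    """
    poses = []
    with open(path) as f:
        for line in f:
            vals = np.array(line.split(), dtype=np.float64)
            poses.append(vals.reshape(-1, n_coords))  # (num_joints, 3)
    return np.stack(poses)
```

For example, `load_poses("results_nyu.txt")[0]` would give the (u, v, d) coordinates of all joints in the first frame.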
Code
Here you can find the code for our ICCVW’17 paper “DeepPrior++: Improving Fast and Accurate 3D Hand Pose Estimation”. It is distributed as a single package, DeepPrior++, under GPLv3. It also includes pretrained models for the NYU, MSRA, and ICVL datasets. There is no proper documentation yet, but a basic README file is included. If you have questions, please do not hesitate to contact us. If you use the code, please cite us (see below).
Citation
@InProceedings{Oberweger2017,
  title     = {DeepPrior++: Improving Fast and Accurate 3D Hand Pose Estimation},
  author    = {M.~Oberweger and V.~Lepetit},
  booktitle = {Proc.~of International Conference on Computer Vision Workshops},
  year      = {2017}
}