
Dilated-Net

The PyTorch implementation of "Appearance-Based Gaze Estimation Using Dilated-Convolutions". (Updated on 2021/04/28.)

We build benchmarks for gaze estimation in our survey "Appearance-based Gaze Estimation With Deep Learning: A Review and Benchmark". This is the implementation of the "Dilated-Net" method in our benchmark. Please refer to our survey for more details.

We recommend using the data processing code provided in GazeHub. You can directly run this method's code on the processed datasets.

Links to other gaze estimation code are also available in GazeHub.

Performance

The method is evaluated on three tasks. Please refer to our survey for the detailed benchmark results.

License

The code is released under the CC BY-NC-SA 4.0 license.

Introduction

We provide two similar projects: one for leave-one-person-out evaluation and one for evaluation with a common training-test split. They share the same model architecture but are launched differently.

Each project contains the following files/folders:

  • model.py, the model code.
  • train.py, the entry for training.
  • test.py, the entry for testing.
  • config/, this folder contains the experiment config for each dataset. To run our code, you should write your own config.yaml.
  • reader/, the data loader code. You can use the provided reader or write your own reader.

Getting Started

Writing your own config.yaml

Normally, for training, you should change the following fields (a config sketch follows this list):

  1. train.save.save_path, the trained model is saved in $save_path$/checkpoint/.
  2. train.data.image, the path of the images; please use the data produced by the processing code in GazeHub.
  3. train.data.label, the path of the labels.
  4. reader, the reader to use. It is the filename in the reader folder, e.g., reader/reader_mpii.py ==> reader: reader_mpii.
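
A minimal sketch of the training-related part of config.yaml, assuming the nesting follows the field names above (the paths are illustrative placeholders, so compare with the provided files in config/ before use):

train:
  save:
    save_path: /path/to/output        # checkpoints are written to $save_path$/checkpoint/
  data:
    image: /path/to/processed/Image   # images produced by the GazeHub processing code
    label: /path/to/processed/Label   # labels produced by the same processing code
reader: reader_mpii                   # i.e., reader/reader_mpii.py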

For testing, you should change the following (see the sketch after this list):

  1. test.load.load_path, it is usually the same as train.save.save_path. The test result is saved in $load_path$/evaluation/.
  2. test.data.image, it is usually the same as train.data.image.
  3. test.data.label, it is usually the same as train.data.label.
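
Correspondingly, a sketch of the test-related part of config.yaml (again, the nesting and paths are assumptions to be checked against the provided config/ files):

test:
  load:
    load_path: /path/to/output        # usually the same as train.save.save_path;
                                      # results are written to $load_path$/evaluation/
  data:
    image: /path/to/processed/Image   # usually the same as train.data.image
    label: /path/to/processed/Label   # usually the same as train.data.label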

Training

In the leaveout folder, you can run

python train.py config/config_mpii.yaml 0

This means the code will run with config_mpii.yaml and use the 0th person as the test set.

You can also run

bash run.sh train.py config/config_mpii.yaml

This performs leave-one-person-out training automatically.
run.sh simply iterates over subjects; you can change the number of iterations in run.sh for different datasets, e.g., set it to 4 for four-fold validation.

In the traintest folder, you can run

python train.py config/config_mpii.yaml

Test

In the leaveout folder, you can run

python test.py config/config_mpii.yaml 0

or

bash run.sh test.py config/config_mpii.yaml

In the traintest folder, you can run

python test.py config/config_mpii.yaml

Result

After training or testing, you can find the results under the save_path specified in config_mpii.yaml.

Citation

If you use our code, please cite:

@InProceedings{Chen_2019_ACCV,
	author="Chen, Zhaokang
	and Shi, Bertram E.",
	editor="Jawahar, C.V.
	and Li, Hongdong
	and Mori, Greg
	and Schindler, Konrad",
	title="Appearance-Based Gaze Estimation Using Dilated-Convolutions",
	booktitle="Computer Vision -- ACCV 2018",
	year="2019",
	publisher="Springer International Publishing",
	address="Cham",
	pages="309--324",
	isbn="978-3-030-20876-9"
}

@article{Cheng2021Survey,
        title={Appearance-based Gaze Estimation With Deep Learning: A Review and Benchmark},
        author={Yihua Cheng and Haofei Wang and Yiwei Bao and Feng Lu},
        journal={arXiv preprint arXiv:2104.12668},
        year={2021}
}

Contact

Please email any questions or comments to [email protected].

Reference

  1. MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation
  2. EYEDIAP Database: Data Description and Gaze Tracking Evaluation Benchmarks
  3. Learning-by-Synthesis for Appearance-based 3D Gaze Estimation
  4. Gaze360: Physically Unconstrained Gaze Estimation in the Wild
  5. ETH-XGaze: A Large Scale Dataset for Gaze Estimation under Extreme Head Pose and Gaze Variation
  6. Appearance-Based Gaze Estimation in the Wild
  7. Appearance-Based Gaze Estimation Using Dilated-Convolutions
  8. RT-GENE: Real-Time Eye Gaze Estimation in Natural Environments
  9. It’s written all over your face: Full-face appearance-based gaze estimation
  10. A Coarse-to-fine Adaptive Network for Appearance-based Gaze Estimation
  11. Eye Tracking for Everyone
  12. Adaptive Feature Fusion Network for Gaze Tracking in Mobile Tablets
  13. On-Device Few-Shot Personalization for Real-Time Gaze Estimation
  14. A Generalized and Robust Method Towards Practical Gaze Estimation on Smart Phone
