GazeTR

We provide the code of GazeTR-Hybrid in "Gaze Estimation using Transformer".

We recommend using the data-processing code provided in GazeHub. You can directly run this method's code on the processed datasets.

Requirements

We built the project with PyTorch 1.7.0.

Learning-rate warmup is used, following the implementation linked here.
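As a rough illustration, below is a minimal linear-warmup sketch in PyTorch. The warmup_steps value and the use of LambdaLR are assumptions for illustration, not the repository's exact schedule.

import torch
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(10, 2)  # stand-in model for illustration
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

warmup_steps = 1000  # assumption: tune for your setup

def warmup_lambda(step):
    # linearly ramp the learning rate from 0 up to its base value
    return min(1.0, (step + 1) / warmup_steps)

scheduler = LambdaLR(optimizer, lr_lambda=warmup_lambda)
# call scheduler.step() once per iteration, after optimizer.step()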

Usage

Directly use our code.

Running our code takes three steps.

  1. Prepare the data using our provided data processing codes.

  2. Modify the config/train/config_xx.yaml and config/test/config_xx.yaml.

  3. Run the commands.

To perform leave-one-person-out evaluation, you can run

python trainer/leave.py -s config/train/config_xx.yaml -p 0

Note that this command only trains with the 0th person held out. You should change the -p parameter and repeat the command for each person; a loop sketch is shown below.
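If you want to automate the repetition, here is a minimal, hypothetical helper that loops over person indices via subprocess. NUM_PERSONS is an assumption and depends on your dataset (e.g., 15 for MPIIGaze).

import subprocess

NUM_PERSONS = 15  # assumption: set to the number of subjects in your dataset

for p in range(NUM_PERSONS):
    # equivalent to: python trainer/leave.py -s config/train/config_xx.yaml -p <p>
    subprocess.run(
        ['python', 'trainer/leave.py',
         '-s', 'config/train/config_xx.yaml',
         '-p', str(p)],
        check=True,
    )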

To perform training-test evaluation, you can run

python trainer/total.py -s config/train/config_xx.yaml    

To test your model, you can run

python trainer/leave.py -s config/train/config_xx.yaml -t config/test/config_xx.yaml -p 0

or

python trainer/total.py -s config/train/config_xx.yaml -t config/test/config_xx.yaml

Build your own project.

You can import the model in model.py for your own project.

We give an example below. Note that line 114 in model.py uses .cuda(); remove it if you run the model on CPU.

import torch
from model import Model

GazeTR = Model()

# a batch of 10 face images, 3 x 224 x 224
img = torch.ones(10, 3, 224, 224).cuda()
img = {'face': img}
label = torch.ones(10, 2).cuda()

# for training: the model returns the loss
loss = GazeTR(img, label)

# for test: the model returns the predicted gaze
gaze = GazeTR(img)

Pre-trained model

You can download it from Google Drive or Baidu Cloud Disk with extraction code 1234.

This model was pre-trained on the ETH-XGaze dataset for 50 epochs with a batch size of 512.
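A minimal loading sketch is shown below. The checkpoint filename and the assumption that the file stores a plain state dict are ours; adapt them to the file you actually download.

import torch
from model import Model

GazeTR = Model()
# assumption: checkpoint name and format; use the file you downloaded
state_dict = torch.load('GazeTR-H-ETH.pt', map_location='cpu')
GazeTR.load_state_dict(state_dict)
GazeTR.eval()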

Performance

[Figures: Comparison A and Comparison B, benchmark comparison plots]

Links to other gaze estimation codes are collected in GazeHub.

License

The code is under the CC BY-NC-SA 4.0 license.

Contact

Please email any questions or comments to [email protected].
