
Many false positives even for close face detection #1093

Open
tyanai opened this issue Jun 30, 2023 · 4 comments

Comments

@tyanai

tyanai commented Jun 30, 2023

Hi,

I have a working setup of a Reolink camera (1920p) with snapshots taken at the highest resolution and passed to DoubleTake and CompreFace.

The issue is that CompreFace reports 98% similarity even for unknown people standing close to the camera. Sometimes it even marks my ankle as my face with 92%. The same happens with other people I have trained, even though I only train on relatively close-up images.

I also train only on what the camera detects; I haven't uploaded any other images.

The question is: is this something to do with the camera, or should I first upload a richer set of images of myself?

Thanks,

Tal.

@pospielov
Collaborator

pospielov commented Jul 20, 2023

Ideally, training images should be taken with the same camera and under the same conditions as during recognition.
If that's not possible, it's better to use the best-quality images you have for training.
One detail: I recommend using only one image per subject to avoid false positives.
One more idea: have you tried the custom builds? They should be more accurate, especially SubCenter-ArcFace-r100:
https://github.com/exadel-inc/CompreFace/blob/master/docs/Custom-builds.md
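
To make the "one image per subject" advice concrete, here is a minimal Python sketch against the CompreFace recognition REST API. The host, API key, subject name, file names, and threshold value are placeholders/assumptions, not values from this thread; the endpoint paths follow the CompreFace REST API documentation.

```python
# Minimal sketch: enroll one high-quality example per subject, then recognize.
# Assumptions: CompreFace runs at http://localhost:8000 and API_KEY is a
# recognition-service key; all names and paths below are placeholders.
import requests

BASE = "http://localhost:8000/api/v1/recognition"
API_KEY = "your-recognition-api-key"  # placeholder
HEADERS = {"x-api-key": API_KEY}

def add_training_image(subject: str, image_path: str) -> dict:
    """Enroll a single example image for a subject in the face collection."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{BASE}/faces",
            params={"subject": subject},
            headers=HEADERS,
            files={"file": f},
        )
    resp.raise_for_status()
    return resp.json()

def recognize(image_path: str, det_prob_threshold: float = 0.8) -> dict:
    """Run recognition on a snapshot; the response lists detected faces
    with subject/similarity pairs that you can threshold yourself."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{BASE}/recognize",
            params={"det_prob_threshold": det_prob_threshold},
            headers=HEADERS,
            files={"file": f},
        )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    add_training_image("Tal", "tal_frontal.jpg")  # one image per subject
    print(recognize("camera_snapshot.jpg"))
```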

@tyanai
Author

tyanai commented Jul 20, 2023

Thanks, can you please elaborate on what you meant by "one image per subject"? Are you saying that training with more than one image per subject is less beneficial?

@pospielov
Collaborator

Imagine you are a security guard and you were given a photo of John, the person you need to recognize.
Recognizing a person from a single photo is not an easy task, so there is a chance you won't recognize John when you see him, and a chance you will recognize another person as John.
Now imagine you were given a second photo of John. When a person approaches, you compare them against the first photo and then against the second. The chance that you recognize John increases: even if he doesn't look like the first photo, he will probably look similar to the second. But the chance that somebody else resembles John in one of the photos also increases, so there is a bigger chance of recognizing another person as John.
What would you do if you met a person who looks similar to John in one photo but different from the other?
The logic depends on your needs:

  • you say it is John only if he looks similar to both photos
  • you say it is John only if he looks similar to one of the photos
  • you say it is John only if he looks similar to at least 50% of the photos (if you have more than two)
  • in an automated security system, you can automatically approve a person who looks like John in all photos and send everything else to a human check
  • if you have several people to recognize, you can take the top 5 results and build your own logic to determine the person

Because of all these possibilities, we return the similarity for each photo, and you decide how your system should behave (a sketch of such decision logic follows below).
Loading only one image per person and using a similarity threshold is the simplest way to decrease incorrect recognitions, which is why I usually recommend it for most systems.
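
A small sketch of how per-photo similarities could be turned into a decision. The threshold, the required fraction, and the example similarity values are illustrative assumptions, not values from CompreFace or this thread.

```python
# Hypothetical post-processing of recognition output: decide whether a detected
# face is "John" based on how many of his enrolled photos it matches.
from typing import List

def is_match(similarities: List[float],
             threshold: float = 0.9,
             required_fraction: float = 1.0) -> bool:
    """similarities: similarity of one detected face to each enrolled photo of
    a single subject. required_fraction=1.0 -> must match all photos,
    0.5 -> majority is enough, a tiny value -> any single photo is enough."""
    if not similarities:
        return False
    matched = sum(1 for s in similarities if s >= threshold)
    return matched / len(similarities) >= required_fraction

# Example: the face matches one of John's two photos strongly, the other weakly.
print(is_match([0.97, 0.62], required_fraction=1.0))  # False: all photos required
print(is_match([0.97, 0.62], required_fraction=0.5))  # True: majority is enough
```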

@tyanai
Author

tyanai commented Jul 25, 2023 via email
