This repository has been archived by the owner on Jan 29, 2024. It is now read-only.

Trust Tokens are not useful for anti-fraud in general #13

Open
bakkot opened this issue Aug 30, 2021 · 1 comment

Comments


bakkot commented Aug 30, 2021

I work at Shape Security on our anti-fraud product, which is widely used by banks, retailers, and other organizations to defend against credential stuffing, account takeover, and other forms of abuse. For illustration, we block hundreds of millions of malicious login attempts against banks every day; left unblocked, those attempts would result in many thousands of compromised bank accounts.

Tools in the anti-fraud space (where the fraud in question is more like credential stuffing, not just click fraud), including my employer's, rely on the ability to answer "is this visitor someone we've seen before?". That's not fingerprinting as defined in the Privacy Sandbox - in particular, there's nothing cross-site about it - but the techniques currently used to make this determination overlap with cross-site fingerprinting techniques, and my understanding is that the Privacy Sandbox proposal intends to significantly limit their use. (IP addresses are not sufficient; in practice, anyone doing this kind of fraud at scale will be routing requests through a residential proxy, so each attempt arrives from a fresh, legitimate-looking IP.)
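To make that concrete, below is a deliberately simplified sketch of the kind of same-site "seen before?" signal I mean. This is not our product's actual logic; the specific signals and the hashing are illustrative assumptions. The point is that every input is gathered and used within a single site:

```ts
// Illustrative only: recognize a returning browser on *this* site by
// hashing a few stable, same-site-observable surfaces.
async function returningVisitorSignal(): Promise<string> {
  const signals = [
    navigator.userAgent,
    navigator.language,
    String(navigator.hardwareConcurrency),
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
  ].join("|");
  // Hash so the raw surfaces never leave the page; the server only
  // compares this digest against digests from prior visits.
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(signals),
  );
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```

A privacy budget that caps reads of surfaces like these caps this signal too, even though nothing about it is cross-site.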

Unfortunately, as far as I can tell, no consideration has been given to this kind of anti-fraud. As I understand it, the only kind this proposal considers is advertising-related fraud, which Trust Tokens are intended to address. Trust Tokens are not particularly useful for the kind of anti-fraud I'm discussing here, because we can't outright block people just because they haven't visited Facebook (or whichever other Trust Token issuer) before. And falling back to a CAPTCHA isn't a viable alternative: in practice, CAPTCHAs are both trivially easy to break with automation (e.g. through solver services like 2captcha) and wildly inaccessible for humans, which is why the odds are very good that your bank uses our product rather than a CAPTCHA.
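For reference, here is roughly what the redemption flow from the Trust Token explainer looks like on a relying site. The issuer URL and endpoint path are hypothetical, and the `hasTrustToken` / `trustToken` surfaces come from the Chrome origin trial, so they aren't in the standard DOM types (hence the assertions). The `else` branch is exactly the problem:

```ts
const issuer = "https://issuer.example"; // hypothetical issuer

async function checkVisitor(): Promise<void> {
  // hasTrustToken() and the trustToken fetch parameter come from the
  // Trust Token explainer / Chrome origin trial, not standard lib.dom.d.ts.
  if (await (document as any).hasTrustToken(issuer)) {
    // Redeem a token at the issuer, then forward the redemption record
    // with our own login request. (Endpoint paths are illustrative.)
    await fetch(`${issuer}/redeem`, {
      trustToken: { type: "token-redemption", refreshPolicy: "none" },
    } as RequestInit);
    await fetch("/login", {
      method: "POST",
      trustToken: { type: "send-redemption-record", issuers: [issuer] },
    } as RequestInit);
  } else {
    // A first-time visitor, a cleared profile, or a user who has never
    // been issued tokens by any issuer all land here, indistinguishable
    // from a bot. We can't block outright, and a CAPTCHA fallback has
    // the problems described above.
  }
}
```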

If the web platform makes it harder to defend against this kind of fraud without providing an effective alternative, we risk making these attacks much more prevalent, at significant cost to users. I'd like this to be more of a consideration in the discussion of the privacy budget.

I'm happy to talk about this more here, or on a call, or you can also reach me in the #privacy-sandbox channel on the Chromium slack. (I've also raised this issue in a few other places (x, x), without much response. If there's a better place to raise this, please let me know.)

@dgstpierre

I completely agree. I also work on anti-fraud (for DeviceForensIQ), in the market research sector, preventing fraudulent survey-taking. The Privacy Sandbox measures, including the fingerprinting restrictions, do not consider fraud prevention outside of ad tech. We use fingerprinting not to track users, but to recognize fraudulent behavior and prevent duplicate survey-taking. It would appear that you are eliminating the ability to do passive fraud prevention. Removing, or even limiting, the ability to fingerprint will have broad implications for our client base and make it easier for cybercriminals to commit fraud.
