user-agent header control #37
I think we 'inherited' User-Agent from a list of headers Flash disallowed changing in its implementation. There was some speculation that some services might exist that only allowed specific customized browsers to access them, using the UA string as a sort of auth token, but I don't think anybody ever found a real-life case of such a system. (Obviously it would be astonishingly poor design.) Personally, I think we can and should drop User-Agent from the list of headers you're not allowed to set.
If we allow setting it to any value, effectively replacing anything the user agent would set, we should probably make it a parameter of …
(It might be worth noting that GitHub itself recommends setting User-Agent to your GH user name in API requests - this is of course trivial if you write Python/Node/whatever clients, but not currently possible from an API consumer running in a browser. So there's a big and important use case right here.)
Hmm, I'm not sure I necessarily agree. It seems a straightforward extension to say that headers provided by the RequestInit overwrite default headers. What do you think would be confusing about allowing User-Agent to be set through the normal mechanisms?
https://developer.github.com/v3/#user-agent-required

This helps us contact application owners when there are problems, like a rogue script making lots of API calls. However, many API resources are only available to authenticated sessions, so the user account is already known. And when accessing the API through a browser, the CORS request includes the … The …
If …

```js
fetch('/users', {
  headers: {
    'User-Agent': 'Web/2.0',
    'Accept': 'application/json'
  }
})
```
I guess that might be okay as well. We could have a step in https://fetch.spec.whatwg.org/#http-network-fetch that appends it if it is not already present in HTTPRequest's header list.
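A minimal sketch of what such a step might amount to, written as JavaScript rather than spec prose; `finalizeRequestHeaders` and `defaultUserAgent` are made-up names for illustration, not Fetch spec identifiers:

```js
// Sketch only: just before hitting the network, append the implementation's
// default User-Agent unless the author already supplied one.
const defaultUserAgent = 'Mozilla/5.0 (examplebrowser)'; // stand-in value

function finalizeRequestHeaders(headers) {
  if (!headers.has('User-Agent')) {
    headers.append('User-Agent', defaultUserAgent);
  }
  return headers;
}

// e.g. finalizeRequestHeaders(new Headers({ Accept: 'application/json' }))
// gains a User-Agent, while a Headers object that already sets one is left alone.
```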
So actually, the reason you want it to be an argument rather than part of …
Probable rationale for why it is currently forbidden is in this email by @sicking: https://lists.w3.org/Archives/Public/public-webapi/2008May/0456.html
Can you expand on this? I don't quite understand why one location or the other would matter for this, or why you would force it to be omitted.
If we offer customization, I think it would make sense to also offer omitting it altogether, e.g. to reduce the size of the request or debug a server. The location matters since by default it is included and the …
Oh, I see! Would it be conceivable not to set the header by default at all? Hmm, probably not very good, I'm being too Node-influenced here... So how would you omit it?
Why would …

Related question: when you do …
Right, so why is … It looks like from my reading of the spec there's no actual way to tell what …

One last point before I go to sleep: I think what's a bit tricky for me here is that the most straightforward mental model for the programmer is that in the constructor, there's some kind of

```js
this.headers = mergeMultimaps(defaultHeaders, passedHeaders);
```

That is, just naively looking at the shape of the API, and knowing the bit of extra information that there are more headers sent in the request/response than you set, I think most programmers will guess that there's a set of default headers and you can set new headers or override old ones by using the … For example in that model I think if …

It seems like that is not the model though, as evidenced by the …
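For concreteness, here is a rough sketch of what that "merge" mental model could look like; `mergeMultimaps` is the hypothetical helper named above, the lowercasing and "author value wins" behaviour are assumptions for illustration, and multi-valued headers are collapsed to one value per name for brevity:

```js
// Illustrative only: combine browser default headers with author-supplied
// ones, letting the author-supplied value win for any name set in both.
function mergeMultimaps(defaultHeaders, passedHeaders) {
  const merged = new Map();
  for (const [name, value] of defaultHeaders) {
    merged.set(name.toLowerCase(), value);
  }
  for (const [name, value] of passedHeaders) {
    merged.set(name.toLowerCase(), value); // author override
  }
  return merged;
}

// Works with any iterable of [name, value] pairs, e.g. Headers objects:
// mergeMultimaps(new Headers({ 'User-Agent': 'Default UA' }),
//                new Headers({ 'User-Agent': 'Web/2.0' }))
```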
Basically the model is that the network layer appends a set of headers that are not under the developer's control. And that …
FWIW, if that email from me is really the reason that we don't allow setting the user-agent, then I believe that that's fixable. If we simply treat user-agent as a "custom" header, then things seem fine to me. I.e. we can let it be set completely by the page for same-origin requests. For cross-origin requests we can allow it to be set, but it'd cause a preflight, and would require the server to send an "access-control-allow-headers: user-agent" header in the response. I don't have opinions about how to deal with getting the default value or removing the header completely.
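To make the cross-origin case concrete, here is a hedged sketch of the exchange being described; the endpoint, origin, and product token are placeholders, and the wire details are just what CORS preflights generally look like, not text from this thread:

```js
// A cross-origin fetch() that customizes User-Agent. Under the proposal above
// the browser would send a preflight before the actual request.
fetch('https://api.example.com/users', {
  headers: { 'User-Agent': 'mywidget/1.0' }
});

// Preflight sent by the browser (roughly):
//   OPTIONS /users HTTP/1.1
//   Origin: https://app.example.org
//   Access-Control-Request-Headers: user-agent
//
// For the real request to proceed, the server must opt in:
//   Access-Control-Allow-Origin: https://app.example.org
//   Access-Control-Allow-Headers: user-agent
```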
Given the requirements and existing constraints I see a couple of options:

1. …
2. …
2 seems a little cleaner to me, but I don't care strongly.
Despite the discussion having progressed a lot already, do you consider XSS or CSRF-to-XSS via UA string in scope for the spec? Or would that be something the implementers have to take care of on their own?

P.S. @mathiasbynens You are faster than light :D
One note about my context, maybe: right now, for penetration tests, we use malformed UA strings to aim for persistent XSS or intranet XSS. Giving an attacker control over the UA string via …
The header can only be set for fetches that are same-origin or subject to CORS (with a preflight where the server needs to opt into …).
@annevk So, to quote @sicking here:

> …

If that means the request will not even be sent with the modified UA in case of cross-origin requests and a failed CORS preflight, then spec-wise this should be fine.
@annevk Alright, I had a closer look at how …

If the custom UA string header is in fact being implemented the same way as any other custom header - meaning that the particular header has to be permitted via CORS for cross-origin requests - then this should indeed be safe! No objections from my side so far.
There is value in having a reliable user-agent header. Historically we've had some browser bugs where it was possible to protect the user server-side, but where the fix would be too expensive (in terms of cost, perf or user annoyance) to apply to all users. More commonly, a new browser feature might allow some feature to be reimplemented in a more secure way; with a reliable user-agent header, it's easy to disable the "unsafe" back-compat implementation in decent browsers. If a malicious script can lie about the browser version, protecting against such attacks becomes a lot harder.

(The fact that this is same-origin-or-CORS-only helps a lot, of course, but not in the case where the particular browser bug is that the browser is confused about what counts as the same origin...)

Would a solution where the user data is either appended or prepended to the 'responsible' user agent be an acceptable compromise?
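A hypothetical sketch of the defensive pattern described above (not taken from the thread): a server keeps a less-safe back-compat code path enabled only for browsers too old to support the secure alternative, keyed off User-Agent. The header access assumes a Node-style request object, and the UA pattern and version cut-off are made up for illustration:

```js
// Serve the "unsafe" legacy implementation only to browsers that predate the
// secure alternative, trusting the User-Agent header to be truthful.
function shouldUseLegacyFallback(req) {
  const ua = req.headers['user-agent'] || '';
  const match = ua.match(/Chrome\/(\d+)/);
  return match === null || Number(match[1]) < 90;
}
```

If a page script can freely rewrite User-Agent, a check like this can no longer distinguish patched browsers from spoofed ones, which is the concern being raised.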
If the browser is confused about same-origin that would be a much bigger problem than setting …
Wait, I think you got that backwards ("if the zombies attack, you have bigger problems than your broken shotgun"). I'm not worried about the user agent as an attack vector; I want to keep using it for defense.
@steike I believe, with this feature, you have even more possibilities to use the UA string header as a defensive feature. Differently, yes - but more powerful too.
@steike you already ceded that same-origin-or-CORS helps and your counterargument to that was bogus. If a browser is confused about same-origin that would be a high-priority security bug.
Yes, a high-pri bug. That doesn't mean it wouldn't take the vendor months to fix it, or that users would upgrade instantly once the fix was out.

Let's say that 10% of users have browsers that are vulnerable to a bug. Let's say when this particular vulnerability is exploited, there is a detectable header anomaly. Let's say 1% of all users happen to be behind various crappy firewalls that introduce the same anomaly for legitimate requests. With a working user-agent header, we can block the attack by telling 0.1% of users that they must upgrade their browser before they can use the site. Without it, the choice would be to force-upgrade 10% of users, outright block 1% of users, or hope no one finds the bug.

… It's not the end of the world, of course. We can add a few bits to some cookie; it'll just be one more layer of web cruft to carry around. You asked for a rationale for keeping a working UA header. What I have is "those of us who need it will have to reimplement the feature if you take it away". To be fair there's probably not that many of us. If the benefit of …
@steike I don't understand your attack scenario. The …

And cross-origin resources need to explicitly opt in to allowing …
+1 to this route. I agree with @domenic's comment on the "mergeMultimaps" mental model (#37 (comment)) as being developer-friendly and most intuitive.
Whatever syntax we end up using, we should make it very explicit that "removing" the user-agent header should, from a CORS point of view, be equivalent to setting it. So it would still require server opt-in. Adding a note to this effect might increase the chances that the browser actually tests for this case.
.02 - it sure would be nice if UA could be appended to, so you can add "mywidget/1.0" or "(test 3)" for example, rather than blowing the entire thing away. Syntax here:

```js
navigator.userAgent + 'my string'
```
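Assuming header values supplied through RequestInit end up winning over the default (as discussed earlier in the thread), appending could presumably be spelled by the author without any dedicated syntax. This is a sketch, not settled API; the endpoint and product token are placeholders:

```js
// Read the default UA via navigator.userAgent and pass the combined value,
// so the original string is preserved with a suffix appended.
fetch('/widgets', {
  headers: { 'User-Agent': navigator.userAgent + ' mywidget/1.0' }
});
```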
@sicking I don't understand why omitting the header altogether is a cause for concern. It wasn't when we stopped sending …
I'm inclined to define 1 in #37 (comment) but leave the …
If we need …
chromium bug: https://crbug.com/571722 |
Is User-Agent still forbidden after dab09b0?
It's not, as per this change.
The Fetch spec has allowed it for a while (in other words, it's no longer forbidden):

* https://fetch.spec.whatwg.org/#terminology-headers
* https://developer.mozilla.org/en-US/docs/Glossary/Forbidden_header_name

Cf. also

* whatwg/fetch#37
* whatwg/fetch@dab09b0

[ChangeLog][QtQml][XmlHttpRequest] It is now possible to set the User-Agent header.
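As a rough illustration of what that change log line enables (the URL and header value are placeholders, and actual behaviour is engine-dependent, since e.g. Chrome still strips the header as noted below):

```js
// Setting User-Agent on an XMLHttpRequest; per the current Fetch spec this is
// no longer a forbidden header name, so conforming implementations send it.
const xhr = new XMLHttpRequest();
xhr.open('GET', '/users');
xhr.setRequestHeader('User-Agent', 'mywidget/1.0');
xhr.send();
```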
`User-Agent` used to be a forbidden request header until it was removed in whatwg/fetch#37. However, this was never added as a WPT test, and Chrome still treats it as forbidden. This change adds that test.
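A rough sketch of the kind of check that commit message describes, in testharness.js style; the real test's file name, structure, and assertions may differ:

```js
// User-Agent should not be silently dropped by the Headers object the way
// forbidden header names are.
test(() => {
  const headers = new Headers({ 'User-Agent': 'Custom UA' });
  assert_equals(headers.get('User-Agent'), 'Custom UA',
                'User-Agent is not a forbidden request header name');
}, 'Setting User-Agent on a Headers object is allowed');
```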
Automatic update from web-platform-tests: [fetch] Test that `User-Agent` is not a forbidden request header (wpt PR 39301, commit 55ea64f9c5c0a073bfda1bb1b3343c0048258171).
Should fetch() set `user-agent` by default? Allow appending bytes? Allow replacing it? Allow it to be omitted?

See w3c/ServiceWorker#348 (comment) for context.
And why is it on the forbidden header list? It has been since forever, but is there strong rationale?