deftnerd 5 years ago

This is a good overview of the basic headers, but I suggest spending some time on Scott Helme's blog. He runs securityheaders.io, a free service that scans your site and assigns it a letter grade based on which headers and configurations you've applied.

For instance, his explanation of Content Security Policy headers is much more detailed than in the OP's link.

https://scotthelme.co.uk/content-security-policy-an-introduc...

spectre256 5 years ago

It's definitely worth repeating the warning that, while very useful, Strict-Transport-Security should be deployed with special care!

While the author's example of `max-age=3600` means there's only an hour of potential problems, enabling Strict-Transport-Security has the potential to prevent people from accessing your site if for whatever reason you are no longer able to serve HTTPS traffic.

Considering that another common setting is to enable HSTS for a year, it's worth enabling only deliberately and with some thought.
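
As a purely illustrative sketch of a cautious rollout (the exact values are examples, not recommendations), you might start with a short lifetime and only extend it once HTTPS has been stable for a while:

    Strict-Transport-Security: max-age=3600
    Strict-Transport-Security: max-age=31536000; includeSubDomains

The first is easy to back out of within an hour; the second commits returning visitors for a year and extends the policy to every subdomain, so it's only worth sending once you're sure all of them can serve HTTPS.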

  • txcwpalpha 5 years ago

    Unless your site is a dumb billboard serving nothing but static assets (and maybe even then...), the inability to serve HTTPS traffic should be considered a breaking issue and you shouldn't be serving anything until your HTTPS is restored. "Reduced security" is not a valid fallback option.

    That might not be something that a company's management team wants to hear, but indicating to your users that falling back to insecure HTTP is just something that happens sometimes and they should continue using your site is one of the worst things you can possibly do in terms of security.

    • bityard 5 years ago

      Here's a real example of how HSTS can break a site: My personal, non-public wiki is secured by HTTPS with a certificate valid for 5 years. I thought it would be neat to enable HSTS for it because what could go wrong?

      Well, just last week the HTTPS certificate expired in the middle of the day. I had about a half day's worth of work typed up into the browser's text field and when I clicked "submit", all of my work vanished and Firefox only showed a page stating that the certificate was invalid and that nothing could be done about it. I clicked the back button, same thing. Forward button, same thing. A half day's worth of work vanished into thin air.

      Is this my fault for letting the certificate expire? Absolutely. Should I have used letsencrypt so I didn't have to worry about it? Sure. Should I be using a notes system that doesn't throw away my work when there's a problem saving it? Definitely. I don't deny that there's lots that I could have done to prevent this from being a problem and lots that I need to fix in the future.

      But it does point out that if you use HSTS, you have to be _really_ sure that _all_ your ducks are in a row or it _will_ come back to bite you eventually.

      • txcwpalpha 5 years ago

        Your ducks didn't come back to bite you. Your ducks did exactly what they were supposed to do (and furthermore, exactly what you want them to do).

        Maybe you don't care about protecting whatever data you were entering into your wiki, but in most (if not all) cases of sending data to companies you interact with, you do not want your user-entered data being sent in the clear to the server, or even worse, being sent to the server of a malicious attacker performing a MITM attack. What you want is for your browser to stop sending the data entirely when it encounters a suspicious situation (such as an HTTPS->HTTP downgrade or an expired cert), which is exactly what happened.

        Again, "reduced security" is not a valid failure state. It's like having a button on your front door that says "Lost your key? Just press this button and the door will unlock." At that point, why even have a door lock anyway?

        See https://en.wikipedia.org/wiki/Downgrade_attack

      • tialaramex 5 years ago

        Without HSTS how do you think your scenario plays out differently? Your expired cert still isn't good, and I assure you Firefox isn't going to say "Oh, there's an insecure HTTP site we could try, would you like me to send the HTTP POST there instead?". So I think this only works out "fine" in the scenario where lack of HSTS means you just never use any security at all. Which is a fairly different proposition.

        Since the expired cert can't be distinguished from an attack, my guess is that the text contents aren't lost when that transaction fails due to the expired cert (as then bad guys could throw your data away, which isn't what we want), so I think you could just have paused work, got yourself a new valid certificate, and then carried on.

        Now, of course, it may be that your web app breaks if you do that, the prior session you were typing into becomes invalid when you restart, and new certificates can't be installed without restarting, that sort of thing, but that would be specific to your setup.

        • jazzdev 5 years ago

          Wouldn't the browser allow you to inspect the cert and choose to continue the connection? Then you can decide for yourself if you trust the cert.

      • contras1970 5 years ago

        > I had about a half days' worth of work typed up into the browser's text field and when I clicked "submit", all of my work vanished and Firefox only showed a page stating that the certificate was invalid and that nothing could be done about it.

        that's not a valid argument against HSTS! the browser behaviour with regard to your data is outrageous, and shouldn't be tolerated. and i'm saying this as a longtime firefox user. the browser just sucks, big time.

        "luckily", as a vim junkie, i can't stand the textarea at all, and do anything that requires more effort than, say, this comment, in vim, then copy/paste over when i'm done. still, we should have gotten $VISUAL embedding fifteen years ago: what's happened, Mozilla? lining up your Pockets the whole time?

      • ehPReth 5 years ago

        Curious: Wouldn’t you be singing the same tune if the browser/system crashed? Or the database backend for the wiki went down? Or..

        I thought the general way was to automatically save any progress in localstorage/etc ready to be retrieved if needed once the problem is fixed?

        • mixmastamyk 5 years ago

          I learned in the days of DOS to keep the "Ctrl+S" hotkey close and use it frequently. Combined with backups, I haven't had a big data loss since that time (crosses fingers).

          For unreliable webforms, Ctrl-A, Ctrl-C is useful.

      • toomuchtodo 5 years ago

        Very politely, I want to comment that the problem wasn't your HTTPS cert expiring unexpectedly, but how your application handled storing in-progress work.

      • mnutt 5 years ago

        Firefox is especially good about keeping previously filled-in form fields around for a bit, so once you fix the SSL issue there's a good chance that you can retrieve your form post.

      • brlewis 5 years ago

        If you don't use HSTS, you have to be _really_ sure that _all_ your users fully understand the risks of using an unencrypted connection.

        In the example you gave, wouldn't you have lost all your work anyway without HSTS? I don't think browsers supply an easy way to retry POST to the corresponding http: URL whether HSTS is set up or not.

        • half-kh-hacker 5 years ago

          Without HSTS, you can inspect the cert and click through the invalid certificate warning.

          With HSTS, that button goes away in browsers.

      • _57jb 5 years ago

        This is exactly what should have happened.

        HSTS worked perfectly; your poor maintenance of certificates and your site lost you half a day's worth of work.

        Some people need to touch the stove and feel the pain...if you blame someone else for you touching the stove that is just willful ignorance.

      • mrmonkeyman 5 years ago

        This is one of the weirdest things I've heard in a long time. It's on the level of my grandma's "computer issues".

        You typed for half a day, in a browser textarea, without so much as a ctrl-a, ctrl-c every once in a while.

        Wow, just wow. I did not know you guys existed anymore.

    • ergothus 5 years ago

      > the inability to serve HTTPS traffic should be considered a breaking issue

      > "Reduced security" is not a valid fallback option.

      Agreed! But if my HTTPS is broken, I might well want to replace my site with an HTTP page explaining that we'll be back soon. If that is impossible until the max-age expires, that can lead to an awkward explanation to the higher-ups.

      • JoshTriplett 5 years ago

        > if my HTTPS is broken, I might well want to replace my site with an HTTP page explaining that we'll be back soon

        1) You're not going to be able to do that for anyone who has bookmarked the site, or loads it from their history / address bar, with the https already included. Under what circumstances, other than someone hand-typing a URL, do you expect anyone to reach your site by HTTP? (And note that any such user can potentially get compromised, such as by ISPs.)

        2) Search engines will link to your site with the https pages they found when crawling. And if you stay down long enough for search engines to notice, you have a bigger problem and a much more awkward explanation to give.

        3) Many kinds of failures will prevent anyone from reaching the whole domain, or the IP address the domain currently points to, or similar. Have an off-site status page, social media accounts, and similar.

    • gorkish 5 years ago

      Everything you said is true, but it does not provide any reasonable argument that HSTS as it is designed and implemented is a valid way to enforce this. The potential for malicious or accidental misuse to cause an effectively immediate and irreversible domain-wide DoS is simply too great. I am quite surprised that the feature made it through planning and implementation to begin with.

    • idlewords 5 years ago

      This is a silly and absolutist position to take on HTTP. Everything depends on context, and in many cases it is far better to serve things over open HTTP than go offline.

      • txcwpalpha 5 years ago

        If you had reason to previously set up your site with HTTPS, you should never fall back to serving anything other than static assets (and even then, you better have a damn good reason) over HTTP from that same domain. Period.

        Sorry, but sometimes security is absolute.

      • eropple 5 years ago

        In what situation that you can conjure up is being forcibly reduced to HTTP distinguishable from being down?

        Like, how does it happen, ever?

        And what happens to your users' credentials if you do?

        • _b8r0 5 years ago

          When you have publicly accessible resources that must be available to all, but you can't guarantee that the accessing systems are configured correctly to use HTTPS.

          There are plenty of scenarios in which this happens online:

          * Legacy systems (e.g. Aminet)

          * Software distribution (e.g. apt mirrors)

          * Anything involving FTP where an HTTP mirror would be useful (e.g. overcoming FW restrictions)

          * Anything where permissionless access is a requirement (HTTPS is a permissioned system)

  • sjwright 5 years ago

    Not being able to serve HTTPS is not a real concern. It seems possible but in reality it simply won’t happen. If it ever does break, you fix it, you don’t change protocols.

    Once you go HTTPS you’re all in regardless whether or not you’ve set HSTS headers. Let’s say your HTTPS certificate fails and you can’t get it replaced. So what, you’re going to temporarily move back to HTTP for a few days? Not going to happen! Everyone has already bookmarked/linked/shared/crawled your HTTPS URLs. There is no automated way to downgrade people to HTTP, so only the geeks who would even think to try removing the “s” will be able to visit. And most geeks won’t even do that because we’ve probably never encountered a situation where that has ever helped.

    • SquareWheel 5 years ago

      It happened to me. I served my site over TLS and used HSTS. Hosting got expensive, so I rebuilt my site on Github Pages and hosted there. It was another year and a half before they rolled out HTTPS for custom domains.

      In that case, old visitors were rejected due to the policy. I wish I had set a lower duration.

      • sjwright 5 years ago

        > old visitors were rejected due to the policy

        Also because their links and bookmarks would have all failed.

        • shawnz 5 years ago

          But HSTS also blocks the workaround of "just Google it and find the page again"

          • LinuxBender 5 years ago

            You can use the webmaster tools on Google to fix the indexing. It takes a few days, but worth the effort.

            • MichaelApproved 5 years ago

              The issue is that the browser won't let you visit the insecure URL, regardless of how you get to it.

              It won't work because the HSTS setting the visitor got months ago told it not to.

              • LinuxBender 5 years ago

                That is exactly what I am talking about.

                In the webmaster tools, you want to get google to remove all references to the non-https versions. Ensure https is up on all URLs, then use their tools to re-index everything and remove all references to http://

                Are you saying you can't set up https on some of your URLs?

                • shawnz 5 years ago

                  The argument here is that enabling HSTS can be dangerous because if you enable it and then later become unable to serve HTTPS for some reason, you will have no way of turning it off. Even if you get your clients to manually edit their bookmarks to use HTTP again, their browsers will just rewrite the url to HTTPS anyway.

                  There's no issue with switching FROM HTTP to HTTPS: that's easy, just redirect them. The issue is if you have to switch back.
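
                  For context, the only standard way to clear a cached HSTS policy early is to serve the header again, over HTTPS the browser still trusts, with a zero lifetime, roughly:

                      Strict-Transport-Security: max-age=0

                  which of course requires working HTTPS, so it's no help in exactly the scenario described here.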

                  • LinuxBender 5 years ago

                    I completely understand. The bookmark scenario is even easier than the google links. You simply set up https and the cached HSTS entries will work.

                    • shawnz 5 years ago

                      The scenario is assuming you have a working HSTS setup but then become unable to serve HTTPS for some reason (e.g. cert expires and you can't acquire a new one, or the provider just drops support for SSL for some reason, or you are forced to change providers to one that doesn't support SSL)

                      HSTS can't be enabled on plain HTTP so it's not possible to create the problematic scenario if you never had SSL enabled to begin with. The problem is switching from SSL to non-SSL, not the other way around.

                      • LinuxBender 5 years ago

                        Even if you can't renew a cert you paid for, in most cases you should be able to get a temp cert from Lets Encrypt and renew it every couple of months. I have free wildcard certs for many of my domains. HSTS just requires HTTPS. It doesn't pin the cert to a particular CA. That is what CAA records are for.

                        Are you saying that you have applications that require HTTP port 80 only?

                        • shawnz 5 years ago

                          It's true that letsencrypt makes this less likely to be an issue. But there is still the possibility that maybe your hosting provider drops support for HTTPS or you are forced to switch to a provider that doesn't support HTTPS. The parent gave one example of this with their GitHub Pages situation.

                          Also: HSTS applies to all ports once applied, not just 80/443. That is another important thing to consider before turning it on.

  • Someone1234 5 years ago

    It is a good point.

    I would like to add that a lot of web apps break if they aren't served over HTTPS regardless, due to the Secure flag being set on cookies. For example, if we run ours over HTTP (even for development) it will successfully set the cookie (+Secure +HttpOnly) but cannot read the cookie back, and you get stuck on the login page indefinitely.

    So we just set ours to a year and consider HTTPS to be a mission-critical tier component. If it goes down, the site is simply "down."

    HSTS is kind of the "secret sauce" that gives developers coverage to mandate Secure cookies only. Before then we'd get caught in "what if" bikeshedding[0].

    [0] https://en.wiktionary.org/wiki/bikeshedding
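
    To make the Secure-cookie point above concrete, a login cookie in that kind of setup might be issued roughly like this (the name and value are made up for illustration):

        Set-Cookie: session=abc123; Secure; HttpOnly; Path=/

    Browsers will only send a Secure cookie back over HTTPS, which is why an accidental HTTP deployment just looks like a site that can never log you in.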

  • pimterry 5 years ago

    The only risk is if you've served HTTPS traffic properly with HSTS headers to users, and then your server is later unable to correctly handle HTTPS traffic. Note that HSTS headers on a non-HTTPS response are ignored.

    Whilst there are cases where you might fail to serve HTTPS traffic temporarily (e.g. if your cert expires and you don't handle it), almost all HTTPS problems are quick fixes, and are probably your #1 priority regardless of HSTS. If your HTTPS setup is broken and your application has any real security concerns at all, then it's arguably better to be inaccessible than to quietly allow insecure traffic in the meantime, exposing all your users' traffic. I don't know many good reasons you'd suddenly want to go from HTTPS back to only supporting plain HTTP either. I just can't see any realistic scenarios where HSTS causes you extra problems.

  • BCharlie 5 years ago

    I think it's a good point which is why I set the time low, even though many other resources set it to a week or longer. I just don't like very long cache times for anything that can break, so that site owners have a little more flexibility in case something goes wrong down the line.

  • ehPReth 5 years ago

    Speaking of HSTS.. does anyone here know if Firebase Hosting (Google Cloud) plans to support custom HSTS headers with custom domains? I can't add things like includeSubDomains or preload at present, unfortunately.

  • mtgx 5 years ago

    > if for whatever reason you are no longer able to serve HTTPS traffic

    Isn't that how it should work? Would you rather use Gmail over HTTP if its HTTPS stopped working? Besides, just supporting HTTP fallback means you're much more vulnerable to downgrade attacks -- it's the first thing attackers will attempt to use.

  • zaarn 5 years ago

    I set HSTS to 10 years. My infrastructure isn't even capable of serving HTTP other than for LetsEncrypt certs. An outage on HTTPS is a full outage. Most of my sites handle user data in some way, so HTTPS is mandatory anyway, as per my interpretation of the GDPR.
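
    A minimal nginx sketch of that kind of setup might look something like this (paths and values are illustrative assumptions, not the parent poster's actual config):

        server {
            listen 80;
            # plain HTTP exists only for the ACME HTTP-01 challenge
            location /.well-known/acme-challenge/ {
                root /var/www/letsencrypt;
            }
            # everything else is forced onto HTTPS
            location / {
                return 301 https://$host$request_uri;
            }
        }

    with something like `add_header Strict-Transport-Security "max-age=315360000; includeSubDomains";` sent from the HTTPS server block.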

  • tialaramex 5 years ago

    I don't get people who worry about _feature_ pinning like this.

    I imagine them looking at a business continuity plan and being aghast - why are we spending money to manage the risk from a wildfire in California overwhelming our site there, yet we haven't spent ten times as much on a zombie werewolf defence grid or to protect against winged bears?

    HSTS defends against a real problem that actually happens, like those Californian wildfires, whereas "whatever reason you are no longer able to serve HTTPS traffic" is a fantasy like the winged bears that you don't need to concern yourself with.

undecidabot 5 years ago

Nice list. You might want to consider setting a "Referrer-Policy"[1] for sites with URLs that you'd prefer not to leak.

Also, for "Set-Cookie", the relatively new "SameSite"[2] directive would be a good addition for most sites.

Oh, and for CSP, check Google's evaluator out[3].

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Re...

[2] https://www.owasp.org/index.php/SameSite

[3] https://csp-evaluator.withgoogle.com
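
As a quick illustration of both suggestions (the values are just examples):

    Referrer-Policy: strict-origin-when-cross-origin
    Set-Cookie: id=abc123; Secure; HttpOnly; SameSite=Lax

The first keeps full URLs (paths and query strings) from leaking to other origins while still sending your origin, and SameSite=Lax stops the cookie from being attached to most cross-site requests, which takes a lot of the sting out of CSRF.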

  • will4274 5 years ago

    Referrer-Policy is nice, but browsers should just default to strict-origin-when-cross-origin and end the mess.

Avamander 5 years ago

Instead of X-Frame-Options one should use CSP's frame-ancestors option, which has wider support among modern browsers. But CSP deserves more than one paragraph in general.

He also missed Expect-Staple and Expect-CT. In addition, most security headers have the option to specify a URI where failures are sent, which is very important in production environments.
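
For illustration, the frame-ancestors equivalent plus a reporting endpoint might look roughly like this (the report URLs are placeholders):

    Content-Security-Policy: frame-ancestors 'none'; report-uri https://example.com/csp-reports
    Expect-CT: max-age=86400, enforce, report-uri="https://example.com/ct-reports"

Note the two headers use slightly different syntax: CSP separates directives with semicolons and takes a bare URI, while Expect-CT is comma-separated and quotes its report-uri.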

  • tialaramex 5 years ago

    Expect-CT is pretty marginal. In principle a browser could implement Certificate Transparency but then only bother to enforce it if Expect-CT is present, in practice the policy ends up being that they'll enforce CT system-wide after some date. Setting Expect-CT doesn't have any effect on a browser that can't understand SCTs anyway, so that leaves basically no audience.

    Furthermore, especially with Symantec out of the picture, there is no broad consumer market for certificates from the Web PKI which don't have SCTs. The audience of people who know they want a certificate is hugely tilted towards people with very limited grasp of what's going on, almost all of whom definitely need embedded SCTs or they're in for a bad surprise. So it doesn't even make sense to have a checkbox for "I don't want SCTs" because 99% of people who click it were just clicking boxes without understanding them and will subsequently complain that the certificate doesn't "work" because it didn't have any SCTs baked into it.

    There are CAs with no logging, either for industrial applications which aren't built around a web browser (and so don't check SCTs) and are due to be retired before it'd make sense to upgrade them (most are gone in 2019 or 2020), or for specialist customers like Google whose servers are set up to go get SCTs at the last moment, to be stapled later. Neither is a product with a consumer audience, which means neither is a plausible source of certificates for your hypothetical security adversary.

    As a result, in reality Expect-CT doesn't end up defending you against anything that's actually likely to happen, making it probably a waste of a few bytes.

    • Avamander 5 years ago

      Unfortunately yes, Expect-CT could use more enforcement and support, but I think spending those few bytes is worth it in the sense of indicating that people want to see CT enforced more.

  • BCharlie 5 years ago

    That is true! I do set frame-ancestors in the sample CSP for this reason. I could probably do a dedicated post on CSP to do it justice, but don't want to overwhelm anyone who just wants to start setting headers.

    One good reason to set both options, as I mention in the post, is that scanners who rate site security posture may penalize site owners who don't set both - no harm in doing it that I know of.

    • Jach 5 years ago

      Nitpicking as I like to see practical awareness posts like yours spread: you should link to the CSP spec (v3) as the official site. https://content-security-policy.com/ is useful to get started but is out of date and Foundeo isn't authoritative.

      For a "complete" guide (maybe "comprehensive starter guide"?) I'd at least add a note in the x-frame-options section that it's been superseded by CSP and only needed if you must support IE (or I guess please a tool), and if you have interesting frame requirements (i.e. more than one allowed ancestor but not all) you're going to have to use a hack to support that with the old header.

      Another interesting callout is that most of the CSP directives can be specified by a meta tag in the markup. Not only is this handy for quick serverless testing but can become necessary if you end up routing through something (like some CDNs) that has a max overall headers limit... CSP headers can get pretty big if you don't just bail out with a wildcard.
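
      For reference, the meta-tag form looks roughly like this (policy shortened for illustration):

          <meta http-equiv="Content-Security-Policy" content="default-src 'self'">

      though a few directives (frame-ancestors, report-uri and sandbox among them) are only honoured when delivered as a real header.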

      Definitely agree CSP can have its own post. It's complicated and still evolving with new spec versions. I recently learned about chrome's feature-policy header proposal, which to me is like more granular script-src policies, so I wouldn't be surprised if some future CSP version just absorbs it...

      • BCharlie 5 years ago

        Thanks for the feedback! I did link the official site, but it's kinda buried in the paragraph and maybe not obvious.

        I added some text to the x-frame-options to note the CSP rules - it's a great addition.

        • Jach 5 years ago

          Thanks for considering! I think I wrote the nitpick poorly, it's still early for me. I meant that you're currently linking to https://content-security-policy.com/ as the "official site" but it's not really, just a useful reference (but great to link to and in any case it does link to the official CSP2 recommendation eventually so you're fine). The most "official site" though at the moment is the combination of https://www.w3.org/TR/CSP2/ and the newer https://www.w3.org/TR/CSP3/ that's already implemented by Chrome.

          I've reminded myself that v3 still hasn't fully stabilized into an official recommendation despite being in final-draft since October (it's basically closed for new things) so for now awareness of 2 and 3 is probably going to continue to be important for anyone responsible for producing a moderately complex string (guess who that is on my teams ;)). Though even at just level 2 there are a few things I could say about differences in behavior just between Chrome and Firefox... Testing is crucial!

Grollicus 5 years ago

The article should mention for Access-Control-Allow-Origin that the default value is the safe default, and that setting this header weakens site security.

  • BCharlie 5 years ago

    Great point! I added a sentence to say that the default is all that's needed.

the_common_man 5 years ago

X-frame-options is obsolete. Most browsers complain loudly on the console or ignore the header. Use csp instead

  • will4274 5 years ago

    > X-frame-options is obsolete. Most browsers complain loudly on the console or ignore the header.

    The deny option seems to work just fine. My default browser (Firefox) doesn't complain. MDN doesn't indicate any browsers have dropped support. Plus, dropping support would be an unmitigated and unnecessary unforced security error, by making old sites insecure. Do you have a link to an example of a browser ignoring the header?

  • floatingatoll 5 years ago

    For those wondering, CSP ‘frame-ancestors’ if I remember correctly.

    • user5994461 5 years ago

      It's a shame browsers are breaking the X-Frame-Options.

      It was an easy option to force with load balancers or any intermediate server. Frames should always be blocked on the open internet.

      The content security policy can't be adjusted easily. It screws with applications and frameworks that use it for any of the twenty other options it covers.

      • floatingatoll 5 years ago

        Why? It’s been deprecated for years and years. You don’t have to set any of the other 20 CSP options to set CSP:frame-ancestors. There’s no reason to avoid it except taking a completionist approach to CSP headers (“we have to set all possible CSP attributes for maximum security in a single go on our first try”) which I strongly discourage.

        • user5994461 5 years ago

          You can't just do a "set header Content-Security-Policy frame-ancestors none" on all traffic. This is gonna break anything using CSP for any of the 20 settings it provides.

          • floatingatoll 5 years ago

            Correct. You would be expected to merge it into any CSP headers used by your app, either using (in your Apache scenario) If/Else and Header modify or by modifying your application where appropriate.

            While XFO is simpler to overwrite on a global basis, it’s imprecise and doesn’t permit “allow certain sites to frame, deny all others” and is likely to become fully unsupported whenever any CSP policy is defined, given its deprecated status. Taking the XFO way out will only help you short-term at best.
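
            A rough sketch of the merge approach with Apache's mod_headers (illustrative, not a drop-in config):

                # append frame-ancestors to a CSP the app already sends
                Header edit Content-Security-Policy "^(.*)$" "$1; frame-ancestors 'none'"
                # and provide a minimal policy when the app sends none
                Header setifempty Content-Security-Policy "frame-ancestors 'none'"

            nginx and most load balancers can do the equivalent; it's just more work than blindly overwriting a single XFO header.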

dalf 5 years ago

There is also the Feature-Policy header: it allows and denies the use of browser features in its own frame and in embedded content. I've seen this header on a bank website.

Example:

  Feature-Policy: accelerometer 'none'; autoplay 'none'; camera 'none'; fullscreen 'none'

Documentation: https://developer.mozilla.org/en-US/docs/Web/HTTP/Feature_Po...

joecot 5 years ago

I'm a little confused by the examples for Access-Control-Allow-Origin:

> Access-Control-Allow-Origin: http://www.one.site.com

> Access-Control-Allow-Origin: http://www.two.site.com

And in the examples setting both. Because in my experience you cannot set multiple [1]. Lots of people instead set it to * which is both bad and restricts use of other request options (such as withCredentials). It looks like the current working solution is to use regexes to return the right domain [2], but I'm currently having trouble getting that to work, so if there's some better solution that works for people I'd love to hear it.

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Error...

[2] https://stackoverflow.com/questions/1653308/access-control-a...

  • jrockway 5 years ago

    I think the problem that people are running into with CORS is that their webserver was created before CORS was a thing, so it's tough to configure it correctly. What you want to do is: if you allow the provided Origin, echo it back in Access-Control-Allow-Origin.

    Envoy has a plugin to do this (envoy.cors), allowing you to configure allowed origins the way people want (["*.example.com", "foo.com"]) and then emitting the right headers when a request comes in. It also emits statistics on how many requests were allowed or denied, so you can monitor that your rules are working correctly. If you are using something else, I recommend just having your web application do the logic and supply the right headers. (You should also be prepared to handle an OPTIONS request for CORS preflight.)

    • joecot 5 years ago

      Sure, I wish web servers had better options for this. If you're trying to do it on the web server level it seems like the current solution is a regex with your list of approved origins vs the origin header, and then setting Access-Control-Allow-Origin to the matching one. But the current examples, showing just setting the header multiple times, will lead devs down the garden path. Unless I'm missing something, which I very much hope I am.
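
      For what it's worth, a rough sketch of that approach in nginx (the origins are the article's examples, and the whole thing is illustrative rather than battle-tested):

          # in the http block: pick the allowed origin, empty otherwise
          map $http_origin $cors_origin {
              default "";
              "http://www.one.site.com" $http_origin;
              "http://www.two.site.com" $http_origin;
          }

          # in the server/location block
          add_header Access-Control-Allow-Origin $cors_origin;

      nginx skips add_header when the value is empty, so disallowed origins simply get no CORS grant.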

      • jrockway 5 years ago

        You are right that this article isn't going to enable someone to set up CORS correctly.

        It is actually kind of weird that it's in here, because the other things they talk about add more security, but if you don't need CORS and you decide to just add it to your configuration for no reason, you actually now have less security. Especially if you return * for the allowed origin.

  • BCharlie 5 years ago

    You are right on this - I thought you could set multiple sites by setting multiple headers, but it doesn't work that way, which I should have known because headers don't work that way in general...

    The recommended way to do multiple sites seems to be to have the server read the request header, check it against a whitelist, then dynamically respond with it, which seems terrible.

    Thanks for catching this - I updated the post to reflect this and make it more clear.

    • unilynx 5 years ago

      Actually, headers _do_ often work that way. HTTP says:

      > Multiple message-header fields with the same field-name MAY be present in a message if and only if the entire field-value for that header field is defined as a comma-separated list

      Which applies to HTTP headers such as Cache-Control:, and probably goes back to the email RFCs allowing multiple To: headers.

      It's just that Access-Control-Allow-Origin isn't defined to accept a comma list, just like Content-Security-Policy doesn't (which is another header breaking things if it appears more than once)
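
      For example, these two forms are equivalent because Cache-Control is defined as a comma-separated list:

          Cache-Control: no-cache
          Cache-Control: no-store

          Cache-Control: no-cache, no-store

      whereas Access-Control-Allow-Origin isn't defined as a list, so sending it twice just makes browsers reject it.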

    • paulddraper 5 years ago

      Headers usually work exactly that way. Cookies and CORS are the oddball exceptions.

hcheung 5 years ago

The nginx header directives all have incorrect syntax because of the extra ":", and directives with multiple values should be wrapped in quotes (such as "1; mode=block"). Here are the correct settings:

    ## General Security Headers
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Frame-Options deny;
    add_header X-Content-Type-Options nosniff;
    add_header Strict-Transport-Security "max-age=3600; includeSubDomains";

yyyk 5 years ago

The X-XSS-Protection header recommendation is a zombie recommendation which is at best outdated and at worst harmful. It originates from old IE bugs, but it introduces worse issues.

IMHO, the best value for X-XSS-Protection is either 0 (disabling it completely like Facebook does) or not providing the value at all and just letting the client browser use its default. Why?

First, XSS 'protection' is about to not be implemented by most browsers. Google has decided to deprecate Chrome's XSS Auditor[0] and stop supporting XSS 'protection'. Microsoft has already removed its XSS filter from Edge[1]. Mozilla has never bothered to support it in Firefox.

So most leading net companies already think it doesn't work. Safari of course supports the much stronger CSP. So it's only possibly useful on IE - if you don't support IE, might as well save the bytes.

Second, XSS 'protection' protects less than one might think. In all implementing browsers, it has always been implemented as part of the HTML parser, making it useless against DOM-based attacks (and strictly inferior to CSP)[2].

Worse, the XSS 'protection' can be used to create security flaws. IE's default is to detect XSS and try to filter it out; this has been known to be buggy to the point of creating XSS on safe pages[3], which is why the typical recommendation has been the block behaviour. But blocking has itself been exploited in the past[4], and has side-channel leaks that even Google considers too difficult to catch[0], to the point of preferring to remove XSS 'protection' altogether. Blocking also has an obvious social exploitation which can create attacks or make attacks more serious.[5]

In short, the best idea is to get rid of browsers' XSS 'protection' ASAP in favour of CSP, preferably by having all browsers deprecate it. This is happening anyway, so might as well save the bytes. But if you do provide the header, I suggest disabling XSS 'protection' altogether.

[0] https://groups.google.com/a/chromium.org/forum/#!msg/blink-d...

[1] https://developer.microsoft.com/en-us/microsoft-edge/platfor...

[2] e.g. https://github.com/WebKit/webkit/blob/d70365e65de64b8f6eaf1f...

[3] CVE-2014-6328, CVE-2015-6164, CVE-2016-3212..

[4] https://portswigger.net/blog/abusing-chromes-xss-auditor-to-...

[5] Assume that an attacker has enough access to normally allow XSS. If he does not, the filter is useless. If he does, the attacker can by definition trigger the filter. So trigger the filter, make a webpage be blocked, and call the affected user as "support". From there the exploitation is obvious, and can be much worse than mere XSS. Now, remember that all those XSS filters in all likelihood have false positives that may not be blocked by other defences, because they're not attacks. So it's quite possible the filter introduces a social attack that wouldn't be possible otherwise!

Hattip: https://frederik-braun.com/xssauditor-bad.html which gave me even more reasons to think browsers' XSS 'protection' is awful. I didn't know about [2] before reading his entry.

  • yyyk 5 years ago

    For [3] (exploiting IE's XSS filter default behaviour to create XSS) see also https://www.slideshare.net/codeblue_jp/xss-attacks-exploitin... .

    The author recommends either changing the default behaviour to block or disabling the filter altogether. I believe experience has shown this protection method cannot be fixed.

    Ultimately, safe code is code that can be reasoned about, but there never was even a specification for this 'feature'. By comparison, CSP has a strict specification. It covers more attacks, and has a better failure mode than the XSS protection's choice between filtering and blocking the entire page load.

  • BCharlie 5 years ago

    Thanks for this response - lots of new information here that I'll have to read up on!