beala 2 days ago

If I may attempt to summarize:

CORS is a mechanism for servers to explicitly tell browsers which cross-origin requests are allowed to read responses. By default, browsers block scripts from reading cross-origin responses: unless explicitly permitted, the response cannot be read by the requesting domain.

For example, a script on evil.com might send a request to bank.com/transactions to try and read the victim's transaction history. The browser allows the request to reach bank.com, but blocks evil.com from reading the response.
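
Concretely, it looks something like this (a sketch using the made-up domains above, run from a page on evil.com):

  // The browser sends the request, but unless bank.com's response carries an
  // Access-Control-Allow-Origin header permitting evil.com, the promise
  // rejects and nothing is readable.
  async function tryToReadTransactions(): Promise<void> {
    try {
      const res = await fetch("https://bank.com/transactions");
      console.log(await res.text()); // unreachable without CORS permission
    } catch (err) {
      console.log("read blocked by the browser:", err); // TypeError: Failed to fetch
    }
  }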

CSRF protection prevents malicious cross-origin requests from performing unauthorized actions on behalf of an authenticated user. If a script on evil.com sends a request to perform actions on bank.com (e.g., transferring money by requesting bank.com/transfer?from=victim&to=hacker), the server-side CSRF protection at bank.com rejects it (likely because the request doesn't contain a secret CSRF token).

In other words, CSRF protection is about write protection, preventing unauthorized cross-origin actions, while CORS is about read protection, controlling who can read cross-origin responses.

  • chuckadams 2 days ago

    > In other words, CSRF protection is about write protection, preventing unauthorized cross-origin actions, while CORS is about read protection, controlling who can read cross-origin responses.

    I apologize for the length of the reply, I didn't have time to write a short one. But to sum up, CSRF is about writes, while CORS protects both reads and writes, and they're two very different things.

    CSRF is a sort of "vulnerability", but really just a fact of the open web: that any site can create a form that POSTs any data to any other site. If you're on forum.evil.com and click the "reply" button (or anything at all), that could instead POST a transfer request to your.bank.com, and if you happen to be logged in, it'll happen with your currently authenticated session. When the bank implements CSRF protection, it ensures that a known token on the page (sometimes communicated through headers instead) is sent with the transfer. If that token isn't present, or doesn't match what's expected, reject the request. It ensures that only forms generated by bank.com will have any effect, and it works because evil.com can't use JS to read the content of the page from bank.com due to cross-origin restrictions.
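
    A rough sketch of that server-side check (no particular framework; all names here are made up):

      import crypto from "node:crypto";

      const tokensBySession = new Map<string, string>(); // sessionId -> expected token

      // When rendering the form, embed the token as
      // <input type="hidden" name="csrf" value="...">
      function issueCsrfToken(sessionId: string): string {
        const token = crypto.randomBytes(32).toString("hex");
        tokensBySession.set(sessionId, token);
        return token;
      }

      // On POST /transfer, reject unless the submitted token matches the session's.
      function csrfOk(sessionId: string, submitted: string | undefined): boolean {
        const expected = tokensBySession.get(sessionId);
        if (!expected || !submitted || expected.length !== submitted.length) return false;
        return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(submitted));
      }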

    CORS on the other hand is an escape hatch from a different cross-origin security mechanism that browsers enable by default: that a script on foo.com cannot make requests to bar.com except for "simple" requests (the definition of which is anything but simple; just assume any request that can do anything interesting is blocked). CORS is a way for bar.com to declare with a header that foo.com is in fact allowed to make such requests, and to drop the normal cross-origin blocking that would occur otherwise. You only have to use CORS to remove restrictions: if you do nothing, maximum security is the default. It's also strictly a browser technology: non-browser user agents do not need or use CORS and can call any API anytime.

    Fun fact: you don't need CSRF protection at all if your API is strictly JSON-based, or uses any content type that isn't one of the built-in form enclosure types. The Powers That Be are talking about adding a json enclosure type to forms, but submitting it would be subject to cross-origin restrictions, same as it is with JS.

    • motorest a day ago

      > If you're on forum.evil.com and click the "reply" button (or anything at all), that could instead POST a transfer request to your.bank.com, and if you happen to be logged in, it'll happen with your currently authenticated session.

      It should be clarified that by "currently authenticated session" it actually means cookies, which browsers are designed to automatically include in requests sent to the domain that set them.

      That's why CSRF attacks work: the attacker tricks the user's browser into sending a request to a domain where the user was already authenticated, and the browser automatically inserts the user's session info into the request, thus authorizing it.

      CSRF tokens work by adding a kind of API key that the page loads in a way that is not stored as a cookie, so the browser never attaches it automatically. Servers then check for the CSRF token in each request to determine whether the request is authorized. CSRF attacks alone don't work because they can only ride on the automatically attached cookies; they cannot include the CSRF token, and so are rejected.

    • jcmfernandes 2 days ago

      > Fun fact: you don't need CSRF protection at all if your API is strictly JSON-based, or uses any content type that isn't one of the built-in form enclosure types. The Powers That Be are talking about adding a json enclosure type to forms, but submitting it would be subject to cross-origin restrictions, same as it is with JS.

      AFAIK, this is not totally accurate because the internet is a messy place. For example, the OAuth authorization code grant flow blesses passing the authorization code to the relying party (RP) in a GET request as a query parameter. The RP must protect against CSRF when receiving the authorization code.

      • catlifeonmars 2 days ago

        > The RP must protect against CSRF when receiving the authorization code

        Is this via PKCE or (ID) token nonce validation?

      • chuckadams 2 days ago

        Ah yes, good catch. That's what the `state` parameter is about, right? But I'll weasel out and say that lack of a content type (being a GET) is one of the built-in types too ;)

    • PantaloonFlames 2 days ago

      > It's a nice way to make an API "public", or would be if CORS supported a goddam wildcard for the host.

      I don't get what you mean. Access-Control-Allow-Origin supports a wildcard. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Ac...

      • LudwigNagasena 2 days ago

        Also, nothing prevents you from checking the host server-side according to arbitrary logic and putting it into the CORS header dynamically.
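
        For example, something like this (a sketch using Node's built-in http module; the allowlist is made up):

          import http from "node:http";

          const allowedOrigins = new Set(["https://app.example.com", "https://admin.example.com"]);

          http.createServer((req, res) => {
            const origin = req.headers.origin;
            if (origin && allowedOrigins.has(origin)) {
              res.setHeader("Access-Control-Allow-Origin", origin); // echo the origin back
              res.setHeader("Vary", "Origin"); // keep caches from reusing the answer across origins
            }
            res.end("ok");
          }).listen(8080);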

        • Retr0id a day ago

          Nothing prevents this, but the client must still emit preflight checks (an extra round-trip) for every endpoint, and again if headers need to change.

      • dopp0 a day ago

        I think what he meant is that the wildcard is not supported in the origin itself, for instance: https://*.somedomain.com.

        per documentation, it's either "*" or "https://subdomain.somedomain.com"

        • chuckadams a day ago

          That was in fact it, lack of a subdomain wildcard. I got really confused because I opened one project I thought had this issue, saw the ACAO header was set to *, and thought I hallucinated the whole thing out of some different issue. But it was a different project where I needed to allow internal access, which would have been easy with a hardwired response with a wildcard, but instead I needed to write a whole lambda endpoint just to pull out the requesting host and put it in the ACAO header. Also easy, but what a waste.

          Either way, kind of a digression into details of CORS that wasn't necessary for the introductory treatment, so I edited it out.

      • chuckadams 2 days ago

        Yah you saw that before I edited it out, I realized that gripe was actually about the behavior of AWS API Gateway rather than CORS itself. It hates the wildcard, or something, I can't even remember exactly what the issue was. Thus did I zap it.

    • limbero a day ago

      > You only have to use CORS to remove restrictions: if you do nothing, maximum security is the default.

      This is only true if you see CORS as a tool only to prevent reading data. I personally find it to be a useful tool to prevent writes, because the Origin header fulfils several of the purposes of a CSRF token. But that requires work on the backend to not actually perform writes unless the CORS parameters are valid. That sort of security is not the default (which is probably good).
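
      Roughly this, on the backend (a sketch of the idea with a made-up origin, not production code):

        const TRUSTED_ORIGIN = "https://bank.example"; // assumption: your own site

        // Treat any state-changing method as a write and require a known Origin.
        function originAllowsWrite(method: string, origin: string | undefined): boolean {
          if (method === "GET" || method === "HEAD" || method === "OPTIONS") return true;
          return origin === TRUSTED_ORIGIN; // absent or foreign Origin -> reject the write
        }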

  • layer8 2 days ago

    To add to that:

    CORS is implemented by browsers based on standardized HTTP headers. It’s a web-standard browser-level mechanism.

    CSRF protection is implemented server-side (plus parts of the client-side code) based on tokens and/or custom headers. It’s an application-specific mechanism that the browser is agnostic about.

    • femto113 2 days ago

      Some additional color:

      CORS today is just an annoying artifact of a poorly conceived idea about domain names somehow being a meaningful security boundary. It never amounted to anything more than a server asking the client not to do something, with no mechanism to force the client to comply and no direct way for the server to tell if the client is complying. It has never offered any security value; workarounds were developed before it even became a settled standard. It's so much more likely to prevent legitimate use than protect against illegitimate use that browsers typically include a way to turn it off.

      With CSRF the idea is that the server wants to be able to verify that a request from a client is one it invited (most commonly, that a POST comes from a form that it served in an earlier GET). It's entirely up to the server to design the mechanism for that; the client typically has no idea it's happening (it's just feeding back to the server on a later request something it got from the server on a previous request). Also notable: despite the "cross-site" part of the name, it doesn't really have any direct relationship to "sites" or domains, and servers can and do use the exact same mechanisms to detect or prevent issues like accidentally submitting the same form twice.

      • tsimionescu a day ago

        CSRF protection wouldn't work as easily if CORS (or, more precisely, the same-origin policy that CORS allows you to relax in controlled ways) weren't there. And both cookies and TLS also rely entirely on domains being a meaningful security boundary.

        Without the SOP, evil.com could simply use JS to read the pages from bank.com, get a valid CSRF token, and then ask the browser to send a request to bank.com using its own CSRF token and the user's cookie. This maybe could be circumvented by tying the cookie and the original CSRF token together, but there might be other ways around that. Plus, if the browser wasn't enforcing the SOP, then the different tabs might just be able to read each other's variables, since that is a feature today for multiple tabs accessing the same origin.

      • dasil003 a day ago

        I’m not sure in what world domains aren’t a meaningful security boundary, but cross-origin prevention is absolutely necessary in a world with private web apps and scriptable browsers.

        Maybe you are of the opinion that the web should have stayed document only and apps should have stayed native binaries, but as far as the web is concerned the default cross-origin request policy is a critical security pillar.

      • robocat a day ago

        > domain names somehow being a meaningful security boundary

        That's your Internet opinion. Perhaps expand on why you think that?

        I reckon domains have quite a few strong security features. Strong enough that we use them to help access valuable accounts

      • smagin a day ago

        well it does make sense to assume that by default different origins belong to different people, and some of those people don't have to behave in a friendly way toward each other.

        There is little the server can do about that, because of the request-based model. The state that persists between requests lives in cookies, and it's the browser's job not to expose those cookies all around. Turning off the same-origin policy would be a terrible idea. For one, it's what makes CSRF protection work, by not allowing cross-origin reads.

  • alexashka 2 days ago

    Regarding CSRF - how would I be authenticated to do actions on bank.com when I'm on evil.com?

    It seems like the problem is at the level of login information somehow crossing domain boundaries?

    What stops a script on evil.com from going to bank.com to get a CSRF token and then including that in their evil request?

    • chuckadams 2 days ago

      > It seems like the problem is at the level of login information somehow crossing domain boundaries?

      The login information isn't so much crossing boundaries -- evil.com can't read your session cookie on bank.com -- but cookies that don't set a SameSite attribute allow anyone to send that information on your behalf, and effectively act as you in that request. Textbook example of a "confused deputy" attack.

      > What stops a script on evil.com from going to bank.com to get a CSRF token and then including that in their evil request?

      The token is either stored on the page that bank.com sends (for an HTML form) or sent in a header and stored in local storage (for API clients). In neither case can evil.com read that information, due to cross-origin restrictions, and it changes with every form.

      • alexashka 2 days ago

        > In neither case can evil.com read that information

        What stops evil.com from having an API endpoint called evil.com/returnBankCSRFToken that goes to bank.com, scrapes the token and returns it?

        CSRF tokens are just part of an HTML form - they are not hidden or obscured, and thus scraping them is trivial.

        When I go to evil.com, it calls the endpoint, gets the token and sends a request to bank.com using said token, thus bypassing CSRF?

        • hkpack 2 days ago

          Nothing stops it, but it will be a different CSRF token, which will not match the one generated for the original page.

          The server keeps track of which CSRF token was given to which client using cookies (usually some form of session ID), and stores it on the server somewhere.

          It is a very common pattern and all frameworks support it with the concept of "sessions" on the back end.

          • hansonkd 2 days ago

            > stores it on the server somewhere

            You don't need to store anything on the server. Cookies for that domain are sent with the request, and it is enough for the server to check its cookie against the CSRF data in the request.

            Browsers would send the bank.com cookies with the bank.com request. It is security built into the browser, which is why it's so important to use secure browsers and secure cookies.

            If an attacker convinces the user to use an insecure browser, you can circumvent CSRF protection, but at that point there are probably other exploits you can do.

            • smagin a day ago

              > How does the server know the cookie is valid if it doesn't store it

              Depending on why you're asking the question: either because it decrypts correctly, or because it contains some user identifier.

              People don't usually store sessions in cookies because cookies can't be very big, and sessions do become big. So what people do instead is store the sessions in a database, and put session identifiers into cookies.

              • hansonkd a day ago

                You don't need to store CSRF in sessions. Django doesn't by default.

                The CSRF token can be entirely separate from sessions.

                • smagin a day ago

                  Not only do you not need to, you shouldn't. Sessions shouldn't be accessible to JS at all

            • mewpmewp2 a day ago

              How does the server know the cookie is valid if it doesn't store it, how does it know the CSRF token is valid if it doesn't store it, and finally how does it know that this CSRF token relates to this cookie's session token if it doesn't store it?

              • hansonkd a day ago

                The CSRF token can have nothing to do with the cookie session information. You can store the CSRF token as a separate cookie.

                You can validate the token by keeping a key on your server and checking that the token you receive can be derived from that key.

                See Django's implementation of CSRF for more details. CSRF tokens are separate from session and no CSRF information needs to be stored in database to validate CSRF.
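
                The general shape of such a scheme (a sketch only; Django's actual implementation differs in its details):

                  import crypto from "node:crypto";

                  const SERVER_KEY = "keep-this-secret"; // assumption: loaded from config in reality

                  function mintToken(): string {
                    const value = crypto.randomBytes(16).toString("hex");
                    const mac = crypto.createHmac("sha256", SERVER_KEY).update(value).digest("hex");
                    return `${value}.${mac}`; // self-contained: nothing stored server-side
                  }

                  function verifyToken(token: string): boolean {
                    const [value, mac] = token.split(".");
                    if (!value || !mac) return false;
                    const expected = crypto.createHmac("sha256", SERVER_KEY).update(value).digest("hex");
                    return mac.length === expected.length &&
                      crypto.timingSafeEqual(Buffer.from(mac), Buffer.from(expected));
                  }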

        • chii a day ago

          > What stops evil.com from having an API endpoint called evil.com/returnBankCSRFToken that goes to bank.com, scrapes the token and returns it?

          so evil.com will now require some sort of authentication mechanism with bank.com to scrape a valid CSRF token. If this authentication works (either because the user willingly gave their login information to evil.com, or because evil.com has a deal with bank.com directly), then there's no issue, and it works as expected.

    • gavinsyancey 2 days ago

      When you logged in to bank.com, it set a cookie that your browser presents when it makes any request to bank.com, regardless of how it was initiated (i.e. it would still send the cookie on a cross-site XHR initiated by evil.com's JavaScript).

      > What stops a script on evil.com from going to bank.com...

      CORS

      • JimDabell 2 days ago

        > > What stops a script on evil.com from going to bank.com...

        > CORS

        CORS does the exact opposite of what you think.

        Those types of cross-site requests are forbidden by default by the Same-Origin Policy (SOP) and CORS is designed so you can allow those requests where they would otherwise be forbidden.

        CORS is not a security barrier. Adding CORS removes a security barrier.

      • Sohcahtoa82 20 hours ago

        > > What stops a script on evil.com from going to bank.com...

        > CORS

        Incorrect. It's the SOP that prevents an evil.com script from going to bank.com.

        It's CORS that would allow evil.com to do so. CORS is an insecurity feature that relaxes the SOP.

      • Macha 2 days ago

        Note that cookies now have the SameSite cookie option which should prevent this

      • alexashka 2 days ago

        > it set a cookie that your browser presents when it makes any request to bank.com, regardless of how it was initiated

        Right, this seems like a very bad idea and now everyone has to do CSRF because of it?

        CORS doesn't prevent evil.com from sending a request to bank.com, it only prevents reading the response, no?

        So again, what stops evil.com from sending a request to say transfer 1 BBBBillion dollars to bank.com and including a CSRF token it gets from visiting bank.com?

        • chuckadams 2 days ago

          > Right, this seems like a very bad idea and now everyone has to do CSRF because of it?

          Yep, that pretty much sums it up.

          CORS doesn't have to enter into it though: evil.com just has no way to read the CSRF token from bank.com, it's a one-time password that changes with every form (one hopes) and it's embedded in places that it can't access. It can send an arbitrary POST request, but no script originating from evil.com (or anywhere that is not bank.com) can get at the token it would need for that post to get past the CSRF prevention layer.

        • blincoln a day ago

          CSRF tokens are only valid for a particular user, or session, or sometimes even a particular page load.

          If there's a way for evil.com to obtain a CSRF token that's valid for an arbitrary user, it's a vulnerability, just like if evil.com could obtain the user's session token, JWT, etc.

        • voxic11 2 days ago

          > it only prevents reading the response, no?

          > So again, what stops evil.com from sending a request to say transfer 1 BBBBillion dollars to bank.com and including a CSRF token it gets from visiting bank.com?

          It can't read the response from bank.com so it can't read the CSRF token. The token is basically proving the caller is allowed to read bank.com with the user's credentials. Which is only possible if the caller lives on bank.com or an origin that bank.com has allowed via CORS.

Scaevolus 2 days ago

> JS-initiated requests are not allowed cross-site by default anyway

Incorrect. You can use fetch() to initiate cross-site requests as long as you only use the allowed headers.

https://developer.mozilla.org/en-US/docs/Glossary/CORS-safel...
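
For example (made-up URL; both calls may be sent, but only the second triggers a preflight):

  // A "simple" request: sent immediately, no preflight. Without CORS headers
  // on the response, the page still can't read the result.
  fetch("https://other.example/api");

  // A non-safelisted header upgrades it: the browser first sends an OPTIONS
  // preflight and only makes the real request if the server opts in.
  fetch("https://other.example/api", { headers: { "X-Custom": "1" } });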

  • dathinab a day ago

    Even funnier: CORS allow-* has very funky interactions, as it simply doesn't allow the client to provide credentials (e.g. the Authorization header, cookies, etc.).

    If you have a public, call-by-JS-focused HTTP API which should be accessible from pretty much anywhere, and therefore set it to *, but also want an `Authorization` header, you are out of luck.

    Solution 1: use a custom header for your credentials, like e.g. AWS does. This works for JS-focused APIs, but cookies, and as such e.g. cookie-based XSS protection, won't work.

    Solution 2: dynamically return the caller's domain as the allowed origin. This works, but requires dynamic responses to pre-flight requests and kind of undermines the whole CORS system.

    Honestly, neither is really satisfying.

  • duskwuff 2 days ago

    And JS can also indirectly initiate requests for resource or page fetches, e.g. by creating image tags or popup windows. It can't see the results directly, but it can make some inferences.

    • 1oooqooq 2 days ago

      there are so, so, so many ways to read this data back it's not even fun.

      • Muromec 2 days ago

        There are ways, but they generally need the cooperation of both sides of the inter-domain boundary. What you generally can't do is make arbitrary reads from the context of another domain (e.g. call GET on their API and read the result) into your domain without them explicitly allowing it.

        • duskwuff 2 days ago

          Right. What you can sometimes do is observe the effects of the content being loaded, e.g. see the dimensions of an image element change when its content is loaded.

          • RandomDistort 2 days ago

            Is there some document somewhere that lists all the potential ways of doing stuff like this?

            • Herrera 2 days ago

              Yeah, https://xsleaks.dev tracks most of the known ways to leak cross-origin data.

              • smagin 2 days ago

                oh hell yes. And oh yes, iframes and postMessage: of course people will set them up incorrectly, and even when they don't, some (probably not that important, but still) data will leak if you're creative enough. Thanks for the link!

  • klysm a day ago

    To make things even more complicated, service workers can intercept requests and change the request mode.

  • smagin 2 days ago

    you're right, you can initiate cross-site requests that _could be_ form submissions. It was even in the post but I thought I'd omit that bit for clarity. I should have decided otherwise.

  • Evidlo a day ago

    Is that actually true? This SO seems to contradict that: https://stackoverflow.com/questions/44121593/sending-a-simpl...

    I just want to fetch publicly available information from my client-side app, but CORS gets in the way and forces me to use a sketchy CORS proxy. Makes me really hate CORS

    • reynaldi a day ago

      I agree with this, but in my past online discussions about fetching publicly available information, two main arguments often arise:

      1. The resource owner doesn’t want you fetching their resource.

      2. They don’t want to suddenly be flooded with requests.

      Each of these points has counterarguments. For example, the Same Origin Policy (SOP) only restricts fetches from the client side, and nothing stops people from fetching via a backend.

      The second argument makes sense: the resource owner doesn't want their resource to be freely fetched and to suddenly receive thousands of requests that their server likely can't handle. SOP helps prevent this, but if you're fetching from the backend, you should implement caching to avoid repeatedly hitting the target resource.

      I created a CORS proxy [0] to handle this scenario, including caching responses.

      There are also several free CORS proxies [1] available, they might be considered sketchy, but they’re probably fine for testing.

      [0] https://corsfix.com

      [1] https://gist.github.com/reynaldichernando/eab9c4e31e30677f17...

rajnathani 4 hours ago

> Browser is important

> I want to emphasise how important browsers are in this whole security scheme.

This is an important point that many developers may not think about: CORS is just a security "feature" within the browser. If one made an HTTP(S) request directly to the server, one could simply spoof the origin. Obviously, there is still reason to have CORS: to protect users from malicious scripts on trusted websites, or from simply malicious websites.

matsemann 2 days ago

One thing to note: if you think you're safe from having to use CSRF protection because you only serve endpoints that you yourself consume by posting JSON, some libraries (like Django REST Framework) can also opaquely handle HTML forms if the content type header is set, accidentally opening you up to someone putting a form on their site that posts to yours on users' behalf.

  • chuckadams 2 days ago

    First thing I do on a Laravel site is add a middleware to all my API routes that allows only blessed content types (usually application/json and application/x-ndjson). With Symfony it's a couple lines of yaml.
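
    The same idea sketched as Express-style TypeScript (hypothetical; the Laravel and Symfony versions look different):

      import type { Request, Response, NextFunction } from "express";

      const BLESSED = new Set(["application/json", "application/x-ndjson"]);

      function requireBlessedContentType(req: Request, res: Response, next: NextFunction) {
        const type = (req.headers["content-type"] ?? "").split(";")[0].trim();
        if (req.method === "GET" || req.method === "HEAD" || BLESSED.has(type)) {
          return next();
        }
        res.status(415).send("Unsupported Media Type"); // HTML forms can't send these types
      }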

yonran 2 days ago

To respond to a question in the blog post:

>> The motivation is that the <form> element from HTML 4.0 (which predates cross-site fetch() and XMLHttpRequest) can submit simple requests to any origin,…

> Question to readers: How is that in line with the SameSite initiative?

I actually added that little paragraph to the MDN CORS article in 2022 (https://github.com/mdn/content/pull/20922) to clarify where the term “simple request” from CORS came from, since previously the article only said that it is not mentioned in the fetch spec. You’re right that the paragraph did not mention the 2019 CSRF prevention in browsers that support or default to SameSite=Lax (https://www.ietf.org/archive/id/draft-ietf-httpbis-rfc6265bi...), so cross-site forms with method=POST will not have cookies anymore unless the server created the cookie with SameSite=None.

It is quite confusing that SameSite was added seemingly independently of CORS preflight. I wonder why browser makers didn't just make all cross-origin POST requests require a preflight request, instead of making the same-site flag a field of each cookie.

IgorPartola 2 days ago

What I never quite grasped despite working with HTTP for decades now: how come, even before CORS was a thing, you could send a request to any arbitrary endpoint that isn't the page origin, just not see the response? Was this an accidental thing that made it into the spec? Was this done on purpose in anticipation of XSS-I-mean-mashups-I-mean-web-apps? Or was it just what the dominant browser did, and others followed suit?

  • Muromec 2 days ago

    That makes perfect sense in the early model of internet where everything was just links and documents. You can make an HTML form with action attribute pointing to a different domain. That's a feature, not a bug and isn't a security vulnerability in itself. Common use for this is to make "search this site in google" widgets.

    Then you can make the form send POST requests by changing the method. Nothing wrong with this either -- the browser will navigate there and serve the page to the user, not to the origin server.

    What makes it problematic is the combination of cookies from the destination domain and programmatic input or hidden fields from the origin domain. But the only problem it can cause is the side effects that the POST request causes on the back end, as it again doesn't let the origin page read the result (i.e. content doesn't cross the domain boundary).

    Now, in the world of JS applications that make requests on behalf of the user without any input, and backend servers acting on POST requests as user input, the previously ignored side effects are the main use, and a much bigger problem.

    • smagin 2 days ago

      "search this site in google" shouldn't even be a POST request, but yeah, when we'll have better defaults for cookies it should work nicer. And if you are a web developer, you should check your session cookie attributes and explicitly set them to SameSite=Lax HttpOnly unless your frameworks does that already and unless you know what you're doing

      • Muromec 2 days ago

        It was a GET request, but the point is -- you can make a request from the browser to a different domain by making a form. With JS and the DOM you can make a hidden iframe and make the request without the user initiating or even noticing it, but in both cases you don't get to read the result.

  • fweimer 2 days ago

    I think it once was a common design pattern to have static HTML with a form that was submitted to a different server on a different domain, or at least a different protocol. For example, login forms served over HTTP were common, but the actual POST request was sent over HTTPS (which at least hid the username/password from passive observers). When Javascript added the capability to perform client-side form validation, it inherited this cross-domain POST capability.

    I don't know why <script> has the ability to perform cross-domain reads (the exception to the not-able-to-see-the-response rule). I doubt anyone had CDNs with popular Javascript on their minds when this was set in stone.

    • Muromec 2 days ago

      >I don't know why <script> has the ability to perform cross-domain reads

      That's because all scripts loaded on a page operate in the same global namespace of the same JavaScript VM, which has the origin of the page. Since there is no context granularity below the page level in the VM, they have to either share it or not work.

      You can't read the script itself, however; you can only ask the browser to execute it, the same way you can ask it to show an image. It's just that execution can result in a read as a side effect, by calling a callback or setting a well-known global variable
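
      That side-effect read is exactly how JSONP worked, e.g. (a sketch; the callback name and URL are made up):

        // The page can't read the cross-origin response directly, but it can
        // let the script call a function it defined: the server wraps its JSON
        // as handleData({...}) before serving it.
        (window as any).handleData = (data: unknown) => {
          console.log("got cross-origin data via side effect:", data);
        };

        const s = document.createElement("script");
        s.src = "https://other.example/api?callback=handleData";
        document.head.appendChild(s);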

      • fweimer 2 days ago

        What I meant is that from a 1996 perspective, I don't see a good reason not to block cross-domain <script> loads. The risks must have already been obvious at the time (applications serving dynamically generated scripts that can execute out of origin and reveal otherwise inaccessible information). And the nascent web ad business had not yet adopted cross-domain script injection as the delivery method.

        • swatcoder 2 days ago

          The threat model of one site leveraging the user's browser to covertly and maliciously engage with a third-party site was something that emerged and matured gradually, as was the idea that a browser was somehow duty-bound to do something about it.

          Browsers were just software that rendered documents and ran their scripts, and it was taken for granted that anything they did was something the user wanted or would at least hold personal responsibility for. Outside of corporate IT environments, users didn't expect browsers to be nannying them with limitations inserted by anxious vendors and were more interested in seeing new capabilities become available than they were in seeing capabilities narrowed in the name of "safety" or "security".

          In that light, being able to access resources from other domains adds many exciting capabilities to a web session and opens up all kinds of innovative types of documents and web applications.

          It was a long and very gradual shift from that world to the one we're in now, where there's basically a cartel of three browser engines that decide what people can and can't do. That change is mostly for the best on net, when it comes to enabling a web that can offer better assurances around access to high-value personal and commercial data, but it took a while for a consensus to form that this was better than just having a more liberated and capable tool on one's computer.

        • Muromec 2 days ago

          There is no reason for the browser to block them if the page is static and written by hand. If I added this script to my page, then I want to do the funny thing, and if the script misbehaves, I remove it.

          • fweimer 2 days ago

            Are you talking about the risk from inclusion to the including page? The concern about cross-origin requests was in the other direction (to the remote resource, not the including page). That concern applies to <script> inclusion as well, in addition to the possibility of running unwanted or incompatible script code.

            • Muromec 2 days ago

              Yes, I was talking about the risk from the perspective of the including page. From the perspective of the risk to remote it makes even less sense from the 90ies point of view. Data isn't supposed to be in javascript anyway, it should be in XML. It's again on you (the remote) if you expose your secrets in the javascript that is dynamically generated per user.

              With hindsight from this year -- of course you have a point.

              • fweimer 2 days ago

                Ahh, the data-in-XML argument is indeed very convincing from a historic perspective.

                • Muromec 2 days ago

                  XHR is called XMLHttpRequest for a reason, but that was added in the 2000ies I think. SOAP was late 90ies, but I don't think you could call it from the browser. There was no reason to have data in javascript before DOM and XHR, which were late additions. All the stuff was in the page itself, rendered by the server, and you don't get that across the domain boundary.

                  • thaumasiotes a day ago

                    > 2000ies

                    Now this is interesting. I've seen a lot of "40ties", "90ies", etc around, and I'm not sure why people do that. But once you've done it, it's clear how to read the text. Most of the non-numeric suffix is redundant; people mean "forties", not "forty-ties".

                    But "2000ies" has no potential redundancy and no plausible pronunciation. It's spelled as if you're supposed to pronounce it "two thousandies", but there's no such thing as a thousandy.

                    • Muromec an hour ago

                      twothousandies and twenty-zeroes makes sense

                    • alt227 a day ago

                      I enjoyed your analysis.

                      I think people should really just be putting 's on the end, then it works for all:

                      40's, 50's, 2000's.

                    The apostrophe is unnecessary, but IMO without it, it looks like the 's' is a standard unit of measurement:

                      40s, 50s, 2000s

            • edoceo 2 days ago

              The point being, presumably, the page author explicitly chose to include a cross-domain script

              • Muromec 2 days ago

                The script author however didn't. CORS is more about the remote giving consent to exfiltrate the data than it is about preventing injecting the data into it. You can always reject the data coming in

  • PantaloonFlames 2 days ago

    I don’t have a Time Machine or a 1998-era browser but I’m not sure what you described was the case. I think in the before times, a browser could send a request to any arbitrary endpoint that was not the page origin, and it could also see the response. I might be wrong.

    But anyway, ancient history.

  • Macha a day ago

    I think you might have forgotten how immature the field of computer security, especially for network applications, was in the 90s/00s. This was the era of register_globals in PHP, CGI scripts written in C, HTTPS being some fancy luxury for banks and OS login dialogs you could kill with task manager.

  • LegionMammal978 2 days ago

    You still can make many kinds of requests [0] to an arbitrary endpoint that isn't the page origin, without being able to see the response. (Basically, anything that a <link> or a form submission could do.) And you can't include any cookies or other credentials in the request unless they have SameSite=None (except on ancient browsers), and if you do, then you still can't see the response unless the endpoint opts in.

    Really, there's exactly one thing that the mandatory CORS headers protect against: endpoints that authorize the request based on the requester's IP address and nothing else. (The biggest case of this would be local addresses in the requester's network, but they've been planning on adding even more mandatory headers for that [1].) They don't protect against data exfiltration, third-party cookie exfiltration (that's what the SameSite directive is for), or any other such attack vector.

    [0] https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#simpl...

    [1] https://wicg.github.io/private-network-access/

    • IgorPartola 2 days ago

      Yes I know this is still the case today. My question is: how did this come about? It seems to me that in the olden days the idea of cross origin requests wasn’t really needed as you’d be lucky to have your own domain name, let alone make requests to separate services in an era of static HTML pages with no CSS or JavaScript. What exactly was this feature for? Or was it not a feature and just an oversight of the security model that got codified into the spec?

      • hinkley 2 days ago

        The field of web applications didn’t really blow open until we were into the DotCom era. By then Berners-Lee’s concept for the web was already almost ten years old. I think it’s hard for people to conceive today what it was like to have to own a bookshelf of books in order to be a productive programmer on Windows, for instance. Programming paradigms were measured literally in shelf-feet.

        Practically part of the reason Java took off was it was birthed onto the Internet and came with Javadoc. And even then the spec for Java Server Pages was so “late” to the party that I had already worked on my first web framework when the draft came out, for a company that was already on its second templating engine. Which put me in rarified air that I did not appreciate at the time.

        It was the Wild West and not in the gunslinger sense, but in the “one pair of wire cutters could isolate an entire community” sense.

      • LegionMammal978 2 days ago

        Hotlinking <img>s from other domains has been a thing forever, as far as I'm aware, and that's the archetypical example of a cross-origin request. <iframe>s (or rather, <frame>s) are another old example. And it's not like those would've been considered a security issue, since at worst it would eat up the other domain's bandwidth. The current status quo is a restriction on what scripts are allowed to do, compared to those elements.

        • IgorPartola 2 days ago

          By definition those do allow you to see the response so that’s not really what is being discussed.

          • Muromec 2 days ago

            Not really. You as an application generally can't read across origin boundaries, but you can ask the browser to show it to the user.

shermantanktop 2 days ago

Whatever else these things do, one thing they don’t do is support easy diagnostic tracing when a legitimate use case isn’t quite configured properly.

I have stared many times at opaque errors that suggested (to me) one possible cause when the truth ended up being totally different.

actinium226 2 days ago

There's something that continues to confuse me about CSRF protection.

What's to stop an attacker, let's call her Eve, from going to goodsite.com, getting a CSRF token, putting it on badsite.com, and duping Alice into submitting a request to goodsite.com from badsite.com?

  • bastawhiz 2 days ago

    The CSRF token is ideally tied to your session. If it's anonymous, Eve didn't need Alice to visit a page in the first place. If it's tied to a session, Eve can't create a working token for Alice.

    • smagin a day ago

      yup. You didn't imply it but just in case -- this token shouldn't be the same as the session token. Session tokens should be `HttpOnly`, so that we don't even expose them to javascript

      • blincoln a day ago

        The HttpOnly flag isn't really practical in modern web apps where so much logic runs in JS in the browser and makes requests to APIs. It's a leftover from an earlier era of web app architecture.

        If it can be enabled without breaking something, sure, it's a good idea, but unless your app is 2000s-era ASP.NET code or a CGI script, preventing browser-side JS from accessing the session token will probably break something.

        • Macha a day ago

          Right, but if you're doing an SPA, your SPA makes the login call and stores a copy of the session token in local storage, which unlike a cookie isn't automatically sent on any request, never mind cross-origin ones. It doesn't protect against XSS of course, but then that's what CSP is for.

        • bastawhiz a day ago

          It's only necessary to store the login token if your backend is on a different origin than your SPA is served from. It's not especially hard to avoid this.

        • smagin a day ago

          You shouldn't need your session token in JS: you can tell your fetch requests to include cookies, and you can set up CORS to allow that.
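
          Roughly (a sketch; the URLs are made up) -- the page opts in on its side, and the server must opt in explicitly too:

            // Browser side: ask for cookies (including HttpOnly ones) to be attached.
            fetch("https://api.example.com/me", { credentials: "include" });

            // The server must then answer with an exact origin, not a wildcard:
            //   Access-Control-Allow-Origin: https://app.example.com
            //   Access-Control-Allow-Credentials: true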

  • preinheimer 2 days ago

    Traditionally, CSRF tokens had two parts: something in a cookie (or a server-side data store that used a cookie as the ID), and something in a form element on the page.

    So while an attacker could trick your browser into making a request to get the cookie, and trick your browser into submitting arbitrary form data, they couldn't get the CSRF tokens to match.

  • theogravity 2 days ago

    The CSRF token is usually stored in a cookie. I guess one could try stealing the cookie assuming the CSRF token hasn't been consumed.

    But if one's cookie happens to be stolen, it can be assumed the thief already has access to your session in general anyway, making CSRF protection moot.

0xbadcafebee a day ago

Can I get a flow chart please? This is complicated as hell.

Alternatively, can I get an entirely new application platform and set of standards? It would be nice not to carry 35 years of technical baggage into every application I want to make or use. Thanks.

smagin a day ago

I asked that in the post but nobody has answered yet, so let's try here.

Why are CSRF tokens rotated? OWASP says it's somehow more secure but I don't really see why.

ListeningPie a day ago

At the bottom the article links to this discussion, but not your other articles. Did you happen to find the discussion on Hacker News, or did you post it yourself?

If you happened to have found it: has someone systematized linking their articles to their respective Hacker News discussions?

  • smagin a day ago

    This is my blog. I haven't posted all the articles from there to Hacker News. I think some of them are not worth discussing because they're too old or too poorly written, but some are just not posted because I didn't think of it at the time. Thanks for the interest, I will re-read what I wrote and post some.

mjevans 2 days ago

A post I was replying to got deleted, but I'd still like to gripe about the positives and negatives of the current 'web browser + security things' model.

Better in the sense of not being locked into an outdated and possibly protocol-insecure cryptosystem model.

Worse in the sense: Random code from the Internet shouldn't have any chance to touch user credentials. At most it should be able to introspect the status of authentication, list of privileges, and what the end user has told their browser to do with that and the webpage.

If it weren't for, e.g., IE6 and major companies allergic to things not invented there, we'd have stronger security foundations but easier end-user interfaces to manage them. IRL metaphors such as a key ring and use of a key (or figurative representations like a cash/credit card in a digital wallet) could be part of the user interface and provide context for _when_ to offer such tokens.

  • hu3 2 days ago

    > Random code from the Internet shouldn't have any chance to touch user credentials

    One thing that helps with this are HttpOnly cookies.

    "A cookie with the HttpOnly attribute can't be accessed by JavaScript, for example using Document.cookie; it can only be accessed when it reaches the server. Cookies that persist user sessions for example should have the HttpOnly attribute set — it would be really insecure to make them available to JavaScript. This precaution helps mitigate cross-site scripting (XSS) attacks."

    https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies

    • blincoln a day ago

      HttpOnly is mostly a vestigial artifact of the 2000s.

      As soon as web apps began relying on JS in the browser making calls to APIs, that same JS started needing access to the session token, or credentials equivalent to the session token.

      If you maintain an app that can get away with an HttpOnly session token cookie, it's a good idea, but unless it's legacy code, it will probably break something, or the JS is getting creds another way and the cookie having the flag set is giving you a false sense of security.

      • ndriscoll a day ago

        HttpOnly cookies are still attached automatically to JS-initiated requests. It is not a legacy feature. You don't need to get the cookie another way, because you don't need to get it at all in JS. It prevents XSS from stealing the cookie and sending it elsewhere (though a successful XSS could still use the cookie while the user is on the page).

        • hu3 16 hours ago

          Thank you.

          It's impressive how much misinformation there is in our field.

          Some people read one paragraph of documentation and assume HttpOnly cookies are useless for SPAs.

    • mjevans 2 days ago

      That does nothing to address Credentialing and Authorization issues.

      • hu3 2 days ago

        How so? Cookies are the most common way to implement persistent user sessions and by consequence, authn/authz.

        If it can't be accessed by JS, then it at least does something, to say the least.

        Could you expand your reasoning?

  • ndriscoll a day ago

    It does seem insane to me that we have things like OAuth when browsers support (used to support?) client PKI. They just needed a couple pieces of UI to generate a self-signed identity or install e.g. your gmail identity.

webdever 2 days ago

Subdomains are not always the same site. See the "public suffix list". For an example, think of abc.github.io vs def.github.io.

I didn't get the part at the end about trusting browsers. As a website owner you can't rely on browsers, as hackers don't have to use a browser to send requests and read responses.

  • Muromec 2 days ago

    You do rely on browsers to isolate contexts. The problem with CSRF is that data leaks from one privileged context to another (think of reading from the kernel memory of another VM on the same host on AWS). If you don't have the browser, you don't have the user session to abuse in the first place.

    The whole thing boils down to this:

    - the browser has two tabs -- one with an authenticated session to web banking, another with your bad app

    - you as a bad app can ask the browser to make an HTTP request to the bank API, and the browser will not just happily do it, but also attach the cookie from the authenticated session the user has open in the other tab. That's CSRF, and it's not even a bug

    - you however can't, as a bad app, read the response unless the bank API tells the browser you are allowed to, which is what CORS is for. Maybe you have an integration with them or something

    The browser is holding both contexts and is there to enforce what data can cross the domain boundary. No browser, no problem

  • hinkley 2 days ago

    Or abc.wordpress.com

    Also a lot of university departments and divisions in large enough corporations need to be treated like separate entities.

cies a day ago

CORS is flawed. The attacks you'd typically want to circumvent CORS for are so involved that setting up a little bitcoin-rented VPS as a MITM is not a big part of the hassle.

browser ---> MITM VPS ---> real server

The browser does its (pre-flight) check to see that the origin (`malicious.net`) is allowed to make requests. The real server would reject it, but the MITM just rewrites the origin from `malicious.net` to `acceptable.org`, which the real server does accept.

  • smagin a day ago

    MITM means you can control the user's network, doesn't it?

    Another question: does this work with HTTPS?

    And the third one: if this was a thing, some dishonest governments or VPN providers would be doing it already. Would be cool to read about that (genuinely, not implying this never happened)

    • cies 4 hours ago

      > MITM means you can control the user's network, doesn't it?

      Not necessarily. Maybe this is not called MITM, but you have to put something in the middle :)

      > Another question: does this work with HTTPS?

      Sure.

      > And the third one: if this was a thing, some dishonest governments or VPN providers would be doing it already.

      You are confused about what this attack is about. Say I want to embed some widget on my website by which I can receive payments. I have to register my website's domain (technically protocol+domain(+port), aka origin) with the widget's provider. CORS is then used to make sure no one but those with registered origins can embed the widget.

      Only browsers are known to enforce CORS (do the checks AND provide the correct origin when doing the checks). Hence the MITM attack I propose works: the MITM does NOT give the correct origin to the real server.

TheRealPomax 2 days ago

And why do browsers not let users go "I don't care about what this server's headers say, you will do as I say because I clicked the little checkbox that says I know better than you".

(On which note, no CSRF/CORS post is complete without talking about CSP, too)

  • tedunangst 2 days ago

    Because then you will get users whining I didn't know what the checkbox did and you shouldn't have let me check it.

  • syntheticcdo 2 days ago

    You can! Go ahead and launch chrome with the --disable-web-security argument.

    • TheRealPomax 2 hours ago

      That's confusing the statement "I want the option to overrule it" with "I want it disabled". I don't: there's a very good reason for them to always be on, but there has to be a way to tell my browser what I want it to do, rather than what a server owner, who spent all of zero minutes accepting the default helmet options, says my browser should do. In the same way that a browser should let me say which off-site domains, specific scripts, and on-page elements should be blocked.

  • LegionMammal978 2 days ago

    I'd think SameSite/Secure directives on cookies are genuinely important to keep any malicious website from stealing all your credentials. Otherwise, I'd imagine it's the usual "Because those dastardly corporations will tell people to disable it, just because they can't get it to work!!!"

  • yoavm 2 days ago

    I wish the browser would just say "This website is trying to fetch data from example.com, do you agree?"

    The whole CORS thing is so off, and it destroyed the ability to build so many things on the internet. I often think it protects websites more than it protects users. We could have at least allowed making cookie-less requests.

    • blincoln a day ago

      Modern web apps make a mind-boggling number of cross-origin requests, and many of them are to multi-tenant SaaS providers.

      You'd be constantly flooded with permission popups, and attackers would just host part of their code on one or more of the popular domains that everyone had gotten used to clicking "allow" for.

    • bastawhiz 2 days ago

      > This website is trying to fetch data from example.com, do you agree

      I don't know, do I? How am I supposed to know? How am I supposed to explain to my mom when to click yes and when not to? The average person shouldn't ever have to think about this.

      Imagine if any website could ask to access any other website, for an innocent reason, and then scrape whatever account information they wanted? "Do you want to let this website access google.com?" Great, now your whole digital life belongs to that page. It's a privacy nightmare.

      > it destroyed to ability to build so many things on the internet

      It only destroyed the ability for any website to access another website as the current user. In other words, what it destroyed is the ability for a web page to impersonate its users.

      • yoavm a day ago

        Reading cookie-less responses is also forbidden. I couldn't read your Google account information, just make an anonymous Google search through your browser. I fail to see what the big deal is.

        • bastawhiz a day ago

          Or make 10,000,000 Google searches through my browser and get my IP and account banned. Or process a request containing CSAM that has me reported to the government. Or using my residential internet connection to operate a proxy service, causing my IP's reputation to drop and now I'm stuck solving CAPTCHAs all day. Or, you make requests to lots of sites and see which ones have no responses: surprise! Now you know which websites I have cookies on. Why in the world would I want to let someone else use my device to make requests like that? Use your own device, not mine.

          Fundamentally this all boils down to you, the person building the site, being cheap. You don't want to pay the handful of dollars to make your own HTTP requests.

          • yoavm a day ago

            Making 10m requests to Google, requesting a CSAM image, etc. are all currently possible with an <img> tag. This does not prevent them in any way. Allowing GET requests would make for a very lame proxy service, especially considering that the proxy goes away when you close the tab, and that I mentioned the requests should only be allowed cookie-less (credentials: omit).

            I think it's much more about open web and letting the user decide than about being cheap.

        • smagin a day ago

          Why would you need that?

          Also, one thing I can speculate is that phishing would become even easier if such things were allowed

          • yoavm a day ago

            Because defaulting to NOT allowing cross-origin requests made it so that being lazy creates a more closed web. And people are lazy.

        • Avamander a day ago

          Famous last words

          • yoavm a day ago

            Do elaborate please?

  • siva7 2 days ago

    Almost as pointless as "Yes, accept all cookies" for our European friends.