colmmacc 2 days ago

A significant wrinkle in how NAT works is IP fragmentation. UDP datagrams can be larger than the link MTU. When that happens the datagram is split across multiple IP packets, but only the first fragment has a UDP header in it. The NAT device needs to correlate these fragments by looking at the IP fragment ID, and then rewrite the IP addresses in the headers.

That alone implies a second kind of state to maintain, but it gets worse. Fragments can arrive out of order. If the second or later fragments arrive before the first, the NAT device has to buffer them until it gets the fragment with the UDP header in it.
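
Roughly, that second kind of state looks like the sketch below (Python-ish and purely illustrative; the packet fields and the translate_and_forward helper are made-up names, and a real NAT also needs timeouts and memory limits):

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class Fragment:  # hypothetical view of one IP fragment
        src: str
        dst: str
        proto: int
        ip_id: int                                   # IP identification field
        frag_offset: int                             # 0 for the first fragment
        udp_ports: Optional[Tuple[int, int]] = None  # only the first fragment has these

    # Correlation state keyed by (src, dst, protocol, IP ID), kept on top of
    # the NAT's ordinary translation table.
    frag_table = {}

    def handle_fragment(pkt: Fragment, translate_and_forward):
        key = (pkt.src, pkt.dst, pkt.proto, pkt.ip_id)
        entry = frag_table.setdefault(key, {"ports": None, "buffered": []})
        if pkt.frag_offset == 0:
            # First fragment: the only one with a UDP header. Record the ports,
            # then translate any fragments that arrived ahead of it.
            entry["ports"] = pkt.udp_ports
            for earlier in entry["buffered"]:
                translate_and_forward(earlier, entry["ports"])
            entry["buffered"].clear()
            translate_and_forward(pkt, entry["ports"])
        elif entry["ports"] is not None:
            translate_and_forward(pkt, entry["ports"])
        else:
            # Out of order: no UDP header seen yet, so the fragment has to be buffered.
            entry["buffered"].append(pkt)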

Out-of-order fragments might seem unlikely, but they're surprisingly common. Modern protocols like DNSSEC routinely produce datagrams big enough to need fragmentation, and in a large network with many paths, fragments can end up taking different paths from each other.

Ordinarily when a network is using multiple links to load balance traffic, the routers will use flow steering. The routers look at the UDP or TCP header, make a hash of the connection/flow tuple, and then use that hash to pick a link to use. That way, all of the packets from the same connection or flow will be steered down the same link.

IP fragmentation breaks this too. The second and subsequent fragments don't have a UDP header in them, so they can't be flow steered statelessly on the 5-tuple. Smarter routers notice from the IP header's fragment fields that a packet is a fragment and fall back to a 3-tuple hash (source IP, dest IP, protocol), so the packets still flow consistently. But many devices get this wrong - some just assume there will be a UDP header and hash whatever bytes happen to be there.
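
For illustration, the two hashing strategies look roughly like this (Python sketch; the Pkt fields are made-up names, and real routers do this in hardware):

    import hashlib
    from dataclasses import dataclass

    @dataclass
    class Pkt:  # hypothetical, just enough fields for the example
        src: str
        dst: str
        proto: int
        src_port: int = 0
        dst_port: int = 0
        is_fragment: bool = False  # true for ANY fragment, first or later

    def pick_link(pkt: Pkt, links):
        if pkt.is_fragment:
            # Fall back to the 3-tuple so every fragment of a datagram,
            # including the first one, hashes to the same link.
            key = (pkt.src, pkt.dst, pkt.proto)
        else:
            # Ordinary packets get the full 5-tuple flow hash.
            key = (pkt.src, pkt.dst, pkt.proto, pkt.src_port, pkt.dst_port)
        digest = hashlib.sha256(repr(key).encode()).digest()
        return links[int.from_bytes(digest[:4], "big") % len(links)]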

On devices that get this wrong, the fragments end up taking different paths, and if one link is more congested or has higher latency than another, they'll ultimately arrive out of order.

This single wrinkle is probably responsible for half the complexity in a robust NAT implementation. Imagine having to solve all of this in a highly-available and transactionally live-replicated implementation like managed NAT gateways.

Worst of all, this was all avoidable. If UDP datagrams were simply fragmented at the UDP layer, and every packet included a UDP header, none of this would be necessary. It's probably the worst mistake in TCP/IP. But obviously, overall it was a very successful design that gave us the Internet.

  • Bluecobra a day ago

    Not sure if I agree with it being the worst mistake. The beauty of UDP is its simplicity and you get the absolute minimum. (And that’s the way I like it!) I’ve worked on low latency financial networks that route 40+ Gb of UDP multicast daily and error free. Nobody is fragmenting UDP packets, and most packet sizes are less than 1000 bytes. All financial exchanges have their own proprietary format, but all use sequence numbers in the datagram to keep track of packets.

    • tptacek a day ago

      A UDP protocol that deliberately keeps datagram sizes below 1000 bytes to avoid fragmentation is essentially handling fragmentation itself, as Colm proposes UDP should have done to begin with.

  • zokier a day ago

    IP fragmentation does not really have anything to do with UDP, it can happen regardless of the inner protocol.

    > Worst of all, this was all avoidable.

    It is not that simple. To avoid fragmentation you need robust path MTU discovery, which is a whole other can of worms. Especially when packets can take multiple paths with different MTUs.

  • EvanAnderson a day ago

    > It's probably the worst mistake in TCP/IP.

    I vote for TCP/IP lacking a session layer as being the worst mistake. We wouldn't have IP mobility issues if there'd been an explicit session layer to decouple IP from the upper layer protocols.

    • mindslight a day ago

      That's like the Nethead vs Bellhead argument though, and it's easy to say that with the benefit of several decades of adoption and development.

      • EvanAnderson a day ago

        I don't necessarily think a session layer protocol is automatically "Bell-headed". It's a natural place to plug in per-byte billing and that ilk, for sure.

        I don't know the exact timeline of the ITU protocol suite and/or DECnet (both of which have a concept of a session layer protocol) relative to IPv4. I think they were somewhat contemporaneous. Certainly, the idea of a session layer isn't something that came decades after IPv4.

        Even just a host identifier, in lieu of the IP address of an interface, being used in the TCP tuple would have been so much better than what we have and probably would have been enough of a "session layer". It would be so amazing to have TCP connections that "just work" when clients or servers hop onto different IP networks, use different interfaces, etc.

        Edit: It has been mentioned that Vint Cerf regretted the decision to bind the IP into the TCP tuple, too. I don't have an exact quote but I know I've heard him mention it in a talk. Ref: https://argp.github.io/2006/03/05/vint-cerfs-talk/

        • mindslight a day ago

          > Even just a host identifier, in lieu of the IP address of an interface, being used in the TCP tuple would have been so much better

          What are you imagining as the implementation? Is it just in TCP, and IP (/ the network) is unchanged? I can see the benefit of that, but then there still needs to be some mechanism to change the binding of host->IP. And if it's not part of the core network, then it's not straightforward.

          There are also other more complex problems not solved by TCP (eg security). I'd rather have a host ID be a pubkey, than some small-namespace ID with a pubkey required on top of that.

          It feels like the real problem is the proliferation of different incompatible solutions to any of these problems, which was going to happen even if there was one less problem that needed to be solved.

          Another way of looking at it is that TCP got so entrenched because of NAT, and having a session ID within IP instead of (implicitly) within TCP/UDP might have allowed more flexibility with creating new protocols directly on top of IP. But 2+2 more bytes of addressing would have gone a long way too!

          • EvanAnderson a day ago

            > What are you imagining as the implementation?

            I haven't thought about it hard enough to be doing anything besides spouting bullshit. It's one of those lazy "I don't like what we've got but I can't say what we should have" kind of complaints.

            The way SCTP handles multi-homing and failover with the verification tag is what I guess I'm thinking of. I'm a little enamored with SCTP, admittedly, and I'd rather we were using it than TCP.

            If I were going back in time, without the 20+ years of real-world experience that went into SCTP, I'd propose something simple like having the initiator and receiver each put a (32-bit?) identifier into a couple of session tracking header fields (initiator on SYN, receiver on SYN/ACK). When an endpoint roamed to a new IP address they'd send a zero-byte ACK from the new IP address to the opposing end. The opposing end would provisionally update their IP binding and ACK back to the new IP address. They would continue to send to both IP addresses until they received an ACK from the new IP address, at which point the old IP address would be discarded.
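
            Something like the toy sketch below for one end's state, maybe (all names made up, and hand-waving past authentication and timeouts):

                from dataclasses import dataclass

                @dataclass
                class Session:                  # one end's view of a connection
                    session_id: int             # the 32-bit identifier exchanged at setup
                    peer_addr: str              # confirmed peer address
                    provisional_addr: str = ""  # candidate address while a roam is in flight

                    def on_packet(self, src_addr: str, session_id: int, is_ack: bool):
                        if session_id != self.session_id:
                            return              # not our session; real anti-spoofing needed here
                        if src_addr not in (self.peer_addr, self.provisional_addr):
                            # Zero-byte ACK from a new address: adopt it provisionally and
                            # keep sending to both addresses until it's confirmed.
                            self.provisional_addr = src_addr
                        elif is_ack and src_addr == self.provisional_addr:
                            # Confirmation from the new address: promote it, drop the old one.
                            self.peer_addr, self.provisional_addr = src_addr, ""

                    def destinations(self):
                        return [a for a in (self.peer_addr, self.provisional_addr) if a]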

            I can already see, doing "improvisational piano" state machine design here, that there are issues with this design. Like I said, spouting bullshit... >smile<

            • mindslight 8 hours ago

              Ah, I got you. I hear session layer and think something a bit more ingrained in the core network, but I can see how that would work. Relying on a secret 32 bit nonce chafes me a bit, but there is a similar problem with guessing sequence numbers (modulo source IP spoofing, which has been greatly clamped down on). And there could always have been extensions adding cryptographic security, timing out the roaming functionality if lots of wrong packets trying to guess session identifiers were received, etc.

              Developing on that last bit I threw out, especially since you're lamenting the non-adoption of SCTP. With the retrospective from NAT, it feels like it would have been good to factor out the "port" part of TCP/UDP and put it in the IP header instead ("flow ID" or something). Then define (saddr, daddr, protocol#, flowID) as a "flow tuple" for middle boxes to operate on. TCP/UDP/SCTP could then define and present those bits as port numbers.
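
              To make that concrete, the middlebox's view of such a flow tuple could be as small as this (toy Python, names made up):

                  from collections import namedtuple

                  # Hypothetical layout: the flow identifier lives in the IP header, so every
                  # packet, fragments included, carries the whole tuple a middlebox keys on.
                  FlowTuple = namedtuple("FlowTuple", ["saddr", "daddr", "protocol", "flow_id"])

                  nat_table = {}  # inside FlowTuple -> outside FlowTuple

                  def translate(inside: FlowTuple, public_addr: str, alloc_flow_id) -> FlowTuple:
                      # One lookup on fields present in every packet: no parsing of
                      # TCP/UDP/SCTP headers and no buffering of headerless fragments.
                      if inside not in nat_table:
                          nat_table[inside] = inside._replace(saddr=public_addr,
                                                              flow_id=alloc_flow_id())
                      return nat_table[inside]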

              This could even be size-negative if you threw out that whole fragmentation thing in favor of completely punting it to a higher layer instead.

              (I realize this goes against the end-to-end principle and strict layer separation, but the fact is that despite the goal, middle boxes are mucking with packets, and when that cannot be prevented, facilitating it would be better for maintaining flexibility.)

  • zokier a day ago

    > It's probably the worst mistake in TCP/IP.

    If you think fragmentation was a mistake, then what alternative do you think would have been better while also feasible at the time IPv4 was specified? IPv6 notably traded fragmentation for path MTU discovery, but I don't think requiring PMTUD would have been a realistic option in 1981.

  • commandersaki a day ago

    > If UDP datagrams were simply fragmented at the UDP layer

    Agreed, but then it wouldn't be a datagram service anymore.

  • viveknathani_ a day ago

    hi! thanks for explaining this bit in detail. i agree, fragmentation should be handled in the transport layer!

gregw2 2 days ago

Nice writeup on the different type of NATs. I learned something, thank you!

One piece of feedback: I would use a different word ("wrangling"?) rather than "mangling" in your title. Or mention IPv6.

The title's use of "mangling" alone triggered flashbacks of tracking down TCP checksum corruption in low-cost home routers, or bugs in OpenBSD networking stacks, back when I worked on web conferencing software. I expected that kind of mangling commiseration when clicking your link, but your use of the term was more for an article describing IPv4 NAT and arguing "what IPv4 NAT does is hacky mangling, let's all use IPv6". And while making that argument (which is wistfully fair), it doesn't really acknowledge the benefit of NAT in reducing the attack surface from unsolicited inbound packets, or explain why that isn't relevant if you do proper firewalling with IPv6 instead. And when would IPv6 NPT (network /prefix/ translation) be desired? But I can see that starts to go beyond the scope of your intended argument/perspective, perhaps...

  • akerl_ 2 days ago

    Mangle is the technical term used by the kernel for those parts of the process.

  • jeroenhd a day ago

    I think mentioning that IPv6 makes NAT unnecessary for most use cases is more than enough.

    Of course, NAT still exists in IPv6. It probably shouldn't, but tools like Docker will assign a full /64 to your local network even on systems like VPS servers where you only have a /112 or smaller available to you. Plus, NPT is a type of NAT that just happens to switch only part of the address around; you still need to mangle checksums and such.

    Most people could probably get away with Docker using their local GUA for addressing and proxying NDP directly (what's the chance your developers are actually using 2^64 addresses?), but because of the way Docker interacts with nftables and the way most Linux firewalls work, NAT is probably the easier option to keep safe.

  • viveknathani_ a day ago

    hi, thanks! like somebody else mentioned, it is the term used in the linux kernel itself. although i do see your point - NAT does help in reducing the attack surface.

viveknathani_ 2 days ago

Wrote something about computer networking. Felt like posting it here. Happy to hear your thoughts, HN!

jekwoooooe a day ago

I remember back in the day I had to help a hospital set up some crazy double nat Cisco vpn to another hospital. Old school physical appliance and everything. It was such a pain

  • esseph a day ago

    "Old school physical appliance"

    Lololol

    It's so funny to me how much the past 10 years have absolutely decimated on-prem skills in some areas.

    I don't know what to tell you folks other than Real Locations doing Physical Things still exist, haven't gone away, and there are actually more of them now than there were.

    Given the current state of cyber attacks, all eggs in one basket is probably a very bad thing. For instance, CISA has put out many notices that they consider MSPs a massive security liability. Cloud services are also a weak point.

    Digital sovereignty anyone???

    • jekwoooooe a day ago

      On prem has become commoditized though. I would bet on aws having stronger security overall than someone running a bunch of physical appliances in their own rack.

      • esseph a day ago

        AWS doesn't make batch chemicals, they don't transport fuel or nuclear weapons, they don't control water plants or the electrical grid, etc.

        Those things can't be expected to have internet access, and shouldn't.

        There are tons of billion dollar companies with multiple datacenters or presences in multiple datacenters because of this.

        There is more physical hardware right now deployed by companies of all shapes and sizes than there ever has been in history.

        CISA PPD-21 Critical Infrastructure Sectors:

        * Chemical Sector

        * Commercial Facilities Sector

        * Communications Sector

        * Critical Manufacturing Sector

        * Dams Sector

        * Defense Industrial Base Sector

        * Emergency Services Sector

        * Energy Sector

        * Financial Services Sector

        * Food and Agriculture Sector

        * Government Facilities Sector

        * Healthcare and Public Health Sector

        * Information Technology Sector

        * Nuclear Reactors, Materials, and Waste Sector

        * Transportation Systems Sector

        * Water and Wastewater Systems Sector

        These things need to operate without Internet, full stop. Most of these companies have been around for decades or even centuries. They're not interested in a lot of web/SaaS and can barely even spell SaaS. They're also probably likely to outlive the next few dozen frameworks or language fashions.

  • jofla_net a day ago

    A VPN concentrator, I'd wager.

jxjnskkzxxhx a day ago

OT does anyone else find it off topic to see the word "grokking"? Does that mean understanding? Do we need a new word for this extremely basic concept?

  • GuinansEyebrows a day ago

    "Grok (/ˈɡrɒk/) is a neologism coined by the American writer Robert A. Heinlein for his 1961 science fiction novel Stranger in a Strange Land. While the Oxford English Dictionary summarizes the meaning of grok as "to understand intuitively or by empathy, to establish rapport with" and "to empathize or communicate sympathetically (with); also, to experience enjoyment", Heinlein's concept is far more nuanced, with critic Istvan Csicsery-Ronay Jr. observing that "the book's major theme can be seen as an extended definition of the term." The concept of grok garnered significant critical scrutiny in the years after the book's initial publication. The term and aspects of the underlying concept have become part of communities such as computer science. "

    https://en.wikipedia.org/wiki/Grok

  • theideaofcoffee a day ago

    It's a pretty common, well-accepted use in the hacker lexicon. See esr's Jargon File [0]; by some accounts [1][2], it has been used in the sense of 'understanding' for forty-ish years now at this point.

    [0] http://www.catb.org/jargon/html/G/grok.html

    [1] https://books.google.com/books?id=uS4EAAAAMBAJ&pg=PA32#v=one...

    [2] https://en.wikipedia.org/wiki/Grok#In_computer_programmer_cu...

  • throawayonthe a day ago

    the term is 60+ years old

    https://en.wikipedia.org/wiki/Grok

    • jxjnskkzxxhx a day ago

      What's your point? It's 60 years old therefore can't possibly be stupid?

      Or perhaps you have no point and are just nitpicking that I called it new? Compared to the word "to understand" it's new, it's pretty obvious that my use of the word new had a context attached.

      • wredcoll a day ago

        You just asked us if we need new slang words. The answer is so self-evidently obviously yes that no one can actually understand why you bothered.

nodesocket a day ago

I recently created a NAT instance AMI (using Packer) for use on AWS, based on Debian 12. The official AWS NAT instance AMI is horrendously outdated and based on the end-of-life Amazon Linux v1. At any rate, I was surprised to find it's incredibly easy to do using iptables. It's essentially just the following four iptables rules.

    # Rewrite the source address of anything leaving via ens5 to this instance's IP
    sudo iptables -t nat -A POSTROUTING -o ens5 -j MASQUERADE
    # Start from an empty FORWARD chain
    sudo iptables -F FORWARD
    # Allow return traffic from the internet side for established connections
    sudo iptables -A FORWARD -i ens5 -m state --state RELATED,ESTABLISHED -j ACCEPT
    # Allow anything from the private side out via ens5
    sudo iptables -A FORWARD -o ens5 -j ACCEPT

    # Persist the rules so they survive a reboot (read by iptables-persistent)
    sudo iptables-save | sudo tee /etc/iptables/rules.v4 > /dev/null
Lastly, a small change in sysctl to enable IPv4 forwarding:

    cat <<'EOF' | sudo tee /etc/sysctl.d/99-ip-forwarding.conf > /dev/null
    net.ipv4.ip_forward=1
    EOF

    sudo sysctl --system