protocolture 2 days ago

One of my past employers had a crazy simple networking philosophy. It killed me. He once had his provider shut down the port facing his core switch, due to STP BPDUs being transmitted. To remedy this, instead of disabling the feature on the switch, he replaced it with an unmanaged switch that doesn't speak STP.

  • kc3 2 days ago

    It has been a while, but I have seen some switches that don't "speak" STP but still forward the BPDU messages (so other switches receive them and still detect loops through the "non-STP switch") and other switches that drop the BPDUs (so other switches/bridges cannot see them, possibly leading to a Layer 2 forwarding loop through the "non-STP switch").

    • protocolture 2 days ago

      Yeah I had a TP Link in my homelab that I broke in half that did stuff like this.

  • api 2 days ago

    That's not great, but it's better than the opposite. I've been in networking for ages and have observed that most networking people will, as a rule, make networks as complicated as possible.

    Why have one layer of NAT when you can have four or five? Why not invent a bespoke addressing scheme? Why not cargo cult the documentation and/or scripts and config files from Stack Overflow or ChatGPT?

    Under-engineering is an easier problem to solve than over-engineering.

    • protocolture 2 days ago

      I have one customer, who runs a large wireless network, who has obviously taken design lessons from US WISP operators on Facebook managing trailer parks, and implemented a really low-rent, high-complexity customer hand-off circuit from that paradigm for one of his most valuable business customers. It worked, but was way more complex than he needed.

    • mleonhard 2 days ago

      I took an "Architecting on AWS" class and half of the content was how to replicate complicated physical networking architectures on AWS's software-defined network: layers of VPCs, VPC peering, gateways, NATs, and impossible-to-debug firewall rules. AWS knows their customers tho. Without this, a lot of network engineers would block migrations from on-prem to AWS.

      • protocolture 2 days ago

        Ages ago I deployed a Sophos virtual appliance in AWS, so I could centrally enforce some basic firewall rules in a way that my management could understand. There was only one server behind it; the same thing could have been achieved with the standard built-in security rules. I think about it often.

        I do find Azure's implementation of this stuff pretty baffling. It's just networking concepts being digested by software engineers, and then regurgitated into a hierarchy that makes sense to them. Not impenetrable, just weird.

      • p_l a day ago

          The main source of issues leading to overcomplex networking that I've ever seen was the "every VPC gets a 10./8" approach replicated everywhere, so suddenly you have a hard time trying to interconnect the networks later.

        • api a day ago

          IPv6 solves this but people are still afraid of it for stupid reasons.

          It's not hard, but it is a little bit different and there is a small learning curve to deploying it in non-trivial environments.

          • p_l a day ago

            Another issue (also driving some of the opposition to v6) is the pervasive use of numerical IPs everywhere instead of setting up DNS proper.

            • api a day ago

              I think this part is somewhat legitimate. Every network engineer knows "it's always DNS," to the point that there are jokes about it. DNS is a brittle and inflexible protocol that works well when it's working, but unfortunately network engineers are the ones who get called when it's not.

              A superior alternative to DNS would help a lot, but getting adoption for something at that level of the stack would be very hard.

              • p_l a day ago

                I find that a lot of "it's always DNS" comes down to "I don't know routing beyond the default gateway" and "I never learnt how to run DNS". Might be a tad elitist of me, I guess, but a solid DHCP, routing, and DNS setup makes for a way more reliable network than anything else.

                DNS just tends to be the part that is visible to a random desktop user when things fail.

      • kjs3 a day ago

        I had a very interesting conversation with an AWS guy about how hard they tried to make sure things like Wireshark worked the same inside AWS, because they had so much pushback from network engineers who expected their jobs to be exactly the same inside AWS as on-prem.

    • bombcar 2 days ago

      Seriously, the number of setups I've seen where "they gave us 255 VLANs, use them all!" is amazing.

    • kjs3 a day ago

      most networking people will, as a rule, make networks as complicated as possible

      Yeah, we are kinda like that. So many toys...why can't I use them all.

      Seriously, tho...the worst is when you go in and can tell immediately "oh, the guy running this is trying to get his CCIE cert", because there's all sorts of weirdness you'd never/rarely do in a prod network, but it's on the cert test so let's try it out. YOLO!

  • ThePowerOfFuet 2 days ago

    He had no business managing a network.

    • protocolture 2 days ago

      He had a particular niche with few requirements for complex networking, and a business plan that was so brain-dead simple it could be achieved easily.

      But yeah, really nice guy but unable to expand beyond that paradigm.

      I once helped him deal with a fibre ring, where there were 5 sites in the ring, but only 2 internet services. So sites 3, 4 and 5 had to communicate via sites 1 and 2.

      His team had put a Sophos firewall, with NAT and no routing, on every fibre connection and called me when it didn't work.

egberts1 a day ago

Internet Exchange (IX) for an 8th Grader

- Imagine a big hangout spot in the middle of town where all the school buses from different schools meet.

- Each bus is an Internet company (ISP), carrying kids (data) who need to visit kids from other schools.

- Without the hangout spot, the buses would have to drive all the way across the city through downtown traffic (expensive, slow routes through bigger Internet companies).

- But at the hangout spot (the IX), the buses can just pull up next to each other and drop kids off directly -- quick, cheap, and easy.

- The IX itself doesn't tell kids where to go; it's just the place with lots of parking spots and connections.

- The ISPs (buses) still decide who gets on and off by making agreements with each other. That’s like them saying, “Okay, I’ll let your kids ride with me if you let mine ride with you.”

So, an IX is basically a meeting place where Internet companies connect their networks directly, instead of going the long way around through someone else.

piggg 2 days ago

I remember in the 2000s a large-ish telco network in the US was running OSPF on an IX. A few of us on IRC played the "what if?" game, and one of us brought up the adjacency, and it worked.

The same network also had all their network links in MRTG, public with no auth: if you knew the hostname/URL you could see it all (their staff would sometimes drop it in NOC communication when linking a graph, and you could go there to poke around).

sevensor 2 days ago

I’m still a bit unclear about how an IX is situated relative to the internet and to end users. Per the article, it’s not meant to have desktops or print queues plugged into it, but it’s also a LAN. What sort of computer is meant to participate in an IX?

  • pumplekin 2 days ago

    The idea of an IX, or IX peering LAN, is simple in concept. It is a LAN (a flat, layer-2 network) to which multiple ISPs can plug in routers.

    Like your home LAN might have 192.168.0.1 = router, 192.168.0.2 = laptop, 192.168.0.3 = phone etc, a peering LAN will have things like 195.66.224.21 = HurricaneElectric, 195.66.224.22 = NTLI, 195.66.224.31 = Akamai, 195.66.224.48 = Arelion etc ...

    So instead of all these ISPs that want to exchange traffic with each other having to assign ports and run cables in a full mesh (which would quickly get out of control), everyone connects to the "big switch in the middle" with that peering LAN on it, and they use that.

    Back in the day, that might have been an actual single big switch, or a stack of switches. Now IXP infrastructures are much more complex, but the presentation to the end user is usually still a cable (or bundle of cables) that goes into something that looks to them like a "big switch".

    There is a LOT more to know about this space (peering vs. transit, PNIs, L3 internet exchanges, what Google are doing by withdrawing from IXPs), but I wanted to write a comment that didn't turn into an essay.
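The "full mesh gets out of control" point above is easy to quantify. A minimal sketch (my own illustration, not from the comment): with n ISPs, pairwise cross-connects grow quadratically, while a shared peering LAN needs only one port per ISP.

```python
# Compare cabling requirements: full mesh of ISP routers vs. one shared
# peering LAN ("big switch in the middle").

def full_mesh_links(n: int) -> int:
    """Cross-connects needed if every ISP pair runs a direct cable."""
    return n * (n - 1) // 2

def ix_links(n: int) -> int:
    """Ports needed when everyone plugs into a shared switch fabric."""
    return n  # one cable per ISP

for n in (5, 20, 100):
    print(f"{n} ISPs: full mesh = {full_mesh_links(n)}, IX = {ix_links(n)}")
# 100 ISPs: 4950 cross-connects in a full mesh vs. 100 switch ports
```

This is the whole economic argument for an IX in two functions: O(n^2) cabling collapses to O(n).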

  • yabones 2 days ago

    An IX is the internet, or at least a small part of it. It's sort of a network where each device shares their "routes" with the others, and then propagates those routes back through their own network. They're exchanging their respective pieces of the Internet with each other.

    The devices themselves are just routers, though much larger and more complex than what you'd see in a house or office. Instead of one route, "put all the traffic through eth0", they'll have hundreds, thousands, or millions of routes depending on their location relative to the "rest of the internet".

  • q3k 2 days ago

    It's a big peering LAN (not dissimilar to a LAN party, but, you know, with equipment costing more than a nice car), scaling from a switch in a rack somewhere (e.g. FCIX [1]) up to hundreds of switches in dozens of locations not even in the same city anymore (e.g. DECIX, AMSIX).

    You connect to the IX either over a cross connect cable from some equipment nearby (in the same DC or same building) or across town via some leased line / dark fiber / lambda / etc. But usually what's at the other end of a connection to an IX is your edge router, on which you will then run BGP to all the other folks on the IX.

    However, it's not always this simple - for example, you might have another switch-like device in between the router and the IX, to maintain some level of flexibility for your own services, or because you're actually being bundled together alongside multiple customers into IX access by some provider. You might actually go into a whole MPLS backbone first, because that's how your provider is selling you transport to the IX. Or you might've set up some peering LAN bridging on your router to set up some hot standby and then plugged it into some switch for convenience to run it across the office LAN to your desk and then...

    What ends up happening is that with this many complex network devices along the way, and with how network equipment is generally provisioned (you SSH into it, make some mutable changes, then remember to update the org docs), mistakes happen.

    In a critical case of misconfiguration (also stale configuration and VLAN identifier reuse), it's not unimaginable for an IX peering LAN to get accidentally bridged into some VLAN that actually reaches an old Windows laptop that was at some point plugged into another VLAN for troubleshooting. This is especially likely for customers that co-locate their office equipment, AD servers, web servers and edge router in the same building across the same infrastructure.

    [1] - FCIX is also a good source for what this sort of stuff actually looks like IRL, instead of the bullshit-marketing "rack of neat racks" renders on providers' websites: https://pbs.twimg.com/media/FIsd-JsVgAQLUh9.jpg?name=orig https://pbs.twimg.com/media/FP72xaiWQAsyPUM.jpg?name=orig

  • mrngm 2 days ago

    Ideally, participants in an internet exchange purely exchange routing information, usually using the Border Gateway Protocol (BGP). This implies that devices connected to an internet exchange are routers. These devices receive routes for other autonomous systems (AS), and (selectively) publish their routes to other parties on the internet exchange.

    (Internet exchanges typically offer a route server, such that every participant of the IX can easily publish routes for other participants, and simultaneously receive published routes of all other participants.)

    The _effect_ of exchanging routing information is that IX-local participants know where to send traffic destined for certain IP ranges from other participants.

    An autonomous system internally "knows" where each of its routers is located, and all these routers are typically connected with each other. When several routers of an AS are connected to different IXes, they can make informed decisions about where to send traffic destined for other ASes. It could be that AS 64496 is only present at IX-A, while AS 64510 is only present at IX-B. Suppose AS 64499 is connected to both IX-A and IX-B; then AS 64499 knows, through internally exchanged routes, where to send traffic sourced from its network (e.g. end users or "eyeballs") and destined for either 64496 or 64510.

    Scale this to even more autonomous systems and IXes in different geographic regions, and you'll find it becomes a network of networks, or: the internet.
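The route-server behaviour described above (every participant publishes routes once, and receives everyone else's) can be sketched in a few lines. This is a toy illustration, not real BGP; the class and ASNs are my own invention, using the documentation ASNs/prefixes from the comment:

```python
# Toy route server: IX participants advertise prefixes to it once, and
# each participant receives every other participant's routes back.
from collections import defaultdict

class RouteServer:
    def __init__(self):
        # ASN -> set of prefixes that AS has advertised
        self.routes = defaultdict(set)

    def advertise(self, asn: int, prefix: str) -> None:
        self.routes[asn].add(prefix)

    def received_routes(self, asn: int) -> dict:
        # A peer sees all routes except its own.
        return {a: p for a, p in self.routes.items() if a != asn}

rs = RouteServer()
rs.advertise(64496, "192.0.2.0/24")
rs.advertise(64510, "198.51.100.0/24")
print(rs.received_routes(64496))  # {64510: {'198.51.100.0/24'}}
```

Without the route server, each pair of participants would need its own bilateral BGP session, which is the same O(n^2)-to-O(n) win the switch fabric gives at layer 2.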

    • Hikikomori 2 days ago

      >Ideally, participants in an internet exchange purely exchange routing information

      I would hope they exchange Internet traffic as well.

      • mrngm a day ago

        That really depends on local policy. It could be that an AS x present on multiple IXes (assuming another peer AS y is present on the same IXes) prefers sending traffic for AS y on IX t, while AS y prefers sending traffic for AS x on IX v.

        Of course, the goal is to transport traffic in many directions, but in essence only routing information is explicitly exchanged.

  • tecleandor 2 days ago

    In an IX you have mostly routers, firewalls and servers. But sometimes people need to connect a "personal" device to the internet or to the internal server network, and instead of using a NATed/firewalled zone, they use a public internet port (mostly by mistake), and then that kind of stuff can happen.

  • Palomides 2 days ago

    it's a network for ISPs and organizations with big networks to hook together to let traffic flow more directly, mostly only routers talking BGP

  • ranger207 2 days ago

    Say you're trying to go to google.com (dig resolves this for me right now as 64.233.177.100). Your packet leaves your laptop and travels over wifi to your access point, where it goes into a switch. The switch sees the IP, recognizes it's not part of your LAN (because you've set your network up as 192.168.168.0/24 and that IP's not part of that range) and forwards it to its gateway, the router. (Often the AP, switch, and router are colocated in one device, which can cause shortcuts.) The router forwards all its packets to its upstream, which depending on if you have fiber, cable, etc may mean a modem or something first. Say it goes to a modem, where it's put onto a coaxial cable, the other end of which is at your ISP.

    Now the packet is in the ISP's network. Logically this will be a big pile of switches with some servers attached, such as CDNs and the like, connected at points to other ISPs/backbone providers at IXes. Physically it's a bunch of neighborhood racks feeding into ISP datacenters scattered around the country and interconnected either by the ISP's own cables or by backbone providers. Your packet will traverse this network in a number of ways depending on the specifics. Maybe the ISP's switch recognizes that 64.233.177.100 is a CDN on a server in the ISP's datacenter, so it forwards it there. Maybe the address is outside of the ISP's network. The switch that your modem is plugged into will pass the packet along to a router of some sort, which will have a routing table (distributed and updated by internal routing protocols like OSPF or IS-IS) that tells it where to send the packet. If the address is outside of the ISP's network, the router will direct it to one of the IXes based on its routing information.

    Now we're at the IX. Your packet has gone from your laptop, to your AP/switch/router, to your modem, to the ISP's switch, to the ISP's router, to several other ISP switches and routers, and is now at the ISP's router in the IX. The physical configuration of an IX is like any other datacenter: just a bunch of racks, this time with routers in them. There'll be Comcast's routers in one rack, Cox's in another, Lumen (a backbone provider) in another. Then each router will be cabled to each other router, just like you plug your desktop into your router at home. The routers will exchange routing information with BGP, and that information will include who has address 64.233.177.100 connected directly to their network, or if nobody has it then who has a connection to the network it's directly connected to, or if nobody has that then who has a connection to a network that's connected to the network that 64.233.177.100 is connected to, etc. Like your ISP's network, each other network is a logical mass of switches and routers, and a physical connection of datacenters, neighborhood routing racks, and IXes.

    Maybe in this case 64.233.177.100 is in a datacenter connected to another ISP at the IX; in that case the packet will go to that ISP and through its internal network to the datacenter. Or maybe nobody connected to this IX has 64.233.177.100. In that case the packet will probably travel through one of the backbone providers' internal network to the IX where the ISP that does have 64.233.177.100 connected to it is connected.

    Since physically an IX is just a bunch of routers in a rack, you can have things plugged into them other than other routers, although this is generally frowned upon. Many IXes have locked racks to prevent that.
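The routing-table lookup in the walkthrough above picks the most specific matching route (longest-prefix match). A minimal sketch with Python's stdlib `ipaddress` module; the table entries and next-hop labels are hypothetical:

```python
# Longest-prefix match over a tiny, hypothetical routing table.
import ipaddress

ROUTES = {
    "0.0.0.0/0": "upstream/IX",          # default route
    "64.233.177.0/24": "CDN in our DC",  # hypothetical CDN prefix
    "10.0.0.0/8": "internal backbone",
}

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [ipaddress.ip_network(p) for p in ROUTES
               if addr in ipaddress.ip_network(p)]
    # The most specific route (longest prefix) wins.
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[str(best)]

print(lookup("64.233.177.100"))  # CDN in our DC
print(lookup("8.8.8.8"))         # upstream/IX (only the default matches)
```

Real routers do this in hardware (TCAM or trie lookups) over hundreds of thousands of prefixes, but the selection rule is the same.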

    • justusthane 2 days ago

      > The switch sees the IP, recognizes it's not part of your LAN (because you've set your network up as 192.168.168.0/24 and that IP's not part of that range) and forwards it to its gateway, the router.

      This may be nitpicky, but assuming we’re talking about a switch in the strictest definition (a layer 2 switch), this is not correct. Your computer sees that the destination IP address is not in its local subnet, and addresses the packet to the MAC address of its default gateway (the router). The switch receives the packet and forwards it to the appropriate interface based on the destination MAC address.

      Even if we are talking about a layer 3 switch, then we would be assuming that the gateway resides on the switch, and it is still the computer that makes the decision to send the packet to its default gateway.

      Given the rest of your comment I’m assuming you already know this, and I’m not posting this as a correction to you, but rather for the benefit of others.
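The host-side decision described in the correction above (is the destination on my subnet, or does it go to the default gateway?) is just a subnet-membership test. A sketch using the subnet from the parent comment; the label strings are illustrative only:

```python
# The sending host's forwarding decision: deliver locally, or hand the
# frame to the default gateway's MAC address.
import ipaddress

LOCAL_NET = ipaddress.ip_network("192.168.168.0/24")

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    if addr in LOCAL_NET:
        return "direct (ARP for the destination host itself)"
    return "default gateway (ARP for the router's MAC)"

print(next_hop("192.168.168.42"))   # direct
print(next_hop("64.233.177.100"))   # via the default gateway
```

The layer-2 switch never looks at the IP header at all; it only forwards the frame toward whichever MAC address the host chose.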

bcrl 2 days ago

This is why IXes like Torix lock down switch ports to a single MAC address and a limited set of protocols (ARP, IPv4, IPv6). It's a bit of a pain when first setting up peering, but it reduces the exposure for all those connected to the IX. The reliability track record clearly demonstrates how worthwhile the strategy is.

liotier 2 days ago

I wish everyone used LLDP everywhere: it is harmless and immensely helps in finding the correct spaghetti in the plate.

  • madsushi 2 days ago

    All surface area carries risk, and LLDP is additional surface area when exposed on a public IX.

wmf 2 days ago

LLDP and STP are between the host and the switch and switches don't forward them. Why would you see that at an IX?

  • toast0 21 hours ago

    If the switch sends it to you, you'll see it. If you're running the switch(!) and the host sends it to you, you'll see it.

  • jasonjayr 2 days ago

    Someone misconfigured their far end, and bridged things into the segment that shouldn't have been.

lazide 2 days ago

I wish they didn’t defacto sort least-interesting-first :s

egberts1 a day ago

Cosmic Analogy of an Internet Exchange (IX)

- The IX as a Galactic Core:

The IX is like a gravitational hub at the center of a galactic cluster. It doesn’t create stars (data) itself, but it bends spacetime (network topology) in such a way that galaxies (ISPs) can orbit and interact efficiently without needing to sling data across the intergalactic void (expensive transit).

- ISPs as Stars/Planets: Each ISP is like a stellar system with its own mass (bandwidth capacity, user base). They orbit into the IX “gravitational well” by sending out fiber-optic “space-time bridges” (links) to dock into the exchange.

- Fiber Links as Wormholes: The optical fibers into the IX are essentially wormholes: direct, high-speed tunnels across spacetime that collapse distance. Instead of traveling light-years through transit (Tier-1 providers), local traffic jumps instantly between ISPs through the IX’s wormhole endpoints.

- Switch Fabric as the Interstellar Medium: The IX switching fabric is like a plasma medium in the galactic core — it doesn’t decide where stars go, it just provides the environment where matter (packets) can move around without collision. It’s neutral, like the intergalactic medium, just enabling the exchange.

- BGP as Gravity’s Equation: Just like Einstein’s field equations determine how mass curves spacetime, BGP policies determine how traffic paths curve through the IX. Each ISP “publishes its mass” (advertises prefixes), and neighboring ISPs decide how to route packets based on that gravitational influence.

- Local vs. Transit: Without an IX, traffic between two galaxies (ISPs) might have to pass through a supercluster-scale filament (Tier-1 backbone). With an IX, they just exchange starlight (packets) directly within the cluster, conserving energy and reducing latency — much like galaxies trading radiation locally instead of sending it across megaparsecs.

So in astrophysics terms: an IX is a galactic core enabling local wormhole trade routes, where BGP is the gravity law and fibers are Einstein-Rosen bridges connecting stellar systems (ISPs) into a dynamic, orbiting cluster.