caseysoftware a year ago

Bob has been an active member of the Austin startup community for 10+ years and I've talked with him many times. As an EE, it was cool meeting him the first time and once I'd chatted with him a few times, I finally asked the question I'd been dying to ask: How'd you come up with "Metcalfe's Law"?

Metcalfe's Law states the value of a network is proportional to the square of the number of devices in the system.

When I finally asked him, he looked at me and said "I made it up."

Me: .. what?

Him: I was selling network cards and I wanted people to buy more.

Me: .. what?

Him: If I could convince someone to buy 4 instead of 2, that was great. So I told them buying more made each of them more valuable.

It was mind blowing because so many other things were built on that "law" that began as a sales pitch. Lots of people have proven out "more nodes are more valuable" but that's where it started.

He also tells a story about declining a job with Steve Jobs to start 3Com and Steve later coming to his wedding. He also shared a scan of his original pitch deck for 3Com which was a set of transparencies because Powerpoint hadn't been invented yet. I think I kept a copy of it..

  • caseysoftware a year ago

    Btw, when I say "an active member of the Austin startup community" - I mean that seriously.

    Not only did he teach a class on startups at the University of Texas, but he also regularly came to a coffee meetup for years, attended Startup Weekend demo time, came to Techstars Demo Day, and was generally present. I even got to do the Twilio 5 Minute Demo for one of his classes (circa 2012).

    It was always cool to have someone who shaped our industry just hanging out and chatting with people.

    • mbajkowski a year ago

      Absolutely correct. Chatted with him several times circa 2015 to 2016 when working out of Capital Factory in Austin. He was present for all sorts of events, such as mentor hours, startup pitches, etc. Funnily enough, he would give you a very stern look if he thought you were taking him for a ride. Have not been there recently as much as I would like, but I imagine he is still around to be found.

      • seehafer a year ago

        Had a very similar experience hanging out with him and his equally-brilliant wife Robyn in ATX between 2011-2012. Very approachable guy -- impressively so, given his stature in the industry -- but could be quick with the "what the hell are you talking about?" look.

  • lr4444lr a year ago

    I respect Metcalfe a lot, but halfway through undergraduate discrete math it was pretty obvious to most people in the class, even before seeing a formal proof, that a fully connected graph has O(n^2) edges. I just figured that people wowed by "Metcalfe's Law" were business types who didn't bring any formal theory into computing.

    • oh_sigh a year ago

      Metcalfe's law is about network impact or value, not about connections.

      • ivalm a year ago

        Yeah, but basically it’s a statement that value scales linearly with the number of pairwise connections.

        • fsckboy a year ago

          but it's a loose approximation so it's not good to overanalyze it.

          The number of pairwise connections grows as the square of the number of nodes, and connections ("how many people can you talk to") are valuable, so value grows. And each new node added to the network grows the pairwise connections by N, so that's even better.

          broadcast (one to many connections, like giving a speech to a crowd) is an efficiency hack, which is good, and efficiency hacks grow as the number of connections grow, so that's good too...

          ... is more how I think about what Metcalfe was talking about. Which aspects are x, which are x squared, which are log x is interesting, but that's not all bound up in his simple statement, despite his "as the square" wording.

          and Bob Metcalfe is personally a great guy in all the ways people are saying, but it's not soooo unique; that's the way a lot of tech types were as the mantle passed from the Greatest Generation to the Boomers (and what was that one in the middle, "lost" or "invisible" or something). I'm not suggesting we've lost that (we may have), just saying that's how it was. For instance, as an undergrad you could walk into any professor's office and get serious attention.

      • btilly a year ago

        It counts connections and uses them as an estimate of value.

        However not all connections are equally valuable. And therefore the "law" is incorrect. An estimate in far better agreement with the data is O(n log(n)), and you can find multiple lines of reasoning arriving at that in https://www-users.cse.umn.edu/~odlyzko/doc/metcalfe.pdf.
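
        For a feel of how differently the two candidate laws grow, here's a quick Python sketch (it only compares shapes; it makes no claim about which curve fits real data):

            import math

            for n in (10, 1_000, 100_000):
                pairs = n * (n - 1) // 2   # Metcalfe: pairwise connections, ~n^2/2
                nlogn = n * math.log(n)    # the proposed n log n scaling
                print(f"n={n:>7,}  n(n-1)/2={pairs:>13,}  n*ln(n)={nlogn:>13,.0f}")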

        • gwern a year ago

          I see only two real 'quantitative' arguments in https://www-users.cse.umn.edu/~odlyzko/doc/metcalfe.pdf#page... . Your first argument, 'connections aren't of equal value', doesn't defeat Metcalfe. Your second argument, that Metcalfe's law would mean efficient markets would merge all networks into one, is both the most amazing overestimate of the competence & economic rationality of telecom giants I've ever seen and also not actually true as a matter of economic theory (https://gwern.net/doc/economics/automation/metcalfes-law/201...). So neither of your handwaving arguments was very good to begin with.

          Better agreement with what data, exactly? It's definitely not in that paper, and every single paper I find with data empirically testing your nlogn proposal against Metcalfe finds your nlogn doesn't fit the data at all while Metcalfe can fit well: Facebook (https://gwern.net/doc/economics/automation/metcalfes-law/201...), the entire EU (https://gwern.net/doc/economics/automation/metcalfes-law/201...), Tencent (https://gwern.net/doc/economics/automation/metcalfes-law/201...), and Bitcoin (https://gwern.net/doc/economics/automation/metcalfes-law/201...).

          • btilly a year ago

            Only two?

            The gravity law argument based on geographic distribution of traffic, Zipf's Law and Bradford's law all have empirical evidence behind them. That's three. Additionally, in another version of the same paper, Bob Briscoe contributed data from British Telecom usage that supported the same scaling rule.

            The second paper that you gave is interesting. Odlyzko was the one who contributed that particular argument. It is right that there are rational reasons to not interconnect. But Metcalfe would imply more of a first mover advantage than we actually see. In social networks we had Friendster, MySpace and Facebook, each overtaken by the next. How could a new entrant supplant the king? Not once, but twice?

            Since then new social networks have continued to sprout and succeed. Facebook managed to stay on top, in part through purchasing other networks. One of which (Instagram) is on track to surpass Facebook in revenue.

            Now let's look at the 4 papers that you collected.

            The first and third have the same flaw. They are looking at revenue over time as the network grew. But the growth of the network is not the only change that happened over time.

            1. The Facebook product improved to become more compelling, even for the same users. In part by adding new channels through purchasing other networks.

            2. Facebook kept adding new ways to monetize people, improving revenue.

            3. People's behavior has shifted to more online over time. Thus it was easier to get value from the same users in 2014 than in 2008.

            Because so much has changed, comparing users in 2008 to users in 2014 is not apples to apples.

            Next, let's turn to the last paper. I'm in agreement with patio11 that Bitcoin's valuation has been driven by the largest Ponzi scheme in history. Therefore I view most of its valuation as fake. And so am not inclined to accept arguments from that valuation as valid.

            And I saved the best for last. In section 2.4 the EU paper argues that Briscoe's law (I think Odlyzko should be credited, but Bob Briscoe is in the EU) is more accurate than Metcalfe's law after you hit scale.

            Their argument in effect is a variant of one that was discussed privately before we wrote our paper. Our immediate perception of the size of a network is based on how much of our personal social groups are on it. The value we get from that network is based on the same. Therefore our perception of the size of the network is correlated with the value we get from it. If the network mostly contains parts of groups, you do get something like Metcalfe's Law out of this. But once the network contains a lot of completed social groups, members of those groups slow down how much value they gain as the network continues to grow.

            In other words, when the connections in the network are a random sampling of the connections that matter to us, growing the network adds valuable connections. Once the network contains the connections that we know matter to us, most of us only benefit marginally from continued growth.

      • asciii a year ago

        Wasn't it specifically "compatibly communicating devices" or something, and not users like how it was marketed?

  • passwordoops a year ago

    HN comment of the year winner right here! Makes you wonder how many other laws are built on nothing.

    If there's one thing I learned doing a Ph.D., it's that if you dig deep enough, you find many foundational laws of nature rely on some necessary assumption that, if proven incorrect, would topple the whole thing.

    • magic_hamster a year ago

      It's worth mentioning Moore's Law, which was actually a short term prediction, arguably turned into a business goal. The "law" states that the number of transistors in integrated circuits (like CPUs) will double every two years (or 18 months by some variations). It wasn't entirely made up, as it was mostly based on advances in manufacturing technology, but it was a prediction made in 1965 that was supposed to hold for ten years. However, reality kept up with this prediction for far longer than anticipated, arguably until the mid 00's (maybe later?), when the physical limits of silicon miniaturization became apparent.

      • vlovich123 a year ago

        I think it technically kept going into the early 2010s due to additional advancements and technically it hasn’t yet stopped but people are generally skeptical that TSMC and Samsung can keep this party going (a party that seems to have stopped for Intel itself apparently).

        Dennard scaling though did end in the mid 00s and this impacted Koomey’s law which talks about performance/watt and saw a similar petering out.

        Apparently, even at a conservative rate, the thermodynamic bound puts the doubling limit at 2080. After that we’ll have to really hope smart people have figured out how to make reversible computing a thing.

        • btilly a year ago

          CPU clock speed stopped improving slightly sooner than that. Performance continued to improve, but they switched from making single threaded code faster to adding more cores.

          This was a bit of a bummer for programmers working in single threaded languages who found that their code stopped getting faster for free.

        • logi2mus a year ago

          Lots of public funding (not only European) made the progress to EUV by ASML, Zeiss and others possible.

      • chx a year ago

        Moore's law is still going, or so I thought. It's Dennard scaling that stopped around 2006.

        https://ourworldindata.org/grapher/transistors-per-microproc...

        • monocasa a year ago

          Sorta. It didn't hit a hard boundary per se, but SRAM has practically stopped scaling and even random logic is only scaling at about 1.5x every three years or so. Like a lot of cases it's an s-curve, and we're on the other side of the bend at this point.

      • bombcar a year ago

        Moore's law is almost the opposite of Metcalfe's - Metcalfe's encourages you to build out the network as fast as possible to get the most value; Moore's implies you should wait as long as possible before buying processing power to get the most you can.

      • jeffbee a year ago

        Moore’s law isn’t even dead. It says that the number of transistors per dollar rises at that rate, which is still going. Commenters tend to omit the cost component of Moore’s remark.

    • toyg a year ago

      CS "laws" like Metcalfe's are closer to Murphy's Law than Newton's...

    • fsckboy a year ago

      > Makes you wonder how many other laws are built on nothing.

      variance/standard deviation (also btw, a sum of squares concept)

      it marks the inflection points on the gaussian curve, but so what, the 2nd derivative points to something significant about the integral? not really. But even if we accept that it does, what does two standard deviations mean? a linear double on the x coordinates says what about a hairy population density function? nothing.

      or similar to Metcalfe's Law, the very widely used Herfindahl Index (also squares!). It's a cross between a hash and a compression, it says something about the original numbers, but wildly different scenarios can collide.

    • swayvil a year ago

      When observation is translated to "law". That is an act of judgment on the part of the law-maker, purely. Call it "built on nothing" if you like. But as opposed to what?

    • syedkarim a year ago

      Do you know of any in particular?

    • RHSman2 a year ago

      Read 'The Unlocking Project'

    • shuntress a year ago

      "IF proven incorrect" is the important part.

      This "law" isn't somehow less true just because it was originally used as a sales tactic.

      • oldgradstudent a year ago

        How would you even test such a vague law, let alone disprove it?

        • btilly a year ago

          The law implies testable consequences, such as what the economic incentives should be from interconnecting networks. They are good enough that we should expect to see more drive to interconnect, and stronger barriers to entry for future networks, than history actually shows.

          https://www-users.cse.umn.edu/~odlyzko/doc/metcalfe.pdf offers this and several other lines of evidence that the law is wrong, and O(n log(n)) is a more accurate scaling law.

  • jacquesm a year ago

    It is quite telling that when Bob Metcalfe 'makes stuff up' he still hits it out of the park.

    • riceart a year ago

      A little confirmation bias on this one. In addition to the infamous internet will collapse prediction he was also pretty whole hog on the Segway scooter revolutionizing transit.

      • jacquesm a year ago

        So let me enlighten you a bit: we did collapse the internet, and got a testy email from a bunch of backbone maintainers that they were going to block our live video streams (on port 2047) in four weeks time or so. Which resulted in us moving to the other side of the Atlantic to relieve the transatlantic cable. So even if it didn't make the news Metcalfe was 100% on the money on that particular prediction. The Segway never had a chance as far as I'm concerned but the other thing he got just so. But maybe he never knew (and I never knew about his bet).

        • riceart a year ago

          He made a very specific prediction - it didn’t pan out - that there have been near misses and even global degradation events multiple times in the past 3 decades is not relevant. He admitted he was wrong and literally ate his words.

    • stormfather a year ago

      > But I predict the Internet, which only just recently got this section here in InfoWorld, will soon go spectacularly supernova and in 1996 catastrophically collapse.

      - Bob Metcalfe

  • btilly a year ago

    Not only did he make it up, but it is false! Multiple lines of evidence point to a O(n log(n)) law instead.

    https://www-users.cse.umn.edu/~odlyzko/doc/metcalfe.pdf has the details.

    • NHQ a year ago

      From the paper:

      > In general, connections are not used with the same intensity... so assigning equal value to them is not justified. This is the basic objection to Metcalfe’s Law...

      In my architectonic opinion, the perfect network comprises all nodes operating equally. Ergo the ideal is indeed Metcalfe's law, but architecture and design can be costly, which is simply the inefficient use of resources. These being very precise machines, anything less than 99.999% is amateur, ergo the law obtains.

      • btilly a year ago

        We are talking about computer systems that connect a network of humans. Humans are notoriously imprecise and unreliable machines. Anything more than 0.00001% is therefore a miracle.

        • NHQ a year ago

          Lol, networking people has produced little of real value except the paradigm itself, and social networking is little more than making humans more efficient at marketing to each other. Networking is for DATA. When people behave like networked machines... well that's global capital communism tbqh.

  • chasd00 a year ago

    I remember trying to get NICs to work in Linux and the best advice was usually “just try the 3c509 driver”.

    • jabl a year ago

      I remember when I bought my first fast ethernet card, there was some Linux HOWTO that discussed various ethernet NICs, and crucially, their Linux drivers in excruciating detail. And the takeaway was that if you had a choice, pick either a 3com 5xx(?) or an Intel card. The 3com card was slightly cheaper at the local computer shop, so that's what I ended up with (595 Vortex, maybe?).

      • IYasha a year ago

        Yeah, I had gold-plated 100Mb 3Com cards and they were the best. (something-905-series?) With full-duplex, hardware offloading, good drivers. I still have one lying somewhere. )

    • hinkley a year ago

      As a poor college student I scavenged 3c509 cards to build a computer network in an apartment I shared with two other chronic internet users.

      That was right about the time someone had solved a bug with certain revisions of the card behaving differently, so suddenly the availability jumped considerably.

    • xen2xen1 a year ago

      Similar to "try the HP 4si driver" for printers?

    • xbar a year ago

      Practically a mantra.

      • bombcar a year ago

        It was well known when I started that you got a card that would work with that (and later for gigabit it was e1000).

  • not2b a year ago

    Although he made it up, there's an argument that the value goes up more than linearly. But as the network grows, not every node needs to talk to every other node except in rare circumstances, or they can reach each other through an intermediate point. So maybe O(n log n) would be closer.

    • hinkley a year ago

      I recall seeing an article a number of years ago that argued just that: that the network effect is n log n. Still enough to help explain why large networks grow larger, but it also means that overcoming the incumbent is not the insurmountable wall it may seem to be. You may only need to work twice as hard to catch up, rather than orders of magnitude harder.

  • dwheeler a year ago

    He may have "made it up" to improve sales, but from a certain viewpoint it's correct. If you decide to measure the "value" of a network based on the number of node connections, then the number of connections for n nodes is n(n-1)/2 = 0.5n^2 - 0.5n, which is O(n^2).

    Of course, the value of something is hard to measure. Typically you measure value as "benefits - costs", and try to convert everything to a currency. E.g., see: https://www.investopedia.com/terms/c/cost-benefitanalysis.as... . But there are often many unknowns, as well as intangible benefits and costs. That makes that process - which seems rigorous at first - a lot harder to do in reality.

    So while he may have "made it up" on the spot, he had a deep understanding of networking, and I'm sure he knew that the number of connections is proportional to the square of the number of nodes. So I suspect his intuition grabbed a quick way to estimate value, using what he knew about connection growth. Sure, it's nowhere near as rigorous as "benefits - costs", but that is hard to really measure, and many decisions simply need adequate information to make a reasonable choice. In which case, he both "made it up" and made a claim that you can justify mathematically.

  • nonrandomstring a year ago

    And yet it's trivially true. Value accrues with connectivity, and the number of edges in a fully connected graph is n(n-1)/2, which as n grows larger approximates n^2. I would be surprised if he said he "made it up" other than as a joke about elementary computer science.

    • cafeinux a year ago

      As n grows larger, the number of edges approximates n²/2. I may be pedantic but I feel that the difference between something and its half is non-negligible.

      • not2b a year ago

        You're assuming complete connectivity; no one builds networks of nontrivial size that way.

  • pabs3 a year ago

    If that copy still exists, having it on archive.org might be interesting.

  • ksajadi a year ago

    Love the story, man!

jacquesm a year ago

Well deserved. I remember dealing with a whole raft of other networking technologies and Ethernet stood head-and-shoulders above anything else available at the time.

One thing that is not well appreciated today is how power efficient Ethernet was, even on launch in the coax era. Other network technologies (Token Ring as embodied by IBMs network cards, for instance) consumed power like there was no tomorrow, leading someone to quip that it should be renamed 'smoking thing'.

As the price came down (around the NE1000/2000 and 3C509 era) it suddenly was everywhere and economies of scale wiped out the competition until WiFi came along. But even today - and as I'm writing this on my ethernet connected laptop - I prefer wired networks to wireless ones. They seem more reliable to me and throughput is constant rather than spotty, which weighs heavier to me than convenience.

So thank you Bob Metcalfe, I actually think this award is a bit late.

Anybody remember Don Becker?

  • robin_reala a year ago

    I had no idea that Token Ring was inefficient with power, but it certainly had a bunch of other problems. Biggest (at least on PCs) was its inability to recover from a cable being unplugged without resetting a bunch of the system, and the type-1 token ring cables win the award for being the most needlessly bulky,[1] even if the connectors had a plug-into-each-other party trick.

    [1] https://en.wikipedia.org/wiki/Gender_of_connectors_and_faste...

  • retrocryptid a year ago

    I still have a soft spot in my heart for ARCNet. In the 80s it was cheaper than ethernet, but more reliable than token ring. And for the few places that prioritized determinism over throughput, it was indispensable.

    But ethernet kept improving speed and reliability while ARCnet retreated to shop-floor niche applications.

    Alas.

    • dugmartin a year ago

      ARCNet was nice, except for when people randomly decided to remove the terminator from the t-connector on the back of their desktop because "it looked weird", taking down the whole network. That happened to me more than once doing network support in college.

    • AlbertCory a year ago

      ARCNet is mentioned heavily in The Big Bucks. I have to admit that I knew very little about it before doing research for the book.

      • retrocryptid a year ago

        One more book on the stack... Now I have to read it to find out how ARCNet worms its way into a novel about sili valley.

        • AlbertCory a year ago

          This part's not in the book: Gordon Peterson, the architect of ARCNet, was a major source for me. He talked to Bob back in the day.

          Gordon's still bitter about it, and will gladly tell you why Ethernet is inferior.

          • jandrese a year ago

            Ethernet is one of those case studies in "worse is better".

            I remember the old saying that "Ethernet doesn't work in theory, but it does in practice". Mostly referring to the CSMA/CD scheme used before switches took over.

            The competitive advantage of being built out of cheap commodity hardware and cabling is hard to overstate. Nobody likes dealing with vendors, their salespeople, and especially support contracts. Especially since that is always more expensive and often solves problems you don't have, like minimum latency guarantees, at the cost of throughput and complexity.

            • AlbertCory a year ago

              There were a lot of LAN schemes back then. Mostly forgotten now.

              Many press commentators opined that of course "broadband" would be much better than "baseband" since it could carry voice and video, not just bits.

            • erosenbe0 a year ago

              Agreed. Can't overstate the cost effectiveness. In the late 80s or early 90s you could put hundreds of dumb terminals on one network with just hubs for signal integrity. Plenty of collisions but it all worked itself out somehow if the throughput was light, such as text applications, text email, and a small amount of printing or sharing. This meant every university could have some kind of network scheme, making it a universal for the next gen.

              • AlbertCory a year ago

                I even put a whole section with that old debate in Inventing the Future (Janet, working at Xerox, is arguing with her husband Ken, who's at Hughes Aircraft):

                ==============

                She’d heard that the product Xerox was going to ship was aiming for 20 megabits.[1] It was amazing. But, as people at Xerox explained, imagine an entire office full of knowledge workers with their own computers, sending documents and email to each other and to the printer. How much bandwidth would they need? A lot! Someone in Palo Alto had done a whiteboard exercise on the bandwidth required for teleportation, in other words, sending an entire human being through the wire. It wasn’t that they took the prospect of teleportation seriously, but still, it was fun, and she enjoyed being around people who had fun at work.

                Ken scoffed and insisted that Ethernet would never work. This was a recurring argument with them that was starting to annoy her, although usually she’d just smile and change the subject. He kept hurling the word deterministic as if it were a magic talisman. On the Ethernet, you weren’t certain how long you’d have to wait to transmit your data. If the wire was busy when you wanted to talk, you had to wait.

                If someone else tried to transmit exactly when you did, you both had to back off and try again. An engineer could give you the probabilities of various results, but no more. This was not real engineering, and Ken was offended by it. He was even offended by the word “ether” since any beginning physics student knows that ether was a bogus concept that was disproven ages ago.

                The entire communications field that the two of them had studied at MIT was based on strict mathematical calculations and guarantees. For example, when you make a telephone connection, that circuit is yours until you’re done. No other calls interfere with yours. The telephone companies had spent decades perfecting this system, and they had a monopoly. How could any company, even one as big as Xerox, hope to change that?

                She tried telling him that packet-switching was a real discipline, and the Defense Department itself was backing it. It was originally designed to survive a nuclear attack that destroyed some of the military’s communications lines. With packet switching, the message was broken up into pieces, or packets, and the packets might arrive on separate paths. Ken didn’t believe it would ever have commercial applications. People in aerospace had a low opinion of commercial stuff anyway.

                [1] The original speed goal for the Xerox Wire was 20 megabits/sec. The first controller, for the Dolphin, had independent send and receive buffers, but could only be made to fit on the board using 10-Mbps CRC chips from Fairchild. Furthermore, in the lab, Tony found that 20-Mbps signaling caused spurious collision detects on the cable due to transceiver tap reflections.

            • EricE a year ago

              There were a lot of LAN schemes - and slightly incompatible ethernet implementations. I remember when the Interop tradeshow in Vegas required vendors to either attach and integrate with the show network or they would get kicked off the floor. Good times!

          • retrocryptid a year ago

            Well.. I still want to read the book. I'm a sucker for a well crafted story about old hardware from the days when technology gods walked the earth.

            I'm sure Ethernet's market domination is because the spec wasn't owned by a single company, and nothing to do with its technical merits. After IBM's SNA, people seemed paranoid about a networking spec being owned by a single company. Do you know if Datapoint thought about that and whether they tried to build their own equivalent of the DIX consortium?

            I also think about SpaceWire / IEEE-1355 / Wormhole Routing and what might have been had we adopted systems where compute power could be easily upgraded.

            Oh! The good old days when everything was possible!

            • AlbertCory a year ago

              on DataPoint: my hero (sort of) Matt Feingold spends a summer internship at DataPoint. As far as he (and I) could tell, people still thought in terms of "account control" back then.

              There's actually a book on DataPoint (and almost every other company from way back when). I read them so you don't have to :)

    • aidenn0 a year ago

      I get the impression that 10BASE-T killed ARCNet, and it was the "T" rather than the "10" that did so. Running cheap CAT-5 to a set of interconnected hubs was just so much easier and more reliable than t-connectors, terminators &c.

  • eointierney a year ago

    Never met Don Becker, but since it was the Beowulf project that got me interested in GNU/Linux, he is synonymous with ethernet drivers for me.

  • rejectfinite a year ago

    >They seem more reliable to me and throughput is constant rather than spotty, which weighs heavier to me than convenience.

    They ARE more reliable.

    I'd much rather use ethernet than wifi on desktops and laptops.

    Now with video meetings, high quality webcams, mics and gaming, latency and bandwidth are king.

    WiFi is usually FAST but it is not as STABLE.

  • oaiey a year ago

    WiFi is not the competition ;) It is the brother in arms ;)

  • lunfard000 a year ago

    Sure, years ago. But today Ethernet is just as scammy as everyone else; we've been stuck at 1 Gbps on consumer grade hardware for more than 15 years. There are claims (unverified ofc) about executives boasting about their stupid margins. A 1 Gb switch is like 10-20 euros, meanwhile 2.5 Gbps is over 100...

    • RF_Savage a year ago

      2.5Gb is downshifted 10Gb with the same line coding, just with 1/4 the symbol rate. This means that it inherits all the complexities of 10GbE, while tolerating cheaper connectors and cables. 10GbE uses DSQ128 PAM-16 at 800Msym/s. 2.5G just does quarter-rate at 200Msym/s.

      1000BaseT uses trellis coded PAM-5, a significantly less complex modulation.

      When one factors in the complexity of the line code and all equalisation and other processing in the analog frontend things get expensive. Copper 10Gb interfaces run very hot for a reason. It takes quite a bit of signal processing and tricks to push 10Gb over copper, at least for any significant distances.
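
      As a rough sanity check of those symbol rates (a sketch; the bits-per-symbol figures are approximations of the specs, not exact coding parameters):

          pairs = 4                     # all BASE-T variants use 4 wire pairs
          # 10GBASE-T: 800 Msym/s, ~3.125 info bits per symbol per pair after coding overhead
          print(800e6 * pairs * 3.125)  # -> 1.0e10, i.e. 10 Gb/s
          # 2.5GBASE-T: same line code at a quarter of the symbol rate
          print(200e6 * pairs * 3.125)  # -> 2.5e9, i.e. 2.5 Gb/s
          # 1000BASE-T: 125 Msym/s, 2 info bits per symbol per pair (trellis-coded PAM-5)
          print(125e6 * pairs * 2)      # -> 1.0e9, i.e. 1 Gb/s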

      • throw0101b a year ago

        > tolerating cheaper connectors and cables

        I always find the graphic below handy for telling which Cat cable can handle which Gig speed:

        * https://en.wikipedia.org/wiki/Ethernet_over_twisted_pair#Var...

        • toast0 a year ago

          It's not really about can handle, but more is specified to handle at maximum length in a dense conduit.

          At shorter lengths, and in single runs, it's always worth trying something beyond what the wiring jacket says. I've run gigE over a run with a small section of cat3 coupled to a longer cat5e run (repurposed 4-pair phone wire), and just recently setup a 10G segment on a medium length of cat5e. The only thing is while I think 2.5G/5G devices do test for wiring quality, the decimal speeds don't, auto-negotiation happens on the 1Mbps link pulses, unmanaged devices can easily negotiate to speeds that won't work, if your wiring is less than spec, you need to be able to influence negotiation on at least one side, in case it doesn't work out.

    • jacquesm a year ago

      I can't make heads or tails of your comment. What is scammy about Ethernet and what 'stupid margins' does Ethernet have? It's a networking standard, not a company.

      • themoonisachees a year ago

        2.5G or even 10G is not that much more expensive, and companies making consumer electronics sell it at a considerable premium for what is essentially the same cost difference as making an 8GB vs 16GB flash drive. Of course, regular internet users don't need more than 2.5G (and couldn't use it in most of the world due to ISP monopolies), so anything faster than gigabit is a target for segmentation.

        • gooroo a year ago

          The market at work. There is just no real demand for anything beyond 1G.

          The HN crowd is not representative of what would be needed to drive the price tags down on 2.5G stuff.

          • hinkley a year ago

            If you have a gigabit internet connection, then most of the value of 10G comes from data sharing within the intranet, which just never caught on outside of hobbyists. And a 1G switch can still handle a lot of that. You don’t even need 10G for LAN parties, and whether backups can go faster depends on the storage speed and whether you actually care. Background backups hide a lot of sins.

            I’m hoping a swing back to on-prem servers will justify higher throughput, but that still may not be the case. You need something big to get people to upgrade aging infrastructure. What would be enough to get people to pay for new cable runs? 20Gb? 40?

        • Dalewyn a year ago

          Rant aside, I think there is an argument to be made that 2.5gbps switches "should" be cheaper now that 2.5gbps NICs have become fairly commonplace in the mainstream market.

          Case in point, I have a few recent-purchase machines with 2.5gbps networking but no 2.5gbps switch to connect them to because I personally can't justify their cost yet.

          I suppose I could bond two 1gbps ports together, or something, but I like to think I have other yaks to shave right now.

          • bombcar a year ago

            You can get some basic switches that do 2.5gb but it's like $100, a bit more for a brand you might recognize.

            https://www.amazon.com/5-Port-Multi-Gigabit-Unmanaged-Entert...

            Personally I went with Mikrotik's 10gb switch but that needed SFP port thingies (which was fine for me, as I was connecting one old enterprise switch via fiber, direct-coppering two servers, and using cabled cat7 or whatever for the Mac).

            2.5gb is silly in my opinion unless it's literally "free" - you're often better with old 10gb equipment.

            • toast0 a year ago

              > 2.5gb is silly in my opinion unless it's literally "free" - you're often better with old 10gb equipment.

              I think 2.5g is going to make it in the marketplace, because 2.5g switches are finally starting to come down in price, and 10g switches are roughly twice the price, and that might be for sfp+, so you'll likely need transceivers, unless you're close enough for DAC. (NIC prices are also pretty good now, as siblings noted. But if you go with used 10G, you can get good prices there too, I've got 4 dual 10G cards and paid between $25 and $35 shipped for each)

            • Dalewyn a year ago

              Yeah, it's that cost that is the problem. If I'm paying over a hundred bucks for a switch I might as well go higher and consider 10gbps options.

              2.5gbps hardware needs to come down to at least the $30 to $40 range if it wants to make any sense. Otherwise, it'll stay as niche hardware specifically for diehard enthusiasts or specific professionals only.

              • BenjiWiebe a year ago

                The NICs can be had for $20 (pretty sure I saw a $11 one the other day but can't find it right now on mobile).

                • Dalewyn a year ago

                  The NICs are reasonable now, yes. The issue is the thing on the other side of the cable; 2.5gbps switches and routers need to come down in price.

          • jandrese a year ago

            The problem with 2.5G is that it's not enough of an upgrade over 1G to warrant buying all new switches and NICs to get it. For that matter few home users push around enough data for 10G to be a big win.

            IMHO this is why Ethernet has stalled out at 1G. People still don't have large enough data needs to make it worthwhile. See also: the average storage capacity of new personal computers. It has been stuck around 1TB for ages. Hell, it went down for several years during the SSD transition.

            • Dalewyn a year ago

              2.5gbps is literally 2.5x the speed of gigabit ethernet, so that's going to be very noticeable even for most home users if they do any amount of LAN file sharing.

              It's really just the cost that's the problem, because paying 4x to 6x the cost of gigabit hardware for a 2.5x performance boost doesn't make a lot of sense.

              If 2.5gbps peripheral hardware costs come down, I will happily bet they will take off.

              • amoss a year ago

                This assumes that the LAN is the bottleneck. Gigabit ethernet tops out at 120MB/s, which is about the speed of spinning rust on a NAS.
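
                (That 120MB/s figure falls out of simple arithmetic; the overheads below are approximate:)

                    raw = 1e9 / 8                # 1 Gb/s on the wire = 125 MB/s
                    # per 1500-byte frame: ~38 bytes of framing, preamble and inter-frame gap,
                    # plus 40 bytes of IP+TCP headers, leaving ~1460 bytes of payload
                    print(raw * 1460 / 1538)     # ~118.7 MB/s of actual file data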

                • Dalewyn a year ago

                Yeah, but you probably have more than one drive RAID'd in that NAS, so you would almost certainly get faster transfers (granted: sequential) if ethernet weren't the bottleneck.

                  2.5gbps ethernet translates to roughly 250MB/s in real world transfer speeds, that's a lot. Literally over double real world gigabit transfer speeds, and far less likely to bottleneck you.

        • jacquesm a year ago

          But that has nothing to do with Ethernet as such, which isn't a 'company making consumer electronics'.

    • fulafel a year ago

      It's a big loss that wired networking speeds have plateaued, but I feel it's more about apps and people adapting to slow and choppy wireless networks, which penalise apps that leverage quality connectivity and stand as bottlenecks in home networks (e.g. you don't need 10G broadband, the wifi will cap everything to slow speeds anyway). Add to that mobile devices, which had much smaller screens and memories than computers for a decade+, stalling the demand driven by Moore's law.

    • wongarsu a year ago

      People buy ethernet for a reliable connection and reliable latency (no packet drops), and to get 1Gbps. Few consumers have need for more, since internet speeds also rarely exceed 1Gbps.

      Sure, anyone with a NAS might like more, but that's a tiny market. And tiny markets lack economy of scale, causing prices to be high.

    • dale_glass a year ago

      You can have 10G with eg, Mikrotik at a reasonable price.

      One problem with it is that the copper tech is just power hungry. It may actually make sense to go with fiber, especially if you might want even more later (100G actually can be had at non-insane prices!)

      Another problem is that it's CPU intensive. It's not that hard to run into situations where quite modern hardware can't actually handle the load of dealing with 10G at full speed, especially if you want routing, a firewall, or bridging.

      It turns out Linux bridge interfaces disable a good amount of the acceleration the hardware can provide and can enormously degrade performance, which makes virtualization with good performance a lot trickier.

      • throw0101b a year ago

        > Another problem is that it's CPU intensive.

        Are there 10GigE cards that do not do things like IP/TCP offloading at this point?

        Offloading dates back to (at least) 2005:

        * https://www.chelsio.com/independent-research-shows-10g-ether...

        * https://www.networkworld.com/article/2312690/tcp-offload-lif...

        • dale_glass a year ago

          You can go fast if you don't do anything fancy with the interface.

          If you, say, want bridged networking for your VMs and add your 10G interface to virbr0, poof, a good chunk of your acceleration vanishes right there.

          Routing and firewalling also cost you a lot.

          There are ways to deal with this with eg, virtual functions, but the point is that even on modern hardware, 10G can be no longer a foolproof thing to have working at full capacity. You may need to actually do a fair amount of tweaking to have things perform well.

        • jandrese a year ago

          The other issue is that unless your computer is acting as a router or a bridge, you need to do something with that 10GB data stream. SSDs have only recently gotten fast enough to just barely support reading or writing that fast. But even if you do find one that supports writes that fast, a 10GbE card could fill an expensive 4TB drive in less than an hour. Good luck decoding JPEGs and blitting them out to a web browser window that fast.

          • Dalewyn a year ago

            >10GB data stream. SSDs have only recently gotten fast enough to just barely support reading or writing that fast.

            10gbps (gigabits per second) is not 10GB/s (gigabytes per second).

            Specifically, 10gbps is approximately 1.25GB/s or 1250MB/s.

            • jandrese a year ago

              Consumer SSDs used to max out at about 550MB/s, some still do. You need a larger and more modern drive to do 1.25GB/s sustained write. Even then buffering can get you.

              • Dalewyn a year ago

                That's due to the communication protocol.

                2.5 inch and M.2 SATA SSDs max out around 550MB/s due to the limits of SATA3 connections which cap out at 6gbps.

                M.2 NVME SSDs meanwhile communicate over PCIE, generally using four lanes, and the latest PCIE5 SSDs can do around 15GB/s if I recall. PCIE4 drives can get up to around 7GB/s, and PCIE3 drives up to around 3GB/s.

                Other potential bottlenecks can occur with the motherboard chipset, controller, and NAND flash, but details.

                TL;DR: Any NVME SSD can saturate a 10gbps ethernet connection.
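
                Rough interface ceilings versus the ~1.25GB/s a 10gbps link needs (a sketch using nominal encoding overheads):

                    # SATA3: 6 Gb/s with 8b/10b encoding -> ~600 MB/s ceiling
                    print(6e9 * 8/10 / 8 / 1e6)          # ~600 MB/s
                    # PCIe 3.0 x4: 8 GT/s per lane with 128b/130b encoding
                    print(4 * 8e9 * 128/130 / 8 / 1e6)   # ~3940 MB/s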

        • iptrans a year ago

          TCP/IP offload isn’t the issue.

          The core problem is that the Linux kernel uses interrupts for handling packets. This limits Linux networking performance in terms of packets per second. The limit is about a million packets per second per core.

          For reference, 10GbE is about 14.9 million packets per second at line rate using minimum-size packets.

          This is why you have to use kernel bypass software in user space to get linerate performance above 10G in Linux.

          Popular software for this use case utilizes DPDK, XDP or VPP.
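
          That line-rate figure is easy to rederive (a sketch assuming minimum-size frames and standard framing overhead):

              link = 10e9                 # 10GbE, bits per second
              frame = 64                  # minimum Ethernet frame, bytes
              overhead = 8 + 12           # preamble+SFD and inter-frame gap, bytes
              pps = link / ((frame + overhead) * 8)
              print(pps / 1e6)            # ~14.88 million packets per second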

          • toast0 a year ago

            You don't need an interrupt per packet, at least not with sensible NICs and OSes. Something like 10k interrupts per second is good enough, pick up a bunch of packets on each interrupt; you do lose out slightly on latency, but gain a lot of throughput. Look up 'interrupt moderation', it's not new, and most cards should support it.

            Professionally, I ran dual xeon 2690v1 or v2 to 9Gbps for https download on FreeBSD; http hit 10G (only had one 10G to the internet on those machines), but crypto took too much CPU. Dual Xeon 2690v4 ran to 20Gbps, no problem (2x 14 core broadwell, much better AES acceleration, faster ram, more cores, etc, had dual 10G to the internet).

            Personally, I've just setup 10G between my two home servers, and can only manage about 5-8Gbps with iperf3, but that's with a pentium g2020 on one end (dual core Ivy Bridge, 10 years old at this point), and the network cards are configured for bridging, which means no tcp offloading.

            Edit: also, check out what Netflix has been doing with 800Gbps, although sendfile and TLS in the kernel cuts out a lot of userspace, kind of equal but opposite of cutting out kernelspace, http://nabstreamingsummit.com/wp-content/uploads/2022/05/202...

            • iptrans a year ago

              Interrupt moderation only gives a modest improvement, as can be seen from the benchmarking done by Intel.

              Intel would also not have gone through the effort to develop DPDK if all you had to do to achieve linerate performance would be to enable interrupt moderation.

              Furthermore, quoting Gbps numbers is beside the point when the limiting factor is packets per second. It is trivial to improve Gbps numbers simply by using larger packets.

              • toast0 a year ago

                I'm quoting bulk transfer, with 1500 MTU. I could run jumbo packets for my internal network test and probably get better numbers, but jumbo packets are hard. When I was quoting https download on the public internet, that pretty much means MTU 1500 as well, and that was definitely the case.

                If you're sending smaller packets, sure, that's harder. I guess that's a big deal if you're a DNS server, or voip (audio only); but if you're doing any sort of bulk transfer, you're getting large enough packets.

                > Intel would also not have gone through the effort to develop DPDK if all you had to do to achieve linerate performance would be to enable interrupt moderation.

                DPDK has uses, sure. But you don't need it for 10G on decent hardware, which includes 7 year old server chips, if you're just doing bulk transfer.

                • iptrans a year ago

                  Bulk transfers aren’t that interesting from a networking perspective.

                  You're gonna have a bad time if you optimize only for the best case scenario.

                  Even using IMIX is a low bar. The proper way to do things is linerate using small packets.

          • jabl a year ago

            Most Linux network drivers have supported NAPI for a couple of decades now. No panacea of course, but still, far from having one interrupt per packet.

ArtRichards a year ago

I think around 2011, they offered the first UT Longhorns Startup course, it was cool and hip and new, and they'd flown in mentors from SV and other places, so I figured, why not?

So, after applying, I had shown up at a hotel near campus. While waiting in the lobby, playing with their unsecured wifi, a rather distinguished looking gentleman came up to me, and asked, Hey are you here for the Startup Course interviews?

Yeah...

Well, why are you here in the lobby?

Well, I was told to wait here, and it's been a half hour and nobody called me.

He gave me a look, direct in the eyes, and said, oh, really? And you're just going to sit here and wait?

I was dumbfounded. Of course, it made sense, but it felt.. I didn't want to piss off the organizers, right?

"Go in there, and get it!" as he clawed the air like a tiger. Damn, he was right.

So I ambled in, looked around, found a seat near the guy organizing (Josh Baer, another awesome guy), introduced myself and sat at a table by myself, just waiting for an in...

Then the gentleman from the lobby came in and sat in front of me, with a big grin.

Hi?

Hi.

You're a part of this?

Yes, my name's Bob Metcalfe.

Cool, thanks for the pep talk. So, what's your story?

Well, I founded 3Com, and helped come up with Ethernet.

Oh... damn.. cool..

...And my life has never been the same since!

If you read this, thanks Bob.

xp84 a year ago

> "Metcalfe insists on calling Wi-Fi by its original name, Wireless Ethernet, for old times’ sake."

Okay, besides all his contributions, I've decided this guy is my favorite for that alone. Imagine if he was your (great?) uncle and you're on a family vacation together. "What's the Wi-Fi password here, Bob?"

Bob: "What's the what now?"

You: "Excuse me. What's the Wireless Ethernet password?"

Bob: "Oh, it's HotelGuest2023"

  • bundie a year ago

    Okay that's funny

wistlo a year ago

The archetypal "good enough" solution:

Instead of preventing collisions, tolerating and managing them.

I think of Ethernet often when assessing how close to perfection I need to get in my work.
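
A minimal sketch of the "managing them" half, classic truncated binary exponential backoff (constants are from the original 10 Mb/s spec; this is illustrative, not a full MAC implementation):

    import random

    SLOT_TIME = 51.2e-6  # 512 bit times at 10 Mb/s, in seconds

    def backoff_delay(attempt):
        """Wait time after the nth successive collision (1-based)."""
        if attempt > 16:
            raise RuntimeError("excessive collisions, frame is dropped")
        k = min(attempt, 10)                 # exponent is capped at 10
        slots = random.randint(0, 2**k - 1)  # pick a random slot in [0, 2^k - 1]
        return slots * SLOT_TIME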

  • zokier a year ago

    It is also a lesson in doing something now and rewriting it later. For example, no modern ethernet network uses CSMA/CD anymore, and it was a pretty iconic part of original ethernet. Overall, ethernet at the physical layer has seen quite an evolution: from coax and vampire taps, to twisted pair and hubs, to switched networks, and nowadays wireless, single-pair, optical, and virtual networks.

    • AlbertCory a year ago

      You left out a step: ThinNet coax, without vampire taps!

      That's what was at 3Com when I joined in 1985. I even have a section in The Big Bucks where I took down the entire company for a few seconds by disconnecting the coax. No one noticed.

    • xenadu02 a year ago

      Ethernet is also an example of a tech that has an easy scaling path: hubs with switched uplink ports made it really easy to divide collision domains. In the early days before everything was switched you could instantly reduce collision losses with a little bit of hardware in the server closet with no other changes to the network.

    • bombcar a year ago

      I remember when hubs were still common; I don't know if any have been made for decades. Even bargain basement gear is switched now, and often even has spanning tree and other 'previously enterprise' features.

      • jandrese a year ago

        Hubs max out at 100Mbps. Everybody today is using Gigabit, so they're effectively extinct.

        Even at 100Mbps hubs were on the way out. They were pretty hacky. The hardware had two different hubs internally and joined them together with a bit of logic, but that logic was somewhat failure prone and it was common to have 10/100 hubs where the 10 clients couldn't talk with the 100 clients and vice versa. Autodetection was at best a roll of the dice so most people wired down their port settings instead. Everybody hated them and switches got cheap real fast so they didn't last very long. The only thing they were good for was network diagnostics.

        • EricE a year ago

          > The only thing they were good for was network diagnostics.

          Indeed - I still have a couple that I used for packet sniffing. Thankfully managed switches or switches smart enough to support port mirroring are inexpensive and thus fairly ubiquitous now.

  • jacquesm a year ago

    True, but it did detect transmission in progress (carrier sensing) which helped to avoid collisions in the first place.

Sporktacular a year ago

Ethernet was always inefficient, with a crazy amount of unused legacy space reserved in an unnecessarily large header. CSMA/CD for contention was one of the ugliest medium access solutions imaginable. The coax implementation needing termination plugs was also ugly. Its advantage was cost, having had no license fees, making it suited to consumer/commercial applications driving economies of scale. It's the VHS of datacomms.

It's evolved, thankfully, but it remains an ugly, inefficient standard that only has life because of its legacy. And it's been increasingly jimmied into professional, carrier applications for which it was never intended and where far superior, though more expensive solutions already existed.

That's not to say its creators don't deserve credit. It did its job well enough for its early days. But that's why this award comes too late. Because now Ethernet is the bloated, inelegant dinosaur we've built an ecosystem around, but to admire it is to forget the competitors it drove to extinction along the way.

  • jacquesm a year ago

    It was a lot less ugly than whatever else passed for networking standards at the physical level in those days.

    Arcnet, Twinax, Token Ring and so on, I've probably used them all, and at scale. Compared to Ethernet they all sucked: besides being proprietary they were slow, prone to breaking in very complex to troubleshoot ways (though ethernet had its own interesting failure modes, in practice it was far more reliable), and some used tons of power which made them unusable for quite a few applications. On top of that it was way cheaper and carried broad support from different vendors, which enabled competition and helped to improve it and keep prices low.

    • mkovach a year ago

      Oh good heavens. Arcnet! When I first learned about writing Linux device drivers, it was trying to get a decent driver for some Arcnet cards that the company I worked at was using in some client installations. Can't remember exactly why we never completed it (well, yea I do. Ethernet worked better, a lot better) but since we never "released the product" they never let us send in the driver we did write to the kernel mailing list. That was in the kernel 1.x days.

      Now, I feel old. Time for a nap.

      • jacquesm a year ago

        > Now, I feel old. Time for a nap.

        Join the club...

        And it all seems like yesterday.

  • citrin_ru a year ago

    Ethernet evolved in a backward compatible way for more than 30 years. If we designed a new standard from scratch to fit the same use cases, we could in theory learn from that experience and improve things, but at the same time it would be hard to resist the temptation to make it future-proof by adding a lot of things just in case, and this new standard would likely be even more wasteful. And having the opportunity doesn't mean it will be used. I often see new designs make mistakes avoided in older designs, because people have limited time to learn and the body of knowledge is too large to always successfully learn from the past.

    Also hardware is not like software where you can rewrite a site using a JS framework of the day every few years. Compatibility is really important.

  • williamDafoe a year ago

    You could not be more wrong! Efficiency and overhead are measured as a percent of frame size, and 128-byte packets (X.25) or 48-byte cells (ATM) are abortions. 1500 bytes at the outset and the overhead is < 1%, and < 0.2% with jumbograms (8kB). Every 802.11 standard is a superset of Ethernet, and that makes DIX Ethernet the most scalable network protocol of all time!

    • Sporktacular a year ago

      Do you mean subset? It was first standardized as 802.3. Contention under CSMA/CD meant it was not scalable - as in it became inefficient as the segment grew. But you're right and I stand corrected in sense of the header/frame length ratio. I'd edit that first sentence if I still could.

  • nine_k a year ago

    What are some superior competing standards, and could they be implemented in a royalty-free way?

    • Sporktacular a year ago

      The point was more about competing technical choices made by designers, rather than the choice of standards made by consumers. For example TDMA can be arguably more scalable, bandwidth and energy efficient than CSMA/CD and can give consistent PL, PD and PDV, so might have even allowed early business grade voice. Variable header sizes would have allowed efficient use over bandwidth constrained media like radio. But the low cost and fast success of Ethernet formed a barrier to entry for competing LAN standards, where those arguably better technical choices may have found a footing.

      They eventually found application in other non-LAN standards, so I guess royalties weren't an issue.
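
      To make the determinism contrast concrete, a toy sketch of fixed-slot TDMA (node names and slot length are made up): each node owns one slot per frame, so the worst-case wait is bounded by one frame, whereas CSMA/CD's wait is a random variable with no hard bound.

          SLOT_US = 100                    # assumed slot length, microseconds
          nodes = ["A", "B", "C", "D"]     # each node owns one slot per frame
          frame_us = SLOT_US * len(nodes)

          for i, node in enumerate(nodes):
              # the worst-case wait before transmitting is one full frame
              print(f"{node}: slot at +{i * SLOT_US}us, worst-case wait {frame_us}us")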

      • xenadu02 a year ago

        None of those things could be implemented in the 1970s or 1980s at reasonable cost, so they're not actually solutions at all.

        Hell even making Ethernet fully switched didn't really happen until the 1990s thanks to Moore's law making the ASICs cheap enough.

        Without mass adoption there's no reason to invest. Look at Token Ring, Ethernet's only real competitor at scale: it quickly started to lag behind. Ethernet shipped 100Mbps several years before Token Ring. The 1Gbps Token Ring standard was never put to hardware.

        • Sporktacular a year ago

          TDMA is an extension of TDM, which goes back to the '60s. Synchronization was already solved. Variable header size could have been implemented with the same preamble concept Ethernet already used, but used to indicate the end of the header. These were not hard problems. The technology existed; the affordability would have largely depended on adoption, so it's hard to say.

          • xenadu02 a year ago

            We'll have to agree to disagree. Obviously TDM was known, but implementing it for Ethernet at a reasonable cost was just not an option at the time (in my opinion).

      • RF_Savage a year ago

        TDMA needs time synchronization and thus becomes more complex.

        Even in telecom, packet-switched connections are quickly replacing synchronous time-division connections.

        • jabl a year ago

          The recent-ish 10BASE-T1 uses something called PLCA (Physical Layer Collision Avoidance) instead of CSMA/CD, which doesn't require time synchronization and gives each node in a subnet a dedicated transmission slot.
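
          For flavor, a loose Python sketch of the round-robin idea only (not the real 802.3cg state machine; the node IDs, queues, and beacon here are simplified stand-ins):

              # Toy PLCA-style round robin: after a beacon from node 0, each node
              # gets one transmit opportunity per cycle, in node-ID order.
              from collections import deque

              queues = {0: deque(["beacon follow-up"]), 1: deque(), 2: deque(["sensor frame"])}

              def plca_cycle(queues):
                  sent = []
                  for node_id in sorted(queues):      # opportunities rotate by node ID
                      if queues[node_id]:             # use the opportunity...
                          sent.append((node_id, queues[node_id].popleft()))
                      # ...or yield it silently, so no bandwidth is lost to collisions
                  return sent

              print(plca_cycle(queues))  # [(0, 'beacon follow-up'), (2, 'sensor frame')]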

          • RF_Savage a year ago

            10BASE-T1L is point-to-point; 10BASE-T1S is multidrop, but very limited in the number of nodes and in how long the branches/stubs can be.

            We'll see how it actually performs in the field. Microchip seems to be in the T1S boat, and TI+AD in T1L.

    • WeylandYutani a year ago

      If they went extinct they were not superior.

      • eddieroger a year ago

        Betamax was superior to VHS, and it went the way of the dinosaur. Sometimes better means more expensive, and that's not always the popular choice. It wasn't the better survivor, but it was the better format. First to market, higher res, smaller tape, longer life, still lost. But don't take my word for it. https://kodakdigitizing.com/blogs/news/what-is-the-differenc...

        • bombcar a year ago

          Superior includes cost, and even things like number of suppliers.

      • asah a year ago

        the best tech doesn't always win, and in fact the "best" tech is typically promoted by people who focus more on the tech and less on go-to-market and competitive strategy. And thus, the "best" tech often loses to the tech that (for example) is better packaged or promoted.

        Python is a nice example: inelegant language with many deep flaws, but easy syntax and "batteries included" won the day.

      • Sporktacular a year ago

        That's just not true. Shoddy builders put quality builders out of business all the time.

        I guess it depends on your faith in markets and your definition of superior.

  • ethbr0 a year ago

    What would you have used (prior to affordable switches) instead of CSMA/CD?

    • mhandley a year ago

      There were a number of ring-based technologies such as Cambridge Ring that even predate Ethernet: https://en.wikipedia.org/wiki/Cambridge_Ring_(computer_netwo...

      The main reason Ethernet won, I think, is that it was really easy to deploy incrementally. It was much more plug-and-play than anything else at the time.

      • ethbr0 a year ago

        My memory is that every ring topology had pretty nasty failure characteristics around "a single misbehaving/failing client."

        Which Ethernet has too, but can generally tolerate a much higher level of imperfect reality, while still providing degraded service.

        Before you could get plentiful high-quality NICs and cabling, graceful degradation was a killer feature.

  • rcarmo a year ago

    You forgot to put on your ATM cap :)

AlbertCory a year ago

"Captain Bob" we called him, at 3Com.

In "The Big Bucks" I have two quotes from him, which he graciously allowed me to use as something he would have said (they're not very exciting). Normally I never have a real person appear and do anything; at most people speak of them in the third person.

In "Inventing the Future" I have the 1978 story about the lightning strike that took down the Ethernet between PARC and SDD. Bob had actually forgotten it, but he remembered the second lightning strike that helped sell Ethernet, because Ron Crane (RIP) had remembered the first one and engineered the Ethernet card to withstand them. As luck would have it, during a competition there actually was a lightning strike, and 3Com's survived it while the competitor's didn't.

madmax108 a year ago

Congrats Bob!

If anyone's interested in the history of the early internet, I recently read the book "Where Wizards Stay Up Late" by Katie Hafner, and it is a very interesting read about how we went from ARPA to WWW, including a lot of the warts you associate with large-scale projects like ARPANet (and the book features Metcalfe quite extensively when talking about Ethernet and ALOHAnet).

Honestly, it's nice to see a technology like Ethernet, which is "as simple as it should be but no simpler" and has stood the test of time, get recognized and rewarded!

dale_glass a year ago

Now if we could only break away from the frame size limit and have working jumbo frames without a lot of pain.

Having to handle millions of packets per second is starting to get a bit ridiculous. Even 10G is still challenging, not to speak of 100G.
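
For a sense of scale, the napkin math (my numbers, assuming the standard per-frame wire overhead of an 8-byte preamble plus a 12-byte inter-frame gap):

    # Rough packet rates at line rate; frame size includes header and FCS,
    # and each frame also costs 20 bytes of preamble + inter-frame gap.
    def frames_per_second(link_bps, frame_bytes, wire_overhead=20):
        return link_bps / ((frame_bytes + wire_overhead) * 8)

    for frame in (64, 1518, 9018):  # min frame, 1500B MTU, 9000B jumbo
        print(f"10G @ {frame}B: {frames_per_second(10e9, frame):,.0f} pps")
    # ~14.9M pps at 64B, ~813K pps at 1518B, ~138K pps with jumbos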

lispython a year ago

A veritable hero of our times boasts a mere 688 followers on his Twitter account: https://twitter.com/RobertMMetcalfe (as of the dispatch of this message).

  • jack_riminton a year ago

    I think there must be another internet law which states that the best twitter accounts have around this number.

  • zamadatix a year ago

    It's more a personal account: it's a year old, opens with a tweet on cancel culture, and is followed by his political leanings, cryptocurrency posts, and an overwhelming amount of basketball stats. It does have the occasional post about his involvement in geothermal energy, but beyond that, following the account isn't going to get you any of the content he's known and respected for.

  • OJFord a year ago

    If he were tweeting anecdotes, or still working (seems he might be?) and tweeting about it, it'd probably be a lot more - but it's mostly US basketball (and other sports/personal stuff) by the looks of it, so however veritable a hero, it's just not interesting to the same audience (or at least, for the same reason; of course some of us will be basketball fans) - and if you're big into 'basketball Twitter' he's probably ~'nobody'.

cduzz a year ago

This is like when I heard Roger Penrose won a Nobel Prize in 2020 and I thought for a second "wait is this his second? What? You mean he hadn't been awarded one until now? Who was in line ahead of him and for what?"

Ozzie_osman a year ago

Reading the original ethernet paper was one of my favorite moments in college. Just a brilliantly pragmatic design (especially handling packet collisions with randomized retransmissions).

Made me appreciate how important it is for something to be simple and pragmatic, rather than over-engineered to perfection.

  • jacquesm a year ago

    I think in part what you are witnessing there is the power of a single well-informed individual over a committee, which is how the competition was doing it.

throw0101b a year ago

Somewhat related:

The choice of 48 bits for the hardware/station address seems to have been a pretty good one: it's been 40+ years and we still have not run out. I'm curious to know if anyone has done the math on when Ethernet address exhaustion will occur.

While the Ethernet frame has been tweaked over the decades, addressing has been steady. Curious to know if any transition will ever be needed and how it would work.

In hindsight, IP's (initial) 32-bit address was too small, though for a network that was (primarily) created for research purposes and ended up escaping 'into the wild' and accidentally becoming production, it was probably a reasonable choice: who expected >4 billion hosts on an academic/research-only network?

  • toast0 a year ago

    We're unlikely to ever actually run out. Ethernet addresses are expected to be universally unique, but they're only required to be unique within a collision domain. If someone started reusing addresses from 3c503s, chances are high nobody would notice. If we did run out, devices would need to start generating randomized addresses, and maybe probe for collisions, which isn't unworkable; the number of nodes in a collision domain tends to be low, and the space is large, so you might barely need to probe for collisions at all if you have a good random source.
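
    Generating one is cheap, too; the convention is just to set the locally-administered bit and clear the multicast bit in the first octet. A minimal sketch:

        import secrets

        def random_local_mac():
            """Random locally administered unicast MAC: in the first octet,
            bit 0 (multicast) must be 0 and bit 1 (locally administered) 1."""
            octets = bytearray(secrets.token_bytes(6))
            octets[0] = (octets[0] & 0b11111100) | 0b00000010
            return ":".join(f"{b:02x}" for b in octets)

        print(random_local_mac())  # e.g. 3a:5f:01:9c:22:d4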

  • zamadatix a year ago

    Some quick napkin math on the current MAC vendor database: 46 bits of a MAC address are reserved for universally administered unicast (i.e. a globally unique MAC assigned to identify a device). So far we have assigned ~570 billion addresses via 24/28/36-bit range assignments for that purpose, which represents a little under 1% of the space. So nothing urgent, though if we stuck with Ethernet as much as we use it today, then in <100 years I wouldn't be surprised if we were "out".

    At the same time there are also 46 bits of locally administered unicast addresses and, unlike IP, Ethernet addresses only care about the local network (and this isn't a "because we've co-opted them to save space just like NAT broke IP protocols" - it's the design intent of Ethernet). Even if you had 10 billion LANs with 100 devices each, and they all used this random non-unique assignment, there would only be a ~50% chance that one or more devices anywhere would have a collision.
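
    That 50% figure checks out with the usual birthday-problem approximation (a quick sketch):

        import math

        BITS, LANS, DEVICES = 46, 10e9, 100
        pairs_per_lan = DEVICES * (DEVICES - 1) / 2   # 4950 address pairs per LAN
        p_lan = pairs_per_lan / 2**BITS               # ~7e-11 collision odds per LAN
        p_any = 1 - math.exp(-p_lan * LANS)           # odds that any LAN collides
        print(f"{p_any:.0%}")                         # ~51%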

    The only real advantage I've ever been able to find of programming in unique MAC addresses vs random MAC addresses is that you can look up what company the MAC was assigned to. It may seem like there is a risk that random assignment can be done poorly (e.g. not very randomly), but honestly the same risk exists with assigned ranges, as seen with network vendors cheaping out and re-using their MAC blocks (which is significantly more likely to conflict than if they had just used random locally administered addresses in the first place).

    • williamDafoe a year ago

      So when XSIS (Xerox System Integration Standards?) first started selling MAC address blocks in the early 1980s, I think it cost $1k for 16M MAC addresses (168 addresses per penny). So there are only 16M blocks of MAC addresses available, and it IS possible to run out, if vendors waste the addresses. I don't know what registration costs with the IEEE right now, but once the equivalent of $16B is spent on MAC addresses, we will run out of blocks.

      Maybe today they are selling smaller blocks, but the MAC space is basically divided into 16M blocks (specified by the first 24 bits / 3 bytes, each with a registered owner or "unassigned"), with 16M MACs within each block (the lower 24 bits). That's why you can enter a MAC address into a lookup website and find out who made the NIC (google "mac address lookup"):

      https://aruljohn.com/mac.pl
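
      The lookup itself is trivial, since the block ID (OUI) is just the first 3 bytes; the value is in the registry data those sites carry. A sketch (the one-entry vendor table is illustrative only):

          def oui(mac: str) -> str:
              """First 24 bits of a MAC identify the registered block owner."""
              return mac.replace("-", ":").upper()[:8]

          vendors = {"00:1B:63": "Apple, Inc."}  # tiny excerpt of the IEEE registry
          print(vendors.get(oui("00-1b-63-84-45-e6"), "unknown"))  # Apple, Inc.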

  • williamDafoe a year ago

    It was Xerox & Yogen Dalal's choice, not Bob's choice! Xerox blew it with 8-bit station addresses in PUP (PARC Universal Packet) and wanted to give each station a UID to break ties in database transactions, hence the 48 bits. XNS actually had a 3-byte station and 3-byte network size to fit in 6-byte MAC addresses! Metcalfe is not a software engineer and wouldn't have had these insights ...

jonstewart a year ago

I’ve got his book Packet Communication, and the acknowledgements end with “Don’t let the bastards get you down.”

pedrovhb a year ago

Ah, the best and most readily available source of makeshift jumper wires. Truly an amazing contribution even in ways it wasn't quite designed to be :)

dredmorbius a year ago

Oh yeah, Cap'n Bob:

"Linux's '60s technology, open-sores ideology won't beat W2K, but what will?" (Infoworld, June 21, 1999)

... Why do I think Linux won't kill Windows? Two reasons. The Open Source Movement's ideology is utopian balderdash. And Linux is 30-year-old technology.

The Open Source Movement reminds me of communism. Richard Stallman's Marx rants about the evils of the profit motive and multinational corporations. Linus Torvalds' Lenin laughs about world domination....

<https://web.archive.org/web/19991216220752/http://www.infowo...>

Though in time he moderated his views ... slightly:

<https://web.archive.org/web/20070622115025/http://www.linux....>

agomez314 a year ago

How is he only getting this award now?

  • williamDafoe a year ago

    The ACM Turing Award committee has its head up its ass? Seriously, 25% of winners have had NO impact on the field ...

    Metcalfe was controversial because ALOHAnet from the University of Hawaii pioneered the idea, and Metcalfe was seen as writing a nice proof in CACM of the 1/e capacity breakdown theorem and popularizing an already extant technology. He did not build it alone; Chuck Thacker probably built most of it, but didn't have a PhD! Oh the horror!

    He should not have gotten it now - either give it sooner or not at all - and he should not be the only one getting it!

cc101 a year ago

He is also the arrogant, ignorant engineer who bought and ruined Infoworld.

leephillips a year ago

I hear that Metcalfe has been spending some time recently at the MIT Julia lab working on climate issues.

hoseja a year ago

What's the killer feature that differentiates Ethernet from other phy protocols?

  • jacquesm a year ago

    The fact that it allowed for all kinds of topologies, and that it served as a bus (shared medium, hence the name 'Ether') rather than a point-to-point link is what I think made the biggest difference.

    Of course now that we all use switched links they are point-to-point again but an ethernet 'hub' gave you the same effect as a bus with all devices seeing all of the traffic. This made efficient broadcast protocols possible and also allowed for a historical - but interesting - trick: the screw-on taps that you could place on a single coaxial cable running down a department giving instant access without pulling another cable. Zero network configuration required, just get the tap in place and assign a new address. DHCP later took care of that need as well.

    This was fraught with problems, for instance a single transceiver going haywire could make a whole segment unusable and good luck finding the culprit. But compared to the competition it absolutely rocked.

    • ethbr0 a year ago

      To build on your comment, although it's been years since I studied Ethernet in depth...

      - (On the bus thread) Ethernet started from an assumption of bad behavior (out-of-spec cabling, misbehaving clients, etc.) and tightened requirements just enough to construct a useful network. Much better balance between de facto ruggedness and performance than its peers.

      - From the beginning, Ethernet reasoned that it was cheaper to put logic in purpose-built networking hardware than endpoints (i.e. PC network adapters). This was a better scaling tradeoff. 1x $$$ network device + 100x $ client adapter vs 1x $$ networking device + 100x $$ client adapter.

      - Because of the above, you started to get really cost- and data-efficient networks when the cost of Ethernet switches plummeted. (Remember, in early Ethernet days, networks were hub/broadcast-only!)

      • jacquesm a year ago

        I remember paying about $1000 per port for 100 megabit switches.

        • drewg123 a year ago

          Ethernet switches are actually pretty complex things, when you think about it. They have to learn which MAC addresses are behind each port, build a complex forwarding table, and do table lookups in real time. The larger the switch, the more complex it is. It's hard to make it scale.
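
          The learning part itself fits in a few lines (a toy sketch; the hard part real switches solve is doing this in silicon at line rate, with aging, VLANs, etc.):

              # Toy learning switch: learn src MAC -> ingress port,
              # forward to the learned port, flood when unknown.
              table = {}

              def handle(src, dst, in_port, all_ports):
                  table[src] = in_port       # learn (real switches also age entries out)
                  out = table.get(dst)
                  if out is None or out == in_port:
                      return [p for p in all_ports if p != in_port]  # flood
                  return [out]

              print(handle("aa:aa", "bb:bb", 1, [1, 2, 3]))  # unknown dst -> flood [2, 3]
              print(handle("bb:bb", "aa:aa", 2, [1, 2, 3]))  # learned -> [1]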

          Around the same era, Myrinet offered switches with higher bandwidth (1.2 Gb/s, if I remember correctly) and higher density at a fraction of the port cost of slower Ethernet switches. This was possible because the Myrinet switches were dumb. The Myrinet network elected a "mapper" that distributed routes to all NICs. The NICs then prepended routing flits to the front of each packet. So to forward a packet to its destination, all a Myrinet switch had to do was strip off and read the first flit, see that it said "exit this hop on port 7", and forward it on to the next switch. Higher densities were achieved with multiple chips inside the cabinet.

          In the mid-2000s we even built what was, at the time, one of the world's largest Ethernet switches, using (newer, faster) Myrinet internally and encapsulating the Ethernet traffic inside Myrinet. That product died due to pressure from folks who were our partners but felt threatened by our incredibly inexpensive high-density switches.

          https://www.networkworld.com/article/2323306/myricom-rolls-o...

          EDIT: fixed routing flit description, added link to PR

          • jabl a year ago

            Sounds similar to Infiniband, where each subnet has a subnet manager which calculates routing tables for the entire subnet and assigns 16-bit local identifiers (LIDs) so stations don't need to use the full 16-byte GUIDs.

            Also, Infiniband packets are power-of-two sized, making fast switching easier.

            • ethbr0 a year ago

              Neat! (Re: you and parent)

              At their core, most hardware evolutions seem like optimizing compute:memory:storage:functionality vs the (changing) current state of the art/economy.

              When Ethernet was first released, compute was expensive. Made sense to centralize compute (in routers) and make everything else dumb-tolerant.

              Now, compute is cheap and plentiful at network-calculating scales and throughput expectations are very high, so it makes sense to burn compute (including in clients) to simplify routing hardware.

        • ethbr0 a year ago

          Ha! But they delivered that much value (or more), so the market supported the price until supply flooded.

          We could do worse for a transformative technology ranking metric than "How overpriced was this when first released?" (significant look at Nvidia cards)

          • jacquesm a year ago

            I had a bunch of workloads that quite literally got cut down to about 15% or so of the original runtime (a cluster compressing a whole archive of CDs for a large broadcaster) so I happily paid up. But still... $1000 / port!!

            And here I have sitting next to me a 48-port gigabit switch that cost 15% of what that 100 megabit switch cost in 1996 or so. IIRC it was one of the first D-Link products I bought; it definitely wasn't great (it ran pretty hot), but it worked well enough. Amazing progress.

            • bombcar a year ago

              And you can get switches for less than $25 per 10 Gb port now.

              Of course the jump from 10 Mb hub to 100 Mb switch was much larger than any of the later jumps, just because of the reduced noise.

    • em-bee a year ago

      for years i was carrying around an ethernet splitter that would allow me to connect two devices into one ethernet port. i last used it some 10 years ago in a place without wifi

      • asimpletune a year ago

        Yeah, it’s a very cool trick that surprises a lot of people when they learn that only half the wires are used.

        • jacquesm a year ago

          Not for gigabit ethernet and good luck picking up the pieces if you find yourself splitting a power-over-ethernet setup :)

          • em-bee a year ago

            using a splitter is usually a temporary solution, and i am unlikely to be sharing a port with a PoE device. nor do i care about gigabit speed when the only reason to use a splitter is to make up for missing wifi.

    • gooroo a year ago

      And it was sniffing heaven. Only paralleled by the brief period of nobody using any serious encryption on their wifi.

      • xbar a year ago

        Where "brief" was about 10 years, which at the time was about 25% of all time that networks were common.

        • yuuho a year ago

          At the time, maybe. Eventually it will be remembered as a short glitch in tech history.

    • sriram_sun a year ago

      Yup! A whole other real-time industrial protocol called EtherCAT has been built on top of the same hardware.

  • williamDafoe a year ago

    Original DIX Ethernet was standardized by my manager, David Redell of Xerox. It was the bare minimum to do the job: 6-byte station destination, 6-byte source address, a 2-byte packet length and a 2-byte EtherType field (the latter two were combined into one field for networks with hardware framing), and a 32-bit CRC. NO ARQ in the hardware. It leveraged the move to byte-based memories and small CPUs. It followed the end-to-end principle in system design just about optimally - the most minimal MAC design of all time. EASY TO BUILD UPON AND ENHANCE.
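
    The whole layout fits in a few lines (a sketch; the addresses and payload are made-up examples, and the FCS really is a plain CRC-32):

        import struct, zlib

        dst = bytes.fromhex("ffffffffffff")        # 6-byte destination (broadcast)
        src = bytes.fromhex("02aabbccddee")        # 6-byte source (example address)
        etype = struct.pack("!H", 0x0800)          # 2-byte EtherType (0x0800 = IPv4)
        payload = b"hello".ljust(46, b"\x00")      # padded to the 46-byte minimum
        frame = dst + src + etype + payload
        frame += struct.pack("<I", zlib.crc32(frame))  # 32-bit CRC, sent LSB first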

    Ethernet (CSMA/CD) is a protocol that copies human speech patterns. After someone stops speaking, people hear the quiet (carrier sense, multiple access / CSMA), wait a very short and randomized amount of time, and begin to speak. If two speakers collide, they hear the collision and shut up (CD - collision detection). They both pick a randomized amount of time to pause before trying again. On the second, third, etc. collision, people wait longer and longer before retrying.
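
    The "wait longer and longer" rule is truncated binary exponential backoff; a sketch of the classic version:

        import random

        SLOT = 512  # slot time in bit times (51.2 us on 10 Mb/s Ethernet)

        def backoff(collisions):
            """After the n-th collision, wait r slot times, with r drawn
            uniformly from [0, 2^min(n, 10) - 1]; give up after 16 tries."""
            if collisions > 16:
                raise RuntimeError("excessive collisions: frame dropped")
            return random.randrange(2 ** min(collisions, 10)) * SLOT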

    The thing about original Ethernet (1981) is that it wastes 2/3 of the channel, because a highly loaded channel has too many collisions and too many backoffs. But deployment and wiring were expensive, so running a single wire throughout a building was the cheapest possible way to start (enhanced by thinwire Ethernet and twisted pair a few years later, for a less bulky cable). The frame design was PERFECT: within ~10 years people were using Ethernet frames to build switched networks, and today it's only radio networks that still do Ethernet-style CSMA.

    • knuckleheadsmif a year ago

      I was in Xerox SDD in the early '80s. I have lots of memories of dealing with the large coax taps, which were in the ceiling.

      I also remember setting up a Star demo at the NCC where someone forgot the coax cable terminators (or was short one terminator?), which was causing reflection issues with the signal; it was solved by cutting the cable to a precise length to get the demo working.

    • williamDafoe a year ago

      In the original Ethernet design, routers were not used (because 8-bit processors were too slow and 16-bit processors were just starting to emerge). So the original standard proposed repeaters as the way to extend a large network, and this was a very, very cheap analog way to grow it. It was quite common to have a whole building or even several nearby buildings on one Ethernet, and then a high-speed (i.e. 56 Kbps or maybe even T-1) link to other buildings either nearby or in other cities.

    • irq-1 a year ago

      Maybe you know, why isn't the CRC at the end? Then you could stream the packet instead of needing to construct it and then go back to the header to write the CRC.

  • roganp a year ago

    Packet collision detection vs. collision avoidance

drewg123 a year ago

Back when I was doing a lot of ethernet driver work, I joked to colleagues about what I'd do if I had a time machine. Go back and kill Hitler? No. Go back and stop John Wilkes Booth from shooting Lincoln? No. I'd go back and convince Bob Metcalfe to make ethernet headers 16 bytes rather than 14, to avoid all sorts of annoying alignment issues.
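
The issue, for anyone who hasn't fought it: after a 14-byte header, the 4-byte IP fields land on 2-byte boundaries, which is why drivers play the 2-byte receive-buffer-pad game (NET_IP_ALIGN in Linux). A quick illustration:

    # With a 14-byte Ethernet header, the IPv4 addresses are misaligned:
    ETH_HDR = 14
    for name, ip_off in [("src IP", 12), ("dst IP", 16)]:
        off = ETH_HDR + ip_off
        print(f"{name}: byte offset {off}, 4-byte aligned: {off % 4 == 0}")
    # Both land at offset % 4 == 2; a 16-byte header (or starting the frame
    # at a 2-byte pad, as NET_IP_ALIGN does) would make them word-aligned.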

  • gooroo a year ago

    Yeah those alignment issues surely have killed more Jews than that Hitler guy. /s

    • drewg123 a year ago

      Lol. No, it's more like everybody else will line up to take care of those more important things when they get access to a time machine, but when I get access to one, I want to take care of my pet peeve :)

      • toast0 a year ago

        While you're back there, convince people to do IP truncation rather than fragmentation. Truncation would probably be a lot more useful at lower cost than fragmentation, and maybe path MTU problems wouldn't still be an issue. *grumble*grumble*

northlondoner a year ago

Well deserved. Impact is immense. Of course, Xerox PARC alumni.

higeorge13 a year ago

I feel extremely lucky to have attended one of his keynotes at a network conference once, and to have had the quick opportunity to greet him.

Well deserved.

amelius a year ago

If only USB was half as reliable as Ethernet ...

The anti-Turing award goes to the inventors of USB.

1970-01-01 a year ago

Around 2002 he had the wild idea about Ethernet for intra-bus and peripheral communication. Nobody in the room thought it was a good idea. Glad he was smart enough to abandon that idea and stick with networking. I didn't want my mouse and keyboard getting an IP address.

lambda_dn a year ago

Wifi/5G guys next? Much more important invention in my opinion

  • rasz a year ago

    Wifi would have to go to Steve Jobs :-) Lucent was sitting on 802.11 (WaveLAN) for ten years selling super expensive products targeting niche markets and it took Apple to move things forward. More in “Oral History of Arthur “Art” Astrin”, wifi pioneer: https://youtu.be/Tj5NNxVwNwQ

  • xbar a year ago

    From 1990, the 802.11 standards body was gyrating on radio-based 802.11 ideas.

    That body would not even have existed without Ethernet.

theharrychen a year ago

How did he not get this earlier?

  • tobylane a year ago

    There are a lot of inventions to award people for. Some of the other recipients look overdue by the time it came to them. https://en.wikipedia.org/wiki/Turing_Award

    • OJFord a year ago

      It coming to Metcalfe for Ethernet 7 years after it went to Berners-Lee for the WWW is amusing, though.

      • js8 a year ago

        Perhaps they are giving it by OSI layers.

        • jacquesm a year ago

          That's very funny :)

      • jacquesm a year ago

        Indeed, the one would not have existed without the other.

  • williamDafoe a year ago

    ACM awards are dominated by the theory community. A lot of theoreticians with NO impact on the world have awards. Metcalfe was one of a dozen people who co-invented Ethernet and does not fit historians' "Great Man" theory, where history is decided by a few "Great Men" who went a different direction at a critical moment ... Ethernet's success is only 25% due to him.

    For example, in 1979 at UIUC a grad student built 230 kbps S-100 cards using RS-232 chips, and I wrote the Z-80 CSMA/CD drivers (as a high school student), so it was not rocket science.

    So there was reluctance to give him an award for something he didn't pioneer all alone.

  • jacquesm a year ago

    Exactly. He really should have.

williamDafoe a year ago

This just shows what a huge joke the Turing Award process is! He should have gotten this award by 2000 or never at all! But the committee was too busy giving out awards for writing sexy-sounding papers about stoplight verification and zero-knowledge proofs to honor someone who disrupted the whole field!