ACCount36 21 hours ago

The thing I loathe the most about embedded work is dealing with silicon vendors and their boneheaded refusal to publish the fucking documentation and tooling.

  • junon 21 hours ago

    Microchip in particular is very bad at developer experience and tooling. The only vendor I actually enjoy working with to any degree is ST, and only design boards using their uC's for that reason.

    I've heard good things about Nordic, though. Might try them out at some point.

    Microchip's own IDE and project generator spit out a hello world project that didn't even compile. NXP wouldn't even let me download their tooling even after their obfuscated sign up flow.

    • HeyLaughingBoy 15 hours ago

      Good god. I literally just uninstalled MPLAB IDE for a project that we cancelled. It freed up something like 30 GB on my system. I built the existing project once!

      I also really like ST. At a previous job our go-to processors were Nordic for wearables or anything that needed BLE, and STM32 for pretty much everything else. Wasn't unusual to have an STM32 for all the peripheral I/O and an nRF52 hanging off an I2C port just to talk to an app.

      Nordic is OK. Starting up a new project is nowhere near as easy as STMCubeMX and they do tend to update their SDKs frequently which can be annoying if you have to support legacy projects, but we used them for years with no problems.

      • junon 13 hours ago

        That's good to know, thanks! And yeah, STMCube's quick pin config and clock config tools alone are what keep me coming back. I don't use any of the C generation or code editing/compiling stuff (I write all of my firmware in Rust) but having the configurator there to quickly configure peripherals is space age levels of tooling compared to any other manu I've worked with so far.

    • Gracana 19 hours ago

      I was on the Microchip bandwagon until the PIC32. Little MIPS MCUs with onboard RAM and graphics, awesome! ...except they came with an errata sheet that listed huge problems with every interesting peripheral, and it didn't improve for ages. They may still suck, I don't know.

      A while back I tried out Espressif's esp32 and I was impressed by what they were offering. Their devices seem to be well documented and the esp-idf framework is really pleasant to use. It's much easier to work with than STM32Cube and ST's sprawling documentation.

      • Sanzig 17 hours ago

        I really like the STM32 ecosystem, but you have hit the nail on the head - there's too many variants. You really don't need 200 variations of a basic M0+ part, and having all those SKUs hurts availability because you're playing roulette when it comes to what will actually be in stock when your CM goes to order parts.

        • azonenberg 17 hours ago

          I did not like that part of it.

          Personally I've standardized on just three STM32 parts:

          * L031 for throwaway-cheap stuff where I'm never going to do field firmware updates (so no need to burn flash on a proper bootloader) and just need to toggle some GPIOs or something

          * L431 for most "small" stuff; I use these heavily on my large/complex designs as PMICs to control power rail and reset sequencing. They come in packages ranging from QFN-32 to 100-ball 0.5mm BGA which gives a nice range of IO densities.

          * H735 for the main processor in a complex design (the kinds of thing most people would throw embedded Linux at). I frequently pair these with an FPGA to do the heavy datapath lifting while the H735 runs the control plane of the system.

          • HeyLaughingBoy 14 hours ago

            +1

            This is the approach I took at my last job: we standardized on a small handful of CPUs selected for a certain level of complexity. Before this, choosing a CPU was an agonizing task that took days and didn't add a lot of value. The only time it actually mattered was the one time we got an order of several 100,000 units. In that case, you want to get the BOM cost as low as you can.

            Trying to get the same thing implemented at my current job. I'm seeing the same behavior where a team takes forever to choose a processor, and a "good enough" choice would have taken a couple of hours.

    • Aurornis 17 hours ago

      > The only vendor I actually enjoy working with to any degree is ST, and only design boards using their uC's for that reason.

      ST has good documentation most of the time, but for a while some of their higher end MCUs had a lot of weird bugs and errata that were simply not documented. I haven’t used any of their modern parts recently but I’ve heard the situation has started improving. I have some friends who were ready to abandon ST altogether after losing so much time on a design to undocumented bugs and parts not behaving as documented.

      • azonenberg 17 hours ago

        I don't think I've ever used a ST part without reporting a bunch of datasheet errors.

        I haven't been bit by an undocumented silicon bug, but I step on documented STM32H7 bugs on a pretty regular basis and there are some poor design decisions around the OCTOSPI (in addition to bugs) that make me avoid it in almost every situation.

        But at least they document (mostly correctly) the registers to talk to their crypto accelerator unlike the Renesas and NXP parts I looked at as potential replacements, both of which needed an NDA to get any info about the registers (although they did supply obfuscated or blob driver layers IIRC).

      • technothrasher 13 hours ago

        I got very wary of ST when some of their STM32C chips that were rated for -40C consistently stopped working correctly for me at -20C, and their support response was basically a big shrug.

    • ACCount36 21 hours ago

      True, and Microchip is still good when compared to the likes of Broadcom and Qualcomm.

      • azonenberg 17 hours ago

        This is why I used a VSC PHY. After they bought Microsemi (and Vitesse as a division of Microsemi) it looked like the only viable option to get a QSGMII PHY since all the other players were much worse.

        When I first started the project in 2012-13, Vitesse was just as NDA-happy and I ruled them out. The original roadmap called for a 24-port switch with 24 individual TI DP83867 SGMII PHYs on three 8-port line cards.

        • ranma42 13 hours ago

          BTW, the patch bytes look like 8051 code to me: 0x02 is the ljmp opcode, so this is a jump table: 0x02, 0x40, 0x58, 0x02, 0x40, 0x4e, 0x02, 0x44, 0x00, 0x02, 0x42, 0x2b, 0x02, 0x41, 0x82
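
          Decoded, each triple is the 0x02 opcode followed by a big-endian 16-bit target address. A quick sketch of the decode (illustrative only, using the byte values above):

          ```python
          # Decode the patch bytes as 8051 "ljmp addr16" instructions:
          # opcode 0x02 followed by a big-endian 16-bit target address.
          patch = bytes([0x02, 0x40, 0x58, 0x02, 0x40, 0x4e, 0x02, 0x44, 0x00,
                         0x02, 0x42, 0x2b, 0x02, 0x41, 0x82])

          targets = []
          for i in range(0, len(patch), 3):
              opcode, hi, lo = patch[i:i + 3]
              assert opcode == 0x02, "not an ljmp"
              targets.append((hi << 8) | lo)

          print([hex(t) for t in targets])
          # → ['0x4058', '0x404e', '0x4400', '0x422b', '0x4182']
          ```

          All five targets land in the 0x4000 region.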

          I poked at a vsc73xx-based switch in the past and wrote my own test firmware, but I had packet-loss problems, I guess because I didn't do all the necessary PHY initializations. In case this might be of interest: https://github.com/ranma/openvsc73xx/blob/master/example/pay...

          Also, on the device I had, the EEPROM was tiny and the code is loaded from EEPROM into RAM, so you were pretty much stuck with 8051 assembly that had to fit into the 8 KiB of on-chip RAM :)

          • azonenberg 10 hours ago

            Those addresses all make sense, as 0x4000-0x4fff appears to be where the 8051 has its RAM mapped (all of the peek/poke addresses used for accessing serdes fields are at the high end).

      • ants_everywhere 19 hours ago

        I don't do embedded work, but I see Broadcom and Qualcomm show up a lot in Linux bugs.

        I'd love to hear stories of what it's like to work with chips from these companies.

        • sumtechguy 14 hours ago

          When I worked at qcom, it took 2 devs, 3 managers, and 2 directors to get another group to let us fix a single race condition in their code. Once I got a copy of the code, I started handing tons of memory overwrites and underruns back for them to fix, plus another 10 or so race conditions. Their initial stance was 'there is nothing wrong with the code, it is your code that is broken'. And that was internal, in the same vertical - it would be awful to work with that lib as an external group. Other stacks I got to look into were in similar shape. With some groups, if you got their code, you knew it was going to be solid and you would learn a thing or two. With other groups, you were kind of surprised it compiled.

        • AlotOfReading 13 hours ago

          Unless you're buying hundreds of thousands of chips, you don't even get the "opportunity" to work with them. They won't sell you chips directly or return emails reliably.

          If you're working at one of the big companies (e.g. Microsoft), they'll give you access to the documentation and source code that should be open for everyone, but even then you're going to spend time reverse engineering your own documentation because trying to get details from them is a months long process of no one being willing to say yes. It's painful. Best to stay away unless you have no other alternatives.

        • bgnn 9 hours ago

          I don't know what it's like to work with their chips, but I worked at one of them designing chips. There is often a block designed 10 years ago by someone who has since left the company; it has bugs, but nobody dares to touch and fix it because "we know its bugs" and nobody really knows what's happening under the hood. So someone writes a wrapper for the next gen to add new features. This person soon leaves the company... rinse and repeat. I've seen a wrapper of a wrapper of a wrapper on the 10th gen of a product, in a 5nm process, in a product line that started its life when 90nm was state of the art. Most of the bugs accumulated over the years were still there. They won't fix them as long as a big customer doesn't complain.

          • ants_everywhere 7 hours ago

            oh wow. do they at least keep a list of known bugs centrally? or do they just hope nobody notices?

      • junon 20 hours ago

        Oh yeah definitely.

    • stephen_g 18 hours ago

      Which NXP tooling did you have trouble with? I’ve downloaded MCUXpresso earlier this week, no NDA or anything (yes you do need an account), it’s never been hard to get…

      • junon 13 hours ago

        I remember their site inexplicably not letting me download anything despite being logged in. I think it wasn't able to retain my session for some reason, though it's been a while (I looked around 2023).

        I do remember trying different browsers and even different machines, to no avail. Quickly gave up.

    • tliltocatl 14 hours ago

      Nordic is indeed good with documentation and devex. Zephyr is still a bloated and somewhat buggy disaster for this scale of MCUs, though. It doesn't really beat ST unless you are power-constrained and need wireless connectivity - the hardware is a bit too quirky.

  • bardak 15 hours ago

    One of the biggest advantages I saw from the Raspberry Pi RP series for amateurs is that they have great documentation.

  • traverseda 19 hours ago

    Why, though? This is clearly a problem; I just don't understand what the vendors are getting out of it.

    • ACCount36 18 hours ago

      Perhaps they think that by forcing everyone to go through the sales dept just to get the basic docs, they'll get better sales opportunities.

      How do you upsell a hardware engineer who just wants to buy a specific chip, and already has everything to evaluate and use it? You don't. So you force everyone to go through sales, and then sales wants to talk to non-engineering higher-ups, and then the upsell happens - while the people who actually knew what they wanted remain as far away as possible.

      And if you don't have the pockets deep enough for the sales dept to acknowledge your existence, then you might as well not exist.

      • stephen_g 18 hours ago

        They don’t care about ‘upselling’ - it’s just that they only really care about the orders that are at least in the tens of thousands of units (or for the Broadcoms of the world, hundreds of thousands) per year, and ongoing.

        If you even promise to buy a few hundred a year through a business, it puts you in a different category and everything gets much easier, but you usually have to go via a distributor (Avnet, Future, Arrow etc.). But if you’re big enough (the hundreds of thousands + qtys) these companies will actually send dedicated support engineers to work with you and help you integrate their parts into your product.

    • Aurornis 17 hours ago

      When you get into high volume production, the vendors have their own engineers who will work with you directly. Depending on the arrangement and your volume they might bill you for the time, require you to buy blocks of support hours, or they might bend over backward to help you out in any way possible if you’re buying enough parts from them.

      Dealing with small clients is not a priority for most part vendors. Many of them won’t even sell you chips at all until you can qualify yourself as a big customer or, in some cases, buy a license to start designing with their parts for six figures or more.

      Unfortunately for the small players, it’s not a priority for most companies to support small customers who might only buy a couple thousand parts or less.

    • bgnn 9 hours ago

      Because the margin per chip is low and supporting more customers costs more. They sell these parts for a couple of bucks per part at volume, even though the NRE runs 20-30 million. With the fixed production costs added, they have tens of cents of margin per part. At that point, a customer buying fewer than 1M parts becomes irrelevant.

    • Kirby64 12 hours ago

      Because creating good documentation suitable for external consumers costs money, and even if you have decent documentation, people buying small quantities of parts can end up costing companies money just to interact with one of their application engineers. If you're not set up for it (very few companies are), it literally is negative value to answer support questions for small time customers.

      • Dylan16807 11 hours ago

        I'm not sure how your answer relates to the question about why they won't release documentation they already have. Are you saying that releasing documentation is going to increase the number of support questions?

        If you want to sell or limit support, why not do that without the documentation complications?

        • Kirby64 10 hours ago

          > Are you saying that releasing documentation is going to increase the number of support questions?

          Yes, absolutely. Smaller customers are notoriously more needy, in fact. The bigger the customer, the more competent their engineers tend to be (or, the more time they have to spend figuring out how to use your stuff). Smaller customers try to offload support onto vendors, which pushes the burden onto internal vendor teams (who don't want to provide the support...).

          • Dylan16807 10 hours ago

            Smaller customers are more needy, sure, but there's a couple steps missing here. Is better public documentation going to bring in a lot more small customers, more than it solves problems? Is an NDA by itself keeping away lots of small customers?

            • Kirby64 10 hours ago

              > Is better public documentation going to bring in a lot more small customers, more than it solves problems?

              Maybe. The other thing is that public documentation gets a lot more scrutiny than internal documentation. You don't have any resource to talk to, so something like typos or mistakes need to be corrected rather than just papered over by a helpful applications engineer.

    • rcxdude 13 hours ago

      There's certainly cases where the documentation basically doesn't exist, or it's essentially the same as the design documentation, and the strategy is basically that if you're a big enough customer then you'll be getting some engineer time to make things work, and if you're a smaller customer you'll be told to pound sand anyway, it's not worth them fixing up their documentation to get your business. Generally the more complex and specialised the hardware the more it'll look like this, and it basically saves them on R&D because they have only a few applications to actually focus on.

    • dgfitz 19 hours ago

      I think the short answer is because manufacturers don’t care.

    • gosub100 14 hours ago

      Guessing: if you publish too much about your design, competitors can use it against you during sales negotiations. "Don't buy theirs, ours has better pin placement, better IF, better thermals, etc"

  • FpUser 16 hours ago

    My last experience with embedded MCUs was a few years ago with Atmel, and all was fine.

    • InitialLastName 16 hours ago

      Atmel hasn't existed for almost 10 years (Microchip bought them in 2016). The situation has not improved in the intervening time.

      • FpUser 11 hours ago

        Too bad. Did not know about it.

gregfjohnson a day ago

On the topic of Microchip and secrecy: I downloaded and installed their IDE, MPLAB X IDE v6.20, for a PIC32MX chip. The compiler looks like a completely generic gcc, built to cross-compile on a Windows host. However, they want a $1000.00 “licensing fee” to enable any optimization level above -O0. This seems wrong. Wouldn’t this be a violation of the copyleft license covering gcc? I’m guessing there’s some loophole, since otherwise the EFF and folks would be going after them. Or perhaps they don’t know about this situation? Should I alert the EFF?

  • Tuna-Fish 21 hours ago

    The GPL in no way forbids that. However, if they are obeying GPL you can ask them for the source code and then remove that limit yourself. If you ask for the source and they don't give it to you, then alert GNU.

    • lmz 19 hours ago

      Of course, that depends on whether the optimization was compiled into the version they have. One can imagine two binaries, with the optimizations just missing from the free one.

      • rkangel 18 hours ago

        If they distribute the one with optimisations, then they need to make the source available.

        • Tuna-Fish 17 hours ago

          ... but only to the ones they distribute it to, who can then choose to redistribute it if they want to.

          • lmz 4 hours ago

            ... a.k.a. the RHEL model (with contract termination on sharing source).

    • immibis 20 hours ago

      In some jurisdictions you may even be able to sue them for the source code without bothering GNU.

  • extraduder_ire a day ago

    How much does it look like gcc? Can you run it on its own with a --version argument, or run it through strings to get the text out of it?

    If it's actually gcc, a copy of the GPL should have come with the software. A bunch of other compilers mimic a lot of its interface for compatibility’s sake.

  • Cerium a day ago

    It's been this way forever... They do distribute source (but last time I checked it is with incomplete build info). I think there is also some BS fine print about the licensing fee being for the provided header files.

  • stephen_g a day ago

    Yeah, I’m really not a fan - we had some designs with PICs on them and ended up switching to NXP micros (MCX-A and i.MX-RT) instead, partly because of MPLAB and also because the Microchip parts had some annoying quirks. I find NXP’s documentation a lot better too. After that experience, I literally try to avoid Microchip where I can…

    • jmiskovic 19 hours ago

      Personally, I hated NXP's docs for the ARM M4 core: a bunch of dry tables listing each register in detail, lacking the juicy diagrams and descriptions of how the bits integrate to work as a subsystem. I constantly needed to cross-reference 3+ documents (most of which describe the whole family and not the specific IC). Their HALs and code samples were obviously written by students/interns.

      I liked working with Microchip uC, but this was back when the whole IC (PIC24) was described in a single ~1000 page document. I found it very readable and instructive in general.

      If I had to pick something today it would be with RP2040/2350. The docs look awesome and there's a huge community that is not locked down in some corporate moderated forum but spread organically, with actually useful GitHub projects. It is the only embedded product where it felt like the open source community is along for the ride and not just picking up the scraps. I hope they continue this line of products.

      • mardifoufs 18 hours ago

        Yeah, NXP in my experience had an issue with having too much documentation, in the sense that you get drowned in a 3000-page PDF that lists every detail but becomes hard to parse unless you want to base everything around that specific platform for years. Though that sounds like an awesome "issue" to have in some circumstances.

        • jmiskovic 16 hours ago

          It felt very different and not pleasant.

          The PIC24 was actually my first large project. I learned an awful lot from reading its docs, for example setting up the DMA to read 32 samples from the ADC and let the CPU know when done. Putting it together felt like playing with LEGO blocks. There were many annoyances with the toolchain and the clumsy memory addressing, but I enjoyed it overall.

          The NXP was downright unpleasant compared to it. I don't think a junior could be handed an NXP dev board and all the docs to hone their craft; it requires significant patience and expertise to pick out the relevant details in the vastness of their documentation. Of course, the NXP product line is huge and I can only comment on the few uC models I had contact with. The sensors and other less complex ICs were vastly better and their docs quite digestible.

        • rpaddock 12 hours ago

          When you add up all the various PDFs documenting the NXP MCXN947, it comes to ~7,000 pages.

  • dmitrygr 15 hours ago

    Ran into this at Google. Qualcomm's compiler for their DSP was an expensive branch of GCC. I asked my manager if we could just ask them for the source instead of paying a per-seat license. He said that “our contract with Qualcomm specifically prohibits us from asking them for the source of this compiler”. They found the workaround for the GPL, I guess.

    • howerj 14 hours ago

      I have heard before that this is how it is done. I wonder how that works with a third party? If they happened to come across the binaries somehow, they could demand the source. I also wonder if that clause is enforceable.

      • eqvinox 13 hours ago

        AIUI the entity distributing it has to provide the source. So if Google were to (try to) (re-)distribute that compiler, they'd be legally fucked because they'd have to provide the source… which they don't have and can't get.

        (But presumably that agreement also restricts Google from redistributing the binaries anyway.)

  • fragmede a day ago

    The GPL doesn't say you can't charge money for things. Do they provide patches for their changes to the source?

    • znpy 21 hours ago

      I think that the issue here is the following:

          - you can charge money for things
          - anything that's not built with the "official compiler" is not "supported"
      
      I interviewed for a junior embedded software engineer position when I was in university, and when I started mentioning that I had experience building cross-compilers, I was immediately stopped by the guy interviewing me (he literally didn't even let me finish the sentence), who told me: "Absolutely no. We don't want to maintain our own toolchain and we want everything to be coming from the BSP [Board support package] and integrated nicely with the vendor's IDE."

      They used ARM chips, so not even anything strange...

      The real issue would come if they did not provide the source code for the gcc build they sell you, though.

      • Aurornis 17 hours ago

        > Absolutely no. We don't want to maintain our own toolchain and we want everything to be coming from the BSP [Board support package] and integrated nicely with the vendor's IDE

        This is critical if you want any support from the vendor.

        If you come to them with a bug in their hardware but you’re not using their toolchain and BSP, it’s the end of the road. You have to recreate a minimal reproduction of the bug in their ecosystem before they’ll look at it.

        When you’re working at company scale, paying $1000 for a compiler is a trivial expense.

        • azonenberg 16 hours ago

          I avoid vendor toolchains and BSPs just because of how buggy they are.

          From my perspective, it's much better to reproduce a bug with a 20-line C or assembler file that compiles with upstream gcc, completely ruling all of their custom stuff out as the root cause.

          Just tell me what the silicon does when I poke this register and I'll work around it.

        • rcxdude 13 hours ago

          IME you're usually on your own anyway. I've rarely found it worth slogging through their crappy BSP for the chance of maybe receiving some useful support.

          (I am aware that there's a certain kind of mindset that likes to lean on support from vendors to do basically anything, and I think if you're in a position where you actually get good support, that might work, but in most of the instances where I've seen such a mentality it tends to produce expensive results that still don't actually work, and sometimes even when AFAICT the vendor's pretty switched on, they just don't actually have all the context)

        • eqvinox 13 hours ago

          > not using their toolchain and BSP, it’s the end of the road.

          Mhm. I'd say, you're forced to reproduce it on their toolchain and BSP, which may or may not be the end of the road depending on how complex the problem and your use case are.

      • mycatisblack 20 hours ago

        Offloading the liability for a compiler sounds like common sense to me. How many heads did this company pay? 100, 1000?

        Related: compiler bugs aren’t uncommon in the arm-none-eabi family. Especially the Cortex-M0 seems to have a few that recur every few years.

      • azonenberg 17 hours ago

        I have not used anything but arm-none-eabi-gcc for STM32s from day one. Never even installed CubeMX or any other ST software.

  • msgodel a day ago

    That's such a weird thing to do. MPLAB used to be completely free, to encourage people to use their chips.

anitil a day ago

I'm really enjoying this series, and this one is a good example of how working with hardware can be really difficult, as manufacturers aren't always fully open (or honest) about their devices' capabilities. But typically you don't find that out until you're already a long way through bring-up.

This was an impressive amount of research to get what he wanted out of the device!

  • Xss3 a day ago

    The same is common in software. A real nightmare for me was a client insisting their entire library was single-threaded, only to discover, deep into development and debugging, that one small but important aspect wasn't. Had to refactor a huge chunk of the project.

0xTJ 19 hours ago

I wish that Microchip would officially publish the programming algorithms and EE bit maps for the Atmel ATF16V8 and ATF22V10 SPLDs and the ATF15xx CPLDs. The programming algos have been mostly reverse-engineered (or otherwise figured out), but it'd be nice to have them published in the open, and I don't think the ATF15xx maps are completely known.

The ATF15xx have BSDL files released, but that's only for testing/bypass.

lttlrck a day ago

Token ring was old in 1996, when my master's thesis focused on error-handling behavior and simulation thereof.

I wonder if there are certain elements in certain "industrial complexes" that need to maintain or interface with legacy TR systems and that's why it's still hanging around in "dark silicon".

  • mannyv 18 hours ago

    The ideas behind token ring underlies DOCIS.

    While not technically TR, it does use a token that moves from device to device.

    It would be interesting to know if TR is better at contention management than broadcast ethernet - which nobody does anymore because everyone uses switches.

    • eqvinox 13 hours ago

      > The ideas behind token ring underlies DOCIS.

      > While not technically TR, it does use a token that moves from device to device.

      I assume you typo'd DOCSIS there, but no, DOCSIS does not use a token; it uses separate channels for down- & uplink, and the uplink channels are TDMA and/or CDMA depending on DOCSIS version.

  • AceJohnny2 13 hours ago

    The chemical factory I did an internship at in 2001 was sticking with Token Ring at the time.

    It was eye-opening.

  • wokkel a day ago

    What I know from the past is that for realtime guarantees, Ethernet does not cut it. So you might be right.

    • MrBuddyCasino a day ago

      AFAIK Ethernet is used for realtime audio distribution, so that can't be completely correct.

      • zokier 20 hours ago

        AVB and TSN are relatively new additions to Ethernet, specifically designed for realtime AV use. Traditional Ethernet is not really intended for tight real-time use.

      • rbanffy a day ago

        There is audio distribution real-time and nuclear reactor (or avionics, medical, etc) real-time. I assume the people doing (or certifying) the latter will want better guarantees.

        • thyristan 17 hours ago

          The usual terminology is hard, firm and soft realtime. In hard realtime, missing a deadline is a total failure and to be avoided at all cost, i.e. your reactor melts down, your car runs over someone, stuff like that. Firm realtime means that a missed deadline will not be a total catastrophic failure, but it will make the result useless - e.g. when your printer control system mistimes the "fire ink now" command and the ink lands not on the page but somewhere else. Soft realtime means that your result will gradually degrade with missed deadlines, but not be totally useless.

          Audio is usually soft realtime, sometimes, e.g. when doing studio recordings, firm realtime.
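
          The distinction above can be sketched as a toy model (the categories are from the comment; the numeric "value" of a result, the 0.1-per-miss decay, and the function name are made up purely for illustration):

          ```python
          # Toy sketch of hard/firm/soft realtime semantics: what missing
          # deadlines does to the usefulness of a computed result.
          def result_value(category: str, missed_deadlines: int) -> float:
              if missed_deadlines == 0:
                  return 1.0                           # deadline met: full value
              if category == "hard":
                  raise RuntimeError("total failure")  # reactor melts down
              if category == "firm":
                  return 0.0                           # result useless, no catastrophe
              # soft: usefulness degrades gradually with each missed deadline
              return max(0.0, 1.0 - 0.1 * missed_deadlines)
          ```

          A firm system yields a worthless but harmless result on a miss; a soft one merely degrades; a hard one must never miss at all.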

          • eqvinox 13 hours ago

            It should be noted these distinctions don't correlate with timing; you can have a hard realtime system that needs some network packets at 50ms±10ms intervals, and a soft realtime system that needs packets at 500µs±5µs.

            Some audio setups are run quite "close to the metal", both because it needs less buffering and because the lower human threshold for noticing latency seems to be around 10ms. And having audio not get out of phase with multiple sources/sinks gets added on top of that.

            • thyristan 11 hours ago

              Correct. If you imagine having a dam overflow, the release valves are a hard realtime system. If the dam overflows for more than a few minutes, damage will occur, so the release valves need to be opened within, say, overflow plus 5mins. A generous deadline for any computer, but still a deadline that needs to be kept at all cost.

  • badgersnake a day ago

    They were still using it for the desktops at IBM when I was there in 2001, although they were starting to phase it out.

hwj 15 hours ago

Texas Instruments has awesome documentation. Every single MSP430 microcontroller comes with:

- a family guide describing all features of the microcontroller family, usually >500 pages long

- a concrete microcontroller guide describing the specifics of a single microcontroller, usually >50 pages long

- an errata guide describing all(?) known silicon bugs with their workarounds

Also, Clang has a backend for MSP430 by default: `clang -print-targets`

  • AceJohnny2 13 hours ago

    Sure, but that's irrelevant. The MSP430 is not (in) an Eth PHY.

    As the author demonstrated, the network IC world is very unapproachable.

userbinator a day ago

Microchip seems to be reasonably good at opening stuff up that it's bought from other companies; various Atmel security ICs which were previously very secretive now have full datasheets freely downloadable from their site.

  • azonenberg 17 hours ago

    Yes, Vitesse had been on my "naughty list" of companies that were permanently banned from getting a design win from me because of refusing to share any docs or sell parts at distributors or other engineer-hostile practices popular with the likes of Marvell and Broadcom.

    After MCHP bought them and opened up (what I thought was) the full datasheet I gave them a second chance. Seems they still held some back.

kragen a day ago

This is amazing.

Oh, no wonder this is so comprehensive and fearless. It's Andrew Zonenberg.

nemo8551 a day ago

I adore this series and other deep dives like it.

If anyone can suggest others I would be grateful.

Taniwha a day ago

This has generally been my experience of PHYs in general, lots of twisty passages all different

  • stephen_g 18 hours ago

    It doesn’t help that this is a Vitesse Semiconductor part that became a MicroSemi part that became a Microchip part through a bunch of mergers and acquisitions…

    • Taniwha 3 hours ago

      of course eventually everything becomes a Broadcom chip ....

RainyDayTmrw a day ago

In perhaps a feat of nominative determinism, both the website and the feature are named serdes.

  • kmeisthax a day ago

    SERDES is an acronym: it means serializer/deserializer, in the same way that MODEM is modulator/demodulator.