snvzz 2 years ago

The take-home, IMO:

>Let’s step back for a moment and reflect on what an amazing accomplishment this was. The entire ARM design team consisted of Sophie Wilson, Steve Furber, a couple of additional chip designers, and a four-person team writing testing and verification software. This new 32-bit CPU based on an advanced RISC design was created by fewer than 10 people, and it worked correctly the first time. In contrast, National Semiconductor was up to the 10th revision of the 32016 and was still finding bugs.

Remember: Complexity is poisonous.

  • samizdis 2 years ago

    Indeed. And I was also impressed by this:

    > Because of the simplicity of the design and the low clock rate, the actual power draw ended up at 0.1 watts.

    > In fact, one of the first test boards the team plugged the ARM into had a broken connection and was not attached to any power at all. It was a big surprise when they found the fault because the CPU had been working the whole time. It had turned on just from electrical leakage coming from the support chips.

  • robomartin 2 years ago

    > This new 32-bit CPU based on an advanced RISC design was created by fewer than 10 people

    A small motivated team with sufficient domain knowledge will always outperform large teams, regardless of funding.

    The history of business is full of examples of this. Without looking too far afield, the entire consumer computer industry came out of places like Apple (1 engineer!), Microsoft (a couple of guys), Compaq (three guys), etc.

  • bell-cot 2 years ago

    More perspective on the 32016 vs. the ARM1:

    There were about 25K transistors total, in the KISS-or-die ARM1 chip ( https://www.theregister.com/2015/11/28/arm1_visualized/ ).

    vs.

    There were about 60K transistors in the NS32016 chip, which had to implement features like virtual memory ( http://cpu-ns32k.net/CPUs.html ).

    A quote on the 32016, from the latter source: "It is one thing to define a superior architecture on paper. But it is another thing to implement all the required functionality with a limited budget of time and man power." Sounds like NS was really skimping on resources for the 32016. (Wikipedia claims NS had $1 billion in annual sales by 1981, which was Serious Money back then. Vs. the article makes it pretty clear how much leaner things were at Acorn Computer.)

  • Taniwha 2 years ago

    The big difference, of course, is tooling (CAD tooling). Ten years after that I was knocking out that many timed gates a year myself; 5 years later, maybe 5 times more again. That world has changed a lot from the 6502, which was done with litho and modelling knives.

  • cmrdporcupine 2 years ago

    Entirely different market 320xx vs ARM (at the time and even now). The simplicity you're advocating as successful for ARM was not applicable for the market NSC was attempting to compete in.

    The 320xx, like the 68k, was targeting the nascent Unix workstation market and competing with minicomputers. It was explicitly VAX-like -- except simplified (apparently the 32xx was quite a clean and consistent architecture). As others have pointed out, it supported everything needed for virtual memory (better than the 68k, actually).

    I am not sure ARMv1 (and v2) even had supervisor vs user mode, etc.? (It may have, Google isn't helping me here)

    Eventually the 68k ended up in consumer microcomputers (Amiga, Atari ST, etc.), but it started out in that same market - people who wanted to take a bite out of VAX's lunch (which was big money back then).

    The ARM was built by a microcomputer company, for the next generation of their microcomputers. And then for the next 30 years it was used in various forms for embedded and portable / low-power systems (apart from the ill-fated Archimedes line, which was kind of Atari/Amiga/Mac level competition). It's only in the last few years that we're seeing it in server and workstation class systems.

    Note that Apple was an investor in and user of ARM from the very earliest days, but only in the last couple of years has it seen fit to put it into that class of machine. They went from 68k to PowerPC, and from PowerPC to Intel, without choosing ARM for the job, even though they were already a huge ARM customer and had already ported their OS.

    Different market, different kind of project.

    • talex5 2 years ago

      > I am not sure ARMv1 (and v2) even had supervisor vs user mode, etc.? (It may have, Google isn't helping me here)

      v2 at least had 4 modes: user mode, supervisor, IRQ and FIQ. They were encoded in the low 2 bits of r15 (which weren't otherwise needed, since the PC was always word-aligned).
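
      A rough sketch of that packing in C (field positions as I remember the ARM2's combined PC/PSR; the names and masks are illustrative, not from any official header):

          /* ARMv2 kept the PC, the flags and the processor mode all in R15:
             bits 0-1 = mode, bits 2-25 = word-aligned PC, bits 26-31 = the
             I/F interrupt-disable bits and the NZCV flags. */
          #include <stdint.h>

          enum arm2_mode { MODE_USR = 0, MODE_FIQ = 1, MODE_IRQ = 2, MODE_SVC = 3 };

          static inline enum arm2_mode mode_of(uint32_t r15) { return (enum arm2_mode)(r15 & 0x3u); }
          static inline uint32_t       pc_of(uint32_t r15)   { return r15 & 0x03FFFFFCu; }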

      The original Archimedes was intended to run a "preemptively multi tasking, multi-threaded, multi-user, written in Acorn-extended Modula2+" system called ARX. But when that wasn't ready in time, they ended up just porting the 8-bit BBC Micro MOS to the new 32-bit machines!

      This is a great read about it all:

      http://www.rougol.jellybaby.net/meetings/2012/PaulFellows/in...

      • klelatti 2 years ago

        This is a great link. Really liked the section

        > Now, the aforementioned, Arthur Norman, who had written the LISP interpreter for the BBC Micro, and was in and out of Acorn the whole time, had come up with a design for a machine with 3 instructions called the SKI machine. Whose instructions were called 'S', 'K' and 'I' and it ran LISP in hardware and he built one of these things, it never worked, it was so big and covered in wire-wrap it was always broken. The idea was to prove that you really didn't need very many instructions for a general purpose machine.

    • laurencerowe 2 years ago

      Apple didn't invest in ARM until late 1990, after ARM3 had been released.

      By this point Acorn had already released two generations of its UNIX workstation line running its 4.3BSD + X11 based RISC iX: the R140 (ARM2, based on the A440/1) and the R260 (ARM3, based on the A540).

      It did have virtual memory, though its 32k page size (large for the time) was awkward. It only supported DMA for video, so the CPU had to copy data from the hard drive controller manually.

      https://en.wikipedia.org/wiki/RISC_iX

      • cmrdporcupine 2 years ago

        I'll take your word for it on the investment date. But Apple used the ARM as early as 1992 (for the Newton), and so, pipeline-wise, it was on their radar much earlier than that - years before the 68k->PowerPC transition, and while they were still considering the 88k.

        It is somewhat fun to imagine an alternate world where Apple jumped to ARM instead of PowerPC, but they had other goals at the time. (Likely the whole 68k emulation story would have been very tricky with ARM at the time, tho)

        • laurencerowe 2 years ago

          Apple's investment in ARM in November 1990 (when ARM was spun out of Acorn as a joint venture between Acorn, Apple, and VLSI) was a result of Apple's decision to select ARM for the Newton sometime in 1990 since Acorn and Apple competed in the UK education market [1].

          [1] https://www.youtube.com/watch?v=_6sh097Dk5k, "A History of The ARM Microprocessor"

        • klelatti 2 years ago

          Apple can use Arm today because TSMC offers leading-edge fabrication. It almost certainly couldn't have done so before, because it would have had to use an uncompetitive node compared to, say, Intel's.

          Plus Apple now has the cash and design team to design a very high performance implementation. It didn't have that at the time of the last two transitions.

          • laurencerowe 2 years ago

            Had the StrongARM processor come out a little earlier, you could just about imagine an alternative history where Apple picked it over the PowerPC. DEC released the StrongARM in 1996, and it was faster and far lower power than Intel's Pentiums of the time.

            The StrongARM IP ended up with Intel as XScale when they bought out DEC's semiconductor manufacturing operations as part of a legal settlement. But the lead of Digital's design center that built StrongARM went on to form PA Semi, who Apple bought in 2008. The first Apple-designed ARM chip was released a couple of years later with the iPhone 4 in 2010.

          • astrange 2 years ago

            Apple is using ARMv8, which doesn’t have much in common with older ARMs past the name.

            Also, Apple didn’t just fund and design a good implementation; ARMv8 was created for the project in the first place.

          • tambourine_man 2 years ago

            I think the TSMC change is more relevant. Having the best process node in the world for rent to anyone willing to pay is really transformative.

    • klelatti 2 years ago

      > Entirely different market 320xx vs ARM (at the time and even now). The simplicity you're advocating as successful for ARM was not applicable for the market NSC was attempting to compete in.

      Except the original IBM 801 RISC was built - with around 20 people - explicitly to compete with the VAX. The next set of RISC machines likewise had small teams and high performance.

      The whole 32016 approach was just less effective, from both a design and a performance perspective.

      • cmrdporcupine 2 years ago

        Clearly the 32016 was a failure for all sorts of reasons that are well known. The chip was buggy as hell and late.

        But it came out around the same time period as the 68000 -- a very similar chip -- which is no less complicated. And that processor was enormously successful. (From what I've seen of the NS32k, it was basically a lot like the 68k, but little endian and with a smaller register set that wasn't split into address vs data registers.)

        My point is that 'engineering complexity' can be measured in all sorts of ways. It took 15 years after that period for RISC to truly prove itself, and even then it's been heavily modified from the original "recipe."

        • klelatti 2 years ago

          But the 68000 dates from 1979, before even the first RISC architectures made it to market. Its competition was the 8088. SGI, for example, dropped the 68k range in due course for MIPS.

          What makes ARM interesting to me is that it applied RISC principles in a way that worked well for lower cost computers.

          Incidentally you might enjoy this from 1986. They were having the same debates back then as we are now!

          https://www.youtube.com/watch?v=DIccm7H3OA0

  • amzans 2 years ago

    Definitely agree.

    I think the problem is that it’s usually easier to arrive at a complex solution than a simple one.

    • sbuk 2 years ago

      Always work with the following in mind:

      Il semble que la perfection soit atteinte non quand il n'y a plus rien à ajouter, mais quand il n'y a plus rien à retrancher.

      It seems that perfection is attained not when there is nothing more to add, but when there is nothing more to remove.

      • klelatti 2 years ago

        Saint-Exupery?

        • sbuk 2 years ago

          Indeed.

          • klelatti 2 years ago

            Wind, Sand and Stars just may be my 'Desert Island Book'. Probably not as well known in the English speaking world as it deserves to be.

  • amelius 2 years ago

    Perhaps this was simply because the National Semiconductor people were allowed to make mistakes, whereas for ARM it was crucial to get it right the first time.

    It also suggests that designing hardware is perhaps not as hard as most software engineers make it out to be.

    • pavlov 2 years ago

      As the article notes, the ARM v1 CPU had 27,000 transistors.

      Designing a RISC processor that small is hard in a different way. It’s probably similar to designing a useful CAD application whose code fits in 30 kilobytes on an operating system which only provides file I/O. (That would be the original AutoCAD.)

      The difficulty of both hardware and software has shifted toward dependency management rather than squeezing the most out of a handful of transistors or program bytes. There’s no reason for anyone to create ARM v1 or AutoCAD 1.0 today.

      • klelatti 2 years ago

        Agree 100% with the thrust of your argument, but isn't the challenge still there today for, say, a Cortex-M0 equivalent?

DrBazza 2 years ago

This takes me back. For those somewhat younger than me...

This was the first 32-bit machine widely and cheaply available in the UK. It shipped briefly with the Arthur OS (as mentioned in the article), and then RiscOS came out, which was so much better. C was supported.

Apps were a folder containing the code, plus a "magic" file called !Boot, which was scanned and ...executed... by the OS. Which led to the first bunch of viruses.

An upgrade chip was available a couple of years afterwards (33MHz IIRC).

The software wasteland described in the article wasn't as bad as it sounds: there was lots of "home brew" software being distributed via floppy disks in the school playground (no internet, kids!), including a hypertext/markup editor (that I can't find the link to), but for static docs, not internet ones. Impressions [1] was the word processor of choice, and it was leagues ahead of WordPerfect and Word. Published magazines were also full of code (that you had to type in), as well as the beginnings of the "free disk on the cover of this magazine" era.

The Archimedes shipped with !Draw and a few other built-in apps. !Draw was a vector drawing app that was, again, unlike anything on any other machine.

So many (mainstream) firsts, but it just didn't "win" (a bit like a lot of tech today).

It's just such a shame that Acorn didn't have any reach into America to sell it there in volume.

1. http://www.computinghistory.org.uk/cgi/archive.pl?type=Softw...

  • mattkevan 2 years ago

    My school was all in on Acorn, so we had a number of different Acorns over the years, starting with Arthur and then upgrading to RiscOS.

    In fact, my first experience of computer hardware was prising out the Arthur ROM chips and replacing them with RiscOS ones.

    It truly was unique - it could boot in seconds and was usable immediately thanks to the built-in vector, bitmap and text editors.

    It was a shock whenever I visited a friend's house and they had an IBM PC - DOS and Windows felt like relics from the past in comparison.

    I really wanted to get a Risc PC, but by the mid-90s it was clear Acorn weren't going to make it.

    We got an Apple instead, even though at the time it wasn't obvious they were going to make it either.

    • DrBazza 2 years ago

      Digging a ROM out of a motherboard made you value the machine even more. I can distinctly remember bending the leg on one of the chips and carefully straightening it; otherwise that would have been £80 (?!?!?) down the drain, which was a lot back in the early 90s.

  • stevekemp 2 years ago

    I remember when I went to sixth form in the early nineties they had a couple of the Archimedes machines around the place - but all the kids, and staff, used the x86 machines (running GEM IIRC) instead.

    I had a couple of friends at the time who'd owned BBC machines, but most people had the ZX Spectrum or Commodore 64. I never knew anybody who personally owned an Archimedes, and other than seeing them at the sixth form I never saw any in the wild.

    • DrBazza 2 years ago

      My school back then went from BBC Micros straight to PCs as well (I suspect it might have been govt policy, or lack thereof). And the PCs then were awful compared to the Arch. I can't remember if it was an early version of Windows or OS/2, but back then the PC GUI desktop was absolutely shocking in comparison to the Archimedes and Macintosh.

HarHarVeryFunny 2 years ago

One of the interesting things about the ARM instruction set (at least in the Acorn era - not sure if this is still true of more recent ARM iterations) is that ALL instructions, not just branch, were conditional, which meant that you could avoid the need for branch instructions in some cases - rather than conditionally branch past some instructions, just make them conditional and "execute" them regardless. Smaller code and faster execution!
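
The classic illustration is Euclid's GCD. Here's a C sketch of it; on a 32-bit ARM a compiler can turn the whole if/else into a compare plus two predicated subtracts, with no branch inside the loop body (the mnemonics in the comments are illustrative, not actual compiler output):

    /* Euclid's GCD, the textbook example of ARM conditional execution.
       Assumes both inputs are non-zero. On classic 32-bit ARM the if/else
       can become CMP plus two predicated SUBs - no branch in the loop body. */
    unsigned gcd(unsigned a, unsigned b)
    {
        while (a != b) {
            if (a > b)
                a -= b;   /* e.g. SUBHI r0, r0, r1 - only runs when a > b  */
            else
                b -= a;   /* e.g. SUBLS r1, r1, r0 - only runs when a <= b */
        }
        return a;
    }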

The article cites Hermann pushing IBM's RISC approach, but as I recall (I worked at Acorn at the time) the inherently RISC-like simplicity of the 6502 that Acorn was using was itself part of the inspiration.

  • chasil 2 years ago

    Conditional execution was explicitly removed from AArch64 to improve pipelining and branch prediction, even though dropping it does cost some code density.

    The Register had a much longer set of articles on the history of Acorn and ARM. I was hoping for new information in this Ars Technica piece, but it was scant.

    https://www.theregister.com/2012/05/03/unsung_heroes_of_tech...

  • shadowofneptune 2 years ago

    I struggle to see what is RISC-like about the 6502. 8-bit processors are usually too quirky to put into one of the categories. RISC hallmarks are:

    - Fixed-width encoding: No. It varies from one to three bytes.

    - Large register set: No, it's an accumulator architecture. I don't count the zero page, because every program that ran on the computer had to use the same zero page. IIRC, user programs on some late-era 6502 OSes would have only a handful of those 256 bytes to work with, the rest being claimed by the system software.

    - Pipelining: There's some optimization with prefetching of instructions. It is not the multi-step pipeline of RISC, nor does it need to be.

    - Load/store architecture: Plenty of instructions work directly on memory.

    • HarHarVeryFunny 2 years ago

      RISC wasn't even a thing when the 6502 was designed, but in terms of complexity it was definitely at the reduced/simple instruction set end of the spectrum, as evidenced by its transistor count.

      I never had the conversation with Wilson/Furber, but I'm assuming that CISC was essentially off the table in the first place. Both had zero CPU design experience, so to have any hope for success it had to be simple. The 6502 provided plenty of validation that simple could be fast, and no-one at Acorn had been impressed with next-gen 16/32-bit CISC designs anyway. No doubt other early RISC designs were an influence, but I don't believe there was any RISC vs CISC religion involved.

      It's amusing how today's worldwide domination of the processor market by ARM is really based on design decisions that were born of necessity for a chip that was actually targeted at desktop use. It had to be simple, hence the die was small and cheap. It had to use low cost plastic packaging, so it had to be low power to avoid overheating. Low cost and low power is what led to ARM's mobile success.

      • shadowofneptune 2 years ago

        Just being simple in electronic design doesn't make it RISC; that would make the term useless. The PDP-8 shares a lot of similarities with the 6502, and in one implementation comes in at only 1,409 transistors. If you look at what you need to do to write a program for it, it's anything but RISC in concept. A small set of operations is the least useful marker; the ones I gave above are the common criteria.

        • HarHarVeryFunny 2 years ago

          Sure, but RISC vs CISC is at heart just a design trade-off. Once you've decided (or been forced by pragmatism) to go with a "simple but fast" instruction set, then things like a large register set and a load/store architecture (to keep the instruction set small and orthogonal) are obvious choices, and don't need to have come from some rulebook of RISC design principles.

          All I'm saying is that I heard at the time that the well-liked 6502 was a source of inspiration for the design (and someone else has noted Hermann also remembering something similar), notwithstanding that there was no doubt much borrowing from the contemporary (UC Berkeley, Stanford) RISC designs as well.

          As a 6502 programmer, the utility of zero page variables is pretty obvious, and there is a parallel there to a large register bank. Maybe the 6502 wasn't "RISC like" in the sense of incorporating what would become RISC design principles (it was too primitive for that), but it certainly validated a KISS design approach, just as contemporary next-gen CISC designs did nothing in Acorn's eyes to sell that approach. Back at that time we were all just seeing/discovering what was possible, and I'd have to guess that the ARM instruction set was more pragmatic than dogmatic - a combination of what seemed desirable, appeared reasonable to implement, and performed well in simulation.

  • mijoharas 2 years ago

    I was speaking with Hermann about this article, so I asked him, and he confirmed what you said:

    > I do not remember IBM featuring so prominently in our deliberations. Our starting point was a 16 bit version of the 6502.

    • klelatti 2 years ago

      I think it might have been Andy Hopper who pushed the RISC papers forward (IBM / Berkeley etc).

      There is an interesting interview with Hermann where he says that Acorn asked Intel for a special version of the 80286. I've always been puzzled by that given how weak the 286 was!

      • HarHarVeryFunny 2 years ago

        > There is an interesting interview with Hermann where he says that Acorn asked Intel for a special version of the 80286

        First I've heard of that. Did he give any details of what it was being considered for, or what special features were being asked for?

        • klelatti 2 years ago

          He said the team liked it but wanted the address and data buses demultiplexed - but the 286 already had demultiplexed buses, unlike the 8086. Maybe I misunderstood!

          It’s here I think:

          https://m.youtube.com/watch?v=w1HODsDGMzI

          • HarHarVeryFunny 2 years ago

            Thanks.

            Yeah, that doesn't sound quite right. There must have been some approach to Intel, but that particular complaint applies to the 8086, not the 286. There was certainly a big emphasis on memory bandwidth at Acorn. There's a video by Sophie Wilson (whom I knew pre-Sophie - I worked with her to reuse the BBC BASIC floating point library for ISO Pascal) describing how Acorn found that, across the various processor architectures being evaluated, performance was directly related to instruction fetch speed, regardless of how that broke down in terms of clock speed, bus width or cycles per instruction.

            Acorn had of course been experimenting with most of the next-gen processors at the time, and put out a few in "2nd processor" form before a bit later repackaging them as the Acorn Business Computer series. There was actually a prototype 286-based version of the business computer, but by then the ARM was already being developed.

            I don't recall hearing anything about the 286 being evaluated by Acorn at the time (I was there from 82-85). It's interesting that the 286 and Nat Semi 16032 both came out in '82, and Acorn chose to go with the 16032 as a 2nd processor (I still have one - did some work on PANOS) rather than the 286. Perhaps this is the time frame Hermann is talking about? The "If Intel hadn't rejected us, there'd have been no ARM" story is cute in that it makes Intel look foolish, but it's not obvious that a 286 2nd processor would have curtailed ARM development any more than the 16032 had.

            • klelatti 2 years ago

              Thanks so much for coming back. I wondered if I was missing something in what Hermann was saying!

              It must have been interesting to work with Sophie. I remember looking at some disassembly of the BBC Basic 6502 code and being fascinated by how she got such good performance out of it.

              Does your 16032 still work? Must be fairly rare if it does!

              I was at the Uni at the time ARM was launched. I remember going to a University Computer Society meeting where I think (my memory is a bit vague at this point) Steve Furber presented the first Arm. No one could quite believe what they'd done.

              I'm working on a series of articles for my blog on the early days of RISC. Comparing it with the other RISC architectures, it's really interesting how they differ. I think Sophie / Steve did a fantastic job of turning RISC into a practical and cost effective reality. I've just had a Twitter conversation with Ben Finn (of Sibelius music software fame) who was saying how pleasant writing ARM assembly was!

  • fulafel 2 years ago

    It seems there was a reboot with AArch64 (aka 64-bit ARM), and the general predication was jettisoned, along with some other characteristic traditional ARM features like loading/storing multiple registers with one instruction.

    • TazeTSchnitzel 2 years ago

      Loading and storing multiple registers is maybe most useful for loading a 64-bit value (e.g. double-precision floating-point) as two 32-bit pieces. Once you have direct 64-bit support it might be less useful?

      • adrian_b 2 years ago

        Loading and storing multiple registers was mainly used in function prologues and epilogues, for saving and restoring the registers.

        If you have to save e.g. 10 registers, using 40 bytes of instructions on function entry and another 40 bytes of instructions on function exit is a lot of wasted code space.

        On Intel/AMD CPUs, the deprecation of the loading and storing of multiple registers (POPA and PUSHA) had less impact, because for pushing or popping a single register the length of an instruction is only 1 byte or 2 bytes, vs. 4 bytes on ARM.

        To partly compensate for the removal of the instructions that could load or store up to the full register set, AArch64 added double load and double store instructions to the ISA, which can load or store a pair of distinct registers.
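
        A trivial back-of-the-envelope version of that code-size argument, using the numbers above (just a sketch of the arithmetic):

            /* Code-size cost of saving 10 registers in a function prologue,
               given the fixed 4-byte ARM instruction size mentioned above. */
            #include <stdio.h>

            int main(void)
            {
                const int regs = 10, insn_bytes = 4;
                printf("one store per register: %d bytes\n", regs * insn_bytes); /* 40 */
                printf("one store-multiple:     %d bytes\n", 1 * insn_bytes);    /*  4 */
                return 0;
            }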

        • PopePompus 2 years ago

          Loading/storing multiple registers was also more important for the early ARM processors, because they had no cache at all, so storing or loading registers one at a time required multiple instructions to be fetched from off-chip RAM, tying up bus bandwidth and stalling the pipeline (such as it was).

      • monocasa 2 years ago

        It's that it's a pain for precise exceptions. Handling a page fault in the middle of an ldm/stm and then restarting it once the relevant page is loaded is non-trivial.

        • fulafel 2 years ago

          Don't they still have to handle that for the remaining "load 2" instructions?

          • monocasa 2 years ago

            No, a single 128-bit load/store, even if it addresses two registers, is way easier to handle than dumping a variable amount of your whole register file in a single instruction.

            That's why the ARM1 had a microcode engine for LDM/STM, despite having 'RISC' as the center letter in its initialism.

  • joezydeco 2 years ago

    The conditional execution was the key to making pipelining work back in those days.

    Modern processors do way, way more to try to predict where branches will go and minimize breaking the pipeline flow.

  • Turing_Machine 2 years ago

    "RISC-like" is a good term. The way the 6502 used Page 0 RAM effectively gave it 256 extra "registers". Not as fast as the real on-CPU registers, of course, but definitely faster than accessing a random byte of memory. Accessing arbitrary memory required a two byte address, but some instructions had a "page 0" form that let you access those addresses with only a one byte address, with page 0 assumed.

  • Someone 2 years ago

    > One of the interesting things about the ARM instruction set […] is that ALL instructions, not just branch, were conditional, which meant that you could avoid the need for branch instructions in some cases - rather than conditionally branch past some instructions, just make them conditional and "execute" them regardless. Smaller code and faster execution!

    I think both can be debated. It took 4 bits out of each 32-bit instruction to specify those condition flags. That’s 12.5% of the bits in each instruction that could have been used for other stuff such as supporting more registers and larger immediate constants, both of which could speed up code.

    It was an interesting idea, though. Simple to implement and useful at times.

  • rrss 2 years ago

    This (and the barrel shifter being available in all instructions) was at least partially due to the limitation mentioned in the article that the processor had no data or instruction cache. If single instructions could have these extra capabilities, the processor could do more work between stalls while waiting to read the next instruction from memory.

poofmagicdragon 2 years ago

Wilson later went on to co-found Element 14 and design the FirePath architecture, which is specialised for bulk data processing. The company was acquired by Broadcom, and for many years such processors have been used (and still are) across much of its networking-related product range. For example, if you've ever used a DSL modem, it's likely that the transceiver at the telephone exchange end is a FirePath chip.

I find it truly impressive how the processor design work of this one individual had so much influence in both the general purpose computing and telecommunications fields.

klelatti 2 years ago

The thing that has always impressed me about the ARM1 is that they didn't just take the RISC model from the papers they read: they adapted it to suit their own needs. Adding the barrel shifter, conditional execution, removing features that made the design more complex, and squeezing it into a gate count that meant that they could produce it at an acceptable cost.

As Wilson said later - "MIPS for the masses" - and not for workstations.

It wasn't just a technical achievement, it was a commercial achievement too - although Acorn failed to make the most of it, of course.

bonzini 2 years ago

Olivetti went terribly wrong in the 90s, but it was no mere typewriter company in the 80s! It also made calculators, and even NASA used its stored-program calculators, effectively one of the first desktop computers (https://en.m.wikipedia.org/wiki/Programma_101). In the 80s they bought the rights to the Thomson MO6 and the BBC Micro in an attempt to use their brand to enter the home computer market in Italy, but they were already selling both PC-compatible computers and custom designs such as the M10 (an improved Tandy 100).

bogomipz 2 years ago

I have a question about the relationship between the "simplified transistor" animated gif and the "Simplified AND and OR gates, using transistors" animated gif here:

https://cdn.arstechnica.net/wp-content/uploads/2022/09/and-o...

The image in the URL above represents two transistors in each gate type, where the transistor is shown as a circle with 3 rectangles in it. Is my understanding of this relationship correct if I assume the following?

The line labeled "input" in the animated gif maps to the gate in the transistor.

The line labeled "output" in the animated gif maps to the drain in the transistor.

The line labeled "voltage" in the animated gif maps to the to source in the transistor.

In other words, the gate equates to the input, the output equates to the drain, and the voltage equates to the source, and the distinction between these is really whether we are talking about the transistor itself or a circuit (gate) made up of those transistors?

  • JeremyReimer 2 years ago

    You've got it right. I was going to label the inputs in the transistors in the second animation, but it made the diagram rather crowded, so I left it out.

    The diagrams are simplified because they leave out things like connections to ground, attached resistors that mitigate the total current, etc. But I wanted to convey, as simply as possible, how transistors worked and how gates could be made very simply out of a small number of transistors.

    The actual CMOS chips in use by ARM (and still in use today) complicate things more, because you have different types of silicon (NMOS and PMOS) on the same substrate, so the transistors work slightly differently and you can simplify making things like NAND gates, which themselves can be combined to make all other types of gates. For this article, I didn't want to get that deep into the woods, though. :)
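
    To make the toy model concrete, here's roughly what those two animations compute, written as a tiny C sketch - the names follow the simplified diagrams ("input", "voltage", "output"), not real MOSFET behaviour:

        /* Each simplified transistor is modelled as a switch: when its input
           (the gate terminal) is high, it passes "voltage" through to its
           output; otherwise the output is 0. */
        #include <stdio.h>

        static int transistor(int input, int voltage) { return input ? voltage : 0; }

        /* AND: two transistors in series - the supply has to pass through both. */
        static int and_gate(int a, int b) { return transistor(b, transistor(a, 1)); }

        /* OR: two transistors in parallel - either one can pass the supply. */
        static int or_gate(int a, int b) { return transistor(a, 1) | transistor(b, 1); }

        int main(void)
        {
            for (int a = 0; a <= 1; a++)
                for (int b = 0; b <= 1; b++)
                    printf("a=%d b=%d  AND=%d OR=%d\n", a, b, and_gate(a, b), or_gate(a, b));
            return 0;
        }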

    • bogomipz 2 years ago

      Hi, thanks for the article, I'm looking forward to the next installment! I hope you post it here when it comes out.

      I actually appreciated the simplicity of the diagrams, and I think most literature follows this, where the terminals (I think that's the correct term) are marked source, gate and drain, and then the circuits are denoted input, output, voltage/vss. I've never seen it stated bluntly that "gates mean this in this context, and we refer to it as an input in the circuit context." Maybe it's just considered too obvious, but it's always made me question my own understanding. Thanks for the confirmation.

bicolao 2 years ago

> A disadvantage of the RISC design was that since programs required more instructions, they took up more space in memory. Back in the late 1970s, when the first generation of CPUs were being designed, 1 megabyte of memory cost about $5,000. So any way to reduce the memory size of programs (and having a complex instruction set would help do that) was valuable. This is why chips like the Intel 8080, 8088, and 80286 had so many instructions.

> But memory prices were dropping rapidly. By 1994, that 1 megabyte would be under $6. So the extra memory required for a RISC CPU was going to be much less of a problem in the future.

I think this is still a problem? Not memory size, but the speed of the memory bus.

  • snvzz 2 years ago

    >A disadvantage of the RISC design was that since programs required more instructions,

    It's important to understand that, while that was the case back then, today RISC-V has the best 64-bit code density, whereas the best 32-bit density is still ARM Thumb-2, with RISC-V a close 2nd - and actually better with the current B and Zc extensions, to be ratified soon.

    This is achieved with very little extra complexity in the decoder, which becomes a net win if there's any ROM or cache in the chip.

  • kimixa 2 years ago

    Caches are also very much size-constrained.

    Compressed instruction formats phase in and out of favor to this day, trading size for decode complexity.

cbm-vic-20 2 years ago

The article mentions WDC's 65C816, which you can still buy today at the usual electronic parts distributors.

bluedino 2 years ago

I just remember reading the name StrongARM in signatures on newsgroups in the 90's

  • pmyteh 2 years ago

    The StrongARM was a DEC-designed offshoot of ARM which (amongst other things) traded off some of the low power consumption of ARM for greater performance. The last-generation Acorn computer (the RiscPC) had swappable processor cards and Acorn more-or-less force-fit the SA110 into the last generation of RiscPCs because it was so much faster than the ARM8 that was in the roadmap. There were a bunch of software incompatibilities, partly because it used a different ISA (ARM v4 instead of the v3 used by the CPUs on the existing cards) and partly because it had separate instruction and data caches that broke a lot of existing self-modifying code. An Acorn application note with differences is online[0].

    Honestly, those of us with StrongARMs were prone to showing off about it because it was such a speed upgrade over the existing machine. I didn't acquire mine until 1999, well after the whole Acorn ecosystem was obsolescent, but I might well have put it in my sig anyway...

    [0]: http://chrisacorns.computinghistory.org.uk/docs/Acorn/AN/295...