dymk 2 years ago

In college, a friend and I built a small multitasking kernel for a 68000 (m68000). We implemented it on breadboards with, I think, 30 or so feet of jumpers [0]. I had very little prior embedded experience, so it was trial by fire.

It was a wonderful introduction to how kernels work (or at least concurrency and scheduling) at their most basic level, without having to deal with the complexity of virtual address spaces, memory protection, or the byzantine bring-up dance of register prodding that x86_64 needs. It prepared me well for my operating systems class the next year, and as far as I can tell, was the eye-catcher project that got me an internship on a team doing kernel development the following summer.

The instruction set is also a dream. Super CISC-y, yet more enjoyable (and IMO easier to grok) than x86. Take a look: http://wpage.unina.it/rcanonic/didattica/ce1/docs/68000.pdf

My favorite is DBcc - "Test condition, decrement, and branch". All in one instruction.
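To make the semantics concrete, here is a hedged Python model of DBcc (the helper name and the `pc + 4` fall-through for the two-word instruction are my own shorthand): test the condition, and only if it is false, decrement the low 16 bits of Dn and branch unless the counter has expired to -1.

```python
def dbcc(cond_true, dn, pc, target):
    """Model of 68000 DBcc: Dn is the counter register value,
    pc the address of the instruction, target the branch destination."""
    if cond_true:                # condition satisfied: no decrement, no branch
        return dn, pc + 4
    dn = (dn - 1) & 0xFFFF       # decrement only the low word of Dn
    if dn == 0xFFFF:             # counter expired to -1: fall through
        return dn, pc + 4
    return dn, target            # otherwise loop back
```

With the condition hardwired false (DBF, often written DBRA), loading Dn with N runs the loop body N+1 times, the classic 68000 off-by-one.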

[0] - https://i.imgur.com/MKD7wTv.jpg

Here's the code - I have no idea how it works anymore, and I believe it's incomplete compared to what I had running. The complete code archive I think is lost to time - https://github.com/dymk/68k/blob/master/projects/libraries/l...

My friend wrote up a much more comprehensive document on the build:

https://github.com/ZigZagJoe/68k

https://docs.google.com/document/d/1ejW_Ist19tIXeA5HtEWixaLo...

  • systems_glitch 2 years ago

    Very cool! Do note that the Wikipedia page is about the 6800 (Sixty-Eight Hundred), though, not the 68K. A generation earlier, 8-bit, far less popular in end-user applications than the 68K. Still a good processor.

    • deaddodo 2 years ago

      It was only unpopular because the 6502 was built by the same engineers and came out relatively soon after at a much lower price point.

      In fact, the 6501 (architecturally the same as a 6502) was pin compatible with the 6800 and was discontinued after a lawsuit.

      I actually preferred aspects of the 6800 series, particularly the addressing options.

  • Teknoman117 2 years ago

    I think the x86_64 chips are fairly unique in terms of how hilariously awful the bring-up process is. I'd hate to be a microcode engineer on one of them.

    So much state. Then throw virtualization into the mix.

    • deaddodo 2 years ago

      They’re only terrible because they needed to be backwards compatible with 32-bit x86 code. Once they’ve been bootstrapped into pure 64-bit mode, they’re a bit better.

  • Doctor_Fegg 2 years ago

    > "Test condition, decrement, and branch". All in one instruction

    The Z80 had that too - DJNZ (Decrement and Jump if Not Zero). Standard way of writing loops.

    • ataylor284_ 2 years ago

      The 8080 lacked this, but even though the 8086 was more or less based on the 8080, DJNZ probably inspired the 8086 LOOPNZ instruction. Same idea, hardcoded to use the CX register instead of B on the Z80. The unofficial nicknames for AX, BX, CX, and DX were accumulator, base, count, and data. CX was used as the count for all looping and string-move instructions. Similarly, BX was special-purpose for indirect operations, and AX/DX for certain logic and arithmetic instructions.

      • bonzini 2 years ago

        AX/CX/DX/BX (in machine code this is the order from 000 to 011) were intended as equivalent to AF/BC/DE/HL.

    • drfuchs 2 years ago

      The IBM 360 had BCT back in 1964. Rumor had it that it was created for Fortran DO statements (“for-loops”), but IBM’s compiler never emitted it.

      “The BCT instruction subtracts 1 from the value of the contents of the target register specified in the first argument. If the value in the target register after the subtraction is zero, no branch occurs. Otherwise the program branches to the specified address.”

    • tom_ 2 years ago

      DJNZ only does the decrement-and-branch-if-not-zero part. DBcc includes that implicitly, but also tests the condition first, and falls through like a NOP when the condition is true.

    • classichasclass 2 years ago

      PowerPC bdnz says hi, though it is restricted to one special purpose register (CTR, the counter).

  • pinewurst 2 years ago

    The 6800 is nothing like the 68000.

    • dymk 2 years ago

      Well, it was the predecessor :)

      • mek6800d2 2 years ago

        There was the 6809 in between. I remember BYTE magazine's cover story, "A Microprocessor for the Revolution". Well, not quite, as it turned out! :)

        • danachow 2 years ago

          The 6809 had some hobbyist/consumer prominence on this side of the pond in the TRS-80 CoCo - and some similar European machines. And there was a multitasking OS written for it - OS-9.

          OS-9 has an interesting history in its own right. It was ported to a wide range of subsequent architectures. All sorts of applications - Fairlight synths, Philips CD-i, most of the traffic lights in the US in the 80s and 90s to name a few.

        • ncmncm 2 years ago

          The 6809 was used in a production gadget, the Vectrex, the only vector-graphic consumer videogame console. It came with a screen that worked like an oscilloscope -- no raster, just an electron beam sweeping along the line. So, no jaggies.

          As it was necessarily monochrome, games came with clear plastic color overlays for the screen.

          6809 was notable as the first 8-bit microprocessor with a multiply instruction: 8x8->16 bits.

          • ddingus 2 years ago

            It was also in some Williams games, including my favorite: DEFENDER.

            Crazy what could be done just pushing pixels at 1 MHz!

        • mark-r 2 years ago

          I never got to use the 6800, but did use both the 6809 and 68000 in assembler. Both were a joy to work with.

          • KerrAvon 2 years ago

            I never encountered the 6800, but I did talk to people in the 90's who thought the 6800 was pretty bad and the 6809 pretty good.

            The 68000 is really a different class of CPU entirely, so I'm not sure why folks are discussing it here. Only the name is similar.

        • deaddodo 2 years ago

          The 6809 was an answer to the 6502. They needed something reasonably “better” than the 6502 to justify the price tag. Unfortunately for them, the 6502 and its variants were too entrenched at that point.

      • djmips 2 years ago

        Sort of. I mean, they were both made by Motorola and both had "6800" in the name...

        • krallja 2 years ago

          They also both have registers!

          • djmips 2 years ago

            But a significantly different Strahler number.

  • gbraad 2 years ago

    This article refers to the 6800, which isn't the same. Also, that's a 68008 (with the 'simplified' 8-bit data bus and smaller address bus). Both somehow 8-bit :-P

    • dymk 2 years ago

      It certainly made it easier to wire up :) I would not want to hand-wire a 32 bit databus.

      • gbraad 2 years ago

        It was considered as the 'cheaper' option by Commodore for the Amiga. Luckily they didn't go that way, as prices dropped for the regular 68000 chip.

        It is only handy when interfacing with older 68xx peripheral chips.

        Note: I did wire-wrap the full bus for a 68000 school project, plus the address split for high and low addresses (like an odd/even ROM) IIRC.

  • keithnz 2 years ago

    I built a multitasking kernel for the 6809, which was an extended version of the 6800 rather than part of the 68000 series, which is quite a different kind of CPU. It was fun! I also created a double-sided PCB by printing the layout with laser printers and then ironing it onto photo-etch boards.

    • anonymousiam 2 years ago

      There was also OS-9 (Microware, not Apple). It was very Unix-like, but not fully POSIX, and very poorly documented. After initial development on the 6809, they released a port for the 68K.

      https://en.wikipedia.org/wiki/OS-9

Taniwha 2 years ago

In college in NZ I wrote a simple compiler for the 6802 with some friends, it fit in 2k (just) .... we called ourselves "uSoft" (with a greek mu - but we were cross-compiling from cards, no room for a mu) .... the next year we heard of some jokers in the US who were using our name, and pffft! they only had a basic interpreter, so lame!

Needless to say the jokers in the US became multi-billionaires, we were stuck on the other side of the world with no one to sell to, and no real knowledge of the marketing we'd need to bring our code to market - if only we'd incorporated we could at least have sold the name :-)

  • benj111 2 years ago

    You can at least claim to be a founder of Microsoft at parties and stuff.

    • Taniwha 2 years ago

      I have done that in company trivia contests ....

klelatti 2 years ago

Worth mentioning the 6501 which was launched by MOS Technology alongside the 6502 and was pin-compatible with the 6800.

Motorola sued alleging patent infringement and misappropriation of trade secrets.

MOS Technology settled, paid Motorola $200,000 and dropped the 6501.

  • dylan604 2 years ago

    $200k sounds like such a small number, but it was the dropping of the offending chip that was the gotcha.

    • kjs3 2 years ago

      Well...200k in 1975 $'s. And MOS was a startup, basically. So not a trivial amount. But yeah, a big part of the initial pitch was "you can use your same hardware design but replace the $300 CPU with our $25 CPU".

      • mark-r 2 years ago

        I knew a guy who tried to build a multi processor system with 16 6800s. Made a mistake with the power supply and ended up frying all of them.

        • kjs3 2 years ago

          That's seriously ambitious (tips hat). There were a number of folks that came up with dual processor designs back in those days playing on the observation that most 8-bitters (and many 16- and 32-bitters) could never utilize more than 50% of the available memory bandwidth. There's an NS32000 application note somewhere that describes such a design, and NS had datasheets for an NS32132 that was an NS32032 with added some support for such a system. I dunno if the NS32132 ever shipped, however.

      • deaddodo 2 years ago

        There was also an argument that the 6501 was built as a sacrificial lamb so that when Motorola inevitably sued them, they would be able to keep the 6502 out of the case.

        • kjs3 2 years ago

          Do you have a source for this? I had always read the change in pin-out was initiated after Motorola went legal.

          • deaddodo 2 years ago

            The 6501 and 6502 were developed simultaneously, and the 6502 was released a month later (Aug 1975 vs Sept 1975). Both well before the lawsuit began, let alone concluded.

awful 2 years ago

The Heathkit/Zenith ET-3400 trainers with 6800s, and the accompanying Heath/Zenith coursework, were fantastic in 1982. 50+ of us completed it that year; the class final was bit-banging the tune of "Anchors Aweigh", as the instructor was a Navy officer and educator, retired to civilian teaching. I later learned machine language as a bit-chaser on broken superscalar mainframes, but the 6800s were simply fantastic devices and prepared me well. Flat, shared memory, von Neumann architecture. Very nice opcodes and indexing, as I recall. I'll have to go back to my coursework and reminisce...

  • Marcus10110 2 years ago

    I have an ET-3400 on the shelf behind me! I was just playing with it the other day. After watching Jason Turner's CppCon talk on writing an i386 to 6502 assembly translator [1][2], I started working on a fork that would target the 6800. I only got about 3 instructions working, but that's really all you need for some really simple test code with optimization turned to the max. It also turns out that someone wrote a fantastic emulator specifically for the ET-3400 trainer [3], and I managed to get my application running on it!

    [1] https://www.youtube.com/watch?v=zBkNBP00wJE [2] https://github.com/lefticus/6502-cpp [3] https://github.com/CalPlug/Heathkit_ET-3400

    There is something special to me about the idea of writing modern C++, and compiling it for such early microprocessors. The 512 bytes of RAM is a pretty big limitation though. I wanted to try and emulate an EEPROM using an Arduino or FPGA, but got stalled out on the project. From time to time I like to browse through the LLVM backend documentation, but I can't seem to commit to trying to build a backend.

  • B1FF_PSUVM 2 years ago

    > Flat, shared memory [...] nice op codes and indexing, as I recall.

    Yeah, that's what the guy working with the Moto chips smirked about at us slaving over Intel 808x contraptions, back in those days ...

  • Salgat 2 years ago

    I learned assembly on a 68HC11 heathkit, damn those things were awesome!

ajxs 2 years ago

I'll mention this even though it isn't exactly the Motorola 6800: I've been doing a lot of work recently with the Hitachi 6303, which is a member of Hitachi's family of Motorola 680x alternatives. The Hitachi 6303 is featured in a lot of 80s Japanese synthesisers, particularly Yamaha's DX/TX range. The Motorola 680x series also features in the Ensoniq ESQ family of synthesisers, probably many more.

I became acquainted with this architecture disassembling the Yamaha DX7 firmware: https://github.com/ajxs/yamaha_dx7_rom_disassembly It's a great instruction set to work with. It's my first experience with 8-bit programming, and I found it very intuitive.

jhallenworld 2 years ago

This was my first microprocessor: I developed assembly language for it and the 6809 using Motorola's Exorciser development system (which was already old in the mid 80s when I used it). Here is a simulator I wrote for it, in case you want to try it in Linux or Cygwin:

https://github.com/jhallen/exorsim

8 inch floppies! I remember we had an old GE chain-train printer for it which was awesome because it was so fast.

FullyFunctional 2 years ago

Through the lens of _compiled_ code, how do the various 8-bitters stack up (6809/6811, 65C02, Z80, H8, ...)? One would have to account for the clock frequency allowed by each ISA at iso-technology (which makes including the AVR somewhat tricky).

I only have experience with Z80 and 65C02 and I believe the consensus is that a 4 MHz Z80 beats a 2 MHz 65C02, but neither is a particularly nice compiler target.

  • self 2 years ago

    Some of that can be gleaned from this benchmark, published in Byte, in 1981: https://en.wikipedia.org/wiki/Byte_Sieve

    Results for a few 8/16 bit processors are here (and on subsequent pages): https://archive.org/details/byte-magazine-1981-09/page/n193/...

    • FullyFunctional 2 years ago

      Very cool. While obviously not ideal, the results are probably accurate within a small factor. Unfortunately there's no assembly version for 65C02 but Z80 does surprisingly well in this test.

      I muse about what could be done with a modern cross-compiler (SAT solving for optimal code sequences?). An LLVM backend for the Z80 has recently kicked back into gear: https://github.com/jacobly0/llvm-project

    • mysterymath 2 years ago

      I ran the C version of this benchmark using llvm-mos's Clang for the 6502. The results:

      21.4 seconds, 5793 bytes

      Which is middle of the pack for the Z80 benchmarks, but well below the 6502 ones. We're also using a slightly tweaked embedded printf written in C, so this could probably be improved somewhat there, sans any compiler changes.

  • crest 2 years ago

    The 6809 had two 16-bit index registers, PC-relative addressing, and the upper bytes of the stack and direct-page addresses were taken from special-purpose registers instead of being hardwired. It should be a fairly straightforward compiler target. On the other hand, it was late to the game, expensive, and not (much) faster than other 8-bit microprocessors. The 6502 was very cheap and fast enough when it came out, but a really annoying compiler target.

    • PaulHoule 2 years ago

      I had a TRS-80 Color Computer when I was kid which had one major drawback: it could only display 32 characters across the screen compared to 40 characters for the Apple ][, C64 and most others at the time.

      The CoCo could run an operating system called OS-9, which was Unix-influenced and came with a good C compiler and also a bytecode-interpreted structured BASIC called BASIC09.

      I know C compilers were really popular among CP/M users running the Z-80 and 8080 chips and also on the IBM PC which had a segmentation system to reach beyond 64k that I thought felt really elegant in assembly language but was awkward for compilers.

      Where OS-9 had all the above beat was that it was a real multitasking OS and I had two terminals plugged into my coco in addition to the TV console and could use it like a minicomputer.

      When I switched to an IBM PC AT compatible, my favorite programming language was Turbo Pascal, which adds everything missing from Pascal for systems programming. I switched to C when I went to college because that was what was supported on the various UNIX workstations they had.

      • jhallenworld 2 years ago

        The 6809 was nice, but I think the CoCo otherwise was crap. Aside from the 32 column display and awful color scheme, the built-in serial port was bit-banged. This meant that floppy drive and serial access could not happen at the same time. This is very relevant when trying to use OS-9.

        There was an external serial port as an option, but there was only one slot. So you also had to buy a slot expander (a Multi-Pak).

        • PaulHoule 2 years ago

          Note that hardware flow control makes the bit banger a lot more reliable than it would be otherwise.

          I had the bit banger connected to a compact printing terminal from DEC that ran at 300 baud and had an acoustic coupler so you could log into 300 baud services with nothing but the terminal. There was not a lot of risk that this device would overflow your buffers.

        • PaulHoule 2 years ago

          I had the multi pak and the external uart. The entry level price of the coco was low but I think I got most of the peripherals available for it, particularly the disks were crazy expensive. Adding it all up I must have spent more than I spent on the AT clone that replaced it ($1200)

          In most ways the C-64 was a great machine but boy was the disk drive slow.

          • jhallenworld 2 years ago

            Perfect system == 6809 + SIO from the Atari800 + C64 SID + IBM PC keyboard + C64 VIC or maybe V9938.

            • ddingus 2 years ago

              That would be fun!

              I would add the MMU from the CoCo 3, so one can bank in lots of RAM with only minor league fuss.

  • klodolph 2 years ago

    Just for modern perspective... you can use SDCC. It's an open-source optimizing C compiler targeting small microprocessors like the Z80 and various others. The project itself is terribly run--the maintainers recently pushed out an ABI change which broke everyone's code, but released it as a minor version bump. This ABI change did speed up the code, but that's small consolation to anyone who ended up with broken code.

    IMO the Z80 is a lot nicer compiler target than the 6502, because the stack pointer is 16-bit and it's much easier to use the stack in general.

    There are a couple C compilers for 6502 (like cc65 and WDC's C compiler) but they're not quite as good as SDCC, as far as I can tell. They're also not as actively maintained.

    • davidgould 2 years ago

      > The project itself is terribly run--the maintainers recently pushed out an ABI change which broke everyone's code, but released it as a minor version bump.

      When was this? What version? Thanks.

      • klodolph 2 years ago

        SDCC 4.2.0, released Mar 8, 2022.

        • davidgould 2 years ago

          I don't think your comment about a breaking abi change on a minor point release is fair. Perhaps you have misunderstood the release number scheme? Every year around the first quarter they have one major release, ie 3.9, 4.0, 4.1, and now 4.2. A minor release would be like 4.2.1. There is no significance to the major digit, ie 4.0 was just the release a year after 3.9 and not otherwise special.

          I'm not affiliated with the project, but I think it would be unfortunate if someone was turned away from using or contributing to the best or only open-source toolchain for a number of processors (e.g. the Padauk family) because someone claimed the project was terribly run.

          Please consider updating your comment.

  • Gordonjcp 2 years ago

    The 6809 is ridiculously suitable for running Forth, because you've got two stacks, a 16-bit accumulator, and you can implement NEXT in two instructions taking about three clocks each.
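    As a hedged illustration of the threaded-code dispatch being described, here is a tiny Python inner interpreter. The word names and primitives are invented; the two-instruction 6809 NEXT it models is often given as something like `ldx ,y++` / `jmp [,x]`.

```python
def execute(threaded, prims):
    """Run a 'compiled' list of words through the inner interpreter."""
    stack, ip = [], 0
    while ip < len(threaded):
        word = threaded[ip]
        ip += 1                  # NEXT: advance the instruction pointer...
        prims[word](stack)       # ...then jump through the word's code address
    return stack

# Invented primitives standing in for a Forth dictionary's code fields
prims = {
    "lit5": lambda s: s.append(5),
    "lit7": lambda s: s.append(7),
    "add":  lambda s: s.append(s.pop() + s.pop()),
}
```

The point of the comment is that the fetch-advance-dispatch in the loop body collapses to just two 6809 instructions, thanks to the auto-increment and indirect addressing modes.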

    • cmrdporcupine 2 years ago

      Someone needs to write a WASM VM for it.

  • cmrdporcupine 2 years ago

    I just don't think any of the 8-bits made good compiler targets. At least not C compilers. Not enough registers, even in the 6809.

    • ncmncm 2 years ago

      C compiled for the 6502 would use the zero page (first 256 bytes of RAM) as registers, and use the actual registers just to run instructions of a higher-level abstract machine that, e.g., understood 16-bit numbers. Kind of like Xerox Alto, in that particular way; on the Alto, only device drivers were coded native.

      There was a C compiler that kept one of the 6502 index registers zero at all times, just to have a zero handy.

      This use of a little interpreter to provide a higher-level instruction architecture to program to was really, really common in the 50s and 60s. The Apollo AGC computer that landed on the moon was mostly programmed that way. It seems surprising that with memory so tight, they would use up so much of it for the virtual machine interpreter, but instructions for that could be much more compact than native code. It made a slow computer even slower, but they all felt fast back then.

      Steve Wozniak burned a little interpreter like that into the Apple ][ ROM, just smart enough for dumb jobs like copying blocks of memory. It used a reserved fragment of the zero page as its registers.
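      A minimal Python sketch of such a "little interpreter": compact bytecode giving the host a 16-bit abstract machine. The opcodes and encoding here are invented for illustration, not Sweet 16's or the AGC's actual formats.

```python
def run(code, regs):
    """Interpret a byte list against a list of 16-bit virtual registers."""
    pc = 0
    while pc < len(code):
        op = code[pc]
        if op == 0x00:                       # HALT
            break
        elif op == 0x01:                     # SETI r, lo, hi: r <- 16-bit immediate
            r, lo, hi = code[pc + 1:pc + 4]
            regs[r] = (hi << 8) | lo
            pc += 4
        elif op == 0x02:                     # ADD ra, rb: ra <- (ra + rb) mod 2^16
            ra, rb = code[pc + 1:pc + 3]
            regs[ra] = (regs[ra] + regs[rb]) & 0xFFFF
            pc += 3
        else:                                # unknown opcode: stop
            break
    return regs
```

Three or four bytes per 16-bit operation is why this style of code could be so much denser than the equivalent native 8-bit sequences.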

    • rahen 2 years ago

      These processors were designed for assembly programming, hence their very CISC-y ISA. Some 8-bitters designed for compiled languages would be the AVR family.

      • FullyFunctional 2 years ago

        That's too generous. They were primarily designed for what was easy to implement and what _could_ be handled in assembly. Also, 8080 was an evolution of the 4040 (and that owed much to the 4004). Z80 had to be compatible with the 8080... I know less of 6502 but most of them were designed about what could be done, not what would be easy to program.

        • cmrdporcupine 2 years ago

          I actually only recently learned that the 8008/8080 was in fact not based on the 4004/4040 instruction set. They used the same numbering system, but the ISA has little to nothing in common with it, and the 8008 project was in fact started before the 4004:

          https://sites.google.com/site/microprocessorintel4004/8008-8...

          • FullyFunctional 2 years ago

            Thanks, I might be misremembering. Regardless, the 8080 is still awful and they could have done better with more foresight (which is easy to say so many decades later).

    • PaulHoule 2 years ago

      When you’ve got hardly any registers you don’t have any choices about how to allocate them. What’s maddening is a chip like the 8086, where you have enough registers that how you allocate them matters, but still very little space to work in. You are left working hard on a register allocator that is still not going to be very good.

    • hashmash 2 years ago

      In theory, a smart compiler could make heavy use of the DP register (a 6809 feature), and local variables within a compiled function could then live in these "fast" direct-page variables instead. This was a common pattern when coding in assembly, and it's much faster than accessing variables off the stack. The function wouldn't be reentrant, but a compiler pragma could be used to enable/disable the DP mode. Declaring the local variables as static should be sufficient, however.

      • cmrdporcupine 2 years ago

        The WDC C compiler for the 65816 takes advantage of this (relocatable direct page). And the relocatable stack. In fact I believe what it does is relocate the stack to at least partially overlap the direct page.

        It's still an awkward target though.

      • ncmncm 2 years ago

        Back then, memory was only one cycle away, so was practically registers. This is why the 6502 zero page was so important. 6502 instructions took very few cycles, so you could move a zero-page byte to the accumulator in 3 cycles, sometimes 2, and is why a $25 6502 could match a $200 Z80.

        Z80 had a faster clock, but instructions took loads of cycles. Nowadays that is OK, but Z80 did not pipeline. It had fancy looping instructions, but they ran slower than the loop would have.

      • jhallenworld 2 years ago

        I wrote a multi-tasking OS for the 6809 and used the DP to hold the current task ID: task local variables would be in the direct page.

fnkgkyu 2 years ago

A lot of pinball machines are based on the 6800. I've been really impressed by one project that replaces the 6800 with an AVR by just wiring up all the relevant pins and holding the 6800 in halt:

https://github.com/BallySternOS/BallySternOS

I've started working on a derivative project using an RP2040 instead of the AVR, with hopes of offloading the 6800 bus work to a PIO.

bitgif 2 years ago

The Space Shuttle Main Engine, built by Rocketdyne, used redundant M68000 processors to control the engine. I would say I was lucky to have had the chance to work on a system with as great a function as the SSME.

ChrisMarshallNY 2 years ago

This was the first microprocessor I programmed.

My kit was for the school I was in. It was an STD Bus-based card, nailed onto a piece of wood, with a hex keypad/display.

The card had 256 whole bytes of RAM.

We programmed it in machine code.

davidgould 2 years ago

If you are interested in programming something like a 6800 or 6502, but would like to make a practical device and not just run in simulation, take a look at the STM8. It's a very widely used 8-bit embedded controller and architecturally very like a cleaned-up and improved 6800.

The STM8S Discovery board for this is $8 and in stock at ST and Digikey. This includes the target system and the ST-Link programmer. There are free commercial toolchains and the open-source SDCC toolchain.

tibbydudeza 2 years ago

Bill Mensch and Chuck Peddle - familiar names indeed.

Torwald 2 years ago

It was the first CPU of the Macintosh under Raskin. When Jobs took over, they switched to the 68k.

  • jecel 2 years ago

    The switch was 6809 to 68000 with 8 bit bus (like the TMS9900 in the TI99/4A) to keep Raskin happy that a machine was still cheap enough with only 64KB. The goal was to share some of the Lisa software. But having a very low cost Lisa got Jobs excited and he ended up taking over the project.

    https://www.folklore.org/StoryView.py?project=Macintosh&stor...

martyvis 2 years ago

One of the fellows at high school talked about, and I think brought along, a DREAM-6800 computer [0]. It was published as a project kit by the Electronics Australia magazine. That said, the Apple ][ and TRS-80 seemed way more functional. It wasn't until a few years later at uni that I got to really enjoy low-level programming and working directly with I/O.

(I think the guy with the DREAM-6800 ended up making a motza making poker machines)

[0] http://www.mjbauer.biz/DREAM6800.htm

engineer_22 2 years ago

I just finished an undergraduate class on microcontroller applications that used a MC68HC12 dev board. We coded in Assembly and then later in C.

I'm curious, is this a typical platform in other computer engineering programs in the US?

  • PAPPPmAc 2 years ago

    The university I'm attached to used 68HC11-family dev boards for their intro embedded course until the early/mid 2010s, with a smattering of other platforms like 8051 derivatives in advanced classes, and have switched to ARM (Small Cortex M ARMs, typically TI TM4C123 TivaC boards) for almost all our embedded content since. Plus a few little Arduino based activities with the freshmen, though the form of that has changed over the years.

    The intro to embedded systems course used to be only taken by Computer Engineers around their Junior year, since ~2017 we do it a semester or so earlier and make the EEs take it too as part of a streamline; EEs no longer take the computer architecture course, and the new embedded course covers some basic architecture concepts.

    We still do the beginning of the semester in Assembly (we really only show them ARM Thumb) and the later part in C.

    Possibly the most revelatory thing about that course is that the low-level view means we find out and try to correct that most of our students (as second semester sophomores who have in theory passed at least two programming courses and a digital logic course) haven't the slightest idea what code actually means/does.

    Example: Every semester I've been involved, when we transition from assembly to C, we give a simple assignment to sort some arrays of (well-documented) structs by a specified field and order, in both C and assembly, given the address where the first element starts and the length in elements. They are handed a starter project with a lightly-obfuscated object file that sets up the arrays, calls two provided function headers for them to fill in, then tests whether the sorts succeeded. Details get changed every semester because students cheat compulsively on programming assignments, but it's always set up to be easy; the structs they handle in ASM are always 16 or 32 bytes in length, stored aligned, etc.

    Many of them... struggle mightily for two weeks because they haven't actually retained anything about number representation, memory (size, layout, byte addressing), arguments, the difference between a value and a pointer, and so on. The course staff spend weeks doing patient remediation around that point in the semester. At least we get a chance to make another pass over that material and more of them get it after.
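    For flavor, here is a hypothetical Python miniature of the kind of exercise described: fixed-size records packed into a flat byte buffer, sorted by one field read at a known offset. The 16-byte layout and field order are made up, and the real assignment is in C and assembly.

```python
import struct

# Four 32-bit little-endian fields per record, 16 bytes each (invented layout)
REC = struct.Struct("<4i")

def sort_records(buf, field):
    """Return the buffer with its records reordered by the given field index."""
    n = len(buf) // REC.size
    recs = [buf[i * REC.size:(i + 1) * REC.size] for i in range(n)]
    recs.sort(key=lambda r: REC.unpack(r)[field])
    return b"".join(recs)
```

Even in this toy form, getting it right requires exactly the concepts the students stumble on: record size, byte addressing, and the difference between an element's value and its location in the buffer.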

  • hakfoo 2 years ago

    My CS programme (class of 2003) had one semester of assembly programming, either 68HC11 or 8086. I took the 8086 flavour and it was definitely designed around dev-board style development rather than expecting a rich BIOS/OS support environment.

    There was a more embedded-focus "Computer Systems Engineering" degree option which involved a lot more assembly..

  • mtnygard 2 years ago

    I had a similar course using the 68HC11... but that was in 1991 or '92. I am (pleasantly) surprised to hear that this kind of class is still running. Haven't done embedded for a long time, so I don't know if the 68HC12 is current or not.

    Wouldn't worry though. Even if that chip is not in wide use now, the skills you learned are transferable to any other.

  • shimonabi 2 years ago

    At our company we still use a MC68HC11 for an industrial device.

    • exmadscientist 2 years ago

      Be careful -- they're rapidly going obsolete. No one wants to fab old EPROM processes, and no one wants to package PLCCs anymore. If it's neither of those, I think you've got a bit more time.

      There are "compatibles" out there, and some of them are very good indeed, but they're not without their hassles.

      • mark-r 2 years ago

        I think I have a 68HC11 sitting around that I picked up for a project and never used. I wonder if it's worth anything?

        • exmadscientist 2 years ago

          One-off hobbyist/repairman/desperation quantities will be available for years. It's industrial, new-build quantities that are going to be a problem.

          And, yes, the 68HC11 was on December's EOL list, so I hope anyone who was using it in products has a plan! (Or hire my company, we fix things like that... not that it's the most fun work in the world, we have other things we'd rather be doing....)

aliswe 2 years ago

Serious question, why is this on Hacker News? is it a classic of some sort?

  • Tor3 2 years ago

    The 6800 definitely is a classic, yes. The classics included at least the 6800, 6502, 8080, and Z80. (To some extent the 8085, but as an "expanded" 8080 the Z80 dominated.) The four of them powered the most iconic systems at the start of the microprocessor "takeover" at the end of the seventies (at the same time as I entered my electronics education). The 6800 was eventually left behind by the others, but was used in e.g. the SWTPC 6800, and also in a lot of minicomputer (e.g. DEC) peripherals at the time (including variants like the 6802 with a little on-board RAM).

    • AnimalMuppet 2 years ago

      Was the 8085 ever used much as a microcomputer CPU? I've seen it as an embedded CPU, but I don't recall it getting much run as a general-purpose CPU.

      • krallja 2 years ago

        You could run CP/M on it, of course, but Digital Research didn’t see the point in creating 8085 or Z80 specific versions of CP/M, so they just ran the 8080 version.

      • Tor3 2 years ago

        I've only touched one 8085-based CP/M system myself. I don't think they were that common, certainly nothing compared to Z80 and 8080.

      • renewedrebecca 2 years ago

        The 8085 was in the Tandy 100 and 102 computers.

    • timbit42 2 years ago

      The 1802 is neglected again.

      • Tor3 2 years ago

        The Cosmac ELF... yes, I too was aware of the 1802, and there are of course other important microprocessors from that time period. But those four covered a lot among themselves - in particular the trio 6502/8080/Z80. But of course there were others: the Texas TMS9* family, the 6800 already mentioned, and then there's the 6809 (Tandy Color Computer, the Dragon (basically a clone), and the 6809 variant of the aforementioned SWTPC). Or we may as well just find a list: https://en.wikipedia.org/wiki/Microprocessor_chronology

andrekandre 2 years ago

  > It has 72 instructions with seven addressing modes for a total of 197 opcodes.

Noob question: why so many addressing modes?
  • KMag 2 years ago

    It has very few general-purpose registers (A and B accumulators, IX and SP indexes), and it was designed when few (if any) mainstream processors were pipelined. Fewer addressing modes would mean more intermediate values to store, and more cycles spent in calculating their values.

    If you have plenty of registers and a pipelined processor with a decent bypass network to get intermediate results available earlier, then it makes sense to simplify the addressing modes (to increase frequency and/or shorten the pipeline). However, the 6800 had neither many registers nor a pipelined ALU.

  • ddingus 2 years ago

    Back then people wrote assembly language themselves.

    At 1-2 MHz, sometimes kHz, it was how one got performant code.

    Addressing flexibility makes assembly language programming practical for people to do.

    Check out the 6809 for a beautiful ISA and probably one of the most powerful 8 bit CPUs.

    • KMag 2 years ago

      But they pretty commonly used macro assemblers, so they could have used macros to emulate more complex addressing modes. I think code density and performance had a bigger impact than developer ergonomics on the decision to implement the more complex addressing modes in hardware instead of assembler macros.

      • ddingus 2 years ago

        I agree with you. Both of those were a big deal given how little RAM was often available. Many systems measured in bytes!

        On those, hand-assembling programs was common.

        Bigger systems, kilobytes and larger, saw far more common use of macro assemblers.

        The first assembler I bought was a Macro assembler.

    • gary_0 2 years ago

      "You kids and your RISC! Back in my day we had seven addressing modes and we liked it!"

      Everyone nowadays takes it for granted that you can use software to write software. There are still lots of graybeard programmers around who had to make do with punch-cards when they were learning.

      • ddingus 2 years ago

        LMAO!

        I never did punch cards, but I have used paper tape with one of these:

        https://w140.com/tekwiki/wiki/4051

        And here is a video of the storage CRT (they are beautiful to watch in person, and the one I used had a 2048x2048 vector space to draw in):

        https://youtu.be/YQMQ62glZ44

        On mine, I had a cassette like tape drive that had some CNC programming tools on it. Another tape had various utilities and some crude simulation programs.

        User data was on paper tape, usually source code, or finalized G-code, or plots one might want to reproduce. Frankly, I love paper tape. And I have had to read it, patch instructions in with one of those little machines with all the hand push button punches and index gear to keep it lined up. Same for damaged tapes.

        And my first machine language programs were hand assembled from blurry, photocopied data book pages mooched from the local university.

        Sidenote: Moto was cool. I asked about documentation later for the 6809, and was able to have my parents take me to a local office where I got to chat with an engineer and left with a pile of docs, and reference databooks!

        I did not get an assembler of my own until I mowed a lot of lawns and bought MAC/65 for my Atari. Prior to that, I was typing stuff into the mini-assembler on the Apple.

        I was a kid, 14 for that stuff.

        Later at 19 I ended up working in small shops using hand-me-down Tektronix gear while attending college. Super glad I fell into that experience, frankly. Those Tek computers were odd, but well conceived and more powerful than one might think 8-bit stuff could be. The people in that shop made some impressive stuff essentially laid out and programmed on the 6800 CPU found in the Tek storage tube terminals / computers. They were interesting designs.

        One could get one and just set it up for serial comms and use it as a weird but capable text display and/or graphics display, like a paperless plotter. Xterm has a Tektronix mode to support that even today.

        Add some ROM and RAM, and peripherals, and then it was a powerful, technical computing micro computer. Disk drives, cartridge tape, paper tape read and punch, plotter, joystick, and off you go! Never did get to use a disk. But that fast cartridge tape drive and paper tape worked better than expected.

        What I find interesting is that younger people are checking this stuff out and/or building their own gear. 8 bits is enough to really do stuff and understand the entire thing. While not practical given what we have today, it all is still educational in a way that appears to remain potent.

        For my own fun reasons, and some product development, I have the luxury of...

        I keep an Apple //e Platinum on my work bench. And I use it to do electronic projects the same way I did as a kid. Good for simple prototypes or to understand a sensor, do comms. When my current project slows, I plan on making a card with a Propeller chip on it to make a cool dev station that works like an Apple with command line, just type a line and go BASIC, as well as self hosted compiler and assembler... good times. And practical. People I work with and I have done a couple designs. It all works just fine and it is simple. No updates, no OS, just lean and mean.

    • tgv 2 years ago

      > Back then people wrote assembly language themselves.

      Not only this. It made for more compact code, and memory was very expensive. So expensive that 64kB was practically unheard of. In 1980, 4kB would cost $100 or so. If I read it correctly, the original board with the 6800 was shipped with 128B.

      • ddingus 2 years ago

        Same with the KIM 1 boards.

        128 bytes is a lot! Seems insanely small now, but back then every bit mattered. And being so close to the hardware meant being able to do things super lean and mean.

        On the KIM 1, the RIOT chip came with 128 bytes of RAM, and Rockwell, MOS, and others also later packaged the CPU with small amounts of RAM, timers, and other handy things. Early origins of system-on-chip designs. And there was a jumper or solder pad one could use to sort out address decoding should more RAM be added.

        128 bytes, probably split between zero page and the stack, can do a lot!

        For perspective, the Atari 2600 had 128 bytes of RAM in its RIOT chip and that was the total system RAM!

        Atari 2600 Space Invaders fit into 4K ROM and the 128 bytes of RAM.

        https://youtu.be/QB7GGkCx5Bg

        No frame buffer, no tile engine!

        Graphics were generated line by line, game logic happening during the screen blanking period!

      • 8bitsrule 2 years ago

        >memory was very expensive

        No lie! I recall paying $200 for a Godbout 12KB (static RAM) board kit (96 1K chips to hand-socket) for my H-8 (8080; Heathkit included no RAM). Later added another 12KB for an incredible 24KB total. Only a few years later at a swap meet I landed two 16KB manufactured boards for only $50!

        • ddingus 2 years ago

          Brutal times!

          A similar thing happened with SGI workstations. Drop $20K on a high-end box, then score one for a fraction of that on eBay some while later. Not long enough later.

  • analog31 2 years ago

    Another way of thinking about it was that things were done with the index registers and addressing modes, that would be represented by pointers in a language like C.
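    A rough C sketch of that correspondence (the 6800 mnemonics in the comments are from memory and only illustrative, not compiler output):

    ```c
    #include <assert.h>

    /* Sum a byte buffer the way a 6800 programmer would: walk a pointer,
     * which maps directly onto the X index register and indexed addressing. */
    static unsigned sum(const unsigned char *p, int n) {
        unsigned total = 0;      /* lives in the A/B accumulator pair */
        while (n--) {
            total += *p++;       /* 6800: ADDA 0,X then INX */
        }
        return total;
    }

    int main(void) {
        unsigned char buf[4] = {1, 2, 3, 4};
        assert(sum(buf, 4) == 10);
        return 0;
    }
    ```

    The indexed mode does the pointer dereference plus offset in one instruction; without it, each access would need extra instructions (and scarce registers) just to form the address.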

    • KMag 2 years ago

      That is true, but the same could be said of even an extreme RISC ISA that only has an indirect addressing mode. Taken by itself, it doesn't help explain why late 1970s processors tended to have a lot of addressing modes.