Wow, so providing a tool for bypassing the protection mechanism of a device (a CPU) is accepted when it comes from Google?
Try this on any game console or DRM-protected device and you are DMCAed before you know it.
We broke the "encryption" (more like scrambling) of the AMD K8 and K10 CPU microcode updates. We released tooling to write and apply your own microcode updates. AMD did not take any actions against us. Granted, this was a university project, so we were clearly within the academic context, but we were in no way affiliated with a too-big-to-sue company.
https://www.usenix.org/system/files/conference/usenixsecurit...
https://informatik.rub.de/veroeffentlichungenbkp/syssec/vero...
https://github.com/RUB-SysSec/Microcode
How much do the Zen CPUs targeted by the current microcode reverse engineering have in common with the K8/K10 you looked at?
I have not looked at the format of the microcode yet, so this is only based on the blog post and discussions. K8 and K10 were based on RISC86, just like Zen seems to be. There are some parallels, especially when it comes to sequence words and branch delay slots. There are also major differences, like moving from triads to quads. I assume there are quite a few similarities, but the current authors are better qualified to answer this at this point.
Any encryption/signature that can be broken in software on affordable hardware is just that: BROKEN.
What is your theory of harm? Who is harmed and how? Why should the law protect them by restricting the freedom of others?
AMD *sold* these CPUs to the customers potentially running this tool on their hardware. And that makes you think AMD should be entitled to restrict what the public is allowed to know about its products, or does with them, post-sale?
Also, if AMD is still in control, shouldn't they be liable too? Should users get to sue AMD if an AMD CPU gets compromised by malware, e.g. via the next side-channel attack?
I might start to feel some sympathy for AMD and Intel if they voluntarily paid all their customers for the effective post-sale performance downgrades inflicted on them by the mitigations required to make their CPUs fit for purpose.
Are you talking about legalities? AFAIK hardware jailbreaking/homebrew tools are fine even in jurisdictions blighted with the DMCA, unless they're specifically for circumventing DRM.
If it's more about morals: publishing vulnerability research tooling is business as usual for white-hat vulnerability researchers, whether they work at bigcorps or not, and it has a long history. It seems surprising to see this kind of "not cool" comment on this site.
We live in an age where it's okay to pirate terabytes of data if you're Meta.
‘In the courts, you will be deemed either innocent or guilty, according to your wealth or poverty.’
What a sad and anti-hacker mentality comment.
It's not who releases it, it's who is the target that makes the difference. AMD chooses not to sue the researchers, whereas a game console maker would probably sue.
Ok, but that's on "you" for being braindead enough to release something like that on GitHub. Release it anonymously and the DMCA paper becomes toilet paper.
Same with apps, aka everything is open source if you know RE ;-)
only if it's a nintendo console :)
Nintendo just shut down some GitHub repos over lousy DMCA claims:
https://github.com/github/dmca/blob/master/2025/02/2025-02-2...
The blog post that explains the exploit and how this whole thing works is at https://bughunters.google.com/blog/5424842357473280/zen-and-...
Is the mitigation something that has to be installed on every system boot, and does it only protect against microcode exploits later in that boot?
> The fix released by AMD modifies the microcode validation routine to use a custom secure hash function. This is paired with an AMD Secure Processor update which ensures the patch validation routine is updated before the x86 cores can attempt to install a tampered microcode patch.
What if your CPU microcode already has malware which injects itself into the microcode update?
https://dl.acm.org/doi/10.1145/358198.358210
CPUs don't have non-volatile storage for microcode updates; it gets uploaded on boot from a copy stored alongside the other firmware in a flash chip on the motherboard, or optionally later in the boot process when an OS loads a microcode update from some other storage device. So a malicious microcode update that's trying to persist itself doesn't have to monitor for attempts to update CPU microcode; instead, it has to detect attempts to install a BIOS update that includes a microcode update, find and poison the microcode update embedded within that BIOS update, and subvert any attempt to checksum the flash contents before rebooting. Fitting an attack that complex into CPU microcode patches that are on the order of a kilobyte is extremely implausible.
I would guess it is a BIOS patch, just like the microcode normally is.
So it probably needs to be installed at every system boot.
Perhaps someone more knowledgeable can correct my guesses?
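For what it's worth, the per-boot nature is easy to sanity-check on Linux: the kernel reports the currently loaded microcode revision per core, which reflects whatever patch the BIOS or OS applied on this boot, and the CPU falls back to its ROM baseline after a power cycle. A minimal sketch, assuming x86 and the standard /proc interface (the revision value shown is illustrative):

```python
# Read the microcode revision the Linux kernel reports for each core
# in /proc/cpuinfo (x86 only). The value reflects the patch applied on
# this boot; a power cycle reverts the CPU to its ROM baseline until
# the patch is re-applied by the BIOS or OS.
def microcode_revisions() -> set[str]:
    revs = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("microcode"):
                revs.add(line.split(":", 1)[1].strip())
    return revs

print(microcode_revisions())  # e.g. {'0xa201210'} (illustrative value)
```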
I love that Tavis is only listed as a "Software Engineer"
This is not the first case of accidental reuse of example keys in firmware signing; see https://kb.cert.org/vuls/id/455367
Would it be useful to have a public list of all the example keys that could be accidentally used, which could then be tested in CI/CD against all publicly released firmware and microcode updates?
If there was a public test suite, Linux fwupd and Windows Update could use it for binary screening before new firmware updates are accepted for distribution to endpoints.
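As a rough sketch of what that screening step could look like (the key list here is illustrative, not exhaustive, and a plain substring scan only catches keys embedded literally in the image, not keys baked into a separate verifier):

```python
import sys

# Illustrative, not exhaustive: byte patterns of well-known example/test
# keys that should never appear in production firmware images.
KNOWN_EXAMPLE_KEYS = {
    "RFC 4493 AES-CMAC example key":
        bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c"),
}

def scan(path: str) -> list[str]:
    with open(path, "rb") as f:
        blob = f.read()
    return [name for name, key in KNOWN_EXAMPLE_KEYS.items() if key in blob]

if __name__ == "__main__":
    for hit in scan(sys.argv[1]):
        print(f"FAIL: image contains the {hit}")
```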
Hyundai used both this same NIST AES key _and_ an OpenSSL demo RSA key together in a head unit! (search “greenluigi1” for the writeup).
Using CMAC as both the RSA hashing function and the secure boot key verification function is almost the bigger WTF from AMD, though. That’s arguably more of a design failure baked in from the start than something that could be caught after the fact.
It doesn't really fix the underlying "didn't hire a qualified cryptographer" issue. By the time third-party scanning finds the key in released firmware, millions of chips will already have been produced.
Plus, it would only help with that one issue, not with the millions of other ways things can go wrong. Vendors publishing their security architecture so others can convince themselves that it is in fact secure would be better; that is how TLS and WPA get enough eyeballs.
Both AMD and Google note that Zen 1-4 are affected, but what changed with Zen 5? According to the timeline, it was released before Google notified AMD [1].
Is it using different keys but the same scheme (which could possibly be broken via side channels, as noted in the article)? Or perhaps AMD noticed something and changed up the microcode? Some clarification on that part would be nice.
[1] https://github.com/google/security-research/security/advisor...
We were not able to demonstrate that Zen5 is affected. If we end up doing so, we may release a new advisory or something.
Are there any examples of using this for non-nefarious reasons? For instance, could I add new instructions that made some specific calculation faster?
You can make a new instruction (or repurpose an existing one) that accesses physical memory bypassing the page walk, which would be faster. You can also make instructions that bypass some checks (like privilege checks) and squeeze out a tiny bit of performance. Note this would introduce security issues, though, so you could only use it with trusted software.
Yes. It's been done before for Intel CPUs. https://misc0110.net/files/cpu_woot23.pdf
It's interesting to think about the sorts of things we could do if we had low level control over our hardware. Unfortunately things seem consistently headed in the opposite direction.
Hopefully RISC-V changes things a bit.
RISC-V wouldn’t help here at all. There’s nothing about RISC-V that prevents a CPU manufacturer putting in custom instructions and not documenting them.
I understand that, but my sliver of hope is that since it's an open architecture, there will be manufacturers that make very hackable versions of it.
Doesn't changing how your CPU's microcode works mean you can bypass or leak all kinds of security measures and secrets?
Yes.
Random off topic question: could one theoretically (with infinite time and resources) write new microcode firmware for a modern processor that turns it into an armv8+ processor?
As others have pointed out, the short answer is no. The longer answer is still no if you value execution performance at least a little bit.
However, maybe, there is a way. Back when we were researching microcode we found a talk [1] that ran multiple ISAs in parallel on the same processor using microcode. We never figured out how this worked; our best guess is either swapping microcode in from RAM as needed or branching to an emulator written in x86 code. If this was a K10 CPU, which might have been a bit old already at the time of the talk, then there is no way you could fit an ARM interpreter into the update: you had, IIRC, 32 triads of 3 operations each. Maybe, just maybe, you could fit a bytecode interpreter that then executes the actual ISA emulator. However, you would need to hook every instruction, or at least trap on each instruction fetch and hook the appropriate handling routine, and both sound very complicated.
If your infinite resources include manufacturing new silicon with the proper fast path and microcode decoder, then yes, but note that x86 and ARM have different memory models. Also at that point you just have a very expensive, very inefficient ARM processor.
[1] https://troopers.de/events/troopers16/655_the_chimaera_proce...
Without more in-depth knowledge here, my guess would be yes, if you can fit an ARMv8 emulator into the space available for microcode. The set of instructions the CPU runs natively vs. those emulated via microcode is already pretty extensive; running an ARM emulator on top should in theory not make too much of a difference, since you are essentially already running an x86 emulator on whatever the Ryzen-internal instruction set really is.
Not on these, since the decoder is hardwired for x86-shaped instructions (prefixes, etc). Some instructions are also hardwired to produce certain uops.
In the introduction they explain that it's not possible:
"The first question everyone has about microcode updates is something like "So can I execute ARM64 code natively on my Athlon?" It's a fun idea, but now we know that a microcode patch doesn't work like that -- so the answer is no, sorry!"
I'd also be interested to know to what extent this will be possible
I wonder if this can be used to figure out what code is running on the PSP?
Was the microcode signing scheme documented by AMD, or did the researchers have to reverse engineer it somehow? I couldn't see a mention in the blog post.
From the blog post:
> We plan to provide additional details in the upcoming months on how we reverse engineered the microcode update process, which led to us identifying the validation algorithms
> You can use the `resign` command to compensate for the changes you made:
How does that work? Did someone figure out AMD's private keys?
The intro document mentions
> Here's the thing - the big vendors encrypt and sign their updates so that you cannot run your own microcode. A big discovery recently means that the authentication scheme is a lot weaker than intended, and you can now effectively "jailbreak" your CPU!
But there's no further details. I'd love to know about the specifics too!
They accidentally used the example key from the AES-CMAC RFC; the full details are in the accompanying blog post: https://bughunters.google.com/blog/5424842357473280/zen-and-...
Yikes! One would have expected a little more code review, or a design review, from a hardware manufacturer, especially for a security system. A system that people have been worried about since the Pentium FDIV bug.
I guess this one just slipped through the cracks?
I suppose the reuse wasn't accidental, but they mistakenly thought the key doesn't matter for CMAC.
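To make it concrete why the key does matter: with the CMAC key public, anyone can compute valid tags for arbitrary data, and two-block collisions can be constructed analytically. A minimal sketch, assuming Python with the `cryptography` package and the RFC 4493 example key (the message contents are made up):

```python
from cryptography.hazmat.primitives.cmac import CMAC
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# The AES-128 example key from RFC 4493 (reportedly the one AMD shipped).
KEY = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")

def aes_enc(block: bytes) -> bytes:
    enc = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def aes_cmac(msg: bytes) -> bytes:
    c = CMAC(algorithms.AES(KEY))
    c.update(msg)
    return c.finalize()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# For full-block messages, CMAC(m1 || m2) = AES(AES(m1) ^ m2 ^ K1), so
# choosing m2' = AES(m1) ^ m2 ^ AES(m1') makes the final AES input (and
# hence the tag) identical for a different message.
m1, m2 = b"original block 1", b"original block 2"
m1_forged = b"tampered block 1"
m2_forged = xor(xor(aes_enc(m1), m2), aes_enc(m1_forged))

assert aes_cmac(m1 + m2) == aes_cmac(m1_forged + m2_forged)
print("colliding tag:", aes_cmac(m1 + m2).hex())
```

Any "hash" you can collide at will gives whatever signature is built on top of it no integrity at all.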
Taking "never roll your own" too far.
This work is related to the recently found signing weakness; supposedly the fake key used by `resign` only works with unpatched CPUs.
Something worth noting:
CPUs have no non-volatile memory -- microcode fully resets when the power is cycled. So, in a sensible world, the impact of this bug would be limited to people temporarily compromising systems on which they already had CPL0 (kernel) access. This would break (possibly very severely and maybe even unpatchably) SEV, and maybe it would break TPM-based security if it persisted across a soft reboot, but would not do much else of consequence.
But we do not live in a sensible world. The entire UEFI and Secure Boot ecosystem is a complete dumpster fire in which the CPU, via mechanisms that are so baroque that they should have been disposed of in, well, the baroque era, enforces its own firmware security instead of delegating to an independent coprocessor. So the actual impact is that getting CPL0 access to an unpatched system [0] will allow a complete compromise of the system flash, which will almost certainly allow a permanent, irreversible compromise of that system, including persistent installation of malicious microcode that will pretend to be patched. Maybe a really nice Verified Boot (or whatever AMD calls its version) implementation would make this harder. Maybe not.
(Okay, it's not irreversible if someone physically rewrites the flash using external hardware. Good luck.)
[0] For this purpose, "unpatched" means running un-fixed microcode at the time at which CPL0 access is gained.
> enforces its own firmware security instead of delegating to an independent coprocessor
That depends on how we define "independent" - AMD's firmware validation is carried out by the Platform Security Processor, which is an on-die ARM core that boots its firmware before the x86 cores come up. I don't know whether the microcode region of the firmware is included in the region verified by their Platform Secure Boot - skipping it on the basis that the CPU's going to verify it before loading it anyway seems like an "obvious" optimisation, but there's room to implement this in the way you want.
But raw write access to the flash depends on you being in SMM, and I don't know to what extent microcode can patch what SMM transitions look like. Wouldn't bet against it (and honestly would be kind of surprised if this was somehow protected), but I don't think what Google's worked out here yet gives us a solid answer.
By “firmware security” I meant control of writes to the SPI flash chip that controls firmware. There are other mechanisms that try to control whether the contents of the chip are trusted for various purposes at boot, and you’re probably more familiar with those than I am.
As for my guesses about the rest:
As far as I know (and I am not privy to any non-public info here), the Intel ucode patch process sure seems like it can reprogram things other than the ucode patch SRAM. There seem to be some indications that AMD’s is different.
I would bet real money, at fairly strong odds, that this ucode compromise gives the ability to run effectively arbitrary code in SMM and CPL0, without even a whole lot of difficulty beyond reverse engineering enough of the CPU to understand what the uops do and which patch slots do what. I would also bet, at somewhat less aggressive odds, that ucode patches can do things that even SMM can’t, e.g. writing to locked MSRs and even issuing special extra-privileged operations like the “Debug Read” and “Debug Write” operations that Intel CPUs support in the “Red Unlock” state.
> But raw write access to the flash depends on you being in SMM
Look at tests/stop.sh and check the different segments (ls:, ms:, etc.; you can also address them like 0:[..], 1, 2, 3, ... 15:[...]). One of those is probably flash. If you know what it looks like, try to dump it first with a load, check which segment and address it is at, and then write back to it.
SEV attestation does delegate to the PSP, no? I think it _might_ be reasonable to attest that upgraded microcode is both present and valid using SEV, without the risk of malicious microcode blinding the attestation, but I’m not positive yet - need to think on it a bit more.
This probably depends on a lot of non-public info: how does the PSP validate CPU state? where does PSP firmware come from? can the PSP distinguish between a CPU state as reported by honest ucode and that state as reported by the CPU running malicious ucode?
I think that, at least on Intel, the “microcode” package includes all kinds of stuff beyond just the actual CPU microcode, and I think it’s all signed together. If AMD is like this, then an unpatched CPU can be made to load all kinds of goodies.
Also, at least on Intel (and I think also on AMD), most of the SPI flash security mechanism is controlled by SMM code. So any ranges that the CPU can write, unless locked by a mechanism outside the control of whatever this bug compromises, can be written. This seems pretty likely to include the entire SPI chip, which includes parts controlling code that will run early after the next power cycle, which can compromise the system again.
PSP firmware is in system flash, but is verified by the PSP with its own signing key. PSP firmware is loaded before x86 comes up, and as long as the SEV firmware measures itself and as long as it patches the microcode loader before allowing x86 to run (which the description of the patch claims it does) I think SEV is rescuable.
> This probably depends on a lot of non-public info: how does the PSP validate CPU state?
https://github.com/amd/AMD-ASPFW/blob/3ca6650dd35d878b3fcbe5...
That seems to be reading a memory mapped register per core. I wonder what backs that register.
I wonder if anyone involved could define 'zen.' I know the answer.