They own Mellanox now, and they have their own ARM core.
I could imagine them coming out with a Network GPU that has nothing but 100GbE/200GbE and power.
The ARM core doesn't have to be super-fast if it's just setting up RDMA transfers of data and command buffers.
“There are no GPUs in this system because there are no European makers of GPUs”
Cough cough. Mali. Imagination. Even Apple.
Well, I guess they count Mali (ARM) and IMG as UK and not EU?
Last I checked, Europe is a continent, not a bureaucracy.
"The beauty of Arm is that it is open. You can add all sorts of interesting technologies that will be beneficial to supercomputing, such as being tightly integrated. By marrying an Arm CPU with a Tesla GPU..."
The essence of it. Nvidia is so closed that it makes development very painful. At the moment it's the required path for deep learning, but hopefully it'll change a bit.
Does x86 maintain any real advantages over ARM at this point, other than the ability to run legacy software? I'm genuinely asking; all I know is that ARM devices can be really powerful these days and they're also much more power efficient.
There is no power efficiency advantage in today's actual high-performance ARM chips. See the Cavium ThunderX2 server chips: their dual-socket, 32-cores-per-socket system is generally slower[1] (though it is faster in a couple of benchmarks) and draws more power than similarly configured x86 servers[2].
[1] https://www.servethehome.com/cavium-thunderx2-review-benchma...
[2] https://www.servethehome.com/updated-cavium-thunderx2-power-...
In many cases dramatically so (and I say this as someone who doesn't even like x86 much).
It depends on your application. For small embedded applications where things like BOM cost or electrical power massively dominate, neither x86 nor ARM is worth it (and this is in a world where you can get a small ARM for less than a dollar -- we're talking about CPUs that cost pennies in volume).
For pretty much every "mobile" application (where "mobile" includes not just phones and tablets but TVs, IoT etc) ARM is the majority. Intel BTW had several opportunities to dominate this with non-x86 devices and screwed all of them up.
But for desktop and especially server machines, the two x86 vendors really crush the alternatives (until a couple of months ago I would have said "Intel" rather than "x86 vendors", but AMD is really, really hot right now). They've spent decades going after a design point with lots of fast I/O lanes, huge caches, and lots of microarchitectural units, and the result is machines that can get a lot of data in and out quickly and operate on a lot of it at once. Basically they are designed for large sustained workloads. That's an interesting place to be financially, but it's a proportionately decreasing share of the computation market.
It's great to see that ARM-based machines have made strides in supercomputing, but those workloads are quite specialized, so you can't really generalize from that. I do believe that long term that's also a market where x86's complexity will work against it, since supercomputing workloads are typically architected around the machine they will run on.
It's not just supercomputing.
There's a push for networking chips (ones with built-in multi-gigabit NICs): Mellanox BlueField, NXP Layerscape (the one that's in the new SolidRun workstation!)...
And for regular old server chips (or "cloud" in marketing speak): Ampere eMAG, Amazon Graviton...
Networking chips typically won't have the kind of workload I was talking about, so they're a natural fit for an ARM or MIPS (or, someday I hope, RISC-V) device. x86 is just too expensive, and Intel long ago abandoned (for economic reasons) the lifetime guarantees needed for such devices.
There are some server experiments, but they are a long way from mainstream. You'd think they might be fine for microservices, but if you're AWS it's better to pack a lot of VMs onto big machines because the CPU cost doesn't dominate. Someday, I hope.
AWS is putting a lot of VMs on their 16-core Cortex-A72 chips :)
> much more power efficient
Ehh… the x86 ISA does require variable-length instruction decoding, but from what I've heard it's not that much of a power hog, especially in high-power scenarios. Microarchitecture is what matters.
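To make the decoding point concrete, here's a small illustrative sketch (the byte strings are real x86-64 encodings; the point is just that their lengths vary, while every AArch64 instruction is a fixed 4 bytes):

```python
# Illustrative sketch: a few real x86-64 instruction encodings.
# Lengths vary (here from 1 to 5 bytes; up to 15 in general), so a
# decoder must work out each instruction's length before it can even
# find the start of the next one -- that's the serial step that
# variable-length decoding adds to the front end.
x86_64_examples = {
    "ret":          b"\xc3",                  # 1 byte
    "add rax, rbx": b"\x48\x01\xd8",          # 3 bytes (REX.W prefix + opcode + ModRM)
    "mov eax, 1":   b"\xb8\x01\x00\x00\x00",  # 5 bytes (opcode + 32-bit immediate)
}

AARCH64_INSN_BYTES = 4  # every AArch64 instruction is exactly 32 bits wide

for name, encoding in x86_64_examples.items():
    print(f"{name:14} -> {len(encoding)} byte(s)")
```

In practice high-end x86 cores hide most of this cost with things like decoded-uop caches, which is why the microarchitecture ends up mattering more than the ISA.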
Currently, Apple has an incredible microarchitecture, possibly even THE best, but they aren't sharing it with anyone :(
> real advantages over ARM
I guess being first to bring wider SIMD to market: 256-bit AVX2 is widespread, and AVX-512 is available on real products. ARM SVE will be amazing when it ships to actual users.
Though keeping in mind the potential problems of running super-wide SIMD on the same cores as everything else ( https://blog.cloudflare.com/on-the-dangers-of-intels-frequen... ) and the limited number of truly SIMD-heavy programs (x264 and.. something?), it's not a big deal. AMD Zen 1 is 128-bit anyway, btw.
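The frequency-scaling concern behind that Cloudflare post comes down to a bit of Amdahl-style arithmetic. A hedged sketch with made-up illustrative numbers (not measurements of any real chip):

```python
def effective_speedup(width_ratio: float, freq_ratio: float,
                      vector_fraction: float) -> float:
    """Estimate overall speedup from wider SIMD when using it downclocks the core.

    width_ratio:     how much wider the vectors are (e.g. 2.0 for 256 -> 512 bit)
    freq_ratio:      clock under wide-SIMD load vs. normal clock (e.g. 0.85)
    vector_fraction: fraction of runtime that actually vectorizes

    Only the vectorized fraction benefits from the extra width, but the
    frequency penalty applies to *everything* running on that core.
    """
    scalar_time = 1.0 - vector_fraction
    vector_time = vector_fraction / width_ratio
    return freq_ratio / (scalar_time + vector_time)

# Fully vectorized code still wins: 2x width at 0.85x clock -> 1.7x overall.
print(effective_speedup(2.0, 0.85, 1.0))
# Mostly scalar code loses: at only 20% vectorized, the downclock makes the
# whole program slower than not using the wide instructions at all (< 1.0).
print(effective_speedup(2.0, 0.85, 0.2))
```

Which is exactly why wide SIMD only pays off for the handful of programs (like x264) that spend almost all of their time in vector code.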
I'm also curious how fellow HNers think x86 will fare in the next 10, 20, 30 years. Will we still be using x86 as the most common option in laptops by then?
FWIW, all my laptops are currently ARM-based (Chromebooks), for the simple reason of price + battery life.
x86 will be around in some form for at least 25 years, since legacy programs require it. Microsoft has played with x86 emulation on ARM, and they've had an ARM build of Windows for years now.
We're sorta moving in the ARM direction for certain applications; budget laptops are already flirting with ARM, to varying success.
I think what we'll see is better compiler/tooling support for cross-compilation for modern programs, and a slow movement towards ARM as it matures in performance/cost.
Yeah, x86 has vendors that don't absolutely despise open-source software.
Given that they're trying to push their Tegra platform for automotive and robotics uses, I'd hope so.