I used to be pro-AnandTech and considered them one of the best sources online for hardware news. But the fact that they have yet to write a single post, big or small, about Intel's Zombieload and its implications for performance worries me a bit.
Then there are the usual "Intel" benchmarks [1] on GPU, suggesting the two CPUs were both running at a 25W TDP to give a "fair" comparison, without mentioning that the Ice Lake-U CPU was running with roughly 50% more memory bandwidth than the AMD Ryzen. And we know graphics benchmarks depend a lot on memory bandwidth. The memory used was at least mentioned on Tom's and other sites, but not AnandTech. (Although none of them mentioned the bandwidth difference; it was up to the reader to work it out.)
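For what it's worth, the gap is easy to ballpark. Assuming - my assumption, not something the reviews stated - the Ice Lake-U system used LPDDR4X-3733 and the Ryzen system DDR4-2400, both on a 128-bit bus:

```python
# Hypothetical configs: LPDDR4X-3733 on Ice Lake-U vs DDR4-2400 on the
# Ryzen system, both with a 128-bit (dual-channel) memory bus.
def bandwidth_gbs(mt_per_s: int, bus_bits: int = 128) -> float:
    """Peak theoretical bandwidth in GB/s for a given transfer rate."""
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

icelake_u = bandwidth_gbs(3733)  # ~59.7 GB/s
ryzen = bandwidth_gbs(2400)      # ~38.4 GB/s
print(f"Ice Lake-U has {icelake_u / ryzen - 1:.0%} more peak bandwidth")
```

With those assumed speeds the difference comes out in the 50-56% range, which matches the "50% more" figure above.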
Anyway, none of these consumer CPU upgrades interest me anymore (although any improvement to the iGPU would be great). I am eagerly waiting for a 2S, 128-core EPYC 2 on a server or AWS to play around with.
I feel like the entire "PC enthusiast" review space has dropped the ball on hardware vulnerabilities. Reevaluating performance between microcode and OS patches is an afterthought, and when a new CPU hits the market the numbers are presented without the obvious disclaimer that these performance gains may evaporate within months.
Some even perpetuate the "only relevant to datacenters" myth, despite the fact that security researchers have shown these vulnerabilities can be exploited with JavaScript in the browser.
I'm glad the PC enthusiast space hasn't succumbed to the wild hysteria caused elsewhere by the side-channel issues. It's tiresome to see every new variation people come up with reported as a fresh apocalypse all over again. Half the reason I still pay attention is to find out whether there's a new Linux boot switch I need to flip to turn off some new performance-regressing mitigation.
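(For anyone else keeping track of those switches: on recent Linux kernels each issue gets a status file under /sys/devices/system/cpu/vulnerabilities/, and a single `mitigations=off` boot parameter turns the whole lot off. A quick Python sketch for reading them - the `classify()` helper is just my own bucketing, not anything the kernel defines:)

```python
import glob

def classify(status_line: str) -> str:
    """Bucket one /sys/devices/system/cpu/vulnerabilities/* line."""
    if status_line.startswith("Not affected"):
        return "not affected"
    if status_line.startswith("Mitigation:"):
        return "mitigated"
    if status_line.startswith("Vulnerable"):
        return "vulnerable"
    return "unknown"

# On Linux there is one file per issue (meltdown, spectre_v2, mds, ...):
for path in sorted(glob.glob("/sys/devices/system/cpu/vulnerabilities/*")):
    with open(path) as f:
        print(path.rsplit("/", 1)[-1], "->", classify(f.read().strip()))
```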
Even though I'm personally in the fortunate position not to have any reasonable exposure to these vulnerabilities, I wouldn't be particularly worried even if this wasn't the case. It's been well over a year since Meltdown and Spectre came out and there still hasn't been a single case of anyone successfully using these vulnerabilities to productive ends in the wild that I know of. Obviously, cloud computing vendors need to pay attention and there are legitimate business concerns that are affected by this, but insofar as personal computing goes? If people persist in the ridiculous notion that constantly running completely arbitrary code in naive sandboxes is a great idea, I imagine there will eventually be issues, but so far the issue seems to be vastly overblown in the popular media.
I don't see how the act of taking and publishing measurements after microcode and OS updates constitutes hysteria. It's my understanding that, at least on Windows, you pretty much have to opt out of these patches or manually install an update that disables the mitigations.
I fully support a user's right to bypass these mitigations, and you're correct that your typical desktop user, at least today, isn't a target. But it seems odd that websites dedicated to performance computing have a blind spot for how automatically installed updates will impact performance.
> I don't see how the act of taking and publishing measurements after microcode and OS updates constitutes hysteria.
It's quite easy to sensationalize benchmark results even unintentionally. The average reader of PC hardware review sites is totally willing to latch on to a microbenchmark result that shows a 20% performance drop and claim that it's disastrous for performance, even if the actual added delay to real-world operations is a fraction of a millisecond and thus will almost never cause the result of your user input to be delayed by even a single frame. There's a certain degree of irresponsibility in publishing results that you know will be taken out of context by almost everyone who reads them. I've discontinued benchmarks in the past because it was frustrating seeing readers pretend like they show a meaningful difference between products when the reader's workload never comes close to the workload represented by that benchmark.
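To put made-up but plausible numbers on that: suppose some syscall-heavy operation costs 0.5 ms and a mitigation makes it 20% slower on a microbenchmark.

```python
before_ms = 0.50                 # hypothetical micro-op cost pre-mitigation
after_ms = before_ms * 1.20      # a 20% microbenchmark regression
added_ms = after_ms - before_ms  # absolute delay added: 0.1 ms
frame_ms = 1000 / 60             # one frame at 60 Hz: ~16.7 ms
print(f"added delay: {added_ms:.2f} ms "
      f"({added_ms / frame_ms:.1%} of a single frame)")
```

A scary-looking 20% headline translates to well under 1% of a single 60 Hz frame for that operation - which is exactly the context most readers never see.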
Did you just make an argument against PC review sites publishing any benchmarks at all because readers can't be trusted to interpret them correctly?
That would kind of defeat the point.
If they published a benchmark in the past and don't bother to correct the benchmark when it becomes out of sync with reality - that is just bad journalism.
Nobody is saying you should go and cherry-pick benchmarks after the mitigations hit, but you should definitely check the benchmarks you already published once.
These sites can and should expect an informed reader.
In any case: Leaving wrong information up uncontested helps neither "experts" nor laymen.
> Did you just make an argument against PC review sites publishing any benchmarks at all because readers can't be trusted to interpret them correctly?
No, and you should know better.
> If they published a benchmark in the past and don't bother to correct the benchmark when it becomes out of sync with reality - that is just bad journalism.
Proper practice is to publish the full test conditions, including software, firmware and nowadays also microcode versions. The availability of newer versions does not make older results any less true.
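For what it's worth, recording the microcode revision is trivial to script; on Linux it shows up in /proc/cpuinfo. A minimal parser (the sample text below is fabricated for illustration):

```python
def microcode_version(cpuinfo_text: str) -> str:
    """Pull the microcode revision out of /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("microcode"):
            return line.split(":", 1)[1].strip()
    return "unknown"

sample = (
    "processor\t: 0\n"
    "vendor_id\t: GenuineIntel\n"
    "model name\t: Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz\n"
    "microcode\t: 0xb4\n"
)
print(microcode_version(sample))  # 0xb4
```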
At AnandTech, we make all reasonable attempts to keep a thorough database of older hardware tested on newer benchmark suites, but the time this requires means we cannot re-test everything multiple times per year. I have over 200 SSDs and counting in the collection, and that test suite is over 30 hours long. The collection of CPUs is much larger. GPU reviews typically have fewer back-catalog hardware entries because updating to new drivers a few times a year is often unavoidable. You can browse the results for current and previous test suites at https://www.anandtech.com/bench/
> These sites can and should expect an informed reader.
In the case of my 6850K, my overclock was silently killed by the Windows 10 microcode update, which locked the multiplier to 38x.
This behaviour angered me no end. I wasted significant time looking for workarounds, and deleting the microcode driver was the only way. I wonder what fixes I've now nixed, but there was seriously no need for Intel to kill my overclock.
On a couple of occasions Intel have pushed updates which have reset my fix. Dear $deity .. my next PC will be AMD for sure.
"Wild hysteria", what do you mean? Experts seem to be far from hysterical, and go for more technical language, benchmarks and all. And the masses that usually go hysterical actually don't even know their CPUs are going to take a 20% performance hit next OS update, and probably won't even realize it.
The conversation was about the (nominally technical as well as more mainstream) press, not the experts. My remark regarding "wild hysteria" was made in that context. Experts and competent users will do the same thing they always do - evaluate any and all mitigations in the context of the threat models relevant to their usecases and act accordingly. Whether depriving the mass of less technically inclined users of the performance they are used to with all the implications that entails (including for energy efficiency and other externalities) is a wise decision only time will tell.
> Considering we are referring to attacks that can bypass your PC's security, "prudence" is a better word than hysteria.
That statement can be made about any vulnerability whatsoever. The merit of any mitigation can only be determined by a cost/benefit analysis that takes into account the potential impact of the vulnerability as well as the very real costs of mitigating it.
> Yes, if they are left alone, it is the "end of the world".
No offense, but this is exactly why the word "hysteria" seems far more appropriate than "prudence". Not a single one of these vulnerabilities has been used to cause any measurable damage anywhere that we know of, whereas the mitigations deployed have significant costs that everyone must pay. Despite this, emotional "the sky is falling" type pronouncements are far more common in the media - even the ostensibly technical press - than attempts to rationally weigh the costs and benefits of any particular approach to the problem.
>Not a single one of these vulnerabilities has been used to cause any measurable damage anywhere that we know of, whereas the mitigations deployed have significant costs that everyone must pay.
That's like saying: "nobody was drowned that we know of, whereas there was a significant cost to building the dam that everyone paid". (And also not dissimilar to arguments about doing no major industry/lifestyle changes regarding climate change).
It's exactly because mitigations were deployed relatively quickly that we didn't have a "hack em all" exploit doing the rounds on hundreds of millions of devices. The difficulty of exploiting also gave some leeway to deploying those mitigations.
> That's like saying: "nobody was drowned that we know of, whereas there was a significant cost to building the dam that everyone paid". (And also not dissimilar to arguments about doing no major industry/lifestyle changes regarding climate change).
It is very dissimilar indeed - the sentence you quoted does not constitute an argument by itself. It is an observation regarding the present state of affairs (which you have not disputed), which to me indicates a need to take a breath and do a reasoned cost/benefit analysis as opposed to the hysterical "this must be fixed at any cost, externalities be damned" mindset that is fairly common in many circles.
If you really want a climate change analogy, though, consider this - however many mitigating workarounds you invent, as long as speculative execution exists there will always be side channel attacks, and eventually some of them will probably succeed to some extent. Perhaps, as you noted, some major industry/lifestyle changes are indeed in order - people could stop living in the delusion that a perfect sandbox is possible and realize that arbitrary code execution will always entail risks. Rather than turning every website into a potential security risk, perhaps it is our approach to software (rather than hardware) that needs re-evaluation.
> The difficulty of exploiting also gave some leeway to deploying those mitigations.
That's putting it lightly. Exploiting Spectre to get private data is difficult. Turning that into a privilege escalation is exponentially harder, so any "hack em all" exploit on hundreds of millions of devices would have needed an entirely unrelated mechanism for spreading.
> I feel like the entire "PC enthusiast" review space has dropped the ball on hardware vulnerabilities. Reevaluating performance between microcode and OS patches is an afterthought, and when a new CPU hits the market the numbers are presented without the obvious disclaimer that these performance gains may evaporate within months.
If you want useful benchmarks that show the performance impact, go to phoronix.
"PC Enthusiast" websites care about gaming performance and single user desktop performance, and always have. This has been the same since I started following things when the fastest CPU available was a 300 MHz Pentium 2. Imagine how amazed we all were by the 1 GHz Slot A Athlon.
Their audience is not mainstream but enthusiasts and professionals. They do not provide in-depth analysis since they probably know the reader can do that for themselves.
That may have been true at one time, but the class of people who would consider themselves enthusiasts has broadened well beyond the class of people who can accurately judge how their workload corresponds to the benchmark results they're reading. The recent improvements in the Linux gaming situation have been a big contributor and have undoubtedly skewed the Phoronix audience.
Phoronix serves two important purposes that nobody else does. 1) It serves as a news aggregator for a lot of different open source communities. You'd think that a site called hacker news would do that, but ironically it doesn't. Most content here is either heavily web dominated, or just random drivel about being excellent in life. 2) He runs his standard battery of tests on everything. A lot of upstream projects don't seem to have that much emphasis on performance regression testing. He has uncovered a few regressions and reported them upstream on a few occasions.
I greatly value Phoronix for both of those things; it's a great resource for both my work and personal computer usage. But it does mean that the traditional hardware reviews themselves are something of an afterthought.
Most places certainly retested CPUs in the wake of Spectre/Meltdown, and at least the sources I've seen have mentioned Zombieload/MDS, though they've yet to go back and re-benchmark CPUs because they're either prepping for or travelling to Computex currently. I'd expect most of them to have videos in the next month, though.
It's rumored that companies like Intel and Nvidia will retaliate against review sites and publications for bad press coverage by slowing or cutting off access to preview release products for reviews.
I've seen a number of comments like this over the last few days and I don't really get it. Gamers use the internet, right? The vast, overwhelming majority of them are going to be running JavaScript programs hundreds of times per day.
Sure, and maybe browsers or OSes need some way to say "running untrusted code, please turn off performance for a sec" for that use case. Until then I'll just use one tab at a time or disable JS before I opt in for slow.
Nobody was offended by the other. I think people are downvoting you and gameswithgo because you appear to be applying your own personal notions to an entire segment with little to no evidence.
'PC enthusiast' is such a blanket term to start with, so applying a blanket statement to such a group is obviously doomed to failure from the very start.
Because it's a hardware and software survey. I have a memory of them asking some extra questions, but that was years ago. Either I remembered wrong (likely), they changed it, or they don't report it at all.
Because downloading and running binaries of applications is much safer?
From what I've seen, web browser teams have taken the recent risks extremely seriously - I've had a far worse track record of infection from downloading and installing software than from visiting sites with JS running.
> web browser teams have taken the recent risks extremely seriously
Not really. They didn't even properly apply band-aids.
Chrome and Firefox disabled a number of features that allow JavaScript code to create high-precision timers. This makes exploitation slightly more difficult, but the gaping hole is still there — there is an infinite number of ways to create a high-precision timer, just not as obvious as the closed ones.
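The classic example is the counting-thread clock: a worker thread spins incrementing a shared counter, and reading that counter before and after an operation measures it without touching any timer API. This Python sketch only demonstrates the idea - a real attack would do the same thing in a Web Worker over a SharedArrayBuffer, at far finer resolution:

```python
import threading, time

class CountingTimer:
    """A crude 'clock': a thread increments a counter as fast as it can;
    reading the counter twice gives an elapsed-time proxy with no
    explicit timer API involved."""
    def __init__(self):
        self.count = 0
        self._stop = False
        self._t = threading.Thread(target=self._run, daemon=True)
        self._t.start()

    def _run(self):
        while not self._stop:
            self.count += 1

    def stop(self):
        self._stop = True
        self._t.join()

timer = CountingTimer()
start = timer.count
time.sleep(0.05)          # the "operation" being timed
elapsed_ticks = timer.count - start
timer.stop()
print("counter advanced while we waited:", elapsed_ticks > 0)
```

Removing `performance.now()` precision does nothing against this construction, which is why the timer nerfing is a band-aid rather than a fix.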
Chrome has enabled Site Isolation on desktop, but hasn't done so on Android (presumably because of the associated increase in memory consumption).
All major browsers still allow JavaScript to run in the background, create CPU threads and consume an unrestricted amount of CPU time. I don't believe that any of them have mounted instruction-based defenses (lfence etc.), but I may be mistaken here.
Those are bad examples because they both run unsandboxed.
The recent CPU vulnerabilities aren't uniquely bad for Javascript specifically. They're bad for wanting to run unprivileged code. Javascript in regular web pages just happens to be the most obvious example of sandboxed code in desktop computers.
Most of the people who write these articles for review sites do not really understand CPUs in depth. They know how to run benchmark suites and talk about new features mentioned in Intel's marketing material. Most of these people are writers, not engineers. If they were experts they could make a lot more money working at tech companies instead of working for review sites.
In fact, I would bet that most professional software engineers could not correctly explain Spectre, Meltdown, and Zombieload without making at least a few mistakes.
So despite what I wrote, I still think they are one of the best. Both Ian and Andrei are good, with many in-depth articles, though I really do miss Anand's articles. I think the real problem is that AnandTech is short on staff.
Anand has been working at Apple for a few years now; I wonder what he's been up to.
I would say Ian is really the only writer at AnandTech I care to read anymore. His articles always take longer to come out than everyone else's, but they at least cover everything he can think of to tell you about, and are well researched.
Because Intel strictly forbids publishing benchmarks of their processors with the "hardware vulnerability mitigations" applied. Even OEMs cannot show them to their enterprise customers. You can do your own benchmarking after buying the systems. So: no money, no real-world benchmarking.
Intel has never told AnandTech not to benchmark their microcode updates or a third party's OS updates. They haven't threatened to stop sampling CPUs for review. I haven't seen any evidence that Intel has ever attempted to enforce such a restriction against anyone. It's just a stupid clause that one of their dumber lawyers slipped into the EULA text, and does not appear to be something they actually care about at an organizational level or expect to be able to enforce in the real world.
I would say putting it in the EULA text is them telling you, and what's more important, a court of law would probably agree. I don't know why you would expect anything more.
Do you work for Anandtech? If not, what are you basing these claims on? I suspect that Anandtech, etc would not publicly disclose if a hardware manufacturer was forbidding them from benchmarking certain configurations under threat of not releasing samples..
Yes, I write for AnandTech (paid as an independent contractor; I'm not one of the salaried editors). I've done some Spectre/Meltdown regression testing for AnandTech, and I've never been instructed to not do such testing in the future.
Microcode benchmarking is not the hill Intel wants to die on.
As soon as the definition of "with all vulnerability mitigations on" stays stable long enough to put together a good review. Benchmarking a moving target is hell, and we don't have enough equipment or staff to do the around-the-clock regression testing that would be necessary to keep our benchmark database current with everything that's happened over the past 1.5 years.
End-user perceived performance is usually not affected enough to meaningfully change the ranking of products. If a chip goes from being 5% faster to 3% slower when mitigations are applied, you'll never notice that without busting out a stopwatch and digging for a reason to be disappointed. Remember, measurable performance differences aren't always noticeable performance differences, especially without a side by side comparison.
And if two competing processors are close enough in performance for these mitigations to change which one comes out on top of benchmark charts, then other factors like price, power consumption and IO capabilities are probably a much bigger deal at that point than minor CPU performance differences.
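With made-up numbers in the same spirit as that 5%-faster-to-3%-slower example:

```python
rival = 100.0                     # competing chip's score, arbitrary units
chip = rival * 1.05               # our chip: 5% faster pre-mitigation
chip_patched = chip * (1 - 0.08)  # a hypothetical 8% mitigation cost
print(f"pre-patch: {chip / rival - 1:+.1%}, "
      f"post-patch: {chip_patched / rival - 1:+.1%}")
```

The chart ranking flips, but the absolute gap either way is a few percent - well inside the territory where price, power and IO should dominate the buying decision anyway.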
Most if not all of our benchmark suites have been updated to include at least the early Spectre/Meltdown/et al. mitigations, and new CPUs are being tested with the microcode they launch with.
Why not do an article with then-current mitigations every 6 months? Rather than conveniently waiting until Intel can get their hardware fixes out. Which will coincide with the "mitigations are stable" article.
I'd be willing to bet that if the majority of mitigations impacted AMD rather than Intel as it does today, you'd already have done this and would continue to follow up. This is very interesting material and a special moment in time to cover it and inform your readers. Other than willful laziness ("lack of time", everyone knows you make time for priorities), this appears like shilling hard for Intel.
If Anandtech decides to do the right thing, I'd like to see .Net or Java compilation. Real-world based benchmarking only.
It's not ok to insinuate shilling on HN or to dismiss someone's work by assuming bad faith in this way. Would you please review the site guidelines and follow them when posting here?
> I'd be willing to bet that if the majority of mitigations impacted AMD rather than Intel as it does today, you'd already have done this and would continue to follow up.
Fuck you, too. I've given you reasonable explanations and you're still throwing out insulting conspiracy theories. If you want sensationalized news, there are plenty of outlets that will give you what you want, and you don't need to be a dick to those of us who are trying to be reasonable and honest about both the subject matter and the resources we have to provide quality coverage.
Also, we've done two significant articles in the past year measuring the impact of these mitigations, so we're not even falling behind the standards you claim to want us to meet.
Personal attacks and name-calling are not ok on HN, regardless of how unfairly someone is interpreting your work. Would you please review the site guidelines and follow them when posting here?
...you sign a legally binding NDA to be able to early-access the CPUs, test & review them; get the semi-classified technical documents to develop your new servers.
If you don't sign that NDA you can't buy the CPUs from Intel to resell them. Even if you are able to buy the CPUs from them, there's no guarantee that you'll buy at the list price or get the discounts for big, prestige projects which require tenders.
The problem is I'm in the Mac ecosystem, which means I don't have much of a choice (I doubt Apple will ever switch to AMD). And since most of my casual gaming is done on mobile (I'm quite old and don't have the time I used to spend on hours of UO or WoW), none of these upgrades mean anything to me. So my interest is in servers, where most of my time is spent now doing web development.
This probably doesn't contribute to the conversation, but with the number of serious vulnerabilities that have popped up recently, I'm not inspired to solve the truth table for the CPU vendor that leaves me the least exposed. As others have said - and I have seen - some of these can be exploited with JavaScript in the browser. (I do not know much about Zombieload presently.)
Looking forward to a less complex architecture even if it means cutting me off at the knees with execution speed (for a few years):
I think the point about the Ice Lake announcement is a mischaracterization.
It's typical for news sites to report individual announcements with little or no analysis; this is fair, as long as the post clearly specifies its nature (which, in this case, it does).
AnandTech actually did something very interesting on the Intel subject, which I didn't see on other sites - an article about the performance of the i9-9900K locked at its nominal TDP (95W), which showed very significant losses.
I was put off AT the day I noticed they neglected to cover the Threadripper launch for weeks while flooding the front page with dry half-pagers about new Intel motherboards (not benchmarks, mind you, just snippets from the OEMs' press releases). I asked about it in the comments and got a boilerplate answer that they strive to present quality articles to their readers.
They also had the Intel 6th-series launch, where they praised the 6600K and compared it to the 2500K to show "massive" improvements over the years - this while all the other websites noted "minimal speed boost for too big a price". Perhaps both are true, but the spin on it makes all the difference in showing the intention.
AT shows quite the Intel bias. And it’s not the Intel part that bothers me, it’s the bias part. They go out of their way to make Intel look better without outright lying, just selectively presenting the truth in a way that shines a much better light on Intel. This for me casts doubt on other articles.
I’m glad Andrei Frumusanu’s mobile reviews still have a home, being the best I have seen on the entire internet. But that’s the only segment on AT where I can be reasonably sure about impartiality.
Their October 2018 i9 benchmark review was subtitled "Hardware and Software Security Fixes", and literally began with the following sentence:
> The Spectre and Meltdown vulnerabilities made quite a splash earlier this year, forcing makers of hardware and software to release updates in order to tackle them.
Explain why I should care instead of downvoting. A Windows program can already just directly look at the memory of other running processes, so why do I care about side-channel attacks, outside of JavaScript snooping on things, which I can mitigate with simple behavior changes?
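(Same story on Linux, by the way: with ptrace rights - which you trivially have over your own processes - another process's memory is literally a file. A sketch, reading our own memory back through /proc/self/mem; Windows offers the equivalent via OpenProcess/ReadProcessMemory:)

```python
import ctypes

# Stash a known byte string somewhere in our own address space...
buf = ctypes.create_string_buffer(b"secret-data")
addr = ctypes.addressof(buf)

# ...and read it back through the kernel's memory interface - the same
# mechanism a debugger (with ptrace rights) uses on *other* processes.
with open("/proc/self/mem", "rb") as mem:
    mem.seek(addr)
    leaked = mem.read(len(b"secret-data"))

print("recovered:", leaked == b"secret-data")
```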
What licence? You don't have to agree to a licence to use a CPU. I mean, Intel might think otherwise, but those kinds of licences ("you agree to this licence just by opening the product") are not worth the paper they are written on in the EU, so even if there is such a licence it wouldn't be applicable everywhere.
AnandTech went downhill when they lost Anand. I'm not sure what the hell he could possibly be doing that's useful to Apple, but the conspiracy theorist in me wants to think it was Apple getting him out of the media.
Intel has fallen so far. It's honestly a shame to watch at this point.
I remember back when Sandy Bridge was first released, and I was extremely pleased by the performance improvements my new chip was able to provide. Did they really manage to mess everything up within such a limited timespan? Or was there just always a hidden incompetence that never showed itself until now?
Their design for 10nm and the implementation didn't line up. Whatever their (still undisclosed) problems were, the entire node was fundamentally flawed.
It might have been hubris at having been at the cutting edge of fab tech for so long. It could have just been the fruits of pushing the envelope - sometimes what you predict will happen when you put theory to application proves false.
It has warped their business heavily for 4+ years now. But just as AMD had to "get their act together" with their processor design after Bulldozer failed spectacularly in practice, and took ~7 years to fix it, companies at these scales cannot turn on a dime - Intel had their roadmap planned a decade in advance, and to have it so thoroughly trashed starting around ~2015-2016 will in all likelihood require until at least 2021 to correct.
I tend more and more to believe the rumors (started by SemiAccurate) that we will not see 10nm in mass quantities and that 7nm is next. We will see very, very soon: Intel said 10nm CPUs in client systems will be on shelves for the 2019 holiday season.
Cannon Lake was released only because many at Intel have their bonuses tied to the process-node launch. Well, they launched a 10nm CPU... one so bad the GPU is disabled, performance/watt is worse than KBR, and it was only available in limited quantities.
KBR = Kaby Lake Refresh. Interesting, in that Intel's "10nm" node is said to be more power efficient than their "14nm" node; in this case, per AnandTech (https://www.anandtech.com/show/11738/intel-launches-8th-gene...), KBR was launched on 14+nm. It could be that such an early 10nm part wasn't yet very power optimized. And that would be a very good explanation for why the GPU was disabled, versus the yield issue, which we're pretty sure is much more fundamental than specks of dust and other isolated defects that can disable a part of a die without killing it altogether.
I hit my forehead for forgetting to check SemiWiki. But note that since then GlobalFoundries has, for now, abandoned offering this general node - 7nm as they name it, 10nm as Intel does.
But isn't the whole die exposed and otherwise processed as a single piece? I'm very much not deeply educated here and can't justify the investment to change that; my primary mental models for defects are either something that takes out a whole die, like one lithography step being misaligned, or spot damage like a piece of dust.
But there are clearly issues in between that are statistical inside a die. I recall SemiAccurate saying one of Nvidia or AMD did a GPU tape-out on a TSMC process where they duplicated vias because that process's vias were iffy, and compensated with a less dense design where either one or two working was OK. If Intel is suffering that sort of problem, then the GPU is a big part of the die that can be fused off while you still have something useful. If all your CPU cores or all your L3 cache banks fail, a working GPU is pointless.
That article points out two particularly suspect things Intel is uniquely trying at this node: SAQP for the metal layers, which I've seen cited before and which they officially, if generically, blame, and cobalt in interconnects. At least one other thing was mentioned as suspect, and four new things in total.
One ray of hope is mentioned for Intel, in that they were the most aggressive in the industry with their 14nm and 10nm nodes, and in both cases paid the price in yields, while they're being conservative for their 7nm node, no doubt because EUV is a very big step for everyone. Semiaccurate also commented and/or theorized that a compelling reason Intel is continuing to work on their 10nm at one fab is that one or more things in it are also going to be used in their 7nm, so they might as well debug them now and there, and sell some chips while they're at it.
I've seen some recent tests showing that Bulldozer is quite competitive with new multithread-friendly stuff like DX12 and Vulkan. Roughly speaking, if you think of it as a rematch against the same Intel products, Bulldozer can win in lots of situations.
It's not Intel, it's the end of Moore's law. Intel's problem is that they are not well positioned to capitalize on the specialized processors that will be required to continue eking out advances for the next decade or two before we're entirely up a creek. :)
From how Apple and AMD are doing with their own processors, though, it seems like Intel is just fundamentally doing worse even as things become more difficult with smaller transistor sizes. Apple is going to replace Intel with its own processors because Intel has failed to meet requirements. AMD, with a shoestring budget, basically on the verge of bankruptcy the entire time they were doing their R&D, managed to build out a new architecture that has provided amazing results, while Intel has had basically nothing to show in the same time.
But perhaps there's something I'm missing here. Is there a misconception or lack of information here on my end that needs to be clarified? I can only make my analysis largely as an outsider looking in when talking about semiconductors.
Oddly enough, the challenge of estimating who is "ahead" is kind of like traffic. Intel arrived at the scaling traffic jam way before anyone else, and has been slowly slogging through it. New entrants are catching up to the traffic jam and will have to make their way through it as well. If there is no breakthrough, then everyone will find themselves more tightly bunched in feature/performance curves than they have been in the past.
The spoiler though is that different architectures have different scaling properties and limitations. IBM's Power architecture has already scaled past where Intel is, not because of the semiconductor process, but because the architecture is more streamlined. ARM is somewhere in the middle, it started off pretty streamlined but it has been adding warts (special instructions) to more directly compete with Intel and that creates impediments to scaling.
Bad analogy. You can prove that getting in line earliest will get you out of it earliest. If you postulate that it’s more complex than that, it might hold up. You could say that Intel is driving a semi, while others are mini coopers and motorcycles, splitting lanes and better at speeding up/slowing down. At which point no analogy is necessary: startups and smaller companies are more nimble than larger companies, at the trade off of resources.
It's a good analogy as long as Intel was the first to experience the end of Dennard scaling (https://en.wikipedia.org/wiki/Dennard_scaling) because their fab lines were ahead of the rest of the industry's. And fabs are all "semis", due to the massive amounts of capital and talent needed to move to the next node.
So much so that we're now down to two companies in the whole world who are successfully executing the smallest CPU nodes, unless Intel manages to make their "10nm" work, or pulls off their "7nm".
Meanwhile, we're hearing the very roughly equivalent TSMC "5nm" node is starting risk production (https://wccftech.com/tsmc-5nm-production-euv/ - beta testing, you might say; someone outside of TSMC has to be the first, second, etc. to try to get real-world dies that work on a new node). Intel isn't saying anything, but SemiAccurate has reported that at least two fab lines that were slated to move to their 10nm are installing lots of EUV equipment, consistent with using them for their 7nm node (and at least one fab is moving back to 14nm).
Apple and AMD just have to ask TSMC to do their magic to make 7nm chips - they haven't had to do anything spectacular, just use TSMC's design libraries.
Intel is struggling because of their struggles with 10nm. Apple and AMD are not because TSMC has pulled off 7nm. Architecture matters, but process node matters a lot too.
What would you say are the primary differences between the two companies? Is it more just a matter of luck that has allowed for TSMC to have been able to succeed where Intel hasn't? Or is there actually a meaningfully different process design and/or problem solving approach that is enabling this?
The full story on how Intel managed to fuck up 10nm so badly may not see the light of day for years if ever. But generally, it seems that Intel tried to make too many changes in one generation. They probably wanted their 10nm to be the most advanced process that didn't require EUV lithography. Some features of their 10nm process ended up not working (evidence points to the cobalt interconnects as one of the hang-ups). In the meantime, it looks like EUV is coming along nicely.
They compounded their problems by essentially stopping microarchitecture development on 14nm, which is why eg. their laptop processors still don't support LPDDR4, and they're still shipping basically the same CPU core they released in 2015. Coupling microarchitecture and fabrication development has at times been an advantage for Intel, but for the past few years it's been a huge mistake, and they've promised changes to their design processes so that they don't get stuck like this again in the future if fab advances aren't ready when new microarchitectures are.
TSMC naturally doesn't have this problem, because they're a pure play foundry. Their customers have to each make their own bets on when new fab processes will be truly ready, and how well they will perform in practice.
Not tntn, but I've been following this, and it seems to be both Intel's now decades-long history of very bad high-level engineering and personnel management catching up with their crown jewel, and their 10nm being more aggressive than TSMC's initial 7nm node. Perhaps Intel is depending on a particular lithography technique that TSMC isn't, or isn't yet heavily, but we don't really know; no one authoritative is talking, and Intel is still claiming 10nm is going to make it.
I think Intel having their own manufacturing fabs is hurting them in the long term. By outsourcing, you can go with whoever has the best solution. As a matter of pride Intel has not done this, but AMD, Nvidia, and Apple all do.
I've heard entirely the opposite: that having a close relationship between chip designers and fabricators allows for higher-performance designs. I don't know of anyone who interpreted AMD selling off its foundries as anything other than severe financial distress, and the integrated model worked supremely well for Intel while they stayed at least one step ahead of the competition. Enough so that it's said to have wiped out a generation of competing CPU architects while Dennard scaling still worked: no matter how clever they were, Intel moving to its next process node wiped out their speed advantage.
But it's a brittle model, if a company screws up a node and is too messed up to handle the failure gracefully, as Intel is doing with their "10nm", no doubt with pride as a factor. And it's not uncommon for institutions to permanently lose abilities, I'm not counting on Intel succeeding with their "7nm" node.
On the third hand, we're now down to 2-3 high end CPU fab companies, Samsung, TSMC, and maybe Intel. That also can be a brittle thing.
Intel was ahead, and hit the wall first. Apple & AMD are not ahead, they're just catching up. I don't want to understate how big a problem that could be for Intel, of course. But they're also doing it on low margin parts, and Intel continues to make bank with their data center parts.
I don't think any of this represents a short-term problem for Intel, other than the general downturn in processor sales because fewer people will need to upgrade. But I think it represents a very serious long-term threat.
They have some really cool technical advances, like 3D xpoint. But I'm concerned that they do so badly on embedded and custom integration from a long-term perspective.
Apple sold millions of iPhones with 7nm chips while Intel struggles to build comparable 10nm chips and keeps releasing 14nm+++ parts. AMD will release 7nm chips very soon. It does not seem like they are the ones catching up. Quite the opposite.
Then you have to ensure you're comparing chips designed for the same market segment. Die size comparisons work well if you're talking about a Cortex-A53 on 16nm vs 12nm. It doesn't work as well when you're talking about a full SoC, or even a desktop CPU+GPU combo where core counts for both sides of the chip can vary greatly.
My simplest laptop in current use has 4 times more memory than my current phone and I probably would need to make huge compromises to live with half as much. A lot of the chips in phones don't even have external memory buses. A top-of-the-line iPad Pro sports an 8-core asymmetrical core design, with 4 fast cores and 4 slow ones and, overall, is slower than a 2-core Core M-based MacBook (although it feels great because iOS does a lot less than macOS).
Also, Apple doesn't fabricate its own A-series processors - it uses TSMC for that.
And iPhones still don't come close to competing with desktop-class processors in terms of performance. iPhones also use much less electricity, of course, but the point remains.
I don't know enough about this, but the GP's argument of "Intel hit the wall first because they were the first to reach that level of performance" makes logical sense to me.
They can compete in certain workloads. As a computational tool however, desktop Intel CPUs can be optimized far, far beyond the capabilities of any A-series CPU.
Don't forget that Intel CPUs have things that A-series CPUs are missing, like QuickSync, AVX2, and massive PCIe interconnectivity.
Whether the A-series CPU could be modified into something competitive on that front is yet to be seen. Whether this actually matters considering the state of our compilers and software development is yet another question.
> Apple's newest CPUs have hardware explicitly for accelerating Javascript
They really don't. A12 added a couple of instructions for floating point conversions, but contrary to claims making rounds on Twitter at the time, they were not even generated by WebKit when the benchmarks were run.
Intel made one single bad bet - their 10 nm process didn't work as well as they expected - and TSMC, who made the right bet, leapfrogged them.
In terms of architecture and vulnerabilities, it's not prudent to assume Intel chips are inherently more vulnerable to exploits than others - it may just be that we know more about their vulnerabilities. If you want to find vulnerabilities with high impact in cloud and enterprise data centers, Intel Xeon CPUs will be your primary research target.
We're not really sure yet whether TSMC have leapfrogged Intel in the longer term though. Intel's 10nm issues seem to have delayed their smaller process nodes in the medium term, but by how much is yet to be seen. It seems, for example, that Intel 7nm isn't in quite as much trouble as one might expect.
It's also naive to dismiss the possibility that Intel has learned a lot from some of the failures of 10nm that will prove useful in accelerating node development in the future.
The sizing numbers are also just nonsense marketing. They stopped meaning anything in particular a long time ago. Intel's '10nm' and TSMC's '7nm' are about the same size.
The reason they are having problems is that they just continued doing die shrinks and speculation hacks to increase performance. They've essentially had the same core since Sandy Bridge.
They didn't see Zen coming, didn't have to compete with Bulldozer, and thought they could just keep shrinking rather than building a new core design. Once they hit 10nm, they failed, and their old core got some healthy competition from Zen. Now AMD is looking to take a serious lead with Zen 2, aka the Ryzen 3000 series.
I don't think Moore's law is dead; Intel just gave up on real R&D because it was cheaper.
I can't wait for ARM and RISC-V to enter the playing field.
That means very little. As the saying goes, “How did you go bankrupt?” “Two ways: gradually, then all at once.”
In technology, downward swings of fate tend to come fast and hard. The camera world went from 100% film to 100% digital in the space of about five years, which extinguished Kodak. Or consider Palm/Nokia/BlackBerry, who went from collectively owning the entire mobile market to dead as doornails in even less time.
It’s easy to see how it happens to Intel too: AMD’s big-core-count chips start eating up server business, while ARM takes over PCs (at this point people consider it all but certain Apple is switching to ARM in the next few years, and Microsoft is building Windows on ARM as a hedge), and without another business for Intel to fall back on (they’ve shut down modems, mobile chips, and anything else that could’ve been a new source of revenue), that’s the end.
I’m not saying it’s certain, but I’m saying it’s totally possible and their current market share means nothing.
If I buy 100k chips for my HPC cluster to do single-tenant processing, performance and performance/watt are priority one. The vulnerabilities Intel has to fight right now are irrelevant for this use case.
AMD isn't immune to side-channel attacks either. The most recent one we think AMD is immune to, but I wouldn't assume in the long run that AMD will generally prove to be more resistant to them than Intel.
Should they be describing it as 8 cores 16 threads when there have been multiple security vulnerabilities that have to turn off hyperthreading to be mitigated?
This is a very good point. I hope AMD brings it up with the EU. Might be a very slow process though, but at this point it is anticompetitive behaviour. AMD could probably squeeze a fair bit more performance out of their processors if they were allowed to cut some security corners.
I hear many people continuing to say that Intel are "cutting security corners".
Are they really? I don't have an extremely deep understanding of Intel's implementation of the x86 ISA, but I do know enough to say that so far we've been able to effectively mitigate almost all of these attacks with existing instructions available on Intel CPUs. That doesn't mean they aren't still open to other variants of these attacks, but at some point you have to assume diminishing returns. Spectre is still very difficult to exploit, for example.
Perhaps this has little to do with Intel and more to do with software authors cutting corners? LFENCE and SFENCE are reasonably well documented, after all...
Here's a Register article from 2007 about page table permissions being problematic. If you look around a bit, there were a ton of security researchers who talked about the problem. It seems to have been a bit of an open secret that such a thing must exist - they just hadn't found it yet.
The scariest part is that many of the best security minds work for various intelligence agencies. They very likely have known about such things for a very long time.
Meltdown strikes me as an almost perfect vulnerability. It affects almost everyone. It is undetectable until exploited and once exploited, it immediately goes away until the next time. It's easy to keep secret. Most importantly, it's a one-way vulnerability. Keep your secure systems from running untrusted code and there's zero risk. Since this is standard protocol anyway for those systems, you don't have the risk of someone running across a code patch somewhere.
The only potential downside is that the juiciest targets also aren't running untrusted code (though most foreign affairs workers probably run untrusted code). The big point of interest here is information symmetry. In most cases, giving others secret information is bad. In this case, both the best and worse case situations work out well for the USA. If nobody else knows, they get free info. If everyone else does know, then everyone gets perfect information about everything. This favors the most powerful country. They can eliminate the unknowns (the only real danger). In contrast, knowing you are going to be crushed does nothing if you can't hide your own hand either. So, the best case is very good and the worst case is still acceptable.
What else would they describe it as?
There are 8 cores and 16 threads, whether you have to turn off the hyperthreading feature or not is a different matter.
It just feels kinda shady to advertise peak performance with safety features that should be on turned off, without mentioning that. They should at least include a disclaimer.
These attacks work fine in the browser, as researchers continue to show. They allow complete bypass of any native app sandboxing layers. Surely you don't run everything on your box as root all the time.
Can you link to a hosted example of one of these. That would convince people nicely. Someone linked to one in a similar discussion yesterday but it didn't work anymore in currently patched browsers.
Meh. It doesn't require Javascript for your computer to run logic described by others. Browsers are such complex machines that it wouldn't surprise me if you could for example craft a malicious SVG that would bypass that, or a turing-complete CSS file that triggers a vulnerability...
By the way, does NoScript actually block in-SVG javascript?
Sure, but we all take risks every day. If you're worrying about Turing-complete CSS files exploiting Spectre and Meltdown, then you probably don't leave the house much.
We know that attackers have reason to exploit literally all compute resources they can find a way to access. This is more like worrying about leaving the house during an epidemic of exploding ebola-infected pigeons — if you can do something about it, you should.
Attackers also have to consider cost/benefit analysis when evaluating methods of attack. Claims that "CSS is Turing complete" require a user to act as a "crank" [0], so there are lower-hanging fruit out there than trying to program complicated logic which can utilize the Meltdown / Spectre exploits in CSS.
Yes and no. It is possible to exploit Meltdown / Spectre via Javascript. From [0]:
> This can happen when one has opened the other using window.open, or <a href="..." target="_blank">, or iframes. If a website contains user-specific data, there is a chance that another site could use these new vulnerabilities to read that user data.
Most browsers have pushed patches which eliminate known mechanisms of leveraging the exploit, but the pathway cannot be completely mitigated by browser patches, I believe.
Given that most consumers run JavaScript unconditionally, yes. Browser vendors have basically declared Spectre/Meltdown/MDS unmitigatable at the browser level.
> Second, the increasingly complicated mitigations that we designed and implemented carried significant complexity, which is technical debt and might actually increase the attack surface, and performance overheads. Third, testing and maintaining mitigations for microarchitectural leaks is even trickier than designing gadgets themselves, since it’s hard to be sure the mitigations continue working as designed. At least once, important mitigations were effectively undone by later compiler optimizations. Fourth, we found that effective mitigation of some variants of Spectre, particularly variant 4, to be simply infeasible in software, even after a heroic effort by our partners at Apple to combat the problem in their JIT compiler.
> Our research reached the conclusion that, in principle, untrusted code can read a process’s entire address space using Spectre and side channels. Software mitigations reduce the effectiveness of many potential gadgets, but are not efficient or comprehensive.
The “some variants” include MDS, which the author was aware of but which were not at the time of publication out of embargo.
But they do not claim that hardware mitigations are necessary. They claim that they need to change browser architecture a little bit:
> The only effective mitigation is to move sensitive data out of the process’s address space. Thankfully, Chrome already had an effort underway for many years to separate sites into different processes to reduce the attack surface due to conventional vulnerabilities. This investment paid off, and we productionized and deployed site isolation for as many platforms as possible by May 2018.
So with improved browsers it's still unclear why ordinary users need those performance-eating mitigations, when browser vendors managed to solve that problem themselves.
> But they do not claim that hardware mitigations are necessary. They claim that they need to change browser architecture a little bit
For Spectre, that's enough; for Meltdown-class Intel permission exploit vectors (Meltdown, Fallout, ZombieLoad, RIDL, Store-to-Leak Forwarding, and other MDS vulnerabilities), all of the same infeasibility of browser mitigations applies, but data also leaks across process boundaries, so process isolation does jack shit to protect you without lower-level mitigations.
There’s nothing whatsoever browsers can do to prevent this. Process memory read isolation effectively doesn’t exist in the presence of unpatched Intel MDS vulnerabilities.
> So with improved browsers it's still unclear why ordinary users need those performance-eating mitigations, when browser vendors managed to solve that problem themselves.
The unclarity is only in your misunderstanding of the relationship of MDS vulnerabilities on Intel to Spectre vulnerabilities in general.
These vulnerabilities can jump process address space boundaries. It's a lot harder but can be done, look at the original Spectre paper: https://spectreattack.com/spectre.pdf
I don't get it. If it has an all-core frequency of 5GHz, doesn't that mean they've left some single-core boost on the table? Or have they hit some other limit and this part is basically free of thermal limits?
The switching speed of silicon also has some upper limit. When you drive silicon faster, it starts to make mistakes, i.e. not all signals settle where you want them to. This causes soft faults, and the CPU re-executes the part at best, or gives you a BSOD, oops, or panic at worst.
This upper limit depends on process, layout, power design and power limits of the CPU.
Last but not least, not all CPUs are created equal on a wafer. I came from an era where we hunted plain-blue AMD Athlon dies for higher overclocking potential, since they were from the center of the wafer and were more stable under high load/voltage/clock. I had a 2200MHz Athlon (200 x 11) which was faster than AMD's own 2200MHz Athlons, since AMD wasn't offering a 200MHz-bus version of their 2200MHz parts.
That happens too. Prime 95 and other stability tests are used and can check when wrong results are returned. There's often a sliver of frequencies where a system under load begins performing floating point calculations incorrectly while other, simpler systems in the CPU are still functioning correctly.
The BSOD, oops, or panic is a symptom of widespread errors.
The center of a silicon wafer is said to have higher quality (due to lithography, physical stresses, and other processes whose exact details I don't know), and the result is a die with more homogeneous properties and color reflection. Since the die's tolerances were tighter around the center of the wafer, the performance of the resulting chip was better.
AMD was also sub-binning most of these parts (they were sold as Athlon 1700 @ 1433MHz regardless of their performance level), so people were buying these unlocked sleepers and overclocking them to insane levels without voltage increases.
However, today the process is so different and node sizes are so small that the dies' color differences aren't perceivable anyway.
In the older days, this was more of an obscure piece of collective wisdom that emerged from the trial-and-error days of the overclocking wars.
Disclaimer: this is an oversimplification and I only have a lay person's understanding.
CPUs are basically huge networks of transistors (on/off switches). They're sort of like tiny printed circuit boards; lots of individual 'parts' are connected by 'wires' on top of a silicon wafer.
The distances are miniscule, but the lengths of wires running between transistors still varies. So when a transistor switches between 'off' and 'on', the signal takes a different amount of time to reach to its destination depending on which transistors are being switched. The signal can also feed into multiple other transistors which it will reach at different times.
While signals are busy propagating through the circuit, the CPU's state will be unstable, including the 'output' value of its current instruction. The time that it takes for any given instruction to stabilize is tough to predict because it depends on a lot of things, including how far apart the transistors are and how many of them the signal needs to pass through.
The CPU's "tick rate" in Hertz relates to how quickly it "latches" its internal state. Between "ticks", the CPU waits for all of the signals to stabilize. If they haven't stabilized when the clock strikes, bad things can happen.
I'm not sure how the 'quality' of an individual chip can make it more amenable to overclocking, though; maybe they run into fewer issues from thermal stress? Maybe the tiny 'wires' between the transistors have slightly less resistance? I dunno, someone help me out?
I think the inconsistencies between samples of the same model of chip are much less about the interconnect wires than about the transistors themselves, which vary in their individual switching speed vs. voltage curves. There's not really much variation in interconnect length between a given two gates when both chips are made from the same masks. But especially at the lower (finer-pitch) layers of metal interconnect, variations in resistance and capacitance can affect how things operate.
I believe Intel processors can't boost all cores. At least some tests I have done with my notebook processor (i7-8550U, 4c/8t) with `stress -c n`, where n is the number of cores, show that for n > 1 the processor doesn't reach 4GHz, only about 3.7GHz, while the package temps are still around 70 °C. Only a single core under full load reaches 4GHz before throttling.
This is entirely configurable. They generally don't do all core == max boost for power consumption reasons (and I assume yield would be pretty low on chips that can do this)
Desktop chips also generally don't have any AVX offset, which is almost always required for 5 GHz all core.
> I believe Intel processors can't boost all cores.
And that's exactly what shereadsthenews's point is. They can't boost all cores, and they are not boosting any core beyond the all-core capacity if it's truly a CPU that runs at 5 GHz all the time.
Turbo Boost can certainly apply to all cores- the limits you hit are TDP and time based, there, not strictly thermal.
So, for example, my old laptop CPU would clock itself up to 2.7GHz on all cores - well, okay, it was a dual core, so that's not saying much, but still. But it'd only maintain that boost for a few seconds; under sustained load it dropped down to 2.5. This wasn't because of thermals, but rather because 2.7GHz was a Turbo Boost frequency, and once the turbo (tau) timer runs out...
And to explain why they don't have, say, one core boost to 5.1GHz...well, let's see what siliconlottery says.
> As of 3/16/19, the top 38% of tested 9900Ks were able to hit 5.0GHz or greater.
> As of 3/16/19, the top 8% of tested 9900Ks were able to hit 5.1GHz or greater.
So, Intel'd cut their yield by more than a factor of four if they only let parts that could hit 5.1 into this bin. For a 2% single-core performance boost...
As far as I know, K-series parts don't support binning of individual cores- if you have one bad core that'll only hit 5.0, 1-core turbo to 5.1 will still result in the OS scheduler periodically picking that core to use, it clocking up to 5.1, and problems resulting.
Intel's Turbo Boost 3.0 [1] was their attempt to take advantage of the fact that some cores on a chip can clock higher than others. It does not work well in practice, because it requires too much collaboration with motherboard and OS vendors. This feature is not available on their desktop platform, which the i9-9900KS uses.
XTU allows setting different turbo multipliers for 1-4 active cores (but the difference from the nominal clock speed typically gets smaller as more cores become active).
I have one constantly pushing 5.1GHz (SpeedStep etc. disabled; it's been stable for months). I bought it because there was no comparable AMD CPU, and as far as I know AMD is still behind. Why do you think it is not a proper answer?
I think by AMD's new lineup they're referring to Ryzen 3000 series, which isn't out yet. If the rumors are true, the top models come with 12 to 16 cores, higher IPC and higher clocks than the current Zens, pushing 5GHz boost.
A current 9900K might be some 20-30% faster than a current Zen, but that may no longer hold with the new lineup.
Meanwhile, mitigations are eating up Intel's performance advantage.
I saw some leaked benchmarks today and it doesn't look great. I really hope AMD beats Intel (I have a Ryzen 1800X too and will buy the new 16-core one), but I need the fastest possible single-core performance, and by the look of it, at best it will be the same. AMD also has other problems, like high DPC latency, which makes it difficult to use for real-time computations. If Ryzen happens to have the same single-core speed as a 5GHz 9900K and packs 16 cores each capable of delivering it, I'll swap my Intel in no time.
Why 5GHz all the time? I'd love to have such an extremely powerful CPU, but I'd actually appreciate it if it could downclock itself automatically and stay as cold as possible whenever I don't need its full power. Sometimes I run heavy computations and having eight 5GHz cores sounds great, but most of the time I just read or write something, so even 1GHz sounds like overkill.
Base frequency isn't the same as lowest frequency (ya... it's weird). Base frequency is vaguely related to the idea that if you had all cores running at the base frequency, you would run just about at the system's TDP (it's really a complete mess, this is a simplification). Your system can still drop CPU cores down to 400-800MHz in low energy states.
What this announcement is basically saying is that Intel now has a 8 core chip where all 8 cores can run at 5GHz indefinitely "out of the box".
According to siliconlottery.com, 38% of 9900Ks are overclockable to 5 GHz. Probably they just decided to select the good 9900K chips at the factory, so these are not exactly new chips.
That's mentioned in the article: "The new Core i9-9900KS uses the same silicon currently in the i9-9900K, but selectively binned in order to achieve 5.0 GHz on every core, all of the time."
Not indefinitely, just at the same time, because thermal throttling will happen after some time. It just means all cores will be able to go to 5GHz at the same time; it says nothing about all of them being able to stay at 5GHz.
Did they say that? Because there are people overclocking current chips and with good cooling have no trouble staying at 5ghz on all cores without throttling.
Since it's turbo, I thought it means "as long as the CPU likes it", rather than indefinitely. Or did they change how turbo works and now it's "it will run on turbo frequencies as long as there is enough load and the CPU is not temperature throttled"?
> To simplify, there are three main numbers to be aware of. Intel calls these numbers PL1 (power level 1), PL2 (power level 2), and T (or tau).
> PL1 is the effective long-term expected steady state power consumption of a processor. [...] PL2 is the short-term maximum power draw for a processor. [...] Tau is a timing variable. It dictates how long a processor should stay in PL2 mode before hitting a PL1 mode.
> This is where it gets really stupid: the motherboard vendors got involved, because PL1, PL2 and Tau are configurable in firmware. [...] This lets them set PL2 to 4096W and Tau to something very large, such as 65535, or -1 (infinity, depending on the BIOS setup). This means the CPU will run in its turbo modes all day and all week, just as long as it doesn’t hit thermal limits.
I don't think that's how it works. The CPU can adjust its frequency through a much larger range; the base clock is not the minimum frequency it will run at all the time.
Open Task Manager or Intel Power Gadget (on Mac) and watch your CPU frequency - it already downclocks itself when it's not under load. Usually my 4770 idles around 1.2GHz, and I believe some motherboards let you set a minimum clock lower than that in the BIOS.
In any case, a 10 MHz 68030 should be enough for full Emacs; it's commonly seen as the lowest hardware requirement for a useful workstation Unix.
Nothing, if you don't care about AVX workloads: you can get a 9900K and set it to 5.0GHz with an AVX offset of 2 pretty much out of the box.
Unless the KS guaranteed a 5.3-5.4GHz all-core OC, I don't see it being anything more than a PR release anyhow.
That said, I'm not even sure the 9900KS doesn't come with an AVX offset to begin with. Most higher-end motherboards come with a 9900K 5.0 preset anyhow, which sets the voltage to about 1.3-1.325V and an AVX offset of 3; it just yells at you that you need a good cooling solution and that this is not guaranteed to work.
I am likely the odd one out here, but wouldn't the capability to turbo a single core to, let's say, 5.5 GHz or higher as factory stock be more useful in real life than a one- or all-eight-core turbo to 5 GHz instead of 4.7? There are still enough single-core/single-thread apps out there that could benefit from faster single-core performance, and this newest and hottest (also in temperature) i9 cannot go faster in single core than the 9900K.
And the speed of my overclocked 9700K. If anything, this is just a "Hey, some people can cool an 8-core CPU at 5GHz, let's make a new bin for the 40% of CPUs that can maintain that" release.
From an interesting historical perspective, I mark the end of Moore's Law in 2001 with Intel's prediction of a 5GHz "Netburst" in 2005, which could not keep itself from melting. Somewhere I have a marketing road map of 5GHz in 2005, 10GHz in 2010. It was aspirational of course, but seeing what had to happen between then and now in order to get a chip that runs at 5GHz all the time based on their architecture is illuminating of the challenges they face.
Amplifying the other current replies, what you're bemoaning and what slagged the Netburst "marchitecture" is the end of MOSFET Dennard Scaling: https://en.wikipedia.org/wiki/Dennard_scaling
Moore's Law is "the number of lowest cost transistors doubles at X interval", and 193nm UV immersion lithography limits have been hitting it hard lately (see https://en.wikipedia.org/wiki/Multiple_patterning). But chip manufacturing equipment makers haven't run out of tricks quite yet.
Perhaps, and perhaps it is just a difference of how we internalize what "Moore's Law" means. Granted, when Gordon postulated it, he was strictly talking about numbers of transistors, and the implication was that transistors were a leading indicator of performance.
Since I lean more on the 'performance' side of things, that was the end of 'single thread performance scaling', or put another way, that was when the performance of a single core stopped doubling every 18 months or so. And everyone switched over to dealing with Amdahl's law instead.
If you look at transistor count, yes. But single-core performance has stagnated since ~2003; that's when we hit the 3 GHz mark. Progress since then has been a lot slower.
True for practical uses, most of the performance increase comes from more bandwidth and parallelism. But it's a mere 2-4x increase for single-thread performance, over 15+ years: https://preshing.com/images/integer-perf.png
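To put that gap in perspective, a bit of back-of-envelope arithmetic contrasts the old 18-month doubling cadence with the 2-4x single-thread gain the linked chart shows. All figures here are approximate and purely illustrative:

```python
# What "doubling every 18 months" would have delivered over ~15 years,
# versus the rough 2-4x single-thread gain from the linked chart.
years = 15
moores_pace = 2 ** (years / 1.5)      # doubling every 1.5 years -> 2^10
observed_low, observed_high = 2, 4    # rough range read off the chart
print(f"18-month doubling over {years} years: {moores_pace:.0f}x")
print(f"observed single-thread gain: {observed_low}-{observed_high}x")
```

Roughly a thousand-fold gap between the trendline and reality, which is why everyone switched to counting cores instead.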
So, as I understand it, this isn't new silicon; it's just binning the existing 9900K. If you wanted an overclocked 9900K, you would have just gone to Ciara, who obviously bin, overclock, and verify their systems anyway. So now you go to Ciara and Ciara go to Intel and buy a 9900KS, instead of previously, where you would go to Ciara, they would go to Intel and buy five 9900Ks, and they would find the one that would've been as fast as the 9900KS anyway.
Those numbers are Intel's "Recommended Customer Price", not actual retail prices. The -F parts really are listed with the same RCP as the parts with GPUs enabled. No, it doesn't make much sense, but Intel has been experiencing a CPU manufacturing crunch, and the desktop market gets the short end of the stick when that happens.
There's no doubt about it: this will be a beast of a gaming chip. It will also likely cost an arm and a leg (it has to; it's binned silicon, meaning it's supply constrained) and likely have a really high TDP.
I used to be pro-AnandTech and considered them one of the best sources online for hardware news. But the fact that they have yet to write a single post, big or small, about Intel's ZombieLoad and its implications for performance worries me a bit.
Then there are the usual "Intel" benchmarks [1] on GPU, trying to suggest the two CPUs were both running at a 25 W TDP to give a "fair" comparison, without mentioning that the Ice Lake-U CPU was running with 50% more memory bandwidth than the AMD Ryzen. And we know graphics benchmarks depend a lot on memory bandwidth. The memory used was mentioned on Tom's or other sites, but not AnandTech. (Although none of them mentioned the bandwidth difference; it was up to the reader to work it out.)
Anyway, none of these consumer CPU upgrades interest me anymore (although any improvement to the iGPU would be great). I am eagerly waiting for a 2S, 128-core EPYC 2 on a server or AWS to play around with.
[1] https://www.anandtech.com/show/14405/intel-teases-ice-lake-i...
Edit: And the lesson here: never trust a single news source. Always have a few options open and fact-check yourself (if you have the time).
It's not just Anandtech.
I feel like the entire "PC enthusiast" review space has dropped the ball on hardware vulnerabilities. Reevaluating performance between microcode and OS patches is an afterthought, and when a new CPU hits the market the numbers are presented without the obvious disclaimer that these performance gains may evaporate within months.
Some even perpetuate the "only relevant to the datacenter" myth, despite the fact that security researchers have shown these vulnerabilities can be exploited with JavaScript in the browser.
I'm glad the PC enthusiast space hasn't succumbed to the wild hysteria caused elsewhere by the side channel issues. It's tiresome to see every new variation people come up with reported as a new apocalypse all over again. Half the reason I still pay attention is to find if there's a new Linux boot switch I need to turn on to disable some new performance regression.
Even though I'm personally in the fortunate position not to have any reasonable exposure to these vulnerabilities, I wouldn't be particularly worried even if this wasn't the case. It's been well over a year since Meltdown and Spectre came out and there still hasn't been a single case of anyone successfully using these vulnerabilities to productive ends in the wild that I know of. Obviously, cloud computing vendors need to pay attention and there are legitimate business concerns that are affected by this, but insofar as personal computing goes? If people persist in the ridiculous notion that constantly running completely arbitrary code in naive sandboxes is a great idea, I imagine there will eventually be issues, but so far the issue seems to be vastly overblown in the popular media.
I don't see how the act of taking and publishing measurements after microcode and OS updates constitutes hysteria. It's my understanding that, at least on Windows, you pretty much have to opt out of these patches or manually install an update that disables the mitigations.
I fully support a user's right to bypass these mitigations, and you're correct that your typical desktop user, at least today, isn't a target. But it seems odd that websites dedicated to performance computing have a blind spot for how automatically installed updates will impact performance.
> I don't see how the act of taking and publishing measurements after microcode and OS updates constitutes hysteria.
It's quite easy to sensationalize benchmark results even unintentionally. The average reader of PC hardware review sites is totally willing to latch on to a microbenchmark result that shows a 20% performance drop and claim that it's disastrous for performance, even if the actual added delay to real-world operations is a fraction of a millisecond and thus will almost never cause the result of your user input to be delayed by even a single frame. There's a certain degree of irresponsibility in publishing results that you know will be taken out of context by almost everyone who reads them. I've discontinued benchmarks in the past because it was frustrating seeing readers pretend like they show a meaningful difference between products when the reader's workload never comes close to the workload represented by that benchmark.
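The arithmetic behind that claim is easy to sketch. The 0.5 ms operation cost below is a made-up example, not a measurement; the point is only that a 20% relative regression on a short operation is tiny next to one 60 Hz frame:

```python
# Illustrative only: a 20% regression on a hypothetical 0.5 ms operation
# adds a delay far smaller than a single 60 Hz frame.
frame_ms = 1000 / 60              # ~16.7 ms per frame at 60 Hz
op_ms = 0.5                       # hypothetical real-world operation cost
slowed_ms = op_ms * 1.20          # same operation after a 20% slowdown
added_ms = slowed_ms - op_ms
print(f"added delay: {added_ms:.2f} ms, {added_ms / frame_ms:.1%} of one frame")
```

A tenth of a millisecond of added latency will never push a response across a frame boundary on its own, even though "20% slower" makes for a dramatic chart.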
Did you just make an argument against PC review sites publishing any benchmarks at all because readers can't be trusted to interpret them correctly?
That would kind of defeat the point.
If they published a benchmark in the past and don't bother to correct the benchmark when it becomes out of sync with reality - that is just bad journalism.
Nobody is saying you should go and cherry-pick benchmarks after the mitigations hit, but you should definitely check the benchmarks you already published once.
These sites can and should expect an informed reader.
In any case: Leaving wrong information up uncontested helps neither "experts" nor laymen.
> Did you just make an argument against PC review sites publishing any benchmarks at all because readers can't be trusted to interpret them correctly?
No, and you should know better.
> If they published a benchmark in the past and don't bother to correct the benchmark when it becomes out of sync with reality - that is just bad journalism.
Proper practice is to publish the full test conditions, including software, firmware and nowadays also microcode versions. The availability of newer versions does not make older results any less true.
At AnandTech, we make all reasonable attempts to keep a thorough database of older hardware tested on newer benchmark suites, but the time this requires means we cannot re-test everything multiple times per year. I have over 200 SSDs and counting in the collection, and that test suite is over 30 hours long. The collection of CPUs is much larger. GPU reviews typically have fewer back-catalog hardware entries because updating to new drivers a few times a year is often unavoidable. You can browse the results for current and previous test suites at https://www.anandtech.com/bench/
> These sites can and should expect an informed reader.
You don't read the comments as often as we do.
In the case of my 6850k my overclock was silently killed by the Windows 10 microcode update which locked multi to 38x.
This behaviour angered me no end. I wasted significant time looking for workarounds, and deleting the microprocessor driver was the only way. I wonder what fixes I've now nixed, but there was seriously no need for Intel to kill my overclock.
On a couple of occasions Intel have pushed updates which have reset my fix. Dear $deity .. my next PC will be AMD for sure.
"Wild hysteria"? What do you mean? Experts seem to be far from hysterical, and go for more technical language, benchmarks and all. And the masses that usually go hysterical don't even know their CPUs are going to take a 20% performance hit in the next OS update, and probably won't even realize it.
The conversation was about the (nominally technical as well as more mainstream) press, not the experts. My remark regarding "wild hysteria" was made in that context. Experts and competent users will do the same thing they always do - evaluate any and all mitigations in the context of the threat models relevant to their usecases and act accordingly. Whether depriving the mass of less technically inclined users of the performance they are used to with all the implications that entails (including for energy efficiency and other externalities) is a wise decision only time will tell.
>My remark regarding "wild hysteria" was made in that context.
Considering we are referring to attacks that can bypass your PC's security, "prudence" is a better word than hysteria.
Yes, if they are left alone, it is the "end of the world".
They can be used to make any modern OS and browser as full of holes as Windows 98.
> Considering we are referring to attacks that can bypass your PC's security, "prudence" is a better word than hysteria.
That statement can be made about any vulnerability whatsoever. The merit of any mitigation can only be determined by a cost/benefit analysis that takes into account the potential impact of the vulnerability as well as the very real costs of mitigating it.
> Yes, if they are left alone, it is the "end of the world".
No offense, but this is exactly why the word "hysteria" seems far more appropriate than "prudence". Not a single one of these vulnerabilities has been used to cause any measurable damage anywhere that we know of, whereas the mitigations deployed have significant costs that everyone must pay. Despite this, emotional "the sky is falling" type pronouncements are far more common in the media - even the ostensibly technical press - than attempts to rationally weigh the costs and benefits of any particular approach to the problem.
>Not a single one of these vulnerabilities has been used to cause any measurable damage anywhere that we know of, whereas the mitigations deployed have significant costs that everyone must pay.
That's like saying: "nobody was drowned that we know of, whereas there was a significant cost to building the dam that everyone paid". (And also not dissimilar to arguments about doing no major industry/lifestyle changes regarding climate change).
It's exactly because there were mitigations relatively quickly deployed that we didn't have a "hack em all" exploit doing the rounds in hundreds of millions of devices. The difficulty of exploiting also gave some leeway to deploying those mitigations.
> That's like saying: "nobody was drowned that we know of, whereas there was a significant cost to building the dam that everyone paid". (And also not dissimilar to arguments about doing no major industry/lifestyle changes regarding climate change).
It is very dissimilar indeed - the sentence you quoted does not constitute an argument by itself. It is an observation regarding the present state of affairs (which you have not disputed), which to me indicates a need to take a breath and do a reasoned cost/benefit analysis as opposed to the hysterical "this must be fixed at any cost, externalities be damned" mindset that is fairly common in many circles.
If you really want a climate change analogy, though, consider this - however many mitigating workarounds you invent, as long as speculative execution exists there will always be side channel attacks, and eventually some of them will probably succeed to some extent. Perhaps, as you noted, some major industry/lifestyle changes are indeed in order - people could stop living in the delusion that a perfect sandbox is possible and realize that arbitrary code execution will always entail risks. Rather than turning every website into a potential security risk, perhaps it is our approach to software (rather than hardware) that needs re-evaluation.
> The difficulty of exploiting also gave some leeway to deploying those mitigations.
That's putting it lightly. Exploiting Spectre to get private data is difficult. Turning that into a privilege escalation is exponentially harder, so any "hack em all" exploit on hundreds of millions of devices would have needed an entirely unrelated mechanism for spreading.
> Half the reason I still pay attention is to find if there's a new Linux boot switch I need to turn on to disable some new performance regression.
No need, if you really want to disable all mitigations, including future ones, use mitigations=off.
https://www.phoronix.com/scan.php?page=news_item&px=Spectre-...
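Recent kernels also report which mitigations are currently active, so you can check before deciding to turn anything off. A minimal sketch, assuming the sysfs interface added around Linux 4.15 (the loop simply finds nothing on older kernels):

```python
# Read the kernel's per-vulnerability mitigation status from sysfs.
# Path assumed from the Linux >= 4.15 interface; absent on older kernels.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
mitigations = {}
if vuln_dir.is_dir():
    for entry in sorted(vuln_dir.iterdir()):
        mitigations[entry.name] = entry.read_text().strip()
for name, status in mitigations.items():
    print(f"{name}: {status}")
```

Each file prints a line like "Mitigation: PTI" or "Vulnerable", which makes it easy to confirm whether a boot parameter such as mitigations=off actually took effect.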
Thank you, I didn't know about this! Currently still on 5.0 for most of my machines, but this will be helpful once I move to 5.2+.
> there still hasn't been a single case of anyone successfully using these vulnerabilities to productive ends in the wild that I know of
Why would the attackers let people know they've successfully exploited?
> I feel like the entire "PC enthusiast" review space has dropped the ball on hardware vulnerabilities. Reevaluating performance between microcode and OS patches is an afterthought, and when a new CPU hits the market the numbers are presented without the obvious disclaimer that these performance gains may evaporate within months.
If you want useful benchmarks that show the performance impact, go to phoronix.
"PC Enthusiast" websites care about gaming performance and single user desktop performance, and always have. This has been the same since I started following things when the fastest CPU available was a 300 MHz Pentium 2. Imagine how amazed we all were by the 1 GHz Slot A Athlon.
Phoronix has multiple articles on the impact of those vulnerabilities, from small laptop to large server processors.
The perk of being able to test on Linux is that it's much easier to fully automate testing. Unfortunately, their analysis tends to be shallow.
Their audience is not mainstream but enthusiasts and professionals. They do not provide in-depth analysis, since they probably assume the reader can do that for themselves.
That may have been true at one time, but the class of people who would consider themselves to be enthusiasts has broadened well beyond the class of people who can accurately judge how their workload corresponds to the benchmark results they're reading. The recent improvements in the Linux gaming situation have been a big contributor and has undoubtedly skewed the Phoronix audience.
Phoronix serves two important purposes that nobody else does. 1) It serves as a news aggregator for a lot of different open source communities. You'd think that a site called hacker news would do that, but ironically it doesn't. Most content here is either heavily web dominated, or just random drivel about being excellent in life. 2) He runs his standard battery of tests on everything. A lot of upstream projects don't seem to have that much emphasis on performance regression testing. He has uncovered a few regressions and reported them upstream on a few occasions.
I greatly value Phoronix for both of those things; it's a great resource for both my work and personal computer usage. But it does mean that the traditional hardware reviews themselves are something of an afterthought.
The lament was that the large enthusiast sites are staying hush hush, mostly to not bite the hand that feeds them.
Whether Phoronix cares is outside the argument.
Most places certainly retested CPUs in the wave of Spectre/Meltdown and at least the sources I've seen have mentioned Zombieload/MDS though they've yet to go back and rebenchmark CPUs due to the fact they're either prepping for or travelling to Computex currently. I'd expect most of them to have videos in the next month though.
It's rumored that companies like Intel and Nvidia will retaliate against review sites and publications for bad press coverage by slowing or cutting off access to preview release products for reviews.
pc enthusiasts aren’t really into running javascript either.
I've seen a number of comments like this over the last few days and I don't really get it. Gamers use the internet, right? The vast, overwhelming majority of them are going to be running JavaScript programs hundreds of times per day.
sure, and maybe browsers or os need some way to say “running untrusted code please turn off performance for a sec” for that use case. until then ill just use one tab at a time or disable js before i opt in for slow.
I expect that the vast majority of PC gamers keep Javascript enabled in their browsers.
CS:GO also uses JS for its new GUI, and there's already been one exploit that took advantage of it.
I wouldn't expect that. Is this the kind of thing that would be reported by the Steam hardware survey (if that's still a thing)?
Edit: I took a look and it seems they don't ask/record that info. I apologise for the offence this idea seems to have caused someone.
Nobody was offended by the other. I think people are downvoting you and gameswithgo because you appear to be applying your own personal notions to an entire segment with little to no evidence.
'PC enthusiast' is such a blanket term to start with, so applying a blanket statement to such a group is obviously doomed to failure from the very start.
And that's exactly what the parent did that I replied to. But I do appreciate the explanation, cheers.
Why would Steam's hardware survey report whether JavaScript is enabled in browsers?
Because it's a hardware and software survey. I have a memory of them asking some extra questions, but that was years ago. Either I remembered wrong (likely), they changed it, or they don't report it at all.
Because downloading and running binaries of applications is much safer?
From what I've seen, web browser teams have taken the recent risks extremely seriously. I have certainly had a worse track record of infection from downloading and installing software than from visiting sites with JS running.
> web browser teams have taken the recent risks extremely seriously
Not really. They didn't even properly apply band-aids.
Chrome and Firefox disabled a number of features that allow JavaScript code to create high-precision timers. This makes exploitation slightly more difficult, but the gaping hole is still there: there is an infinite number of ways to create a high-precision timer, just less obvious than the ones that were closed.
Chrome has enabled Site Isolation on desktop, but hasn't done so on Android (presumably because of the associated increase in memory consumption).
All major browsers still allow JavaScript to run in the background, create CPU threads, and consume an unrestricted amount of CPU time. I don't believe any of them have mounted instruction-based defenses (lfence etc.), but I may be mistaken here.
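The "make your own timer" point is easy to illustrate outside the browser: a busy counting thread is itself a clock. This is a toy Python sketch of the idea; the real attacks do the equivalent from a JavaScript worker over shared memory, with far better resolution than shown here:

```python
# A counting thread acts as a makeshift high-resolution timer: the number of
# increments observed during an interval measures that interval's length.
import threading
import time

counter = 0
stop = False

def spin():
    global counter
    while not stop:
        counter += 1

t = threading.Thread(target=spin)
t.start()
start_ticks = counter
time.sleep(0.01)                  # "time" a ~10 ms interval by counting ticks
elapsed_ticks = counter - start_ticks
stop = True
t.join()
print(f"ticks counted in ~10 ms: {elapsed_ticks}")
```

Coarsening performance.now() does nothing against this, which is why the parent calls the timer changes a band-aid rather than a fix.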
It would be nice if the mitigations could be applied per-core. Then the OS could set the affinity for processes like games that really don't care.
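That scheme could plug into existing affinity APIs. The sketch below is hypothetical: os.sched_setaffinity is a real Linux call, but the per-core mitigation knob does not exist today, so the "fast" core set is entirely imagined:

```python
# Hypothetical: if mitigations could be disabled on a subset of cores, a game
# launcher could pin trusted processes there. The affinity calls are real
# (Linux); the notion of mitigation-free "fast" cores is imagined.
import os

available = os.sched_getaffinity(0)                        # cores we may run on
fast_cores = {c for c in available if c < 2} or available  # imagined fast set
os.sched_setaffinity(0, fast_cores)                        # pin this process
pinned = os.sched_getaffinity(0)
print(sorted(pinned))
```

The OS would still need to guarantee that untrusted code never migrates onto those cores, which is the hard part of the idea.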
Discord is huge in gaming/pc enthusiasts. Oh, and you use VS Code in your stream. I wonder what those two are written in...
Those are bad examples because they both run unsandboxed.
The recent CPU vulnerabilities aren't uniquely bad for Javascript specifically. They're bad for wanting to run unprivileged code. Javascript in regular web pages just happens to be the most obvious example of sandboxed code in desktop computers.
Most of the people who write these articles for review sites do not really understand CPUs in depth. They know how to run benchmark suites and talk about new features mentioned in Intel's marketing material. Most of these people are writers, not engineers. If they were experts they could make a lot more money working at tech companies instead of working for review sites.
In fact, I would bet that most professional software engineers could not correctly explain Spectre, Meltdown, and Zombieload without making at least a few mistakes.
AnandTech is different. They cover μarch, and their writers clearly understand what they're talking about.
Anandtech was sold (or maybe Anand left, I don’t recall exactly) about 5-6 years ago. The depth of their technical writing isn’t what it used to be.
Anand left to work for Apple in 2014.
they're still the best in the mobile review space. though that may say more about typical mobile phone reviews than AnandTech.
Even AnandTech mostly just regurgitates what is feed to them at press events these days.
Anand Lal Shimpi left Anandtech 5 years ago, and the quality articles he wrote have not been replaced. It's basically a tech lite blog now.
So despite what I wrote, I still think they are one of the best. Both Ian and Andrei are good, with many in-depth articles, though I really do miss Anand's articles. I think the real problem is that AnandTech is short on staff.
Anand has been working at Apple for a few years now; I wonder what he has been up to.
I would say Ian is really the only writer at Anand I care to read anymore. His articles always take time to come out compared to everyone else's, but they at least cover everything he can think of to tell you about, and are well researched.
Andrei Frumusanu is also great in the mobile space. The only tech reviewer to run SPEC on phones. Great overviews on the arch of new cores too.
Because Intel strictly forbids publishing benchmarks of their processors with the "hardware vulnerability mitigations" applied. Even OEMs cannot show them to their enterprise customers. You can do your own benchmarking after buying the systems. So: no money, no real-world benchmarking.
Intel has never told AnandTech not to benchmark their microcode updates or a third party's OS updates. They haven't threatened to stop sampling CPUs for review. I haven't seen any evidence that Intel has ever attempted to enforce such a restriction against anyone. It's just a stupid clause that one of their dumber lawyers slipped into the EULA text, and does not appear to be something they actually care about at an organizational level or expect to be able to enforce in the real world.
I would say putting it in the EULA text is telling you, and, what's more important, a court of law would probably agree. I don't know why you would expect anything more.
Do you work for Anandtech? If not, what are you basing these claims on? I suspect that Anandtech, etc would not publicly disclose if a hardware manufacturer was forbidding them from benchmarking certain configurations under threat of not releasing samples..
Yes, I write for AnandTech (paid as an independent contractor; I'm not one of the salaried editors). I've done some Spectre/Meltdown regression testing for AnandTech, and I've never been instructed to not do such testing in the future.
Microcode benchmarking is not the hill Intel wants to die on.
Which begs the question, when will we see benchmarks of Intel's CPUs with all vulnerability mitigations on vs. AMDs CPUs?
As soon as the definition of "with all vulnerability mitigations on" stays stable long enough to put together a good review. Benchmarking a moving target is hell, and we don't have enough equipment or staff to do the around-the-clock regression testing that would be necessary to keep our benchmark database current with everything that's happened over the past 1.5 years.
People are using your benchmarks to decide what computer to buy. If they're that out of date, what should I tell them?
End-user perceived performance is usually not affected enough to meaningfully change the ranking of products. If a chip goes from being 5% faster to 3% slower when mitigations are applied, you'll never notice that without busting out a stopwatch and digging for a reason to be disappointed. Remember, measurable performance differences aren't always noticeable performance differences, especially without a side by side comparison.
And if two competing processors are close enough in performance for these mitigations to change which one comes out on top of benchmark charts, then other factors like price, power consumption and IO capabilities are probably a much bigger deal at that point than minor CPU performance differences.
Most if not all of our benchmark suites have been updated to include at least the early Spectre/Meltdown/et al. mitigations, and new CPUs are being tested with the microcode they launch with.
Why not do an article with then-current mitigations every 6 months? Rather than conveniently waiting until Intel can get their hardware fixes out. Which will coincide with the "mitigations are stable" article.
I'd be willing to bet that if the majority of mitigations impacted AMD rather than Intel as it does today, you'd already have done this and would continue to follow up. This is very interesting material and a special moment in time to cover it and inform your readers. Other than willful laziness ("lack of time", everyone knows you make time for priorities), this appears like shilling hard for Intel.
If Anandtech decides to do the right thing, I'd like to see .Net or Java compilation. Real-world based benchmarking only.
It's not ok to insinuate shilling on HN or to dismiss someone's work by assuming bad faith in this way. Would you please review the site guidelines and follow them when posting here?
https://news.ycombinator.com/newsguidelines.html
> I'd be willing to bet that if the majority of mitigations impacted AMD rather than Intel as it does today, you'd already have done this and would continue to follow up.
Fuck you, too. I've given you reasonable explanations and you're still throwing out insulting conspiracy theories. If you want sensationalized news, there are plenty of outlets that will give you what you want, and you don't need to be a dick to those of us who are trying to be reasonable and honest about both the subject matter and the resources we have to provide quality coverage.
Also, we've done two significant articles in the past year measuring the impact of these mitigations, so we're not even falling behind the standards you claim to want us to meet.
Personal attacks and name-calling are not ok on HN, regardless of how unfairly someone is interpreting your work. Would you please review the site guidelines and follow them when posting here?
https://news.ycombinator.com/newsguidelines.html
To use the best information available, which are estimates on what the potential impact can be, and then make their own decisions.
They did for a brief period. It got reverted pretty quickly.
https://www.tomshardware.com/news/intel-cpu-microcode-benchm...
And you have to abide by those unjust rules because... ?
...you sign a legally binding NDA to be able to early-access the CPUs, test & review them; get the semi-classified technical documents to develop your new servers.
If you don't sign that NDA you can't buy the CPUs from Intel to resell them. Even if you are able to buy the CPUs from them, there's no guarantee that you'll buy at the list price or get the discounts for big, prestige projects which require tenders.
It's a deep and ugly rabbit hole.
No free hardware for reviewing, and having to wait until the hardware is available to the general public, so every other site has scooped you.
ZombieLoad is two weeks old. Testing takes time.
And a minute of googling brings up multiple articles about Spectre/Meltdown including two comparisons. https://www.anandtech.com/show/13659/analyzing-core-i9-9900k...
https://www.anandtech.com/show/12566/analyzing-meltdown-spec...
> ... none of these Consumer CPU upgrades interest me anymore
What about the Ryzen 3000 line up? The benchmark leaks make it seem like it is going to be a huge improvement and AMD isn't susceptible to Zombieload.
The problem is I am in the Mac ecosystem, which means I don't have much of a choice (I doubt Apple will ever switch to AMD). And since most of my casual gaming is done on mobile (I am quite old and no longer have the time I used to spend for hours on UO or WoW), none of these upgrades means anything to me. So my interest is in servers, where most of my time is spent now doing web development.
Apple might switch to ARM, but I think you're right on AMD
This probably doesn't contribute to the conversation, but with the number of serious vulnerabilities that have popped up recently, I'm not inspired to solve the truth table for the CPU vendor that leaves me the least exposed. As others have said, and as I have seen, some of these can be exploited with JavaScript in the browser. (I do not know much about ZombieLoad presently.)
Looking forward to a less complex architecture even if it means cutting me off at the knees with execution speed (for a few years):
RISC-V
I remember mentioning before that we should have learned, with the first Xeon Phis, to program microkernels for high-core-count in-order CPUs.
Because the future is looking increasingly in-order and high-core-count, with hard partitioning between security contexts.
That's because Anand Lal Shimpi is not doing the writing anymore...
I think the point about the Ice Lake announcement is a mischaracterization.
It's typical for news sites to report individual announcements with little or no analysis; this is fair, as long as the post clearly specifies its nature (which, in this case, it does).
AnandTech actually did something very interesting on the Intel subject, which I didn't see on other sites: an article about the performance of the i9-9900K locked at its nominal TDP (95 W), which showed very significant losses.
I was put off AT the day I noticed they forgot to cover the Threadripper launch for weeks while they flooded the front page with dry half pagers about new Intel motherboards (not benchmarks mind you, just snippets from the OEMs Press release). I asked about it in the comments and got a boilerplate answer that they strive to present quality articles to the readers.
They also had the Intel series 6 launch where they praised the 6600K and compared it to the 2500K to show “massive” improvements over the years. This while all the other websites noted “minimal speed boost for too big price”. Perhaps both true but the spin on it makes all the difference when showing the intention.
AT shows quite the Intel bias. And it’s not the Intel part that bothers me, it’s the bias part. They go out of their way to make Intel look better without outright lying, just selectively presenting the truth in a way that shines a much better light on Intel. This for me casts doubt on other articles.
I’m glad Andrei Frumusanu’s mobile reviews still have a home, being the best I have seen on the entire internet. But that’s the only segment on AT where I can be reasonably sure about impartiality.
Their October 2018 i9 benchmark review was subtitled "Hardware and Software Security Fixes", and literally began with the following sentence:
The Spectre and Meltdown vulnerabilities made quite a splash earlier this year, forcing makers of hardware and software to release updates in order to tackle them.
https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9...
It's gaming-centric news; nobody cares about these exploits when running Fortnite.
Explain why I should care instead of downvoting. A Windows program can already just directly look at the memory of other running processes, so why should I care about side-channel attacks, outside of JavaScript snooping on things, which I can mitigate with simple behavior changes?
>>a windows program can already just directly look at the memory of other running processes
Do you run everything with administrative privileges?
Too many programs ask to run as admin already, people just click okay on the UAC prompt to make it go away now.
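For what it's worth, the same privilege distinction exists on Linux: /proc/&lt;pid&gt;/mem is the interface debuggers use to read a process's memory, and reading a *different* process is gated by ptrace permission, the rough analog of the admin/UAC gating discussed above. A minimal sketch (Linux-specific; the function name is my own), reading our own memory since that needs no special privilege:

```c
/* Hedged sketch, Linux-specific: reading process memory through
 * /proc/self/mem, the same interface debuggers use on /proc/<pid>/mem.
 * Cross-process access requires ptrace permission; reading our own
 * memory does not, which is enough to demonstrate the interface. */
#include <assert.h>
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

static const char secret[] = "not-actually-secret";

/* Read `len` bytes of this process's memory at `addr` via /proc/self/mem. */
static int read_own_memory(uintptr_t addr, char *out, size_t len) {
    int fd = open("/proc/self/mem", O_RDONLY);
    if (fd < 0)
        return -1;
    /* pread() at an arbitrary virtual address, exactly as a debugger
     * would pread() inside a traced process. */
    ssize_t n = pread(fd, out, len, (off_t)addr);
    close(fd);
    return n == (ssize_t)len ? 0 : -1;
}
```

The point of the sketch: memory reads through OS interfaces are permission-checked, which is exactly what the side-channel attacks bypass.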
Benchmarking CPU firmware is forbidden by the license (the same is true for databases).
Intel quickly walked back that crap
https://www.tomshardware.com/news/intel-cpu-microcode-benchm...
What licence? You don't have to agree to a licence to use a CPU. I mean, Intel might think otherwise, but those kinds of licences ("you agree to this licence just by opening the product") are not worth the paper they're written on in the EU, so even if there is such a licence, it wouldn't be applicable everywhere.
Intel could license the microcode update.
I would set up a scenario where safety is a concern (V2V) so you could get a court order to benchmark the firmware.
Anandtech went downhill when they lost Anand. Not sure what the hell he could possibly be doing that's useful to Apple, but the conspiracy theorist in me wants to think it was Apple getting him out of the media.
Intel has fallen so far. It's honestly a shame to watch at this point.
I remember back when Sandy Bridge was first released, and I was extremely pleased by the performance improvements my new chip was able to provide. Did they really manage to mess everything up within such a limited timespan? Or was there just always a hidden incompetence that never showed itself until now?
Their design for 10nm and the implementation didn't line up. Whatever their (still undisclosed) problems were, the entire node was fundamentally flawed.
It might have been hubris at having been at the cutting edge of fab tech for so long. It could have just been the fruits of pushing the envelope - sometimes what you predict will happen when you put theory to application proves false.
It has warped their business heavily for 4+ years now. But in the same way AMD had to "get their act together" with their processor design after Bulldozer failed spectacularly in practice, and took ~7 years to fix it, companies at these scales cannot turn on a dime: Intel had their roadmap planned a decade in advance, and to have it so thoroughly trashed starting around ~2015-2016 will in all likelihood require until at least 2021 to correct.
I increasingly tend to believe the rumors (started by SemiAccurate) that we will not see 10nm in mass quantities and that 7nm is next. We will know very, very soon: Intel said 10nm CPUs in client systems will be on shelves for the 2019 holiday season.
But weren't there Intel 10nm chips sold for revenue in 2017? [0] Granted, it was an OEM-only part for some China specific education laptops...
0: https://www.anandtech.com/show/13405/intel-10nm-cannon-lake-...
Cannon Lake was released only because many at Intel have their bonuses tied to the process node launch. Well, they launched a 10nm CPU... one so bad the GPU is disabled, performance/watt is worse than KBR, and it was only available in limited quantities.
KBR = Kaby Lake Refresh. Interesting, in that Intel's "10nm" node is said to be more power efficient than their "14nm" node; in this case, per AnandTech (https://www.anandtech.com/show/11738/intel-launches-8th-gene...), KBR was launched on 14+nm. It could be that so early a 10nm part wasn't yet very power optimized. And that would be a very good explanation for why the GPU was disabled, versus the yield issue, which we're pretty sure is much more fundamental than specks of dust and other isolated defects that can disable part of a die without killing it altogether.
https://semiwiki.com/semiconductor/intel/7433-intel-10nm-yie...
The GPU was disabled because it is those blocks which are the most problematic yield wise.
Hits forehead for forgetting to check SemiWiki. But note that since then GlobalFoundries has, for now, abandoned offering this general node (7nm as they name it, 10nm as Intel does).
But isn't the whole die exposed and otherwise processed as a single piece? I'm very much not deeply educated here and can't justify the investment to change that; my primary mental models for defects are either something that takes out a whole die, like a misaligned lithography step, or spot damage like a piece of dust.
But there are clearly issues in between that are statistical within a die. I recall SemiAccurate saying one of Nvidia or AMD did a GPU tape-out on a TSMC process where they duplicated vias because that process's vias were iffy, and compensated with a less dense design where either one or two working was OK. If Intel is suffering that sort of problem, then the GPU is a big part of the die that can be fused off while you still have something useful. If all your CPUs or all your L3 cache banks fail, a working GPU is pointless.
That article points out two particularly suspect things Intel is uniquely trying at this node: SAQP for the metal layers, which I've seen cited before, and which they generically officially blame, and cobalt in interconnects. And at least one other thing was mentioned as suspect, and four new things total.
One ray of hope is mentioned for Intel, in that they were the most aggressive in the industry with their 14nm and 10nm nodes, and in both cases paid the price in yields, while they're being conservative for their 7nm node, no doubt because EUV is a very big step for everyone. Semiaccurate also commented and/or theorized that a compelling reason Intel is continuing to work on their 10nm at one fab is that one or more things in it are also going to be used in their 7nm, so they might as well debug them now and there, and sell some chips while they're at it.
Now to do some catching up on SemiWiki, thanks!
I've seen some recent tests showing that Bulldozer is quite competitive with new multithread-friendly APIs like DX12 and Vulkan. Roughly speaking, if you think of it as a rematch against the same-era Intel products, Bulldozer can win in lots of situations.
It's certainly better at multithreaded workloads than single threaded. But performance per watt is nowhere near Intel or Ryzen parts.
It's not Intel, it's the end of Moore's law. Intel's problem is that they are not well positioned to capitalize on the specialized processors that will be required to continue ekeing out advances for the next decade or two before we're entirely up a creek. :)
From how Apple and AMD are doing with their own processors though, it seems like Intel is just fundamentally doing worse even as things become more difficult with smaller transistor sizes. Apple is going to replace Intel with their own processors because Intel has failed to meet requirements. AMD, with a shoestring budget basically on the verge of bankruptcy the entire time they were doing their R&D, managed to build out a new architecture that has provided amazing results while Intel has basically had nothing to show in the same time.
But perhaps there's something I'm missing here. Is there a misconception or lack of information here on my end that needs to be clarified? I can only make my analysis largely as an outsider looking in when talking about semiconductors.
Oddly enough, the challenge of estimating who is "ahead" is kind of like traffic. Intel arrived at the scaling traffic jam way before anyone else, and has been slowly slogging through it. New entrants are catching up to the traffic jam and will have to make their way through it as well. If there is no breakthrough, then everyone will find themselves more tightly bunched in feature/performance curves than they have been in the past.
The spoiler though is that different architectures have different scaling properties and limitations. IBM's Power architecture has already scaled past where Intel is, not because of the semiconductor process, but because the architecture is more streamlined. ARM is somewhere in the middle, it started off pretty streamlined but it has been adding warts (special instructions) to more directly compete with Intel and that creates impediments to scaling.
Bad analogy. You can prove that getting in line earliest will get you out of it earliest. If you postulate that it’s more complex than that, it might hold up. You could say that Intel is driving a semi, while others are mini coopers and motorcycles, splitting lanes and better at speeding up/slowing down. At which point no analogy is necessary: startups and smaller companies are more nimble than larger companies, at the trade off of resources.
It's a good analogy as long as Intel was the first to experience the end of Dennard scaling (https://en.wikipedia.org/wiki/Dennard_scaling) because their fab lines were ahead of the rest of the industry's. And fabs are all "semis", due to the massive amounts of capital and talent needed to move to the next node.
So much so that we're now down to two companies in the whole world who are successfully executing the smallest CPU nodes, unless Intel manages to make their "10nm" work or pulls off their "7nm".
Meanwhile, we're hearing the very roughly equivalent TSMC "5nm" node is starting risk production (https://wccftech.com/tsmc-5nm-production-euv/; beta testing, you might say: someone outside of TSMC has to be the first, second, etc. to try to get real-world dies that work on a new node). Intel isn't saying anything, but SemiAccurate has reported that at least two fab lines slated to move to their 10nm are installing lots of EUV equipment, consistent with using them for their 7nm node (and at least one fab is moving back to 14nm).
Apple and AMD just have to ask TSMC to work their magic to make 7nm chips; they haven't had to do anything spectacular, just use TSMC's design libraries.
Intel is struggling because of their 10nm problems. Apple and AMD are not, because TSMC has pulled off 7nm. Architecture matters, but process node matters a lot too.
What would you say are the primary differences between the two companies? Is it more just a matter of luck that has allowed for TSMC to have been able to succeed where Intel hasn't? Or is there actually a meaningfully different process design and/or problem solving approach that is enabling this?
The full story on how Intel managed to fuck up 10nm so badly may not see the light of day for years if ever. But generally, it seems that Intel tried to make too many changes in one generation. They probably wanted their 10nm to be the most advanced process that didn't require EUV lithography. Some features of their 10nm process ended up not working (evidence points to the cobalt interconnects as one of the hang-ups). In the meantime, it looks like EUV is coming along nicely.
They compounded their problems by essentially stopping microarchitecture development on 14nm, which is why eg. their laptop processors still don't support LPDDR4, and they're still shipping basically the same CPU core they released in 2015. Coupling microarchitecture and fabrication development has at times been an advantage for Intel, but for the past few years it's been a huge mistake, and they've promised changes to their design processes so that they don't get stuck like this again in the future if fab advances aren't ready when new microarchitectures are.
TSMC naturally doesn't have this problem, because they're a pure play foundry. Their customers have to each make their own bets on when new fab processes will be truly ready, and how well they will perform in practice.
Not tntn, but I've been following this, and it seems to be both Intel's now decades-long history of very bad high-level engineering and personnel management catching up with their crown jewel, and their being more aggressive than TSMC was with its initial 7nm node. Perhaps Intel is depending on a particular lithography technique that TSMC isn't, or isn't yet heavily, but we don't really know; no one authoritative is talking, and Intel is still claiming 10nm is going to make it.
I think Intel having their own manufacturing fabs is hurting them in the long term. By outsourcing, you can go with whoever has the best solution. As a matter of pride Intel has not done this, but AMD, Nvidia, and Apple all do.
I've heard entirely the opposite: that a close relationship between chip designers and fabricators allows for higher-performance designs. I don't know of anyone who interpreted AMD selling off its foundries as anything other than severe financial distress, and the model worked supremely well for Intel while they stayed at least one step ahead of the competition. Enough so that it is said to have wiped out a generation of competing CPU architects: while Dennard scaling still worked, no matter how clever they were, Intel moving to its next process node wiped out their speed advantage.
But it's a brittle model, if a company screws up a node and is too messed up to handle the failure gracefully, as Intel is doing with their "10nm", no doubt with pride as a factor. And it's not uncommon for institutions to permanently lose abilities, I'm not counting on Intel succeeding with their "7nm" node.
On the third hand, we're now down to 2-3 high end CPU fab companies, Samsung, TSMC, and maybe Intel. That also can be a brittle thing.
Intel was ahead, and hit the wall first. Apple & AMD are not ahead, they're just catching up. I don't want to understate how big a problem that could be for Intel, of course. But they're also doing it on low margin parts, and Intel continues to make bank with their data center parts.
I don't think any of this represents a short-term problem for Intel, other than the general downturn in processor sales because fewer people will need to upgrade. But I think it represents a very serious long-term threat.
They have some really cool technical advances, like 3D xpoint. But I'm concerned that they do so badly on embedded and custom integration from a long-term perspective.
Apple sold millions of iPhones with 7nm chips while Intel struggles to build comparable 10nm chips and keeps releasing 14+++ nm. AMD will release 7nm chips very soon. It does not seem like they are catching up. Quite the opposite.
You can’t compare nm between vendors - it’s just marketing numbers.
Not directly no, but the actual feature size and density of TSMC 7nm and Intel 10nm are comparable.
What about die size?
Then you have to ensure you're comparing chips designed for the same market segment. Die size comparisons work well if you're talking about a Cortex-A53 on 16nm vs 12nm. It doesn't work as well when you're talking about a full SoC, or even a desktop CPU+GPU combo where core counts for both sides of the chip can vary greatly.
Die size is independent of process size; customers can order almost any die size they want.
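The "comparable density" point a few comments up can be sanity-checked with published peak-density figures. A back-of-the-envelope sketch; the numbers are the approximate, marketing-adjacent figures commonly cited for each node, not measurements of shipping dies:

```c
/* Back-of-the-envelope only: published *peak* transistor densities in
 * millions of transistors per mm^2.  Approximate, marketing-adjacent
 * figures as commonly cited, not measurements of shipping dies. */
#include <assert.h>

static const double intel_10nm_mtr_per_mm2 = 100.8; /* Intel's stated peak */
static const double tsmc_n7_mtr_per_mm2    = 91.2;  /* commonly cited for N7 */

/* Ratio of the two peak densities; ~1.1, i.e. "comparable". */
static double density_ratio(void) {
    return intel_10nm_mtr_per_mm2 / tsmc_n7_mtr_per_mm2;
}
```

A ratio near 1 is why "Intel 10nm" and "TSMC 7nm" are usually treated as the same generation despite the different marketing names.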
My simplest laptop in current use has 4 times more memory than my current phone and I probably would need to make huge compromises to live with half as much. A lot of the chips in phones don't even have external memory buses. A top-of-the-line iPad Pro sports an 8-core asymmetrical core design, with 4 fast cores and 4 slow ones and, overall, is slower than a 2-core Core M-based MacBook (although it feels great because iOS does a lot less than macOS).
Also, Apple doesn't make its own A-series processors - it uses TSMC for that.
And iPhones still don't come close to competing with desktop-class processors in terms of performance. iPhones also use much less electricity, of course, but the point remains.
I don't know enough about this, but the GP's argument of "Intel hit the wall first because they were the first to reach that level of performance" makes logical sense to me.
"Apple’s iPhone Xs is faster than an iMac Pro on the Speedometer 2.0 JavaScript benchmark"
Yes, Safari, but iPhones do compete with your average desktop processor (not the top end).
https://macdailynews.com/2018/09/23/apples-iphone-xs-is-fast...
They can compete in certain workloads. As a computational tool however, desktop Intel CPUs can be optimized far, far beyond the capabilities of any A-series CPU.
Don't forget that Intel CPUs have things that A CPUs are missing like QuickSync, AVX2, massive PCIe interconnectivity.
Whether the A-series CPU could be modified into something competitive on that front is yet to be seen. Whether this actually matters considering the state of our compilers and software development is yet another question.
Apple's newest CPUs have hardware explicitly for accelerating Javascript, so it's not surprising they'd pull ahead there.
And as you said, you're comparing the top-of-the-line iPhone to an "average" CPU.
> Apple's newest CPUs have hardware explicitly for accelerating Javascript
They really don't. A12 added a couple of instructions for floating point conversions, but contrary to claims making rounds on Twitter at the time, they were not even generated by WebKit when the benchmarks were run.
Right, but Intel isn't falling behind AMD or apple in this comparison - it is falling behind TSMC.
Intel made one single bad bet - their 10 nm process didn't work as well as they expected - and TSMC, who made the right bet, leapfrogged them.
In terms of architecture and vulnerabilities, it's not prudent to assume Intel chips are inherently more vulnerable to exploits than others; it's just that we know more about their vulnerabilities. If you want to find vulnerabilities with high impact in cloud and enterprise data centers, Intel Xeon CPUs will be your primary research target.
We're not really sure yet whether TSMC have leapfrogged Intel in the longer term though. Intel's 10nm issues seem to have delayed their smaller process nodes in the medium term, but by how much is yet to be seen. It seems, for example, that Intel 7nm isn't in quite as much trouble as one might expect.
It's also naive to dismiss the possibility that Intel has learned a lot from some of the failures in 10nm that will prove useful in accelerating node development in the future.
If the speculation about AMD's Rome / EPYC 2 performance is true, they have now surpassed Intel.
More accurately, AMD's 7nm processors should be ahead of Intel's 14nm processors. That's great, but it's not a huge surprise.
We've yet to see how competitive they'll be once Intel leapfrogs that 10nm node. That's assuming that they can, of course...
Are they doing worse? Or are they still ahead, just not as far ahead as they used to be?
I wouldn't expect any massive leads in any industry to last for long. This might just be regression to the mean.
They are behind. I think they have a chance to catch up with their 7nm which is supposed to be better than TSMC 7nm. But it won’t be soon.
But what does that really mean?
They still seem to be producing the fastest processors available for desktop and server.
It doesn't matter if someone else is making even a 3nm chip if that chip still can't outperform the current offerings.
The sizing numbers are also just nonsense marketing. They stopped meaning anything in particular a long time ago. Intel's '10nm' and TSMC's '7nm' are about the same size.
The size numbers do measure something, but what they're measuring differs between manufacturers.
They aren't comparable between AMD and Intel. They absolutely are comparable between Intel and Intel.
I wouldn't say that. Why are Epycs worse than Skylake Xeons?
The reason they are having problems is that they just continued doing die shrinks and speculation hacks to increase performance. They've essentially had the same core since Sandy Bridge.
They didn't see Zen coming, didn't have to compete with Bulldozer, and thought they could just keep shrinking rather than building a new core design. Once they hit 10nm, they failed, and their old core got some healthy competition from Zen. Now AMD is looking to take a serious lead with Zen 2, aka the Ryzen 3000 series.
I don't think Moore's law is dead, Intel just gave up on real r&d because it was cheaper.
I can't wait for ARM and RISC-V to enter the playing field.
You probably want to say “eking out,” not “eating out,” as the latter means something... very different.
Someday I will learn to triple check when I use speech recognition.
They did buy Altera didn't they?
reddit.com/r/boneappletea
Oh goodness, thanks. Fixed 'ekeing', which my phone apparently did not believe I'd said.
So close!
It's eking. Eke, eked, eking.
Mobile is the foot rub. (Autocorrect of "mobile is the future" I saw once.)
That’s ducking funny!
And yet they still have the lion's share of the market. And the money to eventually recover from this as if nothing happened.
Tell me when they have really fallen. Still very far from it.
That means very little. As the saying goes, “How did you go bankrupt?” “Two ways: gradually, then all at once.”
In technology, downward swings of fate tend to come fast and hard. The camera world went from 100% film to 100% digital in the space of about five years, which extinguished Kodak. Or consider Palm/Nokia/BlackBerry, who went from collectively owning the entire mobile market to dead as doornails in even less time.
It’s easy to see how it happens to Intel too: AMD’s big-core-count chips start eating up server business, while ARM takes over PCs (at this point people consider it all but certain Apple is switching to ARM in the next few years, and Microsoft is building Windows on ARM as a hedge), and without another business for Intel to fall back on (they’ve shut down modems, mobile chips, and anything else that could’ve been a new source of revenue), that’s the end.
I’m not saying it’s certain, but I’m saying it’s totally possible and their current market share means nothing.
IMO this is only a repeat of the AMD Athlon days and Intel will go back to their anticompetitive antics sooner than later.
Did they ever truly stop?
Losing up to 40% of performance and having to disable hyperthreading is going to kill them in the server space.
That ryzen 3 demo didn't look good for Intel either.
Maybe it is just hard. AMD isn't shipping clearly better stuff either.
Show me which AMD chip performs better instead of just downvoting.
Performance benchmarks go out the window once you realize that a platform is woefully and unfixably insecure.
Performance benchmarks mean even less when those security issues are band-aided by workarounds that hurt the performance metrics.
If I buy 100k chips for my HPC cluster to do single-tenant processing, performance and performance/watt are priority one. The vulnerabilities Intel has to fight right now are irrelevant for this use case.
AMD isn't immune to side-channel attacks either. The most recent one we think AMD is immune to, but I wouldn't assume that in the long run AMD will generally prove to be more resistant to them than Intel.
Should they be describing it as 8 cores / 16 threads when there have been multiple security vulnerabilities that require turning off hyperthreading to be mitigated?
This is a very good point. I hope AMD brings it up with the EU. Might be a very slow process though, but at this point it is anticompetitive behaviour. AMD could probably squeeze a fair bit more performance out of their processors if they were allowed to cut some security corners.
Who do you think is disallowing AMD from cutting security corners? (Not rhetorical)
Ethics, the researchers exposing these exploits and proper implementation of the x86 ISA.
No one "allowed" Intel to cut security corners. It was an oversight and took a long time to discover and understand the impact of.
Even when people started speculating (ha!) that speculative execution could be problematic it took years before they managed to exploit it.
I hear many people continuing to say that Intel are "cutting security corners".
Are they really? I don't have an extremely deep understanding of Intel's implementation of x86 ISA, but I do know enough to say that so far we've been able to effectively mitigate almost all of these attacks with existing instructions available on the Intel CPUs. That doesn't mean that they are still not open to other variants of these attacks - but at some point you have to assume diminishing returns. Spectre is still very difficult to exploit, for example.
Perhaps this has little to do with Intel and more to do with software authors cutting corners? LFENCE and SFENCE are reasonably well documented, after all...
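To make the LFENCE point concrete, here's a hedged sketch of the classic Spectre v1 bounds-check gadget with a speculation barrier inserted after the check. The array names follow the Spectre paper's illustrative examples, not any real codebase:

```c
/* Hedged sketch of the classic Spectre v1 bounds-check gadget, with
 * the LFENCE mitigation discussed above: the fence keeps speculative
 * execution from racing past the bounds check.  Names are illustrative
 * only, borrowed from the Spectre paper's examples. */
#include <assert.h>
#include <stddef.h>

#if defined(__x86_64__) || defined(__i386__)
#include <emmintrin.h>
#define SPECULATION_BARRIER() _mm_lfence()
#else
/* Placeholder on non-x86 targets, so the sketch still compiles. */
#define SPECULATION_BARRIER() __asm__ volatile("" ::: "memory")
#endif

static unsigned char array1[16] = {1, 2, 3, 4};
static size_t array1_size = 16;
static unsigned char array2[256 * 64];

static unsigned char victim_function(size_t x) {
    if (x < array1_size) {
        SPECULATION_BARRIER(); /* no speculative load of array1[x] past here */
        return array2[array1[x] * 64];
    }
    return 0;
}
```

Without the barrier, the CPU may speculatively execute the out-of-bounds load even when `x` fails the check, leaving a secret-dependent cache footprint; the fence is cheap here but becomes expensive when a compiler has to insert it after every bounds check.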
Here's a Register article from 2007 about page table permissions being problematic. If you look around a bit, there were a ton of security researchers who talked about the problem. It seems to have been a bit of an open secret that such a thing must exist; they just hadn't found it yet.
https://www.theregister.co.uk/2007/06/28/core_2_duo_errata/
The scariest part is that many of the best security minds work for various intelligence agencies. They very likely have known about such things for a very long time.
Meltdown strikes me as an almost perfect vulnerability. It affects almost everyone. It is undetectable until exploited and once exploited, it immediately goes away until the next time. It's easy to keep secret. Most importantly, it's a one-way vulnerability. Keep your secure systems from running untrusted code and there's zero risk. Since this is standard protocol anyway for those systems, you don't have the risk of someone running across a code patch somewhere.
The only potential downside is that the juiciest targets also aren't running untrusted code (though most foreign affairs workers probably run untrusted code). The big point of interest here is information symmetry. In most cases, giving others secret information is bad. In this case, both the best and worse case situations work out well for the USA. If nobody else knows, they get free info. If everyone else does know, then everyone gets perfect information about everything. This favors the most powerful country. They can eliminate the unknowns (the only real danger). In contrast, knowing you are going to be crushed does nothing if you can't hide your own hand either. So, the best case is very good and the worst case is still acceptable.
What else would they describe it as? There are 8 cores and 16 threads, whether you have to turn off the hyperthreading feature or not is a different matter.
It just feels kind of shady to advertise the peak performance with safety features (that should be on) turned off, without mentioning it. They should at least include a disclaimer.
Putting a disclaimer on it would be like voluntarily shooting themselves in the foot.
Most consumers wouldn't care about this anyway, and the risk was overblown.
I think that if you turn off hyperthreading it becomes 8 core 8 threads.
Yeah you are correct.
Besides cloud providers running VMs/containers, is Spectre/Meltdown really such an issue for day-to-day consumers?
Yes. I think this is a common misconception.
These attacks work fine in the browser, as researchers continue to show. They allow complete bypass of any native app sandboxing layers. Surely you don't run everything on your box as root all the time.
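One concrete example of the browser-side response: after Spectre, vendors coarsened performance.now() so cache-timing differences fall below the measurable resolution. A toy sketch of the idea only; the granularity value is hypothetical, not any browser's actual setting:

```c
/* Illustrative only, not any browser's actual code: after Spectre,
 * browser vendors coarsened performance.now() so that cache-timing
 * differences fall below the measurable resolution.  The granularity
 * value here is hypothetical. */
#include <assert.h>

static const long granularity_us = 100; /* hypothetical 100us quantum */

/* Round a raw microsecond timestamp down to the coarsened grid. */
static long coarsen_timestamp(double raw_us) {
    return ((long)raw_us / granularity_us) * granularity_us;
}
```

The catch, as later comments note, is that attackers found ways to build their own high-resolution clocks (e.g. a counter incremented by a worker thread), which is part of why vendors moved to site isolation instead of relying on timer coarsening.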
Can you link to a hosted example of one of these? That would convince people nicely. Someone linked one in a similar discussion yesterday, but it no longer works in currently patched browsers.
I'll keep HT on because I use NoScript and I encourage others to do the same.
Meh. It doesn't require JavaScript for your computer to run logic described by others. Browsers are such complex machines that it wouldn't surprise me if you could, for example, craft a malicious SVG that would bypass that, or a Turing-complete CSS file that triggers a vulnerability...
By the way, does NoScript actually block in-SVG JavaScript?
In-SVG JavaScript only gets executed when viewing an SVG document (and maybe an <embed>ded SVG document), not when viewing an SVG in an img tag.
Sure, but we all take risks every day. If you're worrying about Turing-complete CSS files exploiting Spectre and Meltdown, then you probably don't leave the house much.
We know that attackers have reason to exploit literally all compute resources they can find a way to access. This is more like worrying about leaving the house during an epidemic of exploding ebola-infected pigeons — if you can do something about it, you should.
Attackers also have to consider cost/benefit analysis when evaluating methods of attack. Claims that "CSS is Turing complete" require a user to act as a "crank" [0], so there are lower-hanging fruit out there than trying to program complicated logic which can utilize the Meltdown / Spectre exploits in CSS.
[0] https://news.ycombinator.com/item?id=10734966
Yes and no. It is possible to exploit Meltdown / Spectre via Javascript. From [0]:
> This can happen when one has opened the other using window.open, or <a href="..." target="_blank">, or iframes. If a website contains user-specific data, there is a chance that another site could use these new vulnerabilities to read that user data.
Most browsers have pushed patches which eliminate known mechanisms of leveraging the exploit, but the pathway cannot be completely mitigated by browser patches, I believe.
[0] https://developers.google.com/web/updates/2018/02/meltdown-s...
Given that most consumers run JavaScript unconditionally, yes. Browser vendors have basically declared Spectre/Meltdown/MDS unmitigatable at the browser level.
Can you link a source please?
https://v8.dev/blog/spectre
> Second, the increasingly complicated mitigations that we designed and implemented carried significant complexity, which is technical debt and might actually increase the attack surface, and performance overheads. Third, testing and maintaining mitigations for microarchitectural leaks is even trickier than designing gadgets themselves, since it’s hard to be sure the mitigations continue working as designed. At least once, important mitigations were effectively undone by later compiler optimizations. Fourth, we found that effective mitigation of some variants of Spectre, particularly variant 4, to be simply infeasible in software, even after a heroic effort by our partners at Apple to combat the problem in their JIT compiler.
> Our research reached the conclusion that, in principle, untrusted code can read a process’s entire address space using Spectre and side channels. Software mitigations reduce the effectiveness of many potential gadgets, but are not efficient or comprehensive.
The “some variants” include the MDS vulnerabilities, which the author was aware of but which were not yet out of embargo at the time of publication.
But they do not claim that hardware mitigations are necessary. They claim that they need to change browser architecture a little bit:
> The only effective mitigation is to move sensitive data out of the process’s address space. Thankfully, Chrome already had an effort underway for many years to separate sites into different processes to reduce the attack surface due to conventional vulnerabilities. This investment paid off, and we productionized and deployed site isolation for as many platforms as possible by May 2018.
So with improved browsers, it's still unclear why ordinary users need those performance-eating mitigations when browser vendors have managed to solve the problem themselves.
> But they do not claim that hardware mitigations are necessary. They claim that they need to change browser architecture a little bit
For Spectre, that’s enough; for the Meltdown-class Intel permission exploit vectors (aka Meltdown, Fallout, ZombieLoad, RIDL, Store-to-Leak Forwarding, and the other MDS vulnerabilities), all of the same infeasibility of browser mitigations applies, but data also leaks across process boundaries, so process isolation does jack shit to protect you without lower-level mitigations.
There’s nothing whatsoever browsers can do to prevent this. Process memory read isolation effectively doesn’t exist in the presence of unpatched Intel MDS vulnerabilities.
> So with improved browsers it's still unclear why ordinary users need those performance-eating mitigations, when browser vendors managed to solve that problem themselves.
The only unclarity is in your misunderstanding of how the MDS vulnerabilities on Intel relate to Spectre vulnerabilities in general.
These vulnerabilities can jump process address space boundaries. It's a lot harder but can be done, look at the original Spectre paper: https://spectreattack.com/spectre.pdf
I don't get it. If it has an all-core frequency of 5GHz, doesn't that mean they've left some single-core boost on the table? Or have they hit some other limit and this part is basically free of thermal limits?
I’d guess that the upper limit is stability rather than thermal capacity at that point
What does stability mean, a bit more exactly? Just curious
The switching speed of silicon also has an upper limit. When you drive silicon faster, it starts to make mistakes, i.e. not all electrons go where you want them to go. This causes soft faults, and the CPU re-executes the affected part at best, or gives you a BSOD, oops, or panic at worst.
This upper limit depends on process, layout, power design and power limits of the CPU.
Last but not least, not all CPUs are created equal on a wafer. I come from an era where we hunted plain-blue AMD Athlon dies for their higher overclocking potential, since they came from the center of the wafer and were more stable under high load/voltage/clock. I had a 2200MHz Athlon (200 x 11) which was faster than AMD's own 2200MHz Athlons, since AMD wasn't offering a 200MHz-bus version of their 2200MHz parts.
>gives you a BSOD, oops or panic at worst.
That's not too bad. I'd be much more worried about code silently doing the wrong thing.
That happens too. Prime95 and other stability tests are used and can check when wrong results are returned. There's often a sliver of frequencies where a system under load begins performing floating-point calculations incorrectly while other, simpler systems in the CPU are still functioning correctly.
The BSOD, oops, or panic is a symptom of widespread errors.
That's also possible. That's why overclockers run Prime95 to test their CPU stability.
Also, a BSOD or panic in the wrong time can cause massive data loss. That's beyond bad sometimes.
Edit: I mixed up Prime95 with SuperPi. Thanks, AaronFriel.
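The idea behind these stress tests can be sketched in a few lines (a toy illustration, not a real stress test — real tools like Prime95 hammer the FPU/AVX units with known-answer transforms): run a deterministic floating-point workload repeatedly and flag any run that disagrees with the reference result.

```python
import math

def fp_workload(seed: float, iters: int = 50_000) -> float:
    """Deterministic floating-point loop; on stable hardware this
    must return a bit-identical result on every run."""
    x = seed
    for _ in range(iters):
        x = math.fmod(x * 1.0000001 + math.sin(x), 1e6)
    return x

def check_stability(runs: int = 3) -> bool:
    """Prime95-style principle: identical inputs must give identical
    outputs; any mismatch means the CPU computed incorrectly under load."""
    reference = fp_workload(1.2345)
    return all(fp_workload(1.2345) == reference for _ in range(runs))
```

On a marginal overclock, `check_stability` would eventually return False long before the machine actually panics.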
Why do you think the panic happens?
It's because the code does the wrong thing, and that happens silently... until it hits some pointers or kernel structures and stops being silent.
Not always. CPUs have extensive "machine check" capabilities. Some of these MCE events are recoverable, some not.
If the processor fires an unrecoverable MCE event, you're frozen with a nice, explanatory panic.
Why are chips in the centre plain blue and better?
Plain blue is not a reason but a result.
The center of a silicon wafer is said to have higher quality (due to lithography, physical stresses, and other process effects whose exact details I don't know), and the result is a die with more homogeneous properties and color reflection. Since the die's tolerances were tighter around the center of the wafer, the performance of the resulting chip was better.
AMD was also sub-binning most of these parts (they were sold as Athlon 1700 @ 1433MHz regardless of their performance level), so people were buying these unlocked sleepers and overclocking them to insane levels without voltage increases.
However, today the process is so different and node sizes are so small that the dies' color differences aren't perceivable anyway.
In the old days, this was more of an obscure bit of collective wisdom that emerged from the trial-and-error days of the overclocking wars.
Disclaimer: this is an oversimplification and I only have a lay person's understanding.
CPUs are basically huge networks of transistors (on/off switches). They're sort of like tiny printed circuit boards; lots of individual 'parts' are connected by 'wires' on top of a silicon wafer.
The distances are minuscule, but the lengths of wires running between transistors still vary. So when a transistor switches between 'off' and 'on', the signal takes a different amount of time to reach its destination depending on which transistors are being switched. The signal can also feed into multiple other transistors, which it will reach at different times.
While signals are busy propagating through the circuit, the CPU's state will be unstable, including the 'output' value of its current instruction. The time that it takes for any given instruction to stabilize is tough to predict because it depends on a lot of things, including how far apart the transistors are and how many of them the signal needs to pass through.
The CPU's "tick rate" in Hertz relates to how quickly it "latches" its internal state. Between "ticks", the CPU waits for all of the signals to stabilize. If they haven't stabilized when the clock strikes, bad things can happen.
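That timing constraint is just arithmetic (my sketch, not from any datasheet): the clock period is the budget the slowest "critical path" of gates must fit inside.

```python
def cycle_time_ps(freq_ghz: float) -> float:
    """Clock period in picoseconds: every signal on the critical path
    must settle within this window, or the latched state is garbage."""
    return 1000.0 / freq_ghz

def max_stable_freq_ghz(critical_path_ps: float) -> float:
    """Highest clock at which the slowest path still settles in time."""
    return 1000.0 / critical_path_ps

print(cycle_time_ps(5.0))          # 200.0 ps per cycle at 5 GHz
print(max_stable_freq_ghz(250.0))  # a 250 ps critical path caps you at 4 GHz
```

Raising voltage makes transistors switch faster (shrinking the critical path delay), which is why overclocking usually goes hand in hand with a voltage bump.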
I'm not sure how the 'quality' of an individual chip can make it more amenable to overclocking, though; maybe they run into fewer issues from thermal stress? Maybe the tiny 'wires' between the transistors have slightly less resistance? I dunno, someone help me out?
I think the inconsistencies between samples of the same model of chip are much less about the interconnect wires than about the transistors themselves, which vary in their individual switching-speed vs. voltage curves. There's not really much variation in interconnect length between a given two gates when both chips are made from the same masks. But especially at the lower (finer-pitch) layers of metal interconnect, variations in resistance and capacitance can affect how things operate.
The CPU not losing its marbles when overclocked.
https://devblogs.microsoft.com/oldnewthing/20050412-47/?p=35...
Knowing that nothing can reduce performance from its peak can be more valuable than an extra 4%.
Yeah it's weird. Why not set the single-core turbo higher?
Marketing.
I believe Intel processors can't boost all cores. At least some tests I have done on my notebook processor (i7-8550U, 4c/8t) with `stress -c n`, where n is the number of cores, show that for n > 1 the processor doesn't reach 4GHz, only about 3.7GHz, while the package temps are still around 70 °C. Only a single core under full load reaches 4GHz before throttling.
This is entirely configurable. They generally don't do all core == max boost for power consumption reasons (and I assume yield would be pretty low on chips that can do this)
Desktop chips also generally don't have any AVX offset, which is almost always required for 5 GHz all core.
Yes, there's a table in the firmware that can tell you the maximum speed with N active cores. This one though has the same speed for all values of N.
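On Linux you can read that table yourself with `rdmsr 0x1AD` from msr-tools and decode it; here's a sketch of the decode assuming the common client-CPU byte layout of MSR_TURBO_RATIO_LIMIT (byte N = max ratio with N+1 active cores), which I haven't verified against every SKU.

```python
def decode_turbo_ratio_limit(msr_value: int, bclk_mhz: int = 100) -> list:
    """Decode MSR_TURBO_RATIO_LIMIT (0x1AD), client layout:
    byte N holds the max turbo ratio (in bclk multiples) with
    N+1 active cores."""
    return [((msr_value >> (8 * n)) & 0xFF) * bclk_mhz for n in range(8)]

# A hypothetical 9900KS-style part: ratio 0x32 (50 -> 5.0 GHz) in every
# byte, i.e. the same 5000 MHz limit no matter how many cores are active.
print(decode_turbo_ratio_limit(0x3232323232323232))

# A hypothetical part with 5.0 GHz for 1-2 cores and 4.7 GHz beyond:
print(decode_turbo_ratio_limit(0x2F2F2F2F2F2F3232)[:3])
```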
> I believe Intel processors can't boost all cores.
And that's exactly shereadsthenews's point: they can't boost all cores, and they are not boosting any core beyond the all-core speed if it's truly a CPU that runs at 5 GHz all the time.
Turbo Boost can certainly apply to all cores- the limits you hit are TDP and time based, there, not strictly thermal.
So, for example, my old laptop CPU would clock itself up to 2.7GHz on all cores... well, okay, it was a dual core, so that's not saying much, but still. But it'd only maintain that boost for a few seconds; under sustained load it dropped down to 2.5. This wasn't because of thermals, but rather because 2.7GHz was a Turbo Boost frequency, and once the PL2 timer runs out...
And to explain why they don't have, say, one core boost to 5.1GHz...well, let's see what siliconlottery says.
> As of 3/16/19, the top 38% of tested 9900Ks were able to hit 5.0GHz or greater.
> As of 3/16/19, the top 8% of tested 9900Ks were able to hit 5.1GHz or greater.
So, Intel'd cut their yield by more than a factor of four if they only let parts that could hit 5.1 into this bin. For a 2% single-core performance boost...
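The "factor of four" is just division on the Silicon Lottery bin rates quoted above (the rates are their published figures; the arithmetic is mine):

```python
# Silicon Lottery's 3/16/19 bin rates for tested 9900Ks
hit_5_0 = 0.38   # fraction reaching 5.0 GHz or greater
hit_5_1 = 0.08   # fraction reaching 5.1 GHz or greater

# Requiring 5.1 instead of 5.0 shrinks the pool of sellable parts by:
yield_penalty = hit_5_0 / hit_5_1
print(round(yield_penalty, 2))  # 4.75
```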
I think those numbers are for chips that can do 5.1 GHz all core though, which is probably a lot less than 5.1 GHz single core.
As far as I know, K-series parts don't support binning of individual cores- if you have one bad core that'll only hit 5.0, 1-core turbo to 5.1 will still result in the OS scheduler periodically picking that core to use, it clocking up to 5.1, and problems resulting.
Might be wrong, though.
Intel's Turbo Boost 3.0 [1] was their attempt to take advantage of the fact that some cores on a chip can clock higher than others. It does not work well in practice, because it requires too much collaboration with motherboard and OS vendors. This feature is not available on their desktop platform, which the i9-9900KS uses.
[1] https://www.intel.com/content/www/us/en/architecture-and-tec...
XTU allows setting different turbo multipliers for 1-4 active cores (but the difference from the nominal clock speed typically gets smaller as more cores become active).
Some of the 9900K chips are able to push 5.2GHz; this is not a proper answer to AMD's new lineup.
I have one constantly pushing 5.1GHz (disabled SpeedStep etc.; been stable for months). I bought it because there was no comparable AMD CPU, and as far as I know AMD is still behind. Why do you think it is not a proper answer?
I think by AMD's new lineup they're referring to Ryzen 3000 series, which isn't out yet. If the rumors are true, the top models come with 12 to 16 cores, higher IPC and higher clocks than the current Zens, pushing 5GHz boost.
A current 9900K might be some 20%-30% faster than a current Zen, but it will no longer be so with the new lineup.
Meanwhile mitigations are eating up Intel's performance advantage..
I saw some leaked benchmarks today and it doesn't look great. I really hope AMD will kick Intel (I have a Ryzen 1800X too and will buy the new 16-core one), but I need the fastest possible single-core performance, and by the look of it, at best it will be the same. AMD also has other problems, like huge DPC latency, which makes it difficult to use for real-time computations. If Ryzen happens to have the same single-core speed as a 5GHz 9900K and packs 16 cores each capable of delivering it, I'll swap my Intel in no time.
Still marketing SMT. Interesting move.
Desperate, is what they are.
Why 5 Hz all the time? I'd love to have such an extremely powerful CPU, but I'd actually appreciate it if it could downclock itself automatically and stay as cold as possible whenever I don't need its full power. Sometimes I run heavy computations and having 8 5Hz cores sounds great, but most of the time I just read or write something, so even 1000 Hz sounds like overkill.
Base frequency isn't the same as lowest frequency (ya... it's weird). Base frequency is vaguely related to the idea that if you had all cores running at the base frequency, you would run just about at the system's TDP (it's really a complete mess, this is a simplification). Your system can still drop CPU cores down to 400-800MHz in low energy states.
What this announcement is basically saying is that Intel now has a 8 core chip where all 8 cores can run at 5GHz indefinitely "out of the box".
According to siliconlottery.com, 38% of 9900Ks are overclockable to 5 GHz. Probably they just decided to select the good chips from the 9900K line at the factory, so these are not exactly new chips.
That's mentioned in the article: "The new Core i9-9900KS uses the same silicon currently in the i9-9900K, but selectively binned in order to achieve 5.0 GHz on every core, all of the time."
Not indefinitely, just at the same time, because thermal throttling will happen after some time. It just means all cores will be able to go to 5GHz at the same time; nothing about all being able to stay at 5GHz.
Did they say that? Because there are people overclocking current chips who, with good cooling, have no trouble staying at 5GHz on all cores without throttling.
Since it's turbo, I thought it means "as long as the CPU likes it", rather than indefinitely. Or did they change how turbo works and now it's "it will run on turbo frequencies as long as there is enough load and the CPU is not temperature throttled"?
See the original comments under the news article.
> Base frequency is what you get when the Tau moving-window time has expired. Most modern high-end motherboards set it to an effectively unlimited time.
https://www.anandtech.com/show/13544/why-intel-processors-dr...
> To simplify, there are three main numbers to be aware of. Intel calls these numbers PL1 (power level 1), PL2 (power level 2), and T (or tau).
> PL1 is the effective long-term expected steady state power consumption of a processor. [...] PL2 is the short-term maximum power draw for a processor. [...] Tau is a timing variable. It dictates how long a processor should stay in PL2 mode before hitting a PL1 mode.
> This is where it gets really stupid: the motherboard vendors got involved, because PL1, PL2 and Tau are configurable in firmware. [...] This lets them set PL2 to 4096W and Tau to something very large, such as 65535, or -1 (infinity, depending on the BIOS setup). This means the CPU will run in its turbo modes all day and all week, just as long as it doesn’t hit thermal limits.
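The PL1/PL2/tau scheme described above can be sketched as a toy model (real hardware tracks an exponentially weighted moving average of power rather than a hard cutoff, and the wattages here are illustrative, not the 9900KS spec):

```python
def allowed_power_w(t_since_load_s: float, pl1: float = 127.0,
                    pl2: float = 159.0, tau_s: float = 28.0) -> float:
    """Simplified Intel power-limit model: the package may draw up to
    PL2 for roughly tau seconds after load begins, then falls back to
    the long-term limit PL1."""
    return pl2 if t_since_load_s < tau_s else pl1

def unlimited(t_since_load_s: float) -> float:
    """A board vendor's 'unlimited' preset: absurd PL2, infinite tau,
    so the chip turbos until it hits thermal limits."""
    return allowed_power_w(t_since_load_s, pl1=4096.0, pl2=4096.0,
                           tau_s=float("inf"))
```

With stock limits a burst gets the high budget and sustained load drops to PL1; with the vendor preset the "short-term" budget never expires, which is exactly the "turbo all day and all week" behavior the article describes.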
It doesn't run at 5GHz all the time. 5GHz is its all-core turbo. It simply means that all cores will run at 5GHz under full load.
Yes, the article title is misleading: it actually says "all the time".
I don't think that's how it works. The CPU can adjust its frequency through a much larger range; the base clock is not the minimum frequency it will run at all the time.
(I agree, it's confusing.)
https://ianhowson.com/blog/cpu-clock-rates-are-meaningless-n...
Open Task Manager or Intel Power Gadget (on Mac) and watch your CPU frequency; it already downclocks itself when it's not under load. Usually my 4770 idles around 1.2GHz, and I believe some motherboards let you set a minimum clock even lower than that in the BIOS.
> 5 Hz
> 1000 Hz
You must have meant to say 5 GHz and 1000 MHz...
Why would you need 1GHz for writing, e.g. in vi/Emacs? 1kHz I'm not sure, but 1MHz should be enough.
> Emacs [...] 1MHz should be enough.
vi? Probably. But not Emacs, although I guess you could run something like Linus's Micro-emacs. https://git.kernel.org/pub/scm/editors/uemacs/uemacs.git/
In any case, a 10 MHz 68030 should be enough for full Emacs; it's commonly seen as the lowest hardware requirement for a usable workstation Unix.
Most 9900K can hit 5.0 all core OC for non-AVX loads.
With AVX 4.8-4.9 is still doable without hitting the top 30% of CPUs in the CPU lottery.
My 9900K does 5.1 without any AVX offset, but this is a top 10-20% CPU if the figures from Silicon Lottery are to be believed.
So it’s not that surprising Intel can simply bin CPUs to do 5.0 at near stock voltages since many resellers have been doing just that.
What does that spell for regular 9900K then?
Nothing. If you don’t care about AVX workloads, you can get a 9900K and set it to 5.0GHz with an AVX offset of 2 pretty much out of the box.
Unless the KS guarantees a 5.3-5.4 all-core OC, I don't see it being anything more than a PR release anyhow.
That said, I’m not even sure the 9900KS doesn’t come with an AVX offset to begin with. Most higher-end motherboards come with a 9900K 5.0 preset anyhow, which sets the voltage to about 1.3-1.325V and an AVX offset of 3; it just yells at you that you need a good cooling solution and that this is not guaranteed to work.
I am likely the odd one out here, but wouldn't the capability to turbo a single core to, let's say, 5.5 GHz or higher as factory stock be more useful in real life than a one-or-all-eight-core turbo of 5 GHz instead of 4.7? There are still enough single-core/single-thread apps out there that could benefit from faster single-core performance, and this newest and hottest (also in temperature) i9 cannot go faster on a single core than the 9900K.
I have a 9900K binned for 5.1GHz all-core. Absolutely brilliant CPU. I wish there were a 16-core version though.
What's the power usage on that? I could imagine the heat from it keeping your home warm on a cold winter's night.
I didn't measure. It is not too hot. I am typically getting 50-65 C under my workloads.
I forgot to add: it's been delidded. Crazy, I know.
Woah, what do you use it for?
Ableton :-)
Is it going to be that much faster? It's 300MHz faster than the current top one, according to the article.
And the speed of my overclocked 9700K. If anything, this is just a "Hey, some people can cool an 8-core CPU at 5GHz, let's make a new bin for the 40% of CPUs that can maintain that" release.
I wonder how much of that power will be eaten by Spectre et al. mitigations.
From an interesting historical perspective, I mark the end of Moore's Law in 2001 with Intel's prediction of a 5GHz "Netburst" in 2005, which could not keep itself from melting. Somewhere I have a marketing road map of 5GHz in 2005, 10GHz in 2010. It was aspirational of course, but seeing what had to happen between then and now in order to get a chip that runs at 5GHz all the time based on their architecture is illuminating of the challenges they face.
Amplifying the other current replies, what you're bemoaning and what slagged the Netburst "marchitecture" is the end of MOSFET Dennard Scaling: https://en.wikipedia.org/wiki/Dennard_scaling
Moore's Law is "the number of lowest cost transistors doubles at X interval", and 193nm UV immersion lithography limits have been hitting it hard lately (see https://en.wikipedia.org/wiki/Multiple_patterning). But chip manufacturing equipment makers haven't run out of tricks quite yet.
That's an exaggeration, IMO. Moore's law didn't really start losing steam until 2014-2015.
Perhaps, and perhaps it is just a difference in how we internalize what "Moore's Law" means. Granted, when Gordon postulated it, he was strictly talking about numbers of transistors, and the implication was that transistor counts were a leading indicator of performance.
Since I lean more on the 'performance' side of things, that was the end of 'single thread performance scaling', or put another way, that was when the performance of a single core stopped doubling every 18 months or so. And everyone switched over to dealing with Amdahl's law instead.
If you look at transistor count, yes. But single-core performance has stagnated since ~2003, that’s when we hit the 3Ghz mark. Progress since then has been a lot slower.
Moore's law is specifically about transistor count/density only.
GHz are basically meaningless. The 2GHz CPU in my laptop is an order of magnitude faster than anything 3GHz from 2003.
True for practical uses, most of the performance increase comes from more bandwidth and parallelism. But it's a mere 2-4x increase for single-thread performance, over 15+ years: https://preshing.com/images/integer-perf.png
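As a back-of-the-envelope check (my arithmetic, not from the linked chart): a ~4x single-thread gain over 15 years works out to roughly 10% per year, versus ~59% per year for Moore-style doubling every 18 months.

```python
def cagr(total_gain: float, years: float) -> float:
    """Compound annual growth rate implied by a total speedup."""
    return total_gain ** (1.0 / years) - 1.0

# ~4x single-thread speedup over ~15 years vs. doubling every 1.5 years
print(round(cagr(4.0, 15), 3))   # 0.097 -> about 10% per year
print(round(cagr(2.0, 1.5), 3))  # 0.587 -> about 59% per year
```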
IPC's gone up quite a bit though, right?
So as I understand it, this isn't new silicon; it's just binning of the existing 9900K. If you wanted an overclocked 9900K you would have just gone to Ciara, who obviously bin, overclock, and verify their systems anyway. So now Ciara go to Intel and buy a 9900KS, instead of previously buying five 9900Ks from Intel and finding the one that would've been as fast as the 9900KS anyway.
> 8-core processor that will run at 5.0 GHz during single core workloads and multi-core workloads."
Under full AVX workloads using Intel stock cooler?
Highly doubt it.
Is that a typo on the table or do those CPUs really cost the same whether they have an integrated GPU or not?
Those numbers are Intel's "Recommended Customer Price", not actual retail prices. The -F parts really are listed with the same RCP as the parts with GPUs enabled. No, it doesn't make much sense, but Intel has been experiencing a CPU manufacturing crunch, and the desktop market gets the short end of the stick when that happens.
There's no doubt about it: this will be a beast of a gaming chip. It will also likely cost an arm and a leg (it has to; it's binned silicon, meaning it's supply-constrained) and likely have a really high TDP.
Curious what happens when you call into the vector (AVX) hardware.
So now Zombieland et al can be exploited even faster!
Now you can run the speculation mitigations much more quickly!
It's nice to see AMD competing well enough for intel to actually push what their process can do. Finally!
Well, this also means that attacks exploiting speculation will run much faster!
I hope it includes a horde of running zombies.
just in time for AMD's Computex keynote... nicely played
no i7-9700KS? disappointing
https://boards.4channel.org/g/thread/71122779#p71122779
hyperthreading?