ksec 5 years ago

I used to be pro-AnandTech and consider them one of the best sources online for hardware news. But the fact that they have yet to write a single post, big or small, about Intel's Zombieload and its implications on performance worries me a bit.

Then there are the usual "Intel" benchmarks [1] on the GPU: trying to suggest the two CPUs were both running at a 25W TDP to give a "fair" comparison, without mentioning that the Ice Lake-U CPU was running with 50% more memory bandwidth than the AMD Ryzen. And we know graphics benchmarks depend a lot on memory bandwidth. The memory used was mentioned on Tom's or other sites but not on AnandTech. (Although none of them mentioned the bandwidth difference; it was up to the reader to work that out.)

Anyway, none of these consumer CPU upgrades interest me anymore (although any improvement to the iGPU would be great). I am eagerly waiting for a 2S, 128-core EPYC 2 server, or one on AWS, to play around with.

[1] https://www.anandtech.com/show/14405/intel-teases-ice-lake-i...

Edit: And the lesson here: never trust a single news source. Always keep a few options open and fact-check yourself. (If you have the time.)

  • spamizbad 5 years ago

    It's not just Anandtech.

    I feel like the entire "PC enthusiast" review space has dropped the ball on hardware vulnerabilities. Reevaluating performance between microcode and OS patches is an afterthought, and when a new CPU hits the market the numbers are presented without the obvious disclaimer that these performance gains may evaporate within months.

    Some even perpetuate the "only relevant to the datacenter" myth despite the fact that security researchers have shown they can exploit these vulnerabilities with JavaScript in the browser.

    • keldaris 5 years ago

      I'm glad the PC enthusiast space hasn't succumbed to the wild hysteria caused elsewhere by the side channel issues. It's tiresome to see every new variation people come up with reported as a new apocalypse all over again. Half the reason I still pay attention is to find if there's a new Linux boot switch I need to turn on to disable some new performance regression.

      Even though I'm personally in the fortunate position not to have any reasonable exposure to these vulnerabilities, I wouldn't be particularly worried even if this wasn't the case. It's been well over a year since Meltdown and Spectre came out and there still hasn't been a single case of anyone successfully using these vulnerabilities to productive ends in the wild that I know of. Obviously, cloud computing vendors need to pay attention and there are legitimate business concerns that are affected by this, but insofar as personal computing goes? If people persist in the ridiculous notion that constantly running completely arbitrary code in naive sandboxes is a great idea, I imagine there will eventually be issues, but so far the issue seems to be vastly overblown in the popular media.

      • spamizbad 5 years ago

        I don't see how the act of taking and publishing measurements after microcode and OS updates constitutes hysteria. It's my understanding that, at least on Windows, you pretty much have to opt out of these patches or manually install an update that disables the mitigations.

        I fully support a user's right to bypass these mitigations, and you're correct that your typical desktop user, at least today, isn't a target. But it seems odd that websites dedicated to performance computing have a blind spot about how automatically installed updates will impact performance.
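
        A minimal sketch of that opt-out path on Windows (assuming the registry values from Microsoft's published Spectre/Meltdown guidance; run as administrator, and double-check the values against the current docs before relying on them):

          import winreg

          # Documented override values: 3/3 turns off the Spectre v2 and Meltdown mitigations.
          KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

          with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
              winreg.SetValueEx(key, "FeatureSettingsOverride", 0, winreg.REG_DWORD, 3)
              winreg.SetValueEx(key, "FeatureSettingsOverrideMask", 0, winreg.REG_DWORD, 3)
          # A reboot is required before the change takes effect.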

        • wtallis 5 years ago

          > I don't see how the act of taking and publishing measurements after microcode and OS updates constitutes hysteria.

          It's quite easy to sensationalize benchmark results even unintentionally. The average reader of PC hardware review sites is totally willing to latch on to a microbenchmark result that shows a 20% performance drop and claim that it's disastrous for performance, even if the actual added delay to real-world operations is a fraction of a millisecond and thus will almost never cause the result of your user input to be delayed by even a single frame. There's a certain degree of irresponsibility in publishing results that you know will be taken out of context by almost everyone who reads them. I've discontinued benchmarks in the past because it was frustrating seeing readers pretend like they show a meaningful difference between products when the reader's workload never comes close to the workload represented by that benchmark.

          • chmod775 5 years ago

            Did you just make an argument against PC review sites publishing any benchmarks at all because readers can't be trusted to interpret them correctly?

            That would kind of defeat the point.

            If they published a benchmark in the past and don't bother to correct the benchmark when it becomes out of sync with reality - that is just bad journalism.

            Nobody is saying you should go and cherry-pick benchmarks after the mitigations hit, but you should definitely re-check the benchmarks you've already published at least once.

            These sites can and should expect an informed reader.

            In any case: Leaving wrong information up uncontested helps neither "experts" nor laymen.

            • wtallis 5 years ago

              > Did you just make an argument against PC review sites publishing any benchmarks at all because readers can't be trusted to interpret them correctly?

              No, and you should know better.

              > If they published a benchmark in the past and don't bother to correct the benchmark when it becomes out of sync with reality - that is just bad journalism.

              Proper practice is to publish the full test conditions, including software, firmware and nowadays also microcode versions. The availability of newer versions does not make older results any less true.

              At AnandTech, we make all reasonable attempts to keep a thorough database of older hardware tested on newer benchmark suites, but the time this requires means we cannot re-test everything multiple times per year. I have over 200 SSDs and counting in the collection, and that test suite is over 30 hours long. The collection of CPUs is much larger. GPU reviews typically have fewer back-catalog hardware entries because updating to new drivers a few times a year is often unavoidable. You can browse the results for current and previous test suites at https://www.anandtech.com/bench/

              > These sites can and should expect an informed reader.

              You don't read the comments as often as we do.

          • sundvor 5 years ago

            In the case of my 6850K, my overclock was silently killed by the Windows 10 microcode update, which locked the multiplier to 38x.

            This behaviour angered me no end. I wasted significant time looking for workarounds, and deleting the microprocessor driver was the only way. I wonder what fixes I've now nixed, but there was seriously no need for Intel to kill my overclock.

            On a couple of occasions Intel have pushed updates which have reset my fix. Dear $deity .. my next PC will be AMD for sure.

      • dejaime 5 years ago

        "Wild hysteria", what do you mean? Experts seem to be far from hysterical, and go for more technical language, benchmarks and all. And the masses who usually go hysterical don't even know their CPUs are going to take a 20% performance hit with the next OS update, and probably won't even realize it.

        • keldaris 5 years ago

          The conversation was about the (nominally technical as well as more mainstream) press, not the experts. My remark regarding "wild hysteria" was made in that context. Experts and competent users will do the same thing they always do - evaluate any and all mitigations in the context of the threat models relevant to their use cases and act accordingly. Whether depriving the mass of less technically inclined users of the performance they are used to, with all the implications that entails (including for energy efficiency and other externalities), is a wise decision only time will tell.

          • coldtea 5 years ago

            >My remark regarding "wild hysteria" was made in that context.

            Considering we are referring to attacks that can bypass your PC's security, "prudence" is a better word than hysteria.

            Yes, if they are left alone, it is the "end of the world".

            They can be used to make any modern OS and browser as full of holes as Windows 98.

            • keldaris 5 years ago

              > Considering we are referring to attacks that can bypass your PC's security, "prudence" is a better word than hysteria.

              That statement can be made about any vulnerability whatsoever. The merit of any mitigation can only be determined by a cost/benefit analysis that takes into account the potential impact of the vulnerability as well as the very real costs of mitigating it.

              > Yes, if they are left alone, it is the "end of the world".

              No offense, but this is exactly why the word "hysteria" seems far more appropriate than "prudence". Not a single one of these vulnerabilities has been used to cause any measurable damage anywhere that we know of, whereas the mitigations deployed have significant costs that everyone must pay. Despite this, emotional "the sky is falling" type pronouncements are far more common in the media - even the ostensibly technical press - than attempts to rationally weigh the costs and benefits of any particular approach to the problem.

              • coldtea 5 years ago

                >Not a single one of these vulnerabilities has been used to cause any measurable damage anywhere that we know of, whereas the mitigations deployed have significant costs that everyone must pay.

                That's like saying: "nobody was drowned that we know of, whereas there was a significant cost to building the dam that everyone paid". (And also not dissimilar to arguments about doing no major industry/lifestyle changes regarding climate change).

                It's exactly because there were mitigations relatively quickly deployed that we didn't have a "hack em all" exploit doing the rounds in hundreds of millions of devices. The difficulty of exploiting also gave some leeway to deploying those mitigations.

                • keldaris 5 years ago

                  > That's like saying: "nobody was drowned that we know of, whereas there was a significant cost to building the dam that everyone paid". (And also not dissimilar to arguments about doing no major industry/lifestyle changes regarding climate change).

                  It is very dissimilar indeed - the sentence you quoted does not constitute an argument by itself. It is an observation regarding the present state of affairs (which you have not disputed), which to me indicates a need to take a breath and do a reasoned cost/benefit analysis as opposed to the hysterical "this must be fixed at any cost, externalities be damned" mindset that is fairly common in many circles.

                  If you really want a climate change analogy, though, consider this - however many mitigating workarounds you invent, as long as speculative execution exists there will always be side channel attacks, and eventually some of them will probably succeed to some extent. Perhaps, as you noted, some major industry/lifestyle changes are indeed in order - people could stop living in the delusion that a perfect sandbox is possible and realize that arbitrary code execution will always entail risks. Rather than turning every website into a potential security risk, perhaps it is our approach to software (rather than hardware) that needs re-evaluation.

                • wtallis 5 years ago

                  > The difficulty of exploiting also gave some leeway to deploying those mitigations.

                  That's putting it lightly. Exploiting Spectre to get private data is difficult. Turning that into a privilege escalation is exponentially harder, so any "hack em all" exploit on hundreds of millions of devices would have needed an entirely unrelated mechanism for spreading.

      • def- 5 years ago

        > Half the reason I still pay attention is to find if there's a new Linux boot switch I need to turn on to disable some new performance regression.

        No need: if you really want to disable all mitigations, including future ones, use mitigations=off.

        https://www.phoronix.com/scan.php?page=news_item&px=Spectre-...
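
        And if you just want to see what the kernel currently reports as mitigated on a given box, a quick sketch reading the sysfs files the kernel has exposed since the Meltdown-era releases:

          from pathlib import Path

          # One file per known vulnerability, e.g. "mds: Vulnerable; SMT vulnerable".
          for entry in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
              print(f"{entry.name}: {entry.read_text().strip()}")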

        • keldaris 5 years ago

          Thank you, I didn't know about this! Currently still on 5.0 for most of my machines, but this will be helpful once I move to 5.2+.

      • h1d 5 years ago

        > there still hasn't been a single case of anyone successfully using these vulnerabilities to productive ends in the wild that I know of

        Why would attackers let people know they've successfully exploited them?

    • walrus01 5 years ago

      > I feel like the entire "PC enthusiast" review space has dropped the ball on hardware vulnerabilities. Reevaluating performance between microcode and OS patches is an afterthought, and when a new CPU hits the market the numbers are presented without the obvious disclaimer that these performance gains may evaporate within months.

      If you want useful benchmarks that show the performance impact, go to Phoronix.

      "PC enthusiast" websites care about gaming performance and single-user desktop performance, and always have. This has been the same since I started following things, when the fastest CPU available was a 300 MHz Pentium II. Imagine how amazed we all were by the 1 GHz Slot A Athlon.

    • the8472 5 years ago

      Phoronix has multiple articles on the impact of those vulnerabilities, from small laptop to large server processors.

      • wtallis 5 years ago

        The perk of being able to test on Linux is that it's much easier to fully automate testing. Unfortunately, their analysis tends to be shallow.

        • consp 5 years ago

          Their audience is not mainstream but enthusiasts and professionals. They do not provide in-depth analysis since they probably know the readers can do that for themselves.

          • wtallis 5 years ago

            That may have been true at one time, but the class of people who would consider themselves to be enthusiasts has broadened well beyond the class of people who can accurately judge how their workload corresponds to the benchmark results they're reading. The recent improvements in the Linux gaming situation have been a big contributor and have undoubtedly skewed the Phoronix audience.

            • bubblethink 5 years ago

              Phoronix serves two important purposes that nobody else does. 1) It serves as a news aggregator for a lot of different open source communities. You'd think that a site called hacker news would do that, but ironically it doesn't. Most content here is either heavily web dominated, or just random drivel about being excellent in life. 2) He runs his standard battery of tests on everything. A lot of upstream projects don't seem to have that much emphasis on performance regression testing. He has uncovered a few regressions and reported them upstream on a few occasions.

              • wtallis 5 years ago

                I greatly value Phoronix for both of those things; it's a great resource for both my work and personal computer usage. But it does mean that the traditional hardware reviews themselves are something of an afterthought.

      • sitkack 5 years ago

        The lament was that the large enthusiast sites are staying hush-hush, mostly so as not to bite the hand that feeds them.

        That Phoronix cares is outside the argument.

    • Macha 5 years ago

      Most places certainly retested CPUs in the wake of Spectre/Meltdown, and at least the sources I've seen have mentioned Zombieload/MDS, though they've yet to go back and re-benchmark CPUs because they're currently either prepping for or travelling to Computex. I'd expect most of them to have videos in the next month though.

    • microwavecamera 5 years ago

      It's rumored that companies like Intel and Nvidia will retaliate against review sites and publications for bad press coverage by slowing or cutting off access to pre-release products for reviews.

    • gameswithgo 5 years ago

      pc enthusiasts aren’t really into running javascript either.

      • phpnode 5 years ago

        I've seen a number of comments like this over the last few days and I don't really get it; gamers use the internet, right? The vast, overwhelming majority of them are going to be running JavaScript programs hundreds of times per day.

        • gameswithgo 5 years ago

          sure, and maybe browsers or os need some way to say “running untrusted code please turn off performance for a sec” for that use case. until then ill just use one tab at a time or disable js before i opt in for slow.

          • Wowfunhappy 5 years ago

            I expect that the vast majority of PC gamers keep Javascript enabled in their browsers.

            • ChickeNES 5 years ago

              CS:GO also uses JS for its new GUI, and there's already been one exploit that took advantage of it.

            • kingosticks 5 years ago

              I wouldn't expect that. Is this the kind of thing that would be reported by the Steam hardware survey (if that's still a thing)?

              Edit: I took a look and it seems they don't ask/record that info. I apologise for the offence this idea seems to have caused someone.

              • zapzupnz 5 years ago

                Nobody was offended by the idea. I think people are downvoting you and gameswithgo because you appear to be applying your own personal notions to an entire segment with little to no evidence.

                'PC enthusiast' is such a blanket term to start with, so applying a blanket statement to such a group is obviously doomed to failure from the very start.

                • kingosticks 5 years ago

                  And that's exactly what the parent did that I replied to. But I do appreciate the explanation, cheers.

              • BubRoss 5 years ago

                Why would steam's hardware survey report if JavaScript is enabled in browsers?

                • kingosticks 5 years ago

                  Because it's a hardware and software survey. I have a memory of them asking some extra questions, but that was years ago. Either I remembered wrong (likely), they changed it, or they don't report it at all.

      • nullandvoid 5 years ago

        Because downloading and running binaries of applications is much safer?

        From what I've seen, web browser teams have taken the recent risks extremely seriously - I have certainly had a worse track record of infection from downloading and installing software than from visiting sites with JS running.

        • yc12340 5 years ago

          > web browser teams have taken the recent risks extremely seriously

          Not really. They didn't even properly apply band-aids.

          Chrome and Firefox disabled a number of features that allow JavaScript code to create high-precision timers. This makes exploitation slightly more difficult, but the gaping hole is still there — there is an infinite number of ways to create a high-precision timer, just not as obvious as the ones that were closed.

          Chrome has enabled Site Isolation on desktop, but hasn't done so on Android (presumably because of the associated increase in memory consumption).

          All major browsers still allow JavaScript to run in the background, create CPU threads and consume an unrestricted amount of CPU time. I don't believe any of them have mounted instruction-based defenses (lfence etc.), but I may be mistaken here.

      • CoolGuySteve 5 years ago

        It would be nice if the mitigations could be applied per-core. Then the OS could set the affinity for processes like games that really don't care.
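
        The affinity half of that exists today; a minimal sketch (Linux-only, with a hypothetical game PID) of pinning a process to a chosen set of cores:

          import os

          GAME_PID = 12345             # hypothetical PID of the game process
          FAST_CORES = {4, 5, 6, 7}    # cores you'd imagine leaving unmitigated

          os.sched_setaffinity(GAME_PID, FAST_CORES)   # restrict the process to those cores
          print(os.sched_getaffinity(GAME_PID))        # confirm the new CPU mask

        The per-core mitigation toggle is the part that doesn't exist, so today this only buys scheduling isolation rather than selective mitigations.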

      • 781 5 years ago

        Discord is huge among gamers/PC enthusiasts. Oh, and you use VS Code in your stream. I wonder what those two are written in...

        • AgentME 5 years ago

          Those are bad examples because they both run unsandboxed.

          The recent CPU vulnerabilities aren't uniquely bad for Javascript specifically. They're bad for wanting to run unprivileged code. Javascript in regular web pages just happens to be the most obvious example of sandboxed code in desktop computers.

  • twblalock 5 years ago

    Most of the people who write these articles for review sites do not really understand CPUs in depth. They know how to run benchmark suites and talk about new features mentioned in Intel's marketing material. Most of these people are writers, not engineers. If they were experts they could make a lot more money working at tech companies instead of working for review sites.

    In fact, I would bet that most professional software engineers could not correctly explain Spectre, Meltdown, and Zombieload without making at least a few mistakes.

    • skavi 5 years ago

      AnandTech is different. They cover μarch, and their writers clearly understand what they're talking about.

      • Merad 5 years ago

        Anandtech was sold (or maybe Anand left, I don’t recall exactly) about 5-6 years ago. The depth of their technical writing isn’t what it used to be.

        • saagarjha 5 years ago

          Anand left to work for Apple in 2014.

        • skavi 5 years ago

          They're still the best in the mobile review space, though that may say more about typical mobile phone reviews than about AnandTech.

      • magila 5 years ago

        Even AnandTech mostly just regurgitates what is fed to them at press events these days.

  • AmVess 5 years ago

    Anand Lal Shimpi left AnandTech 5 years ago, and the quality articles he wrote have not been replaced. It's basically a tech-lite blog now.

    • ksec 5 years ago

      So despite what I wrote, I still think they are one of the best. Both Ian and Andrei are good, with many in-depth articles; I really do miss Anand's articles though. I think the real problem is that AnandTech is short on staff.

      Anand has been working at Apple for a few years now; I wonder what he has been up to.

    • treeevor 5 years ago

      I would say Ian is really the only writer at AnandTech I care to read anymore. His articles always take time to come out compared to everyone else's, but at least they cover everything he can think of to tell you about and are well researched.

      • skavi 5 years ago

        Andrei Frumusanu is also great in the mobile space. He's the only tech reviewer to run SPEC on phones. Great overviews of the architecture of new cores too.

  • bayindirh 5 years ago

    Because Intel strictly forbids publishing benchmarks of their processors with the "hardware vulnerability mitigations" applied. Even OEMs cannot show them to their enterprise customers. You can do your own benchmarking after buying the systems. So, no money, no real-world benchmarking.

    • wtallis 5 years ago

      Intel has never told AnandTech not to benchmark their microcode updates or a third party's OS updates. They haven't threatened to stop sampling CPUs for review. I haven't seen any evidence that Intel has ever attempted to enforce such a restriction against anyone. It's just a stupid clause that one of their dumber lawyers slipped into the EULA text, and does not appear to be something they actually care about at an organizational level or expect to be able to enforce in the real world.

      • sixbrx 5 years ago

        I would say putting it in the EULA text is telling you, and, what's more important, a court of law would probably agree. I don't know why you would expect anything more.

      • craftyguy 5 years ago

        Do you work for AnandTech? If not, what are you basing these claims on? I suspect that AnandTech etc. would not publicly disclose it if a hardware manufacturer were forbidding them from benchmarking certain configurations under threat of not releasing samples.

        • wtallis 5 years ago

          Yes, I write for AnandTech (paid as an independent contractor; I'm not one of the salaried editors). I've done some Spectre/Meltdown regression testing for AnandTech, and I've never been instructed to not do such testing in the future.

          Microcode benchmarking is not the hill Intel wants to die on.

          • BubRoss 5 years ago

            Which begs the question: when will we see benchmarks of Intel's CPUs with all vulnerability mitigations on vs. AMD's CPUs?

            • wtallis 5 years ago

              As soon as the definition of "with all vulnerability mitigations on" stays stable long enough to put together a good review. Benchmarking a moving target is hell, and we don't have enough equipment or staff to do the around-the-clock regression testing that would be necessary to keep our benchmark database current with everything that's happened over the past 1.5 years.

              • Filligree 5 years ago

                People are using your benchmarks to decide what computer to buy. If they're that out of date, what should I tell them?

                • wtallis 5 years ago

                  End-user perceived performance is usually not affected enough to meaningfully change the ranking of products. If a chip goes from being 5% faster to 3% slower when mitigations are applied, you'll never notice that without busting out a stopwatch and digging for a reason to be disappointed. Remember, measurable performance differences aren't always noticeable performance differences, especially without a side by side comparison.

                  And if two competing processors are close enough in performance for these mitigations to change which one comes out on top of benchmark charts, then other factors like price, power consumption and IO capabilities are probably a much bigger deal at that point than minor CPU performance differences.

                  Most if not all of our benchmark suites have been updated to include at least the early Spectre/Meltdown/et al. mitigations, and new CPUs are being tested with the microcode they launch with.

                  • BuckRogers 5 years ago

                    Why not do an article with then-current mitigations every 6 months? Rather than conveniently waiting until Intel can get their hardware fixes out. Which will coincide with the "mitigations are stable" article.

                    I'd be willing to bet that if the majority of mitigations impacted AMD rather than Intel as it does today, you'd already have done this and would continue to follow up. This is very interesting material and a special moment in time to cover it and inform your readers. Other than willful laziness ("lack of time", everyone knows you make time for priorities), this appears like shilling hard for Intel.

                    If Anandtech decides to do the right thing, I'd like to see .Net or Java compilation. Real-world based benchmarking only.

                    • wtallis 5 years ago

                      > I'd be willing to bet that if the majority of mitigations impacted AMD rather than Intel as it does today, you'd already have done this and would continue to follow up.

                      Fuck you, too. I've given you reasonable explanations and you're still throwing out insulting conspiracy theories. If you want sensationalized news, there are plenty of outlets that will give you what you want, and you don't need to be a dick to those of us who are trying to be reasonable and honest about both the subject matter and the resources we have to provide quality coverage.

                      Also, we've done two significant articles in the past year measuring the impact of these mitigations, so we're not even falling behind the standards you claim to want us to meet.

                • oarsinsync 5 years ago

                  To use the best information available, which are estimates on what the potential impact can be, and then make their own decisions.

    • pfortuny 5 years ago

      And you have to abide by those unjust rules because... ?

      • bayindirh 5 years ago

        ...you sign a legally binding NDA to be able to get early access to the CPUs, test and review them, and get the semi-classified technical documents to develop your new servers.

        If you don't sign that NDA, you can't buy the CPUs from Intel to resell them. Even if you are able to buy the CPUs from them, there's no guarantee that you'll buy at the list price or get the discounts for big, prestige projects which require tenders.

        It's a deep and ugly rabbit hole.

      • randallsquared 5 years ago

        No free hardware for reviewing, and having to wait until the hardware is available to the general public, so every other site has scooped you.

  • dual_basis 5 years ago

    > ... none of these Consumer CPU upgrades interest me anymore

    What about the Ryzen 3000 lineup? The benchmark leaks make it seem like it is going to be a huge improvement, and AMD isn't susceptible to Zombieload.

    • ksec 5 years ago

      The problem is I am in the Mac ecosystem, which means I don't have much of a choice (I doubt Apple will ever switch to AMD). And since most of my casual gaming is done on mobile (I am quite old and don't have time to spend hours on UO or WoW like I used to), none of these upgrades mean anything to me. So my interest is in servers, where most of my time is now spent on web development.

      • berbec 5 years ago

        Apple might switch to ARM, but I think you're right on AMD

    • blitmap 5 years ago

      This probably doesn't contribute to the conversation, but with the number of serious vulnerabilities that have popped up recently I'm not inspired to solve the truth table for the CPU vendor that leaves me the least exposed. As others have said - and I have seen - some of these can be exploited with JavaScript in the browser. (I do not know much about Zombieload presently.)

      Looking forward to a less complex architecture even if it means cutting me off at the knees with execution speed (for a few years):

      RISC-V

      • rbanffy 5 years ago

        I remember mentioning before that we should have learned, with the first Xeon Phis, to program microkernels for high-core-count in-order CPUs.

        Because the future is looking increasingly in-order and high-core-count, with hard partitioning between security contexts.

  • nobrains 5 years ago

    That's because Anand Lal Shimpi is not doing the writing anymore...

  • pizza234 5 years ago

    I think the point about the Ice Lake announcement is a mischaracterization.

    It's typical for news sites to report individual announcements with little or no analysis; this is fair, as long as the post clearly specifies its nature (which, in this case, it does).

    AnandTech actually did something very interesting on the Intel subject, which I didn't see on other sites - it ran an article about the performance of the i9-9900K locked at its nominal TDP (95W), which showed very significant losses.

  • close04 5 years ago

    I was put off AT the day I noticed they forgot to cover the Threadripper launch for weeks while they flooded the front page with dry half-pagers about new Intel motherboards (not benchmarks, mind you, just snippets from the OEMs' press releases). I asked about it in the comments and got a boilerplate answer that they strive to present quality articles to the readers.

    They also had the Intel 6th-series launch, where they praised the 6600K and compared it to the 2500K to show “massive” improvements over the years, while all the other websites noted “minimal speed boost for too big a price”. Perhaps both are true, but the spin on it makes all the difference in showing the intention.

    AT shows quite the Intel bias. And it’s not the Intel part that bothers me, it’s the bias part. They go out of their way to make Intel look better without outright lying, just selectively presenting the truth in a way that shines a much better light on Intel. This for me casts doubt on other articles.

    I’m glad Andrei Frumusanu’s mobile reviews still have a home, being the best I have seen on the entire internet. But that’s the only segment on AT where I can be reasonably sure about impartiality.

  • nl 5 years ago

    Their October 2018 i9 benchmark review was subtitled "Hardware and Software Security Fixes", and literally began with the following sentence:

    The Spectre and Meltdown vulnerabilities made quite a splash earlier this year, forcing makers of hardware and software to release updates in order to tackle them.

    https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9...

  • gameswithgo 5 years ago

    it's gaming-centric news, nobody cares about these exploits when running fortnite

    • gameswithgo 5 years ago

      explain why i should care instead of downvoting. a windows program can already just directly look at the memory of other running processes, so why do i care about sidechannel attacks outside of javascript snooping on things which i can mitigate with simple behavior changes.

      • gambiting 5 years ago

        >>a windows program can already just directly look at the memory of other running processes

        Do you run everything with administrative privileges?

        • codexon 5 years ago

          Too many programs ask to run as admin already; people just click OK on the UAC prompt to make it go away now.

  • tatref 5 years ago

    Benchmarking CPU firmware is forbidden by the license (same thing for databases)

    • gambiting 5 years ago

      What licence? You don't have to agree to a licence to use a CPU. I mean, Intel might think otherwise, but those kinds of licences ("you agree to this licence just by opening the product") are not worth the paper they are written on in the EU, so even if there is such a licence it wouldn't be applicable everywhere.

      • tinus_hn 5 years ago

        Intel could license the microcode update.

    • mycall 5 years ago

      I would set up a scenario where safety is a concern (V2V) so you could get a court order to benchmark the firmware.

  • spronkey 5 years ago

    AnandTech went downhill when they lost Anand. Not sure what the hell he could possibly be doing that's useful to Apple, but the conspiracy theorist in me wants to think it was Apple getting him out of the media.

lawrenceyan 5 years ago

Intel has fallen so far. It's honestly a shame to watch at this point.

I remember back when Sandy Bridge was first released, and I was extremely pleased by the performance improvements my new chip was able to provide. Did they really manage to mess everything up within such a limited timespan? Or was there just always a hidden incompetence that never showed itself until now?

  • zanny 5 years ago

    Their design for 10nm and the implementation didn't line up. Whatever their (still undisclosed) problems were, the entire node was fundamentally flawed.

    It might have been hubris at having been at the cutting edge of fab tech for so long. It could have just been the fruits of pushing the envelope - sometimes what you predict will happen when you put theory to application proves false.

    It has warped their business heavily for 4+ years now. But in the same way that AMD had to "get their act together" with their processor design after Bulldozer failed spectacularly in practice, and took ~7 years to fix it, companies at these scales cannot turn on a dime - Intel had their roadmap planned a decade in advance, and to have it so thoroughly trashed starting around ~2015-2016 will in all likelihood require until at least 2021 to correct.

    • chx 5 years ago

      I more and more tend to believe the rumors (started by Semiaccurate) that we will not see 10nm in mass quantities and that 7nm is next. We will see very, very soon: Intel said 10nm CPUs in client systems will be on shelves for the 2019 holiday season.

      • DuskStar 5 years ago

        But weren't there Intel 10nm chips sold for revenue in 2017? [0] Granted, it was an OEM-only part for some China-specific education laptops...

        0: https://www.anandtech.com/show/13405/intel-10nm-cannon-lake-...

        • chx 5 years ago

          Cannon Lake was released only because many at Intel have their bonuses tied to the process node launch. Well, they launched a 10nm CPU... one so bad that the GPU is disabled, its performance/watt is worse than KBR's, and it was only available in limited quantities.

          • dfrage 5 years ago

            KBR = Kaby Lake Refresh. Interesting, in that Intel's "10nm" node is said to be more power efficient than their "14nm" node; in this case, per AnandTech (https://www.anandtech.com/show/11738/intel-launches-8th-gene...), KBR was launched on 14+nm. Could be that such an early 10nm part wasn't yet very power-optimized. And that would be a very good explanation for why the GPU was disabled, versus the yield issue, which we're pretty sure is much more fundamental than specks of dust and other isolated defects that can disable part of a die without killing it altogether.

            • chx 5 years ago

              https://semiwiki.com/semiconductor/intel/7433-intel-10nm-yie...

              The GPU was disabled because it is those blocks that are the most problematic yield-wise.

              • dfrage 5 years ago

                Hits forehead for forgetting to check SemiWiki. But note that since then Global Foundries has, for now, abandoned offering this general node - 7nm as they name it, 10nm as Intel does.

                But isn't the whole die exposed and otherwise processed as a single piece? I'm very much not deeply educated here and can't justify the investment to change that; my primary mental models for defects are either something that takes out a whole die, like one lithography step being misaligned, or spot damage like a piece of dust.

                But there are clearly issues in between that are statistical within a die. I recall Semiaccurate saying one of Nvidia or AMD did a GPU tape-out on a TSMC process where they duplicated vias because that process's vias were iffy, and they compensated with a less dense design where either one or two working was OK. If Intel is suffering that sort of problem, then the GPU is a big part of the die that can be fused off while you still have something useful. If all your CPU cores or all your L3 cache banks fail, a working GPU is pointless.

                That article points out two particularly suspect things Intel is uniquely trying at this node: SAQP for the metal layers, which I've seen cited before, and which they generically officially blame, and cobalt in interconnects. And at least one other thing was mentioned as suspect, and four new things total.

                One ray of hope is mentioned for Intel, in that they were the most aggressive in the industry with their 14nm and 10nm nodes, and in both cases paid the price in yields, while they're being conservative for their 7nm node, no doubt because EUV is a very big step for everyone. Semiaccurate also commented and/or theorized that a compelling reason Intel is continuing to work on their 10nm at one fab is that one or more things in it are also going to be used in their 7nm, so they might as well debug them now and there, and sell some chips while they're at it.

                Now to do some catching up on SemiWiki, thanks!

    • ulzeraj 5 years ago

      I've seen some recent tests showing that Bulldozer is quite competitive with new multithread-friendly stuff like DX12 and Vulkan. Roughly speaking... if you think of it as a rematch against the same Intel products, then Bulldozer can win in lots of situations.

      • spronkey 5 years ago

        It's certainly better at multithreaded workloads than single-threaded ones. But its performance per watt is nowhere near that of Intel or Ryzen parts.

  • dgacmu 5 years ago

    It's not Intel, it's the end of Moore's law. Intel's problem is that they are not well positioned to capitalize on the specialized processors that will be required to continue ekeing out advances for the next decade or two before we're entirely up a creek. :)

    • lawrenceyan 5 years ago

      From how Apple and AMD are doing with their own processors, though, it seems like Intel is just fundamentally doing worse even as things become more difficult with smaller transistor sizes. Apple is going to replace Intel with their own processors because Intel has failed to meet requirements. AMD, on a shoestring budget and basically on the verge of bankruptcy the entire time they were doing their R&D, managed to build out a new architecture that has provided amazing results, while Intel has had basically nothing to show in the same time.

      But perhaps there's something I'm missing here. Is there a misconception or lack of information here on my end that needs to be clarified? I can only make my analysis largely as an outsider looking in when talking about semiconductors.

      • ChuckMcM 5 years ago

        Oddly enough, the challenge of estimating who is "ahead" is kind of like traffic. Intel arrived at the scaling traffic jam way before anyone else, and has been slowly slogging through it. New entrants are catching up to the traffic jam and will have to make their way through it as well. If there is no breakthrough, then everyone will find themselves more tightly bunched in feature/performance curves than they have been in the past.

        The spoiler, though, is that different architectures have different scaling properties and limitations. IBM's Power architecture has already scaled past where Intel is, not because of the semiconductor process, but because the architecture is more streamlined. ARM is somewhere in the middle: it started off pretty streamlined, but it has been adding warts (special instructions) to compete more directly with Intel, and that creates impediments to scaling.

        • IgorPartola 5 years ago

          Bad analogy. You can prove that getting in line earliest will get you out of it earliest. If you postulate that it’s more complex than that, it might hold up. You could say that Intel is driving a semi, while others are Mini Coopers and motorcycles, splitting lanes and better at speeding up/slowing down. At which point no analogy is necessary: startups and smaller companies are more nimble than larger companies, at the trade-off of resources.

          • dfrage 5 years ago

            It's a good analogy as long as Intel was the first to experience the end of Dennard scaling (https://en.wikipedia.org/wiki/Dennard_scaling) because their fab lines were ahead of the rest of the industry's. And fabs are all "semis", due to the massive amounts of capital and talent needed to move to the next node.

            So much so that we're now down to two companies in the whole world who are successfully executing the smallest CPU nodes, unless Intel manages to make their "10nm" work, or pulls off their "7nm".

            Meanwhile, we're hearing the very roughly equivalent TSMC "5nm" node is starting risk production (https://wccftech.com/tsmc-5nm-production-euv/ - beta testing, you might say; someone outside of TSMC has to be the first, second, etc. to try to get real-world dies that work on a new node). Intel isn't saying anything, but Semiaccurate has reported that at least two fab lines that were slated to move to their 10nm are installing lots of EUV equipment, consistent with using them for their 7nm node (and at least one fab is moving back to 14nm).

      • tntn 5 years ago

        Apple and AMD just have to ask TSMC to do their magic to make 7nm chips - they haven't had to do anything spectacular, just use TSMC's design libraries.

        Intel is struggling because of their struggles with 10nm. Apple and AMD are not because TSMC has pulled off 7nm. Architecture matters, but process node matters a lot too.

        • lawrenceyan 5 years ago

          What would you say are the primary differences between the two companies? Is it more just a matter of luck that has allowed for TSMC to have been able to succeed where Intel hasn't? Or is there actually a meaningfully different process design and/or problem solving approach that is enabling this?

          • wtallis 5 years ago

            The full story on how Intel managed to fuck up 10nm so badly may not see the light of day for years if ever. But generally, it seems that Intel tried to make too many changes in one generation. They probably wanted their 10nm to be the most advanced process that didn't require EUV lithography. Some features of their 10nm process ended up not working (evidence points to the cobalt interconnects as one of the hang-ups). In the meantime, it looks like EUV is coming along nicely.

            They compounded their problems by essentially stopping microarchitecture development on 14nm, which is why eg. their laptop processors still don't support LPDDR4, and they're still shipping basically the same CPU core they released in 2015. Coupling microarchitecture and fabrication development has at times been an advantage for Intel, but for the past few years it's been a huge mistake, and they've promised changes to their design processes so that they don't get stuck like this again in the future if fab advances aren't ready when new microarchitectures are.

            TSMC naturally doesn't have this problem, because they're a pure play foundry. Their customers have to each make their own bets on when new fab processes will be truly ready, and how well they will perform in practice.

          • dfrage 5 years ago

            Not tntn, but I've been following this, and it seems to be both Intel's now decades-long history of very bad high-level engineering and personnel management catching up with their crown jewel, and their being more aggressive than TSMC's initial 7nm node. Perhaps Intel is depending on a particular lithography technique that TSMC isn't, or isn't yet heavily, but we don't really know; no one authoritative is talking, and Intel is still claiming 10nm is going to make it.

          • pkaye 5 years ago

            I think Intel having their own manufacturing fab is hurting them in the long term. By outsourcing it, you can go with whoever has the best solution. As a matter of pride, Intel has not done this, but AMD, Nvidia, and Apple all do.

            • dfrage 5 years ago

              I've heard entirely the opposite: that having a close relationship between chip designers and fabricators allows for higher-performance designs. I don't know of anyone who interpreted AMD selling off its foundries as anything other than severe financial distress, and it worked supremely well for Intel while they stayed at least one step ahead of the competition. So much so that it is said to have wiped out a generation of CPU architects: while Dennard scaling still worked, no matter how clever they were, Intel moving to its next process node wiped out their speed advantage.

              But it's a brittle model if a company screws up a node and is too messed up to handle the failure gracefully, as Intel is doing with their "10nm", no doubt with pride as a factor. And it's not uncommon for institutions to permanently lose abilities; I'm not counting on Intel succeeding with their "7nm" node.

              On the third hand, we're now down to 2-3 high-end CPU fab companies: Samsung, TSMC, and maybe Intel. That can also be a brittle thing.

      • dgacmu 5 years ago

        Intel was ahead, and hit the wall first. Apple & AMD are not ahead, they're just catching up. I don't want to understate how big a problem that could be for Intel, of course. But they're also doing it on low margin parts, and Intel continues to make bank with their data center parts.

        I don't think any of this represents a short-term problem for Intel, other than the general downturn in processor sales because fewer people will need to upgrade. But I think it represents a very serious long-term threat.

        They have some really cool technical advances, like 3D xpoint. But I'm concerned that they do so badly on embedded and custom integration from a long-term perspective.

        • vbezhenar 5 years ago

          Apple sold millions of iPhones with 7nm chips while Intel struggles to build comparable 10nm chips and keeps releasing 14+++ nm. AMD will release 7nm chips very soon. It does not seem like they are catching up. Quite the opposite.

          • akvadrako 5 years ago

            You can’t compare nm between vendors - they’re just marketing numbers.

            • throwaway2048 5 years ago

              Not directly, no, but the actual feature size and density of TSMC 7nm and Intel 10nm are comparable.

              • pkaye 5 years ago

                What about die size?

                • wtallis 5 years ago

                  Then you have to ensure you're comparing chips designed for the same market segment. Die size comparisons work well if you're talking about a Cortex-A53 on 16nm vs 12nm. They don't work as well when you're talking about a full SoC, or even a desktop CPU+GPU combo where core counts for both sides of the chip can vary greatly.

                • throwaway2048 5 years ago

                  Die size is independent of process size; customers can order almost any die size they want.

          • rbanffy 5 years ago

            My simplest laptop in current use has 4 times more memory than my current phone and I probably would need to make huge compromises to live with half as much. A lot of the chips in phones don't even have external memory buses. A top-of-the-line iPad Pro sports an 8-core asymmetrical core design, with 4 fast cores and 4 slow ones and, overall, is slower than a 2-core Core M-based MacBook (although it feels great because iOS does a lot less than macOS).

            Also, Apple doesn't make its own A-series processors - it uses TSMC for that.

          • Wowfunhappy 5 years ago

            And iPhones still don't come close to competing with desktop-class processors in terms of performance. iPhones also use much less electricity, of course, but the point remains.

            I don't know enough about this, but the GP's argument of "Intel hit the wall first because they were the first to reach that level of performance" makes logical sense to me.

            • robocat 5 years ago

              "Apple’s iPhone Xs is faster than an iMac Pro on the Speedometer 2.0 JavaScript benchmark"

              Yes, Safari, but iPhones do compete with your average desktop processor (not the top end).

              https://macdailynews.com/2018/09/23/apples-iphone-xs-is-fast...

              • spronkey 5 years ago

                They can compete in certain workloads. As a computational tool however, desktop Intel CPUs can be optimized far, far beyond the capabilities of any A-series CPU.

                Don't forget that Intel CPUs have things that A-series CPUs are missing, like QuickSync, AVX2, and massive PCIe connectivity.

                Whether the A-series CPU could be modified into something competitive on that front is yet to be seen. Whether this actually matters considering the state of our compilers and software development is yet another question.

              • Wowfunhappy 5 years ago

                Apple's newest CPUs have hardware explicitly for accelerating Javascript, so it's not surprising they'd pull ahead there.

                And as you said, you're comparing the top-of-the-line iPhone to an "average" CPU.

                • saagarjha 5 years ago

                  > Apple's newest CPUs have hardware explicitly for accelerating Javascript

                  They really don't. The A12 added a couple of instructions for floating-point conversions, but contrary to claims making the rounds on Twitter at the time, they were not even generated by WebKit when the benchmarks were run.

          • tntn 5 years ago

            Right, but Intel isn't falling behind AMD or Apple in this comparison - it is falling behind TSMC.

        • rbanffy 5 years ago

          Intel made one single bad bet - their 10 nm process didn't work as well as they expected - and TSMC, who made the right bet, leapfrogged them.

          In terms of architecture and vulnerabilities, it's not prudent to bet Intel chips are more vulnerable to exploits than others - it's just that we know more about those vulnerabilities. If you want to find vulnerabilities with high impact in cloud and enterprise data centers, Intel Xeon CPUs will be your primary research target.

          • spronkey 5 years ago

            We're not really sure yet whether TSMC have leapfrogged Intel in the longer term though. Intel's 10nm issues seem to have delayed their smaller process nodes in the medium term, but by how much is yet to be seen. It seems, for example, that Intel 7nm isn't in quite as much trouble as one might expect.

            It's also naive to dismiss the possibility for Intel to have learnt a lot from some of the failures in 10nm that will prove useful in accelerating node development in the future.

        • atq2119 5 years ago

          If the speculation about AMD's Rome / EPYC 2 performance is true, they have now surpassed Intel.

          • Filligree 5 years ago

            More accurately, AMD's 7nm processors should be ahead of Intel's 14nm processors. That's great, but it's not a huge surprise.

            We've yet to see how competitive they'll be once Intel leapfrogs that 10nm node. That's assuming that they can, of course...

      • creato 5 years ago

        Are they doing worse? Or are they still ahead, just not as far ahead as they used to be?

        I wouldn't expect any massive leads in any industry to last for long. This might just be regression to the mean.

        • vbezhenar 5 years ago

          They are behind. I think they have a chance to catch up with their 7nm which is supposed to be better than TSMC 7nm. But it won’t be soon.

          • jjeaff 5 years ago

            But what does that really mean?

            They still seem to be producing the fastest processors available for desktop and server.

            It doesn't matter if someone else is making even a 3nm chip if that chip still can't outperform the current offerings.

            • dodobirdlord 5 years ago

              The sizing numbers are also just nonsense marketing. They stopped meaning anything in particular a long time ago. Intel's '10nm' and TSMC's '7nm' are about the same size.

              • Filligree 5 years ago

                The size numbers do measure something, but what they're measuring differs between manufacturers.

                They aren't comparable between AMD and Intel. They absolutely are comparable between Intel and Intel.

            • shaklee3 5 years ago

              I wouldn't say that. Why are EPYCs worse than Skylake Xeons?

        • robdachshund 5 years ago

          The reason they are having problems is that they just continued doing die shrinks and speculation hacks to increase performance. They've essentially had the same core since Sandy Bridge.

          They didn't see Zen coming, didn't have to compete with Bulldozer, and thought they could just keep shrinking rather than building a new core design. Once they hit 10nm, they failed, and their old core got some healthy competition from Zen. Now AMD is looking to take a serious lead with Zen 2, aka the Ryzen 3000 series.

          I don't think Moore's law is dead; Intel just gave up on real R&D because it was cheaper.

          I can't wait for ARM and RISC-V to enter the playing field.

    • ves 5 years ago

      You probably want to say “eking out,” not “eating out,” as the latter means something... very different.

      • dgacmu 5 years ago

        Someday I will learn to triple check when I use speech recognition.

    • cozzyd 5 years ago

      They did buy Altera didn't they?

    • jscott0918 5 years ago

      reddit.com/r/boneappletea

      • dgacmu 5 years ago

        Oh goodness, thanks. Fixed 'ekeing', which my phone apparently did not believe I'd said.

        • cormacrelf 5 years ago

          So close!

          It's eking. Eke, eked, eking.

        • api 5 years ago

          Mobile is the foot rub. (Autocorrect of "mobile is the future" I saw once.)

          • mruts 5 years ago

            That’s ducking funny!

  • Shorel 5 years ago

    And yet they still have the lion's share of the market. And the money to eventually recover from this as if nothing happened.

    Tell me when they have really fallen. Still very far from it.

    • Analemma_ 5 years ago

      That means very little. As the saying goes, “How did you go bankrupt?” “Two ways: gradually, then all at once.”

      In technology, downward swings of fate tend to come fast and hard. The camera world went from 100% film to 100% digital in the space of about five years, which extinguished Kodak. Or consider Palm/Nokia/Blackberry, who went from collectively owning the entire mobile market to dead as doornails in even less time.

      It’s easy to see how it happens to Intel too: AMD’s big-core-count chips start eating up server business, while ARM takes over PCs (at this point people consider it all but certain Apple is switching to ARM in the next few years, and Microsoft is building Windows on ARM as a hedge), and without another business for Intel to fall back on (they’ve shut down modems, mobile chips, and anything else that could’ve been a new source of revenue), that’s the end.

      I’m not saying it’s certain, but I’m saying it’s totally possible and their current market share means nothing.

      • Shorel 5 years ago

        IMO this is only a repeat of the AMD Athlon days and Intel will go back to their anticompetitive antics sooner than later.

        • Lev1a 5 years ago

          Did they ever truly stop?

    • robdachshund 5 years ago

      Losing up to 40% of performance and having to disable hyperthreading is going to kill them in the server space.

      That Ryzen 3 demo didn't look good for Intel either.

  • gameswithgo 5 years ago

    Maybe it is just hard. AMD isn't shipping clearly better stuff either.

    • gameswithgo 5 years ago

      Show me what AMD chip performs better instead of just downvoting.

      • serf 5 years ago

        performance benchmarks go out the window once you realize that a platform is woefully and unfixably insecure.

        performance benchmarks mean even less when those security issues are band-aided with workarounds that hurt the very metrics being measured.

        • davrosthedalek 5 years ago

          If I buy 100k chips for my HPC cluster to do single-tenant processing, performance and performance/watt are priority one. The vulnerabilities Intel has to fight right now are irrelevant for this use case.

        • gameswithgo 5 years ago

          AMD isn't immune to side channel attacks either. We think AMD is immune to the most recent one, but I wouldn't assume that in the long run AMD will generally prove to be more resistant to them than Intel.

lkschubert8 5 years ago

Should they be describing it as 8 cores, 16 threads when there have been multiple security vulnerabilities that require hyperthreading to be turned off to be mitigated?

  • hmottestad 5 years ago

    This is a very good point. I hope AMD brings it up with the EU. It might be a very slow process, but at this point it is anticompetitive behaviour. AMD could probably squeeze a fair bit more performance out of their processors if they were allowed to cut some security corners.

    • fwip 5 years ago

      Who do you think is disallowing AMD from cutting security corners? (Not rhetorical)

      • lkschubert8 5 years ago

        Ethics, the researchers exposing these exploits, and proper implementation of the x86 ISA.

      • nl 5 years ago

        No one "allowed" Intel to cut security corners. It was an oversight and took a long time to discover and understand the impact of.

        Even when people started speculating (ha!) that speculative execution could be problematic it took years before they managed to exploit it.

    • spronkey 5 years ago

      I hear many people continuing to say that Intel are "cutting security corners".

      Are they really? I don't have an extremely deep understanding of Intel's implementation of the x86 ISA, but I do know enough to say that so far we've been able to effectively mitigate almost all of these attacks with existing instructions available on Intel CPUs. That doesn't mean they aren't still open to other variants of these attacks - but at some point you have to assume diminishing returns. Spectre is still very difficult to exploit, for example.

      Perhaps this has little to do with Intel and more to do with software authors cutting corners? LFENCE and SFENCE are reasonably well documented, after all...

      • hajile 5 years ago

        Here's a Register article from 2007 about page table permissions being problematic. If you look around a bit, there were a ton of security researchers who talked about the problem. It seems to have been a bit of an open secret that such a thing must exist -- they just hadn't found it yet.

        https://www.theregister.co.uk/2007/06/28/core_2_duo_errata/

        The scariest part is that many of the best security minds work for various intelligence agencies. They very likely have known about such things for a very long time.

        Meltdown strikes me as an almost perfect vulnerability. It affects almost everyone. It is undetectable until exploited and once exploited, it immediately goes away until the next time. It's easy to keep secret. Most importantly, it's a one-way vulnerability. Keep your secure systems from running untrusted code and there's zero risk. Since this is standard protocol anyway for those systems, you don't have the risk of someone running across a code patch somewhere.

        The only potential downside is that the juiciest targets also aren't running untrusted code (though most foreign affairs workers probably run untrusted code). The big point of interest here is information symmetry. In most cases, giving others secret information is bad. In this case, both the best and worst case situations work out well for the USA. If nobody else knows, they get free info. If everyone else does know, then everyone gets perfect information about everything. This favors the most powerful country. They can eliminate the unknowns (the only real danger). In contrast, knowing you are going to be crushed does nothing if you can't hide your own hand either. So, the best case is very good and the worst case is still acceptable.

  • jayflux 5 years ago

    What else would they describe it as? There are 8 cores and 16 threads, whether you have to turn off the hyperthreading feature or not is a different matter.

    • lkschubert8 5 years ago

      It just feels kinda shady to advertise peak performance with safety features that should be on turned off, without mentioning that. They should at least include a disclaimer.

      • matz1 5 years ago

        Putting a disclaimer on it is like voluntarily shooting themselves in the foot.

        Most consumers wouldn't care about this anyway, and the risk was overblown.

    • hmottestad 5 years ago

      I think that if you turn off hyperthreading it becomes 8 core 8 threads.

  • saltyshake 5 years ago

    Besides cloud providers running VMs/containers on the cloud, is Spectre/Meltdown really such an issue for day-to-day consumers?

    • infotogivenm 5 years ago

      Yes. I think this is a common misconception.

      These attacks work fine in the browser, as researchers continue to show. They allow complete bypass of any native app sandboxing layers. Surely you don't run everything on your box as root all the time.

      • markmark 5 years ago

        Can you link to a hosted example of one of these? That would convince people nicely. Someone linked to one in a similar discussion yesterday but it didn't work anymore in currently patched browsers.

      • mjrow 5 years ago

        I'll keep HT on because I use NoScript and I encourage others to do the same.

        • d33 5 years ago

          Meh. It doesn't require JavaScript for your computer to run logic described by others. Browsers are such complex machines that it wouldn't surprise me if you could, for example, craft a malicious SVG that would bypass that, or a Turing-complete CSS file that triggers a vulnerability...

          By the way, does NoScript actually block in-SVG javascript?

          • nightfly 5 years ago

            In-SVG JavaScript only gets executed when viewing an SVG document (and maybe an <embed>ded SVG document), not when viewing an SVG in an img tag.

          • dual_basis 5 years ago

            Sure, but we all take risks every day. If you're worrying about Turing-complete CSS files exploiting Spectre and Meltdown then you probably don't leave the house much.

            • ben_w 5 years ago

              We know that attackers have reason to exploit literally all compute resources they can find a way to access. This is more like worrying about leaving the house during an epidemic of exploding ebola-infected pigeons — if you can do something about it, you should.

              • dual_basis 5 years ago

                Attackers also have to consider cost/benefit analysis when evaluating methods of attack. Claims that "CSS is Turing complete" require a user to act as a "crank" [0], so there are lower-hanging fruit out there than trying to program complicated logic which can utilize the Meltdown / Spectre exploits in CSS.

                [0] https://news.ycombinator.com/item?id=10734966

    • dual_basis 5 years ago

      Yes and no. It is possible to exploit Meltdown / Spectre via Javascript. From [0]:

      > This can happen when one has opened the other using window.open, or <a href="..." target="_blank">, or iframes. If a website contains user-specific data, there is a chance that another site could use these new vulnerabilities to read that user data.

      Most browsers have pushed patches which eliminate known mechanisms of leveraging the exploit, but the pathway cannot be completely mitigated by browser patches, I believe.

      [0] https://developers.google.com/web/updates/2018/02/meltdown-s...

    • msbarnett 5 years ago

      Given that most consumers run JavaScript unconditionally, yes. Browser vendors have basically declared Spectre/Meltdown/MDS unmitigatable at the browser level.

      • vbezhenar 5 years ago

        Can you link a source please?

        • msbarnett 5 years ago

          https://v8.dev/blog/spectre

          > Second, the increasingly complicated mitigations that we designed and implemented carried significant complexity, which is technical debt and might actually increase the attack surface, and performance overheads. Third, testing and maintaining mitigations for microarchitectural leaks is even trickier than designing gadgets themselves, since it’s hard to be sure the mitigations continue working as designed. At least once, important mitigations were effectively undone by later compiler optimizations. Fourth, we found that effective mitigation of some variants of Spectre, particularly variant 4, to be simply infeasible in software, even after a heroic effort by our partners at Apple to combat the problem in their JIT compiler.

          > Our research reached the conclusion that, in principle, untrusted code can read a process’s entire address space using Spectre and side channels. Software mitigations reduce the effectiveness of many potential gadgets, but are not efficient or comprehensive.

          The "some variants" include MDS, which the author was aware of but which had not yet come out of embargo at the time of publication.

          • vbezhenar 5 years ago

            But they do not claim that hardware mitigations are necessary. They claim that they need to change browser architecture a little bit:

            > The only effective mitigation is to move sensitive data out of the process’s address space. Thankfully, Chrome already had an effort underway for many years to separate sites into different processes to reduce the attack surface due to conventional vulnerabilities. This investment paid off, and we productionized and deployed site isolation for as many platforms as possible by May 2018.

            So with improved browsers it's still unclear why ordinary users need those performance-eating mitigations, when browser vendors managed to solve that problem themselves.

            • msbarnett 5 years ago

              > But they do not claim that hardware mitigations are necessary. They claim that they need to change browser architecture a little bit

              For Spectre, that's enough; for Spectre-class Intel permission exploit vectors (aka Meltdown, Fallout, ZombieLoad, RIDL, Store-to-Leak Forwarding, and other MDS vulnerabilities) all of the same infeasibility of browser mitigations applies, but data also leaks across process boundaries, so process isolation does jack shit to protect you without lower level mitigations.

              There’s nothing whatsoever browsers can do to prevent this. Process memory read isolation effectively doesn’t exist in the presence of unpatched Intel MDS vulnerabilities.

              > So with improved browsers it's still unclear why ordinary users need those performance-eating mitigations, when browser vendors managed to solve that problem themselves.

              The unclarity is only in your misunderstanding of the relationship of MDS vulnerabilities on Intel to Spectre vulnerabilities in general.

shereadsthenews 5 years ago

I don't get it. If it has an all-core frequency of 5GHz, doesn't that mean they've left some single-core boost on the table? Or have they hit some other limit and this part is basically free of thermal limits?

  • adamparsons 5 years ago

    I’d guess that the upper limit is stability rather than thermal capacity at that point

    • kzrdude 5 years ago

      What does stability mean, a bit more exactly? Just curious

      • bayindirh 5 years ago

        The switching speed of silicon also has some upper limit. When you drive silicon faster, it starts to make mistakes, i.e. not all electrons go where you want them to go. This causes soft faults, and the CPU re-executes the part at best, or gives you a BSOD, oops or panic at worst.

        This upper limit depends on process, layout, power design and power limits of the CPU.

        Last but not least, not all CPUs are created equal on a wafer. I came from an era where we hunted plain blue AMD Athlon dies for higher overclocking potential, since they were from the center of the wafer and were more stable under high load/voltage/clock. I had a 2200MHz Athlon (200 x 11) which was faster than AMD's own 2200MHz Athlons, since AMD wasn't offering a 200MHz bus version of their 2200MHz parts.

        • koala_man 5 years ago

          >gives you a BSOD, oops or panic at worst.

          That's not too bad. I'd be much more worried about code silently doing the wrong thing.

          • AaronFriel 5 years ago

            That happens too. Prime 95 and other stability tests are used and can check when wrong results are returned. There's often a sliver of frequencies where a system under load begins performing floating point calculations incorrectly while other, simpler systems in the CPU are still functioning correctly.

            The BSOD, oops, or panic is a symptom of widespread errors.

          • bayindirh 5 years ago

            That's also possible. That's why overclockers run Prime95 to test their CPU stability.

            Also, a BSOD or panic in the wrong time can cause massive data loss. That's beyond bad sometimes.

            Edit: I mixed Prime95 with SuperPi. Thanks AaronFriel.

          • atq2119 5 years ago

            Why do you think the panic happens?

            It's because the code does the wrong thing, and that happens silently... until it hits some pointers or kernel structures and stops being silent.

            • bayindirh 5 years ago

              Not always. CPUs have extensive "machine check" capabilities. Some of these MCE events are recoverable, some are not.

              If the processor fires an unrecoverable MCE event, you're frozen with a nice, explanatory panic.

        • tobylane 5 years ago

          Why are chips in the centre plain blue and better?

          • bayindirh 5 years ago

            Plain blue is not a reason but a result.

            The center of a silicon wafer is said to have higher quality (due to lithography, physical stresses, and other processes I don't know the exact details of), and the result is a die with more homogeneous properties and color reflection. Since the dies' tolerances were tighter around the center of the wafer, the performance of the resulting chips was better.

            AMD was also sub-binning most of these parts (they were sold as Athlon 1700 @ 1433MHz regardless of their performance level), so people were buying these unlocked sleepers and overclocking them to insane levels without voltage increases.

            However, today the processes are so different and node sizes so small that the dies' colors are different and any variation isn't perceivable anyway.

            Back in the day, this was more of an obscure bit of collective wisdom that came out of the trial-and-error days of the overclocking wars.

      • leggomylibro 5 years ago

        Disclaimer: this is an oversimplification and I only have a lay person's understanding.

        CPUs are basically huge networks of transistors (on/off switches). They're sort of like tiny printed circuit boards; lots of individual 'parts' are connected by 'wires' on top of a silicon wafer.

        The distances are minuscule, but the lengths of the wires running between transistors still vary. So when a transistor switches between 'off' and 'on', the signal takes a different amount of time to reach its destination depending on which transistors are being switched. The signal can also feed into multiple other transistors, which it will reach at different times.

        While signals are busy propagating through the circuit, the CPU's state will be unstable, including the 'output' value of its current instruction. The time that it takes for any given instruction to stabilize is tough to predict because it depends on a lot of things, including how far apart the transistors are and how many of them the signal needs to pass through.

        The CPU's "tick rate" in Hertz relates to how quickly it "latches" its internal state. Between "ticks", the CPU waits for all of the signals to stabilize. If they haven't stabilized when the clock strikes, bad things can happen.
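        (A rough back-of-the-envelope sketch of that, with made-up numbers: the clock period has to be longer than the slowest path through the logic between two latches.)

            # Illustrative only; these numbers are invented, not from any real chip.
            gate_delay_ps = 15        # hypothetical delay per logic gate, in picoseconds
            levels_of_logic = 12      # hypothetical depth of the critical path
            setup_margin_ps = 20      # latch setup time plus clock-skew allowance

            critical_path_ps = gate_delay_ps * levels_of_logic + setup_margin_ps
            max_clock_ghz = 1000 / critical_path_ps   # a 1 GHz clock has a 1000 ps period
            print(f"critical path ~{critical_path_ps} ps, max stable clock ~{max_clock_ghz:.1f} GHz")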

        I'm not sure how the 'quality' of an individual chip can make it more amenable to overclocking, though; maybe they run into fewer issues from thermal stress? Maybe the tiny 'wires' between the transistors have slightly less resistance? I dunno, someone help me out?

        • wtallis 5 years ago

          I think the inconsistencies between samples of the same model of chip are much less about the interconnect wires than about the transistors themselves, which vary in their individual switching speed vs voltage curves. There's not really much variation in interconnect length between a given two gates when both chips are made from the same masks. But especially at the lower (finer pitch) layers of metal interconnect, variations in resistance and capacitance can affect how things operate.

  • Dylan16807 5 years ago

    Knowing that nothing can reduce performance from peak can be more valuable than an extra 4%.

  • pitaj 5 years ago

    Yeah it's weird. Why not set the single-core turbo higher?

  • gchamonlive 5 years ago

    I believe Intel processors can't boost all cores. At least some tests I have done with my notebook processor (i7 8550U - 4c8t) with `stress -c n`, where n is the number of processors, show that for n > 1 the processor doesn't reach 4 GHz, only about 3.7 GHz, while the package temps are still at around 70 °C. Only a single core on full load reaches 4 GHz before throttling.
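
    (For anyone who wants to reproduce this: a minimal sketch, Linux-only and assuming /proc/cpuinfo is available, that samples the per-core clock the kernel reports while something like `stress -c 4` is running.)

        # Minimal sketch: print each core's current clock once per second.
        # Run it alongside a load generator such as `stress -c 4`.
        import re
        import time

        def core_mhz():
            with open("/proc/cpuinfo") as f:
                return [float(x) for x in re.findall(r"cpu MHz\s*:\s*([\d.]+)", f.read())]

        for _ in range(5):
            print("  ".join(f"{mhz:7.1f}" for mhz in core_mhz()))
            time.sleep(1)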

    • mackal 5 years ago

      This is entirely configurable. They generally don't do all core == max boost for power consumption reasons (and I assume yield would be pretty low on chips that can do this)

      Desktop chips also generally don't have any AVX offset, which is almost always required for 5 GHz all core.

    • shereadsthenews 5 years ago

      Yes, there's a table in the firmware that can tell you the maximum speed with N active cores. This one though has the same speed for all values of N.

    • clarry 5 years ago

      > I believe Intel processors can't boost all cores.

      And that's exactly what shereadsthenews's point is. They can't boost all cores, and they are not boosting any core beyond the all-core capacity if it's truly a CPU that runs at 5 GHz all the time.

      • XMPPwocky 5 years ago

        Turbo Boost can certainly apply to all cores - the limits you hit there are TDP- and time-based, not strictly thermal.

        So, for example, my old laptop CPU would clock itself up to 2.7GHz on all cores... well, okay, it was a dual core, so that's not saying much, but still. But it'd only maintain that boost for a few seconds - under sustained load it dropped down to 2.5. This wasn't because of thermals, but rather because 2.7GHz was a Turbo Boost frequency, and once the PL2 (tau) timer runs out...

        • XMPPwocky 5 years ago

          And to explain why they don't have, say, one core boost to 5.1GHz...well, let's see what siliconlottery says.

          > As of 3/16/19, the top 38% of tested 9900Ks were able to hit 5.0GHz or greater.

          > As of 3/16/19, the top 8% of tested 9900Ks were able to hit 5.1GHz or greater.

          So, Intel'd cut their yield by more than a factor of four if they only let parts that could hit 5.1 into this bin. For a 2% single-core performance boost...
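
          (Quick sanity check on those figures, just plugging in the bin fractions quoted above:)

              # Back-of-the-envelope using the Silicon Lottery shares quoted above.
              frac_5_0 = 0.38   # share of tested 9900Ks hitting >= 5.0 GHz
              frac_5_1 = 0.08   # share hitting >= 5.1 GHz
              yield_cut = frac_5_0 / frac_5_1     # ~4.75x fewer qualifying parts
              perf_gain = (5.1 - 5.0) / 5.0       # ~2% higher clock
              print(f"{yield_cut:.1f}x fewer chips qualify, for a {perf_gain:.0%} clock bump")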

          • mackal 5 years ago

            I think those numbers are for chips that can 5.1 GHz all core though, which is probably a lot less than 5.1 GHz single core.

            • XMPPwocky 5 years ago

              As far as I know, K-series parts don't support binning of individual cores- if you have one bad core that'll only hit 5.0, 1-core turbo to 5.1 will still result in the OS scheduler periodically picking that core to use, it clocking up to 5.1, and problems resulting.

              Might be wrong, though.

              • wtallis 5 years ago

                Intel's Turbo Boost 3.0 [1] was their attempt to take advantage of the fact that some cores on a chip can clock higher than others. It does not work well in practice, because it requires too much collaboration with motherboard and OS vendors. This feature is not available on their desktop platform, which the i9-9900KS uses.

                [1] https://www.intel.com/content/www/us/en/architecture-and-tec...

    • rzzzt 5 years ago

      XTU allows setting different turbo multipliers for 1-4 active cores (but the difference from the nominal clock speed typically gets smaller as more cores become active).

ac130kz 5 years ago

Some of the 9900K chips are able to push 5.2 GHz; this is not a proper answer to AMD's new lineup.

  • kitchenkarma 5 years ago

    I have one constantly pushing 5.1 GHz (SpeedStep etc. disabled; it's been stable for months). I bought it because there was no comparable AMD CPU, and as far as I know AMD is still behind. Why do you think it is not a proper answer?

    • clarry 5 years ago

      I think by AMD's new lineup they're referring to Ryzen 3000 series, which isn't out yet. If the rumors are true, the top models come with 12 to 16 cores, higher IPC and higher clocks than the current Zens, pushing 5GHz boost.

      A current 9900K might be some 20%-30% faster than a current Zen, but it will no longer be so with the new lineup.

      Meanwhile, mitigations are eating up Intel's performance advantage...

      • kitchenkarma 5 years ago

        I saw some leaked benchmarks today and it doesn't look great. I really hope AMD will kick Intel (I even have a Ryzen 1800X too and will buy the new 16 core one), but I need the fastest possible single core performance, and by the look of it, at best it will be the same. But AMD has other problems like huge DPC latency, which makes it difficult to use for real time computations. If Ryzen happens to have the same single core speed as a 5 GHz 9900K and packs 16 cores each capable of delivering it, I'll swap my Intel in no time.

zwerdlds 5 years ago

Still marketing SMT. Interesting move.

  • snvzz 5 years ago

    Desperate, is what they are.

qwerty456127 5 years ago

Why 5 Hz all the time? I'd love to have such an extremely powerful CPU, but I'd actually appreciate it if it could downclock itself automatically and stay as cold as possible whenever I don't need its full power. Sometimes I run heavy computations and having 8 5Hz cores sounds great, but most of the time I just read or write something, so even 1000 Hz sounds like overkill.

  • icegreentea2 5 years ago

    Base frequency isn't the same as lowest frequency (ya... it's weird). Base frequency is vaguely related to the idea that if you had all cores running at the base frequency, you would run just about at the system's TDP (it's really a complete mess, this is a simplification). Your system can still drop CPU cores down to 400-800MHz in low energy states.

    What this announcement is basically saying is that Intel now has a 8 core chip where all 8 cores can run at 5GHz indefinitely "out of the box".

    • vbezhenar 5 years ago

      According to siliconlottery.com, 38% of 9900Ks are overclockable to 5 GHz. Probably they just decided to select the good chips from the 9900K at the factory, so these are not exactly new chips.

      • zerd 5 years ago

        That's mentioned in the article: "The new Core i9-9900KS uses the same silicon currently in the i9-9900K, but selectively binned in order to achieve 5.0 GHz on every core, all of the time."

    • xbmcuser 5 years ago

      Not indefinitely, just at the same time, because thermal throttling will happen after some time. It just means all cores will be able to go to 5 GHz at the same time, nothing about them all being able to stay at 5 GHz.

      • jjeaff 5 years ago

        Did they say that? Because there are people overclocking current chips and with good cooling have no trouble staying at 5ghz on all cores without throttling.

    • muro 5 years ago

      Since it's turbo, I thought it means "as long as the CPU likes it", rather than indefinitely. Or did they change how turbo works and now it's "it will run on turbo frequencies as long as there is enough load and the CPU is not temperature throttled"?

      • segfaultbuserr 5 years ago

        See the original comments under the news article.

        > Base frequency is when the Tau moving window time has expired. How most modern high end motherboards set it to an effective unlimited time.

        https://www.anandtech.com/show/13544/why-intel-processors-dr...

        > To simplify, there are three main numbers to be aware of. Intel calls these numbers PL1 (power level 1), PL2 (power level 2), and T (or tau).

        > PL1 is the effective long-term expected steady state power consumption of a processor. [...] PL2 is the short-term maximum power draw for a processor. [...] Tau is a timing variable. It dictates how long a processor should stay in PL2 mode before hitting a PL1 mode.

        > This is where it gets really stupid: the motherboard vendors got involved, because PL1, PL2 and Tau are configurable in firmware. [...] This lets them set PL2 to 4096W and Tau to something very large, such as 65535, or -1 (infinity, depending on the BIOS setup). This means the CPU will run in its turbo modes all day and all week, just as long as it doesn’t hit thermal limits.
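
        A toy model (my own illustration, with made-up wattages and window length, not Intel's actual algorithm or this chip's real limits) of what those three knobs do under sustained load:

            # PL2 applies while the tau window is still open; after that the
            # limit drops back to PL1. Boards that set tau to "infinity"
            # therefore never leave the turbo/PL2 regime.
            def power_limit(t_seconds, pl1=95.0, pl2=210.0, tau=28.0):
                # pl1/pl2/tau are example values only, not official specs
                return pl2 if t_seconds < tau else pl1

            for t in (0, 10, 27, 28, 60):
                print(f"t={t:>2}s  limit={power_limit(t):.0f} W")

            print(f"tau=infinity: limit={power_limit(60, tau=float('inf')):.0f} W")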

  • AmVess 5 years ago

    It doesn't run at 5 GHz all the time. 5 GHz is its all-core turbo. It simply means that all cores will run at 5 GHz under full load.

    • kristianp 5 years ago

      Yes, the article title is misleading, it actually says "All the time".

  • PascLeRasc 5 years ago

    Open Task Manager or Intel Power Gadget (on Mac) and watch your CPU frequency - it already downclocks itself when it's not under load. Usually my 4770 is around 1.2Ghz idling, and I believe some motherboards let you set a minimum clock lower than that in the BIOS.

  • segfaultbuserr 5 years ago

    > 5 Hz

    > 1000 Hz

    You must have meant to say 5 GHz and 1000 MHz...

    • muro 5 years ago

      Why would you need 1GHz for writing, e.g. in vi/Emacs? 1kHz I'm not sure, but 1MHz should be enough.

      • segfaultbuserr 5 years ago

        > Emacs [...] 1MHz should be enough.

        vi? Probably. But not Emacs, although I guess you could run something like Linus's Micro-emacs. https://git.kernel.org/pub/scm/editors/uemacs/uemacs.git/

        In all cases, anyway, a 10 MHz 68030 should be enough for full Emacs, it's commonly seen as the lowest hardware requirement for a useful workstation Unix.

dogma1138 5 years ago

Most 9900Ks can hit a 5.0 all-core OC for non-AVX loads.

With AVX, 4.8-4.9 is still doable without needing to hit the top 30% of CPUs in the CPU lottery.

My 9900K does 5.1 without any AVX offset, but this is a top 10-20% CPU if the figures from Silicon Lottery are to be believed.

So it’s not that surprising Intel can simply bin CPUs to do 5.0 at near stock voltages since many resellers have been doing just that.

  • sixothree 5 years ago

    What does that spell for the regular 9900K then?

    • dogma1138 5 years ago

      Nothing. If you don't care about AVX workloads, you can get a 9900K and set it to 5.0 GHz with an AVX offset of 2 pretty much out of the box.

      Unless the KS guarantees a 5.3-5.4 all-core OC, I don't see it being anything more than a PR release anyhow.

      That said, I'm not even sure the 9900KS doesn't come with an AVX offset to begin with. Most higher-end motherboards come with a 9900K 5.0 preset anyhow, which sets the voltage to about 1.3-1.325 and an AVX offset of 3; it just yells at you that you need a good cooling solution and that this is not guaranteed to work.

IlegCowcat 5 years ago

I am likely the odd one out here, but wouldn't having the capability to turbo a single core to, let's say, 5.5 GHz or higher as factory stock be more useful in real life than a one- or all-eight-core turbo to 5 GHz instead of 4.7? There are still enough single core/single thread apps out there that could benefit from faster single core performance, and this newest and hottest (also in temperature) i9 cannot go faster in single core than the 9900K.

kitchenkarma 5 years ago

I have a 9900K binned for 5.1 GHz all-core. Absolutely brilliant CPU. I wish there was a 16 core version though.

  • mabbo 5 years ago

    What's the power usage on that? I could imagine the heat from it keeping your home warm on a cold winter's night.

    • kitchenkarma 5 years ago

      I didn't measure. It is not too hot. I am typically getting 50-65 °C under my workloads.

Jonnax 5 years ago

Is it going to be that much faster? 300 MHz faster than the current top one, according to the article.

  • XMPPwocky 5 years ago

    And the speed of my overclocked 9700K. If anything, this is just a "Hey, some people can cool an 8-core CPU at 5GHz, let's make a new bin for the 40% of CPUs that can maintain that" release.

polskibus 5 years ago

I wonder how much of that power will be eaten by Spectre et al. mitigations.

ChuckMcM 5 years ago

From an interesting historical perspective, I mark the end of Moore's Law in 2001, with Intel's prediction of a 5GHz "NetBurst" in 2005, which could not keep itself from melting. Somewhere I have a marketing road map showing 5GHz in 2005 and 10GHz in 2010. It was aspirational of course, but seeing what had to happen between then and now in order to get a chip that runs at 5GHz all the time based on their architecture is illuminating of the challenges they face.

  • earenndil 5 years ago

    That's an overexaggeration, IMO. Moore's law didn't really start losing steam until 2014-2015.

    • ChuckMcM 5 years ago

      Perhaps, and perhaps it is just a difference of how we internalize what "Moore's Law" means. Granted, when Gordon Moore postulated it, he was strictly talking about numbers of transistors, and the implication was that transistors were a leading indicator of performance.

      Since I lean more on the 'performance' side of things, that was the end of 'single thread performance scaling', or put another way, that was when the performance of a single core stopped doubling every 18 months or so. And everyone switched over to dealing with Amdahl's law instead.

    • ricardobeat 5 years ago

      If you look at transistor count, yes. But single-core performance has stagnated since ~2003; that's when we hit the 3 GHz mark. Progress since then has been a lot slower.

      • chimpburger 5 years ago

        Moore's law is specifically about transistor count/density only.

      • earenndil 5 years ago

        GHz are basically meaningless. The 2GHz CPU in my laptop is an order of magnitude faster than anything 3GHz from 2003.

      • fwip 5 years ago

        IPC's gone up quite a bit though, right?

Traster 5 years ago

So as I understand it, this isn't new silicon; it's just binning of the existing 9900K. If you wanted an overclocked 9900K you would have just gone to Ciara, who obviously bin, overclock, and verify their systems anyway. So now you go to Ciara and Ciara goes to Intel and buys a 9900KS, instead of previously, where you would go to Ciara, they would go to Intel, buy five 9900Ks, and find the one that would've been as fast as the 9900KS anyway.

Epopeehief54 5 years ago

"8-core processor that will run at 5.0 GHz during single core workloads and multi-core workloads."

Under full AVX workloads using Intel stock cooler?

Highly doubt it.

Narishma 5 years ago

Is that a typo on the table or do those CPUs really cost the same whether they have an integrated GPU or not?

  • wtallis 5 years ago

    Those numbers are Intel's "Recommended Customer Price", not actual retail prices. The -F parts really are listed with the same RCP as the parts with GPUs enabled. No, it doesn't make much sense, but Intel has been experiencing a CPU manufacturing crunch, and the desktop market gets the short end of the stick when that happens.

gigatexal 5 years ago

There's no doubt about it: this will be a beast of a gaming chip. It will also likely cost an arm and a leg (it has to; it's binned silicon, meaning it's supply constrained) and likely have a really high TDP.

gumby 5 years ago

Curious what happens when you call into the vector (AVX) hardware.

IgorPartola 5 years ago

So now ZombieLoad et al. can be exploited even faster!

Thev00d00 5 years ago

Now you can run the speculation mitigations much more quickly!

happycube 5 years ago

It's nice to see AMD competing well enough for Intel to actually push what their process can do. Finally!

coliveira 5 years ago

Well, this also means that attacks exploiting speculation will run much faster!

bashwizard 5 years ago

I hope it includes a horde of running zombies.

bertomart 5 years ago

Just in time for AMD's Computex keynote... nicely played.

lousken 5 years ago

no i7-9700KS? disappointing

OrgNet 5 years ago

hyperthreading?