zanny 5 years ago

No mention of it, but the pressure should still be on AMD to open source or allow firmware disable of their Platform "Security" Processor.

I would really like to be more enthusiastic about using something like this for my next build, but all my computers are presently trustable in a way that new platforms with proprietary coprocessors lacking me_cleaner support cannot match.

It really sucks to give Intel money (it's not like they support the me_cleaner project; they're actively antagonistic toward third parties disabling their backdoors), but at some point it stops being a matter of principle and becomes one of practicality. I can disable the unwanted parts of the hardware on one platform and not on the other.

  • StudentStuff 5 years ago

    AMD's PSP is ARM TrustZone; there is no way AMD could open it in their current chips, since they don't own the IP and ARM is vehemently opposed. Due to the outcry, they are more likely to build their own secure enclave/supervisor processor in the next major rework of Zen, one whose IP they would own.

    • walterbell 5 years ago

      There are open-source TrustZone implementations (OP-TEE).

      AMD could drop Arm and move to a RISC-V based secure enclave. Google is developing OpenTitan as open hardware based on RISC-V.

    • shmerl 5 years ago

      That would be good. If ARM is opposed, ARM should get lost.

  • profquail 5 years ago

    Have you considered using the POWER9-based Raptor Blackbird uATX board for your next build? The firmware is open, and they advertise it as a feature:

    https://www.raptorcs.com/content/BK1B02/intro.html

    • baobrien 5 years ago

      The Raptor stuff is pretty cool, but the cost puts it well out of practical reach for most of the people calling for opening the PSP. The cheapest Blackbird+CPU combo is $1279.

  • shrimp_emoji 5 years ago

    Does Intel let you disable PSP?

    Meanwhile, AMD seemed to in the past[0].

    0: https://www.phoronix.com/scan.php?page=news_item&px=AMD-PSP-...

    • zanny 5 years ago

      They don't have a choice on the older chipsets that me_cleaner has been reverse engineered to work with.

      Hence why it's a matter of practicality. Neither respects user freedom, but the community has reverse engineered the ability to disable one company's backdoor and not the other's.

      • walterbell 5 years ago

        Is there a Xeon E3 mATX motherboard (C236 or C224 chipset) which supports me_cleaner and has 8 SAS ports? Preferably without IPMI.

tmd83 5 years ago

I'm really looking forward to this. But one issue with current AMD CPUs is that you have to buy a desktop GPU even when you aren't doing any kind of gaming. I know Intel iGPUs haven't been terribly good, but for work they are good enough, and it's one less part and cheaper to boot. For the same performance, a 3rd-gen Ryzen + GPU might still be cheaper, but the price advantage gets reduced.

I haven't really seen that mentioned much; I wonder why that is. I do love the potential of Zen 2 + 7nm. The 65W of the 3700X and the high frequency of the 3900X both suggest interesting potential for the future. One could end up seeing that the six-core Ryzen 5 parts have higher overclocking headroom.

Then there's of course Navi, the first new GPU core in a long long time.

  • jdietrich 5 years ago

    >But one issue with current AMD cpu is that you have to buy a desktop GPU even when you are not doing any kind of gaming and such.

    AMD just aren't selling many CPUs to business desktop system integrators, partly due to dubious tactics by Intel to keep them out of the market. If you sell most of your CPUs to enthusiasts, it just doesn't make sense to squander die area on a crappy iGPU. Gamers obviously want a fast GPU, but so do most creative professionals - Photoshop is heavily GPU accelerated, as is Premiere and Resolve, not to mention essentially all 3D modelling and CAD packages. Scientific computing is also rapidly moving towards the GPU. GPU performance has a surprisingly large impact on day-to-day responsiveness, because all the major browsers use GPU compositing.

    The market for fast chips with crappy iGPUs just isn't as big as it used to be, nor is it particularly accessible to AMD. The Athlon and Ryzen APUs make a great deal of sense for the current market, offering a good balance of performance between CPU and GPU. I expect to see 6 and 8 core Ryzen chips with Vega GPU cores as part of the Ryzen 3000 generation, which will further close the gap.

    • josteink 5 years ago

      > AMD just aren't selling many CPUs to business desktop system integrators, partly due to dubious tactics by Intel to keep them out of the market. If you sell most of your CPUs to enthusiasts, it just doesn't make sense to squander die area on a crappy iGPU.

      I'm a developer, and I want fast build times. I don't need a dedicated GPU for that.

      Right now I'm squandering money and power on a dedicated GPU which is probably idling at 0.0000001% rendering a composited 2D desktop in its sleep.

      • wtallis 5 years ago

        If you really need a mountain of CPU power, then your stance toward the GPU should be that you're glad the several Watts it requires aren't being emitted under the same heatsink as the CPU you're relying on, and that it isn't wasting memory bandwidth you could put to better use.

      • deagle50 5 years ago

        You can buy a used GPU for well under $100. Now you can drive multiple 4k screens for that HiDPI terminal goodness and stop choking your CPU's memory bandwidth :)

        Still a better deal than Intel and a superior desktop experience.

      • apatheticonion 5 years ago

        My terminal window has a background blur effect and I like my desktop switching animations to maintain 144fps.

        Implying devs don't use dedicated graphics cards...

        • copperx 5 years ago

          What desktop environment do you use that has buttery smooth 144fps animations? Gnome certainly isn't smooth.

          • josteink 5 years ago

            And who can read a terminal at 144fps anyway? I seriously couldn't care less about eye-candy, animations and fps.

            Then again, I'm an i3/sway-user, so I guess I don't exactly represent the average (Linux) user.

            • cfcosta 5 years ago

              No need for animations or anything like that; 144Hz also means less input lag and tearing.

      • BuckRogers 5 years ago

        I feel the same way and wish all CPUs had at least some rudimentary video output capability. I've always appreciated Intel for that. They've taken a mobile-first / mobile->desktop strategy, and AMD adopted a server-first / server->desktop strategy. Intel's desktop CPUs are a bit of an afterthought, and AMD's mobile (APU) is a bit of an afterthought.

        For AMD, what they've done makes the most sense, as mobile, where they'd be fighting Intel, is the toughest market to break into. You can get a cheap GeForce 1050 for $130 or so, with the perk of great OpenGL/DX drivers that keep anything you do end up using it for nice and snappy. I'm in your same boat and use a 2700X and a 1060.

        AMD should really have their motherboard vendors add some sort of basic functionality, like the old IGPs.

      • cl0ckt0wer 5 years ago

        You could always get a refurb $15 Radeon from a few years ago if you really don't care about 3D performance.

        I also considered going with a USB-DisplayPort adapter, but it was more expensive and I wasn't sure how well it would work.

        • Tsiklon 5 years ago

          As far as I'm aware, a USB-DisplayPort adapter will make use of the system's CPU more than a cheap discrete add-in card would. Factor in the custom drivers one would have to install (on both Linux and OS X) and, for me, that would be a non-starter as my sole video output.

      • djsumdog 5 years ago

        Same here. I recently did a Ryzen 7 build with a Thin-ITX board.

        https://penguindreams.org/blog/louqe-ghost-s1-build-and-revi...

        I have a separate gaming machine, and would have rather just used the integrated for my Linux machine. It doesn't look like AMD is refreshing their APU lineup at all in this release. Did I miss it, or are there no APUs in the list?

        • mroche 5 years ago

          The APU/mobile chips typically come later. Raven Ridge chips were released after their desktop counterparts, and the Zen+ (12nm) 3000U series mobile chips were announced only a few months ago.

    • api 5 years ago

      AMD also wants to get more into servers, supercomputing, AI, and cloud. That's mostly Epyc, but Ryzen is fast enough to play there, and integrated GPUs don't matter there either.

    • njepa 5 years ago

      That is probably true, but if I am buying a GPU for hundreds of dollars I don't care as much about the cost of the CPU. I think AMD needs to do something if they want to fulfill the potential of Ryzen. (Removing or minimizing the role of the chipset might be interesting for example).

      • jpgvm 5 years ago

        Not necessarily. For instance, if your target is 1080p gaming you can do that on a budget with an RX 570 or RX 580 + a reasonably priced Ryzen 7 chip. This market isn't at all competitive for Intel because it's price conscious and iGPUs aren't anywhere near up to the task.

        The only market that doesn't care about CPU price is the market that doesn't care about price at all, i.e. people building systems with an i9-9900K or X299 platform + RTX 2080 Tis.

        • njepa 5 years ago

          I am just not sure it is a super strong market. As far as I can tell those cards have been selling for roughly the same price or more as when they were introduced 2 years ago. A lot of people would just stick with what they got, or get a laptop instead. If the price of GPUs and memory was half of what it is it would make a lot more sense.

      • kyriakos 5 years ago

        Maybe offer a very cheap add-on GPU for office use. Something in the iGPU range that would sell for, say, $50. It could be based on one of AMD's older GPUs but with support for new display types and ports.

        • dsr_ 5 years ago

          Right now, the cheapest "new" video card sold directly by Newegg is a $35 NVidia 210 which was new in 2010. Interestingly, reviews from that time period suggest that it sold for $30 after rebate. It has HDMI, DVI and VGA ports.

          In the $50 range you can get an R7 350, which promises 4K support (although I suspect that will be at 30Hz) and would certainly be enough to light a monitor with enough oomph to put shadows and transparency effects on your windows.

          • Dylan16807 5 years ago

            > In the $50 range you can get an R7 350, which promises 4K support (although I suspect that will be at 30Hz)

            It supports 4k@60. It looks like everything with a GCN core does, so most 7xxx, most 2xx, and all 3xx.

            • makomk 5 years ago

              I think the older Radeons only support 4k@60 over DisplayPort though, HDMI support for it is relatively new.

  • DCKing 5 years ago

    If you want a discrete GPU with the same featureset as a modern integrated Intel GPU - with basic desktop features such as 1) being able to decode modern video codecs like HEVC 2) prime Windows and (especially) Linux support or 3) modern video outputs such as HDMI 2.0 - you need to buy a Radeon RX550. This will still set you back more than $80 and add another fan to your build.

    A Radeon R7 240 from 6 years ago will still set you back $50, but will not give you modern video outputs or video codecs (although it at least still has relatively prime driver support). It's probably even slower than Intel's current integrated graphics too. Might as well go for the upsell then.

    The prevalence of internal GPUs unfortunately seems to have killed the market for up-to-date very low end discrete GPUs. The "budget stuff" starts at $80, which is quite steep for something that barely has added value over an integrated GPU.

    • MikusR 5 years ago

      Even the newest AMD GPUs don't have full VP9 (YouTube) acceleration.

      • DCKing 5 years ago

        Technically AMD's APUs (the Ryzen 2400G and other CPUs with an integrated GPU of the Vega generation) do support it, but for some reason AMD has never enabled this in their discrete cards. Nor have they released a low-end Vega discrete card. They have the tech, but don't seem to be much interested in the low end.

  • deaddodo 5 years ago

    > But one issue with current AMD cpu is that you have to buy a desktop GPU even when you are not doing any kind of gaming and such.

    What? No you don't. They literally have an entire line dedicated to the exact use case you mentioned (CPU w/ iGPU for business use):

    https://www.amd.com/en/products/apu/amd-ryzen-5-pro-2400g

  • MrGilbert 5 years ago

    You could pick a Ryzen with "G", like the Ryzen 5 2400G, though... They have an integrated GPU.

    • redsparrow 5 years ago

      Unfortunately there were no desktop G parts (APUs) on the Zen+ architecture (as far as I know.) And they're not in the first round of Zen 2 either.

      The 2400G, which I think is currently the top desktop APU, launched in Feb. 2018 and has four cores.

      • mackal 5 years ago

        The Zen+ APUs should be out soonish. Can find plenty of information about the Ryzen 3 3200G and 3400G, but no release date :P

      • snvzz 5 years ago

        Nothing stops integrators from using the 3xxxU parts and the old 2xxxG parts.

        Regardless, I do expect G parts will be announced soon, too. 3 CPUs definitely isn't a full lineup.

      • carc1n0gen 5 years ago

        The 2xxx series is Zen+; also, OP's complaint was about current-gen Ryzen, which the 2200G and 2400G fall into.

        • onli 5 years ago

          Those APUs are a weird mix between Zen and Zen+: they got improvements over Zen but are not fully Zen+, and they even have some unique drawbacks (the TIM; they are not soldered).

          • blattimwind 5 years ago

            Complaining about TIM on an entry-level processor seems... questionable.

            • onli 5 years ago

              I was just highlighting that there are differences.

              Though the higher temps that result, meaning more work for the cooler, aren't ideal for a CPU that is otherwise great for an HTPC.

        • NightlyDev 5 years ago

          The 2000 series is not Zen+ when it comes to APUs; the 3000 series is Zen+ for APUs.

        • redsparrow 5 years ago

          Ah, interesting. Thanks for the clarification.

    • deafcalculus 5 years ago

      They are limited to 4 cores. In the new lineup too, when the G series comes out later, they'll be limited to 8 cores (one chiplet for CPU and one for GPU).

      • zanny 5 years ago

        With this CPU chiplet design they couldn't have two 8-core chiplets and a GPU in a single package.

        For the vast majority of professional use cases, 8 cores / 16 threads running at a 4.5+ boost is going to be more than enough for a while. AMD sure has spoiled the market in just two years by commoditizing 8-core desktop chips; in 2016 the conversation would have been about which flavor of $400+ quad-core or $800+ hexa-core from Intel you were going to get.

      • NightlyDev 5 years ago

        Not true. 3000 series APUs are based on Zen+.

        • Dylan16807 5 years ago

          Read 'later' as 'presumably labeled with a misleading 4xxx'.

    • tmd83 5 years ago

      I know about the G series, and I know why they're usually a gen behind, especially given the quality of the discrete graphics AMD has and needs to sell. But for work, for plain productivity, a lot of the savings washes out with the GPU. It used to be that you could get a super low-end graphics card for those workloads, but these days almost the lowest-priced GPU is ~$90.

      • lhl 5 years ago

        Just as an FYI for anyone looking for cheap basic display adapters, you can basically get an infinite supply of refurbed/pulled Quadro NVS 295 cards on eBay for $5-10.

      • AnthonyMouse 5 years ago

        $39:

        https://www.newegg.com/visiontek-radeon-5450-900860/p/N82E16...

        Hardly the latest technology but you can plug a monitor into it.

        • CoolGuySteve 5 years ago

          No DisplayPort and it almost certainly doesn’t support 4k at 60Hz, both of which the Intel chipset supports and both of which I’d consider essential for high-end productivity use these days.

          • copperx 5 years ago

            I still don't get how people are making use of 4k outside MacOS. Windows and Linux Desktop environments require tons of massaging to get scaling right, even with flagship software (some Adobe programs don't scale properly).

            Here we are, 8 years after Apple started shipping "Retina" displays, and PC software hasn't caught up. It's embarrassing.

            • CoolGuySteve 5 years ago

              I use Ubuntu on a 13" 4k laptop with 2x scaling and I've never had any issues.

        • broknbottle 5 years ago

          Ah, the OG king of HTPC/media PC graphics cards. I used to have one in an HTPC paired with an Intel Q6600.

          • tempestn 5 years ago

            Wow, that was a powerful (hot) CPU for an HTPC!

  • lm28469 5 years ago

    You can get a cheap passive GPU; most don't even need power from the PSU. Something like an Nvidia GT 710 or 730.

    • justaj 5 years ago

      Is there anything similar but from AMD? As a Linux user I'd rather not buy nVidia.

      • pedrocr 5 years ago

        I've been defining specs for a new NAS/home server and was just waiting for this release to finish it. I have that issue and my conclusion has been that you either get an old Nvidia for ~30-40$ (e.g., GT710) or all the AMD options I've seen are ~100$. That generation of card seems to be well supported[1] by the nouveau open-source drivers at least but I'd much prefer AMD as well. For NAS and server applications it would be nice if someone did a motherboard like the Asus X370Pro with an old GPU soldered on. That way you could have simple boot graphics without this hassle and only add a good GPU if you actually needed it for something. But I guess that's too much of a niche to bother with just like low-end graphics cards.

        Edit: Found a few AMD R5 230 for the same 30-40$ range. Assuming the drivers are also good that seems like a good option. Edit2: Researching some more the R7 240 has a similar price and is probably the first that is already supported by the new amdgpu drivers so may be a better bet.

        [1] NVE0 here: https://nouveau.freedesktop.org/wiki/FeatureMatrix/ Seems like everything but power-management is fully supported. Need to do some more digging but if the incomplete power-management means it just doesn't throttle up as much that's fine for the application.

      • philjohn 5 years ago

        Something like the R5 230 or RX 460, several generations old, but will do.

      • posix_me_less 5 years ago

        Unfortunately the market with low power AMD cards is non-existent. Maybe some older HD model if you can find it second-hand, but those can't do 4K@60Hz. There is no low powered (=passive cooling) card based on the recent AMD chips. I wanted GPU with only passive cooling for my Xeon machine, had to buy lowend NVIDIA. Hopefully Navi at the end of the year will finally change this.

        • Zekio 5 years ago

          What are you talking about? You can get RX 550s that are passive, or even one of the rare passively cooled RX 560s.

          • croon 5 years ago

            I can't find a single passively cooled RX550, albeit some are passive up to a certain temperature, which is likely acceptable for the usecase assuming it works as advertised.

            • Zekio 5 years ago

              Worst case scenario, you can always buy an aftermarket passive cooler and swap the cooler yourself.

      • opencl 5 years ago

        Passively cooled R5 230s are old but readily available. The newest passively cooled AMD card I'm aware of is the XFX RX460 but it's hard to find.

        The RX550 is low power enough to not need a PSU connector but for some reason nobody has made a passively cooled version.

  • bitL 5 years ago

    Mobo manufacturers could solder some weak GPU on board, but nobody seems to be interested, so I guess they did their homework and decided it didn't make economic sense.

  • deafcalculus 5 years ago

    Yep. Intel iGPUs have good open source Linux drivers too. I had hoped that AMD would have some basic graphics in the IO die, but it looks like that won't happen.

  • gigatexal 5 years ago

    The G-line of CPUs with integrated graphics is also interesting to me! As system RAM gets quicker (it's my understanding that the frame buffer for these integrated GPUs comes from system RAM) and process tech gets better, a competent integrated GPU+CPU chip will be able to play AAA games at 1080p with some of the visuals turned up. Though the conspiracy theorist in me thinks the console makers would never let that happen, as it would mean DIY PC gamers could build < 500 USD machines that could play all the latest games.

    • teknologist 5 years ago

      Since console manufacturers seem to have been favouring AMD for graphics lately, this could actually be the reason for a lack of AMD iGPUs

  • sascha_sl 5 years ago

    There's always the 2200G and the 2400G, I'd expect those to get refreshed within the next year too.

  • carc1n0gen 5 years ago

    2200G and 2400G from current gen ryzen have integrated graphics though.

  • szatkus 5 years ago

    They use the same chip for desktop and for the server market, where it doesn't make much sense to add a crappy iGPU. Intel, with their scale, can build many more different chips.

  • skrebbel 5 years ago

    When I put together my computer a year or two ago, I just added some cheap $40 radeon board. It really didn't impact the bottom line much.

  • gigatexal 5 years ago

    It's my understanding that Navi is just GCN but tweaked with faster memory.

    • rudiv 5 years ago

      TFA seems to state that it's a new architecture, although how different it is from GCN depends on how much salt you take with marketing material. They did say it has redesigned compute units; that seems to point in the direction of "not just tweaks".

    • NightlyDev 5 years ago

      It's not, it's "RDNA".

      • gigatexal 5 years ago

        Thanks for clarifying. I read the article. I’m excited to see benchmarks.

  • dethac 5 years ago

    The 2400G existed. It's a lower end chip, but has the GPU you want.

  • jxi 5 years ago

    Funny I feel the opposite. If I'm going to build a desktop, I figure I would get a GPU either way (having a full discrete GPU being one of the main advantages of desktops over more portable formats). I feel like I'm wasting money if I go with Intel that comes with an iGPU that I won't use.

ksec 5 years ago

At close to 300 comments, I am surprised there is no mention of what I thought was the most important surprise: 32 to 70MB of L3 cache. A lot of people focus on cores and threads as well as IPC. We already knew what improved IPC could do; we already knew what we could do with 32 threads. None of these are really new.

But 64MB of L3 cache? In a consumer CPU, at a price I would hardly call expensive (I would even go so far as to call it a bargain). We used to talk about performance enhancements and cache misses; now we have 64MB to mess with. We could have a whole language VM living in cache!

  • pedrocr 5 years ago

    >we could have the whole languages VM living in Cache

    When dual-core processors came out, someone said you could now have one core run your stuff and another run the anti-virus. That was widely joked about. This feels a little close to that: having more CPU cache than we recently had RAM, and it ending up being used for programming-language overhead.

    • ncmncm 5 years ago

      Agreed. For those of us using VMs, the extra cache in each package is enough for the working set of the systemd we are obliged to run in each VM.

      Looking back to the 8M total RAM I had on a Mac SE/30, running A/UX Unix and a MacOS GUI comfortably, the sloppiness of modern productions is a disgrace and an embarrassment.

      What galls is not the wasteful extravagance. It's the failure of imagination that makes such meager, pitiable use of such extravagance. We make, of titanium airframes and turbojet engines, oxcarts.

      • ksec 5 years ago

        Agree with both comments above, but do we really have a solution?

        It is a trade-off between cross-platform support, time to market, and development resources. And unlike any other scientific and engineering industry, software development doesn't even agree on a few industry standards; instead everything is hyped up every 2 years, something new comes around and becomes the new "standard", and we keep wasting resources reinventing the flat tire.

  • dr_zoidberg 5 years ago

    This! I usually run code where I'm cache limited and adding more processes/threads slows down everything! At 64 MB of L3 cache, I'd be able to run things way faster[0] on image processing tasks!

    [0] for the curious reader, I've got two machines: a 3MB-L3 i5 and a 20MB-L3 Xeon. So I'd be looking at roughly 20x and 3x improvements -- without taking into account other architectural improvements, like not-underclocked AVX2, and the GHz count

  • zepearl 5 years ago

    Thanks for the heads-up - I totally missed this "detail" when I skimmed through the article & specs table earlier today.

    And just in case somebody missed it, here again is the link to the PDF "What Every Programmer Should Know About Memory" by Ulrich Drepper...

    https://people.freebsd.org/~lstewart/articles/cpumemory.pdf

    ...which was posted here sometime earlier this year and which talks about every detail of RAM and L1/2/3 cache access times, architectures, etc. Very heavy going for me (I've read about 25% so far), but also very interesting.

  • Manjuuu 5 years ago

    This. People tend to forget what a huge difference a few mb of cache can make. I'd prefer 1Mb more of cache over half a GHz of frequency.

    • pedrocr 5 years ago

      This is a bit pedantic but capitalization matters in units. I think you mean 1MB instead of 1Mb (byte not bit). And "mb" would be a milli-bit or a thousandth of a bit, which is a unit that probably makes sense in some situations (e.g., estimating information content of a message).

      (There's also the extra complexity around MiB vs MB for base-2 vs base-10 prefixes. In this case it would actually be MiB, as RAM and cache are normally base-2 sized. But not everyone uses that, relying on convention from context instead.)
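
      For concreteness, the difference the prefix makes (simple arithmetic):

          64 MB  = 64 x 10^6 bytes  = 64,000,000 bytes
          64 MiB = 64 x 2^20 bytes  = 67,108,864 bytes (about 4.9% more)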

gigatexal 5 years ago

Yup. Just waiting on the benchmarks from independent reviewers and to see how XFR in this generation works but I’ll be getting a Ryzen 9 if everything checks out. 24 threads will be amazing for local development (microk8s, and others) when I’m not gaming and save me from having to build a separate box.

  • gigatexal 5 years ago

    Ballparking the build in my head:

    - Ryzen 9 or Ryzen 7
    - 32GB DDR4
    - RTX 2070 or equivalent Navi for games* (depending on benchmarks and whether I decide to actually do anything with CUDA)
    - everything else I have: NVMe system drive, case, PSU, etc.

    * Vulkan is big in the games I am targeting: Rage 2, Doom Eternal

  • WC3w6pXxgGd 5 years ago

    What will you be doing that will actually use 24 threads?

    • gameswithgo 5 years ago

      A common developer scenario can involve a few things that will eat a lot of threads:

      1. playing some background music
      2. running a local database
      3. running a local webserver
      4. running a browser
      5. running an IDE
      6. running all of that stuff concurrently while testing the backend code
      7. doing builds, which are multi-threaded with near-linear speedup in many languages/environments

      I don't know if 12 cores 24 logical is going to make that scenario feel overall better than 4 cores 8 logical, but I do know that 4x8 feels much much better than 2x4 in my own use cases.

      #7 alone can be a really, really big win for long compiling projects.

      • dkersten 5 years ago

        Yeah, my docker setup alone runs a ton of processes. Aside from docker running a copy of my stuff, I often have tests auto-running, a separate REPL to try stuff out in, my editor, slack, music, browser. It all adds up and a bunch of cores/threads definitely makes everything run more smoothly.

      • Traster 5 years ago

        Common developer scenario: make -j24

      • gigatexal 5 years ago

        This. But to the OP's main criticism: I could very well do all of that with 16 threads and 8 real cores. I do a lot of work currently with distributed databases, and I believe I need the cores for local testing, along with everything else gameswithgo mentioned.

    • fnord123 5 years ago

      Run electron apps.

      • lagadu 5 years ago

        Implying that there's a computer in existence that runs those fast.

        • sundvor 5 years ago

          Awesome, I just realised that the old "run Crysis" joke has been replaced.

          Electron apps do manage to make a mockery of my PC's specs though.

      • gigatexal 5 years ago

        I know this is a joke but aren't Electron apps single threaded by nature and usually memory hogs? So more memory would be better and having many cores is good too.

        • fnord123 5 years ago

          Electron apps have Helper processes. e.g. I currently have 3 Microsoft Teams Helper processes (with 37, 36, and 16 threads) running and 3 Spotify Helper processes (with 16, 9, and 4 threads).
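
          On Linux you can see this for yourself with procps ps; nlwp is the per-process thread count (the processes listed will just be whatever happens to be running):

              # list the most thread-hungry processes, highest thread count first
              ps -eo pid,nlwp,comm --sort=-nlwp | head -n 15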

          • megous 5 years ago

            Idle threads don't count.

        • 781 5 years ago

          Parent was asked about threads, not memory. Of course he also has 1 TB to keep those Electron apps happy.

      • rpastuszak 5 years ago

        Funny, never heard this joke on HN before.

    • ubercow13 5 years ago

      make -j24

      • vetinari 5 years ago

        Be sure to have enough RAM, so you will not run out of it.

        (For some reason, when building projects like LLVM with -j 16, 32 GB without swap may not be enough. With -j 2 it is enough, but it takes an eternity.)

        • nikic 5 years ago

          If you build LLVM often, use the shared library build (BUILD_SHARED_LIBS=true). Most of the memory usage (and a large part of the time in incremental builds) comes from linking final artifacts if you do not use shared libraries.
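
          Roughly, for anyone who hasn't tried it (the generator and paths here are just an example setup):

              # configure an LLVM build with shared libraries instead of static archives
              cmake -G Ninja ../llvm -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON
              ninja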

          • vetinari 5 years ago

            Thanks for the tip. Unfortunately, when I'm building it (occasionally), LLVM itself is part of another project (AMDVLK) and changes too.

        • 0815test 5 years ago

          Indeed, if you're bottlenecked on RAM, memory bandwidth itself will also be an issue (it seems to be the foremost bottleneck in modern compute, outside of single-threaded workloads. Part of why C-like languages are again becoming popular these days - they economize on memory-bandwidth per core). Might want to skip Ryzen 9 altogether and wait until the Threadripper parts are announced.

        • tlamponi 5 years ago

          Yes, building Ceph with less than 24 GB of RAM on a 16-core machine will run into out-of-memory situations here, so the OOM killer is summoned and kills the build.

        • jjuhl 5 years ago

          Make sure to install and use ccache and give it a big cache size on a fast ssd (I use 96GiB). It speeds up rebuilds (after the cache is populated) quite significantly.
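
          A rough sketch of the setup (cache size matching the above; the install step and cache location depend on your distro and disk layout):

              ccache --max-size=96G                       # cap the cache size (keep CCACHE_DIR on a fast SSD)
              export CC="ccache gcc" CXX="ccache g++"     # route compiles through ccache
              ccache -s                                   # check hit/miss stats after a rebuild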

        • gigatexal 5 years ago

          The initial plan is for a 32GB (2x 16GB sticks) with plans to move to 64GB when possible.

      • avar 5 years ago

        Having 24 threads doesn't mean that -j24 is the sweet spot. I have a 56-thread Xeon machine for building git.git, and I find that with its sockets/threads-per-core/cores-per-socket of 2/2/14, the sweet spot is closer to sockets*cores-per-socket = -j28.

        Things speed up rather linearly up to -j28, but once I get past -j28 (say -j32) it levels off, and -j56 starts being counterproductive.

        Same thing with the 160 thread POWER8 machine I have access to. That one runs 8 threads per core, and CPU-limited -jN tops out at around -j20.

        All of this is very workflow- and CPU-specific, but generally speaking don't blindly trust what things like "htop" show you as the number of available CPUs; under the hood many of them aren't "real".
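
        A crude way to find your own sweet spot (GNU time; the -j values are just examples, adjust to your machine and tree):

            # time a full build at several -j values (GNU make + GNU time)
            for j in 8 16 24 28 32; do
              make clean >/dev/null
              printf -- "-j%s: " "$j"
              /usr/bin/time -f "%e s" make -j"$j" >/dev/null
            done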

        • gigatexal 5 years ago

          And on architectures like EPYC you run into NUMA issues and having to pin things to certain cores, right?

          • theevilsharpie 5 years ago

            The OS will attempt to schedule tasks as close to their memory as possible. Pinning tasks to specific cores may be needed in certain workloads, but for loosely-coupled parallel tasks like compiling code, you'll do fine letting the OS do its thing.

          • vbezhenar 5 years ago

            make launches separate processes, so NUMA should not be an issue, there's no interprocess communication.

            • avar 5 years ago

              Generally NUMA doesn't matter for "make" workloads so this is all academic, but that being said, no, this isn't how modern CPUs work.

              When you start two unrelated processes one after the other (such as multiple compile steps) that operate on some of the same in-memory assets (files, things being sent over a pipe etc.) they're not just going to be in the main RAM, but also L1-3 cache, and the RAM itself may be segmented under the hood (even if it's presented to you as one logical address space).

              Thus you can benefit from pinning certain groups of tasks to a given CPU/memory space if you know the caches can be re-used without the OS having to transfer the memory to another CPU's purview, or re-populate the relevant caches from RAM.
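
              When pinning does pay off, numactl is the usual tool, e.g. keeping a build's processes and their memory on one node (the node number and -j value are just illustrative):

                  # run the whole build, and its memory allocations, on NUMA node 0
                  numactl --cpunodebind=0 --membind=0 make -j14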

      • chrisseaton 5 years ago

        Do you have the memory and IO bandwidth to back that up?

    • api 5 years ago

      I regularly max out 8 as a developer and could certainly make use of 16 or 24, and I am probably toward the moderate end of developer needs.

      Examples: multiple VMs, big editors/IDEs, local databases, local k8s clusters, local network simulators, and don't even start with AI or big analytics stuff.

    • bayindirh 5 years ago

      Develop multi-threaded scientific applications?

  • dmix 5 years ago

    Do you work on a desktop machine? At home?

    • sharpneli 5 years ago

      Whenever I'm having a remote day yes. Some projects can take literally hours to fully recompile. The more cores, the better.

      Will also be getting a Ryzen if the singlecore benchmarks show it's reasonable. Bunch of games I play at home tend to absolutely trash single core perf.

      • m0zg 5 years ago

        Why do you "fully recompile"? Use Google Bazel and never fully recompile anything.

        • sharpneli 5 years ago

          Because of debugging and making changes. Trying to get some centralized server to produce just the right combo of Windows SDK and MSVC (just an example) would take even more time.

          And then there is the hassle of setting it up for all the possible projects one might work on.

          EDIT:

          Just some examples: instead of developing an Electron application, think of making changes to Chromium. Instead of developing a Qt application, think of developing Qt itself. Etc.

          • m0zg 5 years ago

            Even so, with a proper build system you shouldn't need to recompile. That said, I don't think Windows has what I'd call a proper build system.

            • sharpneli 5 years ago

              One needs to recompile with a brand new checkout.

              Most of the time build systems work, including the one in Visual Studio. But I have never encountered a system that always works flawlessly. From time to time you get things like changes not being detected, or something else going haywire, and you have to do a make clean and just redo the whole thing.

              Another thing: when you're doing profile-guided builds, you have to do a full rebuild after each profiling run.

              • m0zg 5 years ago

                Not with Bazel (on Linux or Mac at least). It builds everything incrementally. If inputs that go into the build node did not change, they won't be rebuilt. You almost _never_ fully recompile, and with a cache incremental builds almost never take more than a couple of minutes. Google builds everything it runs from source, including things like compilers, runtimes, stdlibs, etc. Imagine if every engineer had to rebuild everything from the kernel up from source on every check-out. That wouldn't work.
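
                  As a rough illustration of the caching side (the target label here is made up; --disk_cache is the flag for a persistent local cache):

                      # incremental build with a persistent on-disk action cache (label is hypothetical)
                      bazel build //server:app --disk_cache="$HOME/.cache/bazel-disk"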

    • gambiting 5 years ago

      Uhm, yes? What else would you do work on at home? On a laptop, with its shitty position for the head, back and the hands? I do have a laptop but that's purely for entertainment - my work is done 100% on a desktop, both at home and in the office.

      • dmix 5 years ago

        Pretty much everyone I know uses laptops at home which is why I was curious. I've always wanted to try out a home PC build for work at home w/ linux. But it's just easier to keep using my work Macbook Pro for everything.

        Also I always connect it to a large monitor + mechanical keyboard both at work and at home for any serious work... so not sure why you mentioned the neck/hand position.

      • paranoidrobot 5 years ago

        Laptops with Thunderbolt 3/USB-C are pretty great for use with a docking station.

        I get the advantage of having nice large monitors, proper clicky keyboards and mice, and the ability to charge/power the laptop - all over one cable. When I want to move away from the desk - unjack and keep going.

    • shmerl 5 years ago

      Why not? Lots of FOSS development happens this way.

      • dmix 5 years ago

        I didn't say anything bad about it. Funny how people project.

    • gigatexal 5 years ago

      I do a lot of work related experiments at home for my own learning and to help what I do for my employer.

    • dewyatt 5 years ago

      I work pretty much 100% on my desktop at home every day. Only reason I'm on a laptop right now is because of a power outage.

burtonator 5 years ago

Man, loving this. Going to get the Ryzen 9, I think. My current machine is an 8-core i7 at 3.6GHz with boost to 4.0.

Having 12 cores without Intel's hyperthreading issues, and boost to 4.6, is going to rock.

  • tracker1 5 years ago

    i7-4790K myself; going to hold out a couple of months longer to see if they get the 16-core R9, or a new Threadripper, out.

lettergram 5 years ago

Finally! I’ve been using a Ryzen 1800x since it’s release. Unfortunately, it has some stability issues and I’ve been waiting to upgrade on the Ryzen 3000, 7nm line.

This is going to be a solid 75%+ boost to performance, given I regularly max out my machines threads. Pretty amazing improvement in 2 years.

  • snvzz 5 years ago

    Bad ram that doesn't perform to spec is the cause most of the time.

    Ryzen runs them tight to spec, whereas Intel is more relaxed. Here tight is better, but it also means exposing lies in manufacturers' RAM specs.

    Cheaping out on RAM is never a good idea, but there's always the option of configuring more relaxed timings in the firmware settings if the RAM isn't up to its advertised spec.

    It's also possible your CPU is from the first iteration of the 1800X, which had the issue that famously caused segfaults when compiling software on Linux. This was only seen in the first months of the 1800X, and AMD offered free replacements to those affected. It's likely a bit late for that, but you're better off upgrading to Ryzen 3xxx anyway.

    • dralley 5 years ago

      I started getting crashes recently and so I did a memtest64 run. A couple of errors... but only in one of the tests.

      I backed the memory clocks down from 3200 (which it's supposed to be rated for) to 3000 and it passed with no errors.

      • sundvor 5 years ago

        Are you running correct voltage for 3200? Double check the spec! (Got bitten by this).

  • m0zg 5 years ago

    It could be that it'll still have stability issues. For some reason Ryzens are extremely picky when it comes to memory. There's really no guarantee that 3rd gen will resolve this issue. It's to the point where e.g. Corsair makes memory specifically designed to work with AMD CPUs. This memory typically contains Samsung B-die chips, which work fairly well.

    • coder543 5 years ago

      No, the stability issues they're talking about are on a release day 1800X. It's not memory issues, and I've never had memory issues with either of my Ryzen processors.

      This is the issue they're certainly talking about: https://www.phoronix.com/scan.php?page=news_item&px=Ryzen-Se...

      And it is assuredly not a problem with any Ryzen processors manufactured after the very first few months.

      • readittwice 5 years ago

        Well, there are some stability issues with my Ryzen 2700X. At first there were CPU freezes at idle on Linux: https://bugzilla.kernel.org/show_bug.cgi?id=196683.

        After applying workarounds, I still see some strange crashes, not sure if at least some of those are still related to the CPU hangs from the bug above. TBF this might not be the CPU's fault. This is all quite annoying to me and time intensive to investigate (where do I even start?). Even though I really like AMD's tech I am quite frustrated and I haven't had these problems with my previous Intel builds so far...

      • IronBacon 5 years ago

        There was also a problem with P-states and Ryzen CPUs crashing/stalling in idle, but only with Linux kernels.

        I never experienced it myself, but in my BIOS there's an option about "power on idle" that it's suggested not to turn off, for compatibility (I don't recall the exact wording, but I could check).

        It usually depends on MB manufacturers and BIOS/AGESA versions.

      • m0zg 5 years ago

        So I must be imagining my problems with Threadripper memory kits as of early this year, as well as my coworker's problems with his 1800X. Hmm, OK then. HN knows best.

    • kalessin 5 years ago

      I had trouble with my memory (4x16GB with a 2950X) until I realized it was configured at a 1T command rate by default. Things appear to be rock stable at 2T. Anyway, yeah, memory is trickier than it should be on Ryzen.

  • dewyatt 5 years ago

    My desktop with a Ryzen 1800X still randomly freezes. I disabled most of the power saving features in the UEFI, which definitely reduced how often it happens, but it still does on rare occasions. :(

    • chupasaurus 5 years ago

      Sounds like a problem with enabled C6 CPU state. Check out the web for a particular setting in your motherboard's BIOS.

    • clarry 5 years ago

      Mine does too, that is until I run

          sudo ZenStates-Linux/zenstates.py --c6-disable
      
      (I've got 29 days of uptime right now and the last reboot was due to a system upgrade)
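
      To make the workaround stick across reboots, one option is a root cron entry (a sketch only; the checkout path and cron's @reboot support are assumptions about your setup):

          # add to root's crontab; adjust the path to wherever ZenStates-Linux lives
          @reboot /opt/ZenStates-Linux/zenstates.py --c6-disable
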
gratilup 5 years ago

Now imagine a Threadripper with the Zen 2 cores; higher IPC and frequency would certainly be welcome. I have the 32-core 2990WX and it's an incredible CPU for compiling large C++ programs, running big test suites, and never having to worry about running too many tasks at the same time.

  • empyrical 5 years ago

    Conjecture on my part, but I wouldn't be surprised if we also see a 64 core Threadripper - there's going to be a 64 core Epyc:

    https://wccftech.com/amd-7nm-epyc-64-core-32-core-cpus-specs...

    Although they may save it for a "Zen2+" or something similar, like they did with 32 core Threadripper

    • bitL 5 years ago

      I am specifically waiting for a 64-core Threadripper. It would also be great if 32GB ECC UDIMMs became available by then, to bump RAM from 128GB up to 256GB. That computer could then last a decade.

    • chaosbutters314 5 years ago

      I'm looking forward to a dual-socket 64-core Epyc at work (128 cores).

  • danbolt 5 years ago

    A lot of AAA game studios with larger C++ codebases use Incredibuild. I can imagine having something with this level of parallelism would be incredibly useful.

    • gratilup 5 years ago

      You don't even need Incredibuild; even MSBuild or just the plain /MP option in VC++ can take advantage of it. A build of the Unreal Engine 4 client from VS takes around 2 min, for example.

      • gambiting 5 years ago

        I mean, yes, but even that is not enough once the project is big enough. I work on a huge AAA game in C++ and on my 8-core 16-threaded Xeon the whole thing compiles in 40 minutes. Incredibuild is a must to keep the compilation times even remotely acceptable.

      • danbolt 5 years ago

        Agreed, utilizing multiple processes helps a lot, although a larger number of cores across the entire network helps even more with compiling the mass of translation units. Most workplaces where I've done C++ have had extra IncrediBuild agents with Intel Xeons to help with that.

    • ahartmetz 5 years ago

      Icecream is the distributed compiler to use on Linux. Add ccache as needed.

  • loeg 5 years ago

    Hell, I have the older 16 core 1950X and it's amazing for compiling large codebases. I'd heartily recommend these things, performance for dollar is fantastic.

    • pimeys 5 years ago

      I have the 24-core 2970X and can confirm: it is very nice for Rust development.

      • jaimeyap 5 years ago

        Nice. What build times are you seeing for clean builds of the rust toolchain itself? Curious to benchmark against my 2700x. I'd imagine near linear scaling with the core count.

        I think the 3900x might be a happy middle ground. I'm guessing we would probably see (with the increased IPC, core count, and core clock) like 70-80% increases over a 2700x in these kinds of multithreaded workloads. So probably slightly more than half way to a 2970x or 2990wx?

        • loeg 5 years ago

          3900x looks fantastic on paper. In general, if the Ryzen stuff is sufficient for your needs, it's a better value. You pay a big premium for the Threadripper boards (and big case and big cooling solution). So in that sense, the 3900x is definitely in a sweet spot at the top of the Ryzen range.

          Tradeoffs: threadripper boards officially support ECC; Ryzen boards are hit or miss. TR boards tend to be priced around $300 whereas you can get a Ryzen board for $100ish. TR had (prior generations) twice the DRAM channels and way more PCIe lanes than Ryzen, so if you're doing GPU-intense work or something else with use for lots of PCIe, that's a plus. Not to mention, additional core count over Ryzen, although with greater inter-die latency. Not sure what that will look like with TR3.

          Is 3900X worth $500 at list over $400 3800X at list? Actually, yeah, it looks at least 25% better to me (esp. the doubled L3) if you can use the cores. The 3800X is overpriced; they probably are learning from the 1700<->1800 dynamic in gen1. Is it worth it over the 3700X at $330? Maybe not.

          For me the question is really, how long will Ryzen 3000 be on the market before those better IPC/clocks/core densities show up in TR3? PCIe 4.0 support is huge; AMD wasn't anemic on PCIe channels on Zen and Zen+, and PCIe 4.0 doubles bandwidth from 3.0. Hopefully those IPC gains do not come attached to Spectre/Meltdown-like vulnerabilities. I'm excited for Zen 3 TR! That might be worth an upgrade from the 1950X. Meanwhile, it doesn't seem like Intel will get to PCIe 4 until 2020 (although that's reasonably soon).

          • jaimeyap 5 years ago

            3800X is "gamer priced" :).

            I think the 3900x is in a great position to provide the best of both gaming and productivity. Extremely aggressively priced at $500 for the horsepower it seems to give you.

            I suspect there is going to be a 16 core 3950x later in the year. Maybe with slightly lower single core frequencies. But maybe 20-25% greater multicore performance.

            I bet they are delaying that to keep something up their sleeves when Intel responds. And to not totally cannibalize TR prior to releasing TR3.

          • tracker1 5 years ago

            The X570 boards are going to be around $100-200 more expensive, though. The PCB is a bit different and the specifications are tighter for PCIe 4. I think the cheapest board you'll see soon will be above $150 at the very low end, going all the way up to $600 or so. Many boards from the prior two gens will have issues running the newer CPUs, and the board vendors are recommending against Zen 2 on chipsets prior to the 570.

          • drudru11 5 years ago

            Yeah - I wish I didn’t have to go threadripper to get ECC. I don’t need that much power.

jaytaylor 5 years ago

It's not clear to me from TFA:

Do any existing AM4 mobos / chipsets have support for full PCIe 4.0 bandwidth (64Gbps)?

Or will the existing mobos be limited to PCIe 3.0 (~5-6Gbps)?

  All of the five processors will
  be PCIe 4.0 enabled, and while
  they are being accompanied by
  the new X570 chipset launch,
  they still use the same AM4
  socket, meaning some AMD 300
  and 400- series motherboards can
  still be used.
I was just reading about PCIe 4.0 and 5.0 yesterday [0], and some quick research indicates only a week ago it was announced some current AMD boards do support PCIe 4.0 [1].

It would be awesome, because transfer rates when moving terabytes across SSD RAID arrays would see a 3-10x increase, from ~500-600MB/s to ~1.5-6GB/s+. Fantastic!

[0] https://videocardz.com/review/pci-express-riser-extender-tes...

[1] https://www.pcgamesn.com/amd/400-series-pci-4-0-bandwidth-bi...
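
For reference, the rough per-lane math (standard PCIe figures, not from the article):

  PCIe 3.0: 8 GT/s with 128b/130b encoding  -> ~0.985 GB/s per lane, so an x4 NVMe link is ~3.9 GB/s (~32 Gbps)
  PCIe 4.0: 16 GT/s with 128b/130b encoding -> ~1.97 GB/s per lane, so an x4 NVMe link is ~7.9 GB/s (~63 Gbps)

So the 64Gbps figure corresponds to a 4.0 x4 link; a full x16 slot is four times that.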

  • wtallis 5 years ago

    Existing 300 and 400 series boards may be able to operate at PCIe 4 speed for the CPU-provided lanes (as opposed to the ones routed through the chipset that you can't upgrade), however signal integrity issues may limit this to just the slot closest to the CPU. So far, I haven't heard about any particular boards that have been validated for a specific number of slots working at gen4 speeds. Whatever you're using for an SSD RAID array will probably get in the way of using gen4 speeds, since you likely won't be able to get gen4 speeds over any cables or risers without redriver chips.

p1mrx 5 years ago

It will be interesting to see whether they can match Intel in single-threaded performance across the board, and not just some carefully-selected benchmarks. This would be the first time since the Core2/Athlon64 days.

  • hajile 5 years ago

    They already came close in a ton of benchmarks and were faster in others. The biggest issue was AVX. They could run AVX2, but at (effectively) half the throughput due to their implementation being 128 bits wide instead of 256 bits like Intel's (which downclocked, but was still faster for non-mixed loads).

    AMD now has 256-bit AVX2 units, but unlike Intel, they don't need to downclock due to 7nm TSMC's lower power requirements compared to Intel's 14nm process. This should also affect 128-bit AVX instructions. It should be possible to reorder and push 2 through the pipeline at the same time in a lot of circumstances.

    • Dylan16807 5 years ago

      > It should be possible to reorder and push 2 through the pipeline at the same time in a lot of circumstances.

      To be clear, the existing chips had two 128-bit vector units, and the new ones have two 256-bit vector units. So that would get you 4 total.

      Also each unit, at least on existing chips, is capable of either a single FMA or a completely independent multiply and add at the same time. I don't think Intel chips can do this?

  • sadris 5 years ago

    I'm begging for a good single threaded CPU that isn't $600

    • snvzz 5 years ago

      AIUI the 3900x delivers at just $500.

      I am hopeful for the actual third party benchmarks.

    • ianai 5 years ago

      Read that often here tonight. Is there a killer app for single core performance other than UI/UX?

      • Baeocystin 5 years ago

        Literally everything other than the rare, very specific workloads that are amenable to parallel processing! (This isn't hyperbole.)

        • 0815test 5 years ago

          There's nothing "rare" and "very specific" about parallel processing, what's "rare, very specific" is the amount of software that's been rewritten/redesigned to take advantage of it so far. Sure, there are inherently-serial workloads, but most of what we use our machines for isn't like that. Parallel processing is not just about performance either but also general stability, a many-core processor can run a lot cooler and be a lot less fiddly than a high-end CPU core that packs the same amount of compute performance in a single CPU thread!

          • Baeocystin 5 years ago

            >There's nothing "rare" and "very specific" about parallel processing, what's "rare, very specific" is the amount of software that's been rewritten/redesigned to take advantage of it so far.

            People have been banging their heads against the 'rewrite this software to take advantage of multiple cores' wall for decades. The lack of progress is telling. For a straightforward example, look at the second half of this Factorio update blog: https://www.factorio.com/blog/post/fff-215

            Factorio is a sim game that you would think on first consideration would hugely benefit from a multithreaded design. It turns out that doing so is actually slower(!). And although this example is pulled from a game, it is essentially the same story again and again, no matter what the subject.

            Now consider the second half of your statement- that the main benefit of multi-core processing is that it provides more CPUs, so that if any one gets choked, the general environment continues to operate.

            (Which is true, and a great advantage of having a multi-core CPU.)

          But consider a little deeper, too. If the first, best defense we have regarding multi-core designs is that they are simply more single cores to have on hand, what does that say about the relative value of parallel processing vs. single-thread performance? Inherently serial workloads dominate across the board, in every field. The few parallel problems we have, we have because people have put a lot of brain sweat into figuring out what, exactly, we can even do with all these cores lying around.

            Meanwhile, there are entire classes of problems that are simply waiting for better single-thread performance before we can move ahead.

            This is a very real problem, and it isn't going away.

      • ehnto 5 years ago

      There aren't that many benefits to super high core counts in your average enthusiast's use cases. Even games don't benefit from higher core counts as much as you would expect.

        The people who would benefit from more cores have the server specific lines of CPUs to choose from, so that makes consumer grade CPUs a compromise between core count and single core performance.

        • saltminer 5 years ago

          > Even games don't benefit from higher core count as much as you would expect.

          And some games (like some Source-engine titles) crash if you have a high core count.

          I certainly benefit from the higher core count because I usually have a VM, 40 browser tabs, Slack, and a bunch of other stuff open at any given time, but my parents would see no benefit with their 5 tabs + iTunes + Word usage.

      • PureParadigm 5 years ago

        In all seriousness, I've been wanting better single threaded performance for running a Minecraft server.

        • ehnto 5 years ago

          What kind of player count are you getting?

        I have been running 4 players with one of the most taxing modpacks on a mid-tier DigitalOcean VPS with no hitches. Not many players, I guess, but in case you were curious whether you could use a VPS. Even when we had multiple excavators sending thousands of entities through sorting pipelines it was still doing surprisingly well.

          • PureParadigm 5 years ago

            Thanks for the suggestion. I'll look into whether a VPS is viable, but we often have poor internet at our LAN parties and get the best experience when the server is local.

            It's a vanilla server and I also get around 4 players. The real problem occurs when people are generating new terrain while flying on an elytra, sometimes causing the server to crash altogether. When not exploring, it will frequently report "Can't keep up!" messages even when hanging around spawn, which I think might be due to the truly insane amount of hoppers we have (although haven't seen this as much in the recent update).

            If you're curious, the CPU is a i5-3570K @ 3.40GHz. The game is certainly playable, but it struggles under load like I described.

          • imtringued 5 years ago

            A max size reactor in Minecraft consists of 50000 tile entities, and it only produces a few million RF/t, enough to power a handful of max tier void miners. Thousands isn't exactly impressive.

            • ehnto 5 years ago

              Well, we had a max size reactor powering our excavators, so I guess we had those entities too; I didn't realise it took so many to run. Not to mention the hundreds of other pieces in our world's automation puzzle, all spread out quite far apart, with chunk loaders maintaining the network's presence. Things were routed, crushed, smelted, crafted and eventually stored or utilized, all automatically.

        • doublepg23 5 years ago

          Intel just announced the i9-9900KS, which runs all 8 cores at 5GHz. That should be the top single-core perf.

      • arianvanp 5 years ago

        Most flight simulators, and physics-heavy games in general, gain a lot from single-threaded performance

      • intricatedetail 5 years ago

        Yes, music production. If you want to run plugins in realtime, you need a fast single core for long plugin chains.

      • fooey 5 years ago

        Games are still bottlenecked by single-thread performance

        • reiichiroh 5 years ago

          The last two Assassin's Creed engines benefit from more than 4 threads

  • mfatica 5 years ago

    Single threaded performance is irrelevant at this point

    • jwr 5 years ago

      On the contrary, it is absolutely crucial for some of us (yes, even for developers). I work with an interactively compiled language (Clojure and ClojureScript) and while I rarely do full "builds", I care a lot about recompiling single files and reloading them in the browser. That is not a multi-threaded job and requires single-threaded performance.

      This is why when looking at iMacs, I'd rather get the iMac than the iMac Pro. Multiple cores just aren't as important to me as is single-core performance.

      • pdimitar 5 years ago

        The iMac Pro's Xeon CPUs have pretty good turbo boost frequencies. If your machine isn't pegged on all cores when you want to do the task specified in your post then you should have zero problems.

DiabloD3 5 years ago

I watched the keynote live.

I'm sold, my desktop is most likely going to be a Ryzen (although not the 8/16 monster, come on, it's a desktop, if I need high core count, I have stuff at work for that).

  • pedrocr 5 years ago

    The "monster" is the 12/24 and that's still only 105W. For a real monster you'd need a Threadripper. The cheaper 8/16 at 65W/329$ looks like a great choice. But I guess 6/12 for 130$ less is also nice.

  • shmerl 5 years ago

    I don't mind 12 cores / 24 threads on my desktop. Makes compiling Mesa, Wine and Linux kernel a lot faster.

    • eropple 5 years ago

      I do a bunch of live video stuff (which, for the uninitiated, means "hi, I'm ffmpeg, I want all your processors") and I learned, the hard way, that I couldn't put a 16C/32T Threadripper in the box--it's a 4U case in a Gator transport case--because 180W would blow out my heat budget. 12C/24T at 105W should be doable. I'm excited; right now I'm constrained because stuff like my CG system is built on Chromium and so it gets a little chewy when I run multiple independent overlays. Doubling my cores--I have a Ryzen 1600 in there right now--and probably increasing single-thread performance by 40% should make my life pretty interesting.

      • shmerl 5 years ago

        Yeah, video encoding will surely benefit from it as well.

        • eropple 5 years ago

          Back of the matchbook, I should be able to encode 1080p60 at something like medium presets.

          "Medium" is really good.

    • eugene3306 5 years ago

      Cool, but we haven't seen compiling benchmarks yet. 12 cores might be bottlenecked by dual-channel memory.

      • opencl 5 years ago

        Compilation workloads are not really bandwidth sensitive at all. The existing quad-channel 32 core 2990WX compiles the Linux kernel faster than the 8 channel 32 core EPYC 7601 thanks to its higher clock speeds in spite of having half the memory channels. So I think it is safe to say that memory bandwidth is a total non-issue for this type of workload.

        Benchmarks here:

        https://www.phoronix.com/scan.php?page=article&item=amd-linu...

        https://www.phoronix.com/scan.php?page=article&item=amd-epyc...

      • theevilsharpie 5 years ago

        The AMD Threadripper 2990WX has 32 cores with quad-channel memory, and it seems to do fine.

      • shmerl 5 years ago

        Would be good to see some benchmarks.

        • agrover 5 years ago

          agree. Hopefully the seventy megabytes of cache between the two chiplets will help...

loser777 5 years ago

Biggest surprise seems to be the 65W 8/16 3700X. Hopefully that means a bit of overclocking headroom.

  • DuskStar 5 years ago

    Considering that the 105W 8/16 3800X only gains 300MHz base and 100MHz turbo for the extra 40W, I'm not sure that's the case. Still almost certainly what I'll be replacing my 4790k with though

    • tracker1 5 years ago

      Going to wait a couple more months to see if 16c 39xx shows up, or a new Threadripper... also on a 4790k.

    • SketchySeaBeast 5 years ago

      Yeah, I think this might finally be the reason to replace my old Haswell.

  • XMPPwocky 5 years ago

    This might just be a case of TDPs being essentially arbitrary. The 9900K is allegedly a 95W part, but even a moderate all-core OC means power draws in the 200W-and-up range. AMD might have just jumped on the same bandwagon.
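
    For a rough sense of why both numbers can be true at once: dynamic power scales roughly with frequency times voltage squared. A crude back-of-envelope (the voltages below are illustrative guesses, not Intel's actual figures) gets from a 95W base-clock rating to well over 200W at an all-core overclock:

        // Back-of-envelope only; assumes P ~ f * V^2 and made-up voltages.
        #include <iostream>

        int main() {
            const double base_power = 95.0; // W, TDP rated at base clock
            const double base_freq  = 3.6;  // GHz (9900K base)
            const double base_volt  = 1.00; // V, assumed
            const double oc_freq    = 5.0;  // GHz, all-core overclock
            const double oc_volt    = 1.35; // V, assumed

            const double scale = (oc_freq / base_freq) * (oc_volt / base_volt) * (oc_volt / base_volt);
            std::cout << "estimated OC power: " << base_power * scale << " W\n"; // ~240 W
        }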

    • IanCutress 5 years ago

      AMD's TDPs are calculated at all-core turbo. Intel's are only valid for base frequency.

      • XMPPwocky 5 years ago

        Ah, so AMD doesn't have cheesy numbers- good to know.

        • effie 5 years ago

          Not entirely true; some AMD processors can also draw more power than the TDP value would suggest, due to XFR.

          • nolok 5 years ago

            Agreed, but the effect is much more limited. If you compare the manufacturers' TDP figures against each other, real-world results can only shift further in AMD's favor.

            That's not surprising given the difference in manufacturing process alone, and since Intel's next desktop generation is still 14nm, it won't be changing soon.

hsivonen 5 years ago

What's the outlook for AMD to provide the kind of performance counters and performance counter accuracy that is needed for rr?

  • mkl 5 years ago

    For those curious like me: https://rr-project.org/

    It's a debugger from Mozilla that works by recording and replaying program executions.

fencepost 5 years ago

My question is how their architecture is doing with regard to the assorted baked-in speculative execution issues, and what kind of impact there is on AMD processors compared to comparable Intel CPUs.

  • nolok 5 years ago

    Those affect Intel much, much more than AMD, and once you include the fixes the comparison only gets better for AMD. Oversimplifying a lot: when doing branch speculation ("is it branch A or B? I'll guess A and precompute that"), Intel didn't bother with any access checks during the prediction and only ran them afterwards, once it was confirmed the branch really was A (so it already had the result and merely did the ACL check), whereas AMD was already doing all the checks even during prediction.

    That's why there is an entire range of such weaknesses that doesn't affect AMD while hurting Intel very strongly.
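
    For anyone who hasn't seen it, the widely published Spectre variant 1 example shows what "compute first, check later" looks like in code. This is the standard illustration from the public research, not AMD or Intel internals, and it isn't an exploit by itself (there's no cache-timing probe here); Meltdown/MDS involve privilege checks rather than a bounds check, but the "speculate past the check" idea is the same:

        #include <cstddef>
        #include <cstdint>

        std::uint8_t array1[16];
        std::uint8_t array2[256 * 4096];
        std::size_t array1_size = 16;

        std::uint8_t victim(std::size_t x) {
            if (x < array1_size) {                 // the check
                // If the predictor guesses "in bounds" for an out-of-bounds x, the CPU
                // may speculatively load array1[x] and use it to touch a line in array2
                // before the mispredict is noticed. The architectural result is thrown
                // away, but the cache footprint survives and can be measured.
                return array2[array1[x] * 4096];
            }
            return 0;
        }

        int main() { return victim(3); }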

  • xvector 5 years ago

    Frightening lack of press around this. Basically all of Intel's and AMD's lineups are centered around high thread count right now. Completely useless if I can't reasonably enable SMT.

    • makomk 5 years ago

      AMD's chips apparently don't have the security vulnerabilities that make SMT unsafe to enable on Intel chips - they released a white paper explaining why MDS etc aren't possible and what exactly the boundaries are on their chips: https://www.amd.com/system/files/documents/security-whitepap... It just didn't get a huge amount of press coverage.

    • pja 5 years ago

    So far, AMD has only been exposed to the Spectre (branch prediction) information leaks, haven't they? They aren't vulnerable to Meltdown or any of the recent MDS information leaks because they don't speculatively execute if the thread doesn't have access rights. Intel chose to speculate regardless and fix up the register contents on instruction retirement, which is why they have had so many problems.

jdsully 5 years ago

I didn’t see any word on whether the vector units are still half width. That’s still a major performance advantage for Intel.

Epopeehief54 5 years ago

Takes guts to stick with that core count, and at least you get to enjoy the full 70MB of cache. Good thing Blender and Cinebench both fit inside that; not sure you can ever say the same for productivity workloads.

I guess AM4 also means no real improvements on the PCIe lane count. Would love to see real PCIe and IF switches to give a bit of flexibility, and to hear what they plan for a new Threadripper.

  • muxr 5 years ago

    > I guess AM4 also means no real improvements on the PCIe lane count

    The new Ryzen 3000 CPUs support PCIe Gen4, so while the number of lanes will remain the same, their bandwidth could be doubled. The just-announced Navi GPUs also support Gen4.

  • velox_io 5 years ago

    I am quite surprised that they're launching a consumer CPU with 70MB of cache! When most CPUs have ~8MB, I don't think there's ever been such a huge jump in CPU cache.

    Those dual CCXs mean the 16 core Ryzen could be ready for release (that's what I have my eye on!). It's funny: going from 12 to 16 cores is basically adding the i7 6700K in my desktop.

    Plus, it's great that Intel is being aggressive by releasing the 9900KS (which is a pretty good CPU for gamers). It's been a while since we've seen any real competition between AMD & Intel.

sandworm101 5 years ago

Looks like that $499 will include a cooler too. I'd probably not use it (liquid cooling is better imho) but an included cooler is a perk.

  • sq_ 5 years ago

    I absolutely love that AMD includes solid coolers with all of the Ryzen chips. I run my R5 1600 with the stock cooler, and, though it runs hotter than it would on liquid or a top tier aftermarket cooler, I've still never felt the need to worry about temps. Plus it's relatively quiet.

    I think the cooler is a forgotten value add over Intel chips, since any chip you buy from them, whether it comes with a stock cooler or not, is going to require an aftermarket cooler in order to get decent temps/noise levels.

    • noir_lord 5 years ago

      Same on my 2700X; it's got a nice industrial design for a stock cooler (if that is your thing) as well.

  • shmerl 5 years ago

    I wish they had a slightly cheaper option without a cooler. I already have a good air cooler from Noctua.

    • beenBoutIT 5 years ago

      Ditto. I end up not buying their X chips because I don't want to pay for another high end cooler that I'm not going to use.

      • chupasaurus 5 years ago

        In my country stores are selling OEM Ryzen CPUs for the same price as BOX versions.

  • radicsge 5 years ago

    It is amazingly cheap (compared to Intel's $1.1K); I wonder what the reason behind it could be.

    • sandworm101 5 years ago

      They probably have something secret to counter intel in the 1k range. Probably 128 cores spread across a square-foot of silicon. Enough output that you can boil your coffee while gaming.

      • ianai 5 years ago

        Or bake a pizza, some cookies. Maybe they’ll offer a cast iron skillet version at some point? Play crysis AND get a good sear on dinner.

      • sq_ 5 years ago

        Isn't that what AMD GPUs are for? They'd be eating their own market share for PCs that double as frying pans/ovens!

        • snvzz 5 years ago

          If you're basing that comment on TDP, think again.

          NVIDIA redefined TDP to their convenience, to mean something more like "averages". So their numbers can't be compared directly.

          Always look at third party measurements. I'm looking forward to Navi's, while on the topic, as they've announced large improvements in power efficiency.

          • sq_ 5 years ago

            In response to what you said about Navi, I'm also excited about that. I'd really love to see some really strong competition to Nvidia's offerings that might be able to give them a good fight.

            Their dominance and tendency to push for proprietary features seems quite bad for the industry as a whole.

          • sq_ 5 years ago

            Yeah, I know. AMD GPUs also run a lot cooler as of the last time I checked. They've made a lot of progress, although the jokes from the R9 2xx-3xx era of a few years ago are still tempting.

      • snvzz 5 years ago

        If your use case is to fry some meat, I believe Intel still holds the crown.

        I'm expecting some new, crazy high binned, extremely expensive Intel CPU with sky-high TDP to be announced soon.

      • Dylan16807 5 years ago

        You can do a reasonable drip coffee off 300 watts, so that future is already here!

    • DuskStar 5 years ago

      I imagine their manufacturing costs are far lower than Intel's equivalent chip - the 3900X is essentially one and a half of a 3600X (since it has two CPU dies, and one interconnect), while Intel is still in the monolith business.

      • klingonopera 5 years ago

        A modular approach! Kudos to AMD for making it work nicely (anyone remember the first Pentium Ds, which were just two Pentium 4s cobbled together to make a dual-core chip?).

        Apart from that, I think you meant two-and-a-half, not one?

        • DuskStar 5 years ago

          Nope, one-and-a-half! Two of the CPU dies, one interconnect die - that's going to be somewhere between one and two!

          And their interconnect tech seems like it's going to be of huge importance in the server space - I can only imagine the yields on 8x4 core modules will be far higher than on a monolithic 32 core chip.

          • wtallis 5 years ago

            > I can only imagine the yields on 8x4 core modules will be far higher than on a monolithic 32 core chip.

            The next EPYC's going to be up to 8 chiplets x 8 cores, competing against Intel's current 28-core monolithic die (or their dual-die non-socketed 56-core). How many cores remain enabled on a hypothetical next-generation Threadripper is an open question, but they would probably go beyond 32 cores total. And a 32-core Threadripper would probably not have 8 chiplets but rather four fully-enabled active chiplets and four mechanical spacers.
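
            A toy defect model makes the yield argument concrete. Using the common Poisson approximation (yield ≈ e^(-defect density × area)) with assumed numbers (the defect density and die areas below are illustrative, not TSMC's or Intel's actual figures):

                #include <cmath>
                #include <iostream>

                int main() {
                    const double defects_per_cm2 = 0.2;  // assumed defect density
                    const double chiplet_cm2     = 0.75; // ~75 mm^2 8-core chiplet (approx.)
                    const double monolith_cm2    = 7.0;  // ~700 mm^2 large monolithic die (approx.)

                    const double chiplet_yield  = std::exp(-defects_per_cm2 * chiplet_cm2);
                    const double monolith_yield = std::exp(-defects_per_cm2 * monolith_cm2);

                    std::cout << "per-chiplet yield: " << chiplet_yield * 100 << " %\n";  // ~86%
                    std::cout << "monolithic yield:  " << monolith_yield * 100 << " %\n"; // ~25%
                    // And a chiplet with one bad core can still ship as a 6-core part,
                    // while a huge monolithic die with a defect is a much bigger loss.
                }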

    • klingonopera 5 years ago

      Intel is just woefully overpriced?

  • DuskStar 5 years ago

    I'd say it's more relevant for something like the 3700X - air cooling is very reasonable for a 65W TDP chip.

    • ehnto 5 years ago

      You can get practically silent air coolers for ~$40 now too. The only reason I would do liquid cooling now is for the fun of putting one together.

      • sq_ 5 years ago

        Isn't AIO liquid cooling still a good solution for decent overclocks without emptying your wallet for a custom loop?

        To my knowledge, those $40 silent air coolers perform well but still can't match a big liquid radiator.

        • Someone1234 5 years ago

          > To my knowledge, those $40 silent air coolers perform well but still can't match a big liquid radiator.

          The best air coolers can match AIOs. Specifically Noctua's air coolers can go toe to toe with most commercial AIOs, with one fewer point of failure (pump). At least according to Gamer's Nexus and similar sites.

          The biggest arguments for AIOs are:

          - Appearance/aesthetics

          - Space around the CPU block (air coolers in this league are HUGE)

          - Improved small form factor build flexibility

          In terms of raw performance, only custom loops really challenge high end air cooling.

          With air cooling you typically save some money (even a high end Noctua is often 40% or more cheaper than a branded AIO). No leak issues. Fewer points of failure.

        • sbov 5 years ago

          I can't find the link, but a youtuber tested various AIOs and air coolers and a Noctua air cooler did the best job cooling. Unless you like AIOs for some other reason (e.g. some people don't like all the space air coolers take), if you aren't running a custom loop you're probably wasting your money on water cooling.

          • klingonopera 5 years ago

            Add to that the extra hassle and maintenance you have to invest in water-cooling (plus the danger of leaks!), air cooling really starts to shine.

          • sq_ 5 years ago

            Interesting. Do you have any idea as to which Noctua cooler it was? Their highest end ones run really close in price to the lower end AIOs.

            • sbov 5 years ago

              NH-U12A. And yeah, it's fairly expensive for an air cooler.

              • sq_ 5 years ago

                Thanks! Been a couple years since I built a new PC, and it's helpful to have that info for anything I might build soonish.

                • Klathmon 5 years ago

                  If you don't know about it, pcpartpicker.com is a fantastic resource for building a new system.

                  It makes researching and comparing a bunch of brands and parts as well as waiting for the best times to buy a breeze.

                  • sq_ 5 years ago

                    Yeah, I know about PCPP, used it to research/design my last build. The reason I was asking sbov questions is that I've kinda fallen off the wagon in terms of keeping up with hardware news since I did my last build.

                    Thanks for the recommendation, though; it really is an awesome resource.

IlegCowcat 5 years ago

My guess: standing by for a last 2019 or 2020 refresh. AMD simply doesn't have to play their full hand right now to be competitive. Going to 16 core on AM4 looks to be trivial on paper since they're already doing 12 core: just a matter of clock speeds and core voltage to make it happen inside of AM4 parameters.

  • tracker1 5 years ago

    My guess is their chiplets may have defective cores that are being disabled, and they aren't getting high enough yields on 8-core chiplets to release a 16c pair CPU yet.

    I doubt they'd hold back on offering a significant bump over Intel's current offering with a 16c/32t mainstream cpu.

psnosignaluk 5 years ago

I’ve really been looking forward to these chips. The 2700X was pretty close to Intel in 1440p benchmarks, so I’m looking forward to what the likes of Gamers Nexus have to say about overall performance, specifically in line with performance elements like frame render time. Not that it really matters to me. I’ll have one of the 8 core chips on a micro-ATX X570 and build up a system around that. I’ve purposely held off building a new desktop because of Ryzen 3000. In a Ghost S1 or NCase M1, it should be the ideal CPU for a powerful SFF build.

PaulBGD_ 5 years ago

I'll probably still get it since I've been waiting for it to release, but I'm a bit disappointed they couldn't get 5GHz. I'm curious what overclocking could be done here.

  • dralley 5 years ago

    Despite being behind on frequency, it seems they've totally caught up with (and maybe surpassed, not enough info to know yet) Intel in IPC, and even the 3700X will have more than double the cache of the 9900K.

    So we'll have to see what the benchmarks look like but they might not be as far behind as the frequency gap would suggest.

    • PaulBGD_ 5 years ago

      That's true, if more software starts taking advantage of multiple threads it'd probably be better than most of Intel's offerings.

      • mort96 5 years ago

        What software which requires a lot of processing power currently doesn't take advantage of multiple threads? Games are increasingly parallel, browsers do most non-JS things in parallel (and obviously run each tab in parallel), compiling code and video rendering eats CPU cores like crazy.

        • gameswithgo 5 years ago

          Yeah, a lot of people will assert that games don't benefit from more threads, but if you fire up Fortnite or Battlegrounds and look at your CPU graphs, you will see at least 8 of them working pretty hard.

  • snvzz 5 years ago

    Since they're beating Intel at a lower clock, that just means more work done per clock, which is better.

    And there's of course potential for higher clocks as yields improve with 7nm process maturity.

    In fact, higher clocks might even be possible today, but AMD chose to release a relatively affordable lineup, which is the opposite of Intel's expensive, "golden die"-binned chips.

    This shows AMD is relaxed, in contrast to Intel's desperation.

shmerl 5 years ago

12 cores with 105W TDP - good improvement over 8 cores with same TDP for Ryzen 7 2700X.

techntoke 5 years ago

Nothing mentioned about their "G" series.

  • snvzz 5 years ago

    They already launched a new generation of that recently, based on Zen+ with Vega. The previous one was pre-zen+.

    Their gpu+cpu chips are always based on the more mature tech.

    • Marsymars 5 years ago

      The only Zen+ based APUs are mobile/embedded-only though. Desktop APUs are all still pre-Zen+.

      • techntoke 5 years ago

        The 2200/2400G is Zen-based.

    • techntoke 5 years ago

      Are you referring to the 2400G? Rumors are they will release a 3600G this time around based on Navi.

joeleisner 5 years ago

I'm excited for these CPUs to come out; I've been yearning to get away from Intel after they've continually shot themselves in the foot, and these models' price/performance is really selling team red to me.

harry8 5 years ago

What is the interthread latency between cores on a ryzen?

Something like: https://gitlab.com/hal88/interthread_latency

I get in the ballpark of 250 cycles on an intel $whatever (it's been pretty stable for a while) for cores on the same package full round trip, (hyperthreading off).

    mean = 255, 247 (ignoring blowouts)
    max: 4898276
    min: 93
    (cycles)
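
For reference, a minimal sketch of the same kind of ping-pong measurement (not the linked tool's code): two threads bounce a flag through a shared atomic and the total time is divided by the number of round trips. Pinning the two threads to specific cores, which this sketch skips, is what lets you see same-CCX vs cross-CCX differences on Ryzen.

    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>

    int main() {
        constexpr int rounds = 1000000;
        std::atomic<int> turn{0}; // 0 = ping's turn, 1 = pong's turn

        std::thread pong([&] {
            for (int i = 0; i < rounds; ++i) {
                while (turn.load(std::memory_order_acquire) != 1) { /* spin */ }
                turn.store(0, std::memory_order_release);
            }
        });

        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < rounds; ++i) {
            turn.store(1, std::memory_order_release);
            while (turn.load(std::memory_order_acquire) != 0) { /* spin */ }
        }
        auto dt = std::chrono::steady_clock::now() - t0;
        pong.join();

        const double ns = std::chrono::duration<double, std::nano>(dt).count() / rounds;
        std::cout << "round trip: " << ns << " ns (multiply by GHz for cycles)\n";
    }
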
bashwizard 5 years ago

Nice. I guess it's time to upgrade from my old 1700X.

auvi 5 years ago

I really expected a 16C/32T Ryzen 9 though.

  • jlawer 5 years ago

    It's potentially a yield issue / segmentation issue.

    Releasing the top SKU as a 12 core instead of 16 core part gives AMD:

    1.) It's still the highest core count "mainstream" part. I would expect Intel to fire back with a higher IPC / per-thread performance part rather than try to ramp core counts. Outside of productivity, content creation and server workloads I really don't see the need for anything past 6-8 cores right now.

    2.) It lets AMD keep the flawless chiplets for Ryzen 7, where there will be high demand, while providing a way to move more 6-core chiplets. They can change the product mix once yields improve. I would imagine there is likely better margin on Ryzen 7 (1x chiplet) than on the R9 (2x chiplet).

    3.) Keeps more of the Threadripper line viable for those that need high cores (especially if you don't require the extra memory bandwidth).

    4.) Sells a bunch of CPUs now, while keeping the ability to quickly respond to Intel if needed (or as a spoiler around the Intel 10nm release)

    • NightlyDev 5 years ago

      1) Intel won't be able to, at least not until they finally manage to get the new architecture out. Heat is already a huge issue for Intel.

      2) AMD might be saving perfect core chiplets, but probably mostly for use in EPYC as the margins are higher.

      4) I expect 16 core am4 to arrive someday.

      • jlawer 5 years ago

        Good point on EPYC, I had assumed these would be using a different design like earlier generations, but the chiplet design makes this not only possible, but likely where the best chiplets are going.

        I agree that 16 core will arrive this generation, but I could also see it being 6 months or so. I have also wondered if it is waiting for a second generation I/O die with an enhanced memory controller to feed the extra cores.

    • kristianp 5 years ago

      Thanks for this analysis, although I wouldn't agree that R9 profit margin is lower than Ryzen 7. It would depend on the yield though, so if yields are low for 8 core chiplets, the R7 would cost more to make than if yields are higher.

    • tmd83 5 years ago

      It's interesting though that Ryzen 9 is almost the highest clocked part of the bunch.

      • effie 5 years ago

        AMD did not want to release the expected 16 core yet, but had to release some "monster" to get at those hype-affected buyers. So they clocked/binned those 6-core chips accordingly, to make this core-lacking monster more attractive. It also makes sense to sell as many silicon chips in Ryzen 9 package as possible, instead of selling them in Ryzen 5 package - more luxurious, better margin. Of course, anybody who cares about value and does some research before buying expensive high-end parts will wait for the Intel-AMD exchange to play out, or for the 16-core to be announced, and only then buy the best option.

  • DuskStar 5 years ago

    I wouldn't be surprised to see that come out as a mid-generation bump, personally. But I imagine they ran hard into thermal/TDP issues adding the extra four cores - especially if the dies used for the 3900X are highly binned.

  • treeevor 5 years ago

    It's going to come after Intel has a chance to try and respond to the 12 core. Let Intel think they have a chance, then finally bury them.

  • klingonopera 5 years ago

    Wouldn't that be the domain of ThreadRipper?

    • sq_ 5 years ago

      Using PCPartPicker as a reference [0], it looks like ThreadRipper bottomed out at 8c/16t with the 1900X.

      So I guess there's general overlap between Ryzen and TR in AMD's lineup there?

      [0] https://pcpartpicker.com/products/cpu/#s=63

      • makomk 5 years ago

        Somewhat, though bear in mind that even the 1900X ThreadRipper has 4 memory channels and a ridiculous number of PCIe lanes whereas Ryzen is 2 channel and has much less PCIe.

      • klingonopera 5 years ago

        Yeah, I'm guessing the general overlap is because Ryzen and TR run on different sockets (AM4 and TR4, respectively). The 1900X is an economical way to get your own TR4 platform running, and then you can upgrade it to some monster 32C beast of a processor when you really need the extra CPU power.

        • sq_ 5 years ago

          Good point. I'd forgotten about TR4 and its absolute behemoth of a socket.

    • wmf 5 years ago

      Maybe ThreadRipper 3000 will have 24/32/48 cores.

      • dis-sys 5 years ago

        the problem is new TR is not on their 2019 roadmap PPT

        • ericd 5 years ago

          That's too bad, it's a really neat chip line. More PCI-E lanes than a Xeon E5 without the ridiculous premium.

          • dis-sys 5 years ago

            There are rumors that AMD is not happy with TR2's memory performance as the 4 channel memory controller is the bottleneck, they are probably looking at 8 channel memory option for TR3. ;)

            • ericd 5 years ago

              Ha well I wouldn’t complain if they did that. The higher core count TR2s have a bit of a weird performance profile because of the memory channel setup - two of the chiplets have to hop through another chiplet for memory accesses. Is that what you’re referring to?

              • dis-sys 5 years ago

                > Is that what you’re referring to?

                yes, they had to introduce that extra hop to work around the fact that it has 4 fewer channels compared to the regular 8-channel EPYC.

    • DuskStar 5 years ago

      Not if AMD's looking to remove ThreadRipper from their product stack, and segment their server chips a little more.

      • klingonopera 5 years ago

        Are they actually planning to? I just recently looked into "modern" hardware (was out of the fold for some years) and it would be a shame to retire the relatively new TR4 platform so soon...

      • liuliu 5 years ago

        Unlikely though. TR is AMD's only HEDT presence (quad-channel, high core count). The X570 chipset will likely still be dual-channel memory, limiting it to 64GiB, so it seems still positioned for mainstream. Intel currently has a very high-priced HEDT offering; it's hard to see how AMD would give up that market. Probably after Rome?

  • bryanlarsen 5 years ago

    My guess is that bandwidth and cooling limitations meant that 16C/32T Ryzen wouldn't have been much faster than 12C/24T. It sounds like the 12C/24T outperforms the 12C/24T Intel; if they had released the 16C/32T reviewers would compare it against an 18C/36T Intel chip, which it may not do as well against.

  • dethac 5 years ago

    Probably saving golden 8 core dies for Epyc.

    It'll come whenever the market needs it, but right now they're doing fine without it.

  • TazeTSchnitzel 5 years ago

    16C is probably coming later once Intel responds.

    • PixyMisa 5 years ago

      Intel are planning a 10 core desktop part, so the moment it pops its head up AMD will drop their 16 core chip on it.

      • topspin 5 years ago

        Yeah, that's almost a given isn't it? If AMD has really closed the single-thread performance gap then Intel is in for a hard few years in this market. Looking forward to independent benchmarks.

gravelc 5 years ago

Is there any clarity on how much RAM the 3900X supports? Presumably it can do 4x32GB, which would make for a great little bioinformatics workstation. Very excited about this chip and Windows bringing the full Linux kernel - I can do everything I want at home with just one fairly cheap PC and no dual booting. Good times!

  • jaytaylor 5 years ago

    Are 32GB sticks of fast registered DDR4 still absurdly expensive?

    • gravelc 5 years ago

      Still not cheap - $200 for DDR4-2666 up to $350+ for DDR4-3200 from what I can see on Newegg

deathtrader666 5 years ago

We've had 32 cores from AMD, yes. But what about 64 cores like Intel's Knights Landing Xeon Phi?

  • wyldfire 5 years ago

    Well, the Phi is sold/packaged as an accelerator, so it's not quite the same. But I was pretty sure I had seen >= 32 core Xeons (though those were likely with now-nerfed HT, so kinda moot).

  • marmaduke 5 years ago

    They are way too hard to use to get performance comparable to dual Xeon servers.

  • muxr 5 years ago

    Epyc 2 (Rome) will have up to 64 cores.

andy_ppp 5 years ago

Any chance I can have a Mac Mini with a 3700X in please?

  • vbezhenar 5 years ago

    AFAIK Apple has zero Macs with AMD CPUs. Also, I've heard about issues running macOS on AMD hardware even in a virtual machine. So I have my doubts about that. You're more likely to get an ARM Mac Mini.

saltminer 5 years ago

Did AMD talk about VME instructions? I'd love to be able to run a Win 9x VM again.

  • sebazzz 5 years ago

    Running a VM is already possible. Graphics acceleration is what the real issue is.

    • saltminer 5 years ago

      I'm referring to virtual 8086 mode enhancements. They're broken on Ryzen 1- and 2-series chips; AMD responded to my bug report saying they had no intention of fixing them in microcode updates. You wouldn't notice this unless you tried to run older OSes in a VM, like DOS-based Windows, or some applications in DOSBox.

jxi 5 years ago

The Ryzen 7 3700X looks like it hits a sweet spot. 65W TDP, $330, almost as fast as a 9900k. Was originally planning to go for the Ryzen 5, but with these specs it seems silly not to go for the 3700X.

  • TheOperator 5 years ago

    Also, with CPU performance starting to taper off, I think people need to start thinking of their computers as longer-term investments. I think the 8 core makes sense. Even my phone has 8 cores now...

    • maximente 5 years ago

      assuming computers here means CPUs, honestly i think 70% of this (needing "more processing power") is just marketing and consumerism, 10% is OS bloat and the rest is probably legitimate.

      i'm sure 100 JPL astrophysicists will pile in here but i've been using an AMD 3.2ghz quad core processor purchased in 2010 for software development in java, scala, clojure and now golang as a daily driver up until today. i've replaced the RAM twice, power supply once, and the video card once.

      i proudly use linux so i'm not sure what the situation is like with other OSes, so i'll budget some for things like windows "growing" over the years, but it's just not necessary for me to update the processor until it dies.

      • TheOperator 5 years ago

        I'm doing virtualization workloads. The traditional approach is to use some old servers but there's benefits to having a newer chip in your workstation instead.

    • zelon88 5 years ago

      I've been hearing this for a long time, but the "taper" isn't really a taper in all markets. In consumer markets, yes, the taper is very real. In fact, I bet you could take one $500 machine off the shelf of Wal-Mart every year for the past 5 years and benchmark them all within 15% of each other overall.

      But it isn't really visible when money is no object. If it's not the cores getting faster, it's the core count going up. Moore's Law is really easy to beat if you focus on making your CPU dies larger instead of their transistors smaller.

      • alkonaut 5 years ago

        Also, for at least one part of high-perf computing (gaming), the trend has been that more cores now actually count in modern engines. A couple of years ago you wanted a couple of cores, but single-thread perf was the big thing. Now some modern game engines scale very well beyond 4 cores. Presumably they have shifted to some sort of work-item based system instead of having fixed threads for subsystems (rendering, AI, ...), as sketched below.

        So upgrading from a 4 core to an 8 core a few years back was expensive and mostly not worth it for gaming, whereas now it might actually give the huge speedup the core count would indicate (obviously for gaming this just means moving the bottleneck, but still).
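
        A bare-bones sketch of that work-item idea (my own illustration, not any particular engine's scheduler): the frame's work is chopped into many small jobs and a pool of workers drains a shared queue, so it scales with however many cores the machine has.

            #include <algorithm>
            #include <atomic>
            #include <cstddef>
            #include <functional>
            #include <iostream>
            #include <thread>
            #include <vector>

            int main() {
                // Pretend these are this frame's jobs: AI agents, physics islands, culling, etc.
                std::vector<std::function<void()>> jobs;
                std::atomic<long> checksum{0};
                for (int i = 0; i < 1000; ++i) {
                    jobs.emplace_back([&checksum, i] {
                        long acc = 0;
                        for (int k = 0; k < 10000; ++k) acc += (i + k) % 7; // stand-in workload
                        checksum.fetch_add(acc, std::memory_order_relaxed);
                    });
                }

                std::atomic<std::size_t> next{0}; // shared job cursor
                const unsigned workers = std::max(2u, std::thread::hardware_concurrency());
                std::vector<std::thread> pool;
                for (unsigned w = 0; w < workers; ++w) {
                    pool.emplace_back([&] {
                        for (;;) {
                            const std::size_t i = next.fetch_add(1, std::memory_order_relaxed);
                            if (i >= jobs.size()) return; // queue drained
                            jobs[i]();                    // run one work item
                        }
                    });
                }
                for (auto& t : pool) t.join(); // frame barrier

                std::cout << "jobs: " << jobs.size() << ", checksum: " << checksum.load() << "\n";
            }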

      • TheOperator 5 years ago

        That's just it. The 12 core system years from now will have barely aged. CPUs will just have gotten larger and more expensive.

        This is changing a past situation where say a CPU would lose half its value in 3 years. This incentivizes spending big and upgrading infrequently.

    • NightlyDev 5 years ago

      CPU performance has increased a lot lately, mainly because of AMD. ARM CPUs and x86 are apples and oranges.

vbezhenar 5 years ago

No 5GHz, I'm disappointed. I was considering switching to AMD, but Intel looks better. I'll wait for the Threadripper announcement, though; maybe they'll release something powerful.

  • jaytaylor 5 years ago

    This may be an oversimplification. There is more to it than just megahertz or gigahertz. The most important metric in this case is work done per clock, on average.

    • skummetmaelk 5 years ago

      It's like 10 years ago when all hardware shops were advertising the size of their PCs' hard drives as if that made them faster.

    • vbezhenar 5 years ago

      I don't believe that AMD made a better architecture than Intel in regards to performance. They might be similar, but definitely not 10% better. So in the end it comes down to frequency (and core count for niche tasks).

      • abdulmuhaimin 5 years ago

        Just based on the previous gen benchmarks, the performance/clock of Zen+ CPUs is better than Intel's offering. What makes you say it isn't so?

        • vbezhenar 5 years ago

          https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9... this page compares the 9900K and 2700X. The 9900K should run at 4700 MHz with all cores loaded, the 2700X at 4300 MHz, so the clock difference is roughly 10%. And in most benchmarks Intel scores more than 10% higher.
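
          The arithmetic behind that, for what it's worth (using the clock figures cited above):

              // 4.7 GHz vs 4.3 GHz all-core: clock alone explains ~9%;
              // any benchmark lead beyond that would have to come from IPC.
              #include <iostream>

              int main() {
                  const double intel_ghz = 4.7, amd_ghz = 4.3;
                  std::cout << (intel_ghz / amd_ghz - 1.0) * 100 << " % clock advantage\n"; // ~9.3%
              }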

          • the_why_of_y 5 years ago

            Those benchmarks were done 3 or 4 Meltdown microcode mitigations ago... the problem with Intel CPUs is, they become slower the longer they're exposed to security researchers.

            • vbezhenar 5 years ago

              Not everybody uses those mitigations.

  • effie 5 years ago

    You should base such decisions on independent reviews and tests of the new Zen 2 CPUs which are not out yet. GHz numbers for Zen 2 do not tell us much, yet.