atdrummond a year ago

This is why I can never get excited about Intel doing anything outside mainline x86.

Years ago, IBM and Freescale had the faith to build a community through POWER and, while not a home run, that ecosystem still exists today (and I was able to benefit by selling hundreds of thousands, if not millions, of devices powered by PPC/POWER).

Every time I’ve tried to get excited about an Intel product line and build on it - Larrabee, Phi, Itanium - I’ve gotten massively burned. I just can’t trust them.

  • misnome a year ago

    Exactly the same experience here. At the encouragement of Intel contacts, we went into Intel (ex-Altera) FPGAs last year, with the oneAPI push and programming the FPGA cards with SYCL being free.

    The cards are now discontinued (but still sold), the drivers don't work on any distro more recent than 2018 - even though the oneAPI toolkit itself requires a more recent OS; the only non-discontinued replacements require a software ecosystem license purchase, it's impossible to download the older (working) versions of the toolkit, and the contacts stopped talking to us once we actually bought the cards.
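
    (To be fair, the SYCL programming model itself was pleasant enough - a minimal kernel looked roughly like the sketch below. This is written from memory against the generic SYCL API and a default device; the actual FPGA flow additionally needed an Intel-specific device selector and an offline hardware compile, so treat it as an illustration rather than the exact toolkit incantation.)

        #include <sycl/sycl.hpp>
        #include <vector>

        int main() {
            sycl::queue q;  // default device here; the FPGA flow swapped in an Intel FPGA (or emulator) selector
            std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024, 0.0f);
            {
                sycl::buffer<float> A(a.data(), sycl::range<1>(a.size()));
                sycl::buffer<float> B(b.data(), sycl::range<1>(b.size()));
                sycl::buffer<float> C(c.data(), sycl::range<1>(c.size()));
                q.submit([&](sycl::handler& h) {
                    sycl::accessor x(A, h, sycl::read_only);
                    sycl::accessor y(B, h, sycl::read_only);
                    sycl::accessor z(C, h, sycl::write_only);
                    h.parallel_for(sycl::range<1>(1024), [=](sycl::id<1> i) {
                        z[i] = x[i] + y[i];  // the part the toolchain maps onto FPGA logic
                    });
                });
            }  // buffer destructors copy results back into c
            return (c[0] == 3.0f) ? 0 : 1;
        }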

    Add to that the constant show-stopping compiler bugs and toolkit issues that made it feel like we were the only people actively using it, and I think we've learned to run far, far away now.

    • Foobar8568 a year ago

      I have been burnt by the SoCKit board. I moved away from wanting to learn FPGAs after all the bullshit. All FPGA vendors are similar as far as I'm concerned.

      • krasin a year ago

        Open source FPGA toolchains are beginning to appear, mostly on the low end for now. And because all FPGA vendors are doing such a terrible job on tooling, Yosys and friends are likely the future.

        • FullyFunctional a year ago

          Very true; Lattice Semi’s ECP5 is a fairly decent FPGA with very mature support from Yosys/nextpnr/trellis.

          Xilinx’ Artix-7/Kintex-7 are much more powerful FPGAs, and the support for those is getting to be pretty good (DDR3 support is steadily maturing).

          It would be nice to see a vendor actually officially supporting these efforts. Maybe one day.

          (Not to mention various open source FPGAs - actual chips - being developed.)

      • rowanG077 a year ago

        Treat them as purely hardware vendors and go open source tooling. The tooling is infinitely superior from a UX point of view. There are still quite a few missing features, but it is getting there.

      • misnome a year ago

        We've heard positive opinions of the Xilinx boards and tooling. At least they don't have everything about the HLS tooling either changed or its syntax deprecated every three months.

        • bcrl a year ago

          But Xilinx has the absolute worst SERDES interface. Microsemi's PolarFire was way easier to get 10G Ethernet up and running on (hours vs weeks).

    • AtlasBarfed a year ago

      Well, that's because Intel is run by nothing but bean counters milking it for bonuses and stock options. This is not news to anyone who has watched Intel over the last couple of decades.

      Things like this require vision, patience, investment, outreach - all things that Intel is horrible at. They are a monopoly that wants to sit in their executive suites and ship out the N+1 Moore's Law chip. Engineers? Keep them in the other buildings, and avoid them as much as possible.

      Intel made, what, 10 billion in PROFITS last year? With a "B". And times have been semi-lean the last few years. We are talking about a monopoly that could have invested 50 billion dollars (adjusted to current dollars) in the overall OSS and alternative computing platform ecosystem over the last 40 years.

      Intel should have:

      - a production Linux distribution highlighting their hardware features, with wide support for x86. This should have happened 25 years ago. Keep Microsoft's feet to the fire. Sure, charge money for it; who cares if it doesn't sell. It keeps Microsoft actively utilizing your hardware, which Intel has always had a problem with.

      - Hell, at this point, they should be funding ReactOS, whatever BeOS got open sourced as, one/some of the BSDs, OpenSolaris, Redox. All of it. They should be very, very interested in getting as many x86 machines and x86 peripherals running anywhere they can. The writing is on the wall with the M1/M2 -- Microsoft will go ARM hardcore if Qualcomm or AMD cranks out an M1/M2 competitor that is superior to x86. Intel is being stupid if they aren't covering their bases with OSS OSes.

      - get off your asses and get x86 onto mobile devices and into embedded. This is related to support for other operating systems. x86 can run in so many embedded and mobile niches, but because Intel didn't have the OS support flexibility, it hamstrings moving x86 and their silicon into non-Windows, non-PC areas.

      - one thing (I guess) that Intel does well software-wise is their compilers. Why are these expensive? Why do they cost anything at all? This is related to OS support and to lots of computing platforms besides PCs. If the compilers and tools were free, and there were good support for OSes on lots of different computing platforms, Intel would have good diversification.

      - a competitive discrete graphics card. You're telling me AMD and NVidia can do this, but you can't? Seriously? It's not the silicon, I highly doubt that. What it probably is... Intel just doesn't like talking to software people, and games software? You don't need the best, but you can certainly do better than the IGP afterthoughts they do provide.

      - why exactly does Samsung have dominance in SSDs? Like, didn't Intel invent them?

      Intel might be a lost cause on the mobile phone market. You know why they missed that so bad?

      #1 reason: they didn't have sufficient outreach in Linux; if they had, they'd probably have been the preferred architecture of the devs.

      #2 reason: of course related to that, they didn't have support in non-PC computing platforms. If they had supported more semi-embedded uses (even if pure embedded wasn't a good match for x86), then they'd have been ready for the emerging devices that became the smartphone.

      You know what? So much of what I just explained applies to AMD too, although they probably are more willing to support non-x86/ARM/RISC-V. AMD now has a billion or two a year to do OS outreach and better compiler tools and all that. And with OS outreach, you can make sure the code generated isn't Intel-optimized, like Intel loves to do. Your OS support and your drivers can be AMD-optimized.

      I get that AMD has been severely revenue-constrained for about a decade from when they sat on their asses while Intel was being dumb with NetBurst, but AMD is somewhat back on top. Will AMD sit on their hands again?

      • ac29 a year ago

        > Intel made, what 10 billion in PROFITS last year? With a "B". And times have been semi-lean the last few years.

        There are multiple measures of profit, but none of them are $10B.

        Per the 10-K for 2022, Intel's operating income was $2.3B and their net income was $8B. Net Income was higher than Operating Income because Intel had gains in equity holdings, interest payments, and favorable tax treatment. Put another way, 3/4 of the year's bottom line profitability was not from building and selling things.

        It should also be noted that almost all of their profits were in Q1. Q2 and Q4 actually had negative income (both operational and net). A big part of that is due to aggressive investments in advancing their fab capabilities (which is a multiyear process that won't deliver much outside the lab until 2024/25).

      • imtringued a year ago

        Intel could simply invest in sublinear deep learning.

        CPUs have a significant advantage over GPUs in that they have more main memory. The larger your model is, and the smaller the memory of each individual compute node is, the more time is spent moving data around.

        You might object that machine learning on CPUs is mostly compute bound, but that's only true for dense methods: you can use sparse learning algorithms with sublinear complexity. In fact, the papers often show something extremely counterintuitive: performance is memory bound at low thread counts and compute bound at high thread counts!

        I don't know the details, but I suspect that when you have sparse data, your cores spend a significant amount of the time chasing pointers, and this means that as you add more cores, more and more of the pointers you are chasing have already been loaded into the cache by a different core.
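
        To make the access pattern concrete, here is a minimal sketch (plain C++; the CSR-style layout is my own illustration, not lifted from any particular paper). The dense[col_idx[k]] loads are the "pointer chasing": slow for a single core, but increasingly served from cache lines a sibling core already pulled in.

            #include <cstddef>
            #include <vector>

            // One row of a CSR-style sparse matrix dotted with a dense vector.
            // vals[k] pairs with column index col_idx[k]; dense is the full activation vector.
            float sparse_row_dot(const std::vector<float>& vals,
                                 const std::vector<int>&   col_idx,
                                 const std::vector<float>& dense) {
                float acc = 0.0f;
                for (std::size_t k = 0; k < vals.size(); ++k) {
                    // Irregular, data-dependent load: cache-miss-prone on one core,
                    // but the touched lines are often already resident once many
                    // cores work on overlapping columns.
                    acc += vals[k] * dense[col_idx[k]];
                }
                return acc;
            }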

        One problem with this strategy is that Intel has superlinear pricing on their high-core-count CPUs. The 24-core CPUs might be cheaper than an Nvidia GPU, but the 40-core CPUs cost 1x to 3x as much as Nvidia GPUs depending on whether you want a single, dual, or quad socket setup - and in this case you obviously always want to go quad socket - so Intel may not be cost effective unless you want to train models that need TBs of RAM.

      • mjevans a year ago

        Re: Graphics / GPUs

        Intel's new ARC cards are a clear step in that direction. Their leadership also seems to be targeting a fully Vulkan, post-OpenGL and post-DirectX world. It's why the drivers are so big: the hardware only supports the modern methods, and anything reliant on the old APIs requires a compatibility layer.

      • pclmulqdq a year ago

        They didn't fail embedded for lack of trying. Their offering needed WAY too much silicon and power to provide the same computing power as a smaller ARM chip. This may be fundamental to x86 - there's so much cruft in the instruction set that you need a huge chunk of silicon to decode it.

        • jiggawatts a year ago

          > x86 - there's so much cruft in the instruction set that you need a huge chunk of silicon to decode it.

          This is largely a myth. The actual power overhead of the x86 decoder compared to the entire chip is something like 4%, and keep in mind that ARM has a decoder too! I'm not sure what the power requirement of that is, but it has got to be at least 1-2%.

          To put it another way: if you just look at the logic layout of a modern chip, the 4-8 cores take up a fraction of the total area, and then most of that is the AVX vector units and the cache! The decoders are tiny in terms of area, and chips generally speaking use power roughly proportional to area.

          People think of Apple's M1 chips as demonstrating ARM's superiority, but in fact that just demonstrates the superiority of TSMC's bleeding-edge process.

          Current-gen AMD Zen 4 processor cores are faster than the M1 and more power efficient; it's just that they're optimised for desktop PCs instead of laptops.

          Of course, the ARM architecture does have a few benefits. Chief amongst them is the relaxed memory consistency model, which reduces the overheads of instruction retirement (the "back-end" of processors). This benefit scales with the number of processor cores, but that doesn't seem to have stopped Intel and AMD from planning CPUs with over 256 hardware threads in the near future anyway.

          • pclmulqdq a year ago

            You are definitely not thinking with embedded in mind. Everything you said applies perfectly to server chips, but not to the embedded world. The silicon and power budgets are MUCH tighter and allocated very differently in embedded chips than in server CPUs.

            Most of the area on those chips isn't cache, or even memory. It's actually mostly peripherals, with significant area spent on the core itself. And there's a lot less silicon total - a 7x7 mm die (~50 square mm) is a huge embedded chip. Silicon spent on cores, even a tiny bit, is functionality your chip doesn't have.

  • eliaspro a year ago

    Do we need a https://killedbygoogle.com/ equivalent for Intel?

    • atdrummond a year ago

      It would probably be a much tighter circle of interested readers but the kill list is pretty long indeed.

      What makes me sad is not that Intel isn’t a good company - it is. It is that Intel could be an even greater company if it could just get out of its own way.

      • luma a year ago

        It feels like Microsoft under Ballmer. Windows made a lot of money but the focus on it was blocking further success with the other MS properties. It took a decapitation to get them pointed in the right direction.

        • chongli a year ago

          At least we got the developers dance out of the Ballmer era. What does Intel have to show for it right now?

          • dualboot a year ago

            Optan... er.. nevermind...

        • ChuckNorris89 a year ago

          People keep parroting how bad Ballmer was for Microsoft, without knowing that he's the one who invested heavily in Xbox and laid the groundwork for Azure and cloud services. Nadella continued this.

          • intvocoder a year ago

            Ballmer’s purchase of Nokia was a tremendous waste of money, it’s taken a lot of effort under Nadella to right the ship. The scope and performance of (at the time) Windows Azure were not particularly good under Ballmer, but it was repositioned under Nadella. The criticism of Ballmer is not unjustified; he lingered around for years after his effectiveness had faded.

            Personally, I don’t view Microsoft as a consumer company, and Xbox should have been jettisoned after the 360.

            • toast0 a year ago

              > Ballmer’s purchase of Nokia was a tremendous waste of money, it’s taken a lot of effort under Nadella to right the ship.

              Was it the purchase that was the waste, or the subsequent management that wasted the potential?

              Anyway, bringing Intel back into the picture: things would be a lot different if Intel hadn't canceled x86 phone chips around the same time Microsoft was hyping Continuum, the converged phone/desktop feature of Windows Mobile 10. If that had launched on x86 phones with win32 apps, it would have been really interesting. And if they had that ready to go, maybe there would have been internal excitement about WM10 and it wouldn't have been such a poor showing. Then again, maybe ending their massive QA program wasn't great for quality either. I still miss Windows Phone, but there's no going back.

            • badpun a year ago

              Xbox helps them maintain Windows as the gaming OS for PC (as both use Direct3D etc., so porting from Xbox to PC is easy/cheap). Gaming is not important per se, but it is one of the key factors when consumers select an OS for their home machines (because kids insist on being able to play games). And whatever kids use at home, they'll want to use at work when they grow up.

          • readthenotes1 a year ago

            A Microsoft employee told me the biggest difference was that Ballmer encouraged competition between the different groups, to the point where they would sabotage each other, whereas Nadella tried to keep people focused on delivering a product.

  • CoastalCoder a year ago

    > This is why I can never get excited about Intel doing anything outside mainline x86.

    This skepticism could reasonably be extended to x86 as well, given Intel's handling of AVX-512.

    The inconsistent support for various subsets of AVX-512, on different processors, makes it hard to predict whether some future processor will have the particular instructions that matter to your software.
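
    In practice you end up probing the CPU at runtime and shipping fallback code paths. A rough sketch using the GCC/Clang builtins (the feature strings below are just the common subsets; check your compiler's documentation for the full list):

        #include <cstdio>

        int main() {
            // Pick a code path at startup instead of assuming a fixed AVX-512 subset.
            __builtin_cpu_init();
            const bool f  = __builtin_cpu_supports("avx512f");   // foundation
            const bool bw = __builtin_cpu_supports("avx512bw");  // byte/word ops
            const bool vl = __builtin_cpu_supports("avx512vl");  // 128/256-bit encodings
            if (f && bw && vl)
                std::puts("dispatch to the AVX-512 kernels");
            else
                std::puts("fall back to AVX2 (or scalar) code paths");
            return 0;
        }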

    • colejohnson66 a year ago

      Despite outward appearances, AVX-512 follows two paths: server and consumer. Each year, the support on one tier is larger than before (until Alder Lake, that is). The various "sub instruction sets" are more for categorization reasons than differences in capability.

      If you look at AVX-512's chart on Wikipedia[0], but reorder it into (the discontinued) Xeon Phi, server, and consumer tiers, it's a nice chart showing increasing support as time goes on. For example, -ER, -PF, -4FMAPS, and -4VNNIW weren't really "removed" after Xeon Phi because the server and consumer lines never had them. That chart is just horrible because it tries to show a timeline without clarifying the differences in tiers.

      [0]: https://en.wikipedia.org/wiki/AVX-512#CPUs_with_AVX-512

      • adgjlsfhk1 a year ago

        There is still the fundamental problem that most people who do compiler work don't work on servers, so Intel's strategy has kept AVX-512 from having good support.

        • imtringued a year ago

          I think the biggest issue is that auto vectorization would need a programming language that is designed around it.

          Vectorization is a niche but GPU programming isn't (shader programming is very popular).

          • CoastalCoder a year ago

            > I think the biggest issue is that auto vectorization would need a programming language that is designed around it.

            I don't think that's correct.

            The compilers I work with do a pretty decent job of auto-vectorizing the inner-most loops of an algorithm, especially if they know the loop bounds at compile-time.
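
            For example, something shaped like the sketch below - a fixed trip count plus restrict hints so the compiler knows the arrays don't alias - is the kind of loop GCC and Clang will happily turn into packed vector instructions at typical optimization levels, no special language design required:

                #include <cstddef>

                constexpr std::size_t N = 1024;  // trip count known at compile time

                // Classic saxpy inner loop; __restrict is the (compiler-extension)
                // no-aliasing hint that lets the auto-vectorizer do its job.
                void saxpy(float a, const float* __restrict x, float* __restrict y) {
                    for (std::size_t i = 0; i < N; ++i)
                        y[i] = a * x[i] + y[i];
                }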

  • owlbite a year ago

    Even on mainline x86 they often seem to have had a "try twenty things and see what sticks" approach to software, discarding the 19 that didn't win and burning anyone who was trying to use them.

    • ridgered4 a year ago

      GVT-g is the one that got me. Although with all their storage caching antics over the years I knew what I was getting into.

  • tracker1 a year ago

    That's generally how I feel about anything cross-platform from Microsoft... I'll buy into it if it's still supported 3 versions in. And at this point, other than .NET Core and VS Code, I don't really use MS software if I can avoid it.

  • arcticbull a year ago

    Intel also had a pretty robust ARM offering, the Intel StrongARM (acquired from DEC in 1997). They continued to evolve the line, renaming it to XScale at some point. They then sold it to Marvell in 2006.

  • rhelz a year ago

    It doesn't matter whether you are Intel or Exxon, any public company has to exit businesses which are not profitable enough. And when it comes to losing money, failing fast is typically better than failing slow.

    • abfan1127 a year ago

      The issue with this perspective is that if people think Intel won't commit to a long-term strategy, no one buys in and it dies quickly. Ecosystems take a long time to build momentum. Intel, along with any other company built around a cash-cow business plan, kills ventures before they have time to mature because they don't cash out soon enough.

  • stewx a year ago

    What kind of PowerPC devices did you build?

    • atdrummond a year ago

      I built a custom Debian-based OS for Genesi’s EFIKA board, then we sold a handheld computer with similar specs in China. The GUI used Mozilla’s rendering engine, so it was one of the first uses of “web apps” as an application distribution platform. This was mostly pre-smartphone in China, so this device would have been viewed as a PMP (portable media player) for games and movies and maybe productivity, not communications.

klelatti a year ago

So: launched in August '22, dropped abruptly with no explanation in January '23, having signed up 33 ecosystem 'partners' who have each presumably invested time, money and energy into the project.

I can see Intel needs to save money but this seems pretty counterproductive.

Edit - Even the (WordPress-based) website has failed now!

  • sitkack a year ago

    The WordPress site was failing from day one; nothing has changed there.

    I am not sure they were that serious about Pathfinder; there was no energy in the room. But the other RISC-V vendors will certainly fill their new fab with cores.

    Pathfinder was just the carrot to sell other IP that they would hang off the side.

    • cameron_b a year ago

      I think you're right on with this. It always seemed like a way to usher chip production into their fabs.

  • pclmulqdq a year ago

    Those partners probably didn't commit to funneling Intel enough fab business, so they got the axe.

    • brucehoult a year ago

      It takes a lot more than 5 months using a chip development system before you're ready to send something to a fab! Like 12-18 months maybe.

dveeden2 a year ago

This seems to be part of some bigger cost cutting changes https://www.tomshardware.com/news/intel-sunsets-network-swit...

  • pclmulqdq a year ago

    Shutting down the network switch business after eating Fulcrum and Barefoot is the bigger news here. So much money went into a pit there.

    • fruffy a year ago

      It's tragic, really. Tofino chips were such a promising and impactful technology.

      • pclmulqdq a year ago

        The Tofino chips were amazingly programmable, and before that, the Fulcrum chips were amazingly fast. Intel got ahold of both of them, and the magic was gone.

        • formerly_proven a year ago

          The latest Tofino 3 is... quite something to look at. Might actually take the cake for largest MCM and largest BGA package at over 10k pins. The POWER5 MCM is a similar size but has far fewer contacts.

      • panick21_ a year ago

        Damn, what will happen to Oxide Computer? They rely on that. Are Tofinos still going to be produced?

        Edit: Seems they will support existing customers and products. Still sad to see, lots of potential there.

        • bcantrill a year ago

          As you note, Tofino 2 is still being supported: the team has been very candid and transparent with us (which we very much appreciate!) and Intel has been both formal and explicit about their ongoing support for Tofino 2. We agree with you that there is lots of potential here -- and we are not backing away from the vision of the software-driven switch (and we are pleased that Intel remains engaged with P4).

        • pclmulqdq a year ago

          They can't cancel a major product line like that without at least 5 years of notice.

  • Dalewyn a year ago

    Intel ARC is not long for this world.

    • onepointsixC a year ago

      Intel needs to be a player in graphics. If they give up on ARC, then their future will be constrained to being, at most, a player in the ever-shrinking x86 market and in fabs. Which would be an inglorious ending for what was a powerhouse.

    • CoastalCoder a year ago

      It's at least a plausible theory.

      If Intel cancels ARC, I think their credibility problem will get even worse, given all their marketing.

      I think they'd be perilously close to Killed-by-Google territory at that point. I.e., people would be less willing to make plans/purchases based on future product availability.

      • Dalewyn a year ago

        At this point, so long as Gelsinger remains CEO, I expect many more divestitures and sales of company assets to continue happening. He seems to take great pride in "exiting" Intel from various businesses and getting rid of seemingly superfluous projects and investments.

        The grandiose manner of Gelsinger's cost-cutting measures almost makes him look like the antithesis of the sunk cost fallacy.

Zigurd a year ago

Early in my career I got to meet Pat Gelsinger, so I know firsthand that he's smart, very energetic, and has been in IC engineering since masks were cut from Rubylith. He has all the background and knowledge to do the right things.

But what I have heard so far is scattershot: Be proprietary, be open, be exclusive, be a foundry, own an ADAS company, be nowhere in cars but wish you were, etc. Totally unsurprising to find that leads to starts and stops like this one.

Intel had a recipe that worked. It no longer does. Intel needs a new strategy as coherent as the old one but that fits the modern environment.

  • college_physics a year ago

    > Intel needs a new strategy as coherent as the old one

    You are assuming such a strategy exists. The future need not look like the past. In fact, on pretty much all counts (geopolitics, sustainability, commoditization) the future seems to be branching unpredictably.

    • Zigurd a year ago

      I don't assume one exists. Intel could be strategically screwed. Even if they had a hypothesis for how to create a similarly insurmountable position, like the combination of fab capability and chip design they had before, what if the hypothesis is wrong or unimplementable?

      • hasmanean a year ago

        Strategy for a multibillion dollar behemoth is not the same as a strategy for a nimble startup.

        For years Intel provided just enough CPU power and cultivated a broad ecosystem of software developers, such that any new hardware solution had to compete against the enormously more fun and flexible general-purpose software solutions running on Windows (and before that, DOS).

        Custom closed-source hardware was a dead end. It either worked or it didn’t. Software could be infinitely flexible and was fast to iterate on. And every kid could train themselves to write software almost for free. This is why software ate the world.

        Closed-source, complex compiler toolchains with hardware interfaces that require closed-source drivers are practically just as inflexible as hardware. They should be avoided at all costs.

      • college_physics a year ago

        > what if the hypothesis is wrong or unimplementable?

        Yep. The febrile nature of this decade means many hypotheses will be wrong.

onepointsixC a year ago

This is not good. Intel needs to be doubling down on R&D, new products and services. I'm extremely disappointed in Pat Gelsinger here. He's committing to growing a dividend while free cash flow is negative, while cutting jobs and important R&D.

Pat needs to cut the dividend and use that to invest in the future of the company.

LargoLasskhyfv a year ago

How does that fit with https://www.sifive.com/boards/hifive-pro-p550 which is/will be produced on 'Intel 4' (approx 7nm, EUV)?

  • jasonwatkinspdx a year ago

    That's Intel's foundry services. SiFive running chips there has nothing to do with Intel's software strategy or internal RISC V initiatives.

  • guipsp a year ago

    This is a software package. Not wholly related.

    • LargoLasskhyfv a year ago

      That would be like saying Intel VTune is unimportant because community-developed GCC and LLVM are good enough. Maybe they are by now, but Pathfinder also had simulators IIRC?

coobird a year ago

Less than a couple of months ago, Intel seemed very enthusiastic and was showcasing a bunch of testimonials from partners:

https://riscv.org/blog/2022/12/intel-pathfinder-for-risc-v-n...

  • pjmlp a year ago

    Nothing new here; I was at Intel's talk at GDCE 2009 about how Larrabee was going to change the world of graphics programming.

    See also Edison, StrongARM,...

    • lloydatkinson a year ago

      I get the feeling that internally Intel is following the unfortunately popular paradigms and ideas that Agile (note the capital A instead of lowercase) enables:

      - Commitments to work without anyone asking for it

      - When someone does ask for work, the requirements go through N number of middle-management layers, blurring them every step of the way like Chinese Whispers

      - Projects must continue even when it is clear they're going to be a failure, because the people running them haven't heard of the phrase "sunk cost fallacy"

      - Of the few genuinely good ideas that are produced, they are again strangled by incompetent middle management who are only interested in working on a project to get their next promotion and then quickly "moving on" (abandoning it) when they do get promoted (this seems to be one of the largest complaints from insiders at Google)

      • pjmlp a year ago

        See the history of ISPC and how it was rescued; it touches on some of those points.

        https://pharr.org/matt/blog/2018/04/18/ispc-origins

        • lloydatkinson a year ago

          > So there was #pragma simd, which sort of worked, unless you called an external function; that problem never got solved. They never understood why someone would want to write a large system that ran completely using all of the vector lanes and couldn’t imagine it was an important use case. (The attentive reader may realize that this execution model precisely describes GPUs.)

          Fuck, that must have been an infuriating experience dealing with the hardware team.

    • pclmulqdq a year ago

      Canceling Larrabee was such a stupid looking decision in hindsight, given CUDA and ARC.

      • meepmorp a year ago

        IIRC, Larrabee eventually morphed into the Phi accelerators (knights mill, bridge, etc.). They hung out in the market for a while, and I think the last iteration was shifted to be more deep learning focused, but I don't think they ever got much traction.

        I agree, it's unfortunate - I'd love it if someone could challenge NVidia's dominance of ML at least a bit.

        • pclmulqdq a year ago

          Honestly, the Xeon Phi wasn't a bad decision if you wanted to challenge NVidia/CUDA. Giving people a "normal" platform like that could have worked if they had pushed the software ecosystem around it and kept up on performance.

          • meepmorp a year ago

            A late reply, but I totally agree. Phi was a super interesting platform, and they probably could've done more with it if they'd invested in the software (and hardware).

mmargerum a year ago

It's all about the next quarter, and until that stops being true it's going to hurt U.S. competitiveness.

  • allenrb a year ago

    Along this line, I find their cancellation of the Oregon lab and Haifa dev center to be concerning. Sure, in the short term that stuff eats money. In the long term it means you stay relevant.

    • allie1 a year ago

      They haven't cancelled any of their significant capex spending - leading edge fabs in Ohio and Germany.

the_duke a year ago

Can anyone familiar with the RISC-V ecosystem chime in here?

I assume Pathfinder was supposed to funnel customers to their fab business.

Is there just better tooling out there so it's not really worth it, or are they giving up on their fab aspirations again already?

  • ralgozino a year ago

    Excuse my ignorance, what is "their fab business"? It's the second time I've read it in this thread and I can only think of «fabulous», which turns the whole phrase into sarcasm :D

    • greyw a year ago

      semiconductor fabrication

th3sly a year ago

An open source instruction set is an advantage to China as well; perhaps there was some pressure from "up there" after the CHIPS Act was signed and offered lots of money to private companies?

tw1984 a year ago

Well, it turns out 4-core processors on 14nm can't keep you going forever. Intel learned it the hard way.

jbirer a year ago

I think Intel realized that RISC-V is at the point where it's not just a pet project that could become a subsidiary, but an actual threat.

mikerg87 a year ago

I guess this kills off the "Horse Creek" partnership with SiFive, since if I read the announcement from the summer correctly, Pathfinder was the dev kit for a lot of this.

Sigh. Just when I thought Intel was getting better...

badintel a year ago

Noooooooooooooooooooo

I was looking for more competition in the market for overpriced, severely underpowered SBCs for outdated architectures.

sylware a year ago

So? Are they (and/or their owners) quitting the RISC-V "board"?

amelius a year ago

Reasons?

  • luma a year ago

    RISC-V isn't x86, and Intel is pathologically unable to move beyond x86.

  • Tuna-Fish a year ago

    Suddenly they don't have infinite money anymore.

    • phkahler a year ago

      Nor a state of the art foundry.

      • dragonelite a year ago

        Well, if they can get sub-10nm at scale going, they should be fine for Western semiconductor companies. It's their only way forward if Amazon, Microsoft, and other cloud providers chart their own semiconductor future with ARM and RISC-V.

        I'm sure that with the US killing Taiwan's semi industry by moving its semi fabs to Arizona, Intel will have no problem recruiting some of the talent.

        • luma a year ago

          > I'm sure that with the US killing Taiwan's semi industry by moving its semi fabs to Arizona

          You have seriously misunderstood the scale of TSMC's moves outside of Taiwan.

        • baq a year ago

            Dude, the US is trying not to get choked by a semi shortage when China invades Taiwan and the fabs located there blow themselves up before they can be exported to the mainland. Same for the EU with their fab investments.

          • hasmanean a year ago

            Lol.

            /tongue in cheek comment to follow …

            They should just move the UN to Taiwan and supply the world…use microchips for peace…deny chip shipments to any country that invades another. /end

            We can have peace in our time.

        • justinclift a year ago

          > I'm sure that with the US killing Taiwan's semi industry by moving its semi fabs to Arizona ...

          That doesn't seem to be the goal though?