AlexanderYamanu 2 days ago

Apparently a WD_BLACK SN770 1TB (fw 731100WD) drive is affected by this update. I can confirm that the update to 24H2 was started and that the SSD ended up with a corrupted partition table. Diskpart shows nothing, TestDisk shows duplicate entries and no markers, and the log says check_part_gpt failed for the partition, with the corresponding line "No FAT, NTFS, ext2, JFS, Reiser, cramfs, or XFS marker" in the console. I am currently rebuilding the partition table from the GPT backup, but I don't know if the 24H2 update has a repartitioning action in it. Also, the boot partition is BitLocker-enabled. It is a nice challenge to try and recover it fully.
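For anyone in the same boat: before letting any tool rewrite the table, it can be worth checking whether the backup GPT header (stored in the last sector of the disk) survived. A minimal sketch of that check against a raw image or block device, assuming 512-byte logical sectors:

```python
import os

GPT_SIGNATURE = b"EFI PART"
SECTOR = 512  # assumption: 512-byte logical sectors

def check_gpt_headers(image_path):
    """Return (primary_ok, backup_ok) for a raw disk image.

    The primary GPT header lives at LBA 1; the backup copy lives in the
    very last logical sector, which is what tools like TestDisk rebuild
    the primary table from.
    """
    with open(image_path, "rb") as f:
        f.seek(SECTOR)                    # LBA 1: primary header
        primary_ok = f.read(8) == GPT_SIGNATURE
        f.seek(-SECTOR, os.SEEK_END)      # last LBA: backup header
        backup_ok = f.read(8) == GPT_SIGNATURE
    return primary_ok, backup_ok
```

If the backup header is intact, TestDisk (or gdisk's recovery menu) can rebuild the primary table from it.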

But if the update hasn't completed and the hammering continues, then I think chances are it might happen again.

My guess is that the SSD's firmware needs to be updated first to account for the HMB bug?

Oh well...

nine_k 2 days ago

Un-clickbaiting the title: an update to Windows 11 exposes a flaw in poorly built firmware of certain mass storage devices.

  • throwaway290 2 days ago

    Yeah, including enterprise grade SSDs and HDDs;)

    If you push an update that bricks your customers' machines, you have failed, and you clearly have no QA.

    The best they can do to save their ass is say "actually, you know what, these SSDs and HDDs are all incompatible with W11" and start a new hardware certification program.

    • smileybarry 2 days ago

      I can't find mentions of any enterprise SSDs in the linked Twitter thread, just consumer SSDs ranging from "budget" to "pro" from various brands. The Twitter author later did some basic testing on a list of SSDs and it looks like it's mostly to do with very budget QLC models: https://twitter.com/Necoru_cat/status/1956949132066898080

      This feels like an extrapolation by the article author based on the fact Phison microcontrollers are also used in enterprise models.

    • xeromal 2 days ago

      100%. If you force updates down people's throats every 5 seconds and then brick a percentage of customers, it's on you.

    • Ekaros 2 days ago

      Can trash that breaks even be called enterprise? Then again, maybe in the tech world the only thing enterprise about it is the price.

      • throwaway290 2 days ago

        I'm with you, I also think it stops being enterprise if you install Windows 11 on it:)

    • itsmartapuntocm 2 days ago

      This was the reason they built such an extensive application compatibility shim system into Windows 95. If a poorly coded application breaks on an OS upgrade, the user is going to blame Windows, not the application.

    • FirmwareBurner 2 days ago

      >Yeah, including enterprise grade SSDs and HDDs;)

      The article is light on the details of which enterprise-grade SSDs those are

    • hulitu a day ago

      > If you push an update that bricks your customers machines you have failed and you have no QA clearly.

      They were talking about Microsoft. QA? Testing? That's why they have users. /s

  • morninglight 2 days ago

    So, users should stay with Windows 10 and not upgrade to Windows 11?

    • ghurtado 2 days ago

      No, that's not what's being talked about

swinglock 2 days ago
  • mananaysiempre 2 days ago

    So... WD/SanDisk only tested the SN770’s firmware with then-contemporary Windows, different behaviour from Linux/ZoL led to breakage, and now Windows too has changed its behaviour enough to expose the breakage?

    (I’m still wondering if the successor, SN7100, is affected, as it’s not mentioned in that thread either way. The upmarket DRAMful SN850X seems not to be affected.)

    Or not? The bug here looks different from that one, which the same outlet reported on[1] earlier.

    [1] https://www.neowin.net/news/wd-ssds-still-block-windows-11-2...

    • swinglock 2 days ago

      Who can say until the dust has settled? My bet is on defective hardware. Maybe they can patch the firmware, or Microsoft (and Linux!) add workarounds in the kernel. But just because a kernel might do such a thing does not make it at fault when it didn't. At least not before the problem makes itself known.

      This could, just as well as being a bug, be the Windows kernel now being better optimized to take advantage of fast storage media. Perhaps the software cache, scheduling, and whatnot became a bottleneck, and they fixed that. Unless it's part of the NVMe standard or what have you that the OS IO scheduler should not attempt writing "too fast", it's hard to blame it in good faith.

  • ComputerGuru 2 days ago

    I'm not entirely sure it's the same issue; all the users there seem to have reflashed the SSDs to use 4k sectors and there are many reports of problems going away or never happening if 512b sector size is used. I'm doubtful Windows users are flashing their drives to 4k sectors at the same rate as ZFS-heads.

    • swinglock 2 days ago

      I think you misunderstood that. Configuring your filesystem to use a specific sector size (or whatever term is used for various file systems) does not modify the disk firmware. I'm not seeing it anyway.

      • ComputerGuru 2 days ago

        No, that is one thing you can do (in zfs this is just setting the recordsize property) but that is not what I am talking about. I am talking about reflashing to have the firmware present logical 4k sectors instead of logical 512b sectors. Theoretically this boosts throughput and while it would technically increase latency (you'd have to rewrite a bigger sector for any change) in practice these drives all have physical sectors actually larger than 4k so this isn't an issue. It isn't a full firmware flash but it's a low-level reconfiguration of the firmware (independent of filesystem sector size), and it seems there are bugs in this mode.

        https://support-en.wd.com/app/answers/detailweb/a_id/20968/~...
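        A quick way to see which logical sector size the firmware is currently presenting on Linux (a sketch: the sysfs path is real, but the default device name is an assumption about your system):

```python
from pathlib import Path

def logical_block_size(device="nvme0n1", sysfs=Path("/sys/block")):
    # Linux exposes the *logical* sector size the drive's firmware presents.
    # Reformatting the namespace to 4Kn (e.g. `nvme format --lbaf=<n>` with
    # nvme-cli, which destroys the namespace contents) flips this from 512
    # to 4096; it is not a full firmware flash, just a reconfiguration.
    return int((sysfs / device / "queue" / "logical_block_size").read_text())
```

        The `sysfs` parameter exists only so the function can be pointed at a test directory instead of a real device.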

        • swinglock 2 days ago

          Thanks, now I see.

          My point is that various drives are bad and cause issues not only with Windows, be it with some or all firmware versions from the manufacturer. Among them is the only specific drive mentioned in the article. If WD advertises this mode then it ought to work.

          • ComputerGuru 2 days ago

            Absolutely but my point (or rather, guess) was that it's a separate issue from the one well-documented in that GitHub link as I don't think random Windows users are doing this reconfiguration and the GH link seems to only affect reconfigured drives.

diggan 2 days ago

Is this actually breaking the drives or just making particular files on that particular partition un-readable by Windows? Stated another way: Could a system that dual-boots Linux and Windows actually brick the drives system-wide by what Windows is doing?

  • gruez 2 days ago

    Nobody really knows. The entire article hinges on a random post from twitter. It doesn't even try to corroborate it with third party accounts.

qwertytyyuu 2 days ago

Again? I swear this has happened before

dboreham 2 days ago

Reading the article, the headline is inaccurate/wrong: the problem is crappy SSDs (an age-old problem) that can't handle/brick under some workload presumably involving high write rates. This Windows update happens to include an application that can generate such a workload.

  • rikafurude21 2 days ago

    Why blame the crappy SSDs when the issue seems to be that Microsoft doesn't care about their low-end users and happily deploys apps that create workloads which can literally break hardware? Even if it doesn't break your hardware, why deploy something so bloated?

    • BenjiWiebe 2 days ago

      If your software can break your hardware on a normal PC (not talking industrial robots or something), your hardware is flawed.

      • gchamonlive 2 days ago

        While this is largely true, that's no excuse for a system such as Windows to brick SSDs like that. It's still by far the most popular operating system, despite Microsoft's best efforts at enshittifying it.

        Windows is therefore going to be used on a wide variety of systems, including those with low-grade hardware parts. So either it has to be part of Windows quality control to test these low-end systems, or it should be very clear about the system requirements.

        Either way, the problem was patched, which makes it a problem on Windows' side.

        • Ukv 2 days ago

          > that's no excuse for a system such as Windows to brick SSDs like that [...] it either has to be part of windows quality control to test these low end systems or it should make it very clear about the system requirements

          If this was a widespread problem I'd agree it should've been caught during testing and mitigated, but it doesn't sound like that's the case ("We looked around and could not find other reports resembling such situations"). As far as I'm aware it's just one Twitter thread reporting this, not all/many low end systems. Only so much Windows can do if it's just a faulty batch of SSDs, or even an incorrectly installed part in the user's setup.

          > Either way, the problem was patched, which makes it a problem on windows side.

          Has it been? I can't see Microsoft even verifying that the issue exists yet.

        • nottorp 2 days ago

          Not to mention that I don't see why Windows Defender should generate such heavy write workloads... an antivirus has no reason to. Have they rewritten it in React like the crippled taskbar?

          • gchamonlive 2 days ago

            Seems to me like either technical debt or symptoms of data farming, where the OS hoards data from the system for later analysis

          • c-hendricks 2 days ago

            One part of one component of the taskbar is written in React, but go off

            • gchamonlive 2 days ago

              The acceptable amount of browser in the taskbar should be zero. The Teams widget also gobbled memory just because it spawned an Edge worker, even if you never opened or logged into Teams. Once you start making these concessions, no wonder Windows is a bloated mess these days.

              • c-hendricks 2 days ago

                Then your prayers have been answered, the single component that uses React in the Windows taskbar doesn't use a web browser.

        • bboygravity 2 days ago

          There's also not really an alternative to Windows on PCs.

          Linux still has tons of driver issues and is a massive time vampire with its "type magic spells into a terminal for hours to get something done" GUI.

          Mac is way more expensive and doesn't run a ton of apps available on Windows.

        • ghosty141 2 days ago

          What? If your SSD is old and close to failure, you can’t blame the software pushing it over the edge. If a company sells parts that can’t handle writes that are within spec then I’m sorry you bought junk…

          • gchamonlive 2 days ago

            Is that the case in the article? Why did it happen after the latest update? Why did it need a patch then? Your comment seems out of context.

            • gruez 2 days ago

              No, your comment is out of context. If you read the comments in this thread, it's clear that this subthread is presupposing the issue is caused by bad hardware, and debating what obligation (if any) Microsoft has.

              • gchamonlive 2 days ago

                So care to answer? "Is that the case in the article? Why did it happen after the latest update? Why did it need a patch then?"

                • gruez 2 days ago

                  I replied to your other comment asking the same question 10 minutes ago.

      • wiseowise 2 days ago

        Your finger can also be easily cut off by a wood cutter, doesn't mean that it has to be.

    • FirmwareBurner 2 days ago

      Since when is shoddily designed HW bricking itself from OS I/O calls a software issue?

      Sure, Microsoft can issue a SW workaround to mitigate this issue, as they did for the Intel Skylake CPU issues, but that doesn't change the fact that the core issue is faulty HW design.

      Why don't SSD manufacturers test their shit before shoveling it out the door to consumers?

      • bigyabai 2 days ago

        > Why don't SSD manufacturers test their shit

        Most of them do, not that it stops people from buying Temu PCs anyways.

        • FirmwareBurner 2 days ago

          They clearly don't. They just boot windows, run a benchmark and call it a day.

          • bigyabai 2 days ago

            You don't know that. You're manufacturing fiction to support whatever sardonic comment you want to make.

            • FirmwareBurner 2 days ago

              It's been confirmed by other commenters on HN over the years who had worked in/with the SSD manufacturers on firmware/device development. Feel free to not believe it if you want, IDGAF what you believe.

    • palmotea 2 days ago

      There's enough blame to go around, so we don't have to argue about which one party is at fault. BOTH Microsoft and the crappy SSD makers can be wrong at the same time.

      SSD makers should make better products.

      Microsoft should do enough testing so it doesn't push updates that trigger data-losing hardware flaws.

    • cosmic_cheese 2 days ago

      It’s a bit of column A, a bit of column B sort of situation.

      Microsoft should be more careful, but also such blatantly garbage hardware never should’ve shipped. So I’d say hold MS responsible, but also push back against manufactured e-waste like that.

      • gruez 2 days ago

        >Microsoft should be more careful

        What does "more careful" even mean? For instance, if your crappy SSD has some bug that causes data corruption if there are more than 16 writes in flight at a time, and microsoft pushes an update that causes it to do writes more aggressively, should it really be microsoft's fault that your crappy SSD broke? Should microsoft keep throttle windows IO at 2000s HDD throguhput on the off chance that some IDE drive bricks itself when it encounters SSD level workloads?

        • ragequittah 2 days ago

          There was an NVIDIA update recently [1] that caused problems with temperatures not being properly reported. Entirely software, killed some video cards. Though entirely NVIDIA's fault, I wouldn't say it's because they make garbage hardware in any way.

          It seems increasingly that these problems (including the NVIDIA one) are only cropping up on 24H2 machines, which is why I've decided to stick with 23H2 until it's out of its support window. Unfortunately that's coming up pretty soon.

          [1]https://www.neowin.net/news/nvidia-fixes-windows-11-24h2-dri...

        • cosmic_cheese 2 days ago

          It’s not really MS’ fault that crappy SSDs broke, but at the same time it’s only ever Windows that sees these issues. This suggests that this kind of strain on the hardware is unnecessary given the task, which then paints a picture of negligent sloppiness in engineering and QA culture at MS.

          • gruez 2 days ago

            >It’s not really MS’ fault that crappy SSDs broke, but at the same time it’s only ever Windows that sees these issues

            see sibling comment for something similar happening on zfs. linux is also filled with workarounds for crappy SSDs, so "SSDs breaking due to software" isn't exactly exclusively a windows problem. It just gets more attention because of the bigger user base.

            >This suggests that this kind of strain on the hardware is unnecessary given the task, which then paints a picture of negligent sloppiness in engineering and QA culture at MS.

            "straining hardware" shouldn't be a thing for PC components. If it can't handle a given workload, it should throttle/queue the requests, not brick itself. I can grant some leeway for long term wear (eg. heat damage or electromigration), but that's clearly not what's happening here. Can you imagine a network card that bricks itself if you send too many packets, and you have to baby it with how many packets you send in any given amount of time?

            • Ekaros 2 days ago

              GPUs and CPUs control their clock frequency perfectly well depending on load and temperature. I see no reason why every single other component could not do the same.

          • recursive 2 days ago

            Windows has a lot of deployments so it makes sense to see it there more often.

            I don't think Windows has the most hardware compatibility issues.

    • exe34 2 days ago

      if it was Linux they'd be telling us how it's the end of Linux on the desktop.

  • gchamonlive 2 days ago

    I'll keep using Linux then, if my crappy SSD is an excuse to use an actually functional OS, all the better. It's a win-win.

    • jlokier 2 days ago

      That's up to you. But if you have one of the SSD models the article is about, Linux probably won't help. It might make it worse.

      Linux is beautifully optimised for performance. Linux is more likely to write the same data more quickly than Windows, when an application has a large amount of data to write. So if the problem is the SSD fails when a large amount of data is written quickly but within specs, then it's likely to fail on Linux too.

      Unless it's a bug in the Windows driver. But it sounds from descriptions that it isn't a bug in Windows, it's a bug in the SSD that went unnoticed because not many people wrote large amounts of data quickly on systems with those SSDs.

      So it sounds like there may be a model-specific blacklist required in the driver, to detect particular SSDs and reduce the speed they are written to, because they fail when run at the speed they advertise to the OS.

      Or, alternatively, it sounds like those models may require a firmware upgrade from the SSD vendor.

      If either of those are required, a similar workaround will be required in the Linux driver too, to avoid the same problem as soon as someone runs a similar application on Linux.

      Unfortunately, even with a speed-limiting blacklist in the driver, whether in Windows or Linux, those SSD models would probably still corrupt data from time to time, because speed alone is unlikely to be the underlying cause of corruption. A vendor firmware update, or vendor confirmation that a specific change in the driver avoids the SSD bug, is what's required.
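      For what it's worth, the kind of model-specific blacklist described above already exists in Linux: the NVMe driver carries a quirk table (in drivers/nvme/host/pci.c, matched on PCI vendor/device IDs) for misbehaving drives. A toy sketch of the idea only; the quirk names and values below are invented, and matching on model strings is a simplification:

```python
# Illustrative only: quirk names/values are invented, and the real Linux
# table matches on PCI vendor/device IDs rather than model strings.
NVME_QUIRKS = {
    "WD_BLACK SN770": {"cap_queue_depth": 32},
}

def quirks_for(model: str) -> dict:
    """Return the driver workarounds to apply for a drive model, if any."""
    return NVME_QUIRKS.get(model.strip(), {})
```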

      • gchamonlive 2 days ago

        You are saying we are going to see similar reports for Linux when using these parts? Why did it start happening after a Windows update then? And why did it need a patch?

        • gruez 2 days ago

          >You are saying we are going to see similar reports for Linux when using these parts?

          The linux install base is orders of magnitude smaller than windows, so there's less of a chance that it gets picked up by tech media. Moreover if it's really an issue with the underlying hardware itself, a more technical user base would be able to trace the issue to the actual hardware, rather than going straight to social media with "ZOMG windows update bricked my PC!"

          >Why did it started happening after a windows update then?

          Some sort of code change that changes the behavior of the drive, but is technically within spec.

          >And why did it need a patch?

          Because putting in a patch doesn't imply you're at fault. The linux kernel is filled with workarounds for crappy hardware as well. The existence of those workarounds don't imply linux was somehow at fault for those bugs.

          • gchamonlive 2 days ago

            > if it's really an issue with the underlying hardware itself, a more technical user base would be able to trace the issue to the actual hardware, rather than going straight to social media with "ZOMG windows update bricked my PC!"

            So what you are saying is that users went "ZOMG windows update bricked my PC!" And that prompted the article?

            > Because putting in a patch doesn't imply you're at fault

            If you designed a system for a certain range of hardware, your latest update broke that design, and you had to go back and put out a patch, I'd say it's pretty much Microsoft at fault here, even though your comment is right

            • gruez 2 days ago

              >So what you are saying is that users went "ZOMG windows update bricked my PC!" And that prompted the article?

              Most tech "journalism" is basically regurgitating stuff from reddit/twitter/other tech sites, so yes.

              >If you designed a system for certain range of hardware, your latest update broke that design and you had to go back and put a patch, I'd say it's pretty much Microsoft at fault here, even though your comment is right

              So there's no accounting for which party broke the spec? Whoever touched it last is at fault?

        • colejohnson66 2 days ago
          • gchamonlive 2 days ago

            Good thing the defaults for domestic systems are ext4 and btrfs, not ZFS, which needs a ton of memory and server-grade hardware?

            • jlokier 2 days ago

              ext4 or btrfs won't save you either.

              Although that bug report is for ZFS on the same SSD as the Windows article (WD SN770), the faults described in the Linux bug report look like they would also happen with ext4 or btrfs.

              So I searched for reported problems with SN770 and ext4, and found enough results to convince me.

              That particular SSD fails with Linux and ext4 too.

  • dundarious 2 days ago

    What is the basis for this statement that "the problem is crappy SSDs"? From TFA:

    > The report speculates that this could be due to a malfunction in the drive cache subsystem. Symptoms are said to recur predictably after a system reboot, which temporarily restores drive visibility but does not address the underlying fault. Affected users are said to be consistently experiencing failure under similar workload patterns within minutes.

    > Further analysis has suggested that SSDs built on Phison NAND controllers especially DRAM-less models exhibit failures at lower write volumes. Reports suggest that select enterprise-grade HDDs also display comparable symptoms under intensive writes.

    > The issue definitely bears high similarity to the WD SN770 host memory buffer (HMB) flaw, and in this case, too, restricting or disabling HMB yields no improvement. A suspected memory leak in Windows’ OS-buffered cache region could be the problem.

    Speculation almost entirely revolves around the Windows disk cache subsystem.

    That being said, I completely disagree with most of the replies you are getting. If Intel/AMD released a CPU that exhibited faults at high instruction throughput (making good use of all execution ports, etc.) but within the advertised power limits, I would in no way blame whatever software exhibited the problem, I would blame the chip.

  • oncallthrow 2 days ago

    Please explain to me in precise terms how the headline is "wrong".

    • dkiebd 2 days ago

      SSDs bricking themselves when you write to them is not a Windows issue.

  • lexlambda 2 days ago

    Reading the article further, they do note that "A suspected memory leak in Windows’ OS-buffered cache region could be the problem.", although I concede that no source is provided for that.

  • wildpeaks 2 days ago

    The result is the same from the point of view of the end user: the hardware would have continued working without the software update.

    • notanewbie 2 days ago

      End user here. 4 year old laptop with original storage drive (already replaced original SSD system drive with a new one) updated and corrupted the 1TB HDD. Pulled it out of the machine (HP laptop, turns out it was a Seagate HDD) and replaced it with another SSD. Yes, this laptop was originally a Windows 10. Yes, it was upgraded to Win11 last year.

      Blaming the end user for the update corrupting an existing drive that has had NO ISSUES for over 4 years is about the dumbest and most condescending thing I've seen. Thanks for being absolutely no help at all, everyone but wildpeaks. The very first reply is someone who apparently went over to the FB post and spammed every comment with "CLickbait (sic) doesn't apply to anyone blah blah".

      Considering the giant SHOVE that Microsoft has subjected us to in the last few years to upgrade to the latest and greatest OS, you would THINK that SOMEONE would remember that a lot of those older machines that are now running Win11 should be considered.

      Also, anyone replying to a Windows problem with "well, you should be using Linux, too bad so sad" needs to check their attitude at the door. What an entitled life you must lead.

      Going back to wrestling with trying to access the now external HDD. Do I sound bitter? You're d**ed straight.

  • icehawk 2 days ago

    > the problem is crappy SSDs (an age-old problem) that can't handle/brick under some workload presumably involving high write rates

    which part of the article leads you to believe this?

  • cm2187 2 days ago

    The article also mentions enterprise SSDs.

  • fifteen1506 2 days ago

    Sometimes people have a limited amount of money available, and buying a new SSD for the sake of buying one is not at the top of their priority list.

    Sure, we can always contract a divination specialist to guess which hardware may fail next, but I thought the (paid) Windows testers had that job.

FirmwareBurner 2 days ago

>The issue purportedly surfaces during heavy write operations to certain NVMe SSDs as well as HDDs, especially when continuous sustained writes approach 50 GB on drives and exceed 60 per cent controller usage.
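If anyone wants to probe a scratch drive for this, the claimed trigger (tens of GB of sustained writes) is easy to approximate. A hedged sketch: the ~50 GB figure comes from the article, while the chunk size and fsync cadence are arbitrary choices of mine:

```python
import os

def sustained_write(path, total_bytes, chunk_bytes=4 * 1024 * 1024):
    """Stream `total_bytes` of incompressible data to `path` in fixed-size
    chunks, fsyncing each chunk so writes actually reach the device instead
    of sitting in the OS page cache."""
    chunk = os.urandom(chunk_bytes)
    written = 0
    with open(path, "wb") as f:
        while written < total_bytes:
            written += f.write(chunk[: total_bytes - written])
            f.flush()
            os.fsync(f.fileno())
    return written

# e.g. sustained_write("/mnt/scratch/fill.bin", 50 * 1024**3) to approach
# the reported ~50 GB threshold -- only on a drive whose data you can lose.
```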

  • bobmcnamara 2 days ago

    Developers. Developers. Developers. Developers.

    • throwaway290 2 days ago

      Well, Ballmer said it back when MS needed them to train the LLMs, after all. It was a different time!

jrs235 2 days ago

Great. So this might be the cause of all my problems. I just happened to be cloning drives to swap one out for a larger one, and Windows 11 is so slow. Also it seems there were issues found that needed fixing after cloning.

tropicalfruit 2 days ago

yep, bricked my beelink SER9 which I'd had for only 8 months

and i see the same error under windows update on my backup pc now:

2025-08 Cumulative Update for Windows 11 Version 24H2 for x64-based Systems (KB5063878) (26100.4946) Failed to install on ‎17/‎08/‎2025 - 0x80073712

article says August but I had this update stuck on failed for 2 months. bricked my last pc, now waiting for it to brick my current one.

how long does it take MS to fix something..

  • tempodox 2 days ago

    Until hell freezes over?

    • lazide 2 days ago

      Honestly, I’ve seen a concerning behavior where the failures seem to escalate and get worse over time.

      Flakey WiFi at the start? Just flat out broken now.

      Weird stuff like nearly impossible to find/configure network connections? Worse over time.

      It’s like whatever criteria the MS PM’s are using is prioritizing ‘engagement’ (aka how frustrated and angry someone gets at the OS) over anything else.

      • kotaKat 2 days ago

        > Flakey WiFi at the start? Just flat out broken now.

        I'm finding it super concerning that it seems like Microsoft isn't even pushing driver updates properly nowadays. Everything just sits as an 'optional update' hidden away buried under a couple more menus, and the latest drivers are jumbled in the list, and you can't tell which driver is for which device.

        And that's how I discovered my WiFi was breaking out of the box because Microsoft had newer drivers they didn't push down...

        • jonbiggums22 a day ago

          In my case I turned driver updates off since they kept installing a broken video driver that left me with black screens on boot.

      • vachina 2 days ago

        Microsoft doesn't sell the OS now. They sell their services using Windows as a platform. Naturally Windows is going to stagnate.

        • stackskipton 2 days ago

          As an SRE who deals with Azure: their services are not even Windows OS focused. They do sell a ton of that, mainly expensive solutions for legacy garbage code, but a ton more assumes open-source languages running on Linux.

        • tempodox 2 days ago

          And sadly Apple is going the exact same route now. Both are stagnating platforms now where bugs only accumulate and hardly get fixed and even the service delivery parts are just barely working. Enshittification in full swing. Both are more than ripe to be eaten by an alternative that puts users front and center again.

  • alecco 2 days ago

    > how long does it take MS to fix something..

    I wouldn't be surprised if the related dev/QA teams were part of recent layoffs/offshoring.

    • mikestew 2 days ago

      Microsoft famously got rid of their test teams something like ten years ago.

    • ComputerGuru 2 days ago

      What QA team? No seriously, all OS QA is now virtualized and automated, no more actual QA team.

      • alecco 2 days ago

        They call it something else, and it seems to be integrated into Dev, but they do QA. Also, I said "dev/QA teams". Dev as in development.

dataflow 2 days ago

What actions can we take here? Is there a list of the particular drives that would be affected by this?

igtztorrero 2 days ago

The most basic thing an operating system should do is keep your files and information safe. Microsoft Windows has been steadily declining in that regard. It's time to punish them and let them see the consequences. It's time to switch to Linux: Debian or Ubuntu.

  • exe34 2 days ago

    I grew up with Windows 95. I don't think this is a new trend! To this day I would never trust any computer with my only copy of anything whatsoever that I care about. Three copies on different machines is the minimum if I intend to keep it.

elorant 2 days ago

Just a reminder to switch to Linux in October when win10 reaches EOL.

  • theandrewbailey 2 days ago

    Don't wait until October to switch. Do it now. The water is fine.

wistleblowanon 2 days ago

Another real reason not to move to 11. Thank you Microsoft.

  • lucb1e 2 days ago

    This was being downvoted, but there is actually research showing a link between code age and security bugs: https://security.googleblog.com/2024/09/eliminating-memory-s... "Code matures and gets safer with time, exponentially, making the returns on investments like rewrites diminish over time as code gets older."

    Not an argument for staying on an unsupported OS, but an argument for staying on a stable release as long as possible if you prefer stability over features

    • diggan 2 days ago

      > but there is actually research showing a link

      I think most software developers get to understand this sooner rather than later once they're working with production environments. Most of the time stuff breaks, it's because someone changed something and failed to consider something else; things don't often break by themselves. Sometimes, though, things break by themselves because someone in the past forgot to account for something that would happen far in the future.

Fairburn 2 days ago

If your rig is stable and you don't care about new features, block all updates and turn the update service off. If you have a prod rig that is stable, leave it to MS to frack it up. /s

ksec 2 days ago

Neowin. My old account there is from 2001, although I stopped reading around 2010 when I found them too anti-Apple and extremely pro-Microsoft. But there seems to be a surge in submissions, and I am not sure if it is good enough for HN.