vardump 9 hours ago

I wonder how long those drives can be powered off before they lose data. And how long until they lose all functionality, once the critical bookkeeping data disappears.

  • magicalhippo 8 hours ago

    This would depend on how worn they are. Here's an article describing a test a YouTuber did[1] that I watched some time ago. The worn drives did not fare that well, while the fresh ones did ok. Those were TLC drives, though; for QLC I expect the results to be much worse overall.

    [1]: https://www.tomshardware.com/pc-components/storage/unpowered...

    • 0manrho 7 hours ago

      I remember that post. Typical Tom's quality (or lack thereof).

      The only insight you can glean from that is that bad flash is bad and worn bad flash is even worse, and even that's a stretch given the tiny sample size and the lack of a control group.

      The reality is that it's non-trivial to determine data retention/resilience in a powered-off state, at least if you want to arrive at a useful and reasonably accurate generalization of the form "X characteristics/features result in poor data retention/endurance when powered off in Y types of devices," and to provide the receipts to back that up. There are far more variables than most people realize going on under the hood with flash, and in how different controllers and drives are architected (hardware) and programmed (firmware). Thermal management is a huge factor that is often overlooked or misunderstood, and it has a substantial impact on flash endurance (and performance). I could go into more specifics if there's interest (storage at scale/speed is my bread and butter), but this post is long enough.
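      To give a rough feel for how much storage temperature alone moves the needle, here's a minimal sketch (Python) of the Arrhenius acceleration model that's commonly applied to charge-loss retention; the ~1.1 eV activation energy and the 1-year-at-30C baseline (JEDEC's client-class retention figure) are assumptions for illustration, not measurements from the article:

        import math

        K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K
        EA_EV = 1.1                # assumed activation energy for charge loss (eV)

        def acceleration_factor(t_baseline_c, t_storage_c):
            # Arrhenius acceleration of charge loss at t_storage_c relative to t_baseline_c.
            t_base_k = t_baseline_c + 273.15
            t_store_k = t_storage_c + 273.15
            return math.exp((EA_EV / K_BOLTZMANN_EV) * (1.0 / t_base_k - 1.0 / t_store_k))

        # Drive rated for 1 year of powered-off retention at 30 C, but stored warmer:
        for temp_c in (40, 55):
            af = acceleration_factor(30, temp_c)
            print(f"storage at {temp_c} C: ~{af:.1f}x faster charge loss, ~{365 / af:.0f} days instead of a year")

      The exact numbers depend heavily on the specific flash and its wear level, which is exactly the point about how many variables are in play; the sketch only shows why "it sat in a hot garage" can matter as much as which flash is inside.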

      All that said, the general mantra remains true: more bits per cell generally means the data in each cell is more fragile/sensitive, though that's usually discussed in the context of write-cycle endurance.
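      As a back-of-the-envelope illustration of why (the normalized voltage window here is arbitrary; only the 2^bits state count matters):

        CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}
        VOLTAGE_WINDOW = 1.0  # normalized threshold-voltage window

        for name, bits in CELL_TYPES.items():
            states = 2 ** bits
            # Margin between adjacent threshold-voltage states: the smaller it is,
            # the less charge leakage it takes for a cell to be read back wrong.
            margin = VOLTAGE_WINDOW / (states - 1)
            print(f"{name}: {states:2d} states, relative margin per state ~{margin:.2f}")

      QLC packs 16 states into the same window TLC splits into 8, so to a first approximation the same slow charge leak crosses a read boundary in roughly half the time, powered on or off.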