wtallis 2 years ago

Rather silly title, but that's true of many papers reusing this meme.

The abstract presents an alternative to wear leveling that they refer to as "capacity variance", also known as breaking compatibility with all previous filesystems, drivers, and bootloaders. Wear leveling has been a necessary evil, because without it flash storage simply would not be usable at all as a hard drive replacement/successor. Nowadays some of the pieces are in place to allow software to interact with SSDs in a more natural fashion, but it still isn't possible to completely forgo the FTL that emulates hard drive behavior.

  • remram 2 years ago

    The best I can say if I get an article titled "Wear leveling considered harmful" is "no, it's not".

    "Considered harmful" is appropriate for a PSA that the whole community considers something to be harmful. It's terrible for an essay trying to argue that it should be avoided even though that is not the community standard (yet).

    The title should be "Why wear leveling is harmful", "A case against wear leveling", or something similar. I don't understand why people keep using those stupid titles.

    • eternityforest 2 years ago

      If they are proposing something else that does what wear levelling does, or are in fact just suggesting we move the levelling to the host, the title should be something like "Future replacements for SSD wear levelling" or "Next generation wear levelling for solid state storage".

      Obviously we need some leveling somewhere or there's gonna be problems when someone uses an overwritable file....

    • bitwize 2 years ago

      "Considered harmful" is shorthand for "Why I consider this harmful". Has been since the first one, "GO TO Considered Harmful" by Dijkstra, which came out in a time when everybody used GOTO or its equivalent.

      • afiori 2 years ago

        Dijkstra's title was a shorthand for "I believe you and everybody else should consider this harmful"; appropriate for the strong claim that GOTO was bad and was making programming as a whole worse.

        Using "X considered harmful" in place of "Y is better than X" is just capitalizing on the meme.

      • remram 2 years ago

        Even that has internal logic. "GOTO is considered harmful in the projects that I run, so we don't use it there, and this is why". If an SSD manufacturer explained that they consider the feature harmful, and they are no longer including it in products, that's something.

        Here it's a consumer expressing that they don't like the feature and think they would like someone to make products a different way. Completely different.

    • ranger_danger 2 years ago

      You don't understand why clickbait is clickbait?

  • pclmulqdq 2 years ago

    I am not a storage person, but I have been storage-adjacent for a while. That said, it looks like storage abstractions need a huge overhaul, and we are just sitting here pretending things are fine. SSDs are just very different devices than HDDs. Zoned NVMe, which gives an append-only log abstraction, is a big step in the right direction.

    By the way, it turns out that hard drives today also include a fair amount of software (not as much as SSDs, but a lot) to emulate a hard drive of yesteryear. The time of the "block device" may be over.

    • AnIdiotOnTheNet 2 years ago

      Every computing abstraction needs a huge overhaul. We're working on a mountain made of layered abstractions wrapped around concepts from the 70s.

  • cryptonector 2 years ago

    Of all the filesystems out there, ZFS is the best placed to deal with that break and do its own, intelligent wear leveling. So I wouldn't mind ;)

    More seriously, the OS can add its own layer between raw, unleveled SSD, and the filesystem, and do its own wear leveling.

    But, really, the best solution to the need for wear leveling is to switch to CoW filesystems. The biggest problem is the need for many uberblocks, but I guess each transaction's root blocks can pre-allocate the next one's, and then the real uberblocks can be overwritten every N transactions, thus limiting the number of writes to uberblocks. Plus, of course, there should be many uberblocks, not all written every N transactions.
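
    Roughly what I have in mind, as a sketch (made-up names and structures, nothing like real ZFS code): rotate through many uberblock slots and only touch one of them every N transactions.

        #include <stdint.h>

        #define UBER_SLOTS 128   /* many uberblock copies spread across the device */
        #define N_TXG      64    /* only rewrite an uberblock every N transactions */

        struct uberblock { uint64_t txg; uint64_t root_addr; };
        static struct uberblock uber[UBER_SLOTS];   /* stand-in for the on-disk slots */

        /* Each committed root would also record the pre-allocated address of the
         * next root, so import can walk forward from the newest uberblock found. */
        void commit_txg(uint64_t txg, uint64_t root_addr)
        {
            if (txg % N_TXG == 0) {
                /* Round-robin: any single slot is overwritten only once per
                 * UBER_SLOTS * N_TXG transactions. */
                struct uberblock *u = &uber[(txg / N_TXG) % UBER_SLOTS];
                u->txg = txg;
                u->root_addr = root_addr;
            }
            /* else: no uberblock write at all for this transaction */
        }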

    • mgerdts 2 years ago

      > ZFS is the best placed to deal with that break and do its own, intelligent wear leveling

      Part of wear leveling is to rewrite old static data to blocks that are more worn so that the relatively fresh blocks that have been long squatted upon can take their turn at getting wear.

      ZFS has a long history of not being great at moving data to a different location, even though some of the brightest minds in this space have been highly motivated to solve this. Google "BP rewrite".

      • cryptonector 2 years ago

        I'm quite aware of the BP rewrite problem.

        I should explain that what I had in mind is that nearly worn out blocks should be marked no-longer-to-be-written-to. I left that part out due to writing too quickly. The point is there's no need to move blocks, just stop writing to worn-out blocks, and then the CoW nature of the filesystem will take care of the rest. Eventually you'll run out of non-worn-out blocks and the filesystem will become effectively read-only.

        (Naturally one would want to keep a count of the number of worn-out blocks in the volume, and one would need an internal file in which to keep track of worn-out blocks, probably.)
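
        To make that concrete, here's a rough sketch of the allocator side of the idea (invented names, not actual ZFS code): keep a persistent bitmap of retired blocks and never allocate from them again.

            #include <stdbool.h>
            #include <stdint.h>

            #define NBLOCKS (1u << 20)              /* erase blocks in the pool */

            /* Both bitmaps would live in internal files/objects on the pool. */
            static uint8_t retired[NBLOCKS / 8];    /* worn out, never write again */
            static uint8_t in_use[NBLOCKS / 8];     /* currently holding live data */

            static bool test_bit(const uint8_t *m, uint32_t b) { return m[b / 8] & (1u << (b % 8)); }
            static void set_bit(uint8_t *m, uint32_t b) { m[b / 8] |= (uint8_t)(1u << (b % 8)); }

            /* Called when the device reports a block as nearly worn out. */
            void retire_block(uint32_t b) { set_bit(retired, b); }

            /* CoW allocator: skip retired blocks; data already in them stays
             * readable and simply ages out as new copies land elsewhere. */
            int alloc_block(uint32_t *out)
            {
                for (uint32_t b = 0; b < NBLOCKS; b++) {
                    if (!test_bit(retired, b) && !test_bit(in_use, b)) {
                        set_bit(in_use, b);
                        *out = b;
                        return 0;
                    }
                }
                return -1;  /* nothing left to write to: pool goes effectively read-only */
            }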

        As for BP rewrite, the fundamental design fault in ZFS that makes BP rewrite hard is that block pointers include physical locations, so the checksums of blocks that contain block pointers must change whenever any of the pointed-to blocks are relocated. What should have happened instead is that physical locations should have been kept separate, so that block checksums never bind physical block addresses: a) block pointers should have had no physical location in them, b) every block that contains block pointers should have been followed immediately by a "cache" array of physical addresses corresponding to the logical block pointers in that block, c) that "cache" of physical block addresses could then be easily excluded from block checksum computations, d) the caches would have been easy to overwrite.

        Once block checksums in block pointers do not bind physical addresses, you can then traverse the tree and relocate blocks by copying them to new locations then re-writing the physical block addresses in the caches (that are now not bound into those checksums). This is a very simple process. You do need to avoid creating multiple copies of blocks that are reachable via multiple paths (because of hardlinks).
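
        A sketch of the layout I'm describing (hypothetical structures, not ZFS's actual on-disk format):

            #include <stdint.h>

            #define NPTRS 128

            /* Logical block pointer: checksummed, but carries no physical address. */
            typedef struct {
                uint8_t  checksum[32];   /* cryptographic checksum of the child block */
                uint64_t logical_id;     /* stable logical identity of the child */
            } blkptr_t;

            /* An indirect block: the pointer array is covered by this block's own
             * checksum (stored in its parent), while the trailing "cache" of
             * physical addresses is excluded from all checksums. */
            typedef struct {
                blkptr_t ptrs[NPTRS];        /* bound by checksums */
                uint64_t phys_cache[NPTRS];  /* NOT bound by checksums, overwritable */
            } indirect_block_t;

            /* Relocating a child then means: copy the child to its new home and
             * overwrite one cache entry in place; no checksum anywhere changes. */
            void relocate_child(indirect_block_t *parent, int slot, uint64_t new_phys)
            {
                parent->phys_cache[slot] = new_phys;
                /* ...then rewrite just the cache region of the parent on disk. */
            }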

        This proposal tends to draw replies to the effect that such a design increases the risk of checksum collisions. That objection is fallacious for cryptographic checksums.

        I continue to be shocked that the BP rewrite problem isn't properly understood.

        However, it is probably too late to fix ZFS to make BP rewrite easy. The reason is two-fold: 1) the lack of BP rewrite has been worked around for things like vdev evacuation, 2) moving physical locations out of the block pointer would essentially yield a completely different on-disk layout for znodes and indirect nodes and so on, and moving to the new on-disk format would result in special-casing a lot of code, and would be quite a process. It would almost be easier to start over.

        • mgerdts 2 years ago

          > The point is there's no need to move blocks, just stop writing to worn-out blocks, and then the CoW nature of the filesystem will take care of the rest.

          Suppose the file system stores photos and videos that the owner tends to keep forever without ever modifying them. The first files will land on NAND blocks with close to zero P/E cycles and will stay there forever. Since they are never modified, CoW will never cause them to stop squatting. If the drive fills with content like this, endurance doesn't matter unless there's a crazy amount of atime updates.

          If the static photo and video collection grows to occupy 75% of the space and a more dynamic workload uses some portion of the remaining 25%, about 75% of the drive’s endurance is not accessible.

          • rasz 2 years ago

            FTL handles this case by silently moving data around. Cold data retention is REALLY BAD in modern NAND generations using increasingly smaller geometries. Degraded data requires multiple slower reads. Samsung even had to deploy an emergency FW update to make this process more aggressive on the 840 Evo, where the drive slowed to a crawl reading >1 year old data.

            https://www.techspot.com/news/60501-samsung-addresses-slow-8...

          • cryptonector 2 years ago

            That's a problem with the SSD's controller. Wearing out in principle should only render cells read-only. But some SSDs are more aggressive.

    • eternityforest 2 years ago

      How does CoW replace levelling? Some CoW systems may include leveling, but CoW doesn't inherently level, does it?

      Plus, BTRFS is known to heavily amplify small writes, so not all of them are suitable.

      • cryptonector 2 years ago

        > How does CoW replace levelling?

        I answer that here: https://news.ycombinator.com/item?id=31903729

        > Plus, BTRFS is known to heavily amplify small writes, so not all of them are suitable.

        The ZFS intent log (ZIL) is really misnamed. It's not an intent log. The ZIL exists to amortize (reduce) the write amplification of CoW.

        In LSM-type databases the log is embraced as part of the on-disk format, but in reality the ZIL is indeed a critical part of the ZFS on-disk format -- the ZIL is optional, but you wouldn't want to not have it.

  • ChrisLomont 2 years ago

    >also known as breaking compatibility with all previous filesystems....

    Pretty much every filesystem in use has the ability to mark sections of a drive as bad and unusable. Pretty much every SSD in use has more blocks than it shows to the filesystem. It would not be hard to leverage both of these and likely implement this paper's proposal with zero change to the filesystem or drivers, and the SSD could remap bad sectors to the end of the space on the drive, allowing bootloaders and the like to work exactly as they do now.

    • wtallis 2 years ago

      Remapping bad sectors to the end of the drive doesn't solve much. For starters, GPT actually expects the end of the drive to be usable, so we'd need to introduce a new partition table format or make the drive an active participant in updating the partition table.

      But more importantly, you cannot simply hand-wave away the challenges of having NAND erase blocks that are huge compared to both NAND page sizes and the allocation units used by host software. If you make a drive that is at first glance "compatible" with existing software and filesystems but behind the scenes has to do e.g. 24MB read-modify-write cycles for any size of host write, the drive may be much simpler than current SSDs but will be dead within a week or two at most (and painfully slow until then).
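
      To put a rough number on it (my own illustration, assuming 4 KiB host writes against that 24MB erase block):

          \text{write amplification} \approx \frac{24\ \text{MB}}{4\ \text{KiB}} \approx 6000

      Every small host write would burn thousands of times its own size in program/erase work, which is why such a drive would die so quickly.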

      • ChrisLomont 2 years ago

        SSDs already do all you just listed behind the scenes. The FTL is already mapping such things behind the scenes so filesystems don't have to implement it.

        As the paper mentions, changes to these cycles lengthen the lifetime. The filesystem at the OS level won't change.

  • cma 2 years ago

    FTL == flash translation layer

  • kragen 2 years ago

    Maybe:

    ① when we really need wear leveling because of previous filesystems, we should implement it as a kernel module instead of in the SSD firmware? Then we can debug the failures it causes, tune its performance to our use case, and tune the software running on top of it to perform well with its strengths and weaknesses. Compatibility with previous bootloaders isn't important because writing a bootloader for a given piece of hardware is a one-afternoon task.

    ② The nearly 3× increase in filesystem capacity the authors report might be worth a significant amount of software work?

    ③ Hard disks are so different from SSDs that we're probably paying much bigger penalties than that 3× for forcing SSDs to pretend to be hard disks?

    To be concrete, hard disks are about eighty thousand times slower for random access than modern SDRAM: 8000 μs versus 0.1 μs. Their sequential transfer rate is maybe 250 megabytes per second, which means that the "bandwidth-delay product" is about 4 megabytes; if you're accessing data in chunks of less than 4 megabytes, you're wasting most of your disk's time seeking instead of transferring data. 25 years ago this number was about 256K.

    A Samsung Evo 970 1TB SSD over NVMe PCIe3 can peak at 100k iops and 6.5 GB/s (personal communication) which works out to 64 kilobytes. I've heard that other SSDs commonly can sustain full read bandwidth down to 4KiB per I/O operation. But writes are an order of magnitude slower (100 μs instead of 10 μs) and erases are an order of magnitude slower than that (1000 μs instead of 100 μs). So, aside from the shitty endurances down in the mere hundreds of writes, even when you aren't breaking it, the SSD's performance characteristics are very, very different from a disk's. If you drive it with systems software designed for a disk, you're probably wasting most of its power.

    For example:

    ⓐ because the SSD's absolute bandwidth to memory is on the order of 10% of memcpy bandwidth, and 30× the bandwidth from a disk, lots of architectural choices that prefer caching stuff in RAM pay off much more poorly with an SSD (possibly worsening performance rather than improving it), and architectural choices that put one or more memcpys on the data path cost proportionally much more.

    ⓑ because the "bandwidth-delay product" is 60–1000 times smaller, many design tradeoffs made to reduce random seeks on disks are probably a bad deal. At the very least you want to reduce the size of your B-tree nodes.

    ⓒ because writes are so much more expensive than reads, design tradeoffs that increase writes in order to reduce reads (for example, to sort nearby records together in a database, or defrag your disk) are very often a bad deal on SSDs.

    ⓓ because erases (which affect large contiguous areas in order to get acceptable performance) are so much more expensive than reads or writes, sequential writes are enormously cheaper than random writes, but in a different way than on disks. SMR has a similar kind of issue.

    ⓔ we have the ridiculous situation described in the paper where the SSD wastes 65% of its storage capacity in order to perpetuate the illusion that it's just a really fast disk. 65%! (If the simulator they used is correct.)

    • mgerdts 2 years ago

      > The nearly 3× increase in filesystem capacity the authors report might be worth a significant amount of software work?

      They aren't claiming a 2.94x capacity increase; they are claiming 2.94x endurance. Per the paper's claims, drives tend to have about 11% over-provisioning. Assuming compression is not introduced concurrently, the maximum increase in capacity would be 11%.

      > A Samsung Evo 970 1TB SSD over NVMe PCIe3 can peak at 100k iops and 6.5 GB/s (personal communication)

      I think you need to check the 6.5 GB/s. A Samsung 970 Evo is a PCIe Gen 3 x 4 device with peak read throughput of 3.4 GB/s [1]. Even with a faster controller and/or NAND, PCIe Gen 3 x 4 will limit the throughput to 3.938 GB/s [2]. You may be able to get combined read+write performance in excess of 4 GB/s, but I wouldn't count on it. Perhaps instead of a 970 Evo you meant a 980 Pro, which is capable of closer to 1 million IOPS and 7 GB/s.

      Also, you should not assume that the same workload that can generate the maximum number of IOPS is the same one that can generate the maximum throughput. Most commonly IOPS are measured with something like 4 KiB blocks with queue depth 32 and perhaps multiple threads. Maximum throughput is generally achieved with 128 KiB or larger sequential IO with a lower queue depth and/or threads.

      > But writes are an order of magnitude slower (100 μs instead of 10 μs)

      In all of the NVMe datasheets that I've read that provide sufficient detail (i.e. not the consumer drives), writes are generally faster than reads. This is implied by [1] by the fact that it supports 15000 read IOPS and 50000 write IOPS at QD1 with one thread. I think a key reason that writes tend to be faster is because they are written to RAM and written back to NAND (perhaps with better packing) as the drive has spare cycles and/or RAM fills. This brings up the importance of power loss protection, which is generally not found on client drives like the 970 Evo or 980 Pro.

      1. https://semiconductor.samsung.com/resources/data-sheet/Samsu...

      2. https://en.wikipedia.org/wiki/PCI_Express#History_and_revisi...

      • kragen 2 years ago

        Thank you very much for the corrections!

blagie 2 years ago

This paper is nonsense. The workloads presented aren't what wear levelling was designed for. You might have files on your hard drive which are written to many times per second. This includes some log files (e.g. a web server access log), some types of database files, as well as (sometimes) access-time updates on files you just read (unless you set noatime on your file system, which is a really good idea). Wear levelling was implemented because drives *were* failing, quickly (in a few months).

An attacker, if they targeted this, might be able to break your SSD in about 15 minutes, just by writing the same block over and over. That might even be possible with careful engineering of things like browser storage.

With wear levelling, I've *never* had an SSD wear out. Almost no matter the write amplification and access pattern, they *won't* wear out. A typical SLC drive will be able to handle 100,000 write cycles. Writing 1TB at 300MB/sec takes about an hour. 100,000 hours is about 11 years of writing data at pretty close to peak SATA throughput. That doesn't mean they won't fail, but not like that.
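
Spelling out that arithmetic (same back-of-the-envelope numbers: a 1TB SLC drive, 100,000 cycles, sustained writes at 300MB/sec):

    t_{\text{wear}} \approx \frac{1\ \text{TB} \times 100{,}000}{300\ \text{MB/s}} \approx 3.3 \times 10^{8}\ \text{s} \approx 10.6\ \text{years}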

Making an open source SSD and implementing a sane layer with things like capacity variance and a file system designed for SSDs makes complete sense, but not for crazy reasons like this one. A good reason, which I think was first raised by Alan Cox (or another of the kernel developers), is that some techniques designed to make hard drives resilient to power failures have the opposite effect on SSDs. SSDs write in larger blocks, and an interrupted write can wipe out unrelated data.

  • kasabali 2 years ago

    > Almost no matter the write amplification and access pattern, they won't wear out. A typical SLC drive will be able to handle 100,000 write cycles. Writing 1TB at 300MB/sec takes about an hour. 100,000 hours is about 11 years of writing data at pretty close to peak SATA throughput.

    Quoting SLC endurance cycle numbers is disingenuous.

    Try 3000/1000 cycles, which are more realistic for consumer TLC/QLC drives, and it suddenly becomes less impressive.

    It'll become even less impressive when PLC/HLC cells are deployed in consumer space.

  • slaymaker1907 2 years ago

    I don't think an attack like the one you describe would be feasible (assuming FS support of capacity variance) unless the target drive was near maximum capacity, which already causes issues with write amplification. You're assuming that an attack could write the same block over and over, but that's not simple even with capacity variance since you'd still need GC. The difference is that GC would only consider empty and partially empty blocks, whereas wear leveling might move data from a completely utilized block (note I'm talking about a flash block, which is much larger than an FS block).

    Suppose your SSD had the following blocks, where E means erased and x means deallocated (so it is free, but you would need an erase to use it):

        B0: [1,x,4]
        B1: [x,E,E]
        B2: [3,10,x]
        B3: [11,12,13]
    
    With wear leveling, a write to address 4 might involve erasing B3 as data gets moved around, even though B3 is fully utilized and holds contiguous addresses, because B3 might have far fewer writes than the other blocks. Capacity variance would instead use all available blocks and merely stop using any block that gets too many writes. Equalizing writes among your working set of blocks that get data updates isn't too difficult; it's the RO data that gets tricky to work around.
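
    A toy sketch of that difference in victim selection (invented names, nothing vendor-specific): GC only considers blocks containing deallocated pages, while wear leveling may also evict a fully valid, cold block like B3.

        #include <stdint.h>

        typedef struct {
            int      valid_pages;     /* pages still holding live data */
            int      invalid_pages;   /* deallocated pages, reclaimable by erase */
            uint32_t erase_count;     /* how worn this block already is */
        } flash_block_t;

        /* GC: pick the block with the most reclaimable (invalid) pages. */
        int gc_pick_victim(const flash_block_t *blk, int n)
        {
            int best = -1, best_invalid = 0;
            for (int i = 0; i < n; i++) {
                if (blk[i].invalid_pages > best_invalid) {
                    best = i;
                    best_invalid = blk[i].invalid_pages;
                }
            }
            return best;  /* never a fully valid block such as B3 above */
        }

        /* Wear leveling: may pick a block purely because it is cold. */
        int wl_pick_victim(const flash_block_t *blk, int n)
        {
            int best = 0;
            for (int i = 1; i < n; i++) {
                if (blk[i].erase_count < blk[best].erase_count)
                    best = i;
            }
            return best;  /* could be B3 even though every page in it is live */
        }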

    In fact, something the paper doesn't discuss but that is totally possible with current APIs would be for a drive manufacturer to just increase the amount of storage held in reserve for GC and hide that extra capacity from the OS. As blocks go bad, you just decrease capacity in the hidden area until the drive gets to the point where it no longer has enough reserve capacity for efficient GC, at which point the drive reports failure. I think manufacturers may do this already if they detect a bad block; the proposal would just add extra conditions for when the drive considers a block failed.

    One weakness of the paper is they don't discuss how fast drive capacity would decrease over time.

    • blagie 2 years ago

      I think my post was unclear. The attack (and rapid wear) were both in the absence of wear levelling (of which capacity variance is a form).

      My points, which I admit got lost in there:

      1) With wear levelling, write amplification doesn't matter much, since you have nearly infinite life in either case. SSDs fail, but only in highly esoteric scenarios do they fail due to going over write limits.

      2) The write patterns in the paper make no sense relative to real-world usage patterns or to what wear levelling is trying to address.

  • rasz 2 years ago

    Link some currently manufactured and available consumer oriented SLC drives :)

zootboy 2 years ago

I'm not super impressed with this paper. It doesn't appear that they've done any tests on actual SSDs. From the paper:

> we evaluate three representative WL techniques [3, 6, 7] that have been compared against a wide variety of other WLs

and

> We extend FTLSim [9] for our experiments.

So they picked some algorithms and implemented them in a simulator, with no validation that this is what actual SSDs do?

Also, the code repo that they list consists of nothing but a readme file:

https://github.com/ZiyangJiao/FTLSim-WL

  • seoaeu 2 years ago

    You can make this kind of lazy dismissal of any early-stage project. The paper was published at a workshop, which is a venue for speculative/unfinished work.

    • zootboy 2 years ago

      If it's truly an early-stage paper, then its title should reflect that. An authoritative, blanket statement like "wear-leveling considered harmful" is disingenuous.

      If this is speculative / unfinished work, I would hope for a title like "certain wear-leveling algorithms may be counter-productive".

      • seoaeu 2 years ago

        They published at a workshop basically called “Hot Takes on Storage Systems”

  • wmf 2 years ago

    As with most things, most academics don't have and can't get access to the internals of real products.

    • reissbaker 2 years ago

      SSDs aren't exactly expensive, and I imagine most of their laptops use SSDs.

      • yjftsjthsd-h 2 years ago

        Can you bypass the default flash translation layer on commercially available drives?

        • yellowapple 2 years ago

          With a sufficiently-narrow soldering tip and a sufficiently-steady hand, sure ;)

          • omegalulw 2 years ago

            Would you trust a paper that reported results based off of experiments involving "sufficiently-narrow soldering tip and a sufficiently-steady hand"?

            • burnished 2 years ago

              Sure, why not? That's what a lot of real experiments involve anyway. I don't think anything is gained by obfuscating actual methods because they aren't scientistic enough.

            • zootboy 2 years ago

              Honestly, that is what I was expecting given the title of the paper. That or a partial reverse-engineering and emulation of an SSD controller firmware.

            • yellowapple 2 years ago

              With sufficient reproducibility, sure ;)

StillBored 2 years ago

Wow, really!

So, you either do some kind of wear leveling in the firmware close to the flash, where it knows the details of the actual hardware, or you try to do it in a journaled filesystem, which will have all the same problems, except with higher latency, more RAM consumption, and less knowledge of the hardware.

Basically, I've seen these arguments from a certain sector, where they have predictable workloads, very limited hardware diversity, and a bunch of people who are convinced they are the smartest in the industry.

No thanks.

dontbenebby 2 years ago

There's a whole attack paper on how excessive writes can be harmful, so I suspect trying to reduce them is better than nothing:

https://www.usenix.org/system/files/conference/hotsec12/hots...

(I'm not an EE expert by any stretch, but that paper cites several peer reviewed wear leveling techniques.)

The REAL issue, based on my (amateur) reading, is that the components that handle wear leveling are not open source:

>In all cases commercially available flash controllers provide no details of the internal mechanisms that are used to extend life, as such details remain the competitive advantage in a crowded market space

We are sitting around trying to make inferences about proprietary, unpublished algorithms that have very real effects on people, not just property. What we probably want is for industry to collaborate on figuring out which techniques work best, rather than hoarding that knowledge and forcing folks to get burned once or twice, or to pay exorbitant amounts of money for research grants just to infer what purchasing decisions to make.

  • ChrisLomont 2 years ago

    Conversely, to pay for researchers to invent new techniques, companies need to get that money back by selling capabilities their competitors do not have.

    For an example of the difference, compare the quality and capability of Photoshop versus GIMP (or any open source photo editor) to see the difference.

    After all, photo editing is older than open source, yet open source still cannot make the technologically leading photo editor.

    The same is true across a massive range of software, from CAD to scientific software to audio compositing to movie editing to ......

    • blagie 2 years ago

      That's not really fair. Except for support for color models / spaces, gimp was basically on par with Photoshop when it came out. Toss in 16-bit color, LAB color, and CMYK, and the two were nearly identical. gimp had better scripting too.

      The key difference is Photoshop has continuously improved, in part due to competitive pressure from tools like gimp. In the meantime, gimp development slowed sometime around 2000, and basically stopped by around 2008, at least from a user-facing perspective. Today, gimp is still very competitive with late-nineties Photoshop (even a bit ahead).

      • ChrisLomont 2 years ago

        It's completely fair, and not just Photoshop. It's the difference between paying programmers via a revenue stream to make polished software and not paying them. One makes better products.

        >The key difference is Photoshop has continuously improved, in part due to competitive pressure from tools like gimp

        If the pressure were from Gimp, they wouldn't be so far ahead, since it wouldn't be needed. Gimp is nowhere near Photoshop in features, usability, or pure technological prowess.

        Same for most any professional task. Open source CAD is so far behind commercial offerings it's comic.

        Paying developers via a revenue stream beats hobby code.

      • 323 2 years ago

        You speak like a programmer. When people say Photoshop is better than Gimp, they mean the user experience, where Gimp is light years behind Photoshop.

        > Today, gimp is still very competitive with late-nineties Photoshop (even a bit ahead).

        Absolutely not, in fact I would say the opposite. 1999 Photoshop (which I used) is still way ahead of today's Gimp user experience.

        The technical features of Gimp are irrelevant (for Photoshop kind of users) if Gimp's interface is atrocious.

        • blagie 2 years ago

          Huh. Are you sure it's not just familiarity? I used both about the same in the late nineties, and they felt similar to me in usability.

          To me, gimp 2022 doesn't feel better or worse than gimp 2000 for usability, and both are about the same as Photoshop 2000.

        • dontbenebby 2 years ago

          Photoshop is not better, because paying in perpetuity is unacceptable.

          I've learned to use GIMP, OS specific tools, or CLI ones, and focused on taking photos well enough I don't need to fuck with HDR or whatever to make them pretty.

          • ChrisLomont 2 years ago

            >Photoshop is not better, because paying in perpetuity is unacceptable.

            If your time is worth zero, then yes, you can use slower tools.

            Professionals learn that getting things done quickly and well has lots of value. Paying for a tool that costs you an hour of labor in income but saves you weeks or gains you vastly more in earnings makes such tools a terrific value for such people.

            If you don't value your time, or cannot use the tool to pay for itself, then don't buy the tool. Many people are not you.

            >and focused on taking photos well enough I don't need to fuck with HDR or whatever to make them pretty

            Those using Photoshop are not merely touching up the color of single, simple photos. They use it for a lot more than that.

            All digital cameras do a lot of in-camera digital processing. Those algorithms are inferior, run on slower hardware, and have many other limitations that a PC does not. And there's a significant amount of work beyond "taking photos well enough." Not many clients want that - they can do what you do too. Compositing, touchups, and creative work based on photos are just the tip of the iceberg for professionals.

            Don't like photoshop? Don't buy it. But don't assume you're doing what those who do see value in it are doing.

            • blagie 2 years ago

              I seriously think you mischaracterised your audience. Most people here (probably including dontbenebby) are hobbyists, not professional photographers. There are also plenty of people who use tools a little bit. For example, many software engineers will do some web design, or need a product photo, or whatever.

              Photoshop doesn't pay for itself for that kind of occasional use either, especially when you consider that one wants to be able to reuse and edit files sometimes years later.

              I'll gladly pay $$$ for a work-related tool I use 8 hours per day. I won't drop half a grand a year or whatever on Photoshop.

              (And no, the poster wasn't dissing people who buy Photoshop; a good key to someone talking about themselves is the word "I")

              • ChrisLomont 2 years ago

                >I seriously think you mischaracterised your audience.

                I never thought most people here would use Photoshop enough to pay for it. The point was that tools exist that are worth paying for, despite the poster a few levels up implying open source is always the answer. Being able to realize that others have their tools, and you have yours, helps you understand why people do pay for things you may not.

                >I'll gladly pay $$$ for a work-related tool I use 8 hours per day.

                And that is exactly the point that led into this thread. You choose your tool. Some choose Photoshop. Some choose SolidWorks. Open source is rarely the best choice for most professionals using software to do their work. It's ok for certain software developers, which is a tiny part of all professional workers using software.

                >And no, the poster wasn't dissing people who buy Photoshop; a good key to someone talking about themselves is the word "I"

                The poster wrote "Photoshop is not better, because paying in perpetuity is unacceptable" before the paragraph starting with "I". That sure looks like he's implying this choice should fit all. And before that the thread had many people claiming GIMP and Photoshop were on parity at this or that time, or under this or that condition. Those also were not "I" statements.

                • blagie 2 years ago

                  > And that is exactly the point that led into this thread. You choose your tool. Some choose Photoshop. Some choose SolidWorks. Open source is rarely the best choice for most professionals using software to do their work. It's ok for certain software developers, which is a tiny part of all professional workers using software.

                  No, open-source is often the best choice for professional workers using software. Open source has several advantages:

                  1) My experience is that open-source is better than proprietary about as often as it is worse. It's random. SolidWorks is a fine example where proprietary is ahead. On the other hand, e.g. my astronomer friends use open-source tools because they're better.

                  2) It's eternal and archival. Things I worked on 15 years ago still work. My proprietary cloud-based project management stuff went poof as soon as the company went out of business. Businesses can accumulate value much more easily.

                  3) I work in a regulated industry. Open-source doesn't contain hostile code which calls home with telemetry, activation, etc. and might leak whatever information to some random vendor. It's automatically auditable. It doesn't require contract negotiations. Etc.

                  4) I have access to all of it. There are professionals who spend 40 hours per week doing the same thing, where an investment in Photoshop pays off. There are professionals who spend 40 hours per week doing a variety of things. I've always been in jobs which were more like the latter. A while ago, I was in a business which designed a piece of electronics hardware. They needed to use a tool to 3d print something (a front panel) exactly once in the lifetime of the business.

                  One downside is lack of marketing dollars. My experience is that open source is underutilized outside of software engineering, mostly due to ignorance.

              • dontbenebby 2 years ago

                I'm a hacker and an artist, among other things. Who is to say what a "professional" is? (Not me)

    • dontbenebby 2 years ago

      >For an example of the difference, compare the quality and capability of Photoshop versus GIMP (or any open source photo editor) to see the difference.

      Photo editing has changed; a lot of basic things like crop and export can be done in OS-specific tools like Preview, where you used to need full-on Photoshop.

      Conversely, GIMP on macOS is an absolute disaster that doesn't even use the same UI as the rest of the OS.

      These more utilitarian tools are riddled with usability bugs and probably security vulns as well.

  • slaymaker1907 2 years ago

    It's also really hard to even get access to directly addressable flash (which would make it much easier for researchers to study). This seems to be because the algorithms for wear leveling and virtual addressing are actually huge differentiating factors between manufacturers, while the actual flash storage is all very similar.

    • tenebrisalietum 2 years ago

      Hiding flash behind an FTL allows companies to use imperfect flash (flash with bad blocks) to make things cheaper. Remember when hard drives had block defect table labels on them, and then those went away when EIDE took over?

slaymaker1907 2 years ago

I think this would work well for operations like combining two drives into one larger logical drive, since you would reduce the risk amplification from such a configuration. In the typical case, when you stitch two drives together you end up doubling the risk, since if either drive fails, the whole system may be unrecoverable. With capacity variance, you would presumably notice one of the physical drives shrinking, giving you a user-comprehensible warning that the drive needs to be replaced.
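
As a back-of-the-envelope (my assumption: each drive fails independently in a given year with probability p), the stitched-together volume fails with probability:

    P(\text{volume fails}) = 1 - (1 - p)^2 = 2p - p^2 \approx 2p \quad (p \ll 1)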

orbital-decay 2 years ago

There are many other cases where modern hardware provides opaque high-level abstractions to software, trying to predict the workload on its own under the hood instead of letting the user control it. CPUs have largely worked like that, with branch prediction, for ages, and the crazy situation where they use neural networks to do it doesn't seem to bother anyone.

rhacker 2 years ago

Great, so ten years from now, after all the manufacturers remove this feature, we'll have articles saying that the missing wear leveling is causing data loss and early failures.