d33 2 days ago

I worry that 7-Zip is going to lose relevance because of its lack of zstd support. zlib's performance is intolerable for large files, and zlib-ng's SIMD implementation only helps a bit here. Which is a shame, because 7-Zip is a pretty amazing container format, especially with its encryption and file splitting capabilities.

  • dikei a day ago

    I use ZSTD a ton in my programming work where efficiency matters.

    But for sharing files with other people, ZIP is still king. Even 7z or RAR is niche. Everyone can open a ZIP file, and they don't really care if the file is a few MBs bigger.

    • cesarb a day ago

      > Everyone can open a ZIP file, and they don't really care if the file is a few MBs bigger.

      You can use ZSTD with ZIP files too! It's compression method 93 (see https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT which is the official ZIP file specification).

      Which reveals that "everyone can open a ZIP file" is a lie. Sure, everyone can open a ZIP file, as long as that file uses only a limited subset of the ZIP format features. Which is why formats which use ZIP as a base (Java JAR files, OpenDocument files, new Office files) standardize such a subset; but for general-purpose ZIP files, there's no such standard.

      (I have encountered such ZIP files in the wild; "unzip" can't decompress them, though p7zip worked for these particular ZIP files.)
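
      If you want to know which method a particular archive actually uses, the central directory records it per entry, so you can check without decompressing anything. A rough Python sketch (the file name is a placeholder; method IDs per APPNOTE: 8 = Deflate, 12 = bzip2, 14 = LZMA, 93 = Zstandard):

          import zipfile

          # Reading the central directory works even when this Python build
          # cannot actually decompress the method in question.
          METHODS = {0: "store", 8: "deflate", 12: "bzip2", 14: "lzma", 93: "zstd"}

          for info in zipfile.ZipFile("example.zip").infolist():
              method = METHODS.get(info.compress_type, f"unknown ({info.compress_type})")
              print(info.filename, method)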

      • throw0101d a day ago

        > You can use ZSTD with ZIP files too!

        Support for which was added in 2020:

        > On 15 June 2020, Zstandard was implemented in version 6.3.8 of the zip file format with codec number 93, deprecating the previous codec number of 20 as it was implemented in version 6.3.7, released on 1 June.[36][37]

        * https://en.wikipedia.org/wiki/Zstd#Usage

        So I'm not sure how widely deployed it would be.

        • xxs a day ago

          Most linux distributions have zip support with zstd.

          • danudey a day ago

            The `zip` command on Ubuntu is 6.0, which was released in 2009 and does not support zstd. It does support bzip2 though!

            • cesarb 21 hours ago

              > The `zip` command on Ubuntu is 6.0, which was released in 2009 and does not support zstd. It does support bzip2 though!

              You probably mean the "unzip" command, which https://infozip.sourceforge.net/UnZip.html lists 6.0 as the latest version, released on 20 April 2009. Relevant to this discussion, new in that release are support for 64-bit file sizes, the bzip2 compression method, and UTF-8 filenames.

              The "zip" command is listed at https://infozip.sourceforge.net/Zip.html as 3.0 being the latest, released on 7 July 2008. New in that release are also support for 64-bit file sizes, bzip2 compression method, and UTF-8 filenames.

              It would be great if both (or at least unzip) were updated to also support LZMA/XZ/ZSTD as compression methods, but given that there have been no new releases for over fifteen years, I'm not too hopeful.

            • xxs 16 hours ago

              I meant "zip support" (as in zlib), along with the zstd command.

      • dikei a day ago

        Well, only a lunatic would use ZIP with anything but DEFLATE/DEFLATE64

        • redeeman a day ago

          There are A LOT of zip files using LZMA in the wild. Also, how about people learn to use updated software? Should newer video compression technologies not be allowed in MKV/MP4?

          If you can't open it, well... then stop using 90s WinZip.

          • landl0rd a day ago

            No. You can't get people to use updated software. You can't get a number of people to update past windows 7. This has been and will likely remain a persistent issue, and it's sure not one you're going to fix. All it will do is limit your ability to work with people. This isn't a hill on which you should die.

            • redeeman a day ago

              if they want to open certain files, they will update

              • landl0rd a day ago

                No, they're just not going to work with you.

                • TkTech a day ago

                  Yep. Half the world's finances still spin on CSVs and FTP (no, not SFTP, FTP). If your customers request a format, that's the format you're using.

                  • nick238 6 hours ago

                    And if they don't request a format (or if you ask, and the response is "what's a format"), you need to use one that's 99.99% supported.

                • redeeman a day ago

                  I'm okay with that. That said, I have not had a single issue delivering zip files with LZMA, and I KNOW that I have received MANY from random sources.

                  I would also expect people to be able to decode h265 in an mp4 file.

                  Your proposal seems, to word it bluntly, retarded. You would have MP4 frozen on h264 for ETERNITY, and then invent a new format as a replacement? Or would you just say "god has bestowed upon the world h264, and it shall be the LAST CODEC EVER!"?

                  Get with the program. Things change; you cannot expect to be forward compatible forever. Sometimes people have to switch to newer versions of software.

                  • GuB-42 21 hours ago

                    It depends on your priorities.

                    If your customer is stuck in the 90s because his 90s technology works perfectly fine and he has no intention of fixing things that are not broken, then deliver stuff that is compatible with 90s technology. He will be happy, will continue to work with you, and you will make money.

                    If your customer is using the latest technologies and values size efficiency, then use the latest codecs.

                    I usually default to being conservative, because those who are up to date usually don't have a problem with bigger files, but those who are not are going to have a problem with recent formats. Maybe overly so, but that's my experience from working with big companies with decades-long lifecycles.

                    Your job is not to lecture your customer, unless he asked for it. And if he asked for it, he probably expects better arguments than "update your software, idiot". Your job is to deliver what works for him. Now, of course, it is your right to be picky and leave money on the table; I will be happy to go after you and take it.

                    • redeeman 11 hours ago

                      not everything is a client<->customer relationship.

                      Professionally I can definitely support old stuff. It costs extra most often.

                      Conservative doesn't have to mean stuck. I am not recommending we send h266 to everyone now, but h265 is well supported, as is AV1.

                      LZMA in zip has been widely supported for many years at this point. I am going to choose my "sane defaults", and if someone has a problem with that, they can simply do what they need to do to open it, or provide a damn good reason for me to go out of my way.

          • 1over137 a day ago

            >how about people learn to use updated software?

            How about software developers learn to keep software working on old OSes and old hardware?

            • tiagod a day ago

              What stops you from running updated zip/unzip on an old OS or on old hardware?

              • krapht a day ago

                Nothing, but what stops you from using DEFLATE64?

                Installing new software has a real time and hassle cost, and how much time are you actually saving over the long run? It depends on your usage patterns.

                • RealStickman_ a day ago

                  Supporting old APIs and additional legacy ways of doing things has a real cost in maintenance.

                  • mananaysiempre a day ago

                    So does not supporting them, but the developer gets to externalize those.

                    • redeeman a day ago

                      The developer is hired by someone who gets to make that decision; ultimately the customer does. That's why some people spend extreme resources on legacy crap: because someone has deemed it worth it.

                • redeeman a day ago

                  what stops you from installing win95 and winzip?

          • Am4TIfIsER0ppos a day ago

            mkv or mp4 with h264 and aac is good enough. mp3 is good enough. jpeg is good enough. zip with deflate is also good enough.

            • e4m2 a day ago

              "Good enough" is not good enough.

            • dahrkael 15 hours ago

              I started using WinRAR because WinZip wouldn't fit on a floppy disk. So even in the 90s, zip wasn't good enough.

            • homebrewer a day ago

              In the middle of San Francisco, with Silicon Valley-level incomes, very possible. In the real world I still exchange files with users on rustic ADSL, where every megabyte counts. Many areas out there, like rural Mongolia or parts of Africa that have only just got access to the internet, are even worse in that regard.

            • redeeman a day ago

              h264 is not good enough for many things

      • easton a day ago

        > new Office files

        I know what you mean, I’m not being pedantic, but I just realized it’s been 19 years. I wonder when we’ll start calling them “Office files”.

        • mauvehaus a day ago

          > I wonder when we’ll start calling them “Office files”.

          Probably around the same time the save icon becomes something other than a 3 1/2" floppy disk.

          • jl6 a day ago

            English is evolving as a hieroglyphic language. That floppy disk icon stands a good chance of becoming simply the glyph meaning "save". The UK still uses an icon of an 1840s-era bellows camera for its speed camera road signs. The origin story will be filed away neatly and only its residual meaning will be salient.

          • kevinventullo a day ago

            Nowadays I’ve noticed fewer applications have a save icon at all, relying instead on auto-save.

            • ale42 a day ago

              And some only save to the cloud, whence a cloud icon with an arrow. (Not that I like that, but... that's what we get)

      • guappa a day ago

        You can and I've done it… but you can't expect anything to be able to decompress it unless you wrote it yourself.

      • sidewndr46 a day ago

        Same thing with "WAV" files. There are at least 3 popular formats for the audio data out there.

        • martinald a day ago

          A more 'useful' one is WebP. It has both a lossy and a lossless compression algorithm, which have very different strengths and weaknesses. I think nearly every device supports reading both, but so many 'image optimization' libraries and packages don't, often just doing everything as lossy when it could be lossless (icons and whatnot).
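
          For what it's worth, when a library exposes it at all, requesting lossless WebP is usually a single flag. A small sketch assuming Pillow (file names made up):

              from PIL import Image

              # Lossless WebP is a separate codec from lossy WebP; for flat-colour
              # art like icons it is typically both smaller and artifact-free.
              Image.open("icon.png").save("icon.webp", lossless=True)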

          • sidewndr46 5 hours ago

            So apparently WebP is also RIFF, which is the container for WAV files as well, it seems. I did not know this. Also, WebP has its own specialized lossless algorithm. For things like icon art I generally just continue to use PNG. Is there an advantage to using WebP lossless?

          • LegionMammal978 a day ago

            It's similarly annoying how many websites take the existence of the lossy format as a license to recompress all WebP uploads, or sometimes other filetypes converted to WebP, even when it causes the filesize to increase. It's like we're returning to ye olden days of JPEG artifacts on every screenshot.

            • danudey a day ago

              I was thinking about this with YouTube as an example. A lot of people complain about the compression on YouTube videos making things look awful, but I bet there's a reasonable number of high-end content creators out there who would run a native(-ish, probably Electron) app on their local system to do a higher-quality encoding to YouTube's specifications before uploading.

              In many (most?) cases, it's possible to get better compression and higher quality if you're willing to spend the CPU cycles on it, meaning that YouTube could both reduce their encoding load and increase quality at the same time, and content creators could put out better quality videos that maintain better detail.

              It would certainly take longer to upload the multiple versions of everything, and it would definitely take longer to encode, but it would also ease YouTube's burden and produce a better result.

              Ah well, a guy can dream.

              • martinald a day ago

                AFAIK you can upload any bitrate to YouTube as long as the file is <256GB.

                So you could upload a crazy high bitrate file to them for a 20 min video which I suspect would be close to "raw" quality.

                I don't know how many corners youtube cut on encoding though.

                I suspect most of the problem is people exporting 4k at a 'web' bitrate preset (15mbit/s?), which is actually gonna get murdered on the 2nd encode, more than encoding quality on YouTube's side?

      • Akronymus 10 hours ago

        So that's why, rarely, a customer can't open one of the zip files we send over.

      • justin66 a day ago

        > Copyright (c) 1989 - 2014, 2018, 2019, 2020, 2022

        Mostly it seems nutty that, after all these years, they’re still updating the zip spec instead of moving on to a newer format.

        • pornel a day ago

          The English language is awful, and we keep updating it instead of moving to a newer language.

          Some things are used for interoperability, and switching to a newer incompatible thing loses all of its value.

        • 6SixTy a day ago

          .7z and .tar.* have existed for at least 20 years now, but you are unlikely to see a wild 7z file and .tar.* is isolated to the UNIX space

          • danudey a day ago

            Tar files also have the miserable limitation of having no index; this means that to extract an individual file requires scanning through the entire archive until you find it, and then continuing to scan through the rest of the archive because a tar file can have the same file path added multiple times.

            That makes them useful for transferring an entire set of files that someone will want all or none of, e.g. source code, but terrible for a set of files that someone might want to access arbitrary files from.

          • justin66 20 hours ago

            Sure, but that's not really a reason to futilely try to spooge oddball algorithms that nobody is going to adopt into the .zip standard.

    • notepad0x90 a day ago

      I don't know about that; I had a dicey situation recently where PowerShell's Compress-Archive couldn't handle archives >4GB and had to use 7-Zip. It is more reliable, and you can ship 7za.exe or create self-extracting archives (wish those were more of a thing outside of the Windows world).

      • landl0rd a day ago

        I understand that security has to compromise for the real world, but a self-extracting archive is possibly one of the worst things one could use in terms of security.

        • notepad0x90 a day ago

          why? and why does it have to be a compromise?

          You're assuming things because things are already done insecurely. You can authenticate the self-extractor as well as the extracted content. The user gets a nice message "This is a 7zip self-extracting archive sent to you by Bob containing the files below".

          As an incident responder, I've seen regular archives used to social-engineer users much more often than self-extracting archives, because self-extracting is not "content executing". It is better for social engineering to have users establish trust in the payload first by manually opening the archive; if something "weird" like self-extraction happens first, it might feel less trustworthy.

          Oh, and by the way, things like PyInstaller or Electron apps are already self-extracting and self-executing archives. So are JAR files and Android APKs.

          • fsckboy 21 hours ago

            JAR files are zip files, so they don't contain "self-extract" code; instead they are associated with already-installed extraction code.

            However, once extracted, JAR files do contain executable code, and that is a security issue. The Java model pays attention to security, but if code can do something, it can do something bad. If it can't do something, it's not very useful, is it?

            • notepad0x90 17 hours ago

              The Windows kernel executes a self-extracting 7z archive; java.exe extracts and executes .jar files. If the 7z self-extractor were .NET CLR bytecode, it would operate very much the same as JAR files. To your point though, the primary purpose of JAR files is not to compress and transport other files; they're supposed to be executables only. From a user's perspective, abuse potential is the main difference.

    • sidewndr46 a day ago

      What are you compressing with zstd? I had to do this recently and the "xz" utility still blows it away in terms of compression ratio. In terms of memory and CPU usage, zstd wins by a large margin. But in my case I only really cared about compression ratio

      • vlovich123 a day ago

        People tend to care about decompression speed: xz can be quite slow decompressing highly compressed files, whereas zstd's decompression speed is largely independent of the compression level.

        People also tend to care about how much time they spend on compression for each incremental % of compression performance, and zstd tends to sit on the Pareto frontier there (at least among open-source algorithms).

        • bracketfocus a day ago

          This makes sense. A lot of end-users have internet speeds that can outpace the decompression speeds of heavily compressed files. Seems like there would be an irrational psychological aspect to it as well.

          Unfortunately for the hoster, they either have to eat the cost of the added bandwidth from a larger file or have people complain about slow decompression.

          • vlovich123 a day ago

            Well, the difference is quite a bit more manageable in practice, since you're talking about a single-digit space difference vs a 2-100x difference in decompression performance.

        • sidewndr46 a day ago

          I definitely agree, I basically have unlimited time and unlimited CPU for decompressing. Available memory is huge too. The gains from xz were significant enough that I went with it.

      • landl0rd a day ago

        I usually see zstd on max settings outperform xz on speed and very slightly on compression (though that's a tiny difference).

      • Szpadel a day ago

        In my experience, using zstd --long --ultra -22 gives a marginally better compression ratio than xz -9 while being significantly faster.

        • soruly a day ago

          I think it depends on what you're compressing. I experimented with my data, which is full of hex-text XML files. xz -6 is both faster and smaller than zstd -19 by about 10%. For my data, xz -2 and zstd -17 achieve the same compressed size, but xz -2 is 3 times faster than zstd -17. I still use xz for archiving because I rarely need to decompress them.

          • Szpadel 16 hours ago

            Try combining it with --long

            My use cases are usually source code, SQL dumps and log files.

            Sometimes xz gave marginally better results, but the difference was well below 1%.

            • soruly 14 hours ago

              Thanks for the tips. As my data has very low entropy, both can compress it down to 3-4% of the original size, but xz is a lot faster at compression.

              raw size: 9612344 B

              zstd --ultra -22 --long=31 => 376181 B (3.91% original, 4.088s compress, 0.013s decompress)

              xz -z -9 xml => 353700 B (3.68% original, 0.729s compress, 0.032s decompress)

              zstd -17 --long=31 could match the compression time of xz, but the size is bigger (405602 B, 4.22% original)

              If you compare only the compressed size (not to the original size), .zst would be about 6-15% larger than .xz

      • xxs a day ago

        do you have examples where xz 'blows it away', not just zstd -3?

        • sidewndr46 a day ago

          Here are some examples of what I was doing in one case

          https://www.hydrogen18.com/blog/apk-the-strangest-format.htm...

          I was running "zstd --ultra --threads=0" which I assumed was asking it for the absolute maximum

          • sltkr a day ago

            I think your mistake was to use --ultra without a compression level.

            I redid your experiments with rust-wasm-1.83.0-r0.apk:

                                        size       perc   c.time  d.time
                uncompressed:      290072064          -        -
                gzipped original:  105255109     36.29%        -  
                bzip2 -9:          107099379     36.92%    21.1s  11.0s
                bzip3 -b511:        73539847     25.35%    28.9s  32.0s
                xz --extreme -9:    71010672     24.48%   142.0s   3.1s
                lzip -9:            70964413     24.46%   173.5s   5.3s
                zstd --ultra -22:   48288499     16.64%   155.6s   0.4s
            
            It's pretty clear zstd blows everything else out of the water by a huge margin. And even though compressing with zstd is slightly slower than xz in this case (by less than 10%), decompression is nearly 8x as fast, and you can probably tweak the compression level to make zstd be both faster and better than xz.
            • sidewndr46 20 hours ago

              I guess I misunderstood the man page for that option then.

            • ars 19 hours ago

              That was an impressive result, so I tried it on a huge email inbox.

                  uncompressed:    1512662084
                  xz --extreme -9:  508431572  12:47
                  zstd --ultra -21: 508432560  12:44
              
              (-22 ran out of memory.) So at least for me, zstd was identical to xz almost to the byte and to the second.
              • sltkr 7 hours ago

                It does really vary based on the data set.

                If the email data is mostly text with markup (like HTML/XML), you might want to try bzip3 too.

                It's also possible that a large part of your email is actually already-compressed binary data (like PDFs and images) possibly encoded in base-64. In that case it's likely that all tools are pretty good at compressing the text and headers, but can do little to compress the attachments, which would explain why the results you get are so close.

                • ars an hour ago

                      bzip3 -b511: 580771424  8:51
                  
                  I suspect your theory about compressed attachments is correct, although bzip3 isn't doing very well compared to the rest.
              • ars 17 hours ago

                I got -22 to run:

                    zstd --ultra -22: 494517545 14:00
                
                Pretty minor difference.
          • xxs 16 hours ago

            Yup, you should have just tried different -NN levels and noticed. I gave a talk on zstd a couple of years back, and one of the points was that it was better than xz across the board.

    • jart a day ago

      Use the pigz command for parallel gzip. Mark Adler also has an example floating around somewhere about how to implement basically the same thing using Z_BLOCK.
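
      If you don't need pigz's trick of sharing dictionary state across blocks (that's what the Z_BLOCK flushing is for), a much cruder version of the idea is easy to sketch: compress independent chunks in parallel and concatenate them, since gzip decompressors accept multi-member streams. Illustrative Python only, not pigz's actual algorithm, and it costs some ratio because each chunk starts with an empty dictionary:

          import gzip
          from concurrent.futures import ThreadPoolExecutor

          CHUNK = 16 * 1024 * 1024  # arbitrary chunk size

          def parallel_gzip(src, dst, workers=8):
              # Each chunk becomes an independent gzip member; writing them back
              # in order yields a valid .gz file. zlib releases the GIL while
              # compressing, so plain threads give real parallelism here.
              with open(src, "rb") as f, open(dst, "wb") as out:
                  chunks = iter(lambda: f.read(CHUNK), b"")
                  with ThreadPoolExecutor(workers) as pool:
                      for member in pool.map(gzip.compress, chunks):
                          out.write(member)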

    • mrWiz a day ago

      My main use case for 7z is bypassing corporate filters that block ZIPs from being sent.

      • starik36 a day ago

        I think gmail is onto you. They blocked one of my 7z files the other day.

        • mrWiz 3 hours ago

          Thankfully our corporate IT isn't onto me yet.

    • psyclobe 19 hours ago

      Zip is such a shit standard. Hell, there are parts of it that are still undocumented, and sharing documents between system zip implementations across Mac and Windows sometimes fails.

  • Night_Thastus a day ago

    7-Zip is the de-facto tool on Windows and has been for a long time. It's more than fast enough, and compresses well enough, for 99% of people's use cases.

    It's not going anywhere anytime soon.

    The more likely thing to eat into its relevance is that Windows now has built-in basic support for zipping/unzipping (EDIT: and other formats*), which relegates 7-Zip to more niche uses.

    • malfist a day ago

      Windows has had built-in zip/unzip since Vista. 7-Zip is far superior (and the install base proves that).

      • Night_Thastus a day ago

        As mentioned in another comment, zip support actually goes back as far as '98, but only Windows 11 added support for handling other formats like RAR/7-Zip/.tar/.tar.gz/.tar.bz2/etc.

        That allows it to be a default that 'just works' for most people without installing anything extra.

        The vast majority of users don't care about the extra performance or functionality of a tool like 7-zip. They just need a way to open and send files and the Windows built-in tool is 'good enough' for them.

        I agree that 7-zip is better, but most users simply do not care.

        • landl0rd a day ago

          Windows zip is not in fact good enough. I've run into weird, buggy behavior, hanging on extract, all sorts of nonsense. I can see the argument that a universally-adopted solution is better, but that's different from windows just not working.

          • Night_Thastus a day ago

            I'm not saying I would ever use it. I'm saying that for casual non-power users, it's good enough. They work with it and if it breaks once in a blue moon they don't care. They just want it to open the files they get and give them a way to send files compressed.

            That is enough to bite into 7-Zip's share of users.

      • iamleppert a day ago

        Windows unzip is so ungodly slow and terrible! Long live 7zip!

    • Bender a day ago

      > 7-zip is the de-facto tool on Windows and has been for a long time.

      Agreed. The only thing I think it has been missing is PAR support. I think they should consider incorporating one of the par2cmdline forks and porting that code to Windows as well, so that it has recovery options similar to WinRAR's. It's not used by everyone, but that should remove any remaining use cases for WinRAR, in my opinion.

    • izzydata a day ago

      Is there something different about the built-in zip context menu functionality now compared to before? I'm pretty sure you've been able to turn something into a zip file by right-clicking it since forever.

      • Night_Thastus a day ago

        It could support basic ZIP files, but only Windows 11 added support for 7-Zip (.7z), RAR (.rar), TAR, and TAR variants (like .tar.gz, .tar.bz2, etc).

        That makes it 'good enough' for the vast majority of people, even if it's not as fast or fully-featured as 7-Zip.

    • anonnon a day ago

      7-zip, through its .7z format, also supports AES encryption. I'd argue it's probably the easiest way to encrypt individual file archives that you need to access on both Windows and Linux. I have a script I periodically run that makes an encrypted .7z archive of all of my projects, which I then upload for off-site backup. (On-site, I don't bother encrypting.)

  • rf15 2 days ago

    Not that many people care about zstd; I would assume most 7-zip users care about the convenience of the gui.

    • arp242 a day ago

      It's been a long time since I used Windows, but back in the day I used 7-Zip exactly because it could open more or less $anything. That's also why we installed it on many customer computers.

      On Linux bsdtar/libarchive gives a similar experience: "tar xf file" works on most things.

      • devilbunny a day ago

        7-Zip is like VLC: maybe not the best, but it’s free (speech and beer) and handles almost anything you throw at it. For personal use, I don’t care much about efficient compression either computationally or in terms of storage; I just want “tar, but won’t make a 700 MB blank ISO9660 image take 700 MB”.

      • tssva a day ago

        Windows 11 has shipped with bsdtar/libarchive for a few years. The GUI shell support for archive files was recently changed to use libarchive, which has increased the number of archive formats that can be opened in the shell.

    • cm2187 a day ago

      In fact, this is the first time I've even heard about it, and I am semi-IT-literate. The relevance of a compression standard is about how ubiquitous it is. For this one, I would vote "not even on the radar yet".

    • KronisLV a day ago

      That's basically me! I really like 7-Zip because it opens most archive formats I have to work with and also the .7z format has pretty good compression for the stuff I want to store longer term.

    • snickerdoodle12 a day ago

      That's why 7zip should support it. People care about the convenience of the GUI and we all benefit from better compression being accessible with a nice GUI.

    • Beretta_Vexee a day ago

      I just hope that the recipient will be able to open the file without too much difficulty. I am willing to sacrifice a few megabytes if necessary.

    • jorvi a day ago

      .. but 7-zip has a pretty terrible GUI?

      Hence why PeaZip is so popular, and J-Zip used to be before it was stuffed with adware.

      • sidewndr46 a day ago

        If you're expecting a "mobile first" or similar GUI, where most of the screen is dedicated to whitespace, basic features involve 7 or more mouse clicks, and for some reason it all gets changed every ~6 months, then yes, the 7-Zip GUI is terrible.

        Desktop software usability peaked sometime in the late 90s, early 2000s. There's a reason why 7zip still looks like ~2004

        • wmil a day ago

          When compared to its contemporaries, the 7-Zip GUI is noticeably worse. Back in 2004, WinRAR and WinZip were both clearly superior.

          • birksherty a day ago

            Not sure what those GUI improvements in WinRAR are. I prefer 7-Zip over all of those for the GUI too.

        • jorvi 10 hours ago

          You could have taken the 10 seconds to type in "PeaZip GUI" and seen that it is not a mobile interface and it is indeed much nicer than the 7-Zip interface.

          Instead you chose to make a useless snarky comment. Be better.

      • general1726 a day ago

        Most people won't use that GUI, but will right-click a file or folder -> 7-Zip -> Add To ... and it will spit out a file without questions.

        Granted Windows 11 has started doing the same for its zip and 7zip compressors.

        Same trick goes for opening archives or executables (Installers) as archives.

        • axus a day ago

          Let's chat about Windows 11 right-click menu. I'm pretty sure they hid all the application menu extensions to avoid worst-case performance issues.

          • p_ing a day ago

            Exactly it. 3rd parties injecting their extensions harmed performance, which people turn around and blame Microsoft for.

            • birksherty a day ago

              People are really praising Microsoft for all three new "features" and tracking in Win 11.

      • m-schuetz a day ago

        All the GUI I need is right-click -> extract here or to folder. And 7-Zip does that nicely.

      • Jackson__ a day ago

        PeaZip is popular? It seems a lot less tested than 7-Zip; last time I tried to use it, it failed to unpack an archive because the password had a quote character or something like that. I've never had such crazy issues with 7-Zip myself.

      • tssva a day ago

        I find the PeaZip gui to be awful to use. I much prefer the 7-zip gui.

      • Gormo a day ago

        > .. but 7-zip has a pretty terrible GUI?

        Since you're asking, the answer is no. 7-Zip has an efficient and elegant UI.

    • yapyap a day ago

      If by GUI you mean the ability to right-click a .zip file and unzip it just through the little window that pops up, you're totally right. At least that + the unzipping progress bar is what I appreciate 7-Zip for.

  • Beretta_Vexee a day ago

    You are looking for 7-Zip Zstd: https://github.com/mcmilk/7-Zip-Zstd

    I don't know what your use case is, but it seems to be quite a niche.

  • sammy2255 2 days ago
    • abhinavk 2 days ago

      https://github.com/M2Team/NanaZip

      It includes the above patches as well as a few QoL features.

      • birksherty 15 hours ago

        Tried it; for zstd, compression is worse than mcmilk's 7-Zip-Zstd at the same speed. The removal of text from the toolbar icons is enough for me to never use it again. 7-Zip can change file associations directly, and very easily. NanaZip feels worse in QoL than 7-Zip.

    • d33 a day ago

      Thanks! Any ideas why it didn't get merged? Clearly 7-Zip has some development activity going on and so does this fork...

      • Beretta_Vexee a day ago

        Working with Igor Pavlov, the creator of 7-zip, does not seem very straightforward (understatement).

      • Tuldok a day ago

        7-zip's development is very cathedral. Igor Pavlov doesn't look like he accepts contributions from the public.

  • m-schuetz a day ago

    Being a bit faster or more efficient won't make most people switch. 7z offers great UX (a convenient GUI and support for many formats) that keeps people around.

    • rat9988 a day ago

      If anything, the GUI and UX are terrible compared to WinRAR.

  • jccalhoun a day ago

    Since Windows 11 incorporated libarchive back in October 2023, there is less reason to use 7-Zip on Windows. I would be surprised if any of my friends even know what a zip file is, let alone zstd.

    • rs186 a day ago

      If you ever try to extract an archive of several gigabytes with hundreds of thousands of files (I know, it's rare), the built-in one is as slow as a turtle compared to 7z.

  • pjmlp a day ago

    As long as it does a better job than whatever the Windows team packs into the OS, they're safe.

    Even the latest Windows 11 takes minutes to do what 7-Zip does in seconds.

    Goes to show how well all those leetcode interviews turn out.

    • conkeisterdoor a day ago

      Glad I'm not the only one who feels this way. WinZip is a slow and bloated abomination, especially compared to 7-Zip. The right-click context menu entry for 7-Zip is very convenient and runs lightning fast. WinZip can't compete at all.

      • pjmlp a day ago

        Mixing channels here: WinZip is a commercial product, unrelated to Windows 11's 7z support and to my comment.

        https://www.winzip.com

  • xxs a day ago

    There are lots of 7-Zip-alikes with zstd support (it's effectively a plugin). On [corporate] Windows, NanaZip would be my choice, as it's available in the Windows Store.

    On anything else: either zstd directly, or tar.

avidiax 2 days ago

Why was there a limitation on Windows? I can't find any such limit for Linux.

  • monocasa 2 days ago

    A lot of synchronization primitives in the NT kernel are based on a register-width bitmask of a CPU set, so each collection of 64 hardware threads on 64-bit systems kind of runs in its own instance of the scheduler. It's also unfortunately part of the driver ABI, since these ops were implemented as macros and inline functions.

    Because of that, transitioning a software thread to another processor group is a manual process that has to be managed by user space.
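
    For a concrete picture of that user-space bookkeeping, here is a rough ctypes sketch (hypothetical; 7-Zip does the equivalent from C++) of pinning the calling thread to a given processor group with SetThreadGroupAffinity:

        import ctypes
        from ctypes import wintypes

        class GROUP_AFFINITY(ctypes.Structure):
            # KAFFINITY is pointer-sized: one bit per logical processor in the group.
            _fields_ = [("Mask", ctypes.c_size_t),
                        ("Group", wintypes.WORD),
                        ("Reserved", wintypes.WORD * 3)]

        kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

        def move_current_thread_to_group(group, mask):
            ga = GROUP_AFFINITY(Mask=mask, Group=group)
            if not kernel32.SetThreadGroupAffinity(kernel32.GetCurrentThread(),
                                                   ctypes.byref(ga), None):
                raise ctypes.WinError(ctypes.get_last_error())

        # e.g. put this thread on the first 8 logical processors of group 1:
        # move_current_thread_to_group(1, 0xFF)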

    • zik a day ago

      Wow. That's surprisingly lame.

      • Const-me a day ago

        The NT kernel dates back to 1993. Computers didn’t exceed 64 logical processors per system until around 2014. And doing it back then required a ridiculously expensive server with 8 Intel CPUs.

        The technical decision Microsoft made initially worked well for over two decades. I don’t think it was lame; I believe it was a solid choice back then.

        • arp242 a day ago

          > Computers didn’t exceed 64 logical processors per system until around 2014.

          Server systems with that many processors were available since at least the late 90s. Server systems with >10 CPUs were already available in the mid-90s. By the early-to-mid 90s it was pretty obvious that number was only going to increase and that the 64-CPU limit was going to be a problem down the line.

          That said, development of NT started in 1988, and it may have been less obvious then.

          • p_ing a day ago

            "Server systems" but not server systems that Microsoft targeted. NT4 Enterprise Server (1996) only supported up to 8 sockets (some companies wrote their own HAL to exceed that limit). And 8 sockets was 8 threads with no NUMA back then, not something that would have been an issue for the purposes of this discussion.

            • monocasa a day ago

              Microsoft was absolutely wanting to target large servers at the time. They were actively trying to kill off the vendor unices in the 90s.

              • p_ing a day ago

                They successfully killed off the vendor unices in the 90s, but that was thanks to cheap x86.

                • monocasa 18 hours ago

                  That was what stuck, but supporting the big servers was also part of their multifaceted strategy. That's why the alpha, itanium, powerpc, and mips ports existed.

        • immibis a day ago

          Linux had many similar restrictions in its lifetime; it just has a different compatibility philosophy that allowed it to break all the relevant ABIs. Most recently, dual-socket 192-core Ampere systems were running into a hardcoded 256-processor limit. https://www.tomshardware.com/pc-components/cpus/yes-you-can-...

          • monocasa a day ago

            Tom's Hardware is mistaken in their reporting. That's raising the limit without using CPUMASK_OFFSTACK. The kernel already supported thousands of cores with CPUMASK_OFFSTACK, and has at least since the 2.6.x days.

        • rsynnott a day ago

          The Sun E10K (up to 64 physical processors) came out in 1997.

          (Now, NT for Sparc never actually became a thing, but it was certainly on Microsoft's radar at one point)

        • sidewndr46 a day ago

          That was actually the DEC team, from what I understand; Microsoft just hired all of their OS engineers when DEC collapsed.

          • meepmorp a day ago

            Dave Cutler left DEC in 1988 and started working on WINNT at MS, well before the collapse.

        • mixmastamyk a day ago

          SGI Origin did by 1996.

          Though MS ported NT to a number of systems (mips, alpha, ppc) it wasn’t able to play in the very big leagues until later.

          I agree it was a reasonable choice at the time. Few were getting mileage out of that many CPUs back then.

        • monocasa a day ago

          I mean, x86 didn't, but other systems had been exceeding 64 cores since the late 90s.

          And x86 arguably didn't ship >64 hardware thread systems until then because NT didn't support it.

          • zamadatix a day ago

            > And x86 arguably didn't ship >64 hardware thread systems until then because NT didn't support it.

            If that were the case the above system wouldn't have needed 8 sockets. With NUMA systems the app needs to be scheduling group aware anyways. The difference here really appears when you have a single socket with more than 64 hardware threads, which took until ~2019 for x86.

            • sidewndr46 a day ago

              Why would an application need to be NUMA aware on Linux? Most software I've ever written or looked at has no concept of NUMA. It works just fine.

              • zamadatix a day ago

                The same reasons it would on macOS or Windows, most people just aren't writing software which needs to worry about having a single process running many hundreds of threads across 8 sockets efficiently so it's fine to not be NUMA aware. It's not that it won't run at all, a multi-socket system is still a superset of a single socket system, just it will run much more poorly than it could in such scenarios.

                The only difference with Windows is that a single processor group cannot contain more than 64 cores. This is why 7-Zip needed to add processor group support: even though a 96-core Threadripper presents as a single NUMA node, the software has to request assignment to 2x48 processor groups, the same as if it were 2 NUMA nodes with 48 cores each, because of the KAFFINITY limitation.

                Examples of common NUMA-aware Linux applications are SAP HANA and Oracle RDBMS. On multi-socket systems it can often be helpful to run postgres and such via https://linux.die.net/man/8/numactl too, even if you're not quite at the scale where you need full NUMA awareness in the DB. You generally also want hypervisors to pass the correct NUMA topologies to guests. E.g. if you have a KVM guest with 80 cores assigned on a 2x64 Epyc host setup, then you want to set the guest topology to something like 2x40 cores, or it'll run like crap, because the guest sees it can schedule one way but reality is another.

            • monocasa a day ago

              There were single image systems with hundreds of cores in the late 90s and thousands of cores in the early 2000s.

              I absolutely stand by the fact that Intel and AMD didn't pursue high-core-count systems until that point because they were so focused on single-core perf, in part because Windows didn't support high core counts. The end of Dennard scaling forced their hand, and forced Microsoft's processor group hack.

              • elzbardico a day ago

                AMD and Intel were focused on single-core performance because personal desktop computing was the bigger business until around the mid-to-late 2000s.

                Single core performance is really important for client computing.

                • monocasa a day ago

                  They were absolutely interested in the server market as well.

              • zamadatix a day ago

                Do you have anything to say regarding NUMA for the 90s core counts, though? As I said, it's not enough that there were a lot of cores; they have to be monolithically scheduled to matter. The largest UMA design I can recall was the CS6400 in 1993; to go past that, they started to introduce NUMA designs.

                • monocasa a day ago

                  Windows didn't handle NUMA either until they created processor groups, and there are all sorts of reasons why you'd want to run a process (particularly on Windows, which encourages single-process, high-thread-count software architectures) that spans NUMA nodes. It's really not that big of a deal for a lot of workloads where your working set fits just fine in cache, or where you take the high-hardware-thread-count approach of just having enough contexts in flight that you can absorb the extra memory latency in exchange for higher throughput.

                  • zamadatix a day ago

                    3.1 (1993) - KAFFINITY bitmask

                    5.0 (1999) - NUMA scheduling

                    6.1 (2009) - Processor Groups to have the KAFFINITY limit be per NUMA node

                    Xeon E7-8800 (2011) - An x86 system exceeding 64 total cores is possible (10x8 -> requires Processor Groups)

                    Epyc 9004 (2022) - KAFFINITY has created an artificial limit for x86 where you need to split groups more granular than NUMA

                    If x86 had actually hit a KAFFINITY wall, then the E7-8800 would have arrived years before processor groups, because >8-core CPUs are desirable regardless of whether you can stick 8 in a single box.

                    The story is really a bit the reverse of the claim: NT in the 90s supported architectures which could scale past the KAFFINITY limit. NT in the late 2000s supported scaling x86, but it wouldn't have mattered until the 2010s. Ultimately, KAFFINITY wasn't an annoyance until the 2020s.

          • Const-me a day ago

            > other systems had been exceeding 64 cores since the late 90s.

            Windows didn’t run on these other systems, so why would Microsoft care about them?

            > x86 arguably didn't ship >64 hardware thread systems until then because NT didn't support it

            For publicly accessible web servers, Linux overtook Windows around 2005. Then in 2006 Amazon launched EC2, and the industry started that massive transition to the clouds. Linux is better suited for clouds, due to OS licensing and other reasons.

            • monocasa a day ago

              > Windows didn’t run on these other systems, why would Microsoft care about them?

              Because it was clear that high core count, single system image platforms were a viable server architecture, and NT was vying for the entire server space, intending to kill off the vendor Unices.

              > For publicly accessible web servers, Linux overtook Windows around 2005. Then in 2006 Amazon launched EC2, and the industry started that massive transition to the clouds. Linux is better suited for clouds, due to OS licensing and other reasons.

              Linux wasn't the only OS. Solaris and AIX were NT's competitors too back then, and supported higher core counts.

            • rsynnott a day ago

              Windows NT was originally intended to be multi-platform.

              • p_ing a day ago

                NT was and continues to be multi-platform.

                That doesn't mean every platform was or would have been profitable. x86 became 'good enough' to run your mail or web server; it doomed other architectures (and often their OSes), as the cost of x86 was vastly lower than the Alphas, PowerPCs, and so on.

  • dwattttt 2 days ago

    The linked Processor Group documentation also says:

    > Applications that do not call any functions that use processor affinity masks or processor numbers will operate correctly on all systems, regardless of the number of processors.

    I suspect the limitation 7zip encountered was in how it checked how many logical processors a system has, to determine how many threads to spawn. GetActiveProcessorCount can tell you how many logical processors are on the system if you pass ALL_PROCESSOR_GROUPS, but that API was only added in Windows 7 (that said, that was more than 15 years ago, they probably could've found a moment to add and test a conditional call to it).
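
    Counting the processors correctly is then a small amount of code once you know about that API; a rough ctypes sketch of what that looks like (illustrative, not 7-Zip's actual code):

        import ctypes

        kernel32 = ctypes.WinDLL("kernel32")
        ALL_PROCESSOR_GROUPS = 0xFFFF  # per winnt.h

        # GetSystemInfo's dwNumberOfProcessors only covers the calling thread's
        # current group; these two calls (Windows 7+) see every group.
        groups = kernel32.GetActiveProcessorGroupCount()
        total = kernel32.GetActiveProcessorCount(ALL_PROCESSOR_GROUPS)
        per_group = [kernel32.GetActiveProcessorCount(g) for g in range(groups)]
        print(groups, total, per_group)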

    • dspillett 2 days ago

      It isn't just detecting the extra logical processors, you have to do work to utilise them. From the linked text:

      "If there are more than one processor group in Windows (on systems with more than 64 cpu threads), 7-Zip distributes running CPU threads across different processor groups."

      The OS does not do that for you under Windows. Other OSs handle that many cores differently.

      > more than 15 years ago, they probably could've found a moment to add and test a conditional call to it

      I suspect it hasn't been an issue much at all until recently. Any single block of data worth spinning up that many threads for is going to be very large; you don't want to split something into chunks that are too small for compression, or you lose some of the benefit of the dynamic compression dictionary (sharing that between threads would add a lot of inter-thread coordination work, killing any performance gain even if the threads are running local enough on the CPU to share cache). Compression is not an inherently parallelizable task, at least not “embarrassingly” so like some processes.

      Even when you do have something to compress that would benefit for more than 64 separate tasks in theory, unless it is all in RAM (or on an incredibly quick & low latency drive/array) the process is likely to be IO starved long before it is compute starved, when you have that much compute resource to hand.

      Recent improvements in storage options & CPUs (and the bandwidth between them) have presumably pushed the occurrences of this being worthwhile (outside of artificial tests) from “practically zero” to “near zero, but it happens”, hence the change has been made.

      Note that two or more 7-zip instances working on different data could always use more than 64 threads between them, if enough cores to make that useful were available.

      • dwattttt 2 days ago

        Are you sure that if you don't attempt to set any affinities, Windows won't schedule 64+ threads over other processor groups? I don't have any system handy that'll produce more than 64 logical processors to test this, but I'd be surprised if Windows' scheduler won't distribute a process's threads over other processor groups if you exceed the number of cores in the group it launches into.

        The referenced text suggests applications will "work", but that isn't really explicit.

        • Dylan16807 2 days ago

          They're either wrong or thinking about windows 7/8/10. That page is quite clear.

          > starting with Windows 11 and Windows Server 2022 the OS has changed to make processes and their threads span all processors in the system, across all processor groups, by default.

          > Each process is assigned a primary group at creation, and by default all of its threads' primary group is the same. Each thread's ideal processor is in the thread's primary group, so threads will preferentially be scheduled to processors on their primary group, but they are able to be scheduled to processors on any other group.

          • monocasa a day ago

            I mean, it seems it's quite clear that a single process and all of its threads will just be assigned to a single processor group, and it'll take manual work for that process to use more than 64 cores.

            The difference is just that processes will be assigned a processor group more or less randomly by default, so they'll be balanced at the process level, but not the thread level. Not super helpful for a lot of software systems on Windows, which have historically preferred threads to processes for concurrency.

            • Dylan16807 a day ago

              > it'll take manual work for that process to use more than 64 cores.

              No it won't.

              • monocasa a day ago

                It absolutely will. Your process is only assigned a single processor group at process creation time. The only difference now is that it's by default assigned a random processor group rather than inheriting the parent's. For processes that don't require >64 cores, this means better utilization at the system level. However you're still assigned <=64 cores by default per process by default.

                That's literally why 7-zip is announcing completion of that manual work.

                • Dylan16807 a day ago

                  The 7zip code needed to change because it was counting cores by looking at affinity masks, and that limits it to 64.

                  It also needed to change if you want optimal scheduling, and it needed to change if you want it to be able to use all those cores on something that isn't windows 11.

                  But for just the basic functionality of using all the cores: >Starting with Windows 11 and Windows Server 2022, on a system with more than 64 processors, process and thread affinities span all processors in the system, across all processor groups, by default

                  That's documentation for a single process messing with its affinity. They're not writing that because they wrote a function to put different processes on different groups. A single process will span groups by default.

      • Dylan16807 2 days ago

        That depends on what format you're using. Zip compresses every file separately. Bzip and zstd have pretty small maximum block sizes and gzip doesn't gain much from large blocks anyway. And even when you're making large blocks, you can dump a lot of parallelism into searching for repeat data.

  • lofties 2 days ago

    Windows has a concept of processor groups, that can have up to 64 (hardware) threads. I assume they updated 7zip to support multiple processor groups.

    • okanat a day ago

      Every operating system that's relevant in 2025 needs that concept. It is called NUMA. At some point you cannot model the system while ignoring memory affinity / closeness to the cores.

      Modern AMD CPUs literally consist of core groups on chiplets. It is better for an OS to make decisions / expose APIs for cores that are physically so far away from each other that moving data back and forth over the RAM, system bus, or interconnect has significant time penalties.

  • silon42 2 days ago

    Maybe WaitForMultipleObjects limit of 64 (MAXIMUM_WAIT_OBJECTS) applies?

    An ugly limitation on an API that initially looks superior to Linux equivalents.

  • xxs a day ago

    WaitForMultipleObjects is limited to 64... since forever.

  • whalesalad a day ago

    Windows is a terrible operating system.

aquir a day ago

7-Zip is one of the pieces of software I've missed since I moved to macOS.

  • MYEUHD a day ago

    If you're talking about the program you use in the terminal, you can install it via homebrew

    • immibis a day ago

      No, the GUI. 7-zip integrates well with the shell: select a group of files, right click -> make zip file, and so on. Or right-click a zip file and select extract. If you're accustomed to Linux you might not know what they're talking about.

      TortoiseGit (and TortoiseSVN) are similarly convenient. Right click a folder with an SVN repo checked out, and select "SVN update". Right-click an empty space, and select "SVN checkout". SVN was the main distribution method for some modding communities before things like Steam Workshop and Github, specifically because TortoiseSVN made it so convenient. Checkout into your addons folder, and periodically update. What could be simpler?

  • DeepSeaTortoise a day ago

    How about PeaZip?

    • aquir a day ago

      I've used PeaZip in the past but only on Windows, I was not aware that a MacOS version exists! I'll give it a try. Cheers

  • yeah879846 a day ago

    Imagine voluntarily moving to mac.

fabiensanglard a day ago

How does that work? You cannot write to disk before you know the compressed size. Or, if you do, you can use a data descriptor, but then you cannot write concurrently.

I guess they buffer the compressed stream to RAM before writing to zip. If they want to keep their zip stable (always the same output given the same input), they also need to keep it a bit longer than necessary in RAM.
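
One plausible shape of that "buffer to RAM" approach, as an illustrative Python sketch (file names are placeholders, and this is a guess at the general pattern, not what 7-Zip actually does): workers compress each entry entirely in memory, so the CRC and compressed size are known before a single writer emits headers and data sequentially.

    import io, zlib
    from concurrent.futures import ThreadPoolExecutor

    def deflate_entry(path):
        # Raw deflate (wbits=-15) is what ZIP method 8 stores; compressing into a
        # BytesIO gives us the size and CRC before anything touches the archive.
        comp = zlib.compressobj(level=6, wbits=-15)
        buf, crc = io.BytesIO(), 0
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(1 << 20), b""):
                crc = zlib.crc32(block, crc)
                buf.write(comp.compress(block))
        buf.write(comp.flush())
        return path, buf.getvalue(), crc

    # zlib releases the GIL, so threads are enough; a single writer thread would
    # then emit local header + data for each result, in order, sizes already known.
    with ThreadPoolExecutor() as pool:
        entries = list(pool.map(deflate_entry, ["a.bin", "b.bin"]))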

  • amelius a day ago

    Maybe Windows allows a file to be constructed quickly from parts that are not necessarily multiples of the block size. Maybe they have a fast API for writing multiple files and then turning them into a single file. POSIX doesn't allow that, but it is quite old.

  • 1W6MIC49CYX9GAP a day ago

    I think you get different compressed files depending on how many threads you use to compress

leecarraher a day ago

I've used pbzip2, which takes the same parallel blocked-compression approach 7-Zip seems to be taking (based on AI's analysis of the changes). Theoretically the compression is less efficient, but I haven't noticed a difference in practice.

ninjis a day ago

I had initially migrated to NanaZip, but with Windows natively supporting the 7z format now, I'm not sure it's needed anymore.

izzydata a day ago

This may or may not be a relevant question, but does the terminology of "zip" have the same origin as the zip disk drive?

  • malfist a day ago

    No. Zip format significantly predates the zip disk.

lihaciudaniel a day ago

7-Zip has been the greatest use for Limbo x86 on mobile.

You just use qemu-utils in Termux to convert your qcow2 partitions to IMG, and 7-Zip can read the IMG file.

Try it yourself and see: you can extract files from your emulated Windows.

ltbarcly3 a day ago

Wow, a program that doesn't matter anymore has been very, very minimally enhanced on a platform that doesn't matter anymore, benefitting the 7 users who have more than 64 real cores on Windows and are regularly compressing archives so large that splitting them into more than 64 sections doesn't drastically reduce the compression ratio.

Posting this link to hn has consumed more human potential than the thing it is describing will save up to the end of time.

  • tobinc a day ago

    A 1% speed improvement for 1% of 7zip users is several times more productive than your comment.

    • ltbarcly3 12 hours ago

      This will not deliver your stated threshold.

  • starkrights a day ago

    > a program that doesn’t matter anymore

    The rest of this comment has, though gratuitously snarky, a point, but I don’t think claiming that 7zip is irrelevant as an independent statement is even remotely coherent.

    • ltbarcly3 12 hours ago

      tar.zstd is superior in basically every way: open and portable.

      • michaelcampbell 9 hours ago

        It's fine that we run in different circles, but I have yet to see one of these in the wild.

        Betamax was better, too.

  • esafak 20 hours ago

    It's funny 'coz it's true! But no slight against 7zip; it was good for its time.