igrunert a day ago

I recently ported WebKit's libpas memory allocator[1] to Windows, which used pthreads on the Linux and Darwin ports. Depending on what pthreads features you're using, it's not that much code to shim to Windows APIs. It's around 200 LOC[2] for WebKit's usage, which is a lot smaller than pthread-win32.

[1] https://github.com/WebKit/WebKit/pull/41945 [2] https://github.com/WebKit/WebKit/blob/main/Source/bmalloc/li...
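
To give a flavor of what the shim looks like, here's a rough sketch of forwarding pthread_create onto the Win32 thread API. It's hypothetical, not the actual WebKit code (which also covers mutexes, condition variables and thread-specific data):

    #include <windows.h>

    typedef HANDLE pthread_t;

    /* Adapt the pthread start-routine signature to LPTHREAD_START_ROUTINE. */
    struct thread_start { void* (*fn)(void*); void* arg; };

    static DWORD WINAPI thread_trampoline(LPVOID p)
    {
        struct thread_start s = *(struct thread_start*)p;
        HeapFree(GetProcessHeap(), 0, p);
        s.fn(s.arg);
        return 0;
    }

    static int pthread_create(pthread_t* t, const void* attr,
                              void* (*fn)(void*), void* arg)
    {
        struct thread_start* s = HeapAlloc(GetProcessHeap(), 0, sizeof *s);
        (void)attr;
        if (!s)
            return 1;
        s->fn = fn;
        s->arg = arg;
        *t = CreateThread(NULL, 0, thread_trampoline, s, 0, NULL);
        return *t ? 0 : 1;
    }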

  • kjksf 18 hours ago

    At the time (11 years ago) I wanted this to run on Windows XP.

    The APIs you use there (e.g. SleepConditionVariableSRW()) were only added in Vista.

    I assume a big chunk of the pthread emulation code at that time went to implementing things like that.
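
    For reference, the Vista-and-later pattern looks roughly like this (just a sketch, not code from either port):

        #include <windows.h>

        /* None of this exists on XP, hence the hand-rolled
           condition variables back then. */
        static SRWLOCK lock = SRWLOCK_INIT;
        static CONDITION_VARIABLE cond = CONDITION_VARIABLE_INIT;
        static int ready = 0;

        void wait_for_ready(void)
        {
            AcquireSRWLockExclusive(&lock);
            while (!ready)
                SleepConditionVariableSRW(&cond, &lock, INFINITE, 0);
            ReleaseSRWLockExclusive(&lock);
        }

        void signal_ready(void)
        {
            AcquireSRWLockExclusive(&lock);
            ready = 1;
            ReleaseSRWLockExclusive(&lock);
            WakeConditionVariable(&cond);
        }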

  • malkia a day ago

    These VirtualAlloc calls may intermittently fail if the pagefile is growing...

    • igrunert a day ago

      Ah yeah, I see Firefox ran into that and added retries:

      https://hacks.mozilla.org/2022/11/improving-firefox-stabilit...

      Seems like a worthwhile change, though I'm not sure when I'll get around to it.
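
      The shape of the change would be something like this sketch, assuming the failures really are transient while the pagefile grows (made-up wrapper name, not what Firefox ships):

          #include <windows.h>

          static void* virtual_alloc_with_retry(SIZE_T size)
          {
              for (int attempt = 0; attempt < 10; attempt++) {
                  void* p = VirtualAlloc(NULL, size,
                                         MEM_RESERVE | MEM_COMMIT,
                                         PAGE_READWRITE);
                  if (p)
                      return p;
                  Sleep(10); /* give the pagefile a moment to grow */
              }
              return NULL; /* still failing, let the caller treat it as OOM */
          }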

      • account42 20 hours ago

        This is something you also need to do for other Win32 APIs, e.g. file writes may be temporarily blocked by anti-virus programs or whatever, and not handling that makes for unhappy users.

  • adzm a day ago

    Never knew about the destructor feature for fiber local allocations!
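
    It lines up pretty closely with the destructor you pass to pthread_key_create. Roughly (hypothetical names, not the WebKit shim):

        #include <windows.h>
        #include <stdlib.h>

        static DWORD slot;

        /* Runs automatically when the fiber (or thread) exits,
           much like a pthread_key_create destructor. */
        static void CALLBACK free_slot_value(void* value)
        {
            free(value);
        }

        void slot_init(void)
        {
            slot = FlsAlloc(free_slot_value);
        }

        void slot_set(void)
        {
            FlsSetValue(slot, malloc(64));
        }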

andy99 a day ago

I'm a big fan of pigz, I discovered it 6 years ago when I had some massive files I needed to zip and a 48-core server I was underutilizing. It was very satisfying to open htop and watch all the cores max out.

Edit: found the screenshot https://imgur.com/a/w5fnXKS

kjksf 5 days ago

Worth mentioning that this is only of interest as technical info on the porting process.

The port itself is very old and therefore very outdated.

mid-kid 20 hours ago

I'm not sure how willing I'd be to trust a pthread library fork from a single no-name GitHub user. The mingw-w64 project provides libwinpthread, which you can download as source from their SourceForge, or as a binary plus headers from a well-known repository like MSYS2.

account42 20 hours ago

> Porting pthreads code to Windows would be a nightmare.

Porting one application that uses pthreads to use the Win32 API directly is, however, a lot more reasonable, and it gives you more opportunity to deal with impedance mismatches than a full API shim has. The same goes for dirent and other things, as well as for the reverse direction. A slightly higher-level abstraction over the things your program actually needs is usually a better solution for cross-platform applications than using one OS API and emulating it on other systems.
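
A minimal sketch of that last point, with a made-up my_mutex type that the program owns and each platform maps to its native primitive:

    /* The program codes against my_mutex; each OS supplies the backing lock. */
    #ifdef _WIN32
    #include <windows.h>
    typedef SRWLOCK my_mutex;
    static inline void my_mutex_init(my_mutex* m)   { InitializeSRWLock(m); }
    static inline void my_mutex_lock(my_mutex* m)   { AcquireSRWLockExclusive(m); }
    static inline void my_mutex_unlock(my_mutex* m) { ReleaseSRWLockExclusive(m); }
    #else
    #include <pthread.h>
    typedef pthread_mutex_t my_mutex;
    static inline void my_mutex_init(my_mutex* m)   { pthread_mutex_init(m, NULL); }
    static inline void my_mutex_lock(my_mutex* m)   { pthread_mutex_lock(m); }
    static inline void my_mutex_unlock(my_mutex* m) { pthread_mutex_unlock(m); }
    #endif

The abstraction only has to cover what the program actually uses, so any impedance mismatch (recursive locking, timed waits, etc.) gets handled once at that boundary instead of inside a general-purpose shim.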

themadsens a day ago

I wish premake could gain more traction. It is the comprehensible alternative to CMake etc.

  • account42 20 hours ago

    I'd rather everyone use CMake than have to deal with yet another build system. Wouldn't be so bad if build systems could at least agree on the user interface and package registry format.

  • beagle3 a day ago

    Xmake[0] is as simple as premake and, IIRC, does everything Premake does and a whole lot more.

    [0] https://xmake.io/

  • PeakKS a day ago

    It's 2025, just use meson

    • nly a day ago

      Completely useless in an airgapped environment

      • throwaway2046 a day ago

        Could you elaborate on that?

        • carlmr a day ago

          I'm guessing it needs internet for everything and can't work with local repositories.

          • account42 19 hours ago

            Not really a fan of Meson but I doubt that that's the case as it is very popular in big OSS projects and distributions aren't throwing a fit.

nialv7 a day ago

The best kind of porting - other people have already done most of the work for you!

jqpabc123 5 days ago

This is clearly aimed at faster results in a single user desktop environment.

In a threaded server type app where available processor cores are already being utilized, I don't see much real advantage in this --- if any.

  • GuinansEyebrows a day ago

    depends on the current load. i've worked places where we would create nightly postgres dumps via pg_dumpall, then pipe through pigz to compress. it's great if you run it when load is otherwise low and you want to squeeze every bit of performance out of the box during that quiet window.

    this predates the maturation of pg_dump/pg_restore concurrency features :)

    • ggm a day ago

      Not to overstate it, but embedding the parallelism into the application follows the logic of "the application is where we know we can do it", while embedding the parallelism into a discrete lower layer and using pipes follows "this is the generic UNIX model of how to process data".

      The thing with "and pipe to <thing>" is that you then reduce to a serial buffer delay decoding the pipe input. I do this because often it's both logically simple and the serial-to-parallel deblocking delay on a pipe is low.

      Which is where xargs and the prefork model come in, because instead you segment/shard the process, and either have no re-unification burden or it's a simple serialise over the outputs.

      When I know I can shard, and I don't know how to tell the application to be parallel, this is my path out.