eps 5 years ago

This will end up including the wrong headers if they happen to be erroneously present on the machine. It basically assumes that if xyz.h is present, it must be the right header for the job.

The standard approach is more robust and predictable - if we are building for X, then xyz.h must be present. It's a stronger invariant.
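
To illustrate (a sketch; xyz.h is a placeholder name, and __has_include is presumably the mechanism the article has in mind), a presence check picks up whatever xyz.h is found first on the include path:

    // Presence detection: whichever xyz.h the preprocessor finds wins,
    // even a stale copy left behind by some other package.
    #if __has_include(<xyz.h>)
      #include <xyz.h>
      #define HAVE_XYZ 1
    #endif

    // Platform detection states the invariant instead: on this platform
    // the header is part of the contract, and a missing one is a hard error.
    #if defined(_WIN32)
      #include <xyz.h>
    #endif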

  • ComputerGuru 5 years ago

    > The standard approach is more robust and predictable - if we are building for X, then xyz.h must be present. It's a stronger invariant.

    I'm not sure whether you presently maintain any popular cross-platform packages (i.e. source code + build scripts), but that invariant hasn't held for a long time. With all the different fringe distributions, bastardized kernels, the mix of packages installed via the preferred system package manager vs third-party package managers like npm, pip, and Homebrew, newer versions of libraries built and installed by hand from a tarball or git checkout, etc., all modern build scripts have switched to feature detection rather than (exclusively) platform detection (which I still favor, when I can get away with it).

    autoconf, cmake, meson, etc. all recommend testing the toolchain for a feature (e.g. building a one-liner that tries to call a system API, linking first against x, and if that fails, then y, then z) and going with whatever succeeds first (in the order suggested by the developer). This leads to horribly complicated messes with infinite permutations, but generally significantly fewer user issues when someone attempts to build a package or library.
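
    A classic concrete case (my wording, not lifted from any particular build script): does clock_gettime() need -lrt, as it did on older glibc, or is it in the base C library? The probe itself is tiny:

        // Probe translation unit: configure compiles and links this once per
        // candidate set of link flags (none, then -lrt) and records the first
        // combination that succeeds.
        #include <time.h>

        int main() {
            struct timespec ts;
            // Call the symbol for real, so a missing implementation fails
            // at link time rather than merely compiling.
            return clock_gettime(CLOCK_MONOTONIC, &ts);
        }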

    However, I do feel that I may have misunderstood you, and perhaps this is the approach you are suggesting?

    • compsciphd 5 years ago

      autoconf et al., don't they test that the feature/API works as expected, not just that something compiles when the header is included but not actually used?

ori_b 5 years ago

Please don't do this. Instead, write portability shims, and include them unconditionally.
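
Roughly like this (a sketch with made-up names): all the platform conditionals live in one shim header, and every caller includes it unconditionally.

    // portable_sleep.h: the only place an #ifdef appears. Callers include
    // this everywhere and just use portable_sleep_ms().
    #pragma once

    #if defined(_WIN32)
      #include <windows.h>
      inline void portable_sleep_ms(unsigned ms) { Sleep(ms); }
    #else
      #include <time.h>
      inline void portable_sleep_ms(unsigned ms) {
          struct timespec ts = { (time_t)(ms / 1000), (long)(ms % 1000) * 1000000L };
          nanosleep(&ts, nullptr);
      }
    #endif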

  • ComputerGuru 5 years ago

    This has been my preferred approach for multitargeting since day one (long before there was even a unified interface for creating cross-platform synchronization primitives, let alone filesystem and hardware access), but unfortunately I think we lost that race a long time ago.

  • IloveHN84 5 years ago

    Why not test for this with CMake or a similar tool?

ska 5 years ago

> Now, the code doesn’t depend on the platform name, which might be better in some cases.

Even the author is hedging with "might". This approach isn't going to give you robust portability.

  • Const-me 5 years ago

    C and C++ are too low level for robust portability.

    The proposal is not a silver bullet for all portability issues. I think it's still going to help, by making it slightly easier (on average) to both write and build cross-platform software.

beached_whale 5 years ago

I wrote a tool a while back that tests a compiler for what it says it supports. It's probably not fully up to date with the C++20 ones, but it should be once we know the final set. I haven't found any false positives so far, but definitely some false negatives (Apple Clang), where they don't report what they support. This may also be due to their not fully supporting the features in question. https://github.com/beached/cpp_feature_flags
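
For a flavor of what's being tested (my own sketch, not code from the repo): the standard feature-test macros let the compiler and library self-report support, and a false negative is an implementation that has the feature but doesn't report the macro.

    #include <version>  // C++20 header that defines the library feature-test macros
    #include <cstdio>

    int main() {
    #if defined(__cpp_concepts) && __cpp_concepts >= 201907L
        std::puts("concepts: reported");
    #else
        std::puts("concepts: not reported (possibly a false negative)");
    #endif
    #if defined(__cpp_lib_format)
        std::puts("std::format: reported");
    #else
        std::puts("std::format: not reported");
    #endif
    }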

shin_lao 5 years ago

I would refrain from doing this.

The problem is that you may have the proper include, but may be using the wrong compiler.

Additionally, you may need a specific version of the implementation of the header, not just the presence of the header.
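
For example, something like this (XYZ_VERSION_MAJOR/MINOR are hypothetical macros standing in for whatever the real library exports):

    #include <xyz.h>  // placeholder: the header is present, but which version?

    // Presence alone isn't enough; guard on the version the code was
    // actually written against.
    #if !defined(XYZ_VERSION_MAJOR) || XYZ_VERSION_MAJOR < 2 || \
        (XYZ_VERSION_MAJOR == 2 && XYZ_VERSION_MINOR < 5)
      #error "xyz >= 2.5 is required"
    #endif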

Thoughts?