eloff a year ago

C++ is a great real world example of ignoring the advice "if you find yourself in a hole, stop digging". They have this problem with complexity and the only strategy they have to tackle it is by adding more complexity. Maybe they're going to overflow back to 0. Or they'll come out in China. Who knows. I've moved on.

  • g42gregory a year ago

    I feel that this is a feature, not a bug. While I don't use C++ extensively, I think the Industry needs at least one language, which does not constrain the developer in any way. I think such a language is currently C++. It has all kinds of guidelines now to stay in the "safe zone", but there are many developers who (1) need unfettered access to the hardware and/or (2) need all kinds of language capabilities to write extreme-performance code.

    • JonChesterfield a year ago

      I think the Industry rather likes heavily constrained languages. There's a sense in which that's the point of a language - it's an abstraction that cannot be breached, which limits what you can ask the underlying machine to do.

      C++ won't let you define new types at runtime, but the advantage there is you know that the rest of your C++ codebase won't be doing that. It won't let you reflect on runtime data, but that means various changes you make to your program are undetectable outside certain boundaries (class or translation unit, sometimes shared library).

      Forth or assembly will let you do whatever the hardware is game for because either can emit raw bytes and then interpret them as machine code. The cost is you have, at best, conventions about what the rest of the codebase is doing.

      If you push too hard towards high performance in C++ the language lets you down. Aliasing rules mean data must be always atomic or never, and can't sometimes be an array of simd values. Or fno-strict-aliasing which costs you elsewhere. No control over calling conventions, instruction selection, register allocation or scheduling. On the bright side you can usually force partial evaluation well enough with template instantiations, but you can't have the inverse where you explicitly fold equal machine code implementations on different types. Plus trivial stuff like the embarrassment of unique_ptr imposing overhead if you pass it to a function. So C++ will get you within N% of optimal, most of the time, and that's usually good enough.

    • virtualritz a year ago

      Could you elaborate what I can do in C++ that I can't do in Rust (using unsafe, if I must)?

      For context: I've been writing code in C++ for most of my career (25+ years), and since it was for VFX/3D rendering, it was mostly performance-critical code. Now I write Rust for some very performance-critical tasks in finance ...

      • chlorion a year ago

        There are still some things that C++ can do that Rust can't. A few of the larger and more common examples would be specialization, placement, and variadic generics.

        At least variadic generics isn't really a performance thing, it's just an example of a rough edge you run into here and there. Specialization and placement can be pretty important for performance though!

        • virtualritz a year ago

          I was asking in the context of OP's claim, which was about low-level optimizations. I think only placement is a valid counterexample here, but AFAIK Rust has support for that, if a bit elaborate to express.

        • estebank a year ago

          Placement is not yet a language feature, but it can be expressed in a verbose manner and some have written macros for it.

      • kllrnohj a year ago

        > Could you elaborate what I can do in C++ that I can't do in Rust (using unsafe, if I must)?

        Use existing C++ libraries/code. I love Rust, or at least what I've seen of it since I haven't gotten the pleasure to use it in a meaningful capacity, but it's naive to pretend existing code can just be re-written and therefore existing languages should stop improving.

        I do think it'll be fascinating to see how the performance of Rust and C++ ends up shaking out head to head. That is, how much is Rust actually sacrificing in performance by reducing the amount of UB (if anything, or whether other aspects allow for better performance on average).

        • ameliaquining a year ago

          The only operations I can think of that are undefined behavior in C++ but not Rust are signed integer overflow (which Rust defines to panic in debug builds or wrap around in release builds) and out-of-range casts from floating-point to integer (which Rust defines to saturate).

          Out-of-bounds index access is a borderline case; both languages have a safe index operation wherein out-of-bounds accesses trap, and a fast one wherein they're undefined behavior. However, C++ gives the convenient bracket syntax, which programmers are likely to reach for by default, to the fast behavior, and requires the safe behavior to be spelled out ("at"), while Rust does the reverse (the fast behavior is "get_unchecked" or "get_unchecked_mut"). Also it seems that not all relevant data types in the C++ standard library support the safe behavior, which is unfortunate.

          In all other cases (that I can think of), Rust prevents undefined behavior at runtime not by changing it to do something defined at runtime instead, but by requiring the programmer to prove to the compiler that the undefined behavior can't happen. This may be annoying, and may encourage programmers to resort to things like RefCell that have runtime costs in order to avoid the difficulty of such proofs, but it should never stop you from doing the unsafe maximum-performance thing if you really want to.

          Rust can also do some optimizations that C++ can't; it has additional forms of undefined behavior, like mutable aliasing and invalid UTF-8 strings, that the compiler can in principle exploit. Probably the more important advantage, though, is that Rust's maintainers can freely make changes to its compiler and standard library that don't preserve compatibility with existing compiled code (as opposed to existing source code), because the language has never promised ABI stability and (unlike C++) doesn't have an entrenched community of users who count on ABI stability and will complain if it's broken. Almost all Rust binaries statically link all dependencies except for libc, and build all statically-linked dependencies other than the standard library (which is tightly coupled to the compiler) from source on every compilation, and this has been the case since the language's beginning. For an explanation of why C++'s need to preserve backwards compatibility for compiled code inhibits optimization, see: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p20...

          As for your other point, I think Rust will need smoother C++ interop than it currently has before it can displace C++, but there's hope. Here's the project I'm currently most optimistic about: https://github.com/google/crubit

      • eloff a year ago

        I also use Rust now. If you use unsafe, there is little you can do in C++ that you can't do in Rust. However, some things like data races and mutable aliasing are permitted in C++, but not in unsafe Rust, as they invoke undefined behavior. The Rust nomicon has more details about what's allowed.

        • virtualritz a year ago

          > However, some things like data races and mutable aliasing are permitted in C++, but not in unsafe Rust, as they invoke undefined behavior.

          Define 'permitted'. Unsafe is, among other things, exactly that: it tells rustc to permit you to do things that are ... well, unsafe. Including the ones you listed.

          Maybe you don't understand the meaning of unsafe in Rust?

          Unsafe causing UB is a possibility. As is taking two &mut refs to the same memory location using exactly that keyword. And that example works for multi-threaded Rust code using a shared resource accessed as such too.

          • kaba0 a year ago

            The compiler is free to never write your changes out to memory in that shared-resource use case, which you might be depending on. AFAIK Rust doesn't have a standardized memory model, which you'd need in order to write correct code in the presence of data races.

            • ameliaquining a year ago

              This isn't a difference between Rust and C++; data races are undefined behavior in both languages. If you want two different threads to be allowed to access the same memory without locking, that's what std::atomic (in C++) or std::sync::atomic (in Rust) does.

              • cyber_kinetist a year ago

                He’s talking about formal memory models, which are a bit different from what you’re talking about (though you’re correct that they are pragmatically similar.) C/C++ has a formalized memory model that describe the high level logical rules/guarantees for writing safe concurrent code, while Unsafe Rust doesn’t have one explicitly written in spec/paper (although they kinda borrowed a less-formalized version of it from C/C++.) It has to do with ironing out hardware-specific behavior to obey a certain set of rules, while making sure all the compiler optimizations do not perform any invalid transformations against these rules. (The Rustonomicon explains this well in layman terms, and acknowledges the complexity of the issue: https://doc.rust-lang.org/nomicon/atomics.html)

                In practice, it seems like the C/C++ model has some glaring flaws anyway, so I can’t say that Rust’s is really “worse”. But since Rust has this mission to be better than C++ in terms of safety, this is one of thorny issues of Rust that need to be tackled to really let its advantages shine.

                • JonChesterfield a year ago

                  I thought the C11 memory model was considered correct, possibly modulo 'consume' which seems to be ignored. The implementation in terms of types instead of operations has a performance cost but otherwise works. What glaring flaws do you have in mind?

                  • cyber_kinetist a year ago

                    Well, not exactly “glaring” (unless you are a PL expert), but for instance there’s a paper that was referenced in that Rustonomicon link:

                    https://fzn.fr/readings/c11comp.pdf

                    And also

                    https://plv.mpi-sws.org/scfix/paper.pdf

                    Though maybe these issues were fixed in C++20 (I only recently learned that they revised their memory model there!)

                    • JonChesterfield a year ago

                      Nice references. Going to take me a while to make sense of them. A superficial interpretation is one says relaxed doesn't work and the other says sequentially consistent doesn't work. Not great news tbh, hopefully it'll look better on closer examination.

          • eloff a year ago

            It's undefined behavior. You can write unsafe code that's UB in both Rust and C++ (it's permitted, which I think is your objection), but those are not correct programs and they may behave in surprising ways or break when seemingly unrelated changes are made.

        • shaklee3 a year ago

          If you're using unsafe, then what's the argument for using Rust? The comparison should be what you can't do in Rust without unsafe.

          • undefuser a year ago

            Perhaps you can write a small subset of your program in unsafe Rust, like performance critical inner loop, and then the rest of the program in safe Rust.

        • Thorrez a year ago

          What data races are permitted in C++?

          • tialaramex a year ago

            Make some mutable objects, let's say a std::vector of Geese. Spin up two threads, A and B. Make a pointer, or a reference, or whatever means you prefer, like an index, to the same Goose from the vector, and give both A and B that pointer/reference/index.

            Both threads can now change the Goose at the same time. That's a data race. As a result it destroys Sequential Consistency, but that barely matters in C++ because it also is Undefined Behaviour, your program no longer has any meaning.

            In (safe) Rust it won't let you give both A and B a way to mutate the same Goose at the same time, thus data races can't occur, thus you have Sequential Consistency, and your program has a defined meaning.

            • Thorrez a year ago

              Yeah, although I wouldn't exactly call it "permitted" if it's undefined behavior.

              • tialaramex a year ago

                C++ has this IFNDR - "Ill-Formed: No Diagnostic Required" in the standard which marks various things as simultaneously not C++ and so the standard doesn't define what should happen, and yet also your standards conforming compiler is not required to diagnose this and tell you about the problem.

                You can presumably argue that all those things aren't "permitted" but there is no way to detect for sure if you wrote any of them, and if you did your entire C++ program has no defined meaning and might do anything at all.

                There's a reason this exists. Rice's Theorem says non-trivial Semantic questions are Undecidable. You thus can't always correctly decide whether a program has some non-trivial semantic property, but C++ wants to require a whole lot of such properties. So, when it's difficult they just say the compiler must err on the side of emitting nonsense programs.

                [Rust takes the other path here, if the Rust compiler can't be sure that your program has the required semantic properties you get an error. This is rare but annoying, typically you can easily modify your program to satisfy the compiler or when you try to do so you realise actually the compiler was right, this program doesn't have the required semantic properties]

                I am increasingly confident that C++ made the wrong choice here, neither of these outcomes is desirable but the Rust outcome has a negative feedback loop - if changes make Rust's compiler annoy more programmers with spurious errors there's pushback. The C++ approach has positive feedback, as C++ gets less well-defined people's compilers accept whatever nonsense they wrote, nobody tells the committee to stop doing this - until one day it blows up in their face.

      • chrsig a year ago

        > Could you elaborate what I can do in C++ that I can't do in Rust (using unsafe, if I must)?

        Cuda :(

      • shaklee3 a year ago

        template metaprogramming

        • virtualritz a year ago

          The OP was talking about writing performance critical code in either language.

          Template metaprogramming is not a language feature that is needed when I write code that is close to the metal.

          • shaklee3 a year ago

            it absolutely is. I do a lot of embedded programming, and tmp is used heavily in libraries to offload as much as possible to compile time.

            • virtualritz a year ago

              Compile time code generation can be done in many ways.

              Again, templates are not a feature that is part of the close-to-the-metal part of C++.

              What exactly does template metaprogramming in C++ allow you to do that you could not do in Rust? In the context of performance critical code?

              Besides, Rust has generics and macros. The latter are arguably as powerful (if almost as cumbersome to debug) as C++ templates.

              See also https://news.ycombinator.com/item?id=16283956

    • kaba0 a year ago

      > I think the Industry needs at least one language, which does not constrain the developer in any way

      That’s not a thing you want from a programming language. What you can express is just as important as what you can’t.

    • halpmeh a year ago

      C is a way better language in that regard than C++.

  • devnullbrain a year ago

    This is a surface-level meme that just doesn't reflect reality. In old C++ you had to spend all your time worrying about manual ctors and memory management. These problems have been erased by improvements to the language. You don't even use raw pointers any more. The language is less complex to use now than it was 10 years ago.

    If you've moved on then your insight is outdated.

    • _gabe_ a year ago

      I think the third definition here sums up what complex means pretty well:

      > a group of obviously related units of which the degree and nature of the relationship is imperfectly known[0]

      It's impossible for any single person to understand how the C++ language interacts with itself without a reference manual: its grammar can lead to the most vexing parse; it has metaprogramming built in via templates or macros, allowing for arbitrary code execution at compile time; it has a ridiculous number of ways to construct an object (move constructors, copy constructors, default constructors, is the object heap- or stack-allocated, are you using brace initialization or parentheses, etc.); and more.

      Also, I pointed this out in a comment a while ago, but as of C++17, and after over 20 years of writing technical books about C++, Scott Meyers doesn't trust himself to determine whether a given code snippet is or is not valid C++:

      > It's not that I'm too lazy to do it. It's that in order to fix errors, I have to be able to identify them. That's something I no longer trust myself to do.[1]

      If this language isn't considered complex, I don't know what is.

      [0]: https://www.merriam-webster.com/dictionary/complex

      [1]: https://scottmeyers.blogspot.com/2018/09/the-errata-evaluati...

    • usrnm a year ago

      > In old C++ you had to spend all your time worrying about manual ctors

      That doesn't even make sense. Maybe you meant "destructors" but even then it has nothing to do with reality

      • devnullbrain a year ago

        I mean constructors. It's the exception to have to do anything meaningful in the body of a constructor in modern C++ - you can do a lot with member initialisation lists, delegated constructors and defaulted/deleted constructors (remember boost::noncopyable?). And your assignment operators will actually be exception safe.

        • kajaktum a year ago

          Constructors were a mistake. Stupid-and-simple initialisation like Rust's and Go's is the way forward.

          • JonChesterfield a year ago

            Constructors induced exceptions, and once we have exceptions they became the failure reporting mechanism of the standard library, leaving us with goto-some-other-function-using-dynamic-scope as one of the foundation blocks of the language. Thus constructors are a reasonable contender for the worst design mistake in C++.

    • pjmlp a year ago

      Kind of true, if we ignore all the code that is in production and no one is going to rewrite for C++20 ways of coding.

      • devnullbrain a year ago

        Which would not have its problems solved by halting C++ development or moving to a new language

        • pjmlp a year ago

          Anyone working on those codebases needs to be aware of the new ways, while mastering the old ways.

          The ideal modern C++ without having to worry about "deprecated" ways of coding, is more the exception than the rule.

  • fluoridation a year ago

    I don't agree with that analogy. For example, templates were for sure complex, and they're very much a complete feature. That is, they haven't been made more complex over time. If we accept that the committee has already dug themselves into a hole, what they're doing is digging themselves out by digging out the entire field to the same depth, thus eliminating the hole originally dug, which I think is a perfectly reasonable strategy.

    • kllrnohj a year ago

      > For example, templates were for sure complex, and they're very much a complete feature. That is, they haven't been made more complex over time.

      I agree with your broader point but parameter packs (aka, variadic templates) were an addition to templates made in C++11. So strictly speaking they have gotten more complex.

      • fluoridation a year ago

        Fair enough, but that addition wasn't to solve the problem of templates being too complex, but to support usages that were previously impossible.

        • usrnm a year ago

          They were possible and widely used, but a lot less ergonomic. You "just" had to copy-paste one instantiation for each argument count between 0 and, say, 50.

          • fluoridation a year ago

            Wasn't it also at the cost of compilation memory? I still remember the time when it was easy to crash compilers by just giving them hard enough templates.

      • cbsmith a year ago

        I mean, that's just the tip of the iceberg. Templates have evolved a great deal over time, and continue to do so (C++20 added Concepts).

        • kllrnohj a year ago

          Constraints and concepts don't increase the scope or feature set of templates, which was GP's broader point. That is, all the additions are around things that are a smaller, but easier to use, subset of the broader feature. `if constexpr` for example just being a way more approachable std::enable_if, not an expansion of the feature set. And concepts then are about clarifying the contract of the template, they don't actually add new capabilities.

          Reflection, I think, however that ends up looking, will be the next actual feature-set expansion of templates.

          • cbsmith a year ago

            They added a whole new keyword just to support it. Just because C++ templates are Turing complete doesn't mean new features don't count as new features.

  • hinkley a year ago

    C++ is a bit of a honey trap for people who love complexity. Unfortunately it's only caught a small fraction of all of the people who worship at that altar.

    • FpUser a year ago

      I love simplicity. I also love performance. This is why I use modern C++ for backend servers. It allows me to write very simple code that is still very performant (it actually leaves standard stacks in the dust). I sure don't fall into honeytraps, so for me a complex language like C++ is an advantage, as I can always find what works best for a particular task.

      • namkt a year ago

        Do you use any public libraries for those backend servers, or is it all home-grown? Curious what's at play.

        • FpUser a year ago

          Yes I do: cpp-httplib, taopq, spdlog, rapidjson. Also some other extras and alternatives depending on particular needs, but the ones mentioned are enough to get one started on a generic web app backend. What I do not, and likely will not, use are big opinionated frameworks.

          Yes I do write my own libs as well but those are very domain oriented and mostly serve needs of particular application. Nothing exciting there.

          • fredrikholm a year ago

            Do you have any open-sourced projects with this setup? I have a soft spot for (C) single-header libraries; it would be interesting to compare.

            • kaba0 a year ago

              > I have a soft spot for (C) single header libraries

              May I ask you why? The whole preprocessor macro thing is just a disgusting hack that compiles slowly, is error prone and is not even expressive enough for the most basic of things.

              • FpUser a year ago

                I just tried changing a single file in my real project that uses those 4 libs. 3 seconds from change to starting debugging session.

                Full rebuild of the project took 14 seconds.

                While it is not blazing speed like one gets with the likes of Delphi, Go, etc., it is still quite OK. I am a practical person and can tolerate a couple of seconds of compile-run in return for convenient libs.

                And it is sure expressive enough to give me what I want. Yeah, writing template libs is something, but fortunately, other than writing a template here and there, I do not really have to do that kind of work.

            • FpUser a year ago

              Unfortunately I have no time for opensource.

              • aninteger a year ago

                Yeah... writing web apps in C++ will do that. No time.

                • FpUser a year ago

                  Take a deep breath and think about what you've just said.

                  In reality if the applications are of any decent size there is basically no difference time-wise doing those either in C++ with libs or PHP/JS/Python. I have fair practical experience on this.

                  As for open source - I run 2 companies and do a lot of development myself. I have family and also do a lot of fitness so I can creak along at my young age of 60. You get the drill.

  • gavinray a year ago

    I don't write C++ for a living, and only have ~2 years' experience writing it for some hobby projects on weekends.

    I've found that the way I'm most productive is by using data-oriented design where I put everything in structs and don't use classes. I still use a lot of modern C++ library features, but it reads more like C.

    For example:

        struct BufferPool
        {
            alignas(PAGE_SIZE) char frames[BUF_POOL_NUM_FRAMES][PAGE_SIZE];
    
            std::atomic_uint32_t free_frames[BUF_POOL_NUM_FRAMES];
            std::atomic_uint32_t pin_count[BUF_POOL_NUM_FRAMES];
            std::atomic_bool     is_dirty[BUF_POOL_NUM_FRAMES];
            std::atomic_uint32_t page_id_to_frame_idx[];
        };
    
        std::span<const char>
        BufferPool_get_record(BufferPool* pool, RecordID rid, int db_file_fd);
    
    What's nice about having all this stuff thrown into the language is that you're free to pick and choose what you want to adopt from it.

    For a long time, I tried to cram everything I wrote in C++ into an OOP pattern with classes, and it just didn't jibe with me. Then I stopped doing that and started writing it more like TypeScript/Kotlin/C, which I'm more familiar with, and it's been a lot more pleasant ever since.

    There's been so much great stuff in C++ 17/20/23 and you can just cherry pick all the bits you want to use out of it and ignore the rest.

    • cbsmith a year ago

      I mean, if you change the name to "BufferPool::get_record" and you use an implicit this pointer instead of your "pool" argument, you have a method. It seems like you're fighting the paradigm more than you need to.

      • gavinray a year ago

        You're not wrong but for some reason this style is easier for me to work with

        I know logically/semantically they're equivalent. The v-table function generated if the method is moved inside the class/struct winds up being identical.

        Dunno what it is, some kind of familiarity bias with the way the code is laid out I guess, not coming from OOP languages

        • cbsmith a year ago

          > The v-table function generated if the method is moved inside the class/struct winds up being identical.

          Unless you add some additional semantics/keywords, it's not going to create a vtable.

          > Dunno what it is, some kind of familiarity bias with the way the code is laid out I guess, not coming from OOP languages.

          I'm sure that's what it is.

        • thelopa a year ago

          Note: it wouldn’t generate a vtable entry unless the method was declared virtual.

          • gavinray a year ago

            Ahh, what do you call the method table thing for non-virtual classes?

            • webstrand a year ago

              C++ uses name mangling to give names to non-virtual methods. They are no different from a C function that takes a pointer, except they may use a different calling convention.

              Even if the method is virtual, C++ compilers make an effort to call methods directly without indirection through a vtable wherever possible.

              • tialaramex a year ago

                The interesting thing is that you can't call the "function that takes a pointer". Accessing it as a method is the only thing which works.

                A long term ambition for C++ and more specifically for Bjarne has been UFCS, Universal Function Call Syntax, which a few other languages have. But C++ can't do this today.

                • kllrnohj a year ago

                  std::mem_fn will effectively give you the underlying function that takes a pointer. Strictly speaking mem_fn is defined to generate the wrapper function, but in practice the optimizer is just going to strip that away and call the name-mangled function directly.

                  Example: https://godbolt.org/z/6Y1raxMce

                  • Matheus28 a year ago

                    You can just use member function pointers without the wrapper. The syntax for calling them is a bit uglier though.

                    • kllrnohj a year ago

                      Yup, but then the call syntax is different so it fails to achieve the "Universal Function Call Syntax" that OP was asking for which std::mem_fn provides.

                  • gavinray a year ago

                    Whoa, this is really cool, thanks for posting

                • cbsmith a year ago

                  Yeah, UFCS remains elusive, but tantalizingly close. It's pretty easy to make

                      struct foo { void bar(); };

                      void bar(foo* i) { assert(i != nullptr); i->bar(); }

                  work, but there's enough syntactic complexity in the language that it isn't as easy as it should be.

                  • kllrnohj a year ago

                    It's a 1-liner with std::mem_fn

            • cornstalks a year ago

              There is no table if no ‘virtual’ is involved.

              • adrian_b a year ago

                To be more clear, when no "virtual" method is declared, then no table is generated because none is needed.

                In this case, the addresses of all the methods are known at compile time and the compiler knows which to choose when anyone is invoked in the source code, based on the type of the object and on the types of the method arguments.

                When there is at least one "virtual" method and you have a pointer or reference to an object, the dynamic type of the object cannot be known at compile time, so the compiler cannot always determine which method to invoke.

                That is why the compiler needs to create a table with pointers to the virtual methods for each class with such methods, and each object of those classes must include a pointer to the corresponding vtable, so that the correct method to invoke can be determined at run time (from the index of the virtual method).

    • dagmx a year ago

      FWIW, structs and classes are largely identical in C++ , with the exception of public/private defaults.

      So using one over the other doesn’t really change anything with regards to data oriented design.

      • gavinray a year ago

        Aye, as opposed to D where structs are value types and classes are reference types

        It's just a small annoyance to have to put "public:" at the top of every data-class definition

        So I use structs everywhere but you're completely right, no real difference

        • dagmx a year ago

          Yeah I do the same as you since I write a ton of Swift (which does the same delineation as D).

          It’s mostly just a mental marker for myself that I want to treat one as a value type.

    • midnightclubbed a year ago

      If I were code reviewing this I would strongly suggest std::array instead of those raw arrays ;) I'm also curious why you have page_id_to_frame_idx[] without a size. If you are using that paradigm to read off the end of the struct then it needs commenting heavily; if it's a typo then std::array would have caught it.

      I do understand that it’s just a snippet of code to show a style, so ignore this pedant!

  • userbinator a year ago

    The trick with C++ is to use and pick things to work on that haven't devolved into a massive abomination of complexity just to be trendy and "modern". I've found that something like the classic (pun intended) "C with classes" style is a good balance.

    Personally, I prefer to use C (C89) which is high-level enough to be reasonably productive, while at the same time not high-level enough to discourage the creation of overly complex code.

    There are surprisingly frequent posts here from people who have written their own C compiler. I don't think I've seen a single C++ one yet.

    • estebank a year ago

      Are you baiting Walter for a reply? :)

    • JonChesterfield a year ago

      `clang++ -ffreestanding` (without a standard library) is growing on me as a C89 replacement. There's some annoyances like std::launder that you need to implement using compiler intrinsics but it's feasible.

  • klyrs a year ago

    I've got my eye on Cpp2... Herb says he isn't interested in making it an official language but I suspect that once it stabilizes it will be an attractive migration.

    • tialaramex a year ago

      Herb doesn't even want to default to immutable local variables (what he as a C++ person calls "const by default"). Obviously Cpp2 isn't finished enough for this to necessarily jump out, but Herb figures if we make a variable then we must intend to vary it, which is one of those claims I expect from newbie C++ programmers keen to defend their new language.

      No, come on Herb, Kate Gregory has told us why there are variables, they are names for things. The machine doesn't need names, but the human maintenance programmers do. As a Microsoft employee I'm going to hazard a guess that Herb spends a lot less time staring at C++ real humans wrote before they died/ retired/ got fired without notice than Kate does in her consulting job. She knows what she's talking about.

      • ameliaquining a year ago

        I'm honestly not convinced that immutability for local variables is a particularly useful feature. Within the scope of a single function, it's usually pretty easy to see if a variable is being reassigned or not; there aren't the same kind of programming-in-the-large preventing-spooky-action-at-a-distance benefits that come from immutability across API boundaries. You increase the language's complexity and learning curve, for a benefit that's rather speculative and unclear.

        (The exception is closures; prohibiting a variable from being reassigned after it's been closed over is useful because anyone reading the code may expect one of two conflicting behaviors depending on context, and so it's good to instead write the code in a way that doesn't have that ambiguity. Java got this one right.)

        A post articulating this point, by the person primarily responsible for Rust's shared-xor-mutable architecture (so presumably he has some idea what he's talking about, though note that his argument did not carry the day): https://smallcultfollowing.com/babysteps/blog/2014/05/13/foc...

        • tialaramex a year ago

          Notice that what you're asking for is to abolish const local variables. Which also isn't what Herb offers, this was the status quo in C during the K&R era because "const" didn't exist in K&R C. If you actually want const local variables, just not by default, you're not actually agreeing with Niko.

    • elteto a year ago

      But then you have Google with Carbon… which is a direct competitor to cpp2. Apple hasn’t shown interest in either, and MS will most likely support cpp2, of course.

      Without the giants aligning their interests I’m afraid there will be a split in the community. Clang’s C++ standard library is already behind GCC’s because Apple and Google have “moved” on.

      • bsldld a year ago

        > ...Apple and Google have “moved” on.

        Apple moved to Swift, but what has Google moved to?

        • elteto a year ago

          It’s in the process of “moving” on to Carbon.

          • devnullbrain a year ago

            That is a lot of optimism for a 2022 project from 2022 Google.

            • elteto a year ago

              Everything starts somewhere. I believe Chandler Carruth is leading the project so that means non-trivial resources are being dedicated to it. Their timeline is also fairly aggressive.

              • ameliaquining a year ago

                They're also simultaneously looking at Rust as a possible successor to C++ (e.g., by developing better interop tooling). It's believed that at most one of these projects can succeed in the long term.

                • elteto a year ago

                  Interesting, thanks for the insider take. I honestly would love if they went with Rust.

  • joebaf a year ago

    on the other hand, with more options you can express more and write safer code (for example with constinit, which solves the static initialization order fiasco...) And if you don't have enough knowledge you can just stick to const.

    What are the alternatives for a systems programming language? (Rust seems to be fine, but still it's not super easy...)

  • overgard a year ago

    Eh, it's gotten more complex, sure, but in real world code most of this solves real problems, and even if it's technically adding to the language, in practice most of the new stuff replaces old patterns. I don't think anyone that's used C++20 would want to go back to C++98, or even C++11.

  • Animats a year ago

    I'm glad I switched to Rust.

    In C/C++, the language designers can't take anything out. That would break things. There's so much legacy code.

    C hasn't even totally removed "strcat", etc. yet. That should have been removed in the previous century.

wvenable a year ago

A lot of negative comments here but these keywords seem perfectly reasonable to me. If I actually have a place where I'd need them, I'd be able to understand which one to use and why.

I like to be able to do more compile-time evaluation and it's definitely a lot less confusing now than abusing templates for that.

  • kllrnohj a year ago

    Yeah I find these keywords perfectly approachable & straightforward. I find them easier to understand and rationalize than, say, Kotlin's set of with/let/apply/also/run ( https://kotlinlang.org/docs/scope-functions.html#function-se... )

    The comments all seem like just mindless anti-C++ circle jerking, with zero relevance to what's actually being discussed in the article. But that's sadly quite common with C++ articles here. If the title has C++, expect to see the same rants regardless of the article.

  • throwawaymaths a year ago

    Ok but it's 2022 and something like ziglang can do all of these things with two keywords (const and comptime). Why does C++ need four?

    • kllrnohj a year ago

      It looks like in Zig all functions are `constexpr`?

      As for `constinit`, it looks like ziglang doesn't have an answer for the problem that constinit solves. That is, ziglang appears to have the same bugs here that C++ did before constinit was added.

      • anonymoushn a year ago

        Can you give an example of a bug in a Zig program caused by the lack of constinit? It seems like Zig just does not have the delayed initialization of static and thread-local variables that C++ has, but I'm not completely sure of this.

        • kllrnohj a year ago

          Unless I'm misreading this: https://ziglang.org/documentation/master/#Container-Level-Va...

          Zig has mutable globals that are lazily initialized at runtime, and constant globals that are initialized at compile time (so, C++'s constexpr variables). But what it doesn't have is compile-time initialized mutable globals (which is what `constinit` gives you)

          • AndyKelley a year ago

                // compile-time initialized mutable global:
                var foo = bar();
            • kllrnohj a year ago

              > If a container level variable is const then its value is comptime-known, otherwise it is runtime-known.

              I'm not sure how to reconcile that Zig documentation with your comment's claim. It appears that Zig relies on opportunistic behaviors here instead of contracts. That is, your var foo could be comptime known, but isn't required to be. Which is fine enough, but not comparable to `constinit` either.

              Alternatively if all container variables are exclusively comptime initialized, then the obvious missing part from Zig would be the inverse of constinit which is the onload initializer behavior.

              • anonymoushn a year ago

                The quoted documentation says if you initialize a mutable variable to 5 at compile time and later change it to 2 and print it, it won't print 5, because uses of that variable in expressions won't be resolved to the constant value used to initialize it at compile time.

                When is onload? I don't think that's a time that occurs during program execution in Zig.

                • kllrnohj a year ago

                  Does Zig not have an equivalent of `__attribute__((constructor))`?

                  If not then that'd be the 3rd type of initialization it's missing, the equivalent of C++'s nothing specified.

                  • ori_b a year ago

                    It's relatively easy to solve the problem in a language with acyclic imports by doing a topological sort of the imports, and initializing in that order. In C++, with textual includes and no standardized control over the linker, you don't have the information or well-defined control to order the initialization, so you have a problem.

                    • kllrnohj a year ago

                      If it was "relatively easy" then there wouldn't be a million dependency injection solutions for Java of ungodly runtime cost to try and solve this problem.

                      People want to initialize stuff at load library time. They will always want to do this. If the language doesn't provide a solution, people will just bolt on shitty add-on toolchains to do it anyway.

                      This isn't a uniquely C++ problem (__attribute__((constructor)) is after all for C, not C++).

                      Take for example Rust, which also doesn't provide an attribute((constructor)) equivalent and says nothing happens before or after main. Except https://crates.io/crates/ctor exists to immediately say "fuck that noise" and racked up >20m downloads.

                      • ori_b a year ago

                        > If it was "relatively easy" then there wouldn't be a million dependency injection solutions for Java of ungodly runtime cost to try and solve this problem.

                        You realize Java has the feature you're discussing, without dependency injection?

                           class Foo {
                               static {
                                   initialize();
                               }
                           }

                        They even have a well defined execution order:

                        https://docs.oracle.com/javase/specs/jls/se8/html/jls-14.htm..., section 8.7, with order defined in section 12.4

                        • kllrnohj a year ago

                          That runs at class load time, not at library/executable load time as c++ static constructors or __attribute__((constructor)) does.

                          So no, that's not equivalent. Not even close. This is why you see things in Java that scan through loaded jars looking for all classes with attributes to just speculatively load. To recreate what c++ has natively & C has with a broadly supported extension.

                    • jcelerier a year ago

                      Okay, but we're talking about native languages where you are bound by the rules of your platform's dlopen / LoadLibrary implementation; this is where so many static initialisation bugs lurk, when you load libraries dynamically

                      • ori_b a year ago

                        No, you're not. The fact that a misfeature exists in the toolchain doesn't force you to use it.

                        You can order the init sections on your own when you generate the binaries and add your own constraints, and if your toolchain doesn't give you that control, you can generate and call your own init functions. (Since I didn't want to dick with the linker too much, I generated my own init functions in my natively compiled language to solve this problem).

                        Your language's dlopen wrapper can also dlsym the initializer and call it if it's present.

                        This is very much a C++ problem, because its separate compilation model throws away initializer ordering, and prevents the compiler from doing anything about it. And, of course, compatibility makes this a tough sell for C++ -- languages unconstrained by history don't have to worry about this.

                        tl;dr: Ignore the constructor attribute/init section, and generate your own function that orders things correctly. Then call it before main.

                        • kllrnohj a year ago

                          > tl;dr: Ignore the constructor attribute/init section, and generate your own function that orders things correctly. Then call it before main.

                          How does my shared library do that?

                          • ori_b a year ago

                                 def loadlib() {
                                    lib = dlopen("libfoo.so")
                                    sym = dlsym(lib, "pkgname.__init__")
                                    sym()
                                    return lib
                                 }
                            
                            Hell, you can even do this with a single constructor attribute that runs the initializer, if you want to be fully dlsym compatible.

                            If you have a DAG of dependencies, it's impossible for your library to care whether main or dlopen has called its initializer, because the same initialization order of dependencies is observed in either case. The only thing you need to beware of is guarding against double-init if you dlopen, and the generated library-level initializer code can do that.

                            • kllrnohj a year ago

                              So the answer is you can't, and you're just too stubborn to say as much. Not all languages need all features, zig is allowed to just be insufficient for things, this being one of them.

                              • ori_b a year ago

                                ???

                                If you generate a single initialization symbol that guards against multiple runs in the init section, this works just fine with dlopen.

                                This isn't even hard, but you need to be able to form a full dag of imports.

                                The only problem is that the linker slams together the .init, .ctors, or .init_array sections in some arbitrary order (usually the order that .o files are passed in on the command line, but there are no guarantees). If you topologically sorted the object and library list by import order, things would work out. But because the linker doesn't cooperate, a language that wants to avoid the initialization order fiasco needs to provide its own ordering. Or its own linker.

                                As far as Zig goes -- I have no clue what it does. Never touched it.

                                Also, as a side note -- initializers were initially designed so that there would be exactly one init function per binary or library, and then people wanted to split the initialization up, so the way the init section ends up getting constructed on most unixes is fascinatingly hacky: The linker links `crti.o`, your .o files, `crtn.o`. crti.o contains a function prologue; the init sections in your .o files contain a sequence of call functions, and crtn.o contains a function epilogue. Stitch them all together in order, and you get a single function that you can call.

                                This is deprecated, and now there's a table of function pointers (.init_array).

                                • kllrnohj a year ago

                                  Libraries don't exist in a vacuum with a single well defined user. If I'm shipping, say, libpng.so, and I need to do a one-time initialize per process, the only feasible way to do that is with a c++ static constructor or __attribute__((constructor)). It's why those things exist. It's why linker .ctors exist. It's why rust-ctor crate exists. It's why Java land re-created that by self-scanning all open jars for classes to randomly load.

                                  This is a very widely used capability and it is very definitely not trivial to just make an init function & define the import dag and have that work reliably. People will forget to call it & most of the time they won't have any clue what their dependency DAG even is. And that's assuming it can even form a DAG at all - cyclic dependencies are absolutely a thing, after all.

                                  • ori_b a year ago

                                    Libraries exist with a well defined compiler. The solution is at the compiler level, using information coming from the import graph that exists in languages with a module system.

                                    And I'm not sure how you can forget to import a library and still expect to use it, assuming your language has a module system.

                                    C and C++ can't solve this problem, of course, due to textual inclusion.

                                    • jcelerier a year ago

                                      > The solution is at the compiler level, using information coming from the import graph that exists in languages with a module system.

                                      no, it isn't, because your library has to integrate with other languages when you build native stuff. I should be able to call dlopen from any native language without having to care in which language your library was implemented; just calling my OS's dlopen / LoadLibrary API should be enough. If this doesn't work, then your solution is just not acceptable in many cases, so yes, you have to work within its limits.

                                      • ori_b a year ago

                                        Can you explain how a compiler topologically sorting the entries in .init_array makes a library unusable? What breaks?

            • lvass a year ago

              How does one assert that bar(), or better yet an arbitrary expression you place there, is pure?

              • anonymoushn a year ago

                Well, the compiler attempts to evaluate bar(), and if it cannot, you get a compile error

    • jeffbee a year ago

      Because of path dependency and backwards compatibility. C++ has 30 years of legacy code to build. Zig doesn't even exist yet; after 6 years it is still in "preview".

  • mhh__ a year ago

    The issue isn't so much where C++ is aiming to go but rather how they're getting there.

    Consider: C++ lets an increasingly large subset of the language be evaluated at compile time.

    Current endpoint: the whole language will have constexpr everywhere.

    Correct design: eagerly try to evaluate everything, and ban the things that obviously shouldn't or can't be done at compile time.

maldev a year ago

Sadly, all of these were needed. constexpr was a monstrosity. Whoever on the committee made it "CAN BE" has caused untold horrors on some projects I've worked on. consteval is nice; it's just constexpr, but enforced by the compiler, which is how it should've been. There should never be "CAN BE"; it's my least favorite thing about C/C++. The compiler shouldn't override me, and consteval is a step in the right direction and has made me target all my projects to C++20 (which is luckily easy since I don't use STL functions).

  • kllrnohj a year ago

    I disagree that consteval is what constexpr should have been. It's great to have a function that can work both at compile time and at runtime, `std::max` being a simple & obvious example. You of course need it at runtime; that's the classic version. But why shouldn't it also be able to run at compile time? That's a silly restriction.

    And thus 'constexpr' - it can run in both modes, depending on when & where it's called.

    'consteval' then is for when it must happen at compile time, which is a rather narrow (but very useful!) subset of constexpr.

    Now, what is silly is that `if constexpr` is only allowed to happen at compile time. That should have been `if consteval` instead, the ordering of these additions is unfortunately reversed.

    • maldev a year ago

      The issue is that it took 7 years for consteval to be added. constexpr is extremely prone to "silent" bugs, where you mistype something or do something not allowed in a constexpr; the compiler builds it, but it's never actually compile time. It doesn't tell you, it doesn't warn you; it's just never going to be compile time since it's not valid. consteval fixes this by being "must be compile time". In my view, it's as if we only had template arguments and they finally added actual arguments.

      It should be explicit on what the program is going to put out, especially in a language like C/C++. When you're writing code close to the metal, you need to make sure it's exactly what you wrote. It may be a target platform difference, since I don't use the STL and mainly using C++ language features to target extremely low level devices or goals. In an application dev environment, I can see how constexpr "doing it for you" is fine, but the fact they only had one and not the other is a travesty and has so many footguns.

      For example, I wrote a DRM program that required an additional build step to yell at the developer if they exposed strings or confidential information in the binary. This is solved by consteval, which should've been there since the beginning.

      • kllrnohj a year ago

        You could always know if a function was compile-time or not based on the usages. E.g., static_assert the output or put it in a constexpr variable, and it's guaranteed to be compile-time evaluated or it didn't compile.

        consteval is helpful in iterations during development to catch that, and I like it as such, but it's definitely not a constexpr replacement nor what constexpr should have been. It's a very small subset of constexpr usages.

    • JonChesterfield a year ago

      The proper annotation for a function that can run at runtime or compile time is unannotated.

      • kllrnohj a year ago

        Not in C++ it isn't. If you want to change the spec to allow that, go for it. But skimming, e.g., Zig, where that is the case, it's also quite non-obvious from the docs whether a function can be used in a comptime statement or not.

        `constexpr` makes "you can use this at compile time" an explicitly documented attribute of the function, instead of just a guess or by needing to read the implementation.

  • Rusky a year ago

    This is more of a marketing or teaching problem than an actual language design problem. It's not about the compiler overriding the programmer, it's just designed for a different purpose.

    There were already a bunch of places in the language that required compile-time evaluation- array sizes, template args, static_assert, case expressions, etc. "Can be" is all about enabling existing APIs to be used in those contexts. You can't (and don't want to) force those APIs to run exclusively at compile time, because programs are already using them at runtime all over the place!

    This talk about compile time evaluation got some people thinking about all the other things they'd like to do at compile time. And there is a way to do that with just `constexpr`- `constexpr` variable initializers are a new "compile-time-only" context just like array sizes/etc, so you can decide at the leaves what to compute ahead of time, without forcing the callees to be exclusively compile time.

    But there seems to be some kind of disconnect in how people learn about constexpr, where they hear "compile time" and imagine a different design than what actually went into the language, then get even more confused when they hear "can be" and imagine it as something that happens at the compiler's whim rather than deterministically based on the calling context.

  • shaklee3 a year ago

    no, constexpr was correct for a reason: it's great to have a single function that can run at either compile time or runtime, as allowed. It results in less code. They knew what they were doing.

  • cbsmith a year ago

    I can't see how constexpr could be used for what constinit does. Indeed, there's a part of me that thinks constinit is kind of a poorly chosen name/keyword.

    • kllrnohj a year ago

      Yeah, constinit seems poorly named. Yes, it's "constantly initialized", but the result isn't actually "const", it's still mutable! Which is a great property to have as an option (and indeed a big differentiator vs. constexpr), but it seems like `staticinit` or something would have made it clearer that it's very unrelated to `const`

  • devnullbrain a year ago

    You can't demand too much too fast from compiler developers or you end up with Itanium. C++20 still isn't widely available. MSVC doesn't even support all of C11.

    • kllrnohj a year ago

      > MSVC doesn't even support all of C11.

      Did you mean C11 there or C++11? I'm not sure why C11 is relevant in a discussion about C++. MSVC's support of C has been abysmal for ages, but this has no bearing on its C++ support, and it definitely supports all of C++11. And 14. And 17. And it even has complete support for C++20.

      For C++20, GCC is "only" missing module support, but otherwise supports everything else. Clang is lagging behind, though.

      https://en.cppreference.com/w/cpp/compiler_support

      • devnullbrain a year ago

        I mean C11 because it's an example of a major release in a major language that became unusable in practice because a compiler vendor decided it wasn't important to support it in full. Now you can't bump your compiler version to C11 if you want stable portability. This is relevant because it's what happens to C++ if it demands too much from compiler developers.

        >For C++20, GCC is "only" missing module support, but otherwise supports everything else. Clang is lagging behind, though.

        Then I'll rephrase: it's not only not available widely, it's not available at all.

        • kllrnohj a year ago

          C isn't a major language on windows, at least according to Microsoft, whereas c++ is. It'd be like complaining about the state of Java on iOS. C11 doesn't demand much of anything from the compiler. It's definitely not from overbearing demands made by the language.

jokabrink a year ago

Whenever I see that C++ added some language extension, I can't help but think of Bjarne Stroustrup's "Remember the Vasa" paper [1] and wonder whether this is the type of complexity he warned against...

- [1](https://www.stroustrup.com/P0977-remember-the-vasa.pdf)

  • tialaramex a year ago

    "Remember the Vasa" is why you can't have your feature, it doesn't apply to Bjarne.

ergonaught a year ago

This is the sort of thing that pushed me away from C++ a couple of decades ago. I remain perplexed that the "solutions" to C++'s "problems" have largely been to continue overcomplicating the language.

  • devnullbrain a year ago

    Smart pointers overcomplicated more than auto_ptr? Default ctors are an overcomplication? Optionals worse than void pointers, variants worse than unions?

    • catlifeonmars a year ago

      Well now you have both. So yes, strictly worse from a language complexity perspective because now there are two options to choose from, and a significant, nonzero fraction of developers are going to choose the “wrong” option, and leave me to fix their bugs. Hell I even choose the wrong option half the time.

  • jeffbee a year ago

    Literally nothing in the article, except `const`, existed "a couple decades ago".

orange_joe a year ago

I hate C++. I work with it professionally and have for years. The entire language feels like an anti-pattern at this point. I’m repeatedly told how incredibly powerful this language is if I only use this preferred set of language sub-features. But this whole approach violates the entire premise of a typed language, which is to offload as much complexity as possible onto the compiler so that you can get compile-time warnings & errors. What’s the point of a compiled language if half the language constitutes a footgun?

  • devnullbrain a year ago

    I don't understand how this comment applies to this link. These features are designed around moving as much computation and verification as possible to compile time without using templates. If you need to.

    • chlorion a year ago

      It doesn't, it's just the standard generic "C++ is complex and bad" post that people love to read so much. Every C++ related post will have at least one of these and they are always totally unrelated to anything being talked about.

      • ameliaquining a year ago

        I think the specific connection to the original post is that there are four separate language features that do related but subtly different things, and the differences and interactions between them are difficult to wrap one's head around. I'd like to think that, if you were designing a language's constant/compile-time evaluation features from the beginning without concern for backwards compatibility, you could come up with something equally powerful but conceptually simpler. (One could argue that this is what Zig has done.)

  • cartoonfoxes a year ago

    Same here. I keep eyeing Ada and waiting for the compiler prices to drop.

    • micronian2 a year ago

      Are you unaware of the FSF version of GNAT that is part of GCC that is free to use, even for developing proprietary software?

      • cartoonfoxes a year ago

        I'm aware of it, and I'm not aware of anyone using it for proprietary software in the kinds of projects where Ada would come up in discussion. FSF GNAT + Alire are fine as a hobbyist toy and for open source, but they're not in contention for [workjob]. FSF GNAT isn't just unsupported, but also very thinly documented. And that's if you're only targeting Linux or Windows on x86, and not embedded devices with a cross-compiler you need to build yourself.

        The basic Windows / Linux versions of the cheapest commercial Ada package only recently dropped to high 4-figures / seat. The equivalent cross-compilers are still $$,$$$.

brianjacobs a year ago

In summary:

    constexpr variables are constant and usable in constant expressions

    constinit variables are not constant and cannot be used in constant expressions

    constexpr can be applied on local automatic variables; this is not possible with constinit, which can only work on static or thread_local objects

    you can have a constexpr function, but it’s not possible to have a constinit function declaration
fractalb a year ago

I didn't understand the difference between constinit and consteval. It appears that constinit is just consteval applied to global/static variables. Why two keywords then?

  • gpderetta a year ago

    They are unrelated.

    consteval is applied to functions and makes them immediate functions: every call must be evaluated at compile time, and only at compile time; a call whose arguments aren't constant expressions is a compile error. This is different from constexpr, which merely allows compile-time evaluation, and possibly only for a subset of the arguments.

    constinit is applied to static (or thread_local) variables and asserts that their initialization is static (such variables can otherwise still be initialized dynamically, on startup or on first use).

    • fractalb a year ago

      Both consteval and constinit are evaluated at compile-time, right? If yes, then why two different keywords?

jgaa a year ago

C++ is starting to show that it's Designed By Committee ;)

I'm a big fan of C++. It's been my primary programming language since 1996. C++11 was awesome! I'm in the minority of C++ developers who occasionally take the language (and the compilers - just like Internet Explorer was a great pain for "front end" developers for a decade, MSVC was a pain in the a* for C++ developers) for a ride to its limits.

However, the complexity of the language in C++23 is mind-boggling.

The complexity in C++11 was quite frankly already overwhelming. I've worked as a senior and principal C++ developer in small startups and big established software companies. In the last decade, I've only met a handful of really qualified C++ developers (people who actually know the language, and how to use it effectively). Most of the "Senior C++" developers I've worked with can use basic things in the same way that average Java developers can use Java - but they know nothing about, for example, template meta-programming or how to do concurrency correctly. Many of them were just C programmers who at some point were thrown into a "C++ project" and learned how to "use classes", but still use C functions for starting new threads. In other words, in 2022 they have not even updated themselves to C++11.

It's hard to learn C++ to an advanced level. It's /really/ hard. Most people don't have the personal drive to do it. At least not the majority of the C++ developers I've worked with over the years. They learn the basics, and then just use the knowledge they already have to do their job.

Even I, and friends of mine who are even more enthusiastic about C++ than I am, are starting to feel exhausted by the ever-increasing complexity of the language (and the libraries). One of them suggested that I take a look at Rust. It's a lot simpler to learn Java/Kotlin (applications) or Rust (systems programming) than it is even to maintain a good knowledge of C++.

I still think C++ is a lot of fun, and programming exciting things in the language gets me into flow in a way few other activities can. It's also rewarding - I can do incredibly cool things, and I can incrementally improve on the way I solve the same problem each time I run into it in a new project.

But that's me. Most of the C++ code I've seen in small and large companies stinks. The developers have the tools to do wonderful things - but, like the Sorcerer's Apprentice, they don't know their tools well enough to control them. I think most of the projects started today in C++ would be better off written in a simpler language that the developers actually comprehend.

daniel-thompson a year ago

I'm not a language designer and don't want to be one - I'm open to the idea that these new keywords are actually the simplest, most composable way to address these use cases - but good grief! The burden of complexity weighs so heavily on this language that understanding how and when to use its features to solve your problem can easily be as difficult as the problem itself, or more so.

gpderetta a year ago

It is possible that the constexpr keyword for functions will soon go the way of the dodo^Wregister keyword. It was a pointless keyword added just to get the compile-time evaluation proposal through the approval process, and the authors have already apologized for it multiple times.

GCC has already added -fconstexpr to implicitly mark all functions as constexpr.

  • cbsmith a year ago

    I see -fconstexpr-depth, -fconstexpr-cache-depth, -fconstexpr-fp-except, -fconstexpr-loop-limit, and -fconstexpr-ops-limit, but not -fconstexpr: https://gcc.gnu.org/onlinedocs/gcc-12.2.0/gcc/C_002b_002b-Di...

    Where'd you see -fconstexpr?

    • gpderetta a year ago

      Sorry. It is -fimplicit-constexpr, new in gcc12: https://gcc.gnu.org/gcc-12/changes.html

      • cbsmith a year ago

        So that doesn't implicitly mark all functions as constexpr. It implicitly marks inline functions as constexpr. It weakens your point a tad, but I see now what you are getting at.

        • gpderetta a year ago

          Yes sorry, I should have been more precise.

          Of course you need the function body to actually do comptime evaluation.

  • nynx a year ago

    This seems like it could cause issues. Being able to evaluate at compile time is part of the API surface of a function. If it's not explicit, it could be very easy to accidentally change the internals of a library so they no longer support compile-time evaluation, without changing the explicit signature.

    • gpderetta a year ago

      The problem is that constexpr already is not sufficient to guarantee that.

      • cbsmith a year ago

        How so? constexpr doesn't guarantee that an expression is evaluated at compile time, but it does guarantee that it could be.

        • jcelerier a year ago

          > but it does guarantee that it could be.

          in which sense do you mean this? consider for instance

              constexpr int foo(int x) 
              { if(x < 0) throw "error"; return x; }
          
          I would say that "guarantee that it could be" means that as long as the function definition itself "builds" / is validated by the compiler, you can use it in a constexpr context - yet this is not the case here:

              constexpr int a = foo(123); // works fine
              constexpr int b = foo(-123); // compile error
              int c = foo(-123); // compiles fine (throws at run time)
          
          so having "constexpr" in the API does not mean that your code will always build; as soon as you have constexpr, the entire implementation of the function is part of the API, making the keyword moot, as gpderetta says
          • cbsmith a year ago

            Yup. Turns out, I was wrong.

        • gpderetta a year ago

          That's what everybody originally believes (me included)!

             constexpr bool is_right_answer(int x) {
                if (x == 42) return true;
                else {
                  std::cout << "try again\n";
                  return false;
                }
             }
          
          A function can be constexpr as long as at least one possible invocation can be compile-time evaluated. In fact, compilers can't generally prove that no invocation of a function could be constant-evaluated.

          I believe consteval has the semantics you want, but I've yet to study it in detail.

          For more details see: https://www.foonathan.net/2020/10/constexpr-platform/ .

          • cbsmith a year ago

            I stand corrected.

chris_wot a year ago

constinit allows for constant initialization. Why, how enlightening! Now... what is constant initialization?