whizzter 3 days ago

The author brings up basically the canonical example of where OOP-style design shines and where functional programming falters.

The simple truth, however, is that going too far into either the functional or the OOP camp will hurt, because strict adherence amounts to subscribing to a silver bullet.

The middle road is simply a better engineering option, use a practical language that supports both paradigms.

Keep data transforms and algorithmic calculations in functional style, because those tend to become hot messes if you rely too heavily on mutation (even if there are performance gains, correctness is far, far easier to get right and to write tests for with a functional approach). Then there are other concerns where an OOP-derived system with inheritance abstractions will make things easier.
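A tiny TypeScript sketch of the testability point (names are made up): a pure transform needs no setup or teardown and cannot corrupt its input, so a test is just "call it, compare values".

```typescript
// Hypothetical example: a pure data transform.
type Reading = { readonly t: number; readonly celsius: number };

// No mutation: the input array and its records are left untouched,
// so the function is deterministic and trivial to test.
const toFahrenheit = (rs: readonly Reading[]): number[] =>
  rs.map(r => r.celsius * 9 / 5 + 32);
```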

  • bob1029 3 days ago

    > The middle road is simply a better engineering option, use a practical language that supports both paradigms.

    Agreed - I would get more specific with this too. Which arrangement makes more sense:

    1. A solution that has an OOP outer shell hosting an FP inner core

    2. A solution that has an FP outer shell hosting an OOP inner core

    I argue that 1 makes way more sense - I look at the OOP/procedural code as the foundation layer upon which the FP code can be executed. This firewalls the messy outside world from the pure maths. For me this would be entry points like BusinessLogic.ExecuteRules() after having externally prepared all of the data contexts for execution. The results of this are then processed again by the external OOP code for downstream handling (writing to database, responding to web client, etc).

    The other way around feels nonsensical to me.
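    A minimal TypeScript sketch of arrangement 1, with all names hypothetical: the OOP/procedural shell gathers inputs, the pure core computes, and the shell handles the effects afterwards.

```typescript
// Hypothetical sketch: OOP outer shell hosting an FP inner core.
type Order = { id: number; total: number };
type RuleResult = { approved: Order[]; rejected: Order[] };

// Functional core: deterministic, no I/O, trivially unit-testable.
function executeRules(orders: Order[], limit: number): RuleResult {
  return {
    approved: orders.filter(o => o.total <= limit),
    rejected: orders.filter(o => o.total > limit),
  };
}

// Imperative/OOP shell: firewalls the messy outside world from the core.
class OrderService {
  private load: () => Order[];
  private save: (r: RuleResult) => void;
  constructor(load: () => Order[], save: (r: RuleResult) => void) {
    this.load = load;
    this.save = save;
  }
  run(limit: number): void {
    const orders = this.load();                 // e.g. read from a database
    const result = executeRules(orders, limit); // pure maths in the middle
    this.save(result);                          // e.g. respond to a web client
  }
}
```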

    • zelphirkalt 5 hours ago

      Mutation is infectious. It takes great care to have an OOP core inside an outer FP shell. The OOP core might have to become completely thread-safe, using mutexes and the like, to be reliable. That might be more painful than sticking to a single paradigm.

      • lou1306 2 hours ago

        Maybe OOP is a strong word, but it would make absolute sense to have an imperative core if you need the highest possible performance, or if the OOP/imperative core is pre-existing and you want to "regulate" access to it through the affordances of FP.

    • cesaref 2 hours ago

      Unless you are on a greenfield project, you don't get to make decisions like that. You start with a jumble of libraries and helper classes, or a complete app which needs its functionality amending. The right choice will differ depending on where you start, if you see what I mean.

      And yes, I'm an advocate for the mix-and-match approach.

      • thaumasiotes an hour ago

        > a complete app which needs its functionality amending

        Where are you from? This reminds me of a long-since-dead grammatical construction, the "passival", in which -ing participles were used passively - the example I read about was correspondence stating "our garden is putting in order" where the modern language would require "being put".

        In my English, it's still possible to say "the app needs amending" [= "needs to be amended"], or "the app's functionality needs amending" [ditto], but definitely not "the app needs its functionality amending".

    • neonsunset 42 minutes ago

      C# and F#, imperative shell, functional core :)

    • TeMPOraL 4 hours ago

      > The other way around feels nonsensical to me.

      Wonder if that's because you've learned the pattern - "functional core, imperative shell"? I feel it's not a coincidence you used the terms "shell" and "core" in your comment :).

      Myself, I'm very much in favor of it on a theoretical basis. In practice, I've been bitten by this a couple of times - keeping a purely functional core didn't play nice with the modularization and testing setup we had, using C++17 with CMake and GTest/GMock. I blame this on my inexperience, filing off sharp corners as I go along. I sure could use a detailed study of how to apply this pattern in modern C++, on an application scale, taking into account the real tradeoffs - like how to "firewall" the core from outside mess, allow tests to poke through that firewall, not end up in weird linking hell, not blow a single module into 10 tiny ones, and keep the whole thing debuggable in practice[0].

      One thing I occasionally do even in the "FP inner core" is apply the idea I picked up from Clojure's "transient" data structures: there is no difference between purely-functional, immutable, referentially transparent code, and imperative code carrying mutable state, if no one on the outside can tell. Sometimes it's just easier to make a typical imperative procedural or OOP solution and isolate it in a functional interface.

      --

      [0] - I used to like point-free coding; especially with functional style, it makes for nice, readable, highly-expressive one-liners. But each time I had to debug such code, I ended up regretting it. Now I'm starting to favor more explicit steps with intermediary variables storing results, simply to have good targets for potential breakpoints/tracepoints, and because such code is more convenient to step through. You could say I'm only doing it because the tools I use are limited, but frankly, almost all of "readable code" principles and even the shape of modern languages are driven by limitations of the tools we use, so ¯\_(ツ)_/¯.
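      The Clojure-style "transient" idea above can be sketched in TypeScript (hypothetical example): the internals mutate freely, but no caller can observe it, so the function stays referentially transparent from the outside.

```typescript
// Hypothetical sketch: imperative internals behind a functional interface.
function frequencies(words: readonly string[]): ReadonlyMap<string, number> {
  const counts = new Map<string, number>(); // locally mutable state
  for (const w of words) {
    counts.set(w, (counts.get(w) ?? 0) + 1); // plain imperative loop
  }
  return counts; // escapes only as a read-only view
}
```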

      • williamcotton an hour ago

        In F#, at least, you can step through point-free pipelines with a debugger!

  • piyush_soni 6 hours ago

    > Keep data transforms and algorithmic calculations in functional style

    What annoys me greatly, though, is kids coding various 'fancy' functional paradigms for data transformation without realizing their performance implications, still thinking they've done the 'smarter' thing by transforming the data multiple times and changing a simple loop into three or four loops. Example: Array.map.filter.map.reduce. And when you bring it up, they've learned to respond with another fancy term: "that would be premature optimization". :|
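    The shape of the complaint, in TypeScript (illustrative numbers only): each chained stage walks the data, and each map/filter allocates an intermediate array, before the next stage starts.

```typescript
const xs = [1, 2, 3, 4, 5];

// Four passes over the data, two intermediate arrays:
const chained = xs
  .map(x => x * 2)             // pass 1
  .filter(x => x > 4)          // pass 2
  .map(x => x + 1)             // pass 3
  .reduce((a, x) => a + x, 0); // pass 4

// The same result in a single pass with no intermediate allocations:
let looped = 0;
for (const x of xs) {
  const y = x * 2;
  if (y > 4) looped += y + 1;
}
```

Both produce the same value; the difference is constant-factor work and allocation, not big-O (which the subthread below gets into).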

    • eigenspace 6 hours ago

      This is just an unfortunate consequence of how map and filter are implemented via iterators.

      If you work with transducers, the map-filter-map-reduce is still just one single loop.
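      A minimal transducer sketch in TypeScript (hypothetical helper names): map and filter become reducer transformers, the pipeline is composed once, and a single reduce drives it in one loop.

```typescript
// A reducer folds an element into an accumulator.
type Reducer<A, X> = (acc: A, x: X) => A;

// mapT/filterT wrap a downstream reducer instead of producing arrays.
const mapT =
  <X, Y>(f: (x: X) => Y) =>
  <A>(r: Reducer<A, Y>): Reducer<A, X> =>
  (acc, x) => r(acc, f(x));

const filterT =
  <X>(p: (x: X) => boolean) =>
  <A>(r: Reducer<A, X>): Reducer<A, X> =>
  (acc, x) => (p(x) ? r(acc, x) : acc);

// map -> filter -> map, fused into one reducer around the final sum:
const sumReducer: Reducer<number, number> = (a, x) => a + x;
const xform = mapT((x: number) => x * 2)(
  filterT((x: number) => x > 4)(
    mapT((x: number) => x + 1)(sumReducer)));

// One pass over the input, no intermediate arrays:
const total = [1, 2, 3, 4, 5].reduce(xform, 0);
```

The composed `xform` applies all stages per element, which is why no intermediate collections are ever built.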

    • bluGill an hour ago

      That is premature pessimization. They have no idea what premature optimization is, since nobody has done that kind of optimization by hand since 1990 or so. Premature optimization is about manually unrolling loops or writing inline assembly - things any modern compiler can do for you automatically.

    • davedx 5 hours ago

      That kind of thing really depends on the language. Some of the stronger functional languages like Haskell have lazy evaluation, so that operation won't be as bad as it looks. But then you really need to fully understand the tradeoffs of lazy evaluation too.
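      A generator-based TypeScript sketch of the lazy-evaluation point (illustrative only, not how Haskell does it): stages are fused per element, and work stops as soon as the consumer has what it needs.

```typescript
// Hypothetical lazy map/filter over any iterable.
function* mapL<X, Y>(f: (x: X) => Y, xs: Iterable<X>): Generator<Y> {
  for (const x of xs) yield f(x);
}
function* filterL<X>(p: (x: X) => boolean, xs: Iterable<X>): Generator<X> {
  for (const x of xs) if (p(x)) yield x;
}

let touched = 0;
const nums = function* () {
  for (let i = 1; i <= 1_000_000; i++) { touched++; yield i; }
};

// First doubled-and-even number: only one source element is ever produced.
const first = filterL(x => x % 2 === 0, mapL(x => x * 2, nums())).next().value;
```

As the parent says, laziness has its own tradeoffs (space usage and harder-to-predict evaluation order in truly lazy languages).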

    • DeathArrow 4 hours ago

      I don't know how things are implemented in other languages but in C# 9, these operations are optimized.

      • TeMPOraL 3 hours ago

        There are ways to keep functional transformations and immutable data structures efficient. Copy-on-write, unrolling expressions into loops, etc. Proper functional languages have them built into the runtime - your clean map-reduce chain will get translated to some gnarly, state-mutating imperative code during compilation. In non-FP or mixed-paradigm languages, where functional building blocks are just regular library functions (standard or otherwise), map-reduce is exactly what it says on the tin - two loops and a lot of copying; you want it fast, you have to mostly optimize it yourself.

        In other words, you need to know which things in your language are considered to be language/compiler/runtime primitives, and which are just regular code.

      • HdS84 4 hours ago

        Most languages don't have these facilities at all - so you need to be really careful about what you are doing. This works "fine" with test data, because your test data is usually a few hundred items max. A few years back, people at our firm built all data filtering into the frontend, to keep the "backend clean". That worked fine in testing. In production with 100k rows? Not so much.

        Even in C# it depends on the LINQ provider - if you are talking to a DB, your queries should be optimized. LINQ to Objects doesn't do that, and repeated scanning can kill your performance, e.g. repeated filtering on large lists.

      • high_na_euv 2 hours ago

        Are they? LINQ is usually slower than a for loop.

        • pjc50 an hour ago

          Quite often it compiles to the same IL. Would you like to provide some godbolt examples where it's significantly different?

          • high_na_euv 20 minutes ago

            For two or so years I've been developing a library, and I remember a case where switching from something simple like First or FirstOrDefault to a for loop made a difference when measuring with BenchmarkDotNet.

            Then I found it was common knowledge that LINQ is slower, even among people on C#'s Discord.

          • neonsunset 38 minutes ago

            IL is always different since it's a high-level bytecode. The machine code output is different too. Now, with guarded devirtualization of lambdas, in many instances LINQ gets close to open-coded loops in performance, and it is very clever in selecting optimal internal iterator implementations, bypassing deep iterator nesting, having fast-paths for popular scenarios, etc. to achieve very good performance that can even outperform loops that are more naive, but unfortunately we're not there yet in terms of not having a baseline overhead like iterators in Rust. There is a selection of community libraries that achieve something close to this, however. I would say, LINQ performance today gets as close as it can to "it's no longer something to be worried about" for the vast majority of codebases.

    • kosmozaut 6 hours ago

      Why would your example be O(n³)?

      • piyush_soni 6 hours ago

        Oh yes, sorry - I meant to write 3 * O(n), which, though it doesn't change the order, is still three times the operations. The example I was remembering was doing filters 'inside' maps.

        • ryandv 3 hours ago

          So... O(n)? Leaving aside the fact that "3 * O(n)" is nonsensical and not defined, recall f(x) is O(g(x)) if there exists some real c such that f(x) is bounded above by cg(x) (for all but finitely many x). Maybe you can say that g(x) = 3n, in which case any f(x) that is O(3n) is really just O(n), because we have some c such that f(x) < c(3n) and so with d = 3c we have f(x) < dn.

          It's not the lower-order terms or constant factors we care about, but the relative rate of growth of space or time usage between algorithms of, for example, linear vs. logarithmic complexity, where the difference in the highest order dominates any other lower order terms or differences.

          What annoys me greatly is people imprecisely using language, terminology, and/or other constructs with very clearly defined meanings without realizing the semantic implications of their sloppily arranged ideas, still thinking they've done the "smarter" thing by throwing out some big-O notation. Asymptotic analysis and big-O is about comparing relative rates of growth at the extremes. If you're talking about operations or CPU or wall clock time, use those measures instead; but in those cases you would actually need to take an empirical measurement of emitted instruction count or CPU usage to prove that there is indeed a threefold increase of something, since you can't easily reason about compiler output or process scheduling decisions & current CPU load a priori.

          • piyush_soni an hour ago

            I do understand 3 * O(n) is just O(n), thanks. I was just clarifying my initial typo. However, it's still three/four times the iterations needed - and that matters in performance critical code. One is terminology, and the other is practical difference in code execution time that matters more, and thus needs to be understood better. You might not 'care about constant factors' but they do actually affect performance :).

    • Gunax 6 hours ago

      Aren't those all linear operations?

      • piyush_soni 6 hours ago

        Yes, wrote quickly without thinking. Even if it doesn't change the complexity, it's still three or four times the operations.

  • mattgreenrocks 3 days ago

    I'm less an OOP fan and more an extreme late-binding fan. It is a really nice lever to use when you need it. You don't necessarily need a language built for it if you have sufficient infrastructure in place (such as dependency injection).

    That said, FP is a great default for most code, and I try not to write any new code in languages that don't support algebraic data types. You can express things much more precisely with them.

    Languages that let you use both styles (Kotlin/Swift) are probably my favorite, despite using Haskell in production at work. Haskell punishes needing to use mutability too harshly IMO.

    • bitwize 6 hours ago

      "Late bind all the things" is pretty much what Alan Kay's idea of OO was. In particular it means that methods are invoked by name and selected based on message name by the receiving object. Any object can be sent any message; how that message is handled is determined by the object at invocation time. No vtbls or other such mechanisms used by statically typed OO languages are involved.

      Kay has spoken positively, for instance, of Erlang, which has nothing like Smalltalk's OO model.

      • DeathArrow 4 hours ago

        But doesn't late binding have performance implications? If the compiler doesn't know the type, it can't optimize.

        • TeMPOraL 4 hours ago

          It has, but that can be overcome by a thick enough runtime.

          My experience is with Common Lisp, which has a quite sophisticated object system (CLOS) with a metaobject protocol. Theoretically, pretty much everything there can be altered on the fly at runtime - classes and methods can be added, modified and deleted, inheritance hierarchy changed, method invocation rules arbitrarily altered, etc. In practice, efficient implementations like SBCL (which tries to eagerly compile everything down to machine code, including at runtime) tend to have a lot of logic for keeping track of how things currently are.

          99.9% of the time, the current shape of the object model is fixed, so it's kept optimized in advance. For example, calling (my-method my-obj) can do anything, depending on what my-method, my-obj, (class-of my-obj), etc. are at the moment - however both you and the runtime know that, right now, my-method is only defined on class-foo, which uses standard object model rules, so the runtime ensures the call is just a fixed-address jump. Define my-method for another class, the runtime will make calls to my-method a simple lookup on a tag (or something equivalent). Mess with the method combination order, or class definitions, and the runtime will redo its own book-keeping and keep the calls as efficient as possible. So you still pay for the flexibility - but mostly just-in-time, in the 0.1% of the time you invalidate some optimization, forcing the runtime to re-optimize itself.

          I don't have as much experience with Smalltalk, but I hear that the story there is similar - you usually have a fat runtime with gnarly, stateful internals that re-optimizes itself on the fly to keep the flexibility performant. It's a nice benefit of image-based languages, where the compiler is an integral part of the runtime.

        • pjc50 an hour ago

          I also have concerns about maintaining correctness in this kind of late binding environment; it's useful to be able to statically verify things up front such as through the type system.

  • tome 3 days ago

    This is strange to me because I don't see FP faltering here at all. I suppose it depends on precisely what you mean by "OOP" and "FP". Below is an example implementation in my Haskell effect system Bluefin. It defines a Logger interface (that can log a message with severity) and then instantiates it with two implementations

    https://hackage.haskell.org/package/bluefin

    1. a logger that logs to stdout

    2. a logger that logs to a file

    The file logger also brackets the opening of the file so that if abnormal termination is encountered then the file handle is guaranteed to be cleaned up. This is similar to RAII.

    I really like this solution! It's just programming against an interface, and then instantiating the interface in different ways. I think a solution using inheritance would be worse, because it would use a special language concept (inheritance) rather than a standard one (function definition).

    Perhaps this is "OOP" and not "FP"? That's fine by me! But in that case I conclude Haskell is an excellent OOP language. (I already conclude that it's an excellent imperative language.)

        {-# LANGUAGE GHC2021 #-}
        
        import Bluefin.Compound
        import Bluefin.Eff
        import Bluefin.IO
        import System.IO
        import Prelude hiding (log)
        
        newtype Logger e =
            -- Log a message with a severity
            MkLogger {log :: String -> Int -> Eff e ()}
        
        withStdoutLogger ::
          (e1 :> es) =>
          IOE e1 ->
          (forall e. Logger e -> Eff (e :& es) r) ->
          Eff es r
        withStdoutLogger io k =
          useImplIn
            k
            MkLogger
              { log =
                  \msg sev ->
                    effIO io (putStrLn (show sev ++ ": " ++ msg))
              }
        
        withFileLogger ::
          (e1 :> es) =>
          FilePath ->
          IOE e1 ->
          (forall e. Logger e -> Eff (e :& es) r) ->
          Eff es r
        withFileLogger fp io k =
          bracket
            (effIO io (openFile fp AppendMode))
            (effIO io . hClose)
            ( \handle -> do
                useImplIn
                  k
                  MkLogger
                    { log =
                        \msg sev ->
                          effIO io (hPutStrLn handle (show sev ++ ": " ++ msg))
                    }
            )
    • throwawaymaths 3 days ago

      That's fine, but suppose you wanted to swap out loggers (or add an extra logger target) at runtime. Maybe someone wants to transiently hook in an observer by logging into a webpage that shoots the logs at that user's browser. I don't know enough Haskell (and it's hard enough to read) that I can't tell whether this code can handle that case.

      • ndriscoll 3 days ago

        Swapping out loggers at runtime is a separate concern (inheritance and OOP don't make that any easier), and is more to do with control flow or whether there's an indirection when accessing the logger.

        • kergonath 6 hours ago

          Polymorphism definitely makes it much easier, and it is a core concept in OOP. I am not advocating for or against OOP, but “swapping behaviour at runtime” is one of the things it’s good at.

          • josephg 5 hours ago

            If you want to be able to swap out implementations, personally I’d much rather have Rust’s trait system (or Java’s or TypeScript’s interfaces) than what “OO languages” like C++ give you. I basically never want a strict tree of object types with inheritance.

            This isn’t an OO idea. I’m pretty sure FP languages like Haskell or OCaml have something very similar.

      • tome 3 days ago

        Sure, it could handle that case. You'd have to write a `withSwappableLogger` function which produces a logger that listens for updates telling it where to log to in future.
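        A TypeScript sketch of that idea (the `makeSwappableLogger` name is hypothetical): the logger the rest of the code sees is fixed, while the target it forwards to is a piece of state consulted on every call.

```typescript
type Logger = (msg: string, severity: number) => void;

function makeSwappableLogger(initial: Logger): {
  logger: Logger;
  swap: (l: Logger) => void;
} {
  let current = initial; // the one piece of mutable state
  return {
    logger: (msg, sev) => current(msg, sev), // indirection on every call
    swap: l => { current = l; },             // runtime swap
  };
}
```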

  • ndriscoll 3 days ago

    The example is a single-abstract-method class, i.e. a lambda function. You can define it as

      type Logger = (String, Severity) => IO[Unit]

      def empty: Logger = (_, _) => IO.unit

      def aboveSeverity(l: Logger)(minSev: Severity): Logger =
        (s, sev) => if sev >= minSev then l(s, sev) else IO.unit

    Or curried, if you like. In Scala, you might put these on a companion object so that you can give Logger.empty to your tests, etc. You don't need the type to remember which kind of logger you had. The only thing you care about is that you can call it.
  • the_gipsy 4 hours ago

    Elm has taught me that pure FP can work beautifully. But it didn't quite catch on, which says something (though not necessarily that the approach is wrong), and it's also clearly not suited for every type of programming (e.g. systems programming or other low-level stuff).

    Pure OOP aka Java has been a mistake, even though it's probably the most popular.

    • Nursie an hour ago

      > Pure OOP aka Java has been a mistake, even though it's probably the most popular.

      I mean, Java is hardly 'pure' OOP, and these days it contains a lot of functional-style features.

      I've come to the opinion that sticking to paradigms over practicality is often the problem, and that a language that helps you get things done regardless is preferable.

  • throwawaymaths 3 days ago

    How do functional styles falter? Erlang's telemetry/logger systems are absolutely fantastic.

    • whizzter 3 days ago

      While Erlang itself is pure, in practice it seems like a lot of mutation and state concerns are tucked away in other processes. Nothing wrong with that, and it sidesteps the purity issues that the Haskell example in the article presents.

      • tome 3 days ago

        The supposed problems in the Haskell example presented in the article have nothing to do with purity. Logging is by definition an effect. Haskell supports effects just fine. (I'd personally say better than any other language I know.)

      • throwawaymaths 3 days ago

        Is this No True Scotsman-ing about what constitutes FP? More to the point of the original article, Erlang is assuredly not OOP in the modern sense (no matter how much Joe Armstrong tried to rope Java shops in by saying that actors are actually the true objects).

        Also:

        > Erlang itself is pure

        Very not true. Erlang is extremely impure FP, and that's a big part of what makes it what it is.

        • whizzter 2 days ago

          Right, and that's my point about practical languages that promote a functional style being a better choice. The requirement for purity in Haskell (due to lazy evaluation) makes anything but purity a problem; it sometimes seems that, just because things are possible to do purely, adherents treat that as the only way to do things.

  • librasteve 3 days ago

    https://raku.org is a great language if you want all the paradigms in a smooth way

    • kamaal 2 days ago

      How is Perl 6/Rakudo coming along these days?

      I mean in terms of overall spec compliance and production use cases.

      I'm guessing larger adoption will still be low unless Rakudo can run CPAN modules as-is.

      • librasteve 2 days ago

        Raku has a healthy and stable community that is making steady progress in developing the language. Join us on IRC or Discord (see https://raku.org for details) to get a feel for the activity level. For me it is an opportunity to hang out with very deeply knowledgeable compiler-core folk, and I have learned tons.

        The big effort going into v6.e (i.e. the 5th major release of Perl 6 == 6.e) is RakuAST, which is now in PREVIEW and is a big boost for writing Slangs (Raku sub-languages), since you can now use the built-in Grammar parsing to generate AST code - and it is the precursor to macros and improved JIT optimisation.

        The Raku spec is actually the ROAST test suite https://github.com/Raku/roast and Raku has been compliant to this since 6.c which was the first production release.

        Raku can run Perl5 CPAN modules (Inline::Perl5) and Python modules (Inline::Python) out of the box and it has deep FFI C NativeCall facilities. There are also a couple of Raku + Rust exemplars such as Dan::Polars (note: I am the author).

        Raku also has some very nice modules - Red is an ORM that leverages traits to make OO/ORM seamless, Cro is a web server framework that uses concurrent features such as Supplies to enable pluggable middleware (think WebSockets).

        As to larger adoption, I would say that this is still slow - there are some reputational concerns around the Perl to Raku transition and some points have been made about slow execution speed (although I would say that Raku is no slower than Python / Ruby were at this point in the development curve and of course there is no GIL limitation and many built in features for multi core such as hypers).

        At the moment, I would say the "killer app" for Raku is the use of Grammars in conjunction with LLMs (language meets language) - take a look at the work of antononcube for some great use cases... https://raku.land/zef:antononcube/LLM::Functions

  • f1shy an hour ago

    For me it is hard to understand the opposition of OOP to functional. The two are simply not mutually exclusive, and neither excludes any other paradigm.

    • mejutoco an hour ago

      OOP puts state and behaviour together (tightly coupled) in objects. FP, in a language with expressive types, has well-defined data-only structures (types) and functions that operate on those structures, so state representation and behavior are separated.

      For me, OOP can be done right, but FP is intrinsically better for correctness (it incentivizes correctness).
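      A small TypeScript contrast of the two shapes (hypothetical names):

```typescript
// OOP: the balance and the rules that change it live together in one object.
class Account {
  private balance: number;
  constructor(balance: number) { this.balance = balance; }
  deposit(amount: number): void { this.balance += amount; } // mutates in place
  getBalance(): number { return this.balance; }
}

// FP: data is an inert record; behaviour is a pure function returning new data.
type AccountData = { readonly balance: number };
const deposit = (a: AccountData, amount: number): AccountData =>
  ({ balance: a.balance + amount });
```

The FP version returns new data and leaves its input alone, which is what makes it easy to reason about and test.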

  • mrkeen 3 days ago

    Semi-mutable, semi-private, semi-nullable, semi-deterministic, and semi-statically-typed. Best of both camps.

  • valcron1000 3 days ago

    > The middle road is simply a better engineering option, use a practical language that supports both paradigms.

    I recommend reading "The curse of the excluded middle" paper by Erik Meijer on why this approach does not work.

    • raincole 3 days ago

      It was from 2014.

      In the past decade, we've seen "the middle way" win.

      - Every mainstream OOP language now supports passing functions as parameters.

      - Optional type annotation became mainstream, e.g. Typescript and modern Python.

      • the_af 3 days ago

        > Optional type annotation became mainstream, e.g. Typescript and modern Python.

        It became mainstream indeed but, in my opinion, it's still a mess. Optional type annotations seem like the worst of every possible world to me, and they lead unprincipled [1] developers to just not use them, or to use them so inconsistently it doesn't matter and we revert to "no type annotations at all".

        Just my experience, of course.

        ----

        [1] which, in the wild, is the most common type of developer. And really, features should help with the most common developer, not the most principled ones.

        • mike_hearn 3 days ago

          TypeScript/Python aren't good examples of mainstream adoption of Haskell-related ideas. Haskell always has static types but has inference. A better example would be Kotlin- or Swift-style languages, where type inference is much more pervasive than in the past, albeit deliberately limited so as not to be as extensive as in Haskell.

          • the_af 3 days ago

            I don't think either Swift or Kotlin have "optional type annotations" of the sort mentioned in the comment I was replying to...

            In this context "optional" means "you can either use or not use type annotations", which is not related to type inference [1] but is a way to gradually add types, usually in a half-baked, unsatisfying way.

            [1] because unlike with Kotlin or Swift, there's no inference when you leave out the annotations, but instead you "switch off" static typechecking for portions of the code you don't want to annotate.

            • mike_hearn 2 days ago

              Yes I know, but I think we're discussing the extent to which ideas from FP languages leaked into the mainstream? Optional-as-in-dynamic typing didn't come from the FP world, whereas optional-as-in-inferred did.

      • skitter 2 hours ago

        Optional type annotations are used to improve dynamic languages by adding static typing; we don't see static languages adding untyped places (there's type inference, but there's nothing dynamic about that).

      • mrkeen 3 days ago

        In Haskell, you pay the penalty of using map-reduce style semantics ("why can't I just use for-loops which are so much simpler?") instead of mutation. But the benefit is you get your nice determinism properties.

        The middle-way languages adopted the penalties, but not the benefits. So the real winner was "lose-lose".

        • raincole 3 days ago

          No. The middle-way languages didn't adopt functional programming's properties; they adopted the syntax. The syntax alone, even without immutability and the other properties of functional programming, is extremely useful for eliminating boilerplate like the Observer and Strategy patterns.
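          For example, the Strategy pattern collapses to a function parameter once functions are first-class (hypothetical pricing example in TypeScript):

```typescript
// What would classically be a Strategy interface plus one class per
// strategy plus injection wiring is just a function type and two lambdas.
type Pricing = (base: number) => number;

const fullPrice: Pricing = base => base;
const holidaySale: Pricing = base => base * 0.8;

function checkout(base: number, pricing: Pricing): number {
  return Math.round(pricing(base));
}
```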

    • red_admiral 3 days ago

      Functional programming has found its way into Java, where it works OK for some tasks; Scheme/Kotlin might advertise themselves as FP in the sense of "even more FP-ish than Java", but they're really mixed-paradigm too, and not just because of backwards compatibility with other JVM libraries.

      We've made the middle road work. It is not the magical 10x speed upgrade that some managers hoped for if you just sprinkle some FP on top of the project like icing, but it works quite well in practice because you have the right tools for dealing with different kinds of subproblems.

      Meijer's paper is what I would call "FP maximalist". (Or "Haskell maximalist", as even OCaml manages without hiding all IO away in monads.) It's a defensible opinion to have, but it doesn't match any of the work environments I have experience with.

    • whizzter 3 days ago

      Most of the "horrible" examples in that paper are uncommon/unrealistic constructs to prove a point, yes there are real footguns in that are analogous to them but even with junior developers around I've yet to see many in practice.

  • pseudonamed 3 days ago

    The middle of the road option in a language like Clojure is almost always functional with a small amount of mutable state.

    Programming styles are just that, style; Language =/= style.

    • heresie-dabord an hour ago

      Language and "style" can be hard to disassociate, though. Unless that kind of flexibility is part of the design ethos of the language.

      The word "style" generally means a superficial characteristic of appearance, but "style" in a discussion of programming languages could mean the appearance of the language, the loop/state paradigm, or the design ethos of the language itself.

      For example, the design ethos of Java is entirely bound to OOP. Python on the other hand has a celebrated syntactic style, an OOP foundation, and a flexible loop/state paradigm.

      Python has flaws, of course. But its development community has paid attention to syntactic style and to the loop/state paradigm. Java became a successful corporate language; Python entered the corporation because it has become a popular language.

  • throwaway2037 6 hours ago

    I like your post. There is a YouTube video of a dev doing a long breakdown of why FP "lost" to OOP. In reality, it didn't lose... or rather, it won by using the backdoor: OOP languages integrated many FP features into themselves. As a result, you can do lots of FP-ish coding in modern OOP languages.

    • davedx 5 hours ago

      How on Earth did FP "lose"? Most systems I work on these days use TypeScript, it's taken over huge parts of the industry, and OOP in TS is like an after thought that most people forget is even there at all. All the TS codebases I work on are 99% functional, with most JS/TS devs leaning into it more rather than less.

      • josephg 5 hours ago

        Excel is still arguably the world’s most popular programming language. And excel is much more functional than imperative.

        • throwaway2037 4 hours ago

          But VBA is OOP/imperative. And most Excel apps that I know use a combination of FP in the sheet, and OOP/imperative in the VBA.

  • ryandv 3 hours ago

    The characterization of FP-style approaches to the problem makes me question whether it is a canonical example at all; rather I would regard it as a strawman of FP style, or otherwise not idiomatic or awkwardly designed. If Haskell is indeed unergonomic for the purpose of supplying multiple different logger implementations, it would seem that it would also be unergonomic for the purpose of supplying multiple different exception/error handling implementations.

    But... it's not? See `catchE` [0] from Control.Monad.Trans.Except:

        catchE :: Monad m => ExceptT e m a -> (e -> ExceptT e' m a) -> ExceptT e' m a
    
    ... where the callsite-provided exception handler is the second parameter to the function, which can do basically anything (especially if the monad m is MonadIO). Hardly unergonomic or overcomplicated in my view, and no effects algebras or even type declarations in sight.

    There is already a canonical example and it is the "Expression Problem," [1], where in FP "you optimize for extensibility of the number of operations in your model, at the cost of reduced developer ergonomics when attempting to extend the number of types you can operate upon. In the latter, object-oriented approach, you have the inverse tradeoff." [2]

    [0] https://hackage.haskell.org/package/transformers-0.6.1.2/doc...

    [1] https://wiki.c2.com/?ExpressionProblem

    [2] https://news.ycombinator.com/item?id=41450851

  • Nathanael_M 7 hours ago

    Would you have any advice or resources for learning paradigms and practical paradigm usage?

  • epolanski 2 hours ago

    I still feel like one missing crucial point is that you can mix both styles effectively.

    Scala, Elixir and some lisps are languages that make this nice.

    TypeScript has libraries like effect-ts where you can leverage OOP where it makes sense too.

kerand 3 days ago

The GoF book did a lot of damage. I've finally read it and was amazed that the entire book is in fact about writing GUIs, which is just one tiny part of programming.

Several patterns are trivial, others are very similar and are just a linked list of objects that are searched for performing some action.

The composition over inheritance meme in the book does not make things easier (there is no "composition" going on anyway, it is just delegation).

Objects themselves for resource cleanup like RAII are fine of course.

  • masfoobar 3 hours ago

    The evolution of OOP gives me fond memories.

    I feel very lucky (in many ways) that I started with VB5 and Turbo C/Pascal. Sure, you can do OOP in VB6 but the books I read did not touch this area.

    Then I tried Java. I hated it! This would have been around 1998.

    The Java books I was reading were huge, and half of each was not really about Java the language but about OOP concepts. Some of it was inheritance hell! I remember looking at code that was just bloat... when in C it could be a couple of structs and 10 functions.

    Then "Design Patterns" was thrown around in Job adverts. I had to bite the bullet and learn it to get my foot in the door. I was 18 at the time.

    Once I got my foot in the door and gained the experience needed to get good as a developer, you soon get a handle on what works and what doesn't. Am I suggesting OOP is bad? No.. it has its place. I found that GoF is not to be taken as seriously as it is pushed.

    It was not long before I came full circle, coding further away from the typical OOP mindset. I felt like the odd one out when I tried to explain this to co-workers, but once we reached the 2010s and "Data-Oriented Design" was coined, I realised my mindset is shared by many.

    Personally -- and.. yes.. this is controversial.. I think OOP set us back 10 years of programming progress. Not because of OOP itself, but because of how large companies jumped on the bandwagon and assured developers that "this is the right direction"; so much effort went into creating large.. bloated.. monolithic code.

    I felt like Dlang was an interesting step forward but, sadly, it is a language that tries to be everything. Now with Rust, Zig and Odin, I do believe OOP will slowly lose popularity, but it will be a very, very slow decline.

  • rileymat2 3 days ago

    Some people will say that the book is not a training guide but a naming guide for patterns (or antipatterns) people already were using with success.

  • coliveira 3 days ago

    Agreed, OOP became 10 times worse after the GoF book. People started to see these patterns everywhere, and mutated fine looking code to conform to that expectation. Just compare Java classes and interfaces when the language was created (most of that based on previous experience with Smalltalk) and the hot mess that was invented after GoF was released.

  • marcosdumay 3 days ago

    OOP is entirely about writing GUIs. (Well, the C++ idea of OOP.)

    It's a much larger part of programming than most people give credit for, but yeah, it's only a part.

    Anyway, the GoF book is about getting good abstractions from the 90s limitations in OOP languages. Nobody should read them now, except for historical reference.

    • AnimalMuppet 3 days ago

      > OOP is entirely about writing GUIs. (Well, the C++ idea of OOP.)

      Nope.

      One thing I like to do in C++ is, on an embedded system, wrap an object around a hardware resource. It means that no other code needs to know about the details of talking to that hardware. Also, if there are multithreading concerns, it leaves a very small set of places to worry about mutexes.
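      A minimal sketch of that pattern (class name and register layout hypothetical, with a plain variable standing in for a memory-mapped register): the class is the only code that touches the "hardware", and the mutex lives in exactly one place.

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <mutex>

      // Simulated device register. On real hardware this would be a
      // volatile pointer to a fixed memory-mapped address.
      static uint32_t fake_reg = 0;

      class UartController {
      public:
          void enable()  { std::lock_guard<std::mutex> g(m_); fake_reg |= 0x1u; }
          void disable() { std::lock_guard<std::mutex> g(m_); fake_reg &= ~0x1u; }
          bool enabled() { std::lock_guard<std::mutex> g(m_); return fake_reg & 0x1u; }
      private:
          std::mutex m_;  // the only locking anyone has to reason about
      };

      int main() {
          UartController uart;
          uart.enable();
          assert(uart.enabled());
          uart.disable();
          assert(!uart.enabled());
      }
      ```
      
      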

      • marcosdumay 3 days ago

        Ok, so you like to use OOP to emulate a module system.

        Yeah, plenty of people like that too. That doesn't make it an application for OOP. The fact that the 90s killed the modular languages is quite a bummer, as neither modern FP nor C++ style OOP have anything nearly as useful.

        • AnimalMuppet 3 days ago

          Shallow dismissals aside, do you have anything of substance to say? I'd especially like to see your justification for saying "That doesn't make it an application for OOP".

          • marcosdumay 3 days ago

            FP languages will give you the exact same abstraction (not that you'd want to use any on a device driver). And before OOP was hyped and pushed everywhere, you could already get the same thing, but better (not on C, of course).

            The fact that you need objects to get this on your current language is a bug, not a feature.

            (There is the one version of OOP that most people call "multi-agent systems" that would bring very good abstractions to a driver, but it's not C++'s OOP, and you'd need a heavyweight library to get it there.)

            • DeathArrow 4 hours ago

              What modular languages with "multi-agent systems" are you talking about? Please give some examples. The only domain where I saw "multi-agent systems" paradigm being used is AI, not programming languages.

          • loup-vaillant 3 hours ago

            > One thing I like to do in C++ is, on an embedded system, wrap an object around a hardware resource. It means that no other code needs to know about the details of talking to that hardware.

            That, my friend, is modular programming. It is not specific to hardware, and not at all exclusive to OOP. In fact, C can do it just as easily as C++:

              typedef struct {
                  // Private stuff, do not rely on it
                  int   bar;
                  float baz;
              } foo_ctx;
            
              int  foo_init      (foo_ctx *ctx);
              void foo_cleanup   (foo_ctx *ctx);
              int  foo_frobnicate(foo_ctx *ctx, int frobnicator);
            
            And if you don't like putting private data in the header file, we can have a pointer to implementation and hide it even better than idiomatic C++ (I don't like this much, because this requires allocations the above avoids):

              struct foo_ctx;  // forward declare
              typedef struct foo_ctx foo_ctx;
            
              foo_ctx *foo_new       ();
              void     foo_delete    (foo_ctx *ctx);
              int      foo_frobnicate(foo_ctx *ctx, int frobnicator);
            
            Some would say I'm just emulating OOP in C. Except I'm not. There's no class in there, just a data structure with opaque data in it, and functions that operate on it.

            ---

            I find it especially tiring when people use classes to do modular programming, call it "OOP", and assume it doesn't exist elsewhere, or was an OOP invention. OOP coopted modules, and put light syntax sugar to call it its own.

            And I can already hear you scream that I'm "emulating OOP in C", though what I'm actually doing is emulating modular programming in C. And if we insisted on making it look like C++, there are indeed two missing ingredients.

            The first one would be overloading on the type of the first argument. With that we would no longer care about name clashing, and could write the module like this:

              typedef struct { int bar;  float baz; } foo;
              int  init      (foo *this);
              void cleanup   (foo *this);
              int  frobnicate(foo *this, int frobnicator);
            
            The second missing ingredient would be syntax sugar so `f(x, y)` could be written `x.f(y)` instead. That way the calling code could really feel OOP, even though it's little more than a thin layer of syntax sugar.

            I have even done it myself. Wrote a scripting language for a client who wanted OOP. Instead I gave them function overloading and universal call syntax, and they didn't notice a thing.

            ---

            When you think about what OOP really is, there's not much. There's the idea of instantiation, where instead of using global variables you define a data structure and instantiate it under one local name. OOP made this popular, though I'm not sure it's not another cooptation. Any language that has user-defined types can do that.

            Then there's inheritance, which is mostly a mistake (in my experience inheritance is best avoided unless absolutely necessary — and it is almost never necessary).

            And finally there's subtyping, which is nearly interchangeable with closures. It's a bit more powerful, but it is also a bit more cumbersome, and the added power is rarely needed in my experience.
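            A rough illustration of that interchangeability (all names hypothetical), with `std::function` standing in for a one-method interface — the closure captures state much like an object holds fields:

            ```cpp
            #include <cassert>
            #include <functional>
            #include <string>

            // A one-method interface...
            struct Greeter {
                virtual std::string greet() = 0;
                virtual ~Greeter() = default;
            };

            // ...is nearly interchangeable with a closure type:
            using GreetFn = std::function<std::string()>;

            std::string call(const GreetFn& g) { return g(); }

            int main() {
                std::string name = "world";
                // The lambda captures `name` the way an object would store a field.
                GreetFn g = [&] { return "hello " + name; };
                assert(call(g) == "hello world");
            }
            ```
            
            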

            ---

            Anyway, when one repeats the above often enough, there comes a time when one has had enough and just writes a quick dismissal. When it comes down to it, the conclusion is fairly simple:

            OOP ain't so hot.

amluto 3 days ago

This article describes the OOP approach of changing, say, a Logger to an abstract class with a default implementation, without needing to modify code that uses it, and also describes this as something that could be done in C++. But this doesn’t work at all in C++ if there is ever a Logger on the stack!

    Logger l;
That’s a Logger, and it will not magically become a subclass.

And changing this to go through pointers everywhere and to use virtual functions, in C++, is not very performant. A good JIT compiler may be able to effectively devirtualize it, but C++ compilers are unlikely to be able to do this effectively.

  • MathMonkeyMan 7 hours ago

    In the case of a Logger, though, it is likely passed around by reference or pointer. So users of the Logger (reference/pointer) do not have to change if its definition changes to that of an interface.

    The "Server" class, or whatever instantiates the Logger, will have to change, but that might not be such a big deal.

    So yes, the author was wrong about not having to modify any code, but likely you won't have to modify most code.

userbinator 3 days ago

In moderation, yes. The problem with OOP, like all other paradigms that came before, is applying it dogmatically and excessively, which always happens when there's a lot of hype, novelty, and $$$ to be made.

  • hk1337 3 days ago

    I agree but "moderation" isn't the best word. OOP is good when used appropriately and not when you fixate that everything has to be within the OOP structure.

    It's the same thing with languages, IMO. People try to shoehorn tasks into a language when it would be better off in another language.

    • loup-vaillant 3 hours ago

      > OOP is good when used appropriately

      What are its appropriate uses? The more I program the fewer I see.

  • faizshah 3 days ago

    I think bad OOP is still better than bad functional code. Having seen what happens when a React codebase reaches >100k LOC, are we really any better off than with a class hierarchy?

    The OOP model fits nicely into most people's heads because of the analogy to the real world, and OOP implementations generally seem to give better signals when you are doing something wrong (say, through exception hierarchies), whereas functional implementations mainly rely on human expert code reviewers and linters to give negative signals.

    A lot of the time you don’t realize your codebase is bad until “it’s just a button how hard can it be” somehow takes 3 months and 4 rollbacks.

    At least this is what I have observed working on large codebases.

    • gorjusborg 3 days ago

      > The OOP model fits nicely into most people’s heads because of the analogy to the real world

      It also has been the default programming language paradigm for a while now, which I think explains why experienced programmers can cling to it despite its drawbacks. Try to teach object-oriented programming to a beginner and it becomes clear that it isn't as natural as you are suggesting.

      > generally seem to give better signals when you are doing something wrong.

      Better than what? Functional programming is a far simpler abstraction: a function. Data in, data out. That's about as simple as it gets. Also, culturally, functional programmers generally understand the downside of distributed state, and actively try to minimize it. That's a good thing.

      Bad code is bad code, but OOP gives you unique weapons to hurt yourself with. Inheritance allows one to change an unbounded number of objects without you even knowing. Modeling everything as a collection of objects with state tends to lead to programs that need to synchronize distributed state rather than a central state that is easier to keep consistent.

    • wiseowise 5 hours ago

      > Having seen what happens when a React codebase reaches >100k LOC are we really any better off than a class hierarchy?

      I’m not sure if you correctly understand what functional programming is.

      Check out Grokking simplicity, great book.

    • DeathArrow 4 hours ago

      > The OOP model fits nicely into most people’s heads because of the analogy to the real world

      Nothing in real world resembles OOP.

    • veidelis 3 days ago

      I assume you're talking about React's hooks. That is not functional programming.

    • Fire-Dragon-DoL 2 days ago

      That's because the programming language used is not FP, even if people pretend to.

      Js is still stateful everywhere, even where you don't want it to be

o_nate 3 days ago

I think it's important to be aware of functional style and the weaknesses of OOP, so you can write OOP code that avoids the worst pitfalls. Classes still remain a nice way to organize code, but you should try to make them immutable if you can.

  • DeathArrow 4 hours ago

    > Classes still remain a nice way to organize code

    Using classes as code containers doesn't make your code OOP. I use C# which demands most code being put inside classes, but my code isn't OOP because I seldom use inheritance, encapsulation or method overriding.

raincole 3 days ago

OOP is not fundamentally bad or good. But I had some first-hand experience about how it's taught "wrong".

The following might sound ridiculous, but I swear I'm not making them up:

- In my high school, students in their "Computer & Information 101" class were asked to explain what polymorphism is. Most of those students had zero programming experience at the time.

- In my sophomore year (CS major), students were asked to finish a mini game "with design patterns" and explain which design patterns they used. For most of those students, that was the first time they wrote a program with more than 300 LoC. Before that, all the assignments they had seen were "leetcode-like", like implementing the Sieve of Eratosthenes in C.

  • whstl 3 days ago

    Yes. OOP pedagogy is completely messed up.

    First, examples are often impractical and divorced from reality. The canonical meme examples of "Dog and Cat inherit from Animal" or "Car inherits from Vehicle" are not really applicable to veterinary clinic software or self-driving cars. People use those kinds of examples to say that OOP is good at modeling reality, but business software is 90% of the time better served by a different level of abstraction.

    Second, a lot of descriptive information is passed off as prescriptive. Patterns are a good example of how shoehorning concepts into a piece of software, without the knowledge to come up with them yourself, can make it worse, and yet teachers spend whole semesters dedicated to them.

    Third, inheritance is still being taught as a fundamental part of OOP from the beginning in every curriculum I have seen, only to have pretty much everyone else saying “prefer composition” down the line, after the damage is already done.

    • odyssey7 an hour ago

      It’s possible that OOP is taught in universities because professors who grew up with it already know how to fill a semester with this sort of content. The tail inheriting the dog.

      Of all the topics that CS programs teach, it makes intuitive sense that the practical ones in software engineering are the ones most likely to be taught in a way which is irrelevant or backwards. Academia is good at academic things, but its members are typically there in part because they do not have a calling towards practice.

      Perhaps the practical courses shouldn’t exist at all, but it’s tempting to marry the department with industry in some way. We don’t require philosophy majors to take courses in legal practice or whatever job they might take after graduation, because education is one thing and a career is another.

    • simulo 5 hours ago

      These examples (dogs,cars...) are badly used in most programming books, but they can make sense if you show them in context of a small game or whatever else actually draws you an (inter) active virtual car or animal (which made OOP click for me, using processing). Printing "meow" and "woof" in the terminal, however, only makes sense as a demonstration if you already know what it should demonstrate.

      • whstl 15 minutes ago

        Sure, but for games we already have a better technique for doing it: runtime composition.
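        A minimal sketch of what runtime composition looks like (component names hypothetical): an entity is a bag of components rather than a node in an inheritance tree, so behavior can be mixed and matched at runtime.

        ```cpp
        #include <cassert>
        #include <memory>
        #include <string>
        #include <vector>

        struct Component {
            virtual std::string update() = 0;
            virtual ~Component() = default;
        };
        struct Grabbable : Component { std::string update() override { return "grab"; } };
        struct Physics   : Component { std::string update() override { return "fall"; } };

        struct Entity {
            std::vector<std::unique_ptr<Component>> components;
            std::vector<std::string> tick() {
                std::vector<std::string> out;
                for (auto& c : components) out.push_back(c->update());
                return out;
            }
        };

        int main() {
            Entity chair;  // a "chair" is whatever components it carries
            chair.components.push_back(std::make_unique<Grabbable>());
            chair.components.push_back(std::make_unique<Physics>());
            assert(chair.tick() == (std::vector<std::string>{"grab", "fall"}));
        }
        ```
        
        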

    • mike_hearn 3 days ago

      It's pretty good at modelling reality and such hierarchies are useful. Look at the code of software that models reality:

      https://dev.epicgames.com/documentation/en-us/unreal-engine/...

      The reason we don't tend to see such hierarchies used in clinic software is that RDBMS engines aren't OOP and don't model inheritance hierarchies well.

      Self driving cars are big piles of C++ so probably have plenty of OOP, albeit given the ML bent probably more for dealing with UI and sensors than modelling the nearby landscape.

      Inheritance is taught because most real OOP codebases use it extensively and it works OK. If you don't understand inheritance you can't understand the standard libraries of most OO languages, most UI toolkits etc.

      • loup-vaillant 3 hours ago

        > It's pretty good at modelling reality and such hierarchies are useful.

        There's just one problem: code should not deal with a model of the world. It should deal with a model of the data.

        A chair is a chair, you'd say? Well, it depends. Is it a background texture you'd never get close enough to notice is flat? Is it a static mesh that never moves and could be considered part of the ground itself? Is it something you can grab and displace? Can it be swung like a club, or thrown like a rock, and do damage if it hits you or an NPC?

        Depending on what you want to do with that chair, the data describing it will be very different. And if you have any performance constraint that data will likely be put in very different structures or systems, from landscapes to two-handed weapons. The very concept of "chair" at this point is mostly moot.

        > Inheritance is taught because most real OOP codebases use it extensively and it works OK.

        Avoiding inheritance would have worked better in most such codebases. I have seen things…

        > If you don't understand inheritance you can't understand the standard libraries of most OO languages, most UI toolkits etc.

        Any item behind "etc"? Because so far those two are the only example of inheritance being kind of mandatory. And I'm not even sure about standard libraries, the C++ STL is very light on inheritance, and I see very little of it in Python (at least as a user). I guess Java makes heavy use of it.

      • whstl 2 days ago

        I'd argue that this is an artifact of history at this point. This was the popular style when Unreal was made, and that's why it's there. Since Unreal is an old piece of software that powers older games, this code won't get removed, for compatibility's sake.

        The industry (and this includes Unreal, albeit slowly) has been moving to runtime composition for a reason: it is simpler and more flexible. And much better at modeling reality.

        Your last paragraph pretty much sums it up: the main reason at this point for teaching implementation inheritance is legacy code bases. It can be a valid technique (I personally enjoy the Template Method pattern, although it is a maligned pattern today), but it does cause more problems than it solves.

        • mike_hearn 2 days ago

          Heh, there's nothing legacy about Unreal. It's the premier game engine in the world today and actively maintained by hundreds of developers. They're now pushing into film and do new releases regularly, often breaking backwards compatibility when they do. Where is this move away from OOP and inheritance? Even newly developed features like Nanite use it:

          https://dev.epicgames.com/documentation/en-us/unreal-engine/...

          But more importantly, is this take falsifiable? What does "old" or "legacy" mean? People have been pushing this line for at least 15 years here on HN, yet what we see in the most well funded and actively maintained codebases is lots and lots of inheritance, with no efforts to remove it. Not just Unreal but also Chrome, MS Office, iOS, Android, Java, and more, all use this technique with no ill effects as far as anyone can tell. When the maintainers talk about what issues they face and are putting refactoring efforts into, inheritance or OOP never seem to be on the list. In the Java case it's actually the opposite, they like to complain about people violating OOP encapsulation and want to make enforcement stricter. Meanwhile heavily hyped successors that lacked it, like Haskell, have vanished without a trace, leaving not even one widely used program in their wake.

          What would it take to falsify the claim that inheritance is a legacy technique? Because I see no real evidence of it. Every codebase I've worked on has used it without anyone remarking on that fact, and it didn't seem to cause issues more often than other design patterns.

          • kryptiskt 3 hours ago

            > Meanwhile heavily hyped successors that lacked it, like Haskell, have vanished without a trace, leaving not even one widely used program in their wake.

            Haskell still is a thing, and Pandoc and ShellCheck are widely used; if you aren't using them, you're missing out.

            And Haskell impressed Tim Sweeney enough that a whole bunch of Haskell people are working on Epic's Verse language.

          • loup-vaillant 3 hours ago

            > What would it take to falsify the claim that inheritance is a legacy technique?

            A demonstration that inheritance is a good technique. Which it is not. It hurts locality of behaviour, and that's bad because it increases the rate of mistakes. There have been studies about this.

          • the-smug-one 5 hours ago

            > Meanwhile heavily hyped successors that lacked it, like Haskell, have vanished without a trace, leaving not even one widely used program in their wake.

            Do you think that inheritance had something to do with that?

          • whstl 2 days ago

            A programming style isn't automatically "good" just because it's financially and technically unfeasible to migrate to something else overnight.

            The reason Unreal uses inheritance is because this is what people did in 1998. The reason it can't stop using is because it's too late to change. There's nothing more to it.

    • superxpro12 3 days ago

      I feel like if these classes focused on where objects can shine, like with dependency injection, we'd feel a lot different about things. But instead we focus on modelling the problem without demonstrating how to solve practical issues, and now we're left with this tool that no one knows how to use much beyond a CS203 exam.

      • DeathArrow 4 hours ago

        >I feel like if these classes focused on where objects can shine, like with dependency injection

        Dependency injection is not OOP. When used in an OOP context, it just solves an issue created by OOP. In an OOP context you can't call code from another object unless you have a reference to it. So you either instantiate the object or use DI to supply an object reference so you can call a method.

        In contrast to that, if you have functions as first class citizens or your code resides in a static class, you seldom need DI because you can call the function or method from wherever you need.

      • whstl 3 days ago

        Indeed. Dependency injection, interfaces, runtime composition, passing objects/lambdas for configuration/changing behavior (Strategy pattern).

        Plus teaching proper separation of concerns rather than the "one line per method" from Clean Code.

        All this would lead to better software than the "Dog inherits from Cat" bs.
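        For instance, a strategy can be injected into the constructor as a plain function, which also makes the class trivial to test with a stub (all names here are hypothetical):

        ```cpp
        #include <cassert>
        #include <functional>
        #include <string>
        #include <utility>

        // Dependency injection with the "strategy" as a constructor-injected function.
        class Notifier {
        public:
            explicit Notifier(std::function<std::string(const std::string&)> send)
                : send_(std::move(send)) {}
            std::string notify(const std::string& msg) { return send_(msg); }
        private:
            std::function<std::string(const std::string&)> send_;  // injected behavior
        };

        int main() {
            // Production might inject a real email sender; a test injects a stub.
            Notifier n([](const std::string& m) { return "sent:" + m; });
            assert(n.notify("hi") == "sent:hi");
        }
        ```
        
        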

  • kjs3 3 days ago

    You had bad teachers which says nothing about the paradigm or how it's taught generally.

    When I did my CS classes (at the beginning of the 'OOP will solve all the problems' era), we had excellent introductions to encapsulation and polymorphism and the rest with detailed examples in (early, cfront) C++ and Modula-2 & Ada. But then we had FP with SML & Hope and a couple of others that were poorly taught and most of us thought "this functional stuff is awful". YMMV.

  • mike_hearn 3 days ago

    The first isn't necessarily related to OOP. My first CS class at university gave everyone a programming quiz, in the very first lecture. The goal was to identify self-taught students so they could be made to help others in group work.

enugu 2 hours ago

This post doesn't read as a criticism of FP so much as of Haskell's module system; ML has a more powerful module system.

On r/haskell, user mutantmell gave an implementation of the code (assuming such a module system) closely following the Dart code given in the original post.

https://gist.github.com/mutantmell/c3e53c27b7645a9abad7ef132...

https://www.reddit.com/r/haskell/comments/1fzy3fa/oop_is_not...

>(Java-style) OOP offers a solution for this: you can code against the abstract type for code that doesn't care about the particular instance, and you can use instance-specific methods for code that does. Importantly, you can mix and match this single instance of the datatype in both cases, which I believe to be a superior coding experience than having two separate datatypes used in separate parts of your code.

>"Proper" module systems (ocaml, backpack, etc) offer a better solution that either of these: when you write a module that depends on a signature, you can only use things provided by that signature. When you import that module (and therefore provide a concrete instance of the signature), the types become fully specified and you can freely mix (Logger -> Logger) and (SpecificLogger -> SpecificLogger) functions. This has the advantage of working very well with immutable, strongly-typed functional code, unlike the OOP solutions.

>This is in essence the same argument for row-polymorphism, just for modules rather than records. It can be better to code against abstract structure in part of your code, and a particular concrete instance that adheres to that structure in other parts.

pyrale 3 days ago

The post's conclusion looks like the author has an axe to grind:

> I think it would be beneficial for the functional programming community to stop dismissing OOP’s successes in the industry as an accident of history and try to understand what OOP does well.

But the author has spent enough time in the haskell ecosystem, and probably has some cause for this statement. I would personally have liked to hear more about that cause, and the perceived issues in the community, rather than code examples.

carapace 3 days ago

I sure am glad that, when I was twelve and learning Pascal at middle school, my teacher took pains to point out that OOP is just a way of arranging code, it doesn't change the code semantically, it's just topological. You avoid a lot of noise and nonsense if you just keep that simple idea in mind: OOP is a style of arrangement, not semantics.

It's especially odd to compare and contrast OOP style with Functional Programming paradigm because these things are orthogonal.

  • coliveira 3 days ago

    My view of OOP is that it is a way for people to replicate in code the experience of "bullet lists". In other words, create a hierarchy by a particular method of code engineering that conforms to traditional techniques that many people (especially in management) already use. The disadvantages come exactly from the fact that not everything in math, science and engineering can be easily modeled using bullet lists.

bhouston 3 days ago

I changed my mind on this. I did OOP from the mid-1990s, when I learned it in high school, up until about 10 years ago. I find OOP works best when you have a single coder who can hold the model of the system in their mind and work out how to design the base and abstract classes well. And they also need the freedom to refactor THE WHOLE CODEBASE when they get it wrong (probably multiple times). Then you can make these webs of elegant ontologies work.

But in real life, when there is a team, you run into the fragile base class problem [1] constantly, and changing that base class causes horrible issues across your code base.

I have found that OOP with inheritance is actually a form of tight coupling and that it is best to not use class hierarchies.

I agree with encapsulation and modularity and well defined interfaces (typed function signatures are amazing.) I just completely disagree with inheritance in all forms.

There are no benefits to it (besides feeling smart because you've made an elegant but ultimately brittle ontology of objects and methods), just a ton of downsides.

[1] https://en.wikipedia.org/wiki/Fragile_base_class

  • ravenstine 3 days ago

    I totally agree that class hierarchies are mostly a trap, especially when there's more than one developer on a project.

    What I don't understand is why many programmers still use classes even when they plan on avoiding inheritance. I've seen this sort of thing in the past few major projects I've been involved with: the initial creators relied heavily on inheritance, future developers realized that was a mistake, and then those same developers continued using the class construct of whatever language they were in.

    At that point, why not just use functions and regular objects? Without inheritance, classes tend to have these other rules and complications that don't seem to really add anything when there is no hierarchy.

    • AnimalMuppet 3 days ago

      OO was created for a reason.

      Before OO, you had these data structures, and they got passed around between functions. Well-designed data structures were not just a random collection of items - they had a relationship between the elements. And you kept finding structures that were in an invalid state - the relationship among the elements had been broken. Well, who did that? You couldn't tell. It could have been any function anywhere in the code that took the structure.

      With OO, you could have a very small set of functions that could change the structure. If the constraints got broken, you had a very small footprint to look at to see how it could have happened. That was the OO advantage.

      And this advantage did not go away with multiple programmers. In fact, it increased. If you had a class that you "owned", some other new (or incompetent) programmer could not break your code without changing that class's code. You didn't have to rely on everyone else always doing the right thing with your structure.

      Note that this advantage is orthogonal to inheritance vs. non-inheritance.
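
      A minimal sketch of that advantage (the `Fraction` class and its invariant are invented here for illustration; Python for concreteness): mutation is funneled through a handful of methods, so if the invariant ever breaks, the suspect list is this class, not the whole program.

```python
class Fraction:
    """Invariant: the denominator is never zero.

    Only the methods below can mutate the state, so the footprint
    to audit when the invariant breaks is small.
    """

    def __init__(self, num: int, den: int):
        if den == 0:
            raise ValueError("denominator must be nonzero")
        self._num, self._den = num, den

    def scale(self, k: int) -> None:
        # All mutation goes through methods that guard the invariant.
        if k == 0:
            raise ValueError("scaling by zero would zero the denominator")
        self._num *= k
        self._den *= k

    def value(self) -> float:
        return self._num / self._den
```

      Any function elsewhere in the program can hold a `Fraction`, but none of them can put it into an invalid state.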

      • sirwhinesalot 4 hours ago

        Encapsulation is not solely an OOP idea, it's also a modular programming thing. That many languages lack proper module support is a whole other matter.

        But there are many advantages to doing encapsulation through modules over classes, if nothing else being able to have the internal functions organized in different files, and having "internal"/"external" functions have the exact same syntax without needing hacks like extension methods.

      • bhouston 3 days ago

        > And you kept finding structures that were in an invalid state - the relationship among the elements had been broken.

        Which is why functions should have typed arguments and results. :)

      • DeathArrow 3 hours ago

        > OO was created for a reason.

        Likewise, null was created for a reason. Not a good reason.

    • bhouston 3 days ago

      > At that point, why not just use functions and regular objects? Without inheritance, classes tend to have these other rules and complications that don't seem to really add anything when there is no hierarchy.

      I have done this myself. In TypeScript I would write a class that implements an interface. There is no hierarchy, but writing it as a class gives it structure and a clear name; you know how to instantiate it, what it supports, its private methods, private state, static functions, etc. So I find that classes still do great encapsulation for mutable objects.
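
      Sketched here in Python rather than TypeScript, with `typing.Protocol` playing the role of the interface (the `Store`/`MemoryStore` names are invented): the class has structure, private state, and a contract, but no base class anywhere.

```python
from typing import Optional, Protocol

class Store(Protocol):
    """The interface: a shape to match, not a hierarchy to extend."""
    def put(self, key: str, value: str) -> None: ...
    def get(self, key: str) -> Optional[str]: ...

class MemoryStore:
    """Satisfies Store structurally -- no inheritance involved."""

    def __init__(self) -> None:
        self._data: dict[str, str] = {}   # private mutable state

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)

def save_all(store: Store, items: dict[str, str]) -> None:
    # Callers depend only on the interface, never the concrete class.
    for k, v in items.items():
        store.put(k, v)
```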

  • ninetyninenine 3 days ago

    > I have found that OOP with inheritance is actually a form of tight coupling and that it is best to not use class hierarchies.

    Tight coupling is the basis of oop. A method is tied to an instance. Methods cannot be composed with other methods or functions without instantiating state. Methods cannot be moved into other scope.

    The literal definition of an object is a tightly bound set of methods that cannot ever be used without instantiating state.

    • chucksmash 3 days ago

      > Tight coupling is the basis of oop.

      This is not what OOP people are talking about when they talk about tight or loose coupling though.

      They are talking about the relationship between classes.

      • ninetyninenine 3 days ago

        And that's the fundamental problem. They fail to see that if they took those same methods and made them independent of state, those things would now be called functions.

        Functions can be moved to different scopes. Functions don’t rely on state to exist.

        You can compose functions with other functions to build new functions.

        And here’s the kicker. All of these functions did the same thing as the method.

        Functions are more modular. A method is a restricted function that is tightly bound to state and to all its sibling methods.
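
        A small sketch of the contrast (hypothetical names, Python for concreteness): the free functions compose into a pipeline with no state anywhere, while the method version is unusable until an instance exists.

```python
from functools import reduce

def double(x: int) -> int:
    return x * 2

def increment(x: int) -> int:
    return x + 1

def compose(*fns):
    # Left-to-right pipeline: compose(f, g)(x) == g(f(x))
    return lambda x: reduce(lambda acc, fn: fn(acc), fns, x)

pipeline = compose(double, increment)   # no instance, no state

class Counter:
    """The same operation as a method: tied to an instance."""
    def __init__(self, x: int):
        self.x = x

    def double(self) -> None:
        self.x *= 2   # cannot run without first instantiating state
```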

        • ajuc 3 days ago

          There's a trade-off.

          Functional programs are easier to read, because the structure makes the state transitions and dependencies obvious - you see your dependencies in the arguments list. But it forces you to basically rewrite big parts of your program after even very simple changes.

          You had the structure of the program so fine-tuned to the dependencies of every part of the code that when any dependency changes, you have to completely change that structure. It's a rewrite-only programming style.

          Imperative (and OO) programming idiomatically lets you make a bigger mess with side effects, and you know less about the data dependencies just from looking at the function signatures - but it also allows you to do exploratory programming much faster (no need to pass a new argument down 20 levels of your call stack when some code deep down suddenly requires one). And it allows you to modify behaviour locally without constantly refactoring the whole thing.

          If you have a for loop that filters out even numbers and suddenly you want to sum the numbers and find maximum and minimum too - most of the code stays the same.

          If you have functional code doing the same and want to modify it in similar manner - it's a completely different code. Most people would just rewrite it from scratch.

          And that's just a very small scale example. With larger programs the rewrite gets harder.

          That's why big programs are almost never functional.

          • ninetyninenine 3 days ago

            >If you have a for loop that filters out even numbers and suddenly you want to sum the numbers and find maximum and minimum too - most of the code stays the same.

               x = [1, 2, 3, 4, 5, 6, 7, 8]
               even = [i for i in x if i % 2 == 0]
               
            Now I want to sum the numbers and add maximum and minimum too.

              from functools import reduce  # reduce lives in functools in Python 3

              s = max(x) + min(x)
              res = reduce(lambda acc, y: acc + y, x, 0) 
            
            I achieved your desired goal without rewriting code. The thing is, with functional programming all state is immutable, so you can always access intermediary state without modifying the program at all.

            It's an improvement on imperative and OO, because I only needed to add code to achieve the additional goal, and those additions are modular and movable. With imperative I would be changing the code and changing the nature of the original logic, and none of it is modular; all of it is tightly integrated.

            • ajuc 3 days ago

              Sure, if you want to have 4 loops where 1 suffices.

              • ninetyninenine 3 days ago

                The big-O is the same, man. It just feels less efficient, but it's not. Think about it.

                And it's more modular with more loops. If you're trying to shove 4 different operations into one loop, you're not programming modularly; you're taking shortcuts.

                • ajuc 3 days ago

                  Reduce the FPS of your game from 60 to 15, then tell players it's the same because the complexity hasn't changed.

                  But it's not even mainly about performance. The structure of the code changes with every requirement change. In non-artificial code you're doing stuff other than calculating the result, and all the associated state and dependencies now have to be passed to 4 different loops.

                  While in the ugly non-modular imperative code you add 3 local variables and you're done, everything outside that innermost loop stays exactly the same.

                  > you’re trying to take shortcuts

                  Yes, that's the point. I started by admitting FP code is more elegant. But shortcuts are not inherently worse than elegance. They are just opposite sides of a trade-off.

                  • ninetyninenine 2 days ago

                    tbh FP is so heavily modularized that even the concept of "looping" is modularized away from the logical operation itself. In haskell it looks like this:

                       f = a . b . c . d
                       a1 = map a 
                       b1 = map b
                       c1 = map c
                       d1 = map d
                       f2 = a1 . b1 . c1 . d1
                    
                    Where f is the composition of operations on a single value and f2 operates on a list of values and returns the mapped list.
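
                    The same shape in runnable form (sketched in Python; `compose2` mirrors Haskell's right-to-left `.`, and `a`/`b` are toy stand-ins): composing per-element functions and lifting once with `map` gives the same result as composing the lifted versions.

```python
def compose2(f, g):
    # Mirrors Haskell's (.): compose2(f, g)(x) == f(g(x))
    return lambda x: f(g(x))

a = lambda x: x + 1
b = lambda x: x * 2

f = compose2(a, b)                 # per-element pipeline: a(b(x))
a1 = lambda xs: list(map(a, xs))   # lifted to lists
b1 = lambda xs: list(map(b, xs))
f2 = compose2(a1, b1)              # lifted pipeline

# Functor law: map a . map b == map (a . b)
assert f2([1, 2, 3]) == [f(x) for x in [1, 2, 3]]
```
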
                  • ninetyninenine 2 days ago

                    Gaming and other applications that require extreme performance are the only areas where FP doesn't work well.

                    > But it's not even mainly about performance. The structure of the code changes with every requirement change. In a non-artificial code you're doing stuff other than calculating the result, and all the associated state and dependencies now have to be passed to 4 different loops.

                    No. I’m saying the initial design needs to be one loop for every module. You then compose the modules to form higher level compositions.

                    If you want new operations then all you do is use the operations on intermediary state.

                    That's how the design should be. Your primitive modules remain untouched throughout the life cycle of your code, and any additional requirements are simply new modules or new compositions of unchanged, solid-design modules.

                    • DeathArrow 3 hours ago

                      >Gaming and applications that require extreme performance is the only application where Fp doesn’t work well.

                      Sure, if you are considering pure functional programming. But pure OOP doesn't work well in a performance context either.

                      If you mix imperative/procedural and functional programming you can have clarity, ease of use, ease of change and some performance, too.

          • ninetyninenine 3 days ago

            >Functional programs are easier to read, because the structure makes the state transitions and dependencies obvious - you see your dependencies in the arguments list. But it forces you to basically rewrite big parts of your program after even very simple changes.

            Disagree. Readability is opinionated so I won't address that but this is an example of functional:

               gg = (x) => x * x * x
            
               y = 1
               a = (x) => x + 1
               b = (x) => x * 2
               c = (x) => x * x
               d = (x) => x - 4
               f = a . b . c . d
               
               result = f(y)
            
            OOP example:

               class Domain1
                  constructor(n: int)
                      this.x = n
            
                  def gg()
                     this.x *= this.x * this.x      
            
                  def getX() -> int
                     return this.x
            
               class Domain2:
                  constructor(n: int)
                      this.x = n
                  
                  def a:
                      this.x += 1
            
                  def b:
                      this.x *= 2
            
                  def c:
                      this.x *= this.x
            
                  def d:
                      this.x -= 4
            
                  def f:
                     this.a()
                     this.b()
                     this.c()
                     this.d()
            
                  def getX() -> int
                     return this.x
            
              state = Domain2(1)
              state.f()
              result = state.getX()
            
                     
            
            What if I realize d and a fit better in Domain1, and I want to compose d and a with gg in the OOP program? I have to refactor Domain2 and Domain1, or create a Domain3 that includes a Domain1 and a Domain2.

            How do I do it in functional programming?

                  domain3 = gg . d . a
            
            
                  #Note: I use the function composition operator, which here means: a . b = (x) => b(a(x)) and a . b . c = (x) => c(b(a(x)))
            
            
            Functions by nature with the right types are composable without modification. A can compose with B without A knowing about B or vice versa. The same cannot be said for objects.

            Try achieving the same goal with OOP.... It will be a mess of instantiating state and objects within objects and refactoring your classes. OOP is NOT modular.

            It's pretty clear. One style is more modular than the other. Objects tie methods to state such that the methods are tied to each other and can't be composed without instantiating state or doing complex Object compositions and rewriting the Objects themselves.

            In programming you want legos. You want legos to compose. You don't want legos that don't fit such that you have to break the legos or glue them together.

            >Imperative (and OO) programming idiomatically let you do a bigger mess with side effects, and you know less about the data dependencies just from looking at the function specifications - but it also allows you to do exploratory programming much faster (no need to pass a new argument down 20 levels of your call stack when some code deep down suddenly requires a new argument). And it allows you to modify the behaviour locally without refactoring the whole thing constantly.

            You shouldn't ever be programming with state if you're doing FP. You need to segregate state away from your program as much as possible. State by nature is hard to modularize. State should be limited to very simple, generic mutation operations like getValue and setValue; it should rarely contain operational logic.

            • ajuc 3 days ago

              Your examples are inherently functional, and in practice people would do ABCDService that just returns the result. But ignoring that - how is the code changing when you need to call some external API from the c(x) function and handle the credentials, session, errors etc? Real world has external state and we do need to work with it.

              > you shouldn't be programming with state ever if you're doing FP. You need to segregate state away from your program as much as possible. State by nature is hard to modularize. State should be very simple generic mutation operations like getValue and setValue, it should rarely ever contain operational logic.

              This approach to state management is exactly what is causing the need to rewrite almost everything when requirements change in a functional program.

              I like how clean FP code is when it's done. But I hate writing FP code when I'm not 100% sure what needs to be done and what might change in the future. If I could write imperative code with side effects and once I'm done have it transpiled into efficient, elegant, minimized state functional code - that would be great. Maybe it will happen at some point with AI getting better.

              • ninetyninenine 3 days ago

                > Your examples are inherently functional, and in practice people would do ABCDService that just returns the result. But ignoring that - how is the code changing when you need to call some external API from the c(x) function and handle the credentials, session, errors etc? Real world has external state and we do need to work with it.

                ABCDService is bad because I want to use a, b, c and d in different contexts. You're saying that in the real world OOP promotes a style where you can't break your code down into legos. With OOP you need to glue a, b, c and d together.

                My code is not inherently functional. I literally picked the smallest possible logical operations and interpreted them as either functional or oop. And then I tried to compose the logical operations.

                I mean, look at it. a, b and c are just one or two mathematical operators each. If you're saying this is inherently functional, then you're saying computing at its most primitive is inherently functional.

                > This approach to state management is exactly what is causing the need to rewrite almost everything when requirements change in a functional program.

                Not true. FP programs segregate state. Look at my code: all the FP code is stateless. The only state is y = 1.

                > But I hate writing FP code when I'm not 100% sure what needs to be done and what might change in the future.

                You hate what's inherently better for the future. FP is more modular and therefore more adaptable for the future. You hate it because you don't get it.

                • ajuc 3 days ago

                  > If you’re saying this is inherently functional then your saying computing at its most primitive state is inherently functional.

                  Sure. But computing isn't what most code does.

                  > Fp is more modular and therefore more adaptable for the future

                  That is wrong. It's cleaner to read, but it usually requires more lines of code to be changed when requirements change - so it's less adaptable.

                  Before changes

                  FP:

                     y = 1
                     a = (x) => x + 1
                     b = (x) => x * 2
                     c = (x) => x * x
                     d = (x) => x - 4
                     f = a . b . c . d
                  
                     result = f(y)
                  
                  ugly imperative code:

                      y = 1;
                      void doABCD() {
                         y += 1;
                         y *= 2;
                         y *= y;
                         y -= 4;
                      }
                  
                  Now you want to count how many times you squared numbers larger than 1000.

                  FP:

                     y = [1, 0]
                     a = (x) => [x[0] + 1, x[1]]
                     b = (x) => [x[0] * 2, x[1]]
                     c = (x) => [x[0] * x[0], x[0] > 1000 ? x[1]+1 : x[1]]
                     d = (x) => [x[0] - 4, x[1]]
                     f = a . b . c . d
                  
                     result = f(y)
                  
                  imperative:

                      y = 1;
                      count = 0;
                      void doABCD() {
                         y += 1;
                         y *= 2;
                         if (y > 1000)
                             count ++;
                         y *= y;
                         y -= 4;
                      }
                  
                  Total lines changed in FP - all except 2. Total lines changed in imperative - 3.

                  Of course you can refactor the FP version to split the part that requires the new state from the other parts. But in any big program that refactor is going to be PITA.

                  Do you get my point now? I'm not saying imperative is better. It is ugly. But it's faster to adapt to the new requirements.

                  • ninetyninenine 2 days ago

                    No way. Imperative is worse.

                    You're just not using FP correctly. You're trying to do something monadic which is something I would avoid unless we absolutely need an actual side effect.

                       y = 1
                       a = (x) => x + 1
                       b = (x) => x * 2
                       c = (x) => x * x
                       d = (x) => x - 4
                       f = a . b . c . d
                    
                       result = f(y)
                    
                    You're doing the refactor wrong let me show you. You have to compose new pipelines that reveal the intermediate values.

                       count = 0
                       firstPart = a . b
                       secondPart = c . d
                       countIfGreaterThan1000 = (x, prevCount) => x > 1000 ? prevCount + 1 : prevCount
                    
                       n = firstPart(y)
                       newCount = countIfGreaterThan1000(n, count)
                       result = secondPart(n)
                    
                    The key here isn't the number of lines of code. The key is to see that under FP the original code is like legos: if you want to reconfigure your fundamental primitives, you just recompose them into something different. You don't have to modify your original library of primitives. With OOP you HAVE to modify it. doABCD() can't be reused. What if I want something additional (doABCD2) that does the EXACT same thing as doABCD(), but instead of counting how many times a number greater than 1000 was squared, it counts how many times the total was greater than 3?

                    You can't reconfigure the code. You have to duplicate the code now.

                    Basically you have to imagine functional programming as pipelines. If you want to add something in the middle of a pipeline, you cut the composition in half and split the pipe: one pipe goes towards the end result of what d outputs, and the other goes towards countIfGreaterThan1000.

                    • ajuc 2 days ago

                      So your solution to changing 5 lines out of 7 was to do the refactor I wrote about and change 7 lines :)

                      I agree it's prettier. But it's objectively a larger change than the 3 lines you'd do in the imperative code. And it's pretty much how adapting to changes usually goes with FP. You constantly have to change the outermost structure of the program even if the change in the requirements is localized to one specific corner case.

                      > What if I want something that does the EXACT same thing as doABCD() but now without counting the amount of times something was squared and greater than 1000 but now instead I want it for the amount of times the total was greater than 3?

                      > You can't reconfigure the code. You have to duplicate the code now.

                      I could, but at this point refactoring is warranted.

                          y = 1;
                          y2 = 1;
                          count = 0;
                          _ = 0;
                          count2 = 0;
                          void doABCD(int &y, int &count, int &count2) {
                             y += 1;
                             y *= 2;
                             if (y > 1000)
                                 count ++;
                             y *= y;
                             y -= 4;
                             if (y > 3)
                                 count2 ++;
                          }
                          doABCD(y, count, _);
                          doABCD(y2, _, count2);
                      
                      8 changes. 11 in total for both modifications.

                      In FP you had 7 lines of code changed for the first refactor

                         y = 1
                         a = (x) => x + 1
                         b = (x) => x * 2
                         c = (x) => x * x
                         d = (x) => x - 4
                         count = 0;
                         firstPart = a . b
                         secondPart = c . d
                         countIfGreaterThan1000 = (x, prevCount) => x > 1000 ? prevCount + 1 : prevCount
                         n = firstPart(y)
                         newCount = countIfGreaterThan1000(n, count)
                         result = secondPart(n)
                      
                      and now you'd have sth like

                         y = 1
                         y2 = 1
                         a = (x) => x + 1
                         b = (x) => x * 2
                         c = (x) => x * x
                         d = (x) => x - 4
                         count = 0
                         count2 = 0
                         firstPart = a . b
                         secondPart = c . d
                         countIfGreaterThan = (x, target, prevCount) => x > target ? prevCount + 1 : prevCount
                         n = firstPart(y)
                         newCount = countIfGreaterThan(n, 1000, count)
                         result = secondPart(n)
                         result2 = (firstPart . secondPart) (y2)
                         newCount2 = countIfGreaterThan(result2, 3, count2)
                      
                      That's 7 + 6 = 13 lines for 2 changes if I'm counting correctly.

                      What FP buys you is not deduplication (you can do that in any paradigm) - it's easier understanding of the code.

                      • ninetyninenine 2 days ago

                        Let me emphasize it’s not about prettier. Prettier doesn’t matter.

                        The key is that the original code is untouched. I don’t have to modify the original code. Anytime you modify your original code it means you initially designed poor primitives. It means you made a mistake in the beginning and you didn’t design your code in a modular way. It’s a design problem. You designed your code wrong in the beginning so when a new change is introduced you have to modify your design. This is literally a form of technical debt.

                        Do you see what FP solves? I am not redesigning my code. I made the perfect abstraction from the beginning. The design was already perfect, such that I don't have to change anything about the original primitives. That is the benefit of FP.

                        Nirvana in programming is to find the ultimate design scheme such that you never need to do redesigns. Your code becomes so modular that you are simply reconfiguring modules or adding modules as new requirements are introduced. Any time you redesign it means there was technical debt in your design. Your design was not flexible enough to account for changing requirements.

                        Stop looking at lines. In the real world, if you modify your code, that usually cascades into thousands of changes in dependent code. In FP I simply link modules together in a different way. The core primitives remain the same. The original design is solid enough that I don't change code; I just add new features to the design.

                        Also for your example you misinterpreted what I said. I don’t want to change the original signature of doABCD because it’s already used everywhere in the application. I want a new doABCD2 that does exactly the same as the original. Remove the side effect from the original and add a new side effect to the new doABCD of counting something else.

                        Do it without duplicating code or refactoring because duplicate code is technical debt and refactoring old code is admission that old code was not the right design. Be mindful that refactoring the signature means changing all the thousands of other code that depends on doABCD. I don’t want to do that. I want new features to be added to an already perfect design.

                        FP, in my opinion, is ironically actually harder to read.

                        • ajuc 2 days ago

                          > You designed your code wrong in the beginning so when a new change is introduced you have to modify your design. This is literally a form of technical debt.

                          Yes. And just like in real life if you want to do business - you have to accept some degree of debt to get anywhere. Trying to predict the future and make the perfect design upfront is almost always a mistake.

                          > Stop looking at lines

                          We can't communicate without establishing some objective measures. Otherwise we'll just spew contradictory statements at each other. These toy examples are bad, obviously, but the fact that there are basically no big functional programs speaks for itself.

                          > refactoring old code is admission that old code was not the right design

                          And that's perfectly fine.

                          > I want a new doABCD2 that does exactly the same as the original. Remove the side effect from the original and add a new side effect to the new doABCD of counting something else.

                          According to your definition of "code changed" if I duplicate everything and leave the old lines there - no code was changed which means the design was perfect :)

                          I don't think we'll get to a point where we agree about this. One last thing I'd like to know is why do you think nobody writes big projects in functional languages?

                          • ninetyninenine 2 days ago

                            >Yes. And just like in real life if you want to do business - you have to accept some degree of debt to get anywhere. Trying to predict the future and make the perfect design upfront is almost always a mistake.

                            And I'm saying FP offers a way to avoid this type of debt altogether. You can accept it if you want. I'm just telling you about a methodology that avoids debt: a perfect initial design that doesn't need refactoring.

                            >We can't communicate without establishing some objective measures. Otherways we'll just spew contradictory statements at each other. These toy examples are bad, obviously, but the fact that there's basically no big functional programs speaks for itself.

                            Sure, then I'm saying lines of code is not an objective measure. Let's establish a better one: the number of lines of the original design that had to change. It's universally accepted that lines of code isn't really a good measure, but it's one of the few quantitative numbers available. So I offer a new metric: how many lines of the original design did you change? In mine: 0.

                            I don't want to write the pseudocode for it, but let's say doABCD() is called in 1000 different places as well. Then in the imperative code you have 1000 lines of changes thanks to one structural change. Structural design changes lead to exponential changes in the rest of the code, hence this is a better metric.

                            That's an objective measure showing how FP is better. I didn't take any jumps into intuition here and I am sticking with your definition of an "objective measure"

                            >And that's perfectly fine.

                            That's just opinion. Surely you see the benefit of a perfect initial design such that code never needs refactoring. It happens so often in business that refactoring is considered normal, but I'm saying here's a way to perfect your design at the beginning. That's the whole point of modularity, right? It's an attempt to anticipate future changes and minimize refactoring, and FP offers this in a way objectively better than imperative. If you're always changing the design when a new feature is added, what's the point of writing modular, well-designed code? Just make it work and forget about everything else, because it's "okay" to redesign it.

                            >According to your definition of "code changed" if I duplicate everything and leave the old lines there - no code was changed which means the design was perfect :)

                            But then you introduced more technical debt. You duplicated logic. What if I want to change the "a" operation? Now I have to change it for both doABCD and doABCD2. Let's assume I have doABCD3 and 4 and 5 and 6 all the way to 20, which all use operation "a", and now they all have to be changed because they all used duplicate code.

                            Let's not be pedantic. Refactoring code is a sign of technical debt from the past. But also obviously duplicating code is also known to be technical debt.

                            >I don't think we'll get to a point where we agree about this.

                            Sure, but under objective measures FP has better metrics. Opinion-wise we may never agree, but objectively, if we use more detailed and comprehensive rules for the metric, FP is better.

                            >One last thing I'd like to know is why do you think nobody writes big projects in functional languages?

                            Part of the reason is because of people with your mentality who don't understand. It's the same reason why the US doesn't use metric. Old cultural habits on top of lack of understanding.

        • DeathArrow 3 hours ago

          > And here’s the kicker. All of these functions did the same thing as the method.

          My only question is: if you have functions x, y, and z, how do you restrict function z so it can only be called from function x, but not from function y? If you have classes you can use access modifiers.

    • the_af 3 days ago

      I don't think a method being tied to an instance is the best case for calling it "tight coupling".

      OOP can be used to design relatively uncoupled systems.

      One of the lessons learned in all these decades is what the grandparent post alludes to, which is distilled into "prefer composition over inheritance". Implementation inheritance (as opposed to interface inheritance) indeed introduces coupling and is therefore discouraged in current advice.

      • gorjusborg 3 days ago

        > OOP can be used to design relatively uncoupled systems.

        Never go full-Object-oriented programming.

        In this case I think it is valuable to make a distinction between OOP which is a style of programming, and object-oriented languages, which are just a language designed with that style in mind.

        I have seen issues in codebases where developers have used OOP as style to aspire to, using it in an academic sense. They tend to try to use inheritance frequently, have deep inheritance trees, and suffer from hidden coupling through these.

        On the other hand, those who use object oriented languages in a mostly functional style (side-effect free functions, effective immutability, and almost no use of inheritance) tend to be much healthier in the long term.

        So it's fine to use OO languages, but never go full OO programming.

        • jnwatson 3 days ago

          I think the poster child of going full OOP (that one can look at) is ACE/TAO [1], an implementation of CORBA. It had deep inheritance trees and abstractions piled on abstractions.

          Similar to Mach and microkernels, folks ran ACE/TAO and thought CORBA was slow, when it was just the implementation that was not built for speed.

          1. https://github.com/DOCGroup/ACE_TAO

        • the_af 3 days ago

          > Never go full-Object-oriented programming.

          Agreed, but that wasn't what I was saying or replying to, was it?

          I was arguing that method implementation tied to an instance isn't the type of thing people mean when they refer to tight coupling. Coupling is related to breakage/maintenance; when you touch this thing here, if it's tightly coupled with some other component, it will also require (sometimes unexpected) changes in that other component.

          Whether one should or shouldn't go full OO is an orthogonal consideration.

          • gorjusborg 3 days ago

            Oh, no, I didn't intend to suggest anything contrary to what you said, I was just adding on.

      • ninetyninenine 3 days ago

        You don’t see the problem.

        It’s better to have all your logic be loosely coupled down to the smallest primitive.

        What’s the point of tying groups of logic together, gluing them up with state, and calling that the fundamental unit of composition?

        You see the problem? 2 years down the line you find out that a certain class has methods that are better reused in another context but it’s so tightly coupled that the refactoring is insanity.

        Better to have had state decoupled from function and to have functions decoupled from each other and not tied together by common state. If you do this you get rid of all the fundamental technical debt that arises from oop. You guys don’t see it. Oop is a major cause of technical debt because of tight coupling.

        We can’t predict the future. You can’t guess that a method that exists in class A will 2 years down the line be better fit in class B or as its own class. So because you can’t know the future isn’t it logically better to not couple all your logic together into these arbitrary bundles called classes?

        Break your function down into more smaller modules of computation. The object class is too large.

        But then you ask: how do I create bigger abstractions? Just compose functions together to form bigger functions. For state, compose struct types together to form bigger structs. Using this method to build your abstractions allows you to break down your abstractions into smaller units whenever you want!

        You can’t break down the class. The class is stuck. I can’t reuse a portion of state in another context and I can’t do the same thing with my methods. What’s the point of using classes to place arbitrary and pointless restrictions on modularity? None.
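
        A minimal Python sketch of the composition the comment above describes (all names are invented for illustration): small structs with no methods, and pure functions composed into bigger ones.

```python
from dataclasses import dataclass

# Small, independent units of state: plain structs with no methods.
@dataclass(frozen=True)
class Name:
    first: str
    last: str

@dataclass(frozen=True)
class Address:
    city: str

# Bigger state is built by composing smaller structs.
@dataclass(frozen=True)
class Person:
    name: Name
    address: Address

# Small, independent units of logic: pure functions.
def full_name(n: Name) -> str:
    return f"{n.first} {n.last}"

# Bigger logic is built by composing smaller functions.
def label(p: Person) -> str:
    return f"{full_name(p.name)} ({p.address.city})"
```

        Each piece (Name, full_name) stays reusable on its own, without instantiating a Person.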

        • the_af 3 days ago

          I agree that in many cases there is a problem, and indeed, objects can be designed as too coarse.

          I agree in many cases it leads to problems of composition. Some design principles have been devised to mitigate this, such as the "Single Responsibility Principle" (and others). Nothing is fool-proof however, and everything is further complicated by the fact no-one seems to agree on precise definitions of any principles.

          God Objects are one such known problem of highly coupled, low cohesion functions grouped into arbitrary objects.

          Objects naturally group related functions in some cases (when they truly conform to a coherent entity), so I guess I disagree they are always wrong. But when OOP became fashionable, designers started thinking everything must be an object, and this is obviously wrong -- but is it OOP's fault, or was it the fault of its adopters? The "everything is an object" mantra is indeed misguided when applied to every software system.

          Functions can fall prey to the same faulty thinking. I've seen many times functions "in the wild" that do too many things, tweakable through too many parameters. They usually must be refactored.

          In fact, refactoring is where you split objects that have become too large or ill-defined for their own good, is it not?

          In the end, I think this is more about good software engineering practices rather than "one must use/must not use OOP/FP" or whatever ?Programming style.

          • ninetyninenine 3 days ago

            >I agree that in many cases there is a problem, and indeed, objects can be designed as too coarse.

            Or don't put your methods in an object at all. Then you don't need to even worry about everything being designed coarse because your object doesn't even exist in the first place.

            >Objects naturally group related functions in some cases (when they truly conform to a coherent entity), so I guess I disagree they are always wrong.

            Think of it like this: You can build a lego project by gluing all the pieces together (aka OOP) but I would say this is always wrong because if you just connect the pieces together without glue they will stick together but they can be split apart at the same time. In OOP your mistakes may not be evident until years later, OR changing requirements make the glue hard to remove...

            Thus I say it's always wrong to use OOP. Just don't glue anything together. Leave it all decoupled. There's no point to bring glue to a lego set.

            >Functions can fall prey to the same faulty thinking. I've seen many times functions "in the wild" that do too many things, tweakeable through too many parameters. They usually must be refactored.

            So? Taking the SAME function and placing it in an Object doesn't solve this problem. This problem you describe is completely orthogonal to the issue I'm describing, because it exists in your logic independent of whether that logic is a method or a function.

            >In fact, refactoring is where you split objects that have become too large or ill-defined for their own good, is it not?

            Yeah, if your logic was a collection of functions you don't have to spend the inordinate effort to remove the glue. All you need to do is recompose the lego building blocks in a different way, because there wasn't any glue holding them together (if you didn't use OOP).

            >In the end, I think this is more about good software engineering practices rather than "one must use/must not use OOP/FP" or whatever ?Programming style.

            I didn't specify FP here. OOP is NOT good software engineering practice is what I'm saying here.

    • echelon 3 days ago

      Coupling behaviors to types is not a problem. Class inheritance and multi-inheritance with their weird taxonomical trees are the problem.

      There's little practical reason to build a family tree of supertypes and subtypes outside of building GUIs, yet this is how class-based OOP is designed and taught.

      Trait-based OOP gets this right. The hierarchy is completely flat. You simply implement the behaviors for the types you want and don't have to think about grandparent behavior and final interfaces.
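
      Python doesn't have Rust-style traits, but structural typing via `typing.Protocol` gives a rough feel for the flat arrangement described above (the `Drawable` name and shapes are invented for illustration):

```python
from typing import Protocol

class Drawable(Protocol):
    def draw(self) -> str: ...

# No base class, no hierarchy: each type just implements the behavior.
class Circle:
    def draw(self) -> str:
        return "circle"

class Square:
    def draw(self) -> str:
        return "square"

# Callers depend on the behavior, not on any family tree of supertypes.
def render(shapes: list[Drawable]) -> str:
    return ", ".join(s.draw() for s in shapes)
```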

      • whstl 3 days ago

        IMO: The hierarchy part itself is not as problematic when there is no implementation inheritance involved. Hierarchical interface chains can be fine in my experience, but I’m open to changing my mind.

        The problem IMO arises when you mix both deep hierarchies and implementation inheritance.

      • ninetyninenine 3 days ago

        > Coupling behaviors to types is not a problem. Class inheritance and multi-inheritance with their weird taxonomical trees are the problem.

        I didn’t mention coupling behaviors to types. I said coupling behaviors to state is the problem. When I couple method a to state it’s a big problem. I want method banana but method banana lives in the jungle and has all the other methods related to jungle.

        So all I wanted was a banana now I have a jungle. Why not think of the banana as modular from the jungle? Why even use oop?

        I agree with you on the inheritance thing. Thats a different issue.

    • DeathArrow 3 hours ago

      > Tight coupling is the basis of oop.

      But then we invented inversion of control principle to remove that tight coupling. :)

    • nsonha 3 days ago

      obviously when they refer to loose coupling, they refer to the coupling between an object's interface and its hidden state/implementation.

      It's very easy to deliberately miss the point and complain about the "tight coupling" that, by design, prevents messages from being sent to/called upon invalid states, which functional programming also has, albeit with a different approach using static type inference & pattern matching.

      • the_af 3 days ago

        The tight coupling the OP refers to is between different components, in this case different classes in the same inheritance tree.

        If you use implementation inheritance, i.e. a base class with a tree inheritance of derived classes, there's coupling between the derived classes and the base class, because changes done carelessly in the base class (for example, to accommodate one additional child class) may impact the behavior of already existing derived classes. There are all sorts of principles to minimize this risk, but they are there precisely because it's such a big risk.

        Therefore "inheritance as reuse" introduces dangerous coupling, but it's not a case of the "interface being coupled to the implementation of the object".
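
        A classic illustration of this risk, with invented names: a derived class that works today can break when the base class is "improved", without any change to the derived code.

```python
class Counter:
    def __init__(self):
        self.count = 0

    def add(self, value):
        self.count += value

    def add_many(self, values):
        # Version 1 delegated to self.add(); suppose a later
        # "optimization" inlines the loop instead:
        for v in values:
            self.count += v   # no longer calls self.add()

class LoggingCounter(Counter):
    """A derived class that relied on add_many() calling add()."""
    def __init__(self):
        super().__init__()
        self.log = []

    def add(self, value):
        self.log.append(value)
        super().add(value)
```

        After the base-class change, add_many() silently bypasses the override: the count is right but nothing gets logged.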

        • nsonha 3 days ago

          no one ever brought up "inheritance as reuse" in this thread, don't straw man.

          It's called OOP, NONE of those letters stands for "inheritance"

          • the_af 3 days ago

            > no one ever brought up "inheritance as reuse" in this thread, don't straw man.

            Yes they did. I quote one of the statements at the top of this thread:

            > But in real life, when there is a team, you run into the fragile base class [1] constantly and changing that base class causes horrible issues across your code base.

            The "fragile base class" is one of the problems of "inheritance as reuse". It simply doesn't occur with interface inheritance (the other common type of OOP inheritance).

            > It's called OOP, NONE of those letters stands for "inheritance"

            That line of conversation is fruitless. We're discussing OOP as practiced, not as defined by Alan Kay (who came to regret the name "OOP" anyway).

            OOP as practiced is all about inheritance, and sadly, its pitfalls. This is exactly what other people in this thread are discussing pros and cons of! Not OOP in academia, but in practice.

            Also, it's mentioned in the article which you hopefully have read:

                In this post I use the word “OOP” to mean programming in statically-typed language with:
                
                - Classes, that combine state and methods that can modify the state.
                - Inheritance, which allows classes to reuse state and methods of other classes.
                - Subtyping, where if a type B implements the public interface of type A, values of type B can be passed as A.
                - Virtual calls, where receiver class of a method call is not determined by the static type of the receiver but it’s runtime type.
            • nsonha 3 days ago

              > Tight coupling is the basis of oop. A method is tied to an instance. Methods cannot be composed with other methods or functions without instantiating state. Methods cannot be moved into other scope.

              > The literal definition of an object is a tightly bound set of methods that cannot ever be used without instantiating state.

              That is the nonsensical point I responded to; you're bringing up an actual bigger debate that's out of scope.

      • ninetyninenine 3 days ago

        >obviously when they refer to loose coupling, they're refer to that between interface and hidden states/implementation of object.

        Right and I'm saying it's the tight coupling of Objects themselves that's the main problem.

        >It's very easy to deliberately miss the point and complain about the "tight coupling" that, by design, prevents messages to be sent to/called upon invalid states,

        More like you missed my point. You should avoid coupling logic and state as much as possible, period. OOP encourages this coupling. You have one state called A, and A has 20 methods attached to it. Boom, now all 20 methods and A are tightly coupled.

        At most one should have only a simple setter and getter on A, and all the transformational logic be pure functions and decoupled from the changing of state.

        Oh and you can use type checking to prevent logic to operate on invalid state.
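
        A sketch of the arrangement described above, with invented names: the state holder carries no transformational logic, the pure functions can be tested and reused without it, and mutation happens in one place at the edge.

```python
from dataclasses import dataclass

@dataclass
class Account:
    # State holder: nothing beyond plain field access.
    balance: int

# Transformational logic lives in pure functions, decoupled from the state.
def apply_interest(balance: int, rate: float) -> int:
    return int(balance * (1 + rate))

def deduct_fee(balance: int, fee: int) -> int:
    return balance - fee

# Mutation happens in exactly one place, at the edge.
def update(account: Account, rate: float, fee: int) -> None:
    account.balance = deduct_fee(apply_interest(account.balance, rate), fee)
```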

        • nsonha 2 days ago

          > avoid coupling logic and state as much as possible period

          depending on how you define "state". In OOP, the coupling is between a bunch of transitions (as a single class) to a bunch of states (as a single type). It's actually *loose* coupling that is the problem here because that set-up can easily produce invalid states. It can be addressed with the builder pattern.

          Tight coupling is a problem alright, no one is denying that. It's just there is nothing wrong conceptually about "coupling state to the logic", that is a non-problem, and "tight coupling" has never meant that.
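
          One way to read the builder-pattern remark above, as a hedged sketch with invented names: the builder refuses to produce an object until every required piece is present, so a half-configured (invalid) state can never escape into the rest of the program.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Connection:
    host: str
    port: int

class ConnectionBuilder:
    def __init__(self):
        self._host = None
        self._port = None

    def host(self, host: str) -> "ConnectionBuilder":
        self._host = host
        return self

    def port(self, port: int) -> "ConnectionBuilder":
        self._port = port
        return self

    def build(self) -> Connection:
        # The only way to obtain a Connection is through here,
        # so a partially-configured one can never escape.
        if self._host is None or self._port is None:
            raise ValueError("host and port are both required")
        return Connection(self._host, self._port)
```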

          • ninetyninenine 2 days ago

            Highly disagree: coupling state to logic is a huge problem. Once you couple state and logic together you lose modularity, so such couplings need to be avoided.

            The reason is as I said. Once state and logic are coupled it becomes entangled with all the logic related to the same state.

            What you’re referring to is a transaction. Where multiple states must be changed at the same time as a transaction. And while you don’t explicitly say it you’re implying that transactions are required. They can’t be avoided. And that’s where I agree. Coupling state and logic is a problem and must be avoided as much as possible, but you can’t fully avoid it.

            The main problem with oop is that it promotes the coupling of state and logic everywhere. The style of programming maximizes tight coupling all over the place. That is the problem and the insight you’re not seeing. Years of doing oop makes you feel that such couplings can’t be minimized. This makes oop highly unmodular. It makes it so that refactoring code is normal almost every time a new feature is introduced.

            If you want more insight into what I’m talking about you can follow some of the branching threads that I’m in where I write examples and maybe you’ll see what I’m talking about.

            • nsonha a day ago

              no one is talking about transactions. I'm talking about state machines, which is what most programs boil down to.

              The problem with OOP is, AGAIN that all states are lumped together in a single type (class) and so are all the state transitions (methods). This IS a problem, but it is not "tight coupling". "Tight coupling" has a known meaning and refers to relationship between components/classes, see what people mean by it in this thread.

              I can simply just repeat that the situation here is actually *loose* coupling, because in order to correctly model a state machine, you need to further break down the class into individual states (states, not properties; I feel this distinction needs noting) and attribute only the relevant state transitions/methods to each of them, making the coupling of the logic and the state even tighter.

              Functional programming is the same: in a strong type system, which they often have, you can't pass the input of this function to another. The solution to that is to write more unixy functions that take more generic types. But for high-level, domain-related functions, this coupling is what you actually want.

              We can disagree on whether "coupling state to logic is a huge problem". My original point is that it's an entire other thing not the particular "tight coupling" that is the downside of OOP, as widely discussed.

              • ninetyninenine 21 hours ago

                > no one is talking about transaction. I'm talking about state machine which is what most program boils down to.

                A transaction is an operation on a state machine in the form you yourself described below:

                > depending on how you define "state". In OOP, the coupling is between a bunch of transitions (as a single class) to a bunch of states (as a single type).

                The bunch of transitions on a bunch of state is a “transaction”

                > "Tight coupling" has a known meaning and refers to relationship between components/classes, see what people mean by it in this thread.

                Tight coupling does not mean that. See the generalized response from ChatGPT: https://chatgpt.com/share/6719060e-9e8c-8001-bdf9-79502712c1...

                Tight coupling encompasses the relationship between components and classes but it has a more generalized meaning illustrated in the link above.

                > I can simply just repeat that the situation here is actually loose coupling, because in order to correctly model state machine, you need to further break down the class into individual states (states not properties, I feel like you need to be noted this distinction) and attribute only the relevant states transitions/methods to each of them, making the coupling of the logic and the state even tighter.

                Not clear what you’re saying here. State on classes does come in the form of properties. They are the same in my mind. Please illustrate the distinction. Also what do you mean by break it down? And how does breaking it down make coupling tighter?

                Are you saying if class A has two properties, say two ints x and y whose state changes need to happen at the same and if I break down x and y into individual classes then to ensure that changes on x and y happen at the same time I have to use object composition and have x and y owned by a third even bigger class and this is the “tighter coupling” you’re talking about that is ironically caused by “loosely coupling” x and y into separate classes?

                Functional programming doesn’t address this if this is what you’re talking about. Functional programming addresses the coupling between method and state.

                Coupling between state and state isn’t an issue in Fp because everything is immutable.

                Why don’t you write some pseudo code so I can better understand what you’re addressing? I was going to do it but I want to be sure about what you’re talking about.

                > Functional programming is the same, in a strong type system, which they often have, you can't pass input of this function to another. The solution to that is you write more unixy functions that take more generic types. But for high-level, domain-related functions, this coupling is what you actually want.

                Again not clear. You talk about state machines then you say Fp is the same when Fp doesn’t have a state machine. Fp is all about writing functions that declare the entire state in one go.

                > I'm talking about state machine which is what most program boils down to.

                In Fp, the state machine is abstracted away from the program so not sure how this relates? State change is a problem that can’t fully go away but it’s segregated from your code as much as possible in Fp.

                Perhaps you should illustrate your point with pseudocode because at this point there could be a number of issues we aren’t seeing eye to eye on some of which may only be related to communication.

                • nsonha 20 hours ago

                  Sorry I just can't

                  • ninetyninenine 19 hours ago

                    It’s fine. My honest interpretation of it is that you’re not well informed. But I gave you the benefit of the doubt thinking that maybe I can learn more in case you actually knew what you’re talking about.

                    Good day to you sir.

  • bluGill 3 days ago

    >I have found that OOP with inheritance is actually a form of tight coupling

    You are not the only one. Prefer composition to inheritance is a saying for a reason. Inheritance is powerful and useful for small problems, but as you say it introduces tight coupling and so should only be used where that is intended. Tight coupling isn't always bad, but it is bad often enough to avoid it.

  • ajuc 3 days ago

    > I find OOP works best when you have a single coder who can store the model of the system in this mind and work how out to design the base and abstract classes well.

    When all the states of the program and all the transitions between them fit into one programmer's head - every programming paradigm works well.

    • bhouston 3 days ago

      Good observation. That makes my statement an even worse condemnation of OOP than I thought.

  • piva00 3 days ago

    Similar thinking, I've been through much pain from a class hierarchy degrading into an unusable mess, also done a lot of rework refactoring those and it doesn't pay off.

    Instead of inheritance I much prefer the composition approach: not extending classes but defining interfaces for the API, and using other classes in the composition of an object rather than relying on overrides/implementations from a base class. It's much clearer to reason about (no more 3-4 layers of indirection), and easier to refactor as well.

  • bn-l 3 days ago

    Exactly my thoughts. When I started really getting functional programming it was a breath of fresh air. For me the test is not just how fast someone else can pick up the codebase but how fast I can remember how it all works in n years.

  • DeathArrow 3 hours ago

    > But in real life, when there is a team, you run into the fragile base class [1] constantly and changing that base class causes horrible issues across your code base.

    And then you start writing unit tests for everything and wait 10 hours for building and deployment, which is a good thing because your employer will need more people to do the work. Or you can't fix a bug without creating other 10 new bugs, which is a good thing because your employer will need you to solve that 10 new bugs.

    OOP keeps people employed. :)

  • moffkalast 3 days ago

    My thoughts as well. Inheritance forces you into spaghettification sooner or later if you didn't consider everything you'll ever need from the start to the end of the universe. Even if you're working on it by yourself the refactors become frequent and take too much time for little gain.

    Self contained encapsulated parts with an eventbus for entirely decoupled data propagation are much easier to manage and edit. Just duplicate a bit more than you think would be best to reuse; disk space is cheap, and if you get that part wrong you're screwed and will need another rewrite.

    And it's probably more performant too with newer hardware since you can usually just spin out any self contained part as its own thread if need be, often with almost zero changes.
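
    A minimal sketch of the eventbus idea above (names invented): parts stay decoupled because they only share topic names, never references to each other.

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe bus. Publishers and subscribers
    never hold references to each other, only to the bus."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Topics with no subscribers are simply ignored.
        for handler in self._subscribers[topic]:
            handler(payload)
```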

  • 000ooo000 3 days ago

    >I have found that OOP with inheritance is actually a form of tight coupling and that it is best to not use class hierarchies [..] I just completely disagree with inheritance in all forms

    This is pretty common advice (at least IME), usually distilled into the form "prefer composition to inheritance".

    • bhouston 3 days ago

      > This is pretty common advice (at least IME), usually distilled into the form "prefer composition to inheritance".

      I think this is why in the current JavaScript era, it is rare to find a popular library that makes significant use of inheritance (as opposed to just supporting interfaces) even though it is supported in the language.

      ThreeJS is one of those exceptions, but they are few and far between.

  • globular-toast 5 hours ago

    A useful rule is not to extend a class you don't own. This is usually taken to mean don't extend a class from a third party library. But it should also mean classes that are shared by multiple developers. In that case, compose, don't inherit.
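
    A sketch of the rule above, with a hypothetical third-party class: instead of subclassing it, wrap it, delegate, and expose only what you need.

```python
# Pretend this class comes from a third-party library we don't own.
class VendorHttpClient:
    def get(self, url: str) -> str:
        return f"GET {url}"

# Composition: hold an instance and delegate, keeping the vendor's
# surface area out of our own public interface.
class ApiClient:
    def __init__(self, http: VendorHttpClient):
        self._http = http

    def fetch_user(self, user_id: int) -> str:
        return self._http.get(f"/users/{user_id}")
```

    If the vendor class changes, only the thin wrapper needs attention, not every caller.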

ildon 3 days ago

I noticed there's an entire paragraph explaining what OOP is, but it might be helpful to clarify that OOP stands for Object-Oriented Programming. Even though it's a well-known acronym, adding that explanation could benefit readers who are new to the concept.

  • Tempest1981 3 days ago

    I remember the early days of HTML, when people excitedly used hyperlinks to define acronyms.

odyssey7 an hour ago

Two words which might change your life: equational reasoning.

epolanski 2 hours ago

The latest state of successful FP projects can be found not in Haskell (which, for all its hype, has yet to produce one killer piece of software when even PHP has many) but in effect systems à la effect-ts or ZIO.

OP's example is trivially solved by services in both, because both treat dependencies as first class constructs and types.

jy14898 4 hours ago

> However, unlike our OOP example, existing code that uses the Logger type and log function cannot work with this new type. There needs to be some refactoring, and how the user code will need to be refactored depends on how we want to expose this new type to the users.

This is odd to me. There are solutions to make data types extensible in haskell, but for example in purescript you'd just use a record. But more importantly, why would you want existing functions to use the new type? Why not just pass the _logger from FileLogger in? The existing functions can't use the _flush ability anyway (in both OOP and FP cases)

ThinkBeat 3 hours ago

It is all just fashion and fads. Use whatever you like best (if the situation permits) otherwise use whatever you are told to use.

"It is not difficult to keep a functional aspect in OO."

"It is not that difficult to do OO in Pascal or similar" (quote from Wirth)

Several functional programming languages include an OO or OO-like feature.

Nothing will fit every situation better than anything else.

dboreham 3 days ago

Although short, this article is quite interesting because it presents code examples from "both sides" and the author seems to have a good understanding of both.

  • sbergot 6 hours ago

    Except it is biased in its conclusion:

    > However, unlike our OOP example, existing code that uses the Logger type and log function cannot work with this new type. There needs to be some refactoring, and how the user code will need to be refactored depends on how we want to expose this new type to the users.

    It is super simple to create a Logger from a FileLogger and pass that to old code. In OOP you also need to refactor code when you are changing base types, and you need to think about what to expose to client code.

    To me option 1 is the correct simple approach, but the author dismisses it for unclear reasons.

pull_my_finger 3 days ago

Old school OO languages, where they had to use classes and objects to patch missing language features, probably did suck. OOP as an abstraction API on a modern, type annotated language is really nice and intuitive. Anyone in doubt but open-minded should be encouraged to check out a language like Pony[1]. Although it _would_ be nice to have first-class functions instead of the lambda objects they have, it's otherwise really nice. No inheritance, real type interfaces and traits instead of "abstract classes". Combine a modern language like Pony with (mostly) sane modern OO advice like in Elegant Objects[2] and you're finally cooking with grease.

[1]: https://tutorial.ponylang.io/types/classes

[2]: https://www.elegantobjects.org/

mekoka 3 days ago

Every code base that I've read that made faithful use of OOP artifacts such as inheritance (in all its forms) has been made more difficult to understand because of it, rarely despite it.

OOP certainly has good features (e.g. encapsulation of state), but I think it tends to shine best when programmers are really aware of the trade-offs. Most aren't. The same person that agrees that mixins are a bad idea in React, will then turn around and happily organize their logic as class-based views in Django.

And due to sunk cost, it's nearly impossible to convince someone who's invested time in this paradigm that the acrobatics are often probably unnecessary.

In my opinion, newer languages expose programmers to better mental models than "the class hierarchy" to solve code organizational problems. Work with Go or Elixir for a while and see your Java and Python improve.

DeathArrow 4 hours ago

GoF patterns, Martin Fowler books and Uncle Bob books had the same impact to programming as the invention of null.

DeathArrow 3 hours ago

For me, the antidote to OOP everything isn't pure functional programming, but data-oriented programming, where your data and the code that performs operations on it are separated.

d_burfoot 3 days ago

One very important issue in OOP is packaging together variable names. You can see the issue by looking at this atrocious function signature from the Python Pandas library:

> pandas.read_csv(filepath_or_buffer, *, sep=<no_default>, delimiter=None, header='infer', names=<no_default>, index_col=None, usecols=None, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, skipfooter=0, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=<no_default>, skip_blank_lines=True, parse_dates=None, infer_datetime_format=<no_default>, keep_date_col=<no_default>, date_parser=<no_default>, date_format=None, dayfirst=False, cache_dates=True, iterator=False, chunksize=None, compression='infer', thousands=None, decimal='.', lineterminator=None, quotechar='"', quoting=0, doublequote=True, escapechar=None, comment=None, encoding=None, encoding_errors='strict', dialect=None, on_bad_lines='error', delim_whitespace=<no_default>, low_memory=True, memory_map=False, float_precision=None, storage_options=None, dtype_backend=<no_default>)

An OOP approach would define a Reader object that has many methods supporting various configuration options (setSkipRows(..), setNaFilter(...), etc), perhaps using a fluent style. Finally you call a read() method that returns the DataFrame.
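A hedged sketch of that fluent-reader idea (CsvReader and its methods are hypothetical stand-ins, not a pandas API): each setter returns `self` so calls chain, and `read()` consumes the accumulated configuration.

```python
import csv
import io

class CsvReader:
    """Hypothetical fluent reader: configuration accumulates, read() executes."""

    def __init__(self, source: str):
        self.source = source
        self.sep = ","
        self.skip_rows = 0

    def set_sep(self, sep: str):
        self.sep = sep
        return self            # returning self enables chaining

    def set_skip_rows(self, n: int):
        self.skip_rows = n
        return self

    def read(self):
        rows = list(csv.reader(io.StringIO(self.source), delimiter=self.sep))
        return rows[self.skip_rows:]

data = "# comment\na;b\n1;2\n"
rows = CsvReader(data).set_sep(";").set_skip_rows(1).read()
print(rows)   # [['a', 'b'], ['1', '2']]
```

The trade-off, as the replies below note, is that someone has to write and maintain all those setters.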

  • hombre_fatal 3 days ago

    This seems like a trivial example though.

        read_csv(options: Options)
    
    All you've shown is that Options has a lot of configurability, but you'd have the same problem if you posted the whole class with all of its fluent API methods; it would be even more characters.

    Also, you can use built-in control flow logic to modify options before passing them into the function rather than depend on a developer to implement a fluent API for every class.

    But this isn't an example of FP vs OOP. Just replace `read_csv(options)` with `constructor(options)`.
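    To make the counterpoint concrete (Options and read_csv here are illustrative stand-ins, not the pandas API): an ordinary options object plus plain control flow gives the same configurability without anyone writing a fluent API.

```python
from dataclasses import dataclass

@dataclass
class Options:
    sep: str = ","
    skip_rows: int = 0

def read_csv(text: str, options: Options):
    # Toy reader: split lines on the configured separator.
    lines = text.splitlines()[options.skip_rows:]
    return [line.split(options.sep) for line in lines]

opts = Options()
european = True
if european:              # ordinary if-statements tweak the options,
    opts.sep = ";"        # no builder methods required

print(read_csv("a;b\n1;2", opts))
```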

  • lucianbr 3 days ago

    > a Reader object that has many methods supporting various configuration options (setSkipRows(..), setNaFilter(...), etc), perhaps using a fluent style

    I don't understand what that would bring as opposed to the current situation you mention, where you just call read_csv and name each parameter you want with the value you want.

    If anything, I'd say the builder pattern is a crutch for languages that don't have named parameters.

  • coliveira 3 days ago

    While this signature really looks atrocious, it doesn't make the programmer experience bad at all, because all these parameters have default values. So it basically works as an interface with many options with convenient standard values and you can change only what you need.

Hashex129542 6 hours ago

I was actually a fan of OOP languages, particularly C++ & Java, but there have been no improvements so far in the mainstream programming languages. There are still a lot of improvements that need to be made. Still, C occupies first place.

PS: I really hate the Python-style paradigm & declarative programming. Rust is at the top of my ignore list.

nashashmi 3 days ago

> Subtyping, where if a type B implements the public interface of type A, values of type B can be passed as A.

I am confused by this statement or it is going against what I understand.

If you have created a Type B variable, and you also have a new interface called A, and B implements A, then why would the Type B variable's values be passed to A's values? 'A' is only an interface.

  • Shywim 3 days ago

    It says "values of type B", as in "instances of type B", can be passed as if they were "instances of type A" (which they actually are).

    But yes, members specific to type B will not be accessible while the value is being manipulated as an A.
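    A minimal sketch of the quoted statement (the A/B names come from the quote; the method names are assumed): a value of type B can be passed where an A is expected, but code written against A only sees A's interface.

```python
from abc import ABC, abstractmethod

class A(ABC):
    @abstractmethod
    def greet(self) -> str: ...

class B(A):
    def greet(self) -> str:
        return "hello from B"

    def b_only(self) -> str:   # B-specific; invisible through the A interface
        return "secret"

def use(a: A) -> str:          # written against A, knows nothing about B
    return a.greet()

print(use(B()))                # a B value passed "as" an A
```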

  • mrkeen 3 days ago

    Passed 'as' A, not passed 'to' A.

sirwhinesalot 4 hours ago

Meanwhile us procedural folk think you're all silly.

jauntywundrkind 3 days ago

Dark Side has such powerful allure, such temptation; they are strong emotions that leave such indelible marks.

And I feel like code culture is one place in need of some checks on its checking. There are so many wide-ranging biases out there, prejudices. Some have fought those battles & have real experience, speak from the heart. But I feel like over time the tribalisms that form, of passed-down old biases, are usually more successful & better magnets when they are anti- a thing than pro a thing.

JavaScript, PHP, Ruby, Rust. Systemd, PipeWire, Wayland. Kubernetes. OOP, CORBA, SOAP. These example topics are all magnets for very strong disdain, and in various circles are accepted as bad.

It's usually pretty easy to identify the darksiders. There's almost never any principle of charity; they rarely see in greys, rarely even substantiate or enumerate their complaints at all. I've been struggling to find words, good words, for the disdain which doesn't justify itself, which accepts its own premise, but the callous disregard & trampling over a topic is something I would like very much to be a faux pas. Say what you mean, clearly, with arguments. Manage your emotional reactions. Don't try to stir up antagonism. If you can, cultivate within yourself a sense of possibility & appreciation, even if only for what might be. Principles of Charity. https://en.wikipedia.org/wiki/Principle_of_charity

I'm forgetting what else to link, but around the aughts this anti-anti-social behavior had a bit of a boom. Two examples, https://marco.org/2008/05/21/jeff-atwood-who-knows-nothing-a... https://steveklabnik.com/writing/matz-is-nice-so-we-are-nice...

(And those on the pro side need to also have charity too.)

The idea of the Speaker For The Dead, someone who tries to paint clearly both upsides and downsides of a thing, is one I respect a lot & want to see. A thing I wish we saw more of.

(I feel like I have a decent ability to see up and down sides to a lot of the techs I listed. One I'd like better illumination on, a speaker for the dead on: CORBA.)

  • throwitaway1123 3 days ago

    I agree with the general sentiment of your comment and I think there are several factors at play here.

    * The human brain is not capable of evaluating and re-evaluating every possible option amongst the plethora of technical choices developers are faced with. This forces us to develop certain coarse grained mental heuristics (prejudices and biases) to navigate technology, and even if these broad generalizations are roughly true initially, we tend not to re-evaluate them over time. This leads to stale biases (e.g. some library/language was missing an API 10 years ago, and someone formed an immutable opinion on it).

    * These broad generalizations lack nuance. I watched a talk recently by Dan Abramov where he calls these heuristics (I'm paraphrasing) a form of information compression [1]. That compression is lossy — it doesn't preserve the original context in which the heuristic was formed.

    * There's also some insecurity at play here too. Developers want to believe that they've chosen The One True Solution, and harshly invalidating the alternatives is one way to reinforce that fantasy.

    * And of course, social media has exacerbated this problem by rewarding inflammatory hot takes. You won't get nearly as many views/upvotes/likes for a sober take that says "technology X is well suited for this narrow use case" as you will for a hot take that says "why technology X failed", or "why everyone hates technology X".

    You might enjoy this link: https://blog.aurynn.com/2015/12/16-contempt-culture

    [1] https://www.youtube.com/watch?v=17KCHwOwgms

    • throwawayie6 4 hours ago

      I think a general lack of criticism also plays a part. When someone has decided that X is the best approach, they tend to point to blog articles that favour their point of view as "proof", while dismissing other points of view as "uninformed".

      A typical example are all those "We rewrote our service from X to Y and got huge benefits" articles.

      - They are ignoring the fact that the new version has the benefit of years of experience with the actual problem domain and can be optimized

      - They also tend to use a different stack such as a more specialized database, async processing using message queues etc. that provides huge benefits.

      Someone will always cherry-pick some aspect of that article (language or choice of database) as proof that their point of view is correct, while ignoring the fact that they are not comparing apples with apples.

      To get a real comparison they should have written a third system using their new architecture and the old language, but that would of course be hard to justify outside of academic research. The developers probably wouldn't do it anyway, because if the old language proved just as effective it would be harder to justify why they chose a new language. Resumé Driven Development is unfortunately a real thing.

  • lyu07282 4 hours ago

    I was such a darksider once, definitely not any more; I don't think there was anything other than gaining broad experience that changed me. I can see how specialization and too narrowly focused experience would've resulted in me never changing. In conclusion, the wisdom is just to understand that opinions (including your own) are worthless; there is no real way to tell the objective difference between a good one and a bad one.

    You can know the good and the bad, specifically about something, and that's very useful, but even that list is always going to be incomplete and possibly cherry picked.

jerf 3 days ago

Having chewed on this for a while now, my personal synthesis is this: The problem with OO is actually a problem with "inheritance" as the default tool you reach for. Get rid of that and you have what is effectively a different paradigm, with its own cost/benefit tradeoffs.

Inheritance's problem is not that it is "intrinsically" bad, but that it is too big. It is the primary tool for "code reuse" in an inheritance-based language, and it is also the primary tool for "enforcing interfaces" in an inheritance-based language.

However, these two things have no business being bound together like that. Not only do I quite often want just one and not the other (a far bigger problem than this brief mention suggests), but the binding introduces its own brand new problem: the Liskov Substitution Principle, which in a nutshell says that any subclass must be fully substitutable anywhere the superclass appears and not only "function correctly" but continue to maintain all properties of the superclass. This turns out to be vastly more limiting than most OO programmers realize, and they break it quite casually. And this is unfortunately one of those pernicious errors that doesn't immediately crash the program and blow up, but corrodes not only the code base, but the architecture as you scale up. The architecture tends to develop such that it creates situations where LSP violations are forced. A simple example would be that you need to provide some instance of a deeply-inherited class in order to do some operation, but you need that functionality in a context that cannot provide all the promises necessary to have an LSP-compliant class. As a simple example of that, imagine the class requires having some logging functionality but you can't provide it for some reason, yet you have to jam it in anyhow.
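The classic rectangle/square example illustrates this kind of casual LSP break (the sketch is illustrative, not taken from the comment): the subclass satisfies the interface, nothing crashes at the override site, yet a property callers rely on is silently gone.

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def set_width(self, w):
        self.w = w

    def area(self):
        return self.w * self.h

class Square(Rectangle):
    def set_width(self, w):    # keeps the square invariant...
        self.w = self.h = w    # ...but breaks what callers of Rectangle assume

def widen(rect: Rectangle):
    rect.set_width(10)         # caller assumes height is untouched
    return rect.area()

print(widen(Rectangle(2, 5)))  # 50
print(widen(Square(2, 2)))     # 100, not 50: substitution changed behavior
```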

It is far better to uncouple these two things. Use interfaces/traits/whatever your language calls them that anything can conform to, and use functions for code reuse. Become comfortable with the idea that you may have to provide a "default method" implementation that other implementers may have to explicitly pick up once per data type rather than get "automatically" through subclass inheritance. In my experience this turns out to happen a lot less than you'd think anyhow, but still, in general, I really suggest being comfortable with the idea that you can provide a lot of functionality through functions and composed objects, and don't strain to save users of that code one line of invocation or whatever.
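A sketch of that uncoupling in Python terms (the names are illustrative): a Protocol declares the interface, and code reuse comes from a free function any implementer can call, instead of an inherited method.

```python
from typing import Protocol

class Shape(Protocol):
    def area(self) -> float: ...

def describe(shape: Shape) -> str:     # shared behavior as a free function
    return f"area={shape.area():.1f}"  # works for anything with .area()

class Circle:                          # conforms to Shape without inheriting
    def __init__(self, r):
        self.r = r

    def area(self) -> float:
        return 3.14159 * self.r * self.r

class Square:
    def __init__(self, s):
        self.s = s

    def area(self) -> float:
        return self.s * self.s

print(describe(Circle(1)))   # area=3.1
print(describe(Square(2)))   # area=4.0
```

Circle and Square share `describe` without sharing a superclass, so there is no superclass contract to violate.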

Plus, getting rid of inheritance gets rid of the LSP, which turns out to be a really good thing since almost nobody is thinking about it or honoring it anyhow. I don't mean that as a criticism against programmers, either; it's honestly a rather twitchy principle in real life and in my opinion ignoring it is generally the right answer anyhow, for most people most of the time. But that becomes problematic when you're working in a language that technically, secretly, without most people realizing it, actually requires it for scaling up.

  • chuckadams 3 days ago

    > Plus, getting rid of inheritance gets rid of the LSP

    No it doesn't. Interfaces need to follow substitutability rules too. Any type you substitute for another does, and that includes things like functions too.

    • jerf 3 days ago

      It does, because it is no longer a matter of substitutability. You are not intrinsically taking an X and modifying it to become a Y while also being an X. You are now just providing an X, and another X, and another X, and another X over there. You need to maintain the constraints of the one interface you are satisfying, but you are not also maintaining the constraints of the superclass, and all of its superclasses. You only have the interface to worry about, only the one dimension, not two (or more, depending on how you count in multiple inheritance languages).

      You also can do things like take a subset of the interface and have things that implement just that. You can't do that to a class hierarchy; once a method is put in the hierarchy, it must be implemented by all children. (Note that if you're jumping up to say "But I can just implement an interface in Java or whatever if I want to do that", you're agreeing with me, not disagreeing. That's not a legal move in inheritance, though. Languages have been very, very slowly but very surely moving away from pure inheritance for a long time now.)

      Having interfaces separated from reusability means you don't have to worry about all these things at once, just the interface.

      • pvg 3 days ago

        I think trying to overformalize this by bringing in LSP might make it harder to understand and easier to nitpick. Another way to look at the same thing is that subclassing/implementation inheritance is an extremely stringent but poorly enforced contract (yeah, also a kind of formalism) that's, in your typical OO language, far too easy to break without noticing.

        • jerf 3 days ago

          Ironically, the fact that LSP is complicated and to a first approximation nobody understands it is a major part of my point.

          (Or, if you prefer, LSP itself isn't that complicated conceptually, but if you try to manifest it in reality it turns out to be very complicated in practice. Code makes a lot more guarantees than we think it does. See also https://hyrumslaw.com/ , which is a very different view on the same phenomenon.)

          • tome 3 days ago

            I agree. I think the LSP makes it essentially impossible to subclass a concrete superclass, because the subclass must retain all the observable behaviour of the superclass. If it does so then what's the point? On the other hand, implementing an interface is fine. Different implementations of the interface can just uphold the interface invariants without needing any relationship to each other.

    • mrkeen 3 days ago

      It is much harder to violate the LSP by writing new code which simply adheres to an interface than it is to violate it by writing new code which runs instead of the old code (and all the old code's side effects).

dkarl 3 days ago

I think this is really comparing programming without effects to programming with effects. If you want the benefits of using an effects system, you'll have to work a little harder for them. If those benefits don't matter to you, then the extra work is for nothing. The article assumes the second case, and doesn't present it as a trade-off, but only as extra work for the same result, which is misleading.

So how could we compare OOP to FP in a way that evens out this difference? It depends on how you define FP.

You can (like this article seems to) restrict the definition of FP to purely functional programming, in which a program cannot directly execute side effects and must instead return a value representing the effects that the runtime system will then execute. Then an apples-to-apples comparison would compare the FP program with an OOP program that uses an effects system to manage its effects.

How do we do that? Well, if we define FP in a way that forces us to use effects, then the definition excludes a language like Scala, which is essentially a side-effecting OO imperative language that has features that enable FP-style programming. Scala isn't FP by the article's definition, because you can write impure code, but it does let you write programs that manage effects using an effects system. So you can do a reasonably fair comparison that way. I think you would discover that the pain of using effects is the same, if not greater, in an OO language where they have to be added as a framework.

Or you could define FP more broadly to include side-effecting languages like Clojure and F#, and you could compare a side-effecting OO program to a side-effecting FP program. This would be tricky because it would be very difficult to draw a style line between OOP and FP. Would you allow the FP program to use OO constructs and the OO program to use FP constructs? If so, you might end up comparing two identical Scala programs. Would you ban the FP program from using OO constructs and ban the OO program from using FP constructs? In that case, you would get an OO program in the style of the 1990s or 2000s, which wouldn't be fair to modern OOP.

I don't think either choice really leads to a meaningful comparison between OOP and FP. I think comparisons have to be more specific to be meaningful, and they have to be in the context of a particular application, so you can fairly compare programs that use effects systems with ones that don't. You can compare Java with Haskell for a particular application. You can compare C# with F# for a particular application. You can compare Scala with an effects system like Cats Effect to Scala without an effects system, again for a particular application. These comparisons are more realistic because you can take into account the pros and cons of using an effects system versus not for the given application.

whobre 3 days ago

It’s pretty bad, actually. Especially the Smalltalk/Objective-C flavor with its late binding and messages