warent 5 years ago

There's an unpopular and seemingly contradictory opinion that I have regarding this, because this isn't the first time I've seen this topic brought up. Mathematics and programming are not really all that related to each other and I think there's an overemphasis on the importance of math in programming for 99% of applications.

Sure, mathematical thinking can be useful, but it's only one type of logical thinking among many types which can be applied to programming.

I've been programming so much for so long now that before I even start writing code my mind launches into an esoteric process of reasoning that I'm not confident would be considered "thinking in math" since I'm not formally skilled in mathematics. It's all just flashes of algorithms, data structures, potential modifications, moving pieces, how they all affect each other and what happens to the entire entangled web when you alter something. Fortunately, my colleagues are often pleased and sometimes even impressed with my code, and yet I'm not so sure I would consider my process "thinking in math."

So, this isn't necessarily a direct refutation of the article. In fact, maybe what I'm talking about is the same thing as what this article is talking about. But, anyway, my point is that there are more ways to think about problems and solutions than pushing the agenda of applying formal mathematics.

As an aside, I noticed this part of the article:

"Notice that steps 1 and 2 are the ones that take most of our time, ability, and effort. At the same time, these steps don’t lend themselves to programming languages. That doesn’t stop programmers from attempting to solve them in their editor"

Is this really a common thing? How can you try to implement something without first having thought of the solution?

  • dkarl 5 years ago

    Math has more aspects than just logical deduction via mechanical rules. Math also has an aesthetic aspect that guides people to find elegant, powerful solutions within the space defined by the mechanical rules. There may be many paths of deduction from point A to point B, which are all mechanically equally valid. But from the human point of view, they have different value. Some will be simple and easy to understand; others will rely on ideas from one or another realm of math, making them friendly to people who understand those ideas. Some will suggest, to the human brain, analogies to other problems. The mathematical validity of the argument is judged according to whether it correctly follows the mechanical rules, but all other aspects are judged by aesthetics and intuition and ultimately by how the solution is received and utilized by other mathematicians.

    If the only aspect of mathematics that you bring into programming is logical deduction by mechanical rules, then I doubt it will help, except for rare cases where you prove or disprove the correctness of code. If, on the other hand, you bring over the aesthetic concern, the drive to make painfully difficult ideas more beautiful (ergonomic) for human brains, then it will help you make your code simpler, clearer, and easier for others to work with.

    Is this really a common thing? How can you try to implement something without first having thought of the solution?

    It's common, and as you can imagine, it doesn't lead to good outcomes. When people start by coding first, it's so much work they tend to stop at their first solution, no matter how ugly it is. When people start by solving the abstract problem first (at a whiteboard, say) they look at their first solution and think, "I bet I can make this simpler so it's easier to code." The difficulty of coding motivates a bad solution if you start with code and a good solution if you write the code last.

    • misterdoubt 5 years ago

      Ah, well, you just described the relative value of math in much the way I'd describe the relative value of... well, just about any intellectual pursuit. Same in philosophy. Or in law. Or in physics.

      A lot of people with particular interest in one area -- say, mathematics -- don't realize that much of what is important is much more generally applicable.

      It's not that these things are distinctly important for math. It's that they are important for thinking.

      • dkarl 5 years ago

        That's true to a certain extent, but math and programming share the property of being built up from logical building blocks that are combined in strict logical ways. Law and philosophy are built on language and culture; physics is closer but is empirical. Math and programs are built from logic, and this gives them more of a common aesthetic sense.

        For example, in law or philosophy, repeating the same argument multiple times, adapted for different circumstances, can give it weight. In math and programming, the weight of repetition is dead weight that people strive to eliminate. In law and philosophy, arguments are built out of words and shared assumptions that change over time; in math, new definitions can be added, and terms can be confusingly overloaded, but old definitions remain accessible in a way that old cultural assumptions are not accessible to someone writing a legal argument.

        In physics, the real world is a given, and we approximate it as best we can. In math and software, reality is chosen from the systems we are able to construct. Think of all the things in our society that would be different if they were not constrained by our ability to construct software. Traffic, for one — there would be no human drivers and almost zero traffic deaths.

        Where programming differs from math is that math is limited only by human constraints. Running programs on real hardware imposes additional constraints that interact with the human ones.

        • lonelappde 5 years ago

          Modern computing is empirical. That's why MIT switched their intro to CS class from Scheme theory of computation to Python robot controllers.

        • oeoeo00 5 years ago

          It’s possible I am misunderstanding you, but I think I agree with this.

          There’s kind of two ideas going on here (in this thread in general), I think.

          One seems to be of a mindset I’d describe as: “thinking in math” means glomming onto knowing linear algebra.

          The other seems to be thinking in interconnections, minimalist definitions, and those abstract concepts that exist in math (and all kinds of things) for connecting discrete ideas into composite ideas.

          One thing that bugs me is code with overly specific semantics, where it reads like that’s the only problem the code could solve.

          Whereas if it’s broken into concepts and abstractions in the PLANNING stage, the code ends up less verbose, less tied to one specific human problem, and more useful for a variety of problems.

          So instead of code to balance a checkbook, I’d write code to add/subtract numbers and input numbers from my checking account.

          I see a whole lot of code with too much specific semantic meaning. And in practice it ends with us thinking that code in one system is highly specific to that system, which minimizes effort to reuse it.

          At least that’s been my experience at work. Ymmv

      • furyofantares 5 years ago

        I see math as the language of thinking. Math doesn’t really have a domain beyond: how do we think, how do we know, and how do we communicate our knowledge. The progression of mathematics has been the systematic removal of domain. Numbers are widely applicable because they are very abstract and devoid of domain, and they are one of the least abstract things in mathematics.

        I agree with your gist, there are lots of things where studying that thing is virtuous beyond its direct application. But also, I’d contend that thought is the subject of mathematics and not just a virtuous side-effect.

        • User23 5 years ago

          Math, properly done, is rigorous formal thinking. And it lets us think things we normally couldn't. Nobody can visualize a 100-dimensional object, but a mathematician can easily work with one.

          And as programmers we work with mathematical objects called state spaces, that have vastly more than 100 dimensions.

          That said, one can easily be a competent programmer without much formal mathematical knowledge, much like one can easily be a competent ball player without knowing differential calculus. However, just as modern ball players improve their games with computer-aided mathematical analysis of their swings and so on, a programmer can improve the quality of his output with mathematical analysis, in particular via the predicate calculus and what is, in my opinion, its most useful application: loop analysis.
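
          For a concrete taste of that kind of loop analysis, here is a toy sketch (my own illustration, not anything from a textbook): a binary search annotated with its loop invariant as assertions.

              # Illustrative sketch: the invariant makes the correctness
              # argument mechanical -- each branch visibly preserves it.
              def binary_search(xs: list[int], target: int) -> int:
                  lo, hi = 0, len(xs)
                  while lo < hi:
                      # Invariant: everything left of lo is < target and
                      # everything from hi onward is > target, so if target
                      # is present, its index lies in [lo, hi).
                      assert all(x < target for x in xs[:lo])
                      assert all(x > target for x in xs[hi:])
                      mid = (lo + hi) // 2
                      if xs[mid] < target:
                          lo = mid + 1
                      elif xs[mid] > target:
                          hi = mid
                      else:
                          return mid
                  return -1

          Termination follows because hi - lo shrinks on every iteration; partial correctness follows from the invariant. That is the predicate-calculus view of a loop in miniature.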

  • segmondy 5 years ago

    Mathematics and programming are more strongly related than most people think.

    Consider Norvig's spelling corrector: http://www.norvig.com/spell-correct.html How did he solve it? Using probability theory and sets.

    It's not just games, cryptography, finance, signal processing, compression, optimization, and AI that require mathematics; tons of programming does, most people just don't realize it and brute-force their way to a solution.

    Lots of real-world problems can be solved with algebra, calculus, Boolean algebra, linear algebra, geometry, sets, graph theory, combinatorics, probability, and stats. What typically happens is most programmers are given a problem, and what do they do? They start thinking in code. How did we solve problems before computers?

    Apply that kind of thinking, then solve the problem with mathematics. Your code will often be much smaller and denser. Sure, dealing with input and output doesn't require you to write mathematical code, but the core of your problem can often be solved with some mathematics.
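
    To make that concrete, here is a compressed sketch of the idea in Norvig's essay (toy word counts and only single-edit candidates, purely for illustration): correction is a max over a set of candidates, ranked by word probability.

        WORDS = {"the": 80000, "they": 10000, "then": 9000}  # toy corpus counts

        def edits1(word):
            # The set of strings one edit away (deletes/replaces/inserts).
            letters = "abcdefghijklmnopqrstuvwxyz"
            splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
            deletes = {L + R[1:] for L, R in splits if R}
            replaces = {L + c + R[1:] for L, R in splits if R for c in letters}
            inserts = {L + c + R for L, R in splits for c in letters}
            return deletes | replaces | inserts

        def correction(word):
            # Prefer known words, then known words one edit away.
            candidates = ({word} & WORDS.keys()) or (edits1(word) & WORDS.keys()) or {word}
            # Pick the candidate with the highest probability (count).
            return max(candidates, key=lambda w: WORDS.get(w, 0))

        print(correction("thw"))  # -> "the"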

  • LandR 5 years ago

    > Is this really a common thing? How can you try to implement something without first having thought of the solution?

    Unfortunately, it's incredibly common.

    The result is almost always a mess. Functions that are never called, parameters that are never used, because they discovered their mistake as they were coding but never went back and cleaned up the stuff they don't use anymore. Broken logic, poor performance. Functions with a mess of loops and if statements, nested like 10 indents deep.

    You can tell by looking at code if they were making it up as they were going versus implementing a solution they had thought through before starting coding. It's painfully obvious.

    When you try to solve your problem by coding, I think you are forced to take a myopic view of only subsets of your solution, and it's near impossible to step back at this point and come up with a nicer, more abstract and probably more concise solution. The solution comes out spiky.

    • rowanG077 5 years ago

      Really? I think the best way to solve a problem is to code it. I can never see all the corner cases and logical inconsistencies before I start typing. I have tried to formally model software before I start writing it and in the end it was largely a waste of time because real understanding of the problem comes with coding the solution.

      Of course when doing it like this you write a lot of code which later is unused or bad. But I think that will always happen and it's just a matter of having the discipline to continuously clean up after yourself.

      • braythwayt 5 years ago

        Writing a lot of code you later discard because it isn’t part of the final solution is like throwing clay on the table and then carving away the bits that don’t fit.

        Nobody criticizes the sculptor for the clay that ends up on the floor, and clay is heavy. The bits we carve away have no mass and don't need to be swept up; all we have to do is cut them away, revealing the final program.

        • sidlls 5 years ago

          On the other hand the sculptor has considered the type of clay, the quantity, the tools she'd need to use to carve away those bits, and understands enough about what she wants to create to know what bits to carve away first.

          Programming may not be (all) math, but it's not art, either.

          • hutzlibu 5 years ago

            "Programming may not be (all) math, but it's not art, either."

            Huh? Maybe the kind of common 8-to-5 office programming around business logic is not, but designing any bigger project is definitely art.

        • thaumasiotes 5 years ago

          > Writing a lot of code you later discard because it isn’t part of the final solution is like throwing clay on the table and then carving away the bits that don’t fit.

          How do you make a statue of an elephant?

        • Gravityloss 5 years ago

          Planning for realistic levels of waste. Throwing away prototypes. Gradually making changes. Keeping backwards compatibility. Testing. Accommodating users with large collective investment in learning.

        • mannykannot 5 years ago

          This is not a useful analogy; it is more of an excuse for not trying to think things through. Would this be a reasonable analogy for building a bridge?

          If you don't have a reasonably detailed idea of what you want and how to achieve it, you are unlikely to get it.

          https://dilbert.com/strip/1991-9-6

          • zbentley 5 years ago

            > Would this be a reasonable analogy for building a bridge?

            That is also a useless analogy. Do bridge builders get to test and re-test their bridges in the real, non-simulated world? Can they instantly make a copy of their bridge with a few critical differences and see how the two behave? Can they re-build their bridge in minutes?

            Metaphors aside, I think history is ample evidence that "coding your way around a problem" rather than conceptualizing a solution first is a perfectly valid way to approach professional programming. It's not the only way, and it has drawbacks which others have pointed out here. So does the conceptualize-first approach: you might solve the wrong problem, make something inelastic in the face of changing requirements, or fall into the psychological trap of being attached to your mental model even when it turns out that you really didn't think of everything and have to make changes on the fly.

            I'm really tired of people being dogmatic about either approach ("move fast and break things/pivot; anyone else isn't really interested in getting stuff done!", "you're just a messy code monkey unless you can hold the solution in your head before you start!"). It's almost always veiled arrogance rather than honest improvement-seeking, in my experience.

            • mannykannot 5 years ago

              Well, yes, it is a useless analogy... Oh, you meant to say that comparing bridge-building to software development would be a useless analogy? It's not an analogy I made - the point is that just because you can make a cute analogy, it doesn't mean it offers any insight.

              > I'm really tired of people being dogmatic about either approach

              Exactly - and the implication that I am being dogmatic is a straw man. I am simply opposed to arguments that depend on poor analogies.

              Furthermore, all of the bad things that you say can happen if you try to think ahead are at least as likely to happen if you don't, especially if you have gone in the wrong direction for some time (I know the latter is a manifestation of the sunk-cost fallacy, but it happens a lot on real projects).

          • rowanG077 5 years ago

            Building software is not even remotely the same thing as building a bridge. It would be more akin to the architect creating the drawings twice for the bridge. Once as an exploratory version and the second one the production version.

            Oh wait, that is actually how architects work. In fact, at my work we have multiple CAD designers (not architects, though), and it's not uncommon for them to completely throw away a design and start over. I think code should be mostly the same.

            • 0x445442 5 years ago

              I'll bet the engineering process of the software written for the Apollo 11 lunar lander was much closer to the bridge-building process than you might think. I'll also bet there's a whole host of software projects which use similar processes today. It's just that most of us writing DB skins for "The Enterprise" are rarely, if ever, exposed to real engineering, for the simple fact that quality software is expensive and, typically, our organically grown solutions are good enough.

              • nwallin 5 years ago

                > I'll bet the engineering process of the software written for the Apollo 11 lunar lander was much closer to the bridge building process than you might think.

                Of course, but the Apollo 11 lunar lander was created without the aid of ubiquitous desktop computers. I imagine the SpaceX guidance/control software was written in a way that less resembles bridge building or the Apollo 11 lunar lander and more resembles the organic processes we see elsewhere in the software industry.

                If Neo were to build a bridge in the Matrix, chances are his processes would bear little resemblance to those of the Army Corps of Engineers.

                • 0x445442 5 years ago

                  > I imagine the SpaceX guidance/control software was written in a way that less resembles bridge-building/Apollo 11 lunar landers and more like the organic processes we see elsewhere in the software industry.

                  For the guidance/control systems, I bet you're wrong.

            • ryandrake 5 years ago

              Maybe if we treated the practice of software development more like bridge building, we would have better reliability, fewer outages, fewer zero-day exploits, fewer patches and bugfixes--software that actually works the first time, and every time for years.

              • Jtsummers 5 years ago

                I work in aviation/defense. They try to treat it just like building bridges, and it's a disaster. Please don't.

                Software is a design practice/process. Not a building process. Any analogy should be to the design phase of other engineering disciplines.

            • mannykannot 5 years ago

              Your designers are working with abstract models. They are thinking about problems at a conceptual level; they are not putting up structures and seeing if they work.

              • rowanG077 5 years ago

                Code is also an abstract model.

                The CAD designers absolutely test if things work. Why do you think almost every engineering bureau has 3D printers?

                • mannykannot 5 years ago

                  > Code is also an abstract model.

                  Sure, but it is not the only one. You are allowed to think at other levels, and it can be quite useful, especially on larger systems.

          • drdeca 5 years ago

            In the analogy with the clay, I believe they are both adding and removing clay.

      • mannykannot 5 years ago

        Really? Yes, it is very common.

        The problem with this approach is that it does not scale to large systems. If you don't spend much time thinking in the abstract about how it will work and what might go wrong, then, by the time you have written enough code to find that out, you may have gone a long way down the wrong path, and not all architectural-level mistakes and oversights can be patched over.

        No-one does this perfectly -- even people using formal methods will overlook things -- but, on a big project, if you don't put much effort into thinking ahead about how it should work, and try to identify the problems before you have coded them, you are likely to end up where, in fact, many projects do find themselves: with something that is nominally close to completion but very far from working. Those that are not canceled end up looking like legacy code even when brand new.

        • rowanG077 5 years ago

          If you want to have low quality solutions that kinda work on the first try then sure go for it. Your approach will inevitably lead to insurmountable technical debt that can't be paid off.

          Big projects should be cut into smaller pieces where each piece can be relatively easily rewritten.

          • mannykannot 5 years ago

            > Big projects should be cut into smaller pieces where each piece can be relatively easily rewritten.

            To come up with the right smaller pieces, you have to think about how they will work together to achieve the big picture. That means interfaces and their contracts, and if you get them wrong, you end up with pieces that don't fit together, and do not, collectively, get the job done.

            Big problems cannot be effectively solved in a bottom-up manner, and perhaps the most pervasive fallacy in software development today is the notion that the principle of modularity means you only have to think about code in small pieces.

            • rowanG077 5 years ago

              That's my point: you CANNOT possibly come up with the right smaller pieces until you have a solution that you have verified works.

              What do you think other engineering disciplines do? They create a proof of concept, verify it works, and then create the real thing. That is why "real" engineering companies have hundreds of tools to test stuff.

              I really don't understand why people want software to be different. If you're writing some shitty throwaway web app, then sure, go ahead and don't prototype anything; just hire a "software architect" that designs something and use that.

              But if you want something that actually works, then that is completely useless. Prototype, verify, start over if necessary. That is the way to write quality software.

              • mannykannot 5 years ago

                > That's my point: you CANNOT possibly come up with the right smaller pieces until you have a solution that you have verified works.

                That's beside the point. The point is that coding is not the only way to verification, especially at the architectural level.

                > I really don't understand why people want software to be different.

                It seems to be you who wants software to be different. Making prototypes is expensive and time-consuming, so engineers try to look ahead to anticipate problems. Prototyping in software is cheaper, but not so cheap (especially at the architectural level) that thinking ahead isn't beneficial.

              • slx26 5 years ago

                In my opinion, the big difference and problem is that space and resources are virtually unlimited, and a "product" can keep changing indefinitely too. And if something fails, in most cases it will fail very differently than in other engineering disciplines. I agree it would be great to write more prototypes and all that, but hey, capitalism: good-enough/shitty makes money, so that's where we are. (Also, CS is rather new; we are still figuring out a lot of things, still getting deep in the mess.)

      • gaze 5 years ago

        I have found that the gap in understanding between thinking you understand something and implementing something as a program is often much smaller than the gap between having programmed something and having proven it.

        • rowanG077 5 years ago

          What do you mean by having proven it? As in you tested it and it works, or do you mean you have formally proven, using math, that your solution always gives the correct result?

          If it's the former, then this is part of building it. An implementation without proper testing is incomplete. If it's the latter, I actually agree. Only the most sensitive of applications require that level of sophistication, though.

          • gaze 5 years ago

            A mathematical proof or analytical solution. I understand that analytical solutions only apply for very narrow ranges of problems but... consider writing an analytical solution to a differential equation versus applying a numerical solver. A numerical solution rarely leads to as deep an understanding as an analytical solution or a number of approximations in various limits. I feel like we're in the limit of something analogous to writing numerical solvers and claiming understanding from observing the output.
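
            To make the contrast concrete with a toy example of my own (dy/dt = -k*y): the numerical route iterates and observes one trajectory, while the analytic route hands you the behavior for every k and t at once.

                import math

                def euler(k, t_end, steps):
                    # Numerical route: step forward and watch what happens.
                    y, dt = 1.0, t_end / steps
                    for _ in range(steps):
                        y += dt * (-k * y)
                    return y

                k, t = 0.5, 2.0
                print(euler(k, t, 100_000))  # ~0.3679, for this k and t only
                print(math.exp(-k * t))      # analytic y = e^(-kt): exact, and explains why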

      • ben509 5 years ago

        I often take the compromise technique: write a prototype and build off that.

        The prototype is generally a mess, but I throw that out anyway.

    • mruts 5 years ago

      I’ve always thought that you don’t really understand a problem until you have a solution to it. What I like to do is first write a naive solution to a problem so I can understand it better. Then I throw away the code and start again. I invariably learn a lot more about a problem and the most elegant solution than I would have if I had just tried to write a perfect solution right off the bat.

      Code, after all, is cheap (and often totally worthless). More developers should adopt this view. I’ve seen engineers more times than I would care to admit get attached to some piece of code, as if it was some piece of themselves. Code is more akin to dogshit than the limb of a dog.

      • TeMPOraL 5 years ago

        The problems we're solving when building software happen simultaneously at different levels. Some are best solved by prototyping. Others are best solved on a whiteboard. The trick is correctly identifying the problems, and then knowing which layer they belong to. If you try to code up a prototype for a design problem, you'll most likely waste a lot of time and not reach any useful conclusions in the end (or end up shipping the broken thing). If you try to whiteboard a coding problem, you can get stuck forever drawing things, with no result to show for it.

        In my experience, the problem levels go differently than one could naively expect. Data structures, abstractions, module interfaces - all problems dealing directly with code - are best solved first on a whiteboard, where evaluating and iterating through them is cheap and effective. User interfaces, user experience, usefulness of a part of a program - things dealing with business and user needs - are best solved through prototypes, because you can't reasonably think through them on paper, you have to have a working thing to play with.

  • smallnamespace 5 years ago

    > It's all just flashes of algorithms, data structures, potential modifications, moving pieces, how they all affect each other and what happens to the entire entangled web when you alter something.

    That's what doing math is like too - just substitute axioms, mathematical objects (whether numbers, sets, rings, or whatever is under discussion), potential lemmas and approaches, the bag of mathematical tools (theorems) you can use, and how much closer to a solution you get when you shift terms in your formulae around.

    Then you write it all down (if you haven't already), simplify it, and clean it up before showing it to others, just like you would code.

    Also, you can map programs to proofs and vice versa: https://en.wikipedia.org/wiki/Curry–Howard_correspondence

    • lioeters 5 years ago

      That sentence jumped out at me too. That intuitive thinking process - unverbalized or half-expressed leaps of logic making interconnections - I imagine mathematicians experience a similar rush of thoughts while solving or exploring.

      All code boils down to operations that can be described mathematically. Software is applied mathematics (with a sprinkle of art, perhaps). I think the reason why some people feel that programming is not closely related to mathematics, is that programmers are thinking and working on top of so many layers of abstraction, it's almost like working with the "stuff of the mind" itself, with models, processes, flows, transformations, events, composing behaviors.

      That said, I relate to what the grandparent commenter is saying. Software allows me to think with visible, malleable and "living" mathematics while building up a system, to ask questions and have a dialogue with it.

      >> there are more ways to think about problems and solutions than..applying formal mathematics

      I agree with this. Often a "looser" approach is needed to explore a problem space, and formal mathematics may not be the best medium for creative problem-solving. On the other hand, the qualities that are valued in software - types, functional programming, test-driven development, etc. - are all about proofs. Not necessarily mathematically rigorous, but the closer you get, the more reliable the logic.

      • asark 5 years ago

        > That said, I relate to what the grandparent commenter is saying. Software allows me to think with visible, malleable and "living" mathematics while building up a system, to ask questions and have a dialogue with it.

        Programming's friendlier to algorithmic thinking (versus equation/identity and proof). The former's really easy for me, and while on paper (aptitude test scores) one might think the latter would be too, it's very, very not. I've only relatively late in life realized I need to reframe any non-trivial math I encounter in terms of algorithms to have any hope of understanding it. It's probably why I bounce off—understand well enough, just strongly dislike—programming languages that try to make code look more like a math paper (more focus on equality/identity and proof-like structures).

        And yeah algorithms are math, but lots of math's not really algorithms and when someone writes "think in math" that mostly means "think in proofs" to me. If they mean "think in algorithms" then that's close enough to programming—as I see it—already that it's a pretty fine distinction.

    • Sharlin 5 years ago

      Yeah. It is easy for people with no university-level math background to think math is a deterministic, conscious effort to execute what are basically algorithms, like long division with pen and paper.

      Whereas actually ”mathematical thinking”, like coming up with a proof, is an incredibly intuition-guided process, a parallel heuristic search in the solution space, a fundamentally creative endeavour. And as your intuition comes up with promising paths through the search space, you write them down, formalize them, probably discover some corner cases you have to handle, and either continue down that path or realize that it is a dead end and you have to backtrack.

      At least to me, this process is incredibly similar to programming effort. You come up with subsolutions, formalize them, fix issues revealed by the formalization, carry on with the next subsolution or realize that approach can’t work after all, and come up with something else.

  • kd5bjo 5 years ago

    > Is this really a common thing? How can you try to implement something without first having thought of the solution?

    There appear to be two distinct kinds of programmers, about equally effective: ones that think through the problem first and then write down the solution, and ones that start with something close and then iteratively refine it into the desired result.

    When you’re doing things like writing documentation, this is important to remember as the two kinds of programmer will approach the documentation differently — important information needs to be put where both approaches will find it: http://sigdoc.acm.org/wp-content/uploads/2019/01/CDQ18002_Me...

    • lioeters 5 years ago

      Fascinating, thank you for the link to the paper.

      They group these styles as an opportunistic versus a systematic approach to programming. Paraphrasing below:

      Opportunistic programmers develop solutions in an exploratory fashion, work in a more intuitive manner and seem to deliberately risk errors. They often try solutions without double-checking in the documentation whether the solutions were correct. They work in a highly task-driven manner; often do not take time to get a general overview of the API before starting; they start with example code from the documentation which they then modify and extend.

      Systematic developers write code defensively and try to get a deeper understanding of a technology before using it. These developers took time to explore the API and to prepare the development environment before starting. Interestingly, they seemed to use a similar process to solve each task. Before starting a task, they would form hypotheses about the possible approach and (if necessary) clarify terms they did not fully understand.

      • optimuspaul 5 years ago

        I've always thought of those two groups using different labels: the Code Artists and the Engineers. The Artists have a strong need to be creating in order to understand something, whereas the Engineers have a strong need to understand before they can create. And those that believe programming is not an art fall into the latter group.

  • kian 5 years ago

    Mathematical thinking can be extremely useful even for programmers who never touch 3D games, or physics engines, or anything else requiring calculus or matrices. Functions are mathematical objects, and can be combined, using operators that obey mathematical laws, into other functions - thinking of them in this way leads to the functional and concatenative programming paradigms. These combinations can also be rearranged in ways that behave mathematically (i.e. according to simple rules that do not change, and have been explored for millennia), making it much easier to both refactor and optimize code. They can even be used as foundational abstractions for organizing your code, leading to horizontal rather than vertical abstraction - i.e., using the tower of abstract algebra or category theory types, we can organize our code in a way that anyone who understands that type will immediately grasp, whether or not they understand the internals. Math is everywhere, and you're using it whether you know it or not. Might as well use it well.
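
    As a tiny, hedged illustration (the names are mine): function composition is an operator that obeys the associative law, and that law is exactly what licenses refactoring by regrouping.

        def compose(f, g):
            # (f . g)(x) = f(g(x)) -- an operator on functions.
            return lambda x: f(g(x))

        inc = lambda x: x + 1
        dbl = lambda x: x * 2

        # Associativity: either grouping denotes the same function, so we
        # may rearrange freely without changing behavior.
        left = compose(inc, compose(dbl, inc))
        right = compose(compose(inc, dbl), inc)
        assert all(left(x) == right(x) for x in range(100))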

  • geomark 5 years ago

    I've been thinking about this quite a bit, but coming from a different angle. I've been helping at my kid's school with coding clubs for primary students. When teachers are recruiting for the coding club they always mention the students who are good at math as good candidates. But what I have noticed is that the students who do the best in coding are more often musically inclined or linguistically talented. It seems to me to make some sense. The ones who can parse and understand languages at an early age might also have the aptitude for programming. It's a small sample size, a few dozen students. Still seems kind of interesting.

    • Junk_Collector 5 years ago

      It's also worth considering that you don't really learn math in primary school so much as you learn numbers and computation, so when a teacher says a child is good at math they usually mean good at numbers. Very few teachers understand math well enough to identify who would be good at it, and a lot of unfortunate students find this out when they get to college.

      • geomark 5 years ago

      Sure, I can see that. But isn't it similar with language and other subjects? You just learn the basics: nothing deep, no turns of phrase, little expressiveness.

        Perhaps there is little correlation between those who excel at coding at a young age and those who go on to be good programmers when they get older. I just find it interesting that at this young age I see a correlation between coding skills and language skills more than math (really just arithmetic) skills.

        Another observation: we did the Hour of Code activity last December with Year 2 to Year 6 students (equivalent to Grade 1 to Grade 5 in the US), and in each group there were one or two students who really stood out. Every one of them was a girl. Small sample size of only about 100 students, so maybe I shouldn't be wondering what is going on here.

        • justinmeiners 5 years ago

          Math is somewhat unique in that the high-school and early-college version is not at all representative of the real thing. It's not "just a taste"; it's qualitatively different.

          As the other comment above mentioned, I think this has to do with the education of the teachers. Very few teachers know what math is, either.

    • justinmeiners 5 years ago

      You may be surprised that this also applies to math.

      High-level math values logical and linguistic skills. This is often a hard stopping point for many students who were good at high school computation like calculus.

    • newen 5 years ago

      I have a math undergrad and didn't program much until my senior year. Then I went to grad school in computer science. Learning data structures and algorithms was very easy for me compared to other students who came from non-math, non-CS backgrounds, because writing math proofs and designing algorithms are very, very similar and use similar thought processes and methods. Udi Manber's Introduction to Algorithms shows how you use mathematical thought processes to design algorithms.

    • analog31 5 years ago

      A friend of mine taught high school math until recently. He told me a similar thing about math. The kids who are "good at math" in elementary school don't necessarily become the top math students later on. He felt that his best math students were the ones who were just curious about a lot of things as kids.

  • crimsonalucard 5 years ago

    >Mathematics and programming are not really all that related to each other and I think there's an overemphasis on the importance of math in programming for 99% of applications.

    This is the standard thinking of someone who's not deep into math but deep into programming.

    The two are deeply interrelated and in actuality are one and the same. Knowing math provides a deeper understanding of programming. If you want to get better at programming in general, learning every new framework or specific technology is not the path to getting better. Learning math is the path.

    I cannot show you the path in a way that makes you understand it; you'll have to walk it yourself to know.

    Suffice it to say that there is an area of math that improves programming in a way you can understand: type checking. Type checking proves that your program is type-correct, and it comes from math. You know it, and probably use it all the time.

    To extend this, there's this concept of dependent types which also come from math. Dependent types can prove your entire program correct.

    That's right: with math you can write a single proof which is equivalent to billions of unit tests touching the entire domain of test cases, proving your program 100% correct. It's a powerful feature that comes from math. It's in the upper echelons of programming theory / mathematical theory and thus not trivial to learn. If you're interested, you can check out the languages Coq, Agda, or Idris.
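
    For a minimal taste of what that looks like (a sketch in Lean, a close cousin of those three): the theorem below quantifies over all pairs of naturals, so the one-line proof covers every input a unit test ever could.

        -- One proof, all inputs: no finite test suite can say this much.
        theorem my_add_comm (m n : Nat) : m + n = n + m :=
          Nat.add_comm m n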

  • diffeomorphism 5 years ago

    > but it's only one type of logical thinking among many types which can be applied to programming.

    Could you elaborate on this? Mathematics is abstract/meta enough that I would consider any type of logical thinking as part of math.

    > It's all just flashes of algorithms, data structures, potential modifications, moving pieces, how they all affect each other and what happens to the entire entangled web when you alter something.

    That for example sounds very much like "think in math" to me.

  • agentultra 5 years ago

    > Mathematics and programming are not really all that related to each other

    I may be wrong but I believe the Curry-Howard correspondence disproves your claim. One can translate between the two and find that they are equivalent.

    The difficulty is that some programming languages are hard to model mathematically due to the way they were designed and implemented. Some, like Idris or Agda, make it easy to see the correspondences. Others like C or Javascript are harder.

    The key to solving hard problems is being able to think concretely in abstractions. The best language we have for abstraction is pure mathematics.

    • kd0amg 5 years ago

      Curry-Howard maps typical practical tasks on the programming side to completely useless make-work on the math side. Turning a customer's name and address into text to make a mailing label has to happen at some point, but you really don't need to go through all those steps to show that if customers exist, then strings exist.
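
      In code (a throwaway sketch with invented types): via Curry-Howard, the total function below is a "proof" that customers imply strings - which shows how little the proposition says next to the practical value of the label.

          from dataclasses import dataclass

          @dataclass
          class Customer:
              name: str
              address: str

          def mailing_label(c: Customer) -> str:
              # The useful artifact is the label; the "theorem" this total
              # function proves (Customer implies str) is trivially true.
              return f"{c.name}\n{c.address}"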

      • agentultra 5 years ago

        I'm talking about hard problems. A proof for a program like the one you suggest is trivial and probably not worth knowing. A program is a proof that there exists a type which satisfies a proposition. Not all propositions are interesting, and neither are all programs.

  • hinkley 5 years ago

    My second job, the project had been run by a couple of vocational developers. They did okay. No version control, lots of corner cases not covered, lots of code that made inferences from incomplete data. But the team needed to grow significantly and none of this stuff was going to survive other people touching the code.

    One of the bad patterns in the code was very complex nested boolean logic in places. Often with the same condition in several branches.

    So I started using Karnaugh maps (K-maps) to untangle these. A few of them became much easier to read, but some of them... some of them it was unclear that all the cases were addressed. So I started putting big block comments above those, but we all know what happens to block comments over time.

    Much later, big conditionals like that I would just move to a separate function, and then split 'em up to look like normal imperative code, instead of like math.
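
    Something like this, to sketch it with an invented example (not the actual code): the dense boolean becomes a named function of early returns that reads like normal imperative code.

        # Before: one dense boolean, with conditions repeated across branches.
        # if (user.active and (order.total > 100 or user.vip)) and not order.flagged: ...

        # After: the same logic, split up to say what it means.
        def qualifies_for_discount(user, order) -> bool:
            if order.flagged:
                return False
            if not user.active:
                return False
            return order.total > 100 or user.vip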

    The first rule of teamwork is stop trying to be so goddamned clever all the time. It's like being a ball hog in basketball, football, soccer. Use that big brain to be wise instead. Find ways to make the code say what it means and mean what it says. Watch for human errors and think up ways to avoid them.

    Math has very, very little to do with any of that. Psychology is probably a better place to spend your time.

    • higherkinded 5 years ago

      How come math's to blame? That's clearly just some nuts level of incompetence that spawned all of the spaghetti described - a lack of any conscious effort. After all, the mathematical thinking mentioned in the article inevitably includes covering the corner cases of the problem at hand as well as of the solution proposed, not to mention simplification and generalization. That's not trying to be smart; that's outright being dumb. (I'm not even sure if you're trolling at this point.)

      • hinkley 5 years ago

        Math isn't to blame. But neither is it the solution.

  • pjc50 5 years ago

    > I've been programming so much for so long now that before I even start writing code my mind launches into an esoteric process of reasoning that I'm not confident would be considered "thinking in math" since I'm not formally skilled in mathematics. It's all just flashes of algorithms, data structures, potential modifications, moving pieces, how they all affect each other and what happens to the entire entangled web when you alter something.

    I think I'd call this "thinking in programming", and it seems like a great way to do it.

    > Is this really a common thing? How can you try to implement something without first having thought of the solution?

    A distressingly large amount of the work I've done has not been greenfield development but things that might be called "maintenance" or "integration". You're not trying to draw a picture on a blank sheet of paper - you've been handed an almost-completely-assembled jigsaw, the photo on the box, and a limitless box of random pieces. Your job is then to work out which of the already-assembled pieces is wrong and which of the spare pieces can be used to fill the hole.

    In this context, disposable programs are very useful for finding information about what's going on, sketching possible solutions, and finding out which plausible ideas won't work for reasons outside your control.

    (e.g. this week I wrote a disposable program to use libusb to extract HID descriptors; this duplicated a library we already had but didn't trust, and enabled me to pass a problem over to the team programming the other end of the USB link.)
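
    A disposable program like that can be a dozen lines; here is a hedged sketch of the same task with pyusb (my guess at the shape, not the original: the IDs are placeholders and error handling is omitted).

        import usb.core  # pyusb, a libusb wrapper

        dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)  # placeholder IDs

        # Standard GET_DESCRIPTOR control transfer for the HID report
        # descriptor (type 0x22) on interface 0.
        desc = dev.ctrl_transfer(
            0x81,       # bmRequestType: IN | standard | interface
            0x06,       # bRequest: GET_DESCRIPTOR
            0x22 << 8,  # wValue: descriptor type in the high byte
            0,          # wIndex: interface number
            256,        # max bytes to read
        )
        print(bytes(desc).hex())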

  • seanmcdirmid 5 years ago

    > Is this really a common thing? How can you try to implement something without first having thought of the solution?

    Some of us actually think by programming. In that sense, a REPL or notebook is probably a better medium, but the thinking is going on concurrently with prototyping.

    It isn’t so much like “we are solving the problem at the same time we are writing the code for the solution” but more like “we are writing (disposable) code to help us solve the problem.”

  • MaxBarraclough 5 years ago

    > Mathematics and programming are not really all that related to each other

    With respect, that tells us much more about you than about math or programming.

    No Haskell expert, or formal methods expert, or complexity theory expert, would ever make a statement like that.

    You may be right that math is quite a distance from day to day development, of course. (I don't think I'm being pedantic here, but perhaps.)

    > it's only one type of logical thinking among many types which can be applied to programming.

    What do you have in mind? Design patterns and software development practices, or something else?

  • ggm 5 years ago

    > Mathematics and programming are not really all that related to each other and I think there's an overemphasis on the importance of math in programming for 99% of applications.

    I think if you regard logic (in philosophy), maths (as a huge, broad field), and computing (to some people, specifically a sub-field of maths), it's pretty clear that logic and computing have a huge relationship.

    I can think of lots of other subfields of maths which have huge inter-relationships. Applied maths - what's that got to do with probability? Well, it turns out that modelling complex systems uses Monte Carlo methods (a fictional example, I suspect; I know the Manhattan Project people dreamed MC up, but its modern applicability is unknown to me).

    You don't think maths informs programming, or that it's over-stated? I guess that's true, in as much as poetry doesn't inform legal writing. But I observe that people who do enough poetry or writing to understand the difference between a simile, a metaphor, and an allegory are really on-point communicators, and the law needs that concision and precision.

    I think people with good groundings in maths (and logic) make awesome programmers, but it's not strictly necessary to be a mathematician to know how to "speak" in a programming language. What pitfalls you avoid from your knowledge, I cannot say. But I do know that huge pitfalls lie in naive programming: large loops iterating over un-initialized data structures, not understanding if-then-else logic or the side effects of expressions, tail recursion...

    I think computing is a sub-field of maths. How much it matters depends on how much your code matters.

  • nicoburns 5 years ago

    > Sure, mathematical thinking can be useful, but it's only one type of logical thinking among many types which can be applied to programming.

    Completely agree with this. I did a Maths and Philosophy degree, and I reckon the Philosophy was more useful to my career in programming than the Maths was. Although this probably depends on what kind of programming you do.

    • sabas123 5 years ago

      As somebody interested in both, but having mainly been a programmer all my life: could you describe in what ways philosophy can help somebody?

      My (heavily uninformed) guess would be the constant questioning of whether our assumptions are actually true or not.

      • mruts 5 years ago

        I've always felt that formal logic has been more useful to programming than math has for me.

  • makz 5 years ago

    I agree with you. There's a popular book they recommend around here for learning linear algebra, which they say is very related to coding.

    I found that not to be the case. Upon reading the first chapters I started wondering how this could be useful for coding. So I jumped to one of the last chapters, where they show you practical applications. Upon reading those I thought: "I can do all this in code just fine without using linear algebra".

    I never touched that book again.

    About two years ago or so I started making little games for the pico-8 fantasy console. There's some math involved there, but you almost never use the math formulas as you would in a textbook. For example, for something simple like drawing a straight line or a circle, finding paths, collisions... there are very specific algorithms for that, and they don't look anything like a math formula, even if they are derived from those.
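
    The midpoint circle algorithm is a good example: it is derived from x^2 + y^2 = r^2, but you would never guess that from reading it (a sketch in Python rather than pico-8 Lua, just for illustration).

        def circle_points(r):
            # Midpoint circle: an integer error term instead of sqrt or trig.
            x, y, err = r, 0, 1 - r
            pts = []
            while x >= y:
                # Eight-way symmetry: one octant generates the whole circle.
                pts += [(x, y), (y, x), (-x, y), (-y, x),
                        (x, -y), (y, -x), (-x, -y), (-y, -x)]
                y += 1
                if err < 0:
                    err += 2 * y + 1
                else:
                    x -= 1
                    err += 2 * (y - x) + 1
            return pts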

    Just my point of view.

    • thfuran 5 years ago

      > there are very specific algorithms for that, they don't look anything like a math formula

      I'm not sure what you mean by that. I'd describe making a 3D game (engine) with rendering, collision detection, etc as probably one of the math-heaviest areas of programming outside of scientific computing or algorithm R&D.

      • lonelappde 5 years ago

        Numerical solutions in computer programs look very different from the closed form symbolic theorems in math books.

        • higherkinded 5 years ago

          I think it's pretty safe to assume that you haven't heard of the ML language family and the Hindley-Milner type system. At the lowest possible level, yes, numeric solutions are computed by sets of instructions that are pretty distant from the mathematical level of abstraction. But in higher-level languages, everything remotely readable and reliable usually looks like formal math from the textbooks. I'd say that the representation heavily depends on what language you're using to write your programs - not everything looks like Algol 68 nowadays.

        • kd0amg 5 years ago

          Don't numeric solutions in math books also look very different from closed-form solutions?

    • sigjuice 5 years ago

      Which book is this? Sounds like something I might want to check out. Thanks!

  • crimsonalucard 5 years ago

    Another point: math is basically taking a set of primitive axioms and constructing and proving statements from them.

    This is exactly what programming is.

  • higherkinded 5 years ago

    Logic is part of math, not the other way around. Basically the same goes for your "how it affects the web" since that becomes about directed graphs. On algorithms and data structures: do you actually evaluate their complexity if that's not about math? Math is everywhere.

    • scooble 5 years ago

      I think the ghost of Gottlob Frege just gave you a stern look.

    • warent 5 years ago

      I'm always mindful of complexity when designing algorithms but I would barely consider it math. It's not an exact science and you're hardly quantifying anything. Complexities are essentially just eyeballed approximations.

      • higherkinded 5 years ago

        You're right that the complexities are approximate in some sense. But now I actually have more questions:

        1. Are you aware that complexity analysis isn't about being precise, but about being able to predict the running time for any given input from some sample? From my own experience, it's an analytical exercise about calculating worst-case scenarios and the computability of the process overall. Still, it has everything to do with actually predicting values, with the grain of salt that the method is relative.

        2. Are you actually aware that math isn't about being "precise" in the sense of numbers, but about relationships between abstract entities? Ever heard of category theory, or pretty much anything related to abstract algebra?

        3. Is there anything other than math that helps abstraction, in your opinion? For what I know, even a mediocre understanding of abstract algebra helps a lot. Please note that this question is totally non-ironic; I'd really like to know.

  • userbinator 5 years ago

    Is this really a common thing? How can you try to implement something without first having thought of the solution?

    I suspect one of the reasons is that, to a casual observer, there is no difference between someone who is thinking deeply about something and someone who is just daydreaming. Neither is interacting with the computer, and both may have their eyes closed. On the other hand, "coding" by constantly banging at the keyboard and mousing around looks productive.

    I am someone who thinks deeply first, and have been told off about it because they thought I was sleeping or otherwise not working.

  • Koshkin 5 years ago

    > overemphasis on the importance of math in programming

    Well, any fool can write a loop. But to do the same thing in constant time instead one might need to use some math.
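
    The textbook instance of the trade, as a toy illustration:

        def sum_loop(n):
            # Any fool can write a loop: O(n).
            total = 0
            for i in range(1, n + 1):
                total += i
            return total

        def sum_closed_form(n):
            # Gauss's identity gives the same answer in O(1).
            return n * (n + 1) // 2

        assert sum_loop(10_000) == sum_closed_form(10_000)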

  • C1sc0cat 5 years ago

    Any relational database is based on set theory. If I am doing a more complex query, I will quite often draw it out as a set diagram so I know what I am doing.
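
    The correspondence is direct enough to show in a few lines (toy data, with Python sets standing in for tables of keys):

        customers_with_orders = {1, 2, 3, 5}
        customers_with_returns = {2, 5, 8}

        # SQL INTERSECT is set intersection, EXCEPT is set difference,
        # UNION is set union -- the diagram you draw is the query you write.
        print(customers_with_orders & customers_with_returns)  # {2, 5}
        print(customers_with_orders - customers_with_returns)  # {1, 3}
        print(customers_with_orders | customers_with_returns)  # {1, 2, 3, 5, 8}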

  • drbojingle 5 years ago

    What do you mean by math, though? Are you focusing on numbers? Usually when I think about math I'm thinking about boundaries and how inputs affect outputs. Those things do matter to programmers.

  • sound1 5 years ago

    This. If you are solving a mathematical problem, then think in math; if you are solving an accounting problem, then think like an accountant. Programming is a general-purpose tool to solve problems in various domains. Just because computer science has roots in math doesn't mean computer programs must also behave the same way. General-purpose computing and abstractions weren't invented for nothing. [Disclaimer: didn't read the article]

  • lunias 5 years ago

    I agree a lot. Mathematics intersects with programming when you use programming to solve mathematical problems. I find that that happens rarely for me (although it has happened). I feel like the biggest boons for programmers are having a good grasp on logic, pattern recognition, category theory, and the process of abstraction.

  • harry8 5 years ago

    "plan to throw one away, you will anyway" Fred Brooks, the mythical man month.

  • j7ake 5 years ago

    You could be provocative and say even computer science and programming are not very related. Traditional computer science education involves much more chalk and blackboard and less keyboard typing.

  • auggierose 5 years ago

    I'd say every type of logical thinking is math. Otherwise it wouldn't be logical.

pgcj_poster 5 years ago

I was surprised to learn that I really enjoy coding, whereas I rather dislike math. I feel like this article might resonate with some people, but not with me.

> Programming languages are implementation tools, not thinking tools. They are strict formal languages invented to instruct machines in a human-friendly way. In contrast, thoughts are best expressed through a medium which is free and flexible.

I don't find math to be "free and flexible," at least, not compared with prose. It's more of an uncomfortable middle ground. When I write code, the computer forces me to be 100% precise and will spit errors at me as soon as I do something wrong. When I write prose, I can sort of proceed however I like within the very broad allowances of English grammar. But when I write math, I feel like I don't know what's allowed and what isn't, what I have to prove and what I can take for granted, what I have to define and what I don't, etc.

> Just as programming languages are limited in their ability to abstract, they also are limited in how they represent data. The very act of implementing an algorithm or data structure is picking just one of the many possible ways to represent it. Typically, this is not a decision you want to make until you understand what is needed.

I disagree pretty strongly on this point. I find that implementing the structures involved in a problem almost always gives me a better understanding of it and helps me find the solution.

  • FiberBundle 5 years ago

    > When I write code, the computer forces me to be 100% precise and will spit errors at me as soon as I do something wrong.

    This is similar to math though, with the exception that you don't have something or someone telling you that you made a mistake; you have to seriously question every step in your proof by applying the rules of logic and your knowledge. At a certain point you develop sufficient intuition to spot steps that might be wrong. Most mathematicians skip over steps they are not completely confident are true for the time being, assume the step holds, and continue to see whether their derivation leads to what was to be proved, only later checking the steps they had doubts about. It's more similar to programming than you think. If you like the logical part and algorithmic thinking involved in programming, I'm sure you would also enjoy math if you gave it a real chance.

  • gwm83 5 years ago

    I've found programming is much more like making art or music. You don't have to be "classically trained", and in fact I think the best I've seen aren't.

    Not trying to romanticize programming. It is a grueling and frustrating experience in my opinion and experience. But those that are good at it can be exceptional at it.

    Development isn't my day job but I read a lot of code and there are those that can write code that is simply beautiful to view and that is highly functional. It truly is an art.

    Math is a secondary concern. Sure, if you are working on hardcore algo stuff, it's heavy on math. But the great, great majority of programmers are not doing that. They are writing logic to achieve a business goal using existing primitives.

  • hangonhn 5 years ago

    I think I am reasonably good at math compared to the average person and really loved it growing up and I would also agree that coding is not that similar to it. Coding is a lot more "tactile". It feels much closer to the arts than to math. The idea that we can treat computing as something purely abstract has never come to fruition for me because there are a lot of details in CS and some details matter a lot.

    When I was starting out in my career, I worked at a hedge fund. The fund had a bunch of physicists and mathematicians working on models and they actually wrote the code for those models. They wrote some of the worst code I've ever seen. For example, rather than structuring their code properly, they would use exceptions to pass messages around. If function A needed some information from many levels deep in the stack, they would just throw an exception with the message inside. Function A would catch the exception. These weren't actual exceptional conditions, but they didn't want to refactor their code. As you can imagine their code had horrific performance.
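
    A minimal Python sketch of that antipattern (names hypothetical):

      class Message(Exception):
          # Abusing an exception as a back-channel instead of returning a value.
          def __init__(self, payload):
              self.payload = payload

      def deep_in_the_stack():
          raise Message({"price": 42.0})  # "returns" straight to function_a

      def intermediate():
          deep_in_the_stack()

      def function_a():
          try:
              intermediate()
          except Message as e:
              return e.payload["price"]

      print(function_a())  # 42.0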

    If you abstract away all the details a lot of concepts and constructs in CS look very similar but some of those details that were abstracted away are going to matter a lot when the code is actually run.

  • asark 5 years ago

    > I was surprised to learn that I really enjoy coding, whereas I rather dislike math. I feel like this article might resonate with some people, but not with me.

    I posted about this elsewhere in the thread, but a deeper insight for you may be (was for me) that you have an easier time thinking in terms of steps in a process—algorithms—than identities and proofs.

  • higherkinded 5 years ago

    >I don't find math to be "free and flexible"

    Couldn't disagree with that more. That's a language of its own with a lot of extensions and variations introduced by the people involved. Basically any sound formal system (usually the talk's on rings) is viable, therefore math is indeed free and flexible. All you have to do is escape the box of spoken language, the same thing you'd do to learn a programming language if it's not "verbose" enough to make you think in phrases instead of in the language in question.

  • eximius 5 years ago

    > I don't find math to be "free and flexible," at least, not compared with prose.

    Then you haven't done math to a sufficient level. Which most people don't if they aren't math majors.

    I'm not talking about calculus or differential equations, etc. Even engineering and CS focus almost entirely on calculation (though CS has its own kind of proofs, which are more what I'd call math). Besides mathematicians, only physicists occasionally look at math this way.

    At a certain point, math is about proofs which are a kind of rigorous prose. My math tests in upper level courses were done in essay blue books up to 10 pages of single space text, on one particularly long test.

    There are multiple ways to prove a theorem. There are multiple ways to write a program. Some are shorter, some are longer. Some are more cryptic and hard to follow. Some rely on the work of others to outsource your own efforts. They are really quite similar, except that math doesn't have a compiler (Coq and its ilk excluded).

pron 5 years ago

As the author notes, Leslie Lamport makes much of the same point, but more rigorously. You can find it in many of his writings, e.g. http://lamport.azurewebsites.net/pubs/state-machine.pdf

Lamport's TLA+ makes this formal. It is a language based on simple mathematics + some temporal logic for reasoning about discrete systems (software, hardware) as well as hybrid discrete-continuous systems, and is increasingly used in industry to lower the cost of software development (Amazon, Microsoft, Oracle and others). The idea of directly representing the relationships between abstractions and their implementation is the organizing principle of TLA+. For example, no program in any programming language (at least not in its runnable portion) can directly express Quicksort, as even though its specification (https://en.wikipedia.org/wiki/Quicksort) is complete, none of its steps is deterministic enough to be conveyed to a computer; the best a programming language can do is describe one particular implementation (/realization/refinement) of Quicksort. In TLA+, you can specify Quicksort itself precisely, and then show that a particular sorting program is indeed an implementation of Quicksort.

  • svieira 5 years ago

    > and then show that a particular sorting program is indeed an implementation of Quicksort.

    I wasn't aware that TLA+ made this part possible. How do you map from TLA+ to C++ (for example) with certainty?

    • pron 5 years ago

      Oh, you usually specify at multiple levels of detail in TLA+, relate those levels to each other, and only informally relate them to code, but if you want a formal relation to code you could compile your program to TLA+ (e.g. http://tla2014.loria.fr/slides/methni.pdf).

      ... But you probably don't really want to do that. Code-level verification using any "deep specification" tool (TLA+, Coq, Isabelle, Lean, F* etc.) is extremely limited in scale compared to specifying in TLA+ at a higher level. Because there is no known way to directly verify programs larger than several thousand lines affordably, and because that's precisely the kinds of programs that most engineers need to verify most of the time, it's far more common to use TLA+ at a higher-than-code level.

    • agentultra 5 years ago

      Refinement calculus is what it's formally referred to as. As pron has mentioned, you start with a high-level, abstract model. You then create a second model that implies every invariant of the first while adding more detail. You proceed like this to get closer to the implementation. If you are satisfied they are one and the same, you can stop.

      However the goal isn't often to verify every single line of code. That would be prohibitively time consuming and expensive. The ideal use for this stuff is to verify the hard parts that are critical to get right. Verifying that a critical section of code will not lead to a deadlock or resource contention might be really important and so you could start with verifying that particular system.

  • anaphor 5 years ago

    For most programs, you can just use PlusCal instead of straight TLA+ and get most of the benefits. Unless you're writing something that is heavily concurrent or distributed.

    • pron 5 years ago

      I prefer TLA+ (it's both simpler and more powerful than PlusCal), except when specifying something at the code level (e.g. something like a weak-memory-model concurrency algorithm).

      • hwayne 5 years ago

        For 95% of people, PlusCal will suit them just fine.

        I'm starting to see people say "I shouldn't learn PlusCal because it's not really TLA+", get disheartened about how difficult TLA+ is to learn, and believe they aren't able to use formal methods. I'd rather 10 people use a slightly-more-limited tool than 1 person use "the real thing".

        • pron 5 years ago

          I don't think TLA+ is "the real thing." I just find it easier. If others find PlusCal easier, then they should definitely start with that.

        • raegis 5 years ago

          I have the same opinion about the beauty of TLA+ vs PlusCal, so I find PlusCal frustrating to read. But I don't think the original poster was giving a dismissive opinion of PlusCal. Lamport's genius here is that he made TLA+ accessible via PlusCal. I appreciate this, as well as the video lectures which he made because he "realized that people don't read anymore...".

          • anaphor 5 years ago

            The video lecture series is amazing and should be mandatory viewing for any CS or SE program IMO.

UK-AL 5 years ago

I honestly believe the majority of the problems in the industry comes from the refusal to treat programming as mathematics.

Everything has to be "easy", so anyone can understand from a basic level.

It's one of the reasons we don't like verifying software using TLA, Coq (proofs for programs), refinement types or functional programming techniques. "It's too difficult for the average programmer".

The first excuse is that we don't have enough time to do things "right". When I bring out the metrics on how much time we waste fixing bugs after the fact, it moves on to the "too difficult" argument.

The result is we have an entire ecosystem full of buggy unreliable software.

  • TrackerFF 5 years ago

    Well, from a practical standpoint, it's more desirable to put out a partially-working system fast, which can be improved upon while it is live, than to spend too much time planning and implementing a fully working system. Note: By partially and fully, I simply mean systems with more and less bugs.

    Rarely do companies and startups have the luxury to just lean back and take their time. It's a race against competitors, and everyone wants the advantage of being first.

    I don't blame the devs as much as I blame the market. People start taking shortcuts when they're judged by how fast they can crank out code, and whether they can finish their sprints on time. You develop a culture of constantly putting out fires.

    Hell, for some consulting firms this is a profitable business model: deliver a partially working product, then spend 10-15 years patching it up, on your client's bill.

    • hwayne 5 years ago

      This is why I usually pitch formal specification as "you build your program faster and spend less time debugging it later." Framing it as a cost-saving measure over a "well ya GOTTA be correct" measure.

    • naturlich 5 years ago

      I agree that that is the pragmatic approach given market constraints, but the market is not gaining greater value by driving this hack-and-patch approach.

    • 1PlayerOne 5 years ago

      So true. Slick presentation, indifferent or bad design and implementation. And then patching and billing client all the long day. Sad!

  • UncleMeat 5 years ago

    It's difficult for all programmers. The tooling still isn't there.

    Try verifying the correctness of a JavaScript program with model checking. It won't work. Or try verifying correctness with abstract interpretation. For any sound system the output will just be "Top".
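
    A toy sketch of why, using a hypothetical interval abstract domain in Python (any sound analysis must over-approximate branches it can't distinguish):

      TOP = (float("-inf"), float("inf"))  # "could be anything"

      def join(a, b):
          # Sound join: either branch may have executed.
          return (min(a[0], b[0]), max(a[1], b[1]))

      # x = 1 on one path, x = some dynamic input on the other:
      path1 = (1, 1)
      path2 = TOP
      print(join(path1, path2))  # (-inf, inf), i.e. Top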

    We have very deliberately developed technologies that enable rapid development. For "make it so a little message appears on the screen saying what day it is" features this works very well but it means even our formal analysis methods fall over on dynamic languages with heavy framework use.

    This isn't just laziness. I'm a hardcore PL person and even I think that soundness is largely a mistake when making real analysis engines.

heinrichhartman 5 years ago

I spent more than 10 years in academia doing math. One thing I loved about this time is that I was able to sit down anywhere and pound away at my current research problems, without having any additional notes or books with me. Just pen and paper.

Formal reasoning feels very empowering.

You write down assumptions, apply transformations, arrive at conclusions. Prove one lemma at a time. Work through some examples. Eventually you arrive at a deeper understanding of the problem, and maybe even a solution.

When I started working in Software, I largely lost the ability to reason formally about the thing I am doing. Also I need manuals and computers around to make progress. This still frustrates me today. And I have tried hard to reason formally about code, to gain back this experience:

- Doing Lambda Calculus by hand is possible.

- LISP is already tedious (scoping rules/mutable state!).

- Register machines are a nightmare to do by hand (see Knuth's books).

- The semantics of C are nearly impossible to write down by hand, and work with.

I am excited about this blog post, because it shows a new way of approaching the situation. Model the problem domain with mathematical language, and leverage the Lemmas/Theorems in your implementation.

Some food for thought. Thanks, Justin!

  • justinmeiners 5 years ago

    Glad you liked it.

    I mentioned in another comment, I am really emphasizing modeling the problem mathematically, not really using formal programming methods like lambda calculus. Sounds like you got my idea!

stcredzero 5 years ago

> Some even defend a form of Linguistic Determinism that thinking is limited to what is typable.

Expressible conscious thought is limited to just a bit beyond what is readily typed or spoken. Not all thought is conscious. Just about every take-home test in grad school, I'd wrestle with the problem, go to bed thinking I was going to flunk, then find myself writing down the answers over breakfast.

There's the "tip of the tongue" experience, where you know you should be able to know something, but can't quite get it out. This not only happens with memory. It also happens with problem solving. This tells me that there's also unconscious thought and inexpressible thought we are conscious of.

codr7 5 years ago

It's not math, though.

It's abstract reasoning, as is math. And that's about as deep as the similarities go.

Math can't deal with imperfect input and/or side effects without turning into something else. And code without real world side effects is useless, as is real world perfection.

  • pron 5 years ago

    > Math can't deal with imperfect input and/or side effects without turning into something else.

    This is just not true. Differential equations certainly deal with change over time (except it's not called "side effects" there) and imperfections. The analog for discrete systems is temporal logic.

    • sixbrx 5 years ago

      It's not called side effects there because there are none, in the modern mathematical formulation at least. The goal is to identify solution functions, and those are just sets of ordered pairs or something similarly inert.

      • pron 5 years ago

        Except that even in programming, the name "side-effect" is not too clear. It stems from the fact that subroutines sometimes behave a bit like functions, and when they don't, we call the added behavior a "side effect". If you want to be mathematically precise, subroutines are predicate transformers, and, again, there are no "side effects" (This is a mistake functional programmers make when they speak of referential transparency; most languages are referentially transparent, and that was the whole point of the paper that first introduced the term to programming. They are only not referentially transparent with respect to an incorrect semantic model). The philosophical difference between a subroutine with side effects and a derivative is small.

  • smallnamespace 5 years ago

    > Math can't deal with imperfect input

    See stats, probability, information theory, and signal analysis

    • mjburgess 5 years ago

      That's a different sense of "imperfect", meaning, "inexact".

      Here: Malformed

      • mannykannot 5 years ago

        There's some non-trivial math behind the methods for defining and selecting well-formed data.

        • nomel 5 years ago

          And they tend to accept data that has been malformed to look like regular data.

          • mannykannot 5 years ago

            Putting aside the question of whether this is a tendentious exaggeration, it is more of a problem in ad-hoc code.

  • JustFinishedBSG 5 years ago

    > Math can't deal with imperfect input and/or side effects without turning into something else.

    It very much can though.

    • codr7 5 years ago

      Not in any sense that matters here, no.

      There's no math equation for reading from a socket. Code is not math and math is not code. It's possible to sort of hide the fact by stacking enough abstractions on top, but in the end it's going to be the same old code that makes it happen.

      • UK-AL 5 years ago

        Linear types and session types can enforce protocols for reading from a socket.

  • _v7gu 5 years ago

    Well, you have Maybe for imperfect inputs and monads for side effects.

    • codr7 5 years ago

      But Haskell isn't math, it's code dressed up in a math costume.

      • FridgeSeal 5 years ago

        Haskell is code attempting to implement category theory. It’s about as close to a programming language representing maths as we have (probably along with APL-type languages).

        • Syzygies 5 years ago

          My first language (not counting Basic or Fortran on punched cards) was APL, and my current language is Haskell. I'm a PhD mathematician. Luckily, I was trained far enough down the street from MIT to escape their Lisp world view, so we coded our computer algebra system in C, and it was fast enough to succeed and bring us tenure. Today, we'd choose Haskell.

          Thinking in Haskell is the same feeling as thinking about math research. I know mathematicians who can only code in Haskell.

          The trouble with discussing languages online is it's harder to assess if each party has actually used each language. The dogma in such discussions is completely "welcome to my world" familiar to me as a mathematician. We all have different opinions, and we're all sure we're right.

          • codr7 5 years ago

            Haskell is definitely one of the languages that looks most like math. Being used to math, and solving math problems, I can certainly see how that's helpful.

            Making the jump to stating that everyone would be better off coding math-style, which is what some are desperately trying to pull off, doesn't make any sort of sense.

            It's remarkable how far you can get within such a rigid and formal framework. But for most messy real world problems, there are better solutions. Lisp being the most powerful invented so far.

      • uryga 5 years ago

        would you consider Lambda Calculus to be math?

  • mannykannot 5 years ago

    In general, the point is better made with analytical thinking, rather than math specifically, but if you end up in a place where you need the math, it is important to recognize that. Cryptography is perhaps the starkest example, but math also comes into play in reasoning about concurrency, and in determining if your proposed architecture will perform adequately.

  • xhgdvjky 5 years ago

    side effects are an implementation detail

m3at 5 years ago

> 1. Identify a problem 2. Design algorithms and data structures to solve it 3. Implement and test them ... In practice, work is not so well organized as there is interplay between steps. You may write code to inform the design.

As the author acknowledged, real life rarely allows such a clean division.

One tool that I find very useful to interleave the three - or at least to allow shorter loops - is Jupyter notebooks [1]. The name is quite accurate: it can be used as a notebook to come up with solutions, and can easily be discarded once used, unlike prototype code, which has a tendency to evolve into the final codebase.

It's a common tool in data science but I'm not sure about other fields. Has anyone used it for other purposes?

[1] https://jupyter.org/

Edit: formatting

hannofcart 5 years ago

I think that the total programming man-hours spent on writing pure math computation would be dwarfed by the much larger number spent on doing menial computation: fetching data, transforming it and sending it forward/rendering some output. I suppose this article is targeted at that privileged former group.

  • mpweiher 5 years ago

    "One of the miseries of life is that everyone names everything a litte bit wrong, and so it makes everything a little harder to understand in the world than it would be if it were named differently. A computer does not primarily compute in the sense of doing arithmetic. Strange. Although they call them computers, that's not what they primarily do. They primarily are filing systems. People in the computer business say they're not really computers, they are "data handlers". All right. That's nice. Data handlers would have been a better name because it gives a better idea of the idea of a filing system."

    Richard Feynman

    https://youtu.be/EKWGGDXe5MA?t=278

  • 0815test 5 years ago

    Just because some tasks are 'menial', doesn't mean they don't involve math at some level. You still want to get your data transforms right, and make sure that what you're fetching/sending over is what these other services expect.

    • jhbadger 5 years ago

      Indeed. Many people are turned off by reading Knuth because his "Art of Programming" deals with tasks, such as sorting, that are viewed as menial and (almost) never implemented by hand these days. But there was a lot of math involved in figuring out how to make such tasks as efficient as possible, as Knuth shows. Theoretical computer science is math (but then so is much of theoretical anything).

  • justinmeiners 5 years ago

    See my example. It was very short and helped me clarify immensely what I was going to write.

    You're right that if I am going to add another template to a Django site, there isn't much math to think about. But for anything bigger than that, there are always questions worth considering.

arendtio 5 years ago

IMHO, the biggest problem with math is the complex syntax. I mean, you can't just read it like a normal language, but instead, you have to know exactly what each symbol means and in which order they have to be evaluated.

It feels a bit like having a programming language that uses unique emojis as function names.

Personally, I find it much easier to read code (from a high-level programming language), than to read math formulas.

  • heinrichhartman 5 years ago

    Oftentimes people make the mistake of jumping directly to the formulas when reading mathematical texts. You will not have enough context to parse them!

    Instead, think of a formula like a very dense sentence in a novel.

    The protagonists are usually introduced in the paragraph before. You are assumed to know their names to make sense of the formula.

KoenDG 5 years ago

I must have some deep-rooted, maybe repressed, anger towards maths.

I can program just fine, write functions left and right, receive data, do something with it, return a result...

But in maths? Oh dear god no I couldn't write a math function on a piece of paper to save my life.

Someone recently pointed out the parallels and while I couldn't deny it, as it was plain as day... I never once considered it during all these years.

Kinda scary.

  • codebje 5 years ago

    The most common intersection, IMO, is inductive reasoning. If you can write a working recursive function, you are also writing an inductive proof. The only explanation I have for why people think there's either no link to math, or that they can program but can't do math, is a cultural sledgehammer smashed into our collective skulls while we're young that tells us we can't.
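
    A minimal Python sketch of the parallel:

      def sum_to(n):
          # Base case: P(0) holds.
          if n == 0:
              return 0
          # Inductive step: assuming sum_to(n - 1) is correct (P(n - 1)),
          # this line establishes P(n).
          return n + sum_to(n - 1)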

    This is a shame, because we spend a lot of effort in deliberately avoiding common abstractions in case it scares programmers away, but really we're just making it harder for everyone to learn all these things. We either eventually recognise the underlying pattern through our brain's capacity to extrapolate from examples (hard, slow), or learn the pattern explicitly and recognise it, or often never learn the pattern and have to learn each new language's way of doing things one by one.

    And of course when we either don't know or deliberately avoid effective abstractions we make or propagate mistakes. See Java's first crack at Future, for example, which was practically useless since the only thing you could do with a Future was wait for it to be the present.

deadgrey19 5 years ago

Thinking in math first has been the catch cry of functional programmers (and their formal logic/verification friends) for decades. And there's nothing wrong with it, unless the problem you are trying to solve actually requires performance. Then, you have to think in "system" first. For example: Write a program that captures network packets and stores them to disk as fast as possible. There's no maths to think of here. The complexity is all in the "fast" part, and to solve that, a deep understanding of the architecture of the system is necessary. Fancy algorithms (maths) will not help you here. e.g. Will compression help? Depends on the properties of the system. Will batching help? Depends on the system. Will threading help? Depends on the system.

  • chongli 5 years ago

    What you're describing is a very limited view of math, resembling the general public view of math as being algebra, trigonometry, geometry, and calculus; that is, all the math people are exposed to in secondary school. Look further and you'll see disciplines such as mathematical logic, combinatorics, and graph theory, without which you wouldn't have networking or binary or computers at all, really.

    I don't know what you mean by "fancy algorithms", but all algorithms that run on your computer have a basis in math.

    As for making things go as fast as possible, well that's a special case of the field of optimization, another mathematical discipline.

    • deadgrey19 5 years ago

      What I'm discussing here is the general concept of modelling your program formally before writing it (as per the article). What I'm arguing is that this type of approach is only possible for a certain set of applications which take the form of y = f(x), where f(x) is some type of data transformation/computation operation (e.g. calculate the GCD of these ints, find the shortest path through a given graph, sort this set, etc). There's another set of applications which are I/O bound, are very important, and yet the "computation" that they perform is limited to none. These applications are rooted in, and bounded by, system parameters, like understanding how disks work, how network cards work, how CPUs work etc. This is an optimisation problem, but not one that can be modelled mathematically (in any reasonable way) because of the vast complexities of the system. Building a TLA+ "proof" of this system will reduce trivially to x = x, and yet the system is still important, and difficult to write well.

      • tonyarkles 5 years ago

        > this type of approach is only possible for a certain set of applications which take the form of y = f(x), where f(x) is some type of data transformation /computation operation (e.g. calculate the GCD of these ints, find the shortest path through a given graph, sort this set etc)

        These days I'm trying to be mostly an embedded guy, and 100% understand what you're talking about re: problems that don't lend themselves well to mathematical modelling. Figuring out that your SPI bus is going slow because you've got the wrong multiplier in a clock domain isn't a math problem :)

        What I'd like to add to your y = f(x) examples though is that many Business Problems can (and probably should!) be modelled as y=f(x) type problems. I've seen a ton of business logic over the years that modifies objects in a pretty ad-hoc manner and is incredibly hard to reason about, especially in the big picture. The vast majority of the time, those problems can be modelled roughly as:

          new_state = f(old_state, event)
        
        When you start modelling the business problems more formally like that, you can start using things like TLA+ to do model checking and find gaps in your formulation of the problem. Maybe you've got a state/event pairing that you haven't thought of. Maybe there's a way to get a model into a state that it can't escape from. TLA+ is useful for a lot more than verifying "calculate the GCD of these ints, find the shortest path through a given graph, sort this set", and I want to make sure people reading this don't write it off as a mathematical curiosity.
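
        A hedged Python sketch of that new_state = f(old_state, event) shape (states and events invented for illustration):

          def next_state(state, event):
              # One pure transition function: every state/event pair is
              # handled (or rejected) in a single place.
              transitions = {
                  ("idle", "start"): "running",
                  ("running", "pause"): "paused",
                  ("paused", "start"): "running",
                  ("running", "stop"): "idle",
              }
              if (state, event) not in transitions:
                  raise ValueError(f"no transition from {state!r} on {event!r}")
              return transitions[(state, event)]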

        I've done a few embedded implementations that had pretty complicated state machines under the hood (off the top of my head, a LoRaWAN implementation). I modelled the states in TLA+, and it was a wonderful platform for discovering the flaws in the model I'd put together. It took a couple iterations before the model checker was happy, and from there the implementation was mostly mechanically translating my TLA+ model into code. There was some housekeeping stuff to keep track of (the TLA+ model was an abstraction), but it pretty much worked first try.

    • munificent 5 years ago

      > that's a special case of the field of optimization, another mathematical discipline.

      I'm not sure how you can claim that the entire field of optimization is a "mathematical discipline". Algorithm analysis is, I suppose, but most other practical optimization work has little if anything to do with math.

      When I've spent time doing optimization work, it has often involved things like:

      * Discovering some API I'm using is particularly slow and finding an alternate one that's faster.

      * Adding caching.

      * Reorganizing code to take advantage of vector instructions. (Well, I haven't personally done this, but I know it's a thing many others do.)

      * Reorganizing data to improve CPU cache usage.

      * Evaluating some things lazily when they aren't always needed.

      * Making objects smaller to put less pressure on the GC.

      * Inlining functions or switching some functions to macros to avoid call overhead.

      * Tweaking code to get some values into registers.

      • tonyarkles 5 years ago

        I'm not sure why you're being downvoted, but I agree with all of your points. Those aren't things that really lend themselves well to mathematical modelling. But... there is a huge field of math that does apply to this: statistics.

        The first two cases are somewhat special:

        - It may be immediately obvious that an API is terrible, and that the replacement is not. If API 1 takes 1 sec to call, and API 2 takes 100ms to call, easy choice without stats.

        - Caching can be dangerous. While not really a stats problem, you do need to have a really solid model of what is getting cached, and how to know when to invalidate those cache entries.

        For the rest of the examples you provided, you're making changes that may make the problem better, may have no effect, or may make the problem worse. You absolutely need to use statistics to determine whether or not changes like those are actually having an effect. Performance analysis is part math and part art, and without the math background, you're likely going to be spinning your wheels a bunch. Beyond stats, fields like queuing theory are going to make a huge impact when you're doing performance optimization in distributed systems.

      • vikiomega9 5 years ago

        What you're describing is not optimization the math field, and second, _some_ of the examples do have a basis in mathematics.

        > Discovering some API I'm using is particularly slow and finding an alternate one that's faster.

        On its own it has nothing to do with math, but writing code as components/services/abstract layers with well-defined boundaries/interfaces/types means it's easier to reason about the code and avoid bugs. Implicitly here I'm saying we should use a language that has strong type support.

        > Adding caching.

        This is memoization, and without the basis that general code functions should behave like their math counterparts this is hard to reason about.

        > Reorganizing code to take advantage of vector instructions

        These are mapping operations that are well defined in functional languages. The vectorized interface numpy provides is an abstraction of maps.

        > Making objects smaller to put less pressure on the GC

        This is orthogonal to the actual math basis for the code. For example using enums over strings is a localized change.

  • traderjane 5 years ago

    You don't need fancy math because someone else did it for you, but if the industry is to compete and advance, it's going to need all sorts of people. The fact that you don't have to juggle every mathematical complexity underlying your domain is a success of human cooperation and compartmentalization. It's certainly not free or inherent.

    • deadgrey19 5 years ago

      How so? Can you give an example?

      • philote 5 years ago

        For example, if you do end up needing compression, are you going to write your own compression library or use an existing one somebody else already wrote?

        • deadgrey19 5 years ago

          I think you're missing the point. I'm talking about the concept of mathematically modelling the program first, before writing it (as per the article). This process only applies to certain types of applications, ones that are rooted in algorithmic transformations (like compression), rather than in system and I/O operations, like copying a packet from a network RX buffer into a disk block, in the most efficient way.

  • injb 5 years ago

    I agree about thinking "in system", but your "as fast as possible" comment is interesting, because I think this actually highlights the kind of problem that mathematical thinking is good at fixing.

    Why "as fast as possible"? Usually when programmers say this it's because they think "the faster, the better!". But obviously there is some speed beyond which there's no discernible improvement. At that point, pursuing the as-fast-as-possible mandate at the expense of other concerns is the wrong thing to do.

    Therefore, there's an assumption built into this statement that the system will never be that fast, and therefore "as fast as possible" is a good target. Needless to say this isn't a safe assumption.

    The thing is that we're built to be really bad at knowing when we're making assumptions. Thinking mathematically is to some extent a way of trying to overcome that limitation.

  • justinmeiners 5 years ago

    This article is not about formal verification.

    Why would mathematical thinking involve rejecting physical realities? If there is a performance constraint you are trying to optimize, account for it.

    > Write a program that captures network packets and stores them on disk as fast as possible

    How are you going to do that without formulas involving:

    - network bandwidth
    - disk bandwidth
    - SATA bandwidth
    - packet ordering
    - compression time and ratios
    - Amdahl's law

    This sounds ripe for mathematical reasoning! Absolutely the way you model the problem is informed by your knowledge of the hardware.
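
    Even a back-of-envelope model counts. A hedged Python sketch (all figures invented):

      link_gbps = 10          # network ingress
      disk_gbps = 4           # sustained write bandwidth
      ratio = 0.5             # assumed compression ratio
      compressor_gbps = 12    # assumed total compressor throughput

      plain = min(link_gbps, disk_gbps)  # disk is the bottleneck: 4 Gbps

      # Compression raises effective disk bandwidth, but the compressor
      # may become the new bottleneck:
      compressed = min(link_gbps, disk_gbps / ratio, compressor_gbps)  # 8 Gbps

      print("compress" if compressed > plain else "don't compress")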

_zachs 5 years ago

I disagree with the author’s point that you can’t represent a graph with a single interface. Write an interface with the methods you need, e.g., `findNeighbors()`, `getAllFriends()`, `retrieveDogPhotos()`, and implement it for the specific underlying data structure/implementation of Graph you choose later. A graph is a graph; the interface doesn’t care if you’re using an adjacency matrix or a node. I don’t see how writing a mathematical function would be more robust and helpful in designing and implementing those features.
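
A hedged Python sketch of that argument (class names invented; method name follows the comment):

  from abc import ABC, abstractmethod

  class Graph(ABC):
      @abstractmethod
      def find_neighbors(self, node):
          ...

  class MatrixGraph(Graph):
      def __init__(self, matrix):
          self.matrix = matrix  # matrix[i][j] == 1 means an edge i -> j

      def find_neighbors(self, node):
          return [j for j, edge in enumerate(self.matrix[node]) if edge]

  class ListGraph(Graph):
      def __init__(self, adjacency):
          self.adjacency = adjacency  # {node: [neighbors, ...]}

      def find_neighbors(self, node):
          return self.adjacency.get(node, [])

  # Callers depend only on the interface, not the representation:
  for g in (MatrixGraph([[0, 1], [1, 0]]), ListGraph({0: [1], 1: [0]})):
      assert g.find_neighbors(0) == [1]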

  • justinmeiners 5 years ago

    Can you represent Facebook's graph of friends as an interface?

    Do you think they also do a lot of graph theory about it?

    • _zachs 5 years ago

      `getFriends(userId)`, how's that? :D

analog31 5 years ago

I'm not employed as a programmer, so I'm somewhat detached from the business. I've noticed that the programmers and other kinds of engineers tend to split off into two different camps. There are "qualitative" engineers who are good at arranging and organizing things, and fitting things together. There are "quantitative" engineers, who solve problems through the application of math and science principles.

A small difference along one of these directions amplifies itself over time, as problems are assigned based on aptitudes and interests, resulting in a growing division. And I don't know if there's a consistent ratio across industries or businesses, but at the shop where I work, it's roughly 90% qualitative, and 10% quantitative. Any problem involving math is brought to the small handful of "math people." They end up being busy enough with just that kind of work and nothing else. If a project runs into a math problem, it grinds to a halt until one of the math people can spare some attention.

Likewise, the qualitative folks are perpetually busy too. Nobody is short of work because of gaps in their abilities. So I think it's quite fair to say that a programmer can do without math, if they find the right niche, but a programming project might need one or two math people from time to time.

enriquto 5 years ago

I do that. Then people hate me because all my variable names are single letters...

At least, for functions use self-documenting names instead of "f", "g", as in math.

  • onion2k 5 years ago

    > I do that. Then people hate me because all my variable names are single letters...

    Then you're thinking in math and also writing in math, so you're getting the second bit wrong. You need to write in code, and code should be optimized for readability.

    • cartlidge 5 years ago

      I thought the point of code is to make it more difficult for the non-initiate to read it...

      Actually, I think a good, justly polymorphic function should probably have a name of reasonable length and parameters called `a`, `x`, `f`, since the parameters convey almost no information. If they have any greater length, it's just a restatement of the known information about the type.

      The more unique information a name conveys, the less information the types convey and the bigger a chance you have of bugs or coding yourself into a hole.

      But also, if you've called it `x` hopefully there's only one thing it can be and its scope is just one or two lines so you've written a single unit. If there's anything else `x` could be, then you've got a problem - your unit is too much.

      Not every piece of code should be written this way, but your vocabulary should be built up of pieces like this.

    • skohan 5 years ago

      I think single letter variable names are fine within the scope of a single function. Especially if I'm doing something with a lot of math, like a lighting calculation, the computations are often more readable with short variable names. I'll also usually give a detailed description of what the variable is in comments for clarity.

      • onion2k 5 years ago

        > I think single letter variable names are fine within the scope of a single function.

        I don't. One of the things I like most about eslint is the id-length rule - https://eslint.org/docs/rules/id-length

        • skohan 5 years ago

          I think it depends. In the context of a complex mathematical formula, where you have many intermediate results with very abstract meanings, I think it's more clear to use a single character name than to try to be descriptive with something like: `productOfLuminanceAndDotProductOfSurfaceNormalAndLightDirectionDividedByScatteringConstant`.

          I think there are a lot of advantages to abiding by styling conventions and using linters to set a baseline, but there are always exceptions to the rule.

          • onion2k 5 years ago

            > productOfLuminanceAndDotProductOfSurfaceNormalAndLightDirectionDividedByScatteringConstant

            Note that the id-length rule also allows you to limit the maximum length of a variable name. This is why.

            • skohan 5 years ago

              I understand that, but what in your mind would be a good replacement to name such a complex, abstract value? A long acronym perhaps? I don't see how that's any more clear or less arbitrary than a single character name.

          • wa1987 5 years ago

            Agreed. I'd much prefer something along these lines as opposed to using an overly long name:

              /* product of luminance and dot product of surface normal and light direction divided by scattering constant */
              const prod = ...

        • mruts 5 years ago

          I find that I can reason about code more effectively the shorter the variable names are. Rob Pike writes the same thing in The Practice of Programming.

  • indigochill 5 years ago

    For that matter, use self-documenting names in math, too. And please don't recycle notation for completely unrelated concepts, even if they are in different fields.

    Coming to math from computing, the only explanation I can come up with for the disaster that is mathematical notation is that mathematicians are universally sadomasochists.

  • justinmeiners 5 years ago

    Names work well in programs because the variable typically models something in the world we understand. company.name, NetworkConnection, etc.

    In math and in programming this isn't always the case. Often you have a variable "epsilon" that means "the tiny amount of space between this circle and the other circle that is shrinking".

    In these cases there is no good name to give the variable. The meaning comes from the context.

    In programming I follow the practice of using longer identifiers for more global scope, and shorter ones for smaller scope.

    Which do you prefer?

    (define (sqr x) (* x x))

    (define (sqr number_to_square) (* number_to_square number_to_square))

    • bcrosby95 5 years ago

      Either over this:

      (define (s x) (* x x))

      Which is effectively what the OP does when they write their example. Their simple example quickly becomes unreadable after a few definitions.

  • C1sc0cat 5 years ago

    Even for loop variables like i and j, and using x, y and z for dimensions?

  • breck 5 years ago

    Mathematical notation is a primitive notation, and will go the way of Roman Numerals. I would recommend the book "Clean Code", which might convince you to lengthen your identifier names :)

  • agumonkey 5 years ago

    we need to make a sml company

agentultra 5 years ago

I think there are languages and tools that mathematically minded programmers will find accessible enough and useful to aid them in their thinking. Dependently-typed programming languages such as Lean [0] and Agda [1] are both expressive enough to search for proofs to theorems and practical enough to execute programs.

And in the design space when we're thinking about problems of concurrency or liveness there are great tools like TLA+ that take a pure mathematical model and automate the checking that it satisfies our expectations. [2]

It's not all figures and drawings these days! I see maths and engineering integrating more closely in the future.

[0] https://leanprover.github.io/

[1] https://github.com/agda/agda

[2] https://lamport.azurewebsites.net/tla/tla.html

bcrosby95 5 years ago

> As you read this example, I think your tendency may be to think that its statements are obvious.

Not to me. The statements suffer from a common math problem: using single letters. If the names were better, maybe they would be obvious. But instead I have to keep back-referencing former definitions to remember what they were.

I have to do something similar with p(1) and p(2) - I need to make sure my memory is correct on which data is in which place. If you could reference them in a more obvious way that would help.

I also have to make an assumption from the very start - what "t" refers to only becomes obvious in definition 3, even though it was used in definitions 1 & 2.

It's ironic that the article recommends thinking in math and writing in code when they have thought in math and written in math.

dr0wsy 5 years ago

How do I express models of my programs on paper when they don't easily translate into math? I understand that the parts of programs where you have a formula beforehand (e.g., converting between Celsius and Fahrenheit) are easily expressed in a mathematical formula before implementing them. However, how should I express I/O?

Often when I have a problem that isn't easily expressed in mathematical notation (albeit with my limited knowledge of math), I usually have a good idea of how I could express it in code.

When I write pseudo code it often feels like I already have the code in my mind before I describe it in plain English. That feels like a waste of time. So pseudo code doesn't feel like a great tool to express models of my programs.

  • jcora 5 years ago

    Math isn't formulas. The part of math that is most useful for programming are algebraic structures. Many "patterns", conventions, frameworks, etc., are just bastardizations of mathematical patterns. What really matters is the composition of concepts, and what better way to think about that than mathematically?

    So if you want your programming to reap the benefits of (others', mostly) mathematical reasoning--use a functional language that is all about expressing the ways in which things compose!

    IO is modelled pretty well through monads. As are many other things, like nondeterministic processes, exceptions, state, etc.

    • dr0wsy 5 years ago

      > So if you want your programming to reap the benefits of (others', mostly) mathematical reasoning--use a functional language that is all about expressing the ways in which things compose!

      I do prefer using functional languages over imperative ones, but I don't understand how that is connected to modelling software on paper.

      > IO is modelled pretty well through monads. As are many other things, like nondeterministic processes, exceptions, state, etc.

      Isn't there any easier way to describe I/O than with category theory? I understand that it's _possible_ to model IO with monads, but how would I communicate it to colleagues that don't have a background in category theory (or to myself, for that matter)?

      Part of the benefit of modelling a program on paper should be to make communication easier. And to require people you communicate with to have knowledge of category theory to understand your design feels silly.

      I probably misinterpret you, so could you please give a more detailed explanation or example?

      • naturlich 5 years ago

        You don't need to know category theory. You just need to understand a few things that came from it, like `Maybe`, which is the most obviously useful and trivially easy monad. Just forget about the fact that it's a monad and any general rules about monads and just learn how to use `Maybe`.
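
        A hedged Python sketch of the idea (a tiny home-grown Maybe, not a library API):

          class Maybe:
              def __init__(self, value=None, present=False):
                  self.value, self.present = value, present

              def then(self, f):
                  # Nothing short-circuits; Just x feeds x to the next step.
                  return f(self.value) if self.present else self

          Nothing = Maybe()
          def Just(x): return Maybe(x, True)

          def safe_div(a, b):
              return Just(a / b) if b != 0 else Nothing

          r = Just(10).then(lambda x: safe_div(x, 2)).then(lambda x: safe_div(x, 0))
          print(r.present)  # False: the failure propagated with no if-chains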

        From there, input handling is just a state machine. Easy to draw on paper as a graph.

        > Part of the benefits with modelling a program on paper should be to make communication easier. And to require people you communicate with to have knowledge in category theory to understand your design fells silly.

        This is an odd complaint, because you can also say: requiring people you communicate with to have knowledge of (state machines | graphs | `if` statements | ...) to understand your design feels silly.

        • dr0wsy 5 years ago

          > You don't need to know category theory. You just need to understand a few things that came from it, like `Maybe`, which is the most obviously useful and trivially easy monad. Just forget about the fact that it's a monad and any general rules about monads and just learn how to use `Maybe`.

          I didn't convey my question clearly. What I'm wondering is how I should express programs, or parts of them, using mathematical notation when I don't see them as being mathematical in nature to begin with.

          An example:

            def hello(name):
                if (name == "Bob"):
                    print("Hello, " + name)
                else:
                    print("Hello, World!")
          
          How could I express this easily using mathematical notation?

          It just feels weird that to convey this simple program on paper both I and the person I try to communicate with need to have a grounding in category theory.

          Hope you're able to understand my question. :)

          • naturlich 5 years ago

            That was the second part of my answer: a graph on paper. Draw nodes of execution and represent the branch as two arrows coming from the node and heading to new nodes. Label the arrows with the predicates of the branch.

            This is both definitely useful and definitely math.
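
            For the `hello` example above, a hedged Python transcription of that drawing (node names invented):

              graph = {
                  "start": [("name == 'Bob'", "greet_bob"),
                            ("name != 'Bob'", "greet_world")],
                  "greet_bob": [],
                  "greet_world": [],
              }

              for node, edges in graph.items():
                  for predicate, target in edges:
                      print(f"{node} --[{predicate}]--> {target}")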

            • dr0wsy 5 years ago

              > That was the second part of my answer: a graph on paper. Draw nodes of execution and represent the branch as two arrows coming from the node and heading to new nodes. Label the arrows with the predicates of the branch.

              Thanks! You don't happen to have some learning resources for modelling programs as state machines? I can't find anything when I search.

              • naturlich 5 years ago

                This is more of a practical writeup on state machines than a theoretical one: http://gameprogrammingpatterns.com/state.html

                You might also know these as flowcharts when applied to program flow control.

                I don't think I answered the original question well. I just wanted to point out that math is more than just equations. Others have done a much better job in this thread.

              • jcora 5 years ago

                This isn't about modelling but rather a programming technique: check out Idris for its dependent typing. It lets you encode state machines in types, which means your programs literally will not typecheck if you try, for example, to withdraw money from an unauthorized ATM, or candy from an empty store. You can make verified network protocols and drivers like this.

        • jcora 5 years ago

          > because you can also say

          I don't think you can, actually :D

          I mean for each category of programmer there is a pretty clear line separating common knowledge from things you can't expect people to know. And for pretty much every category of programmer, if statements and category theory are the opposite ends of that line.

          I mean, I feel like you agree with this based on your first paragraph. His complaint isn't odd: it's saying you can't expect people to know category theory, and he thinks that category theory is necessary here.

      • jcora 5 years ago

        > but I don't understand that is connected to model software on paper

        I was talking more generally about techniques that allow for sneaking in mathematical thinking in various stages of development, not just modelling. Largely because of the very precise types which make a large part of your program verifiable (and force you to think about a lot of things you wouldn't have if you were using, say, JavaScript).

        I was recently making a script engine for a game. It was really neat to realize that the "runScript" method was literally just a mapping between two monads. No special state in between, no complex logic, no file lookup or anything like that. These types of insight accumulate, and there's really a tonne of stuff to learn (this potential for learning the language itself feels much greater in functional programming for me).

        > Isn't there any easier way to describe I/O than with category theory?

        This isn't category theory! Do you really think every working Haskell programmer is some mathematician? No. Look at this random image I googled, you think Haskell programmers understand this? https://i.stack.imgur.com/4IzGk.png Most mathematicians don't!

        The notion of a monad in functional programming might be inspired by category theory, but you're really better off not taking that connection too seriously. Functors, applicatives, and monads are all very simple notions that should be understood as programming constructs, not arcane math. If you want an area of math to research to most benefit your functional programming, that is undoubtedly mathematical logic and/or intro-level type theory, and not category theory. (This should take you in the direction of dependent types.)

        Really, types are the key. The notion of a monad is best understood not through vague real-world analogies with sandwiches, but through the type and implementation of its >>= method. The reason for that is that the point of monads is in composition. And basic linear algebra is enough to understand the importance of composition, not category theory. Just look at the Maybe monad to immediately understand it: Nothing >>= f = Nothing, Just x >>= f = f x, where f : a -> Maybe b. Isn't this a really clear, intuitive way of composing operations which might fail?

        Same goes for IO. The only thing you're doing is composing some values. When you compose an IO Int with some function of the type Int -> IO (), you get back a value of type IO () (which your runtime executes if you bind it to main). All of this is right in the type, and it's just as intuitive a way of composing IO values as Maybe ones, IMO.

        You get the added benefit of execution becoming not a side-effect, but a first-class member. Evaluation of IO programs is not their execution, you could evaluate putStrLn "asdf" a million times without it being executed. You can literally store those programs (values) somewhere and execute them later.

  • justinmeiners 5 years ago

    Author here.

    I would say that modeling the computational process in math is not typically helpful (you already have a formal programming language); you should model the real-world (or at least computer-world) problem you are trying to solve.

    Carefully define operations and constraints, introduce abstractions for solving them, etc.

    Do you have an example of an I/O problem you have thought about that you would like me to talk more about?

    • dr0wsy 5 years ago

      Thanks for the article and discussion that sprang up around it!

      The question I'm trying to ask is how I should express (part of) programs on paper using math when I don't see how they are related to math.

      Example: How could I easily express a function that takes a string as input and outputs the string capitalized using math notation?

      I understand that the problem you solve in the post should probably be put on paper first, before beginning to write any code.

      My problem is to express programs in math when "calculate" isn't part of the problem description, like it is in the example in the blog post.

      Edit: Changed example question.

      • justinmeiners 5 years ago

        This is a great question. What I really tried to emphasize in the article is that math is not a formal language. Since it's just you and coworkers, write out the parts you need as you need them, and ignore details. After all, it's an exercise to help you think.

        If I needed uppercase I would just say:

          `up(s)` is a function that maps a string s to its uppercase string
        
        Writing that down, you and I already have a pretty good idea of how it works, or it's included in our libraries, and we can figure out the details when writing the code.

        This is of course assuming `uppercase` is a minor part of another algorithm. If it was the subject of discussion I might describe it like this:

          Let `u(c)` be a function that maps a character to its uppercase character. In ASCII, `u(c) = c + K` where K is some offset.
        
          To capitalize an entire string we need to apply that function to each character.
        
          Let `up(s)` = (u(c_1), u(c_2), ... u(c_n))
          where the string s = (c_1, c_2, ... c_n)
        
        Perhaps that helps, but I don't think it addresses your question directly. I think you need some experience reading math to know how to describe problems, but you don't need to know fancy math. A knowledge of functions, sets, tuples, logic, summations, etc., will get you most of what you need.
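
        And once it's on paper, the code is nearly a transcription. A sketch in Haskell (using the standard library's toUpper rather than the ASCII offset):

          import Data.Char (toUpper)

          -- up(s) = (u(c_1), u(c_2), ... u(c_n)): apply u to each character.
          up :: String -> String
          up = map toUpper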

        I highly recommend that book I linked in the article: "Introduction to Graph Theory" By: Trudeau

        • dr0wsy 5 years ago

          Thanks for the explanation! An example to check if I understood you correctly.

            `func1(r, s)` is a function that sends a request `r` to a given server `s`. It returns a status code from the server.
          
          How does that sound? It feels more like a spec than "doing math".
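
          I suppose I could also write just the type, in Haskell-ish notation (Request, Server, and StatusCode being hypothetical types):

            func1 :: Request -> Server -> IO StatusCode

          At least the IO in the result records that sending a request is an effect.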

          > I highly recommend that book I linked in the article: "Introduction to Graph Theory" By: Trudeau

          I'll check it out! Does the book cover all the relevant parts you mentioned before?

          Btw, it seems you don't actually link the book in the article. At least I couldn't find it when I looked just now.

  • wa1987 5 years ago

    > However, how should I express I/O?

    First thing that comes to mind is category theory, especially monads. Haskell's I/O monad for example: https://en.wikipedia.org/wiki/Monad_(functional_programming)...

    • codr7 5 years ago

      But why? Having to sweet-talk the compiler into simply reading and writing data by building towers of super-abstract mathematical concepts doesn't look like a win from here. If that's the cure, I'll take the disease.

      • wa1987 5 years ago

        The question was about expressing I/O as a mathematical concept (on paper), not about talking to compilers.

        • codr7 5 years ago

          Haskell and others have pretty much proved it's possible in practice as well. But in my experience, the cure is worse than the disease. Wishing that software were as formal as math is one thing; pretending it is leads nowhere worth going.

          I have most of my feet in the C++/Common Lisp camps these days, where you accept that the world is a messy place with complex problems that don't fit neatly into labeled boxes, and you use the most powerful tools you can think of to deal with it.

runeks 5 years ago

I agree that math is a useful tool in programming, but domain knowledge always trumps math knowledge. Case in point:

> Recently, I worked on an API at work for pricing cryptocurrency for merchants. It takes into account recent price changes and recommends that merchants charge a higher price during volatile times.

You shouldn’t look at past trade prices for a cryptocurrency to determine the current one: if few trades happen in a period, you’re using outdated data.

The price of interest for merchants who want to sell cryptocurrency is the best bid (highest-priced buy order) in a market where fiat bids on crypto (e.g. BTC/USD, ETH/USD). You should be downloading order book data from exchanges (e.g. [1]), and quoting the best bid to merchants.

Also, what you call high volatility (of past trade prices) might simply be a proxy for a large difference between the highest-priced buy order and the lowest-priced sell order (a large “spread”). Instead of looking at the volatility of past trades, I recommend monitoring the spread of the order book of interest. Although this might not be all that relevant, since merchants (who sell crypto for fiat) are only interested in the price they can sell for, not the price they can buy for.
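
To make that concrete, a toy sketch in Haskell (OrderBook and its fields are made-up types for illustration, not any exchange's API):

    -- A toy order book: bid and ask prices only, no sizes.
    data OrderBook = OrderBook { bids :: [Double], asks :: [Double] }

    -- The price a merchant can actually sell at right now.
    bestBid :: OrderBook -> Maybe Double
    bestBid (OrderBook [] _) = Nothing
    bestBid (OrderBook bs _) = Just (maximum bs)

    -- Spread: lowest ask minus highest bid. A wide spread is often what
    -- "high volatility" of past trade prices is really standing in for.
    spread :: OrderBook -> Maybe Double
    spread (OrderBook [] _)  = Nothing
    spread (OrderBook _ [])  = Nothing
    spread (OrderBook bs as) = Just (minimum as - maximum bs)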

[1] https://docs.pro.coinbase.com/#get-product-order-book

floor_ 5 years ago

I wish math whitepapers were written like code instead of in mathematical notation.

  • raxxorrax 5 years ago

    The syntax of mathematics hasn't evolved very well, in my opinion. It is clear to people already familiar with the relations described, but it often lacks context for anybody else. Code doesn't suffer from this, since any context must be explicitly defined. Absolutely nobody without a significant bag of premises can read e=mc² (yeah, yeah, physics...) and understand the correlation between mass and energy. It is just not happening. And for 99% of people this context is much more important, while the actual relations are often trivial.

    Additionally, I would clearly recommend learning math in English, since there are a lot of synergies with the jargon of computer science that can be really, really helpful. Even if it is just the little nudge that allows you to connect problems.

    Furthermore, you can unlearn a tool if you don't use it. I learned frequency decomposition in school and forgot about it two weeks later. Only when I started to implement my own shitty JPEG compression did I really start to use it as a tool. I had to relearn it, of course. It turns out there are many applications for it in general image recognition. Great. Now the math got useful and was indeed needed.
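
    For the curious, the tool in question is a frequency transform. A naive sketch of the 1-D DCT-II that JPEG-style compression builds on (real encoders use fast 2-D fixed-point versions):

        -- Naive O(n^2) DCT-II: turn samples into frequency coefficients.
        dct :: [Double] -> [Double]
        dct xs = [ sum [ x * cos (pi * (n + 0.5) * k / len)
                       | (n, x) <- zip [0 ..] xs ]
                 | k <- [0 .. len - 1] ]
          where len = fromIntegral (length xs)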

    There are some "purely" mathematical tools, like adding zero, multiplying by 1, or applying logical identities, that allow for further transformations. But I would argue that these don't so much help to solve a problem as they help to verify the solution.

    But some thousands of years ago, someone must have decided to make mathematical notation specifically unreadable when expressed in ASCII symbols. I really don't like that guy.

  • JustFinishedBSG 5 years ago

    Please god no, it would be unbearable.

    And they are called papers.

  • justinmeiners 5 years ago

    Reading math is an acquired skill. Once you have it, these things aren't annoying. See my comment above.

    However, in this article I am not advocating that mathematical thinking means using one-letter notation. I simply follow that convention for standard things like functions.

carapace 5 years ago

Hmm...

Shake up your thinking about math: "Iconic Math" http://iconicmath.com/

Math-first programming: CQL - Categorical Query Language

> The open-source Categorical Query Language (CQL) and integrated development environment (IDE) performs data-related tasks — such as querying, combining, migrating, and evolving databases — using category theory, a branch of mathematics that has revolutionized several areas of computer science.

https://www.categoricaldata.net/

Thinking about physics math in computer language: "Structure and Interpretation of Classical Mechanics" by Gerald Jay Sussman and Jack Wisdom with Meinhard E. Mayer.

https://en.wikipedia.org/wiki/Structure_and_Interpretation_o...

CaioAlonso 5 years ago

"The natural language which has been effectively used for thinking about computation, for thousands of years, is mathematics."

I'm not sure this is true. Harold Abelson draws a distinction[1] between mathematics as the study of truth and computing as the study of process. It seems to me that these really are different things, and that mathematics is the "natural language" not of computation but of truth and patterns. Of course, process (computing) can only happen within the boundaries of mathematical truths and patterns.

[1] https://www.youtube.com/watch?v=2Op3QLzMgSY the first few minutes

  • justinmeiners 5 years ago

    I believe this is a comment about the differing interests of the fields, akin to the differences between math and physics.

    They still approach computation using mathematical reasoning methods. Note how they define car and cdr and how they approach problems in those videos.

    I believe Abelson and Sussman use the kind of mathematical reasoning I am talking about in all their work. SICP being a prime example.

invalidOrTaken 5 years ago

What knits together "thinking in math" and "writing in code" is...human effort. If the domain is mathy, then it won't take much. If it's not...

Math is the world without abstraction leaks; programming is the act of plugging those leaks.

anderspitman 5 years ago

I might be misunderstanding here, but this sounds like it could lead to too much up-front design, which is a trap I'm naturally inclined to fall into. Again, maybe I'm misunderstanding; I found the example pretty opaque (granted, I gave up trying to understand it fairly quickly).

I do think starting with basic architecture and design is a good idea, but it's important to jump in and test your assumptions before you become too attached to them. In my mind the ideal flow goes something like this:

1. Short design/modeling/architecture/etc. session.

2. Test assumptions by hacking together a quick prototype.

3. Revamp the design.

4. Implement a more robust prototype.

5. Iterate as necessary.

davesmith1983 5 years ago

This sort of article seems to come up every so often. The vast majority of programs are boring replications of business processes that used to live on paper. The vast majority of code I have written is "if the user is from this country and they have an address, show this screen". That is business rules, and I would wager that outside of specialist fields this is the vast majority of the work programmers do.

Before I was a programmer I studied mechanical engineering. There is a lot of maths in that and the closest thing you get to programming is Control Engineering.

bluetwo 5 years ago

Good coders take a problem domain and turn it into what looks like code.

Great coders craft code that looks like the problem domain.

vikiomega9 5 years ago

If we start with the notion that everything is a model and that some models are less wrong than others, then staying as close as possible to math simply means the models are, in some sense, provably correct. Some models need more rigour to be proved as such.

wa1987 5 years ago

This article reminds me of something I've been wondering about for too long now.

What is the meaning of the terms below (possibly in various contexts)?

- abstraction

- indirection

- encapsulation

- information hiding

How do these relate to and differ from each other?

I feel these get conflated a lot. It would be nice to be able to tell them apart.

cutler 5 years ago

Mathematical thinking is only relevant to a subset of programming tasks. Your typical CRUD web app has nothing to do with maths, but functional-style data processing may well fit the model. See DHH's distinction between engineers and developers, where he characterises Ruby/Rails programmers as writers, many of whom come from an arts or humanities background.

ErotemeObelus 5 years ago

Programming is reasoning about complex closed systems. Mathematics is reasoning about complex open systems (like the universe). Programming has more in common with biology than it does with math!

gitrebase 5 years ago

I like that Python is quite close to my raw thoughts for simple problems. So writing an algorithm in Python almost feels like writing pseudocode. Heck, these days given an option between writing pseudocode and writing Python code, I choose Python for simple problems.

Is there a similar programming language that makes mathematicians feel at home? Something that makes them feel that they would rather write their implementation in that language itself instead of writing it with math notation on paper first?

  • mruts 5 years ago

    I’ve always been in the weak Sapir-Whorf camp: your tools of expression influence and sometimes even define your thoughts. A great example is matrices in math. Matrices don’t allow you to represent anything that a system of equations can’t. But it turns out they are a very helpful tool that lets you abstract over the problem space, much in the same way that higher-order functions do.

    It’s exactly like Paul Graham says, you might think that Python is just allowing you to write executable pseudo-code, but the interaction isn’t so simple.

    I’ve programmed a lot of Python, and when I first started out it felt very frictionless, like you said: an easy way to put down thoughts. But as I learned more about functional programming and type theory, I realized that Python is inadequate and operates at too low a level, i.e. it now feels like there’s a lot of friction there.

    I have used a variety of languages professionally (Scala, Haskell, OCaml, Racket, C, and Python mostly) and they all fall short (some more than others) of what I feel I should be able to express. But if I had to choose, I would probably say OCaml or Racket come the closest to my thoughts, depending on the problem.

    Anyway, my point is that it’s not obvious how your tools affect the level and abstraction of your thoughts. It’s almost always a bi-directional relationship, and therefore choosing (or making) the right tool and method of abstraction is very important. See Beating the Averages[1], where PG talks about a hypothetical language called Blub. Blub isn’t the best, but it’s not the worst either. If there were a platonic form of Blub, it would most definitely be Python.

    [1] http://www.paulgraham.com/avg.html

    • gitrebase 5 years ago

      Can you describe some attributes of OCaml and Racket that make them good contenders for expressing the thoughts in your mind?

      Also, is it Racket specifically that makes it a good contender, or the fact that it is a Lisp? Would any other Lisp, like Scheme, Clojure, or Common Lisp, be equally good?

  • grovehaw 5 years ago

    Some people feel at home with APL (A Programming Language) or its descendants. It was first developed by Ken Iverson as a notation to be used on paper for communicating procedures and algorithms. It was the subject of a book published in 1962, then became a programming language running on IBM mainframes in 1966.

    The following line of code produces all the prime numbers up to the value R.

      (~T∊T∘.×T)/T←1↓⍳R
    
    More history and a full explanation of this code can be found at https://www.computerhistory.org/atchm/the-apl-programming-la...

    • kragen 5 years ago

      A fairly precise raw Python translation for the interested:

          T = range(1, R)[1:]                       # T←1↓ιR
          u = [[t*u for u in T] for t in T]         # T∘.×T
          v = [t for ei, t in
                zip([any(t in ui for ui in u) for t in T], T)
               if not ei]                           # (~T∈u)/T
      
      As the standard poem on the subject by Dave Touretzky and Don Libes says:

          I wrote some hacks in APL,
          each on a single line.
          They're mutually recursive,
          and run in n-squared time!
      
      In this case, of course, it runs in R-cubed time, not n-squared time. There's a perhaps more common, but slightly longer, APL one-liner for finding primes that does run in O(R²) time instead; from https://aplwiki.com/SieveOfEratosthenes †:

         (2=+⌿0=(⍳X)∘.|⍳X)/⍳X
      
      Or, eliminating the inessential variations from the above version:

         (2=+⌿0=T∘.|T)/T←⍳R
      
      If you want APL and are stuck with Python, you can probably get most of the APL you want in Numpy. A slightly looser translation of the first algorithm into Numpy:

          import numpy
      
          T = numpy.arange(2, R)
          print T[~numpy.equal.outer(T, numpy.multiply.outer(T, T)
                                     ).any(axis=1).any(axis=1)]
      
      Recent versions of Numpy have numpy.isin, which works like APL ∈, which would save you the .outer.any.any nonsense.

      A much more compelling demonstration of APL is, to my mind, the interactive development process leading up to the one-liner Game of Life in this livecoding video: https://www.youtube.com/watch?v=a9xAKttWgP4

      † This is not the Sieve of Eratosthenes, despite the article title; the Sieve is an immensely more efficient algorithm than trial division, producing the same results in near-linear time.
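
      If anyone wants the genuine article for contrast, here's a sketch (Haskell this time) with a mutable boolean array; the point is that each composite is crossed off by its prime factors instead of being tested against every candidate divisor:

          import Control.Monad (forM_, when)
          import Data.Array.ST (newArray, readArray, runSTUArray, writeArray)
          import Data.Array.Unboxed (assocs)

          -- A genuine Sieve of Eratosthenes: near-linear, unlike trial division.
          primesUpTo :: Int -> [Int]
          primesUpTo r = [i | (i, True) <- assocs sieve]
            where
              sieve = runSTUArray $ do
                arr <- newArray (2, r) True
                forM_ [2 .. floor (sqrt (fromIntegral r :: Double))] $ \p -> do
                  isPrime <- readArray arr p
                  when isPrime $
                    forM_ [p * p, p * p + p .. r] $ \m ->
                      writeArray arr m False
                return arr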

      • kragen 5 years ago

        I realize now that the first line should say range(1, R+1)[1:]. We regret the error.

  • heinrichhartman 5 years ago

    > Is there a similar programming language that makes mathematicians feel at home?

    The problem is not writing stuff down. The problem is reasoning about what you have written down. With popular languages it's really hard to say what a line of code does: it depends on so many things (global state, scoping, local state) that you have to spell out. Google "Semantics of Programming Languages" to get an idea of what's involved in formally reasoning about code.

    To have a chance of doing manipulations by hand, you have to give up, at least:

    - Mutable state

    - Side effects (I/O)

    (Pure) Scheme and Haskell come to mind as contenders.
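
    A tiny sketch of what giving those up buys you: every definition becomes an equation, so you can reason by rewriting, just as in algebra.

        double :: Int -> Int
        double x = x + x

        -- Because double is pure, this reasoning is always valid:
        --   double (3 + 4)
        --     = (3 + 4) + (3 + 4)   -- unfold the definition
        --     = 7 + 7
        --     = 14
        -- No hidden state anywhere can invalidate those steps.
        main :: IO ()
        main = print (double (3 + 4))   -- prints 14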

  • n4r9 5 years ago

    It's hard to imagine that a programming language could ever be as flexible as mathematical writing, but the more extensible, abstractable, and, I suppose, functional it is, the better.

simonsays2 5 years ago

When speaking a language, you are supposed to think in that language. Translating the words in your head as you speak limits your skill in the language and makes you sound like an idiot.

I think the same holds for computer code. Think in code, write in code. Doing the translation bit is silly.

Especially because code expresses different things than math.

steve76 5 years ago

GOD NO!

JUST READ THE MANUAL, THE FIRST PAGE OF THE TUTORIAL.

IF IT'S NOT WHAT YOU WANT, DUMP IT OFF ON SOME KID, AND SEND OUT SOS.