weinzierl 2 days ago

"See below for a description of the contents of the thesis, at four different levels of assumed background knowledge: 'Basic' (no assumed familiarity with type theory and category theory), 'Intermediate' (some familiarity), 'Advanced' (significant assumed familiarity), and 'Expert'."

This is a brilliant idea and very well executed. I almost did not click the link because I would put myself into the 'Intermediate' (some familiarity) bucket and most blog posts are either too basic or too advanced for me. Glad I clicked.

moi2388 2 days ago

“Or maybe s and t represent possible states of some computer system, and Hom(s,t) is the type of processes that, when executed in state s, will result in state t. We definitely want to allow these processes to be irreversible”

Why? Irreversibility is precisely what prevents computers from, in theory, computing without expending any energy.

I’d rather truly have reversible computation ;)

cubefox 2 days ago

... and for those who were wondering about the TLD:

> In our biggest launch to date, Google Registry is adding eight new extensions to the internet: .dad, .phd, .prof, .esq, .foo, .zip, .mov and .nexus, now publicly available at a base annual price through your registrar of choice.

> The .phd, .prof, and .esq TLDs are perfect for professionals who want to show off their credentials and expertise.

https://www.registry.google/announcements/introducing-8-new-...

  • tristramb 2 days ago

    ...or for non-professionals who want to mislead people into thinking that they have those credentials and expertise.

    • pentaphobe 2 days ago

      100%

      Though it's notable [1] that only in America does "esquire" imply any credentials or expertise

      Tonnes of non-special boys in the UK get "esquire" as a suffix (maybe some go on to become attorneys, but unrelated :)

      [1]: EDIT: on second thought - it's not "notable" at all; perhaps "interesting" was the word I sought...

    • philipallstar 2 days ago

      To be fair, though, if you can get funding for your PhD then you can do this already with an actual PhD.

  • meindnoch 2 days ago

    i'm tired boss

    • cubefox 2 days ago

      So you don't want to show off your credentials and expertise??

casey2 2 days ago

I feel that massive effort is being misdirected because of some fundamental misunderstandings.

Programming languages, like all languages, exist for communication between humans. A better programming language for a particular task is one that lowers friction needed to communicate ideas about particular programs.

The power of modern languages comes from us quickly being able to understand programs. All languages rely on convention and assumption. The vast majority of bugs don't come from under-specification, but rather poor/miscommunication. Under-specification might explain why you wasted an hour reading source code to fix a minor problem; poor communication explains why we haven't yet automated every conceivably meaningful task computers are already physically capable of.

  • ndriscoll 2 days ago

    This work is more focused on proof assistants, i.e. programming languages designed for highly generic algorithms (mathematical proofs). I don't see how miscommunication or bugs are possible in this context. The assumptions and conclusions are all laid out in the types. Barring a bug in the type checker or use of e.g. `sorry`, a proof that compiles shouldn't be able to contain a bug.
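    For readers unfamiliar with `sorry`: it is Lean's escape hatch that lets an incomplete proof compile, flagged with a warning. A minimal sketch:

```lean
-- `sorry` stubs out the proof, so even a false proposition "compiles",
-- but Lean emits a `declaration uses 'sorry'` warning to flag it
theorem bogus : 1 = 2 := by
  sorry
```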

    • simiones 2 days ago

      This is true in the sense that such a proof can't have "implementation bugs". But it can very much still have "specification bugs", where the formal specification doesn't actually match the fuzzy intention of whoever asked for the work.

      For a very simplistic example, say I ask for a trusted program that can add up all of my expenses. The person who writes the spec then defines this as `add_expenses xs = reduce - xs`. They then write a program and prove that it matches this specification. The program will be "bug free" in the sense that it perfectly matches the spec. But it will certainly not be what I asked for - so from my point of view, it would be an extremely buggy program.
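      That toy mismatch can be made concrete; a sketch in Python, where the function names are hypothetical and `reduce` with subtraction stands in for the bad spec:

```python
from functools import reduce
import operator

def add_expenses_spec(xs):
    # Faithfully implements the (wrong) spec: a fold with subtraction
    return reduce(operator.sub, xs)

def add_expenses_intended(xs):
    # What the customer actually meant: a sum
    return sum(xs)

expenses = [100, 20, 5]
print(add_expenses_spec(expenses))      # 100 - 20 - 5 = 75
print(add_expenses_intended(expenses))  # 125
```

      The first function is provably correct against its spec and still wrong by the customer's lights.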

      For a more subtle example, consider encryption algorithms. It's generally fairly easy to specify the encryption scheme, and then check that a program implements that exact algorithm. And yet, it's very likely that a program built only to such a spec will actually leak secrets through various side-channels in practice (either key-dependent timing or even key-dependent CPU usage, and others). Again, it would be fair both to say that the implementation is bug-free (it perfectly matches the spec) and that it is deeply buggy (it doesn't actually protect your secrets).
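      A sketch of the timing-side-channel point (this compares comparison strategies on a secret, not any real encryption scheme):

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Early-exit comparison: runtime depends on how many leading bytes
    # match, so an attacker timing many guesses can recover a secret
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest takes time independent of where inputs differ
    return hmac.compare_digest(a, b)
```

      Both functions satisfy the same functional spec ("return True iff the inputs are equal"); only one is safe to use on secrets.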

      • ndriscoll 2 days ago

        None of this has anything to do with proof assistants. What you're saying is that mathematical modeling doesn't perfectly capture real-world phenomena; everyone knows this, and it has nothing to do with proof assistants either. The statement I was replying to was

        > I feel that massive effort is being misdirected because of some fundamental misunderstandings

        The misunderstanding here is with them and what they're looking at. The author doesn't discuss business problems, but lists this as a motivation in their "basic" summary:

        > Category theory is a powerful and highly-general branch of mathematics, which is applied for all kinds of purposes. It has been formalized in several existing proof assistants, but only as a result of a ton of difficult work. The type system of (1,1)-directed type theory is specifically optimized for category theory; it manages to automate a lot of the difficult "bureaucracy" of formalizing category-theoretic results, so it doesn't require as much work.

        i.e. it makes certain proofs easier.

      • seanhunter 2 days ago

        A formal proof really can’t have a specification bug. You say in the declaration what proposition you intend to prove and if you succeed in instantiating that type then you have succeeded in your proof.

        The system also will only allow that proof to be used for the proposition you have declared so even if you somehow declared the wrong thing and proved it without realising, that wouldn’t affect the consistency of the system at all.

    • cubefox 2 days ago

      A formal proof can contain a "miscommunication bug" if it doesn't prove exactly the thing it was intended to prove. This is just like an ordinary program that doesn't do exactly what it is supposed to do.

      • ndriscoll 2 days ago

        The types say exactly what is proved. There's no real room for confusion. With a written proof you could accidentally make some assumption in the middle (like using some lemma that assumes a thing is finite or compact or whatever when you did not state that assumption), but a computer proof cannot do this. It won't typecheck.

        Reading mathematical sentences (types, basically) is like step one in maturing from baby to toddler in math.
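        The "unstated assumption" point can be sketched in Lean: every hypothesis a lemma uses must appear in its type, so it can't be smuggled in silently. A small example (assuming I recall the core lemma names `Nat.div_self` and `Nat.pos_of_ne_zero` correctly):

```lean
-- The hypothesis `h : n ≠ 0` must be stated explicitly; delete it
-- and the proof below no longer typechecks
theorem my_div_self (n : Nat) (h : n ≠ 0) : n / n = 1 :=
  Nat.div_self (Nat.pos_of_ne_zero h)
```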

        • simiones 2 days ago

          > The types say exactly what is proved.

          To whom? Fully formalized mathematics is extremely dense and difficult, and few mathematicians actually practice it, especially for complex problems. Just because the symbols are theoretically unambiguous doesn't mean that any human can actually understand what they mean, and whether they accurately encode some less formal system you were hoping to study.

          And this becomes much more problematic once you go from the realm of pure mathematics into other realms, like engineering or economics or anything else that seeks to model real-world phenomena. A model can be perfectly self-consistent and still be a very bad, even useless, model of a real-world thing. And it can be highly non-trivial to figure out to what extent, and under what constraints, a model actually matches the phenomenon you intended.

          For examples of both kinds of issues, look at theoretical physics. Lots of major physics theories are only partly formalized, and rely on unproven or sometimes even known to be unsound mathematical foundations (like the Dirac delta at the time it was first introduced, or like renormalization in quantum physics). And, of course, there are still open questions on how well these theories actually model various physical phenomena (like GR and black hole curvature singularities).

          • egorelik 2 days ago

            Sometimes, and for some people, the right language can actually simplify problems and hide complexity that isn't immediately necessary to the problem at hand. The whole idea is to abstract away models and worry about them separately. This is one of the motivations for the various synthetic theories that are getting researched in the last several years - the one mentioned here happens to be for Category Theory.

            This doesn't work for everyone though; some people find it easier to stay grounded in specific models.

          • ndriscoll 2 days ago

            I replied to the other stuff elsewhere, but FWIW, for the amount I've played with typing up undergrad-tier knowledge in Lean, I've found it to be pretty straightforward. Fully formalized math isn't dense; it's the opposite: verbose, full of the tedious bookkeeping you learn to formalize in an intro logic/proofs course. The hard part is making it more dense so that you can read it, and something like Lean actually makes things less tedious than you'd expect.

        • cubefox 2 days ago

          > The types say exactly what is proved. There's no real room for confusion.

          Since Lean code isn't the same as natural language or even unobservable human intention, there might always be a mismatch between what the author of a Lean proof claims (or believes) it proves, and what it actually proves.

          • ndriscoll 2 days ago

            Anyone capable of reading and writing proofs can also read/write propositions. That's step one. Writing propositions in formal language is exactly why there's no mismatch: it proves what it says it proves. There is no ambiguity. There's nothing else to believe.

            Like if you have some f: R -> R, a: R in context and write ∀ϵ>0, ∃δ>0 s.t. ∀x: R, |x−a|<δ ⟹ |f(x)−f(a)|<ϵ, no one is confused about what you mean.

            If you instead write ∃δ>0 s.t. ∀ϵ>0, ∀x: R, |x−a|<δ ⟹ |f(x)−f(a)|<ϵ, you've written something different. Everyone in the field knows this, and it's actually even more obvious when you look at the types in something like Lean: roughly ϵ → (δ, h) vs. (δ, ϵ → h).
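            A sketch of the two propositions in Lean-style syntax (assuming Mathlib's ℝ and absolute value; the definition names are mine):

```lean
-- Continuity of f at a: δ may depend on ε
def ContinuousAtPt (f : ℝ → ℝ) (a : ℝ) : Prop :=
  ∀ ε > 0, ∃ δ > 0, ∀ x : ℝ, |x - a| < δ → |f x - f a| < ε

-- Swapped quantifiers: one δ must work for every ε, a much
-- stronger (and usually unintended) statement
def SwappedAtPt (f : ℝ → ℝ) (a : ℝ) : Prop :=
  ∃ δ > 0, ∀ ε > 0, ∀ x : ℝ, |x - a| < δ → |f x - f a| < ε
```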

            • cubefox 2 days ago

              Since it's clearly not always trivial to translate conjectures (let alone whole proofs) into Lean code, translating Lean code back into understandable conjectures can't always be trivial either.

  • lisper 2 days ago

    > Programming languages, like all languages, exist for communication between humans.

    That is manifestly false. Machine language and Brainfuck are both programming languages, and they are not designed for communication between humans. To the contrary, Brainfuck is deliberately designed to confuse humans.

    Programming languages exist to describe processes with sufficient precision that they may be carried out by a machine. You might want a programming language to also serve the secondary purpose of communicating something to a human that reads it, but the vast majority of extant languages are not designed with this goal in mind.

    > The power of modern languages comes from us quickly being able to understand programs.

    Again, no. The power of modern languages comes from providing higher-level abstractions that are closer to human thought processes. Modern programming languages allow you to think directly in terms of things like numbers, arrays, strings, etc. rather than forcing you to think of everything in terms of raw bits. There are many languages that are notorious for making code much easier to write than to read.

    > The vast majority of bugs don't come from under-specification, but rather poor/miscommunication.

    No, the vast majority of bugs come from a mismatch between the code and the programmer's mental model of what the code does, i.e. the process that the code describes. You can have bugs in code written by a single human. There is no possibility of miscommunication because there is no communication. There is just the code and the programmer's brain.

  • OgsyedIE 2 days ago

    That's a mischaracterisation of programming. The goal of programming is to communicate exact specifications, as any text on Pascal, Agda, Rust, C, TLA+ or Lustre will show. Exactness comes before all else.